Red Hat Bugzilla – Bug 73754: ghostscript 6.52-9.4 update degrades print quality on HP895Cse printers
Last modified: 2007-04-18 12:46:32 EDT

From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461)

Description of problem:
After installing Red Hat 7.3, which comes with ghostscript 6.52 and HPIJS 1.0.2-8, our machines are configured to use HPIJS for HP895Cse printers, and all works well. Red Hat 7.3 currently requires a security update of ghostscript to version 6.52-9.4. There is no problem in performing the update. But when the ghostscript 6.52-9.4 update is performed (it requires the removal of HPIJS 1.0.2-8 due to a conflict with ghostscript), the result is severe degradation in print quality from the HP895Cse printers. Running the # hpijs -h command shows that hpijs 1.02 still exists. We require the use of ghostscript for other printers, as well as hpijs for our HP895Cse printers. Therefore, we currently cannot perform a ghostscript update on any of the machines running Red Hat Linux 7.3.

Version-Release number of selected component (if applicable): 6.52-9.4
How reproducible: Always
Steps to Reproduce:
Additional info: hpijs is bundled into the ghostscript package in the update.

Please show me the output of 'rpm -V ghostscript', and attach the /etc/alchemist/namespace/printconf/local.adl file (which contains the printer settings).

Created attachment 75625 [details] printersettings file

output of # rpm -V ghostscript

output of # rpm -v ghostscript
RPM Version 4.0.4
This Program may be freely distributed under the terms of the GNU GPL
Usage: {--help} {--version}

Can you describe the quality loss some more? Does it look like the resolution is too low? Go to the 'driver options' tab in the printer configuration tool (printconf-gui) and change the quality setting---does that help?

Well, as Murphy's Law would have it, the problem is gone! Instead of using the previous ghostscript download, I downloaded a new copy this morning and installed it on a machine with a clean RH-7.3 install (one that had the problem previously). I really hope this is all it was. Originally, as each text or graphics line ended, the print output would continue to print a band of pixels to the right margin, the same height as the text or graphic preceding it. Thanks...

> Originally, as each text or graphics line ended, the print output would
> continue to print a band of pixels to the right margin the same height as the
> text or graphic preceding it.

FWIW, I've seen overclocked machines with insufficient cooling and power supplies do this kind of thing.
https://bugzilla.redhat.com/show_bug.cgi?id=73754
Simple Script to Run Java Applications on Linux and Windows, by Bea Petrovicova: This simple script lets you run a Java application on both Linux and Windows.
How to Store Oracle 10g Log Files in XML, by Srinath MS: Oracle 10g supports several ways to store log files, including XML.
Calling JTidy from Java to Convert HTML to XHTML, by Leonard Anghel: This tip shows how to invoke the JTidy open-source project from Java code to convert an HTML file to XHTML.
How to Declare a java.util.Date in an EJB 3.0 Entity, by Leonard Anghel: To map a date value to a Java date in EJB 3.0, use this code.
XML Signature Core Validation Failure with Java and Apache Axis, by Chandan Khanna: Added namespaces cause XML Signature core validation failures.
Writing a Parameterized SQL Query in EJB 3.0, by Leonard Anghel: Follow this DELETE statement example to write parameterized SQL queries in EJB 3.0.
Avoid Object Instantiation Within Loops, by Kulkarni Vasudeva: Creating new objects in loops can seriously damage performance.
Measuring a Program's Speed, by Zachary Edwards: The simplest way to measure the performance of a block of code (or an entire program) is to obtain the elapsed time. This tip shows you how.
Unzipping an Archive from a Servlet, by Leonard Anghel: This tip shows you how to unzip an archive from a Java servlet.
Write a Complex Query in Hibernate, by Leonard Anghel: This example shows how to use the findByExample method in Hibernate in conjunction with the SQL AND operator to find and log in a user given an email address and password.
Create an XML File or XmlDocument Directly from a StringBuilder, by Julio Henriquez: Creating an XML file (or XmlDocument object) using a StringBuilder is easier than working directly with the System.Xml.XmlDocument class.
Inject an EJB 3.0 into the init() Method of a Servlet, by Leonard Anghel: This tip shows you how to inject an EJB 3.0 into the init() method of a servlet.
Build a Custom Formatter for a java.util.logging Logger, by Leonard Anghel: See how to develop a custom log formatter and customize the formatter for your logging needs.
Create a New Event Using AWTEventMulticaster, by Anghel Leonard: This code shows how to create a new event using AWTEventMulticaster.
Obtain the Local Absolute Path of a Class File: This tip shows how to obtain the local absolute path of a class file containing the specified class name, as prescribed by the current classpath.
http://www.devx.com/tips/xml/2
Steve Jobs has taken to the stage in Cupertino to offer free cases to all disgruntled iPhone 4 users. Apple has acknowledged there's a problem with the antenna and has offered users a free case as a way to rectify the issue. However, Jobs said Apple can't make enough Bumpers to offer everyone one of those, so will be offering alternatives: "We're going to send you a free case. We can't make enough bumpers. No way we can make enough in the quarter. So we're going to source some cases and give you a choice."

iPhone 4 refunds too

Users who have bought an iPhone 4 Bumper will be able to get a refund, and the refunds will apply to all iPhone 4s bought from opening day to 30 September. Users will also be able to return (undamaged) iPhone 4s to Apple stores for a full refund with no restocking fee up to 30 days after purchase, as the company clearly goes into damage-limitation mode. Early estimates predict this could cost Apple up to £120 million if everyone eligible for the case takes up the offer, but this is minuscule compared to the cost of a full product recall. You can see the full report of the Apple iPhone 4 press conference here. TechRadar has contacted a number of Apple iPhone case and peripheral manufacturers to gauge their response to the latest news. Stay tuned for updates.
http://www.techradar.com/au/news/phone-and-communications/mobile-phones/jobs-offers-free-cases-to-all-iphone-4-users-703824
Hands-On Neural Networks with TensorFlow 2.0 by Ekta Saraogi and Akshat Gupta. This book from Packt Publishing explains how TensorFlow works, from the basics to an advanced level, using a case-study-based approach. Representing computation using graphs comes with the advantage of being able to run the forward and backward passes required to train a parametric machine learning model via gradient descent, applying the chain rule to compute the gradient as a local process at every node; however, this is not the only advantage of using graphs. Reducing the abstraction level and thinking about the implementation details of representing computation using graphs brings the following advantages:

The main structure – tf.Graph

Python simplifies the graph description phase, since it even creates a graph without the need to explicitly define one; in fact, there are two different ways to define a graph: In order to explicitly define a graph, TensorFlow allows the creation of tf.Graph objects that, through the as_default method, create a context manager; every operation defined inside the context is placed inside the associated graph. In practice, a tf.Graph object defines a namespace for the tf.Operation objects it contains. The second peculiarity of the tf.Graph structure is its graph collections. Every tf.Graph uses the collection mechanism to store metadata associated with the graph structure. A collection is uniquely identified by a key, and its content is a list of objects/operations. The user does not usually need to worry about the existence of a collection, since collections are used by TensorFlow itself to correctly define a graph.

Graph definition – from tf.Operation to tf.Tensor

A dataflow graph is the representation of a computation where the nodes represent units of computation, and the edges represent the data consumed or produced by the computation. In the context of tf.Graph, every API call defines a tf.Operation (node) that can have multiple input and output tf.Tensor objects (edges).
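The node-and-edge idea can be sketched in a few lines of plain Python. This is a conceptual illustration only, not TensorFlow's implementation: each operation node resolves its input edges before applying its own unit of computation.

```python
# Toy dataflow graph: nodes are operations, values flow along edges.
# A sketch of the concept only; this is not TensorFlow code.
class Op:
    def __init__(self, name, fn, inputs=()):
        self.name = name      # node name within the graph
        self.fn = fn          # the unit of computation
        self.inputs = inputs  # upstream operations (incoming edges)

    def evaluate(self):
        # Resolve incoming edges first, then apply this node's function.
        return self.fn(*(op.evaluate() for op in self.inputs))

a = Op("a", lambda: 2.0)
b = Op("b", lambda: 3.0)
y = Op("y", lambda u, v: u + v, (a, b))
print(y.evaluate())  # 5.0
```

Evaluating the final node pulls values through the whole graph, which is the essence of how a dataflow runtime resolves a requested output.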
For instance, referring to our main example, when calling tf.constant([[1, 2], [3, 4]], dtype=tf.float32), a new node (tf.Operation) named Const is added to the default tf.Graph inherited from the context. This node returns a tf.Tensor (edge) named Const:0. Since each node in a graph must be unique, if there is already a node named Const in the graph (Const is the default name given to all constants), TensorFlow makes the new name unique by appending the suffix '_1', '_2', and so on. If a name is not provided, as in our example, TensorFlow gives each added operation a default name and appends the suffix in the same way to keep the names unique. The output tf.Tensor has the same name as the associated tf.Operation, with the addition of the :ID suffix. The ID is a progressive number that indicates which of the operation's outputs the tensor is. In the case of tf.constant, the output is just a single tensor, therefore ID=0; but there can be operations with more than one output, and in this case, the suffixes :0, :1, and so on are appended to the tf.Tensor names the operation generates. It is also possible to add a name-scope prefix to all operations created within a context, where the context is defined by the tf.name_scope call. The default name-scope prefix is a /-delimited list of the names of all the active tf.name_scope context managers. To guarantee the uniqueness of the operations defined within the scopes, and the uniqueness of the scopes themselves, the same suffix-appending rule used for tf.Operation applies. The following code snippet shows how our baseline example can be wrapped into a separate graph, how a second independent graph can be created in the same Python script, and how we can change node names by adding a prefix using tf.name_scope.
First, we import the TensorFlow library:

(tf1)

    import tensorflow as tf

Then, we define two tf.Graph objects (the scoping system allows you to use multiple graphs easily):

    g1 = tf.Graph()
    g2 = tf.Graph()

    with g1.as_default():
        A = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
        x = tf.constant([[0, 10], [0, 0.5]])
        b = tf.constant([[1, -1]], dtype=tf.float32)
        y = tf.add(tf.matmul(A, x), b, name="result")

    with g2.as_default():
        with tf.name_scope("scope_a"):
            x = tf.constant(1, name="x")
            print(x)
        with tf.name_scope("scope_b"):
            x = tf.constant(10, name="x")
            print(x)
        y = tf.constant(12)
        z = x * y

Then, we define two summary writers. We need two different tf.summary.FileWriter objects to log the two separate graphs:

    writer = tf.summary.FileWriter("log/two_graphs/g1", g1)
    writer = tf.summary.FileWriter("log/two_graphs/g2", g2)
    writer.close()

Run the example and use TensorBoard to visualize the two graphs, using the left-hand column on TensorBoard to switch between "runs". Nodes with the same name (x in the example) can live together in the same graph, but they have to be under different scopes: being under different scopes makes the nodes completely independent and completely different objects. The node name is not only the parameter name passed to the operation definition, but its full path, complete with all of the prefixes. Running the script, the output is as follows:

    Tensor("scope_a/x:0", shape=(), dtype=int32)
    Tensor("scope_b/x:0", shape=(), dtype=int32)

As we can see, the full names are different, and we also get other information about the tensors produced. In general, every tensor has a name, a type, a rank, and a shape: Being a C++ library, TensorFlow is strictly statically typed. This means that the type of every operation/tensor must be known at graph definition time. Moreover, this also means that it is not possible to execute an operation on incompatible types.
Looking closely at the baseline example, it is possible to see that both the matrix multiplication and the addition are performed on tensors of the same type, tf.float32. The tensors identified by the Python variables A and b were defined with the type made explicit in the operation definition, while tensor x has the same tf.float32 type; in its case, though, the type was inferred by the Python bindings, which are able to look inside the constant value and infer the type to use when creating the operation. Another peculiarity of the Python bindings is their simplification of the definition of some common mathematical operations via operator overloading. The most common mathematical operations have counterparts as tf.Operation; therefore, using operator overloading to simplify the graph definition is natural. The following table shows the available operators overloaded in the TensorFlow Python API: Operator overloading allows a faster graph definition and is completely equivalent to the corresponding tf.* API call (for example, using __add__ is the same as using the tf.add function). There is only one case in which it is beneficial to use the TensorFlow API call instead of the associated operator overload: when a name for the operation is needed.

tf.device creates a context manager that matches a device. The function allows the user to request that all operations created within the context be placed on the same device. The devices identified by tf.device are more than physical devices; in fact, tf.device is capable of identifying devices such as remote servers, remote devices, remote workers, and different types of physical devices (GPUs, CPUs, and TPUs). A device specification must be followed to correctly instruct the framework to use the desired device. A device specification has the following form:

    /job:<JOB_NAME>/task:<TASK_INDEX>/device:<DEVICE_TYPE>:<DEVICE_INDEX>

Broken down as follows: There is no need to specify every part of a device specification.
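As a quick way to internalise the format, a full device specification can be pulled apart with a regular expression. This is a toy parser written for illustration here; TensorFlow has its own device-specification machinery, which this does not reproduce.

```python
import re

# Toy parser for a *full* device specification string.
# Illustrative only; not TensorFlow's own parsing code.
DEVICE_SPEC = re.compile(
    r"/job:(?P<job>[^/]+)"
    r"/task:(?P<task>\d+)"
    r"/device:(?P<dev_type>[A-Za-z]+):(?P<dev_index>\d+)"
)

m = DEVICE_SPEC.fullmatch("/job:worker/task:0/device:GPU:1")
print(m.groupdict())
# {'job': 'worker', 'task': '0', 'dev_type': 'GPU', 'dev_index': '1'}
```

Matching the example string shows how each component (job name, task index, device type, device index) occupies a fixed position in the specification.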
For example, when running a single-machine configuration with a single GPU, you might use tf.device to pin some operations to the CPU and others to the GPU. We can thus extend our baseline example to place the operations on the devices we choose: the matrix multiplication can be placed on the GPU, since that hardware is optimized for this kind of operation, while all the other operations are kept on the CPU. Please note that since this is only a graph description, there is no need to physically have a GPU or to use the tensorflow-gpu package.

First, we import the TensorFlow library:

(tf1)

    import tensorflow as tf

Now, use the context manager to place operations on different devices, starting with the first CPU of the local machine:

    with tf.device("/CPU:0"):
        A = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
        x = tf.constant([[0, 10], [0, 0.5]])
        b = tf.constant([[1, -1]], dtype=tf.float32)

Then, on the first GPU of the local machine:

    with tf.device("/GPU:0"):
        mul = A @ x

When the device is not forced by a scope, TensorFlow decides which device is best to place the operation on:

    y = mul + b

Then, we define the summary writer:

    writer = tf.summary.FileWriter("log/matmul_optimized", tf.get_default_graph())
    writer.close()

If we look at the generated graph, we'll see that it is identical to the one generated by the baseline example, with two main differences: the matmul node is placed on the first GPU of the local machine, while every other operation is executed on the CPU; and TensorFlow takes care of communication among the different devices in a transparent manner.

tf.Session is a class that TensorFlow provides to represent a connection between the Python program and the C++ runtime. The tf.Session object is the only object able to communicate directly with the hardware (through the C++ runtime), placing operations on the specified devices and using the local and distributed TensorFlow runtime, with the goal of concretely building the defined graph.
The tf.Session object is highly optimized and, once correctly built, caches tf.Graph in order to speed up its execution. Being the owner of physical resources, the tf.Session object must be used as a file descriptor to do the following: Typically, instead of manually defining a session and taking care of its creation and destruction, a session is used through a context manager that automatically closes the session at the block exit. The constructor of tf.Session is fairly complex and highly customizable, since it is used to configure and create the execution of the computational graph. In the simplest and most common scenario, we just want to use the current local hardware to execute the previously described computational graph, as follows:

(tf1)

    # The context manager opens the session
    with tf.Session() as sess:
        # Use the session to execute operations
        sess.run(...)
    # Out of the context, the session is closed and the resources released

There are more complex scenarios in which we wouldn't want to use the local execution engine, but instead use a remote TensorFlow server that gives access to all the devices it controls. This is possible by specifying the target parameter of tf.Session, just by using the URL (grpc://) of the server:

(tf1)

    # the IP and port of the TensorFlow server
    ip = "192.168.1.90"
    port = 9877
    with tf.Session(f"grpc://{ip}:{port}") as sess:
        sess.run(...)

By default, the tf.Session will capture and use the default tf.Graph object, but when working with multiple graphs, it is possible to specify which graph to use with the graph parameter. It's easy to understand why working with multiple graphs is unusual, since even the tf.Session object is able to work with only a single graph at a time. The third and last parameter of the tf.Session object is the hardware/network configuration specified through the config parameter. The configuration is specified through the tf.ConfigProto object, which is able to control the behavior of the session.
The tf.ConfigProto object is fairly complex and rich with options, the most common and widely used being the following two (all the others are options used in distributed, complex environments): The baseline example can now be extended to not only define a graph, but to proceed to an effective construction and execution of it:

    import tensorflow as tf
    import numpy as np

    A = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
    x = tf.constant([[0, 10], [0, 0.5]])
    b = tf.constant([[1, -1]], dtype=tf.float32)
    y = tf.add(tf.matmul(A, x), b, name="result")

    writer = tf.summary.FileWriter("log/matmul", tf.get_default_graph())
    writer.close()

    with tf.Session() as sess:
        A_value, x_value, b_value = sess.run([A, x, b])
        y_value = sess.run(y)
        # Overwrite b
        y_new = sess.run(y, feed_dict={b: np.zeros((1, 2))})
        print(f"A: {A_value}\nx: {x_value}\nb: {b_value}\n\ny: {y_value}")
        print(f"y_new: {y_new}")

The first sess.run call evaluates the three tf.Tensor objects A, x, b, and returns their values as numpy arrays. The second call, sess.run(y), works in the following way: The addition is the entry point of the graph resolution (Python variable y) and the computation ends. The first print call, therefore, produces the following output:

    A: [[1. 2.]
     [3. 4.]]
    x: [[ 0.  10. ]
     [ 0.   0.5]]
    b: [[ 1. -1.]]

    y: [[ 1. 10.]
     [ 1. 31.]]

The third sess.run call shows how it is possible to inject values into the computational graph from the outside, as numpy arrays, overwriting a node. The feed_dict parameter allows you to do this: usually, inputs are passed to the graph using the feed_dict parameter and through the overwriting of the tf.placeholder operation created exactly for this purpose. tf.placeholder is just a placeholder, created with the aim of throwing an error when values from the outside are not injected into the graph. However, the feed_dict parameter is more than just a way to feed the placeholders. In fact, the preceding example shows how it can be used to overwrite any node.
The result produced by overwriting the node identified by the Python variable b with a numpy array (which must be compatible, in terms of both type and shape, with the overwritten variable) is as follows:

    y_new: [[ 0. 11.]
     [ 0. 32.]]

The baseline example has been updated in order to show the following: So far, we have used graphs with constant values and used the feed_dict parameter of the sess.run call to overwrite a node parameter. However, since TensorFlow is designed to solve complex problems, the concept of tf.Variable has been introduced: every parametric machine learning model can be defined and trained with TensorFlow. In this post we talked about how dataflow graphs work in TensorFlow. To learn about the implementation of convolutional neural networks in TensorFlow, via a churn-prediction case study and a pneumonia-detection-from-x-rays case study, read the book Hands-On Neural Networks with TensorFlow 2.0 from Packt Publishing.
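Returning to the feed_dict mechanism: the overwriting behaviour can be mimicked in plain Python. This is a conceptual sketch, not TensorFlow internals; evaluation simply consults the feed dictionary before computing a node, so any node, not just a placeholder, can be shadowed by an injected value.

```python
# Toy sketch of feed_dict-style node overwriting; not TensorFlow code.
def evaluate(name, graph, feed_dict=None):
    """Resolve node `name` in `graph`, honouring injected values."""
    feed_dict = feed_dict or {}
    if name in feed_dict:
        # An injected value shadows the node's own computation.
        return feed_dict[name]
    fn, inputs = graph[name]
    return fn(*(evaluate(i, graph, feed_dict) for i in inputs))

graph = {
    "b":   (lambda: 1, ()),
    "mul": (lambda: 10, ()),
    "y":   (lambda m, v: m + v, ("mul", "b")),
}
print(evaluate("y", graph))                      # 11
print(evaluate("y", graph, feed_dict={"b": 0}))  # 10
```

Feeding "b" with 0 changes the result without touching the graph definition, which mirrors how feeding a tensor in a session overwrites that node for the duration of a single run.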
https://www.datasciencecentral.com/profiles/blogs/understanding-dataflow-graphs-in-tensorflow
Bug #13557: there's no way to pass backtrace locations as a massaged backtrace

Description

When re-raising exceptions, it is sometimes useful to "massage" the backtrace (especially in DSLs). There is currently no way to do this using only backtrace locations. This causes the new exception's #backtrace_locations to return nil, and thus makes backtrace_locations unreliable as a whole. Example:

    def test
      raise ArgumentError, "", caller_locations
    end

    begin
      test
    rescue ArgumentError => e
      p e.backtrace_locations
    end

Attempting to pass caller_locations to Kernel#raise in the test method fails with:

    bla.rb:2:in `set_backtrace': backtrace must be Array of String (TypeError)

History

Updated by sylvain.joyeux (Sylvain Joyeux) about 2 years ago

I made a mistake: the code example uses caller_locations (and therefore causes the TypeError exception), while I meant it to use caller and therefore cause e.backtrace_locations to be nil.

Updated by Eregon (Benoit Daloze) about 2 years ago

+1. This would be a nice feature and would let the VM keep the backtrace information in a flexible way (instead of dumping it to a String like Exception#backtrace).
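The current workaround, stringifying the Location objects, is exactly what the report wants to avoid, because the structured information is lost. A small demonstration of that loss:

```ruby
# Workaround: Kernel#raise only accepts an Array of String as the custom
# backtrace, so the Thread::Backtrace::Location objects must be stringified.
def test
  raise ArgumentError, "boom", caller_locations.map(&:to_s)
end

begin
  test
rescue ArgumentError => e
  puts e.backtrace.first
  # The structured information is gone after the round-trip through strings:
  p e.backtrace_locations  # => nil
end
```

The exception carries a usable string backtrace, but #backtrace_locations is nil, which is the unreliability the report describes.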
https://bugs.ruby-lang.org/issues/13557
Difference Between var(), QQ() and PolynomialRing()

I am rather new to Sage and am trying to understand its internals better. I ran into some confusion when reading through the reference manual as to the difference between the different ring constructs used in Sage. The var() function is of course used to declare a variable for symbolic manipulation, but when should one use QQ[] or PolynomialRing()? I ran into this issue with the convolution() function, which requires variables within functions to be declared using QQ[] or PolynomialRing() and will not work with var(). Why is this? Is QQ the default namespace? How do these namespaces relate to the symbolic ring used with var()? Thank you for your help!
https://ask.sagemath.org/question/9289/difference-between-var-qq-and-polynomialring/
Using runc to explore the OCI Runtime Specification

In recent posts I explored how to use user namespaces and cgroups v2 on OpenShift. My main objective is to run systemd-based workloads in user namespaces that map to unprivileged users on the host. This is a prerequisite to running FreeIPA securely in OpenShift, and to supporting multitenancy. Independently, user namespaces and cgroups v2 already work well in OpenShift. But for systemd support there is a critical gap: the pod's cgroup directory (mounted as /sys/fs/cgroup/ in the container) is owned by root, the host's UID 0, which is unmapped in the pod's user namespace. As a consequence, the container's main process (/sbin/init, which is systemd) cannot manage cgroups, and terminates. To understand how to close this gap, I needed to become familiar with low-level container runtime behaviour. This post discusses the relationship between various container runtime components and demonstrates how to use runc directly to create and run containers. I also outline some possible approaches to solving the cgroup ownership issue.

Podman, Kubernetes, CRI, CRI-O, runc, oh my! §

What actually happens when you "run a container"? Abstractly, a container runtime sets up a sandbox and runs a process in it. The sandbox consists of a set of namespaces (PID, UTS, mount, cgroup, user, network, etc.) and a restricted view of a filesystem (via chroot(2) or a similar mechanism). There are several different container runtimes in widespread use. In fact, there are several different layers of container runtime with different purposes:

- End-user focused container runtimes include Podman and Docker.
- Kubernetes defines the Container Runtime Interface (CRI), which it uses to run containers. Compliant implementations include containerd and CRI-O.
- The Open Container Initiative (OCI) runtime spec defines a low-level container runtime interface. Implementations include runc and crun.
OCI runtimes are designed to be used by higher-level container runtimes. They are not friendly for humans to use directly. Running a container usually involves a higher-level runtime and a low-level runtime. For example, Podman uses an OCI runtime: crun by default on Fedora, but runc works fine too. OpenShift (which is built on Kubernetes) uses CRI-O, which in turn uses runc (CRI-O itself can use any OCI runtime).

Division of responsibilities §

So, what are the responsibilities of the higher-level runtime compared to the OCI (or other low-level) runtime? In general the high-level runtime is responsible for:

- Image management (pulling layers, preparing the overlay filesystem)
- Determining the mounts, environment, namespaces, resource limits and security policies for the container
- Network setup for the container
- Metrics, accounting, etc.

The steps performed by the low-level runtime include:

- Create and enter the required namespaces
- chroot(2) or pivot_root(2) to the specified root filesystem path
- Create the requested mounts
- Create cgroups and apply resource limits
- Adjust capabilities and apply the seccomp policy
- Execute the container's main process

I mentioned several Linux-specific features in the lists above. The OCI Runtime Specification also covers Windows, Solaris and VM-based workloads. This post assumes a Linux workload, so many details are Linux-specific. The above lists are just a rough guide and not absolute: depending on the use case, the high-level runtime might perform some of the low-level steps. For example, if container networking is required, Podman might create the network namespace, setting up devices and routing. Then, instead of asking the OCI runtime to create a network namespace, it tells the runtime to enter the existing one.

Running containers via runc §

Because our effort is targeting OpenShift, the rest of this post mainly deals with runc.
The functions demonstrated in this post were performed using runc version 1.0.0-rc95+dev, which I built from source (commit 19d75e1c). The Fedora 33 and 34 repositories offer runc version 1.0.0-rc93, which does not work.

Clone and build §

Install the Go compiler and libseccomp development headers:

    % sudo dnf -y --quiet install libseccomp-devel
    Installed:
      golang-1.16.3-1.fc34.x86_64  golang-bin-1.16.3-1.fc34.x86_64
      golang-src-1.16.3-1.fc34.noarch  libseccomp-devel-2.5.0-4.fc34.x86_64

Clone the runc source code and build the program:

    % mkdir -p ~/go/src/github.com/opencontainers
    % cd ~/go/src/github.com/opencontainers
    % git clone --quiet
    % cd runc
    % make --quiet
    % ./runc --version
    runc version 1.0.0-rc95+dev
    commit: v1.0.0-rc95-31-g19d75e1c
    spec: 1.0.2-dev
    go: go1.16.3
    libseccomp: 2.5.0

Prepare root filesystem §

I want to create a filesystem from my systemd-based test-nginx container image. To avoid configuring overlay filesystems myself, I used Podman to create a container, then exported the whole container filesystem, via tar(1), to a local directory:

    % podman create --quiet quay.io/ftweedal/test-nginx
    e97930b3…
    % mkdir rootfs
    % podman export e97930b3 | tar -xC rootfs
    % ls rootfs
    bin   dev  home  lib64       media  opt   root  sbin  sys  usr
    boot  etc  lib   lost+found  mnt    proc  run   srv   tmp  var

Create config.json §

OCI runtimes read the container configuration from config.json in the bundle directory (runc uses the current directory as the default bundle directory). The runc spec command generates a sample config.json which can serve as a starting point:

    % ./runc spec --rootless
    % file config.json
    config.json: JSON data
    % jq -c .process.args < config.json
    ["sh"]

We can see that runc created the sample config. The command to execute is sh(1). Let's change that to /sbin/init:

    % mv config.json config.json.orig
    % jq '.process.args=["/sbin/init"]' config.json.orig \
        > config.json

jq(1) cannot operate on JSON files in situ, so you first have to copy or move the input file.
The sponge(1) command, provided by the moreutils package, offers an alternative approach.

Run container §

Now we can try to run the container:

    % ./runc --systemd-cgroup run test
    Mount failed for selinuxfs on /sys/fs/selinux: No such file or directory
    Another IMA custom policy has already been loaded, ignoring: Permission denied
    Failed to mount tmpfs at /run: Operation not permitted
    [!!!!!!] Failed to mount API filesystems. Freezing execution.

That didn't work: systemd failed to mount a tmpfs (temporary, memory-based filesystem) at /run, and halted. The container itself was still running (but frozen). I was able to kill it from another terminal:

    % ./runc list --quiet
    test
    % ./runc kill test KILL
    % ./runc list --quiet

It turned out that, in addition to the process to run, the config requires several changes to successfully run a systemd-based container. I will not repeat the whole process here, but I achieved a working config through a combination of trial and error, and comparison against OCI configurations produced by Podman. The following jq(1) program performs the required modifications:

    .process.args = ["/sbin/init"]
    | .process.env |= . + ["container=oci"]
    | [{"containerID":1,"hostID":100000,"size":65536}] as $idmap
    | .linux.uidMappings |= . + $idmap
    | .linux.gidMappings |= . + $idmap
    | .linux.cgroupsPath = "user.slice:runc:test"
    | .linux.namespaces |= . + [{"type":"network"}]
    | .process.capabilities[] = [
        "CAP_CHOWN", "CAP_FOWNER", "CAP_SETUID", "CAP_SETGID",
        "CAP_SETPCAP", "CAP_NET_BIND_SERVICE"
      ]
    | {"type": "tmpfs", "source": "tmpfs",
       "options": ["rw","rprivate","nosuid","nodev","tmpcopyup"]
      } as $tmpfs
    | .mounts |= [{"destination":"/var/log"} + $tmpfs] + .
    | .mounts |= [{"destination":"/tmp"} + $tmpfs] + .
    | .mounts |= [{"destination":"/run"} + $tmpfs] + .

This program performs the following actions:

- Set the container process to /sbin/init (which is systemd).
- Set the $container environment variable, as required by systemd.
- Add UID and GID mappings for IDs 1–65536 in the container's user namespace. The host range (starting at 100000) is taken from my user account's assigned ranges in /etc/subuid and /etc/subgid. You may need a different range. The mapping of the container's UID 0 to my user account already exists in the config.
- Set the container's cgroup path. A non-absolute path is interpreted relative to a runtime-determined location.
- Tell the runtime to create a network namespace. Without this, the container will have no network stack and nginx won't run.
- Set the capabilities required by the container. systemd requires all of these capabilities, although CAP_NET_BIND_SERVICE is only required for network name resolution (systemd-resolved) and for nginx.
- Tell the runtime to mount tmpfs filesystems at /run, /tmp and /var/log.

I ran the program to modify the config, then started the container:

    % jq --from-file filter.jq config.json.orig > config.json
    % ./runc --systemd-cgroup run test
    systemd v246.10-1.fc33 running in system mode. (+PAM …
    Detected virtualization container-other.
    Detected architecture x86-64.

    Welcome to Fedora 33 (Container Image)!
    …
    [  OK  ] Started The nginx HTTP and reverse proxy server.
    [  OK  ] Reached target Multi-User System.
    [  OK  ] Reached target Graphical Interface.
             Starting Update UTMP about System Runlevel Changes.
    [  OK  ] Finished Update UTMP about System Runlevel Changes.

    Fedora 33 (Container Image)
    Kernel 5.11.17-300.fc34.x86_64 on an x86_64 (console)

    runc login:

OK! systemd initialised the system properly and started nginx. We can confirm that nginx is running properly by running curl in the container:

    % ./runc exec test curl --silent --head localhost:80
    HTTP/1.1 200 OK
    Server: nginx/1.18.0
    Date: Thu, 27 May 2021 02:29:58 GMT
    Content-Type: text/html
    Content-Length: 5564
    Last-Modified: Mon, 27 Jul 2020 22:20:49 GMT
    Connection: keep-alive
    ETag: "5f1f5341-15bc"
    Accept-Ranges: bytes

At this point we cannot access nginx from outside the container.
That’s fine; I don’t need to work out how to do that. Not today, anyhow.

How runc creates cgroups §

runc manages container cgroups via the host’s systemd service. Specifically, it communicates with systemd over DBus to create a transient scope for the container. Then it binds the container cgroup namespace to this new scope. Observe that the inode of /sys/fs/cgroup/ in the container is the same as the scope created for the container by systemd on the host:

% ./runc exec test ls -aldi /sys/fs/cgroup
64977 drwxr-xr-x. 5 root root 0 May 27 02:26 /sys/fs/cgroup
% ls -aldi /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/runc-test.scope
64977 drwxr-xr-x. 5 ftweedal ftweedal 0 May 27 12:26 /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/runc-test.scope

The mapping of root in the container’s user namespace to ftweedal is confirmed by the UID map of the container process:

% id --user ftweedal
1000
% ./runc list -f json | jq '.[]|select(.id="test").pid'
186718
% cat /proc/186718/uid_map
         0       1000          1
         1     100000      65536

Next steps §

systemd is running properly in the container, but root in the container is mapped to my main user account. The container is not as isolated as I would like it to be. A partial sandbox escape could lead to the containerised process(es) messing with local files, or other processes owned by my user (including other containers).

User-namespaced containers in OpenShift (via CRI-O annotations) are allocated non-overlapping host ID ranges. All the host IDs are essentially anonymous. I confirmed this in a previous blog post. That is good! But the container’s cgroup is owned by the host’s UID 0, which is unmapped in the container. systemd-based workloads cannot run because the container cannot write to its cgroupfs. Therefore, the next steps in my investigation are:

Alter the ID mappings to use a single mapping of only “anonymous” users. This is a simple change to the OCI config.
The host IDs still have to come from the user’s allocated sub-ID range.

Find (or implement) a way to change the ownership of the container’s cgroup scope to the container’s UID 0. When using the systemd cgroup manager, runc uses the transient unit API to ask systemd to create a new scope for the container. I am still learning about this API. Perhaps there is a way to specify a different ownership for the new scope or service. If so, we should be able to avoid changes to higher-level container runtimes like CRI-O. That would be the best outcome.

Otherwise, I will investigate whether we could use the OCI createRuntime hook to chown(2) the container’s cgroup scope. Unfortunately, the semantics of createRuntime are currently underspecified. The specification is ambiguous about whether the container’s cgroup scope exists when this hook is executed. If this approach is valid, we will have to update CRI-O to add the relevant hook command to the OCI config.

Another possible approach is for the high-level runtime to perform the ownership change itself. This would be done after it invokes the OCI runtime’s create command, but before it invokes start. (See also the OCI container lifecycle description.) However, on OpenShift CRI-O runs user containers and the container’s cgroup scope is owned by root. So I have doubts about the viability of this approach, as well as the OCI hook approach.

Whatever the outcome, there will certainly be more blog posts as I continue this long-running investigation. I still have much to learn as I struggle towards the goal of systemd-based workloads running securely on OpenShift.
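As a footnote, the /proc/&lt;pid&gt;/uid_map translation shown above is easy to reproduce. The following Python sketch is my own (the helper name map_id is made up, not part of runc or the kernel); it applies the mapping lines the same way the kernel does:

```python
def map_id(uid_map: str, container_id: int) -> int:
    """Translate a container ID to a host ID using /proc/<pid>/uid_map.

    Each mapping line is: <container-start> <host-start> <length>.
    Raises ValueError if the ID is not covered by any mapping.
    """
    for line in uid_map.strip().splitlines():
        c_start, h_start, length = (int(f) for f in line.split())
        if c_start <= container_id < c_start + length:
            return h_start + (container_id - c_start)
    raise ValueError(f"unmapped id: {container_id}")


# The mappings shown above: UID 0 -> 1000, UIDs 1..65536 -> 100000...
UID_MAP = """\
0 1000 1
1 100000 65536
"""

print(map_id(UID_MAP, 0))   # root in the container is UID 1000 on the host
print(map_id(UID_MAP, 48))  # an "anonymous" container UID lands in the sub-ID range
```

This also makes it easy to see why a partial sandbox escape matters: container UID 0 lands directly on the author's primary host account.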
https://frasertweedale.github.io/blog-redhat/posts/2021-05-27-oci-runtime-spec-runc.html
" %> examples examples Hi sir...... please send me the some of the examples... more connectivity examples with different queries from the following links... questions . Hello Friend, Which type of connectivity examples do Console visually edit Struts, Tiles and Validator configuration files. The Struts Console... files Come to more detail:... Struts Console   i am Getting Some errors in Struts - Struts i am Getting Some errors in Struts I am Learning Struts Basics,I am Trying examples do in this Site Examples.i am getting lot of errors.Please Help me struts - Struts struts i want to learn more examples on struts like menu creation and some big application on struts.and one more thing that custom validation and client side validation in struts are not running which are present on rose india Ajax Examples these are invaluable to learning AJAX, some people need a bit more information than just... for the server, clicking another button, and then waiting some more. With Ajax... Ajax Examples   Struts Articles examples of Struts extensions are the Struts Validation framework and the Tiles.... The first section will provide an overview of both Struts and Web application security.... This article identifies some of the gaps between Struts and Hibernate, particularly Struts 1 Tutorial and example programs with Struts Tiles In this lesson we will create Struts Tiles.... Struts Actions Examples  ... actions shipped with Struts APIs. These built-in utility actions provide More APIs Become Available More APIs Become Available 3. More APIs Become Available: Some APIs and new methods have been added in JDBC 6.0. More APIs have been added to this version of JDBC to provide access Exceptions - More /a/today/2003/12/04/exceptions.html, has some useful "rules", but even more useful... 
Java NotesExceptions - More Exceptions | Exception Usage | Exceptions - More Kinds of Exceptions There are many exceptions, but they can Struts Books for more experienced readers eager to exploit Struts to the fullest.  ... are rolling, you can get more details from the Jakarta Struts documentation or one... Request Dispatcher. In fact, some Struts aficionados feel that I tiles - Struts Struts Tiles I need an example of Struts Tiles Struts Tag Lib - Struts . JSP Syntax Examples in Struts : Description The taglib..., sun, and sunw etc. For more information on Struts visit to : http...Struts Tag Lib Hi i am a beginner to struts. i dont have MySQL Examples, Learn MySQL with examples. the SQL examples using MySQL Database Server. These MySQL example SQL's tutorial... the different types of SQL Statements for managing you data. Few examples...-set by specified one or more columns.   Struts Projects Struts Projects Easy Struts Projects to learn and get into development ASAP. These Struts Project will help you jump the hurdle of learning complex Struts Technology. Struts Project highlights: Struts Project to make Struts 2 Format Examples Struts 2 Format Examples In this section you will learn how to format Date and numbers in Struts 2 Framework. Our Struts 2 Format Examples are very easy to grasp and you will learn these concepts Hi... - Struts more information,tutorials and examples on Struts with Hibernate visit... provide you hibernate tutorial with running code Please visit for more...Hi... Hi, If i am using hibernet with struts then require : JPA Examples In Eclipse JPA Examples In Eclipse In the JPA Examples section we will provide you almost all... Subquery and many more. So, let's get started with JPA Examples using Eclipse IDE Javascript Examples JavaScript Examples Clear cookie example Cookies can... In this part of JavaScript examples, we have created a simple example which shows the use... will provide you the clear understanding of blur event. 
JavaScript Change link Struts Tutorial , internationalization, error handling, tiles layout etc. Struts framework is also...). Features of Struts Struts has various of features some of them are as follows.... Struts View Component : Takes the input from the user and provide Struts - Struts provide the that examples zip. Thanks and regards Sanjeev. Hi friend...Struts Dear Sir , I am very new in Struts and want... to understand but little bit confusion,Plz could u provide the zip for address JSF Examples JSF Examples In this section we will discuss about various examples of JSF. This section describes some important examples of JSF which will help you... examples, I have tried to list these examples in a sequence that will help you Error - Struts to test the examples Run Struts 2 Hello...Error Hi, I downloaded the roseindia first struts example... create the url for that action then "Struts Problem Report Struts has detected Struts Reference struts and many other examples. Topics Covered in this Struts Reference... Struts Validation framework with example Struts Tiles example Struts... Struts Reference Welcome to the Jakarta Online Reference Struts - Struts , For read more information,Tutorials and Examples on Struts visit to : Thanks... in struts 1.1 What changes should I make for this?also write struts-config.xml Struts2 UI - Struts Struts2 UI Can you please provide me with some examples of how to do a multi-column layout in JSP (using STRUTS2) ? Thanks more than one struts-config.xml file more than one struts-config.xml file Can we have more than one struts-config.xml file for a single Struts application JPA Examples . Some miscellaneous examples JPA CRUD Example... to create applications using NetBeans IDE. For more examples...JPA Examples In this section we will discuss about the examples of JPA Hello - Struts Hello Hi friends, I ask some question please read... development kit (JDK) supplied by Sun does not provide a tool to create platform specific... 
as an executable .jar file instead of a .exe file. For read more information java struts DAO - Struts java struts DAO hai friends i have some doubt regarding the how to connect strutsDAO and action dispatch class please provide some example to explain this connectivity. THANKS IN ADVANCE Some Notes on Java Programming Environments Appendix 2: Some Notes on Java Programming Environments ANYONE WHO...-line environments. All programming environments for Java require some text... programmer should have some experience with IDEs, but I think that it's an experience Struts Code - Struts more information,Tutorials and Examples on struts visit to : http...Struts Code How to add token , and encript token and how decript token in struts. Hi friend, Using the Token methods The methods we Can you give me some good factory pattern examples? Can you give me some good factory pattern examples? HI, I am looking for some factory pattern examples to learn about them, if you can point me towards some of the examples that would we very helpful. Thanks Hello Browser Back/Forward - Struts a web application using Struts 2. Now, i want to enable browser back/forward/bookmarking in that. Can anyone post me some links/tutorial/examples or how to go... your reply. Thanks, Paddy Hi friend, For more information Downloading Struts Official example ://archive.apache.org/dist/struts/. You can also find many official examples... you how you can download the Struts Official examples from their website. Video View More tutorials at Struts Tutorials page.  Please provide the code Please provide the code Program to find depth of the file in a directory and list all files those are having more number of parent directories Flex Examples Flex Examples In this section we will see some examples in Flex. This section... the for each loop in other languages like C#, Java etc. For more Examples... 
the various examples in Flex which will help you in to understand that how Provide the code please Provide the code please Program to calculate the sum of two big numbers (the numbers can contain more than 1000 digits). Don't use any library classes or methods (BigInteger etc HTML5 examples examples to learn in detail with the help of many examples. Please provide me the urls of HTML5 examples. Thanks Hi, There are many new features... supported by major browsers. View examples at HTML5 Examples tutorial page Struts 2 Tutorial the database. More Struts Validator Examples User input validations... Struts 2 Framework with examples. Struts 2 is very elegant and flexible front... framework. Struts 2 Tags Examples   Struts - Struts Struts examples for beginners Struts tutorial and examples for beginners struts 1.3 struts 1.3 After entering wrong password more than three times,the popup msg display on the screen or the username blocked .please provide its code in struts jsp what is struts? - Struts technologies to provide the Model and the View. For the Model, Struts can interact...what is struts? What is struts?????how it is used n what... of the Struts framework is a flexible control layer based on standard technologies like Java Java Tutorials - JDK Tutorials, JAVA Examples, JDK Examples of Java. To learn more, please read the preface. Short Table of Contents... Features of Java Appendix 2: Some Notes on Java Programming Environments Appendix 3: Source Code for All Examples in this Book News and Errata Struts Tutorials is provided with the example code. Many advance topics like Tiles, Struts Validation... libraries introduced in Struts made JSP pages more readable and maintainable... 
with Struts and some tricks hidden deep inside the Struts framework Connecting Oracle database with struts - Struts Connecting Oracle database with struts Can anyone please provide me some solutions on Connection between Oracle database and struts provide code - Swing AWT provide code Dear frnds please provide code for two player CHESS GAME.....using swings,awt concepts Hi friend, import java.awt....); } } ------------------------------------- visit for more information. java struts error - Struts the problem what you want in details. For more information,Tutorials and examples on struts visit to : Thanks...java struts error my jsp page is post the problem Easy Struts Tomcat, Resin, Lomboz... (or simply a Java project). Provide Struts.... Provide a global view of any Java project with Easy Struts support... Easy Struts   iPhone SDK Examples While developing the iPhone program, many times you need some code to copy... small example code that you can copy and paste in your code. Small examples... with examples. In our examples we will be discussing about the following struts struts how to start struts? Hello Friend, Please visit the following links: Here you will get lot of examples through which you what are Struts ? and proven design patterns. Struts Examples...what are Struts ? What are struts ?? explain with simple example. The core of the Struts framework is a flexible control layer based JOptionPane - More Dialogs Prev: JOptionPane - Simple Dialogs | Next: none Java: JOptionPane - More Dialogs Here are some more useful static methods from javax.swing.JOptionPane that allow you to ask the user to indicate a choice. ValueMethod Hi, Here my quation is can i have more than one validation-rules.xml files in a struts application java - Struts : For more information on struts visit to : how can i get dynavalidation in my applications using struts? Hi friend, For dyna validation some step to be remember using displaytag with struts2 - Struts to develop an application. 
But, while surfing some code on internet, i saw some... it more simple and better to use than struts2 tag to generate the output on web page... is not suitable all the time. In dotnet generally they provide all this inbuild Apache MyFaces Examples . In this tutorial, some examples have been explored to understand the usage... through some examples : Data Scroller Example:(dataScroller.jsp...Apache MyFaces Examples   Introduction to Struts 2 Tags will provide you the examples of the these tags in detail. The Struts 2... Struts 2 Tags In this section we will introduce you with the tags provided along with Struts 2 framework struts for clarifying my doubts.this site help me a lot to learn more & more technologies like servlets, jsp,and struts. i am doing one struts application where i... the following links: More About Simple Trigger More About Simple Trigger  ... be used for creating a trigger which fires every some specified time (20 seconds.... There are following examples for implementing the SimpleTrigger: 1. Example Struts - Framework are the part of Controller. For read more information,examples and tutorials... struts application ? Before that what kind of things necessary..., Struts : Struts Frame work is the implementation of Model-View-Controller Struts Alternative of Struts to provide XML transformations based on the content produced.../PDF/more Automatic serialization of the ActionErrors, Struts... and more development tools provided support for building Struts based applications unable to execute the examples unable to execute the examples unable to execute the examples given for struts,ejb,or spring tutorials.Please help I am a beginner JPA 2.1 CRUD examples Learn how to create CRUD operation examples in JPA 2.1 In this section you... for more details about the different methods of EntityManager interface. Basic Concepts of JPA Let's learn some of the basic concepts of JPA before Struts Struts Why struts rather than other frame works? Struts... 
is not only thread-safe but thread-dependent. Struts2 tag libraries provide... Roseindia application. Struts 2 Tags contain output data and provide style sheet driven markup... support validation and localization of coding offering more utilization. Struts... Struts 2 Tags Examples Struts 2 Tutorials - Struts version Struts 2 Ajax applications. In this section we will provide you many examples to use Ajax... Struts 2 Ajax In this section, we explain you Ajax based development in Struts 2. Struts 2 provides built Your Articles Provide Money Making Opportunities To Your Customers Too! Your Articles Provide Money Making Opportunities To Your Customers Too... or service free of cost. It is also more effective as compared to the commonly available... write or do not write but writing is more of a necessity which you must fulfill Java - Struts . ------------------------------------------- Read for more information. Thanks. my doubt is some times somebady will tell mvc design pattern , and some times will tell mvc Struts Provide material over struts? Hi Friend, Please visit the following link: Thanks JSP Tutorial For Beginners With Examples presentation logic. For more tutorial/examples you may go through the link http...JSP Tutorial For Beginners With Examples In this section we will learn about... to add the dynamic content in JSP page. Examples that are given Need some help urgently Need some help urgently Can someone please help me with this below... that you should divide the number of comibnations with X or more 4's by the total number of possible combinations. The actual question got some error. Its Struts - Struts Struts What is Struts Framework? Hi,Struts 1 tutorial with examples are available at Struts 2 Tutorials... are looking for Struts projects to learn struts in details then visit at http More About Triggers the misfire is occurred. More descriptions about the misfire instruction will provide... 
More About Triggers In this section we will try to provide the brief description Struts 2 Date Examples Struts 2 Date Examples In this section we will discuss the date processing functionalities available in the Struts 2... provided by Struts 2 Framework. Date Format Examples Java tutorials for beginners with examples Java Video tutorials with examples are being provided to help the beginners... to make softwares for desktops, some websites use Java to run, applications... programmer is unending hence more and more programmers are moving towards struts struts how to make one jsp page with two actions..ie i need to provide two buttons in one jsp page with two different actions without redirecting to any other page Why Struts 2 to the Struts 2.0.1 release announcement, some key features are: Simplified...; results - Unlike ActionForwards, Struts 2 Results provide flexibility.... Struts 2 tags are more capable and result oriented. Struts 2 tag markup can Struts 2 Eclipse Plugin Struts 2 Eclipse Plugin This page contains the examples and the list of plugins that are available for Struts 2... for Struts 2. You can view the status of the project at struts-netbeans - Framework struts-netbeans hai friends please provide some help "how to execute struts programs in netbeans IDE?" is requires any software or any supporting files to execute this. thanks friends in advance Java Exceptions Tutorials With Examples Java Exceptions Tutorials With Examples  ... are nothing but some anomalous conditions that occur during... an exception that is more appropriate What Java - Struts Java hello friends, i am using struts, in that i am using tiles framework. here i wrote the following code in tiles-def.xml... doubt is when we are using struts tiles, is there no posibulity to use action class Textarea - Struts characters.Can any one? Given examples of struts 2 will show how to validate... 
the value more then 250 characters and cant left blank.Index.jsp<ul><...;%@ taglib prefix="s" uri="/struts-tags" %><html><head>
http://roseindia.net/tutorialhelp/comment/19793
CC-MAIN-2014-15
refinedweb
2,785
58.28
Hello,

I'm using LAPACKE_dgetrf to compute the LU factorization of square matrices in double precision. The matrix is in column-major layout. Here is what I am doing. The environment is MKL 2018 Update 3 for Windows + Visual Studio 2017.

for (...) {
    lapack_int m = dim;  // dim is around 40 to 80
    info = LAPACKE_dgetrf(mat_layout, m, m, A, m, ipiv);
    if (info != 0) { /* clean up memory */ break; }
}

I checked the info that was returned by LAPACKE_dgetrf and it was always 0. However, I found duplicate items in the ipiv array after each dgetrf call: the same value appeared at two different positions of ipiv. What is the possible reason for getting duplicate values in the partial pivoting array? Thank you.

Hi Xiaolin, could you please attach one test case to show the problem? For example, below:

#include <stdio.h>
#include <stdlib.h>
#include <memory.h>
#include <mkl.h>

int main(void)
{
    int len = 198;
    char buf[198];
    mkl_get_version_string(buf, len);
    printf("%s\n\n", buf);

    lapack_int m = 3;
    double A[9] = { 0, 5, 5, 2, 9, 0, 6, 8, 8 };
    lapack_int ipiv[3];
    int mat_layout = LAPACK_COL_MAJOR;
    lapack_int info = -1;
    info = LAPACKE_dgetrf(mat_layout, m, m, A, m, ipiv);
    if (info != 0) {
        printf("info error %d", info);
    }
    printf("\nLU_A\n");
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < m; j++) {
            printf("%f ", A[i + j * 3]);
        }
        printf("\n");
    }
    printf("ipiv:\n");
    for (int i = 0; i < m; i++) {
        printf("%d ", ipiv[i]);
    }
    return 0;
}

I did a quick test with MKL 2018 u3. The result looks fine with small-size input.

Best Regards,
Ying

The ipiv array is an array that lists row interchanges; it is not a true pivot vector. So if you see an ipiv array for a 3x3 case that looks like [2,2,2], that means that row 1 was interchanged with row 2, then row 2 (which is the original row 1) stayed the same, then row 3 was interchanged with row 2 (which was the original row 1). You can find more information about this in the description of ipiv in the getrf documentation. The routines getrs and getri are provided to work with pivot arrays of this form. Thank you.
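To see why repeated entries are expected, it helps to expand the interchange list into an ordinary permutation of the rows. This is an illustrative sketch in Python, not MKL code; the helper name and the sample ipiv values are assumptions for demonstration:

```python
def ipiv_to_permutation(ipiv):
    """Convert a LAPACK getrf-style pivot array (1-based row
    interchanges) into a 0-based row permutation.

    ipiv[i] == j means: at elimination step i, row i was swapped
    with row j-1. Applying the swaps in order yields the permutation.
    """
    perm = list(range(len(ipiv)))
    for i, j in enumerate(ipiv):
        perm[i], perm[j - 1] = perm[j - 1], perm[i]
    return perm


# Sample 3x3 interchange arrays (assumed values, for illustration):
print(ipiv_to_permutation([2, 2, 3]))  # [1, 0, 2]
print(ipiv_to_permutation([2, 2, 2]))  # duplicates are fine: [1, 2, 0]
```

The second call reproduces the [2,2,2] example from the answer above: the final row order is (original row 2, original row 3, original row 1), even though the interchange array repeats the same value three times.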
https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Strange-partial-pivoting-of-LAPACKE-dgetrf/td-p/1143469
java.lang.Object
  org.netlib.lapack.SPOTRI

public class SPOTRI

SPOTRI is a simplified interface to the JLAPACK routine spotri.

 * =======
 *
 * SPOTRI computes the inverse of a real symmetric positive definite
 * matrix A using the Cholesky factorization A = U**T*U or A = L*L**T
 * computed by SPOTRF.
 *
 * Arguments
 * =========
 *
 * UPLO    (input) CHARACTER*1
 *         = 'U': Upper triangle of A is stored;
 *         = 'L': Lower triangle of A is stored.
 *
 * N       (input) INTEGER
 *         The order of the matrix A. N >= 0.
 *
 * A       (input/output) REAL array, dimension (LDA,N)
 *         On entry, the triangular factor U or L from the Cholesky
 *         factorization A = U**T*U or A = L*L**T, as computed by
 *         SPOTRF.
 *         On exit, the upper or lower triangle of the (symmetric)
 *         inverse of A, overwriting the input factor U or L.
 *
 * LDA     (input) INTEGER
 *         The leading dimension of the array A. LDA >= max(1,N).
 *
 * INFO    (output) INTEGER
 *         = 0: successful exit
 *         < 0: if INFO = -i, the i-th argument had an illegal value
 *         > 0: if INFO = i, the (i,i) element of the factor U or L is
 *              zero, and the inverse could not be computed.
 *
 * =====================================================================
 *
 * .. External Functions ..

public SPOTRI()

public static void SPOTRI(java.lang.String uplo, int n, float[][] a, intW info)
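For intuition, here is what the SPOTRF + SPOTRI combination computes, sketched in plain Python rather than JLAPACK. This is a naive dense implementation for illustration only (my own helper names; no packed-triangle storage, pivoting, or INFO error handling as in the real routines):

```python
import math

def cholesky(a):
    """Return lower-triangular L such that a == L * L^T (no pivoting)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def spd_inverse(a):
    """Inverse of a symmetric positive definite matrix via its
    Cholesky factor -- conceptually what xPOTRF + xPOTRI do together."""
    n = len(a)
    L = cholesky(a)
    # Invert L column by column with forward substitution.
    Linv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        for i in range(col, n):
            rhs = (1.0 if i == col else 0.0) - sum(
                L[i][k] * Linv[k][col] for k in range(col, i))
            Linv[i][col] = rhs / L[i][i]
    # A^{-1} = (L^{-1})^T * (L^{-1})
    return [[sum(Linv[k][i] * Linv[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A = [[4.0, 2.0], [2.0, 3.0]]
Ainv = spd_inverse(A)  # approximately [[0.375, -0.25], [-0.25, 0.5]]
```

As in the routine's documentation, the inverse overwrites the factor in the real LAPACK interface; the sketch returns a fresh matrix instead for clarity.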
http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SPOTRI.html
Question:

I am maintaining a SOAP web service (ASP.NET version 2.0) and I have to make some changes that will modify the return values of particular methods. What is the generally accepted method of doing this without breaking existing implementations? My initial thoughts are that the following would all be possible:

a) Provide new version-specific methods within the existing web service, e.g. getPerson_v1.4.

b) Provide a complete copy of the .asmx file with a new version number, e.g. http:/. This is not an idea I relish, as the service has more than 50 methods and copying that code for changes to two or three methods seems like too much duplicated code.

c) Override the web service constructor to allow passing in a version number. This does not seem to work, and on reflection I'm not sure how that would be represented within a WSDL.

Is there a generally accepted way of doing this, or do people have advice based upon their experiences in this area?

Solution:1

In the general case, there's more to versioning a web service than just versioning method names and .asmx file names. Ideally, the interface to a web service (its WSDL) should be a permanent contract, and should never change. One of the implications would be that clients that do not need the changed functionality would never need to change, and therefore would never need to be retested.

Instead of breaking the existing contract, you should create a new contract that contains the changed operations. That contract could "inherit" from the existing contract, i.e., you could "add the methods to the end". Note, however, that you should also put the new contract into a new XML namespace - the namespace basically identifies the WSDL, and keeping the namespace but changing the WSDL would be a lie.

You should then implement this new contract at a new endpoint (.asmx file). Whether or not this is in a different directory, or even on a different web site doesn't really matter.
What matters is that clients who want the new functionality can refer to the new WSDL at the new URL and call the new service at its new URL, and be happy.

Be aware that one effect of changing an existing contract is that the next time an "Update Web Reference" is performed, you will be changing the code of the client proxy classes. In most shops, changing code requires re-testing and redeploying. You should therefore think of "just adding methods" as "just adding some client code that has to be tested and deployed", even if the existing client code does not use the new methods.

Solution:2

We deploy to version directories, for example: etc.

Solution:3

I've just thought of another possible solution which seems quite clean. I could check for a version number included as a SOAP header, and assume the existing version number if it is not provided. I can then make the code behave differently for different versions without changing the method signatures. This is possible as the return values from the web services are XML objects, so the method signature remains the same but the content of the XML changes based on version.

Solution:4

I have the same versioning issue with web services that I am developing. We make our users pass a schema version number in the header. They tell us which version of the XML schema they want back. This way, we are always backwards compatible and code is not duplicated.

At my job, we cannot tell the client that they have to switch the URL to the web service when we version it. In big corporations, changes as small as a URL could take months of testing. It is my feeling that you should not break your clients' connection. What we do is add new features to the latest version. When the client asks for the new features, if they want them, they are forced to upgrade to the newest schema.

Solution:5

Unless you change most of the method signatures with each new version, I'd go with (a) - versioned method names.
This is how our providers do it and it's working fine for us.
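The header-based versioning described in Solutions 3 and 4 amounts to dispatching on a client-supplied schema version. Here is a minimal, language-neutral sketch in Python (the function names and the "SchemaVersion" header key are illustrative assumptions, not ASP.NET API):

```python
# Per-version renderers; old versions keep working untouched.
def render_person_v1(person):
    return {"name": person["name"]}

def render_person_v2(person):
    # v2 adds a field without disturbing v1 clients.
    return {"name": person["name"], "email": person.get("email", "")}

RENDERERS = {"1.0": render_person_v1, "2.0": render_person_v2}
DEFAULT_VERSION = "1.0"  # clients sending no header get the original schema

def get_person(person, headers):
    """Dispatch on a client-supplied schema version header."""
    version = headers.get("SchemaVersion", DEFAULT_VERSION)
    if version not in RENDERERS:
        raise ValueError(f"unsupported schema version: {version}")
    return RENDERERS[version](person)

p = {"name": "Ada", "email": "ada@example.org"}
print(get_person(p, {}))                        # v1 shape: name only
print(get_person(p, {"SchemaVersion": "2.0"}))  # v2 shape adds email
```

Because the method signature never changes, existing clients are unaffected, which is exactly the backwards-compatibility property Solution 4 describes.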
http://www.toontricks.com/2018/05/tutorial-what-is-best-way-to-version.html
It is not necessary that the abstract method... and call the method of abstract class. Then we will compile and execute the program...Java Abstract Class Example In this section we will read about the Abstract Inheritance, abstract classes method for the balance. Also included two abstract get methods-one for each...Inheritance, abstract classes Hi. I wish to thank you for answering... the Checking class, the get method displays the string"Checking Account Information Using Abstract Class method should be abstract. We can't instantiate the object of the abstract class...Using Abstract Class We does not make a object of the abstract Abstract class or methods example-1 ;class can hold non abstract method abstract ...;buzzwordAnimal class method } } class AbstractExample ...;take place as it has similar method   Interface Vs Abstract Class extend one class an abstract class may have some method implementation (non... members, the use abstract method In case of abstract class, you are free... Interface Vs Abstract Class   CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account Interfaces and Abstract Classes - Development process Interfaces and Abstract Classes What are the Scenarios where we use Interface and Abstract Classes? Hi Friend, Interface: Java does.... It relate classes from different types of hierarchy. Abstract Class: Any class Abstract class and Interface - Java Magazine Abstract class and Interface Dear Sir, Please anyone help me........I wane exact difference between Abstract class and Interface.what is the advantage of static method. Thanks®ards, VijayaBabu.M Hi Can you create an object of an abstract class? Can you create an object of an abstract class? Hi, Can you create an object of an abstract class? Thanks Can we instantiate abstract class in Java? Can we instantiate abstract class in Java? HI, Can we instantiate abstract class in Java? 
Thanks Hi, No, you can't instantiate an abstract class. Thanks Abstract methods and classes is used with methods and classes. Abstract Method An abstract method one... by default. Abstract method provides the standardization for the " name... Abstract methods and classes   PHP Abstract Class descendant class(es). If a class contains abstract method then the class must be declared as abstract. Any method which is declared as abstract must not have...PHP Abstract Class: Abstract classes and methods are introduced in PHP 5 Difference between abstract class and an interface contains both abstract method as well as non-abstract method(concrete), but interface can contain only abstract method, no concrete method allow... while abstract class can have both abstract and concrete method. Abstract class overriding in derived class from abstract class overriding in derived class from abstract class why should override base class method in derived class? Instead of override abstract class method, we can make method implementation in derived class itself know Java Inheritance and abstract - Java Interview Questions abstract class or interface. Because both provides the abstract method. Thank... Interface in which we are not doing any operation except declaring empty abstract method and implementing this interface to force the subclass to implements all Uses of abstract class & interface - Java Beginners Uses of abstract class & interface Dear sir, I'm new to java. I knew the basic concepts of interface and the abstract class. But i dont... classes from different types of hierarchy. Abstract Class: Any class Abstract Factory Pattern Abstract Factory Pattern II Abstract Factory Pattern : This pattern is one level of abstraction higher than factory pattern. 
This means that the abstract factory returns abstract class abstract class Can there be an abstract class with no abstract methods Abstract class Abstract class what is an Abstract class Abstract class Abstract class Can an abstract class be final java Method Error - Java Beginners mathdemo.java:7: missing method body, or declare abstract static int mul(int x,int y); ^ mathdemo.java:9: return outside method...java Method Error class mathoperation { static int add(int Abstract and Interface Abstract and Interface what is the difference between Abstract and Interface accurateatly abstract class abstract class Explain the concept of abstract class and it?s use with a sample program. Java Abstract Class An abstract class is a class that is declared by using the abstract keyword. It may or may not have thanks - JSP-Servlet thanks thanks sir i am getting an output...Once again thanks for help corejava - Java Beginners corejava - Java Interview Questions CoreJava - Java Beginners thanks - Java Beginners main method in paranthesis denote? Hi Friend, public-It indicates that the main() method can be called by any object. static-It indicates that the main() method is a class method. void- It indicates that the main() method has no return value Abstract and Interface Abstract and Interface What is interface? When time is most suitable for using interface? Why we use interface instead of abstract? What is the differences between abstract and interface? An interface defines a set private method private method how can define private method in a class and using...(5,6); Method meth = rectangle.class.getDeclaredMethod("calculate...); System.out.println("Area of rectangle= " + result); } } Thanks Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
figured it out myself

So I am trying to make a character movement system that looks cool and I've run into a major problem. I have this 2D top-down character that consists of three sprites: head, body and legs. The head rotates towards the mouse using the atan2 function, the legs rotate if the angle between the head and legs is greater than 90 degrees, and the body is just half the angle between the head and legs. The problem is that the atan2 function jumps between -PI and PI, and instead of the legs continuing to follow the head, it acts as if the head got rotated the other way really quickly, which is not at all what I want. Please help, my brain is melting!

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class playerController : MonoBehaviour
{
    float a;
    float aH;
    float aL;
    float aB;
    float t = 10f;
    float legspace = Mathf.PI / 2;

    public GameObject head;
    public GameObject body;
    public GameObject legs;

    void Update()
    {
        Vector3 mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        a = Mathf.Atan2(mousePos.y, mousePos.x);
        if (a < 0) a += Mathf.PI * 2;

        Quaternion a1 = Quaternion.Euler(0, 0, aH);
        Quaternion a2 = Quaternion.Euler(0, 0, a);
        Quaternion aHq = Quaternion.Slerp(a1, a2, t * Time.deltaTime);
        aH = aHq.eulerAngles.z;

        if ((aH - aL) > legspace) aL = aH - legspace;
        else if ((aH - aL) < -legspace) aL = aH + legspace;
        aB = ((aH - aL)) / -2 + aH;

        legs.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aL);
        body.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aB);
        head.transform.rotation = Quaternion.Euler(0, 0, Mathf.Rad2Deg * aH);
    }
}

Answer by Wolfride · Dec 16, 2018 at 03:28 AM

I mean... can't you just parent all 3 pieces to an empty game object and just rotate that? Or you don't want them all to move at the same time and your problem is that the head is moving too fast while everything else is fine? And if you just didn't have the head rotate as far, it would look really good as is.
The thing is that I want the player to continue turning one way, without that sudden jump on the right side. And I don't want to limit the head movement either; I want him to be able to turn 360 degrees freely, without the sudden jump.
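For the record, the usual fix for this kind of ±PI jump is to steer by the shortest signed difference between the two angles rather than their raw difference. Here is a minimal sketch in plain Java rather than Unity C# (in Unity, Mathf.DeltaAngle performs a similar normalization in degrees):

```java
public class AngleWrap {
    // Returns the shortest signed difference (target - current),
    // normalized into (-PI, PI]. Stepping the current angle by a
    // fraction of this value turns the short way around the circle,
    // with no discontinuity at the -PI/PI seam.
    static double deltaAngle(double current, double target) {
        double diff = (target - current) % (2 * Math.PI);
        if (diff > Math.PI) diff -= 2 * Math.PI;
        if (diff <= -Math.PI) diff += 2 * Math.PI;
        return diff;
    }

    public static void main(String[] args) {
        // Crossing the seam: the raw difference is about -6.08 rad,
        // but the shortest rotation is a small positive step.
        double current = Math.PI - 0.1;   // just below +PI
        double target = -Math.PI + 0.1;   // just above -PI
        System.out.println(deltaAngle(current, target)); // prints a value close to 0.2
    }
}
```

Applying this to the question's code would mean advancing aL and aH by deltaAngle(...) increments instead of comparing the raw aH - aL, so the legs keep following the head continuously in one direction.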
Conservation of mass in chemical reactions

Posted February 27, 2013 at 10:54 AM | categories: linear algebra | tags: reaction engineering

Updated March 06, 2013 at 04:27 PM.

We consider the water gas shift reaction: \(CO + H_2O \rightleftharpoons H_2 + CO_2\). We can illustrate the conservation of mass with the following equation: \(\bf{\nu}\bf{M}=\bf{0}\), where \(\bf{\nu}\) is the stoichiometric coefficient vector and \(\bf{M}\) is a column vector of molecular weights. For simplicity, we use pure isotope molecular weights, and not the isotope-weighted molecular weights. This equation simply compares the mass on the right side of the reaction with the mass on the left side.

import numpy as np

nu = [-1, -1, 1, 1];
M = [28, 18, 2, 44];
print np.dot(nu, M)

0

You can see that the sum of the stoichiometric coefficients times the molecular weights is zero. In other words, a CO and an H_2O molecule together have the same mass as an H_2 and a CO_2 molecule.

Alternatively, we can count the number of C, O, and H atoms on each side of the reaction. If we add up the mass of the atoms in the reactants and products, it should sum to zero (since we used negative signs for the stoichiometric coefficients of the reactants).

import numpy as np

# C O H
reactants = [-1, -2, -2]
products = [ 1, 2, 2]
atomic_masses = [12.011, 15.999, 1.0079]  # atomic masses

print np.dot(reactants, atomic_masses) + np.dot(products, atomic_masses)

0.0

That is all there is to mass conservation with reactions. Nothing changes if there are lots of reactions, as long as each reaction is properly balanced, and none of them are nuclear reactions!

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
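As an aside (not part of the original post): the closing remark about multiple reactions can be checked the same way by stacking each reaction's stoichiometric vector into a matrix; every row must dot to zero against the molecular-weight vector. A sketch in plain Java, where the second reaction (steam methane reforming) is added purely for illustration:

```java
public class MassBalance {
    // Dot product of a stoichiometric coefficient row with molecular weights.
    static double dot(double[] nu, double[] M) {
        double s = 0;
        for (int i = 0; i < nu.length; i++) {
            s += nu[i] * M[i];
        }
        return s;
    }

    public static void main(String[] args) {
        // Species order: CO, H2O, H2, CO2, CH4 (pure-isotope weights, as in the post)
        double[] M = {28, 18, 2, 44, 16};
        double[][] nu = {
            {-1, -1, 1, 1, 0},   // CO + H2O <-> H2 + CO2 (water gas shift)
            { 1, -1, 3, 0, -1},  // CH4 + H2O <-> CO + 3 H2 (steam methane reforming)
        };
        for (double[] row : nu) {
            System.out.println(dot(row, M)); // prints 0.0 for each balanced reaction
        }
    }
}
```

An unbalanced row would produce a nonzero result, which makes this a cheap sanity check on a whole reaction network.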
Introduction

With the rise in adoption of smartphones around the world, there has been an influx of mobile applications built to accomplish a wide variety of tasks. Some of the applications we use on a daily basis communicate with other systems to give us a seamless experience across multiple devices and platforms. How is this possible? Application Programming Interfaces (APIs) are responsible for this extended connectivity. They make it possible for mobile and web applications to interact and facilitate data transfer between them and other systems. In this article, we will discuss APIs and the best practices for building them, and we will also build an API using the Test-Driven Development approach and the Spring Boot framework.

The Rise of APIs

An API defines a set of routines and protocols for the interaction between software systems. Many mobile and web applications, referred to as clients, interact with servers that handle their requests and respond to them. As systems grow in size, they become complex and hard to maintain and update. By decoupling a system into several focused APIs, we gain flexibility, and the parts of the formerly monolithic system can now be updated or deployed independently without affecting the uptime or performance of the rest. This results in a microservices architecture, which is heavily reliant on API development.

Smartphones enabled us to stay connected, and with their increasing power, we can achieve so much more. Internet access has also become more common, so most smartphones are constantly connected to the internet. These two factors drive the usage of mobile applications that interact with web servers, which is where APIs come into the picture.
APIs facilitate the communication between mobile applications and servers, and the rise in mobile application usage has in turn driven the rise of APIs.

Web applications have also evolved over time, and their complexity has increased. This has led to the separation of the presentation and logic layers of a typical web application. Initially, both layers would be built together and deployed as one unit for use by the masses. Now, the frontend is decoupled from the backend to improve the separation of concerns.

APIs also enable companies to use a single backend setup to serve mobile applications and web applications at the same time. This saves on development time and technical debt, since the backend system only needs to be modified in one place. Smartphones are also diverse, and companies now have to cater to multiple types of smartphones at the same time in order to provide a uniform experience to their users. APIs make it possible for mobile applications running on different platforms to interact in a uniform way with a single backend system, or API.

It is also worth mentioning that APIs make it possible for developers using different programming languages to tap into our system for information. This makes it easier to integrate systems that use different programming languages. This, again, allows us to build modular applications, using various languages, tools and frameworks together to bring out the best of each.
This makes APIs important in current and coming software ecosystems, as they allow us to integrate with other systems in flexible ways. Not just APIs, though, but good APIs. It is paramount that our API is well built and documented so that anyone who consumes it has an easier time. Documentation is the single most important aspect of an API: it lets other developers know what it accomplishes and what is required to tap into that functionality. It also helps maintainers know what they are dealing with and make sure their changes do not affect or break existing functionality.

HTTP status codes were defined to identify the various situations that may occur when an application is interacting with an API. They are divided into five categories:

- Informational responses: 1xx statuses, such as 100 Continue, 101 Switching Protocols, etc.
- Success: 2xx statuses, such as 200 OK, 202 Accepted, etc.
- Redirection: 3xx statuses, such as 300 Multiple Choices, 301 Moved Permanently, etc.
- Client errors: 4xx statuses, such as 400 Bad Request, 403 Forbidden, 404 Not Found, etc.
- Server errors: 5xx statuses, such as 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, etc.

These codes help the system and the people interacting with it identify and understand the nature of the events that occur and the causes of any errors. By adhering to the HTTP status codes in our APIs, we can make our APIs easy to interact and integrate with. Besides these, we can also define our own error codes for our APIs, but it is important that we document them clearly to make things easier for both the consumers and the maintainers of the APIs.

Before cars, phones, or other electronic devices are released to their users, they are thoroughly tested to ensure that they do not malfunction when in use. APIs have become more common and important; therefore, they also need the same amount of attention to detail.
They should be thoroughly tested before being released, to avoid malfunctioning in production.

Building an API

Project Architecture

Let us assume that we are building an app that helps users maintain a list of their cars. They will be able to add new cars, update existing cars, and even remove cars that they no longer possess. This application will be available for both Android and iOS devices and also as a web application. Using the Spring Boot framework, we can build a single API that can serve all three applications, or clients, simultaneously.

Our journey starts at the Spring Initializr tool, which helps us quickly bootstrap our Spring Boot API in a matter of minutes. There are a lot of dependencies and packages that help us achieve various functionality in our APIs, and the Spring Initializr tool helps integrate them into our starter project. This is aimed at easing our development process and letting us direct our attention to the logic of our application.

The tool allows us to choose between Maven and Gradle, which are tools that help us automate some aspects of our build workflow, such as testing, running, and packaging our Java application. We also get the option to choose between Java and Kotlin when building our API using Spring Boot, for which we can specify the version. When we click on "Switch to the full version" we get more options to bundle into our API. A lot of these options come in handy when building microservices, such as the "Cloud Config" and "Cloud Discovery" sections.

For our API, we will pick the following dependencies:

- Web, to help us develop a web-based API,
- MySQL, which will help us connect to our MySQL database,
- JPA, the Java Persistence API, to meet our database interaction needs, and
- Actuator, to help us maintain and monitor our web application.

With the dependencies set, we click the "Generate Project" button to get a zip containing our boilerplate code.
Let us identify what comes in the package using the tree command:

$ tree
.
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── cars
    │   │               └── CarsApplication.java
    │   └── resources
    │       ├── application.properties
    │       ├── static
    │       └── templates
    └── test
        └── java
            └── com
                └── example
                    └── cars
                        └── CarsApplicationTests.java

At the root folder, there is a pom.xml file that contains the project configuration for our Spring Boot API. If we used Gradle, we would have a build.gradle file instead. It includes information such as the details of our new API and all of its dependencies.

We will mostly work in the main and test folders inside the source (src) folder. This is where we will place our controllers, models, utility classes, among others.

Let us start by creating our database and configuring our API to use it. Follow this guide to install and verify that MySQL is running. Once ready, let us create our database as follows:

$ mysql -u root -p

mysql> CREATE DATABASE cars_database;
Query OK, 1 row affected (0.08 sec)

Some details of our service will differ from environment to environment. For example, the database we use during development will not be the same one that the end users will use to store their information. Configuration files make it easy for us to switch such details, making our API easy to migrate and modify. This is achieved through the configuration file, which in a Spring Boot API is the application.properties file located in the src/main/resources folder.

To enable our JPA dependency to access and modify our database, we modify the configuration file by adding the properties:

# Database Properties
spring.datasource.url = jdbc:mysql://localhost:3306/cars_database?useSSL=false
spring.datasource.username = root
spring.datasource.password = password

We now need an entity class to define our API's resources and their details as they will be saved in our database. A Car is our resource on this API, meaning it represents the object, or real-life item, whose information we will perform actions on.
Such actions include Create, Read, Update, and Delete, known in short as the CRUD operations. These operations are behind the HTTP methods, or verbs, that refer to the various operations that an API can expose. They include:

- GET, which is a read operation that only fetches the specified data,
- POST, which enables the creation of resources by supplying their information as part of the request,
- PUT, which allows us to modify a resource, and
- DELETE, which we use to remove a resource and its information from our API.

To better organize our code, we will introduce some more folders in our project at the src/main/java/com/example/cars/ level. We will add a folder called models to host the classes that define our objects. The other folders to be added include a controllers folder that contains our controllers, a repository folder for the database management classes, and a utils folder for any helper classes we might need to add to our project. The resulting folder structure will be:

$ tree
.
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── cars
    │   │               ├── CarsApplication.java
    │   │               ├── controllers
    │   │               ├── models
    │   │               ├── repository
    │   │               └── utils
    │   └── resources
    │       ├── application.properties
    │       ├── static
    │       └── templates
    └── test
        └── java
            └── com
                └── example
                    └── cars
                        └── CarsApplicationTests.java

Domain Model

Let us define our Car class in the models folder:

/**
 * This class will represent our car and its attributes
 */
@Entity
@Table(name="cars") // the table in the database that will contain our cars data
@EntityListeners(AuditingEntityListener.class)
public class Car {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id; // Each car will be given an auto-generated unique identifier when stored

    @Column(name="car_name", nullable=false)
    private String carName; // We will also save the name of the car

    @Column(name="doors", nullable=false)
    private int doors; // We will also save the number of doors that a car has

    // getters and setters
}

Note: I have stripped off the imports to make the code snippet shorter.
Please refer to the GitHub repo attached at the end of the article for the full code.

DAO

With our car model ready, let us now create the CarRepository file that will be used in the interaction with the database:

public interface CarRepository extends JpaRepository<Car, Long> {
}

Writing Tests

We can now expose the functionality of our API through our controller, but in the spirit of Test-Driven Development (TDD), let us write the tests first, in the CarsApplicationTests file:

// These are a subset of the tests; the full test file is available on the GitHub repo attached at the end of this article

/**
 * Here we test that we can get all the cars in the database
 * using the GET method
 */
@Test
public void testGetAllCars() {
    HttpHeaders headers = new HttpHeaders();
    HttpEntity<String> entity = new HttpEntity<String>(null, headers);
    ResponseEntity<String> response = restTemplate.exchange(getRootUrl() + "/cars",
            HttpMethod.GET, entity, String.class);
    Assert.assertNotNull(response.getBody());
}

/**
 * Here we test that we can fetch a single car using its id
 */
@Test
public void testGetCarById() {
    Car car = restTemplate.getForObject(getRootUrl() + "/cars/1", Car.class);
    System.out.println(car.getCarName());
    Assert.assertNotNull(car);
}

/**
 * Here we test that we can create a car using the POST method
 */
@Test
public void testCreateCar() {
    Car car = new Car();
    car.setCarName("Prius");
    car.setDoors(4);
    ResponseEntity<Car> postResponse = restTemplate.postForEntity(getRootUrl() + "/cars", car, Car.class);
    Assert.assertNotNull(postResponse);
    Assert.assertNotNull(postResponse.getBody());
}

/**
 * Here we test that we can update a car's information using the PUT method
 */
@Test
public void testUpdateCar() {
    int id = 1;
    Car car = restTemplate.getForObject(getRootUrl() + "/cars/" + id, Car.class);
    car.setCarName("Tesla");
    car.setDoors(2);
    restTemplate.put(getRootUrl() + "/cars/" + id, car);
    Car updatedCar = restTemplate.getForObject(getRootUrl() + "/cars/" + id, Car.class);
    Assert.assertNotNull(updatedCar);
}

The tests simulate various actions that are possible on our API, and this is our way of verifying that the API works as expected. If a change were to be made tomorrow, the tests would help determine whether any of the functionality of the API is broken, and in doing so, prevent us from breaking functionality when effecting changes.

Think of tests as a shopping list when going into the supermarket. Without it, we might end up picking almost everything we come across that we think might be useful, and it might take us a long time to get everything we need. If we had a shopping list, we would be able to buy exactly what we need and finish shopping faster. Tests do the same for our APIs: they help us define the scope of the API so that we do not implement functionality that was not planned or needed.

When we run our tests using the mvn test command, we will see errors raised, and this is because we have not yet implemented the functionality that satisfies our test cases. In TDD, we write tests first, run them to ensure they initially fail, then implement the functionality to make the tests pass. TDD is an iterative process of writing tests and implementing the functionality to make the tests pass. If we introduce any changes in the future, we will write the tests first, then implement the changes to make the new tests pass.
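The red-green cycle described above can be played out in miniature with plain Java assertions, outside of Spring entirely. The validateDoors helper below is hypothetical and not part of the article's API; the point is only the order of work: the checks in main were written first, then the method was implemented to make them pass.

```java
// A miniature TDD round, stdlib-only: the "tests" in main were
// written first (red phase), then validateDoors was implemented
// to make them pass (green phase).
public class MiniTdd {
    // Hypothetical rule: a car must have between 2 and 6 doors.
    static boolean validateDoors(int doors) {
        return doors >= 2 && doors <= 6;
    }

    public static void main(String[] args) {
        // These checks fail until validateDoors is implemented correctly.
        if (!validateDoors(4)) throw new AssertionError("4 doors should be valid");
        if (validateDoors(0)) throw new AssertionError("0 doors should be invalid");
        if (validateDoors(7)) throw new AssertionError("7 doors should be invalid");
        System.out.println("all mini-tests pass");
    }
}
```

The Spring tests above follow the same rhythm, just with a running application context and HTTP calls in place of direct method calls.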
Controller

Let us now implement our API functionality in a CarController, which goes into the controllers folder:

@RestController
@RequestMapping("/api/v1")
public class CarController {

    @Autowired
    private CarRepository carRepository;

    // GET method for the Read operation
    @GetMapping("/cars")
    public List<Car> getAllCars() {
        return carRepository.findAll();
    }

    // GET method for the Read operation
    @GetMapping("/cars/{id}")
    public ResponseEntity<Car> getCarsById(@PathVariable(value = "id") Long carId)
            throws ResourceNotFoundException {
        Car car = carRepository
                .findById(carId)
                .orElseThrow(() -> new ResourceNotFoundException("Car not found on :: " + carId));
        return ResponseEntity.ok().body(car);
    }

    // POST method for the Create operation
    @PostMapping("/cars")
    public Car createCar(@Valid @RequestBody Car car) {
        return carRepository.save(car);
    }

    // PUT method for the Update operation
    @PutMapping("/cars/{id}")
    public ResponseEntity<Car> updateCar(
            @PathVariable(value = "id") Long carId,
            @Valid @RequestBody Car carDetails) throws ResourceNotFoundException {
        Car car = carRepository
                .findById(carId)
                .orElseThrow(() -> new ResourceNotFoundException("Car " + carId + " not found"));

        car.setCarName(carDetails.getCarName());
        car.setDoors(carDetails.getDoors());
        final Car updatedCar = carRepository.save(car);
        return ResponseEntity.ok(updatedCar);
    }

    // DELETE method for the Delete operation
    @DeleteMapping("/car/{id}")
    public Map<String, Boolean> deleteCar(@PathVariable(value = "id") Long carId) throws Exception {
        Car car = carRepository
                .findById(carId)
                .orElseThrow(() -> new ResourceNotFoundException("Car " + carId + " not found"));

        carRepository.delete(car);
        Map<String, Boolean> response = new HashMap<>();
        response.put("deleted", Boolean.TRUE);
        return response;
    }
}

At the top, we have the @RestController annotation to define our CarController class as the controller for our Spring Boot API. What follows is the @RequestMapping, where we specify the base path of our API URL as /api/v1.
This also includes the version. Versioning is good practice in an API to enhance backward compatibility. If the functionality changes and we already have other people consuming our APIs, we can create a new version and have them both running concurrently, to give consumers ample time to migrate to the new API.

Earlier, we learned about the Create, Read, Update, and Delete operations in an API and how they are mapped to HTTP methods. These methods are accommodated in the Spring framework as the PostMapping, GetMapping, PutMapping and DeleteMapping annotations, respectively. Each of these annotations helps us expose endpoints that only perform the CRUD operation specified. We can also have a single endpoint that handles various HTTP methods:

@RequestMapping(value="/cars", method = { RequestMethod.GET, RequestMethod.POST })

Now that we have implemented the functionality, let us run our tests:

The passing tests show us that we have implemented the functionality as desired when writing the tests, and that our API works. Let us interact with our API via Postman, which is a tool that helps us interact with APIs when developing or consuming them.

We start by fetching all the cars we have stored in our database:

At the start, we have no cars stored. Let us add our first car:

The response is the id and details of the car we have just added. If we add some more cars and fetch all the cars we have saved:

These are the cars we have created using our Spring Boot API. A quick check on the database returns the same list:

Swagger UI

We have built and tested our API using TDD, and now, to make our API better, we are going to document it using Swagger UI, which allows us to create an auto-generated interface for other users to interact with and learn about our API.
First, let us add the following dependencies in our pom.xml:

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.7.0</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.7.0</version>
</dependency>

Next, we will create a SwaggerConfig.java in the same folder as CarsApplication.java, which is the entry point to our API. The SwaggerConfig.java file also allows us to add some information about our API:

@Configuration
@EnableSwagger2
public class SwaggerConfig {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.cars"))
                .paths(PathSelectors.any())
                .build()
                .apiInfo(metadata());
    }

    /**
     * Adds metadata to Swagger
     *
     * @return
     */
    private ApiInfo metadata() {
        return new ApiInfoBuilder()
                .title("Cars API")
                .description("An API to store car details built using Spring Boot")
                .build();
    }
}

Now we annotate our endpoints so that they appear on the Swagger UI interface that will be generated. This is achieved as follows:

// Add this import in our controller file...
import io.swagger.annotations.ApiOperation;

// ...then annotate our HTTP Methods
@ApiOperation(value = "Fetches all cars in the database", response = Car.class)
@GetMapping("/...")  // Our endpoint

We have specified our response class as the Car class since it is the one that will be used to populate the details of our responses. We have done this because Swagger UI allows us to add information about the request payloads and response details. This will help provide more information about the payloads, such as the kind of values that our API requires and the kind of response that will be returned. We can also specify mandatory fields in the documentation. In our case, we will also be using the Car class to format and validate our request parameters.
Therefore, we annotate its "getters" as follows:

@ApiModelProperty(name = "id", value = "The id of the car", example = "1")
public long getId() {
    return id;
}

@ApiModelProperty(name = "carName", value = "The name of the car to be saved", example = "Bugatti", required = true)
public String getCarName() {
    return carName;
}

@ApiModelProperty(name = "doors", value = "The number of doors that the car has", example = "2", required = true)
public int getDoors() {
    return doors;
}

That's it! Our documentation is ready. When we run our API using mvn spring-boot:run and navigate to the Swagger UI endpoint, we can see our API's documentation: Swagger UI has documented all our endpoints and even provided functionality to interact with our API directly from the documentation. As can be seen on the lower right section of the screenshot, our example values have been pre-filled so that we can quickly test out the API without having to rewrite the values.

Conclusion

Java is a powerful language and we have harnessed its power to build an Application Programming Interface, or API, using the Spring Boot framework. We have been able to implement four of the HTTP Methods to handle the various Create, Read, Update and Delete operations on the details about our cars. Swagger UI has also enabled us to document our API in a simple yet verbose manner and have this documentation exposed as an endpoint in our service. Having noted the advantages of Test-Driven Development, we went ahead and wrote tests for our endpoints and made sure our functionality and tests are aligned. The source code for this project is available here on Github.
https://stackabuse.com/test-driven-development-for-spring-boot-apis/
Mark a (usually) empty struct as a base tag by inheriting from this.

#include <Tag.hpp>

A base tag may be the base class of a simple tag. In such a case, the base tag can be used to fetch the item corresponding to the simple tag (or a compute tag derived from that simple tag) from a DataBox. Base tags are empty structs and therefore do not contain information about the type of the object to which they refer. Base tags are designed so that retrieving items from the DataBox or setting argument tags in compute items can be done without any knowledge of the type of the item.

Base tags should be used rarely, only in cases where it is difficult to propagate the type information to the call site. Please consult a core developer before introducing a new base tag.

By convention, the name of a base tag should either be the name of the simple tag that derives from it, appended with Base, or, if the simple tag is templated with a type used to determine its type type alias, the base tag can have the same name as the simple tag template with an empty template parameter list. A base tag may optionally specify a static std::string name() method to override the default name produced by db::tag_name.
https://spectre-code.org/structdb_1_1BaseTag.html
Cerebral uses a single state tree to store all the state of your application. It is just a single object:

{}

That's it. You will normally store other objects, arrays, strings, booleans and numbers in it. Forcing you to think of your state in this simple form gives us benefits. To define the initial state of any application, all we need to do is add it to our Controller in controller.js:

import {Controller} from 'cerebral'

const controller = Controller({
  state: {
    title: 'Cerebral Tutorial'
  }
})

export default controller

Later you will learn about modules, which allow you to encapsulate state with other logic. If you want to see this in action, install the debugger and load this BIN on Webpackbin.
http://cerebraljs.com/docs/introduction/state.html
The tinacms package provides two possible user interfaces: the sidebar and the toolbar. The main difference between the two UIs is aesthetics. Both provide access to Screen Plugins and buttons for saving and resetting Forms. They also differ in how they expect the Form's content to be edited: Forms are rendered within the sidebar, while the toolbar is designed to work with Inline Editing. Also, widgets can be added to the toolbar as plugins, as in the case of the Branch Switcher provided by react-tinacms-github.

By default neither UI is enabled. You can enable one (or both) by setting their flags to true in the TinaCMS configuration:

new TinaCMS({
  enabled: true,
  sidebar: true,
  toolbar: true,
})

This will enable the UIs with their default configuration. If you need to configure either UI further, you can pass that object instead:

new TinaCMS({
  enabled: true,
  sidebar: {
    position: 'displace',
  },
})

Let's take a look at the configuration options for each UI.

interface SidebarOptions {
  position?: 'displace' | 'overlay'
  placeholder?: React.FC
  buttons?: {
    save: string
    reset: string
  }
}

A site configured to use Tina will display a blue edit button in the lower-left corner. Clicking this button will open the sidebar. Sidebar contents are contextual. For example, when using Tina with Markdown files, a conventional implementation will display a form for the current page's markdown file. In the event a page is composed of multiple files, it is possible to add multiple forms to the sidebar for that page's context. All forms available in the current context will then be displayed.

interface ToolbarOptions {
  buttons?: {
    save: string
    reset: string
  }
}

On its own, the toolbar will display the 'Save' and 'Reset' buttons, along with a form status indicator to show whether there are unsaved changes. Custom widgets can also be added to extend functionality for the particular workflow.
Note: It is now recommended to configure the 'Save' & 'Reset' button text on the form instead of in the UI options. Please note that if buttons are configured on the CMS through the sidebar or toolbar options (as in the examples below), those values will take precedence over custom button values passed to a form.

You'll want to pass this option to wherever the plugin is registered in the gatsby-config file.

gatsby-config.js

{
  resolve: "gatsby-plugin-tinacms",
  options: {
    enabled: process.env.NODE_ENV !== "production",
    sidebar: {
      position: 'displace',
      buttons: {
        save: "Commit",
        reset: "Reset",
      }
    },
  }
}

If you followed the implementation in our Next.js docs, you'll want to go to the _app.js file where the CMS is registered. Again, depending on your setup with Next + Tina, this config may look slightly different. Note this is also where you might specify the sidebar display options.

pages/_app.js

class MyApp extends App {
  constructor() {
    super()
    this.cms = new TinaCMS({
      toolbar: true,
    })
  }

  render() {
    const { Component, pageProps } = this.props
    return (
      <Tina cms={this.cms}>
        <Component {...pageProps} />
      </Tina>
    )
  }
}
https://tinacms.org/docs/ui
Python one-liners in the shell in the spirit of Perl and AWK

Python one-liners in the spirit of Perl and AWK. pyfil gives you the rep command. This is because when I initially posted it in the #python IRC channel, user [Tritium] (that ray of sunshine) said I had recreated the REP of the Python REPL (read evaluate print loop). That is more or less the case. rep reads python expressions at the command line, evaluates them and prints them to stdout. It might be interesting as a quick calculator or to test something, like the Python REPL, but it also has some special flags for iterating on stdin, which make it useful as a filter for shell one-liners or scripts (like Perl).

As a more modern touch, if the return value is a container type, python will attempt to serialize it as json before printing, so you can pipe output into other tools that deal with json, store it to a file for later use, or send it over http. This, combined with the ability to read json from stdin (with --json), makes pyfil a good translator between the web, which tends to speak json these days, and the posix environment, which tends to think about data in terms of lines in a file (frequently with multiple fields per line).

pyfil is in pypi (i.e. you can get it easily with pip, if you want).

pyfil ain't the first project to try something like this. Here are some other cracks at this problem: Don't worry. I've stolen some of their best ideas already, and I will go on stealing as long as it takes!

rep [-h] [-l] [-x] [-q] [-j] [-o] [-b PRE] [-e POST] [-s] [-F PATTERN]
    [-n STRING] [-R] [-S] [-H EXCEPTION_HANDLER]
    expression [expression ...]

rep automatically imports any modules used in expressions. If you'd like to create any other objects to use in the execution environment, create ~/.config/pyfil-env.py and put things in it.

default objects: These are empty containers you might wish to add items to during iteration, for example. The execution environment also has a special object for stdin, creatively named stdin.
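The automatic imports mentioned above can be approximated with a retry loop that catches NameError and imports the missing name. This is just a sketch of the idea, not pyfil's actual implementation; the function name is made up for illustration:

```python
import importlib

def eval_with_autoimport(expression, namespace=None):
    """Evaluate an expression, importing any module it names on demand."""
    namespace = dict(namespace or {})
    while True:
        try:
            return eval(expression, namespace)
        except NameError as e:
            # e.g. "name 'math' is not defined" -> try importing 'math'
            missing = str(e).split("'")[1]
            namespace[missing] = importlib.import_module(missing)

print(eval_with_autoimport('math.pi'))
```

If the missing name is not an importable module, import_module raises ModuleNotFoundError and the loop terminates with that error instead of retrying forever.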
This differs from sys.stdin in that it rstrips (aka chomps) all the lines when you iterate over it, and it has a property, stdin.l, which returns a list of the (rstripped) lines. pyfil is quite bullish about using rstrip because python's print function will supply an additional newline, and if you just want the value of the text in the line, you almost never want the newline character. If you do want the newlines, access sys.stdin directly. stdin inherits the rest of its methods from sys.stdin, so you can use stdin.read() to get a string of all lines, if that's what you need.

Certain other flags, --loop (or anything that implies --loop), --json, --split or --field_sep, may create additional objects. Check the flag descriptions for further details.

By default, pyfil prints the return value of expressions. Different types of objects use different printing conventions. Iterators will also try to serialize each returned object as json if they are not strings. json objects will be indented if only one is being printed. If --loop is set or a number of objects is being serialized from an iterator, it will be one object per line. --force-oneline-json extends this policy to printing single json objects as well.
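These printing conventions amount to roughly the following dispatch. This is a simplified sketch in plain Python, not pyfil's real logic, which also has to account for flags like --force-oneline-json:

```python
import json
from collections.abc import Iterator

def render(value, looping=False):
    """Approximate what pyfil would print for a given return value."""
    if value is None:
        return None  # None gets skipped
    if isinstance(value, Iterator):
        # iterators: render each item by the same rules, one per line
        parts = (render(item, looping=True) for item in value)
        return '\n'.join(p for p in parts if p is not None)
    if isinstance(value, (list, tuple, dict)):
        try:
            # containers serialize as json: indented when single, compact when looping
            return json.dumps(value, indent=None if looping else 4)
        except TypeError:
            return str(value)  # ...unless they can't
    return str(value)

print(render([1, 2]))
```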
examples:

$ # None gets skipped
$ rep None
$ # strings and numbers just print
$ rep sys.platform
linux
$ rep math.pi
3.141592653589793
$ # objects try to print as json
$ rep sys.path
[
    "/home/ninjaaron/.local/bin",
    "/usr/lib/python35.zip",
    "/usr/lib/python3.5",
    "/usr/lib/python3.5/plat-linux",
    "/usr/lib/python3.5/lib-dynload",
    "/home/ninjaaron/.local/lib/python3.5/site-packages",
    "/usr/lib/python3.5/site-packages"
]
$ rep '{i: n for n, i in enumerate(sys.path)}'
{
    "/usr/lib/python3.5/plat-linux": 3,
    "/usr/lib/python35.zip": 1,
    "/usr/lib/python3.5": 2,
    "/usr/lib/python3.5/lib-dynload": 4,
    "/usr/lib/python3.5/site-packages": 6,
    "/home/ninjaaron/.local/lib/python3.5/site-packages": 5,
    "/home/ninjaaron/.local/bin": 0
}
$ # unless they can't
$ rep '[list, print, re]'
[<class 'list'>, <built-in function print>, <module 're' from '/usr/lib/python3.5/re.py'>]
$ # iterators print each item on a new line, applying the same conventions
$ rep 'iter(sys.path)'
...
$ rep '(i.split('/')[1:] for i in sys.path)'
...

Most JSON is also valid Python, but be aware that you may occasionally see null instead of None, along with true and false instead of True and False, and your tuples will look like lists. I guess that's a risk I'm willing to take. (The rationale for this is that pyfil, despite what the name of the rep command may indicate, is more about composability in the shell than printing valid Python literals. JSON is the de facto standard for serialization, or should be, if only people would stop using XML for that…)

Because these defaults use eval() internally to get the value of expressions, statements may not be used. exec() supports statements, but it does not return the value of expressions when they are evaluated. When the -x/--exec flag is used, automatic printing is suppressed, and expressions are evaluated with exec, so statements, such as assignments, may be used. Values may still be printed explicitly. --quiet suppresses automatic printing, but eval is still used.
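The eval/exec distinction behind the -x flag is plain Python behavior: eval returns the value of an expression and rejects statements, while exec accepts statements but always returns None.

```python
# eval() returns the value of an expression
result = eval('2 + 3')

# exec() accepts statements (like assignment) but always returns None;
# its side effects land in the namespace you hand it
namespace = {}
returned = exec('x = 2 + 3', namespace)

print(result, returned, namespace['x'])

# a statement is a syntax error under eval()
try:
    eval('x = 2 + 3')
    statement_ok = True
except SyntaxError:
    statement_ok = False
```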
The --post option is immune from --quiet and --exec. It will always be evaluated with eval(), and it will always try to print. The only difference is that if --quiet or --exec was used, json will be printed with indentation unless --force-oneline-json is used.

rep doesn't have any parameters for input and output files. Instead, use redirection.

rep -s 'i.upper()' > output.txt < input.txt

rep can take as many expressions as desired as arguments. When used with --exec, this works pretty much as expected, and assignment must be done manually. Without --exec, the return value of each expression is assigned to the variable x, which can be used in the next expression. The final value of x is what is ultimately printed, not any intermediate values.

$ rep 'reversed("abcd")' '(i.upper() for i in x)'
D
C
B
A

one can do simple loops with a generator expression. (note that any expression that evaluates to an iterator will print each item on a new line unless the --join option is specified.)

$ ls / | rep '(i.upper() for i in stdin)'
BIN@
BOOT/
DEV/
ETC/
HOME/
...

However, the -l/--loop flag makes rep loop over stdin in a context like this:

for n, i in enumerate(stdin):
    expressions

Therefore, the above loop can also be written thusly:

$ ls / | rep -l 'i.upper()'

--pre and --post (-b and -e) options can be used to specify actions to run before or after the loop. Note that the --pre option is run with exec instead of eval, and therefore output is never printed, and statements may be used. This is for things like initializing container types. --post is automatically printed and statements are not allowed (unless --exec is used). --loop is implied if --post is used. --pre can be used without --loop to import additional modules (or whatever else you may want to do with a statement).

Using -s/--split or -F/--field-sep for doing awk things also implies --loop. The resulting list is named f in the execution environment, in quasi-Perl fashion.
(oh, and that list is actually a subclass of collections.UserList that returns an empty string if the index doesn't exist, so it acts more like awk with empty fields, rather than throwing an error and interrupting iteration).

rep can parse json objects from stdin with the -j/--json flag. They are passed into the environment as the j object. Combining with the --loop flag will treat stdin as one json object per line.

It's probably obvious that the most powerful way to format strings is with Python's str.format method and the -F or -s options.

$ ls -l /|rep -s '"{0}\t{2}\t{8}".format(*f)'
Error: tuple index out of range
...

However, you will note that using string.format(*f) produces an error and does not print anything to stdout (the error message is sent to stderr; see error handling for more options) for lines without enough fields, which may not be the desired behavior when dealing with lines containing arbitrary numbers of fields. For simpler cases, you may wish to use the -n/--join option, which will join any iterables with the specified string before printing, and, in the case of the f list, will replace any non-existent fields with an empty string.

$ ls -l /|rep -sn '\t' 'f[0], f[2], f[8]'
total
...

Here, total, the first line of ls -l /, is the only line that doesn't provide values for all available fields.

I realize that it's much better to do most of these things with the original utility. This is just to give some ideas of how to use rep.

replace wc -l:

$ ls / | rep 'len(stdin.l)'
20

replace fgrep:

$ ls / | rep '(i for i in stdin if "v" in i)'
$ ls / | rep -l 'i if "v" in i else None'

replace grep:

$ ls / | rep 'filter(lambda x: re.search("^m", x), stdin)'
$ ls / | rep -lS 're.search("^m", i).string'
$ # using the -S option to suppress a ton of error messages

replace sed 's/...':

$ ls / | rep -l 're.sub("^([^aeiou][aeiou][^aeiou]\W)", lambda m: m.group(0).upper(), i)'
BIN@
boot/
data/
DEV/
etc/
...
This example illustrates that, while you might normally prefer sed for replacement tasks, the ability to define a replacement function with re.sub does offer some interesting possibilities. Indeed, someone familiar with coreutils should never prefer to do something with rep that they are already comfortable doing the traditional way (coreutils are heavily optimized). Python is interesting for this use-case because it offers great logic, anonymous functions and all kinds of other goodies that only a full-fledged, modern programming language can offer. Use coreutils for the jobs they were designed to excel in. Use rep to do whatever they can't… and seriously, how will coreutils do this?:

$ wget -qO- | rep -j 'j["urls"][0]["filename"]'
pyfil-0.5-py3-none-any.whl

$ ls -l | rep -qSs \
    "d.update({f[8]: {'permissions': f[0], 'user': f[2], 'group': f[3], 'size': int(f[4]), 'timestamp': ' '.join(f[5:8])}})" \
    --post 'd'
{
    "README.rst": {
        "group": "users",
        "user": "ninjaaron",
        "permissions": "-rw-r--r--",
        "timestamp": "Sep 6 20:55",
        "size": 18498
    },
    "pyfil/": {
        "group": "users",
        "user": "ninjaaron",
        "permissions": "drwxr-xr-x",
        "timestamp": "Sep 6 20:20",
        "size": 16
    },
    "setup.py": {
        "group": "users",
        "user": "ninjaaron",
        "permissions": "-rw-r--r--",
        "timestamp": "Sep 6 20:30",
        "size": 705
    },
    "LICENSE": {
        "group": "users",
        "user": "ninjaaron",
        "permissions": "-rw-r--r--",
        "timestamp": "Sep 3 13:32",
        "size": 1306
    }
}

Other things which might be difficult with coreutils:

$ ls / | rep -n ' ' 'reversed(stdin.l)'
var/ usr/ tmp/ sys/ srv/ sbin@ run/ root/ proc/ opt/ ...
$ # ^^ also, `ls /|rep -n ' ' 'stdin.l[::-1]'`

If pyfil encounters an exception while evaluating user input, the default is to print the error message to stderr and continue (if looping over stdin), as we saw in the section on formatting output. However, errors can also be silenced entirely with the -S/--silence-errors option. In the below example, the first line produces an error, but we don't hear about it.
$ ls -l /|rep -sS '"{0}\t{2}\t{8}".format(*f)'
...

Alternatively, errors may be raised when encountered, which will stop execution and give a (fairly useless, in this case) traceback. This is done with the -R/--raise-errors flag.

$ ls -l /|rep -sR '"{0}\t{2}\t{8}".format(*f)'
Traceback (most recent call last):
  File "/home/ninjaaron/src/py/pyfil/venv/bin/rep", line 9, in <module>
    load_entry_point('pyfil', 'console_scripts', 'rep')()
  File "/home/ninjaaron/src/py/pyfil/pyfil/pyfil.py", line 242, in main
    run(expressions, a, namespace)
  File "/home/ninjaaron/src/py/pyfil/pyfil/pyfil.py", line 164, in run
    handle_errors(e, args)
  File "/home/ninjaaron/src/py/pyfil/pyfil/pyfil.py", line 134, in handle_errors
    raise exception
  File "/home/ninjaaron/src/py/pyfil/pyfil/pyfil.py", line 162, in run
    value = func(expr, namespace)
  File "<string>", line 1, in <module>
IndexError: tuple index out of range

In addition to these two handlers, it is possible to specify a rudimentary custom handler with the -H/--exception-handler flag. The syntax is -H 'Exception: expression', where Exception can be any builtin exception class (including Exception, to catch all errors), and expression is the alternative expression to evaluate (and print, if not --quiet).

$ ls -l /|rep -sH 'IndexError: i' '"{0}\t{2}\t{8}".format(*f)'
total 32
...

Here, we've chosen to print the line without any additional formatting when the IndexError occurs. If other errors are encountered, it will fall back to other handlers (-S, -R, or the default).

For more sophisticated error handling… Write a real Python script, where you can handle errors to your heart's content. Also note that this case is possible to handle with a test instead of an exception handler, because f is a special list that will return an empty string instead of throwing an index error if the index is out of range:

ls -l / | rep -s '"{0}\t{2}\t{8}".format(*f) if f[2] else i'

Easy-peasy.
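The forgiving f list that makes this last test possible can be sketched as a small UserList subclass. This is an approximation of the behavior described earlier, not pyfil's exact code:

```python
from collections import UserList

class FieldList(UserList):
    """awk-style field list: out-of-range indexes yield '' instead of IndexError."""
    def __getitem__(self, index):
        try:
            return super().__getitem__(index)
        except IndexError:
            return ''

f = FieldList('total 32'.split())
print(f[0], f[1], f[8])  # the ninth field is simply empty
```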
https://pypi.org/project/pyfil/
no_loop()

Stops py5 from continuously executing the code within draw().

Examples

def setup():
    py5.size(200, 200)
    py5.no_loop()

def draw():
    py5.line(10, 10, 190, 190)

x = 0.0

def setup():
    py5.size(200, 200)

def draw():
    global x
    py5.background(204)
    x = x + 0.1
    if x > py5.width:
        x = 0
    py5.line(x, 0, x, py5.height)

def mouse_pressed():
    py5.no_loop()

def mouse_released():
    py5.loop()

some_mode = False

def setup():
    py5.no_loop()

def draw():
    if some_mode:
        # do something
        pass

def mouse_pressed():
    global some_mode
    some_mode = True
    py5.redraw()  # or call loop()

Description

Stops py5 from continuously executing the code within draw(). If loop() is called, the code in draw() begins to run continuously again. If using no_loop() in setup(), it should be the last line inside the block.

When no_loop() is used, it's not possible to manipulate or access the screen inside event handling functions such as mouse_pressed() or key_pressed(). Instead, use those functions to call redraw() or loop(), which will run draw(), which can update the screen properly. This means that when no_loop() has been called, no drawing can happen, and functions like save_frame() or load_pixels() may not be used.

Note that if the Sketch is resized, redraw() will be called to update the Sketch, even after no_loop() has been specified. Otherwise, the Sketch would enter an odd state until loop() was called.

Underlying Processing method: noLoop

Signatures

no_loop() -> None

Updated on September 01, 2022 16:36:02pm UTC
https://py5.ixora.io/reference/sketch_no_loop.html
From: Jeremy Graham Siek (jsiek_at_[hidden]) Date: 2004-07-05 15:43:59 Hi Toon, On Jul 5, 2004, at 10:54 AM, Toon Knapen wrote: > I think Dave's evaluation of uBlas should be closely looked at and we > should address the issues mentioned. So I would like to comment so > that > we can further these isues and really tackle these problems. > > > David Abrahams wrote: > > > Jeremy and I have just completed a re-evaluation of uBlas based on > > what's in uBlas' own CVS repository, having not discovered that > until > > recently (you should have that info on uBlas' Boost page!) We have > > some major complaints with uBlas' design. The list is long, and the > > issues run deep enough that we don't believe that uBlas is a > suitable > > foundation for the work we want to do. > I however regret that you never provided your feedback before. I would have like to, but was swamped with other work when uBLAS went through review (I was probably writing the BGL book). > > > > Here is a partial list of things we take issue with: > > > >. > The problem regarding documentation regularly comes up on the ml. Some > people started with an alternative documentation for uBlas. Maybe it's > time to merge and rework all uBlas documentation. And indeed > documentation is crucial for developing generic concepts. > > > > > * Redundant specification of element type in matrix/vector storage. > Could you elaborate ? > > > > > * size1 and size2 should be named num_rows and num_columns or > > something memnonic > This is just a matter of documentation. AFAIK there is no accepted > concept that demands 'num_rows' and 'num_columns'. The '1' and '2' > refer > to the first and second index which is also a nice convention IMO. No, there are uBLAS concepts that refer to size1 and size2. For example, the Matrix Expression concept. > > > > * iterator1 and iterator2 should be named column_iterator and > > row_iterator or something memnonic. > same as above. 
> > > > > * prod should be named operator*; this is a linear algebra library > > after all. > This also came up a few times and we never agreed how users could > clearly distinguish the product and the element-product so there was > decided to use more explicit function-names. element-wise products do not play the same central role that products do in linear algebra. Thus, operator* should be used for product, and some other name, like element_prod for element-wise product. > But the most important thing about operator-overloading is being able > to > use generic algorithms that exploit the given operator. Not having > operator* AFAIK never prohibited the reuse of generic LA algorithms. > > > > > * begin() and end() should never violate O(1) complexity > expectations. > That would indeed be ideal but why exactly do you insist on O(1) > complexity. Because the C++ Standard requirements for Container say constant time, it's what people expect, and because it follows the STL design style (e.g., std::list has no operator[]). > > > > * insert(i,x) and erase(i) names used inconsistently with standard > library. > agreed. Again this could be solved by generating proper documentation > that is also inline with the documentation of the standard library. This is an interface change, so code would need to change too. > > > > * Matrix/Vector concept/class interfaces are way too "fat" and need > to > > be minimized (e.g. rbegin/rend *member* functions should be > > eliminated). > agreed. > > > > > * The slice interface is wrong; stride should come last and be > > optional; 2nd argument should be end and not size; then a separate > > range interface could be eliminated. > I'm not convinced slice and range could be united. A range is a > special > case that can more easily optimised compared to the slice > implementation. > > > > > * No support for unorderd sparse formats -- it can't be made to fit > > into the uBlas framework. > I'm not qualified to comment here. 
> > > > > Implementation > > -------------- > > > > * Expressions that require temporaries are not supported by uBLAS > > under release mode. They are supported under debug mode. For > > example, the following program compiles under debug mode, but not > > under release mode. > > > > #include <boost/numeric/ublas/matrix.hpp> > > #include <boost/numeric/ublas/io.hpp> > > > > int main () { > > using namespace boost::numeric::ublas; > > matrix<double> m (3, 3); > > vector<double> v (3); > > for (unsigned i = 0; i < std::min (m.size1 (), v.size ()); ++ > i) { > > for (unsigned j = 0; j < m.size2 (); ++ j) > > m (i, j) = 3 * i + j; > > v (i) = i; > > } > >. > Agreed. The semantics in debug and release mode should be identical. > > > > > * Should use iterator_adaptor. There is a ton of boilerplate > iterator > > code in the uBLAS that needs to be deleted. > uBlas originates from before the new iterator_adaptor lib. But indeed > we > might need to look at using the iterator_adaptor library. > > > > > * Should use enable_if instead of CRTP to implement operators. uBLAS > > avoids the ambiguity problem by only using operator* for > > vector-scalar, matrix-scalar ops, but that's only a partial > > solution. Its expressions can't interact with objects from other > > libraries (e.g. multi-array) because they require the intrusive > CRTP > > base class. > True but operator* between uBlas classes and multi_arrays are used > very > frequently. > > > > > Testing > > ------- > > * There should be a readme describing the organization of the tests. > Indeed but this is a problem that can be solved (but must be dealed > with) > > > > * Tests should not print stuff that must be manually inspected for > > correctness. > Does it really bother you ? Yes, very much. It should bother you too. > > > > * Test programs should instead either complete successfully > > (with exit code 0) or not (and print why it failed). > agreed. 
> > > > > Documentation > > ------------- > > > > * In really bad shape. Redundant boilerplate tables make the head > > spin rather than providing useful information. > > > > * Needs to be user-centered, not implementation centered. > > > > * Need a good set of tutorials. > > > all agreed. Man, we really need to improve our documentation! > _______________________________________________
https://lists.boost.org/Archives/boost/2004/07/67345.php
Prerequisites: MySQL-Connector, XAMPP Installation

A connector is employed when we have to use MySQL with other programming languages. The work of mysql-connector is to provide access to the MySQL driver to the required language. Thus, it generates a connection between the programming language and the MySQL Server.

Requirements

- XAMPP: Database / Server to store and display data.
- MySQL-Connector module: For connecting the database with the python file. Use the below command to install this module.

pip install mysql-connector

- Wheel module: A command line tool for working with wheel files. Use the below command to install this module.

pip install wheel

Step-by-step Approach:

Procedure to create a table in the database:

- Start your XAMPP web server.
- Type in your browser.
- Go to Database, create a database with a name, and click on Create.
- Create a table in the GEEK database and click on Go.
- Define column names and click on Save.
- Your table is created.
- Insert data in your database by clicking on the SQL tab, then select INSERT.
- The data in your table is:
- Now you can perform operations, i.e. display data on your web page using python.

Procedure for writing the Python program:

- Import the mysql connector module in your Python code.

import mysql.connector

- Create a connection object.

conn_object = mysql.connector.connect(hostname, username, password, database_name)

Here, you will need to pass the server name, username, password, and database name.

- Create a cursor object.

cur_object = conn_object.cursor()

- Perform queries on the database.

query = DDL/DML etc.
cur_object.execute(query)

- Close the cursor object.

cur_object.close()

- Close the connection object.

conn_object.close()

Below is the complete Python program based on the above approach:

Python3

Output:

Note: XAMPP Apache and MySQL should be kept on during the whole process.
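mysql-connector follows the Python DB-API, so the connect/cursor/execute/close sequence above is the same pattern used by any DB-API driver. Here is the flow sketched with the standard-library sqlite3 module, used only so the example runs without a MySQL server; for the XAMPP setup you would swap in mysql.connector.connect(host=..., user=..., password=..., database=...) with your own credentials:

```python
import sqlite3

# 1. Create a connection object (mysql.connector.connect(...) in the MySQL case)
conn = sqlite3.connect(':memory:')

# 2. Create a cursor object
cur = conn.cursor()

# 3. Perform queries on the database
cur.execute('CREATE TABLE geek (id INTEGER PRIMARY KEY, name TEXT)')
cur.execute("INSERT INTO geek (name) VALUES ('Alice'), ('Bob')")
cur.execute('SELECT id, name FROM geek ORDER BY id')
rows = cur.fetchall()
print(rows)

# 4. Close the cursor and connection objects
cur.close()
conn.close()
```

The table and column names here are hypothetical stand-ins for whatever you created in phpMyAdmin.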
https://www.geeksforgeeks.org/extract-data-from-database-using-mysql-connector-and-xampp-in-python/?ref=rp
# Checking rdesktop and xrdp with PVS-Studio

![Image 3](https://habrastorage.org/r/w1560/getpro/habr/post_images/0c3/022/209/0c302220992a72800d96b21b634c13f0.png)

This is the second post in our series of articles about the results of checking open-source software working with the RDP protocol. Today we are going to take a look at the rdesktop client and xrdp server. The analysis was performed by [PVS-Studio](https://www.viva64.com/en/pvs-studio/). This is a static analyzer for code written in C, C++, C#, and Java, and it runs on Windows, Linux, and macOS.

I will be discussing only those bugs that looked most interesting to me. On the other hand, since the projects are pretty small, there aren't many bugs in them anyway :).

**Note**. The previous article about the check of FreeRDP is available [here](https://habr.com/en/company/pvs-studio/blog/444246/).

rdesktop
--------

[rdesktop](https://www.rdesktop.org/) is a free RDP client for UNIX-based systems. It can also run on Windows if built under Cygwin. rdesktop is released under GPLv3.

This is a very popular client. It is used as a default client on ReactOS, and you can also find third-party graphical front-ends to go with it. The project is pretty old, though: it was released for the first time on April 4, 2001, and is 17 years old as of this writing.

As I already said, the project is very small — about 30 KLOC, which is a bit strange considering its age. Compare that with FreeRDP with its 320 KLOC. Here's Cloc's output:

![Image 1](https://habrastorage.org/r/w1560/getpro/habr/post_images/4a0/8e4/19f/4a08e419fa943d0792b40ea970d7d800.png)

### Unreachable code

[V779](https://www.viva64.com/en/w/v779/) Unreachable code detected. It is possible that an error is present. rdesktop.c 1502

```
int main(int argc, char *argv[])
{
  ....
  return handle_disconnect_reason(deactivated, ext_disc_reason);

  if (g_redirect_username)
    xfree(g_redirect_username);

  xfree(g_username);
}
```

The first error is found immediately in the *main* function: the code following the *return* statement was meant to free the memory allocated earlier. But this defect isn't dangerous because all previously allocated memory will be freed by the operating system once the program terminates.

### No error handling

[V557](https://www.viva64.com/en/w/v557/) Array underrun is possible. The value of 'n' index could reach -1. rdesktop.c 1872

```
RD_BOOL
subprocess(char *const argv[], str_handle_lines_t linehandler, void *data)
{
  int n = 1;
  char output[256];
  ....
  while (n > 0)
  {
    n = read(fd[0], output, 255);
    output[n] = '\0';  // <=
    str_handle_lines(output, &rest, linehandler, data);
  }
  ....
}
```

The file contents are read into the buffer until EOF is reached. At the same time, this code lacks an error handling mechanism: if something goes wrong, *read* will return -1, and the write to *output[n]* will land before the start of the *output* array.

### Using EOF in char

[V739](https://www.viva64.com/en/w/v739/) EOF should not be compared with a value of the 'char' type. The '(c = fgetc(fp))' should be of the 'int' type. ctrl.c 500

```
int
ctrl_send_command(const char *cmd, const char *arg)
{
  char result[CTRL_RESULT_SIZE], c, *escaped;
  ....
  while ((c = fgetc(fp)) != EOF && index < CTRL_RESULT_SIZE && c != '\n')
  {
    result[index] = c;
    index++;
  }
  ....
}
```

This code handles *EOF* incorrectly: if *fgetc* returns a character whose code is 0xFF, it will be interpreted as the end of file (*EOF*). *EOF* is a constant typically defined as -1. For example, in the CP1251 encoding, the last letter of the Russian alphabet is encoded as 0xFF, which corresponds to the number -1 in type *char*. It means that the 0xFF character, just like *EOF* (-1), will be interpreted as the end of file.
To avoid errors like that, the result returned by the *fgetc* function should be stored in a variable of type *int*.

### Typos

**Snippet 1**

[V547](https://www.viva64.com/en/w/v547/) Expression 'write\_time' is always false. disk.c 805

```
RD_NTSTATUS
disk_set_information(....)
{
  time_t write_time, change_time, access_time, mod_time;
  ....
  if (write_time || change_time)
    mod_time = MIN(write_time, change_time);
  else
    mod_time = write_time ? write_time : change_time;  // <=
  ....
}
```

The author of this code must have accidentally used the *||* operator instead of *&&* in the condition. Let's see what values the variables *write\_time* and *change\_time* can have:

* Both variables have 0. In this case, execution moves on to the *else* branch: the *mod\_time* variable will always be evaluated to 0 no matter what the next condition is.
* One of the variables has 0. In this case, *mod\_time* will be assigned 0 (given that the other variable has a non-negative value) since *MIN* will choose the least of the two.
* Neither variable has 0: the minimum value is chosen.

Changing that line to *write\_time && change\_time* will fix the behavior:

* Only one or neither variable has 0: the non-zero value is chosen.
* Neither variable has 0: the minimum value is chosen.

**Snippet 2**

[V547](https://www.viva64.com/en/w/v547/) Expression is always true. Probably the '&&' operator should be used here. disk.c 1419

```
static RD_NTSTATUS
disk_device_control(RD_NTHANDLE handle, uint32 request, STREAM in, STREAM out)
{
  ....
  if (((request >> 16) != 20) || ((request >> 16) != 9))
    return RD_STATUS_INVALID_PARAMETER;
  ....
}
```

Again, it looks like the problem of using the wrong operator — either *||* instead of *&&* or *==* instead of *!=* because the variable can't store the values 20 and 9 at the same time.

### Unlimited string copying

[V512](https://www.viva64.com/en/w/v512/) A call of the 'sprintf' function will lead to overflow of the buffer 'fullpath'.
disk.c 1257

```
RD_NTSTATUS
disk_query_directory(....)
{
  ....
  char *dirname, fullpath[PATH_MAX];
  ....
  /* Get information for directory entry */
  sprintf(fullpath, "%s/%s", dirname, pdirent->d_name);
  ....
}
```

If you could follow the function to the end, you'd see that the code is OK, but it may get broken one day: just one careless change will end up with a buffer overflow since *sprintf* is not limited in any way, so concatenating the paths could take execution beyond the array bounds. We recommend replacing this call with *snprintf(fullpath, PATH\_MAX, ....)*.

### Redundant condition

[V560](https://www.viva64.com/en/w/v560/) A part of conditional expression is always true: add > 0. scard.c 507

```
static void
inRepos(STREAM in, unsigned int read)
{
  SERVER_DWORD add = 4 - read % 4;
  if (add < 4 && add > 0)
  {
    ....
  }
}
```

The *add > 0* check doesn't make any difference as the variable will always be greater than zero because *read % 4* returns the remainder, which will never be equal to 4.

xrdp
----

[xrdp](http://www.xrdp.org/) is an open-source RDP server. The project consists of two parts:

* xrdp — the protocol implementation. It is released under Apache 2.0.
* xorgxrdp — a collection of Xorg drivers to be used with xrdp. It is released under X11 (just like MIT, but use in advertising is prohibited).

The development is based on rdesktop and FreeRDP. Originally, in order to be able to work with graphics, you would have to use a separate VNC server or a special X11 server with RDP support, X11rdp, but those became unnecessary with the release of xorgxrdp. We won't be talking about xorgxrdp in this article.

Just like the previous project, xrdp is a tiny one, consisting of about 80 KLOC.

![Image 2](https://habrastorage.org/r/w1560/getpro/habr/post_images/904/e44/6bd/904e446bdb2edc2f39c5db3c928a61ea.png)

### More typos

[V525](https://www.viva64.com/en/w/v525/) The code contains the collection of similar blocks. Check items 'r', 'g', 'r' in lines 87, 88, 89.
rfxencode\_rgb\_to\_yuv.c 87

```
static int
rfx_encode_format_rgb(const char *rgb_data, int width, int height,
                      int stride_bytes, int pixel_format,
                      uint8 *r_buf, uint8 *g_buf, uint8 *b_buf)
{
  ....
  switch (pixel_format)
  {
    case RFX_FORMAT_BGRA:
      ....
      while (x < 64)
      {
        *lr_buf++ = r;
        *lg_buf++ = g;
        *lb_buf++ = r;  // <=
        x++;
      }
      ....
  }
  ....
}
```

This code comes from the librfxcodec library, which implements the jpeg2000 codec to work with RemoteFX. The "red" color channel is read twice, while the "blue" channel is not read at all. Defects like this typically result from the use of copy-paste.

The same bug was found in the similar function *rfx\_encode\_format\_argb*:

[V525](https://www.viva64.com/en/w/v525/) The code contains the collection of similar blocks. Check items 'a', 'r', 'g', 'r' in lines 260, 261, 262, 263. rfxencode\_rgb\_to\_yuv.c 260

```
while (x < 64)
{
  *la_buf++ = a;
  *lr_buf++ = r;
  *lg_buf++ = g;
  *lb_buf++ = r;
  x++;
}
```

### Array declaration

[V557](https://www.viva64.com/en/w/v557/) Array overrun is possible. The value of 'i — 8' index could reach 129. genkeymap.c 142

```
// evdev-map.c
int xfree86_to_evdev[137-8+1] = {
  ....
};

// genkeymap.c
extern int xfree86_to_evdev[137-8];

int main(int argc, char **argv)
{
  ....
  for (i = 8; i <= 137; i++) /* Keycodes */
  {
    if (is_evdev)
      e.keycode = xfree86_to_evdev[i-8];
    ....
  }
  ....
}
```

In the genkeymap.c file, the array is declared 1 element shorter than implied by the implementation. No bug will occur, though, because the evdev-map.c file stores the correct size, so there'll be no array overrun, which makes it a minor defect rather than a true error.

### Incorrect comparison

[V560](https://www.viva64.com/en/w/v560/) A part of conditional expression is always false: (cap\_len < 0). xrdp\_caps.c 616

```
// common/parse.h
#if defined(B_ENDIAN) || defined(NEED_ALIGN)
#define in_uint16_le(s, v) do \
....
#else
#define in_uint16_le(s, v) do \
{ \
  (v) = *((unsigned short*)((s)->p)); \
  (s)->p += 2; \
} while (0)
#endif

int
xrdp_caps_process_confirm_active(struct xrdp_rdp *self, struct stream *s)
{
  int cap_len;
  ....
  in_uint16_le(s, cap_len);
  ....
  if ((cap_len < 0) || (cap_len > 1024 * 1024))
  {
    ....
  }
  ....
}
```

The value of a variable of type *unsigned short* is read into a variable of type *int* and then checked for being negative, which is not necessary because a value read from an unsigned type into a larger type can never become negative.

### Redundant checks

[V560](https://www.viva64.com/en/w/v560/) A part of conditional expression is always true: (bpp != 16). libxrdp.c 704

```
int EXPORT_CC
libxrdp_send_pointer(struct xrdp_session *session, int cache_idx,
                     char *data, char *mask, int x, int y, int bpp)
{
  ....
  if ((bpp == 15) && (bpp != 16) && (bpp != 24) && (bpp != 32))
  {
    g_writeln("libxrdp_send_pointer: error");
    return 1;
  }
  ....
}
```

The not-equal checks aren't necessary because the first check does the job. The programmer was probably going to use the *||* operator to filter off incorrect arguments.

Conclusion
----------

Today's check didn't reveal any critical bugs, but it did reveal a bunch of minor defects. That said, these projects, tiny as they are, are still used in many systems and, therefore, need some polishing. A small project shouldn't necessarily have tons of bugs in it, so testing the analyzer only on small projects isn't enough to reliably evaluate its effectiveness. This subject is discussed in more detail in the article "[Feelings confirmed by numbers](https://www.viva64.com/en/b/0158/)".

The demo version of PVS-Studio is available on our [website](https://www.viva64.com/en/pvs-studio-download/).
https://habr.com/ru/post/447878/
Many functions accept pointers as arguments. If the function dereferences an invalid pointer (as in EXP34-C. Do not dereference null pointers) or reads or writes to a pointer that does not refer to an object, the results are undefined. Typically, the program will terminate abnormally when an invalid pointer is dereferenced, but it is possible for an invalid pointer to be dereferenced and its memory changed without abnormal termination [Jack 2007]. Such programs can be difficult to debug because of the difficulty in determining if a pointer is valid.

One way to eliminate invalid pointers is to define a function that accepts a pointer argument and indicates whether or not the pointer is valid for some definition of valid. For example, the following function declares any pointer to be valid except NULL:

int valid(void *ptr) {
  return (ptr != NULL);
}

Some platforms have platform-specific pointer validation tools. The following code relies on the _etext address, defined by the loader as the first address following the program text on many platforms, including AIX, Linux, QNX, IRIX, and Solaris. It is not POSIX-compliant, nor is it available on Windows.

#include <stdio.h>
#include <stdlib.h>

int valid(void *p) {
  extern char _etext;
  return (p != NULL) && ((char*) p > &_etext);
}

int global;

int main(void) {
  int local;
  printf("pointer to local var valid? %d\n", valid(&local));
  printf("pointer to static var valid? %d\n", valid(&global));
  printf("pointer to function valid? %d\n", valid((void *)main));

  int *p = (int *) malloc(sizeof(int));
  printf("pointer to heap valid? %d\n", valid(p));
  printf("pointer to end of allocated heap valid? %d\n", valid(++p));
  free(--p);
  printf("pointer to freed heap valid? %d\n", valid(p));

  printf("null pointer valid? %d\n", valid(NULL));
  return 0;
}

On a Linux platform, this program produces the following output:

pointer to local var valid? 1
pointer to static var valid? 1
pointer to function valid? 0
pointer to heap valid? 1
pointer to end of allocated heap valid? 1
pointer to freed heap valid? 1
null pointer valid? 0

The valid() function does not guarantee validity; it only identifies null pointers and pointers to functions as invalid. However, it can be used to catch a substantial number of problems that might otherwise go undetected.

Noncompliant Code Example

In this noncompliant code example, the incr() function increments the value referenced by its argument. It also ensures that its argument is not a null pointer. But the pointer could still be invalid, causing the function to corrupt memory or terminate abnormally.

void incr(int *intptr) {
  if (intptr == NULL) {
    /* Handle error */
  }
  (*intptr)++;
}

Compliant Solution

This incr() function can be improved by using the valid() function. The resulting implementation is less likely to dereference an invalid pointer or write to memory that is outside the bounds of a valid object.

void incr(int *intptr) {
  if (!valid(intptr)) {
    /* Handle error */
  }
  (*intptr)++;
}

The valid() function can be implementation dependent and perform additional, platform-dependent checks when possible. In the worst case, the valid() function may only perform the same null-pointer check as the noncompliant code example. However, on platforms where additional pointer validation is possible, the use of a valid() function can provide additional checks.

Risk Assessment

A pointer validation function can be used to detect and prevent operations from being performed on some invalid pointers.

Automated Detection

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this rule on the CERT website.

9 Comments

Peter Gutmann

The use of _etext is somewhat unreliable across systems. For example on an x86-64 FreeBSD system from within a shared lib I'm currently getting: p = 0xB34110, &_etext = 0x7F9B0C495246, &_edata = 0x7F9B0C7183A0, &_end = 0x7F9B0C718EE8. In this case p is on the stack, not a malloc'd variable.
I've also had reports of it failing on a PPC Linux embedded device. Presumably this is because the single shared image can be mapped pretty much anywhere into each process' address space, and in addition there can be multiple _etext's for different shared libs. Perhaps someone who knows more about how different OSes lay things out in memory could comment on this, and under which conditions it's safe to use _etext.

Peter Gutmann

Some more comments on _etext: I think in general this is too unreliable to use safely in code unless you have complete control over the environment in which it's deployed. It doesn't work with shared libs, it fails with SELinux (which is something you'd expect to see used in combination with code that's been carefully written to do things like perform pointer checking), and I have no idea what it'll do in combination with different approaches to ASLR, but I suspect it'll break with some of those as well. Perhaps if you could wrap the _etext check in some sort of libmemcheck that runs a self-test on startup and turns the comparison into a no-op if the _etext check can't be relied upon it would be safe, but without this it's too risky to enable on any cross-platform or heterogeneous-environment code. I've (reluctantly) turned it off in my code; I was getting too many error reports... it might be a good idea in the text above to warn about its high level of nonportability, and that even on the same system it can break depending on whether something like SELinux is enabled or not.

Robert Seacord

Maybe I'm just tired and grumpy, but the first Compliant Solution (validation) does exactly the same thing as the non-compliant example (in cases where the validate() function only checks for null pointers), and the second Compliant Solution (assertion) does significantly less, as the assert() would presumably be eliminated in a non-debug build.
Peter Gutmann

Maybe the text could do with a bit of cleaning up. As you say, solution #2 follows rather trivially from #1; I don't think it's necessary to have two examples with one being a runtime check and the other a debug-build-only check. It's also making a rather fine distinction between what's "compliant" and what isn't: the "Compliant Solution" is only compliant if your 'valid' macro uses the non-portable _etext trick, but non-compliant otherwise. Perhaps a better layout would be to remove the distinction between "compliant" and "non-compliant" and re-word the text for solution #1 to say that, depending on whether your OS supports _etext or not, you may get more or less checking than you bargained for. Even if your OS does have _etext, it may not actually work as you want it to, so perhaps phrase it as "a NULL check works everywhere but won't catch many invalid pointers, the _etext check may work but can be erratic" or "may not work with shared libs".

Peter Gutmann

Looks good now, thanks!

Jim Gimpel

I would imagine the compiler/library vendor would have a much better idea on how to write the valid() function. I would hope that it would be inline. There are 3 kinds of valid (at least): a valid function pointer, a valid data pointer, and a pointer that you could pass to free.

Peter Gutmann

The only vendor I know of that's ever done this is Microsoft with their IsBadXXXPtr() checks, which used mem probing and first-chance exceptions to detect invalid pointers. This worked really well but had some unfortunate side-effects that meant they turned them into no-ops starting with Vista. So there really isn't any vendor-blessed way of doing this, which is why there's all the suggestions for platform- and situation-specific checking methods here. For more on the problems with IsBadXXXPtr(), see Raymond Chen's comments on this.
Having used these functions for years though, I know they've caught huge numbers of user errors, particularly when called from a non-C language where the programmer hasn't got the memory-access convention quite right, and I'm not aware of them causing any problems. Another option for Unix systems, and this is just pseudocode, is something like: (assuming the OS doesn't fast-path the checking for special-case FDs, which it shouldn't be doing). If I did use something like the above in a program I think I'd do it under an assumed name though...

Anand Tiwari

For struct/object pointers, we can do a little more in valid(), like checking a magic field in the structure, which can be set while allocating the object and set to something different while freeing it. This will tell us whether it's a valid object pointer, an already freed object pointer, or a corrupt pointer.

David Svoboda

This is commonly called a 'canary', and is often a good partial solution. Canaries can be spoofed by regular corruption (rarely) or by attackers (more common!).
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152072
Contexts¶

Contexts in Gordon are groups of variables which you want to make accessible to your code, but which you don't want to hardcode into it, because their values depend on the deployment. This could be, for example, because dev and production lambdas (although being the same code) need to connect to different resources, use different passwords, or produce slightly different outputs. In the same way, the same lambdas deployed to different regions will probably need to connect to different places.

Contexts solve this problem by injecting a small payload into the lambda's package at deploy time, and letting you read that file at run time using your language of choice.

How contexts work¶

The first thing you'll need to do is define a context in your project settings file (project/settings.yml).

---
project: my-project
default-region: eu-west-1
code-bucket: my-bucket

apps:
  ...

contexts:
  default:
    database_host: 10.0.0.1
    database_username: dev-bob
    database_password: shrug
...

As you can see, we have defined a context called default. All lambdas by default inject the context called default if it is present.

After doing this, Gordon will leave a .context JSON file at the root of your lambda package. You can use your language of choice to read and use it. In the following example, we use Python to read this file.
import json

def handler(event, context):
    with open('.context', 'r') as f:
        gordon_context = json.loads(f.read())
    return gordon_context['database_host']  # Echo the database host

Same example, but written in JavaScript:

var gordon_context = JSON.parse(require('fs').readFileSync('.context', 'utf8'));

exports.handler = function(event, context) {
    context.succeed(gordon_context['database_host']);  // Echo the database host
};

And Java:

// Remember to add 'org.json:json:20160212' to your gradle file
package example;

import java.io.FileNotFoundException;
import java.util.Scanner;
import java.io.File;

import com.amazonaws.services.lambda.runtime.Context;
import org.json.JSONObject;

public class Hello {

    public static class EventClass {
        public EventClass() {}
    }

    public String handler(EventClass event, Context context) throws FileNotFoundException {
        JSONObject gordon_context = new JSONObject(
            new Scanner(new File(".context")).useDelimiter("\\A").next()
        );
        return gordon_context.getString("database_host");
    }
}

Advanced contexts¶

For obvious reasons, hardcoding context values in your project/settings.yml file is quite limited and not very flexible. For this reason Gordon allows you to make the value of any of the context variables reference any parameter.

In the following example, we are going to make all three variables point to three respective parameters. This will allow us to change the value of the context variables easily between stages or regions.

---
project: my-project
default-region: eu-west-1
code-bucket: my-bucket

apps:
  ...

contexts:
  default:
    database_host: ref://DatabaseHost
    database_username: ref://DatabaseUsername
    database_password: ref://DatabasePassword
...
Now we only need to define the value of each of these parameters by creating (for example) a parameters/common.yml file:

---
DatabaseHost: 10.0.0.1
DatabaseUsername: "{{ stage }}-bob"
DatabasePassword: env://MY_DATABASE_PASSWORD

As you can see, this is quite a fancy example, because values are now dynamically generated.

Now you should have a basic understanding of how contexts work. If you want to learn more about parameters, you'll find all the information you need in:

- Parameters: how parameters work
- Advanced Parameters: advanced uses of parameters.
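Building on the Python reader shown earlier, a slightly more defensive loader can fall back to default values when the .context file is absent, for example when exercising a handler locally outside a deployed package. This helper is a sketch, not part of Gordon's API, and the fallback values are illustrative:

```python
import json
import os


def load_context(path=".context", defaults=None):
    """Read the context payload Gordon injects at deploy time.

    Falls back to `defaults` when the file is absent, e.g. when the
    handler is run locally rather than from a deployed package.
    """
    if not os.path.exists(path):
        return dict(defaults or {})
    with open(path, "r") as f:
        return json.loads(f.read())


# Usage inside a handler:
# ctx = load_context(defaults={"database_host": "127.0.0.1"})
# host = ctx["database_host"]
```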
http://gordon.readthedocs.io/en/latest/contexts.html
#include <unistd.h>

char *ttyname(int fildes);

char *ttyname_r(int fildes, char *name, int namelen);

cc [ flag...] file ... -D_POSIX_PTHREAD_SEMANTICS [ library ... ]

int ttyname_r(int fildes, char *name, size_t namesize);

The ttyname() function returns a pointer to a string containing the null-terminated path name of the terminal device associated with file descriptor fildes. The return value points to thread-specific data whose content is overwritten by each call from the same thread.

The ttyname_r() function has the same functionality as ttyname() except that the caller must supply a buffer name with length namelen to store the result; this buffer must be at least _POSIX_PATH_MAX in size (defined in <limits.h>). The standard-conforming version (see standards(5)) of ttyname_r() takes a namesize parameter of type size_t.

Upon successful completion, ttyname() and ttyname_r() return a pointer to a string. Otherwise, a null pointer is returned and errno is set to indicate the error. The standard-conforming ttyname_r() returns 0 if successful; otherwise, an error number is returned to indicate the error.
http://docs.oracle.com/cd/E36784_01/html/E36874/ttyname-3c.html
Suppose I’m writing some code using Conduits, but need to use some old function f::[a]->[b] (defined in a library somewhere) that transforms a lazy list. Is there a way of turning f into a Conduit without ending up with all of the list being in memory? ie something that looks like

    toConduit:: ([a]->[b]) -> ConduitT a b m ()

I’ve got nowhere with Hoogle or Hayoo

--
Jón Fairbairn
[hidden email]

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
Only members subscribed via the mailman list are allowed to post.

Michael Snoyman <[hidden email]> writes:

> You can use Data.Conduit.Lazy for this.

Thanks. Not as straightforward as I had hoped, but I can see why.

On a different note, still attempting to learn, I am trying to use Network.Wai.Conduit with a conduit that has effects (ie involves sourceFile), and so lives in (ResourceT IO). eg

    example:: ConduitT i (Flush Builder) (ResourceT IO) ()

Now, responseSource expects IO, not ResourceT IO, so I don’t think I can use that, so I wrote this:

> responseSourceRes status headers res_conduit
>   = responseStream status200 headers
>       (\send flush -> runConduitRes $ res_conduit
>          .| mapM_ (\e->lift $
>                case e of
>                  Chunk c -> send c
>                  Flush -> flush ))

which runs, but (rather to my surprise) doesn’t produce output (not even headers) until all the effects have completed. That gives rise to two questions: Why does that not stream output? What should I do instead?

--
Jón Fairbairn
[hidden email]

Michael Snoyman <[hidden email]> writes:

> I'd have to see a complete repro to know why the program in question
> doesn't stream.

Thanks.
Here’s a fairly small example

```
module Main where

import Prelude hiding (mapM_)
import Conduit
import Data.Conduit.List (mapM_)
import System.FilePath
import Data.ByteString.UTF8
import Data.Binary.Builder
import GHC.IO.Exception (IOException)
import Network.Wai.Handler.FastCGI (run)
import Network.Wai.Conduit (Application, responseStream)
import Network.HTTP.Types.Status
import Network.HTTP.Types.Header

data_directory = "./test-data/"

main = run $ app

app:: Application
app request respond = do
  respond $ responseSourceRes status200
      [(hContentType, fromString "text/plain; charset=UTF-8")] $ do
    yieldCBS "\nBEGIN\n"
    yield Flush
    wrapSourceFile $ data_directory </> "file1"
    wrapSourceFile $ data_directory </> "a_pipe"
    yieldCBS "END\n"
    yield Flush

wrapSourceFile:: (MonadUnliftIO m, MonadResource m) =>
                 FilePath -> ConduitM a (Flush Builder) m ()
wrapSourceFile path = do
  yieldCBS ("\n" ++ path ++ ":\n")
  catchC (sourceFile path .| mapC (Chunk . fromByteString))
         (\e -> yieldCBS $ "Error: " ++ show (e::IOException) ++ "\n")
  yieldCBS "\n"
  yield Flush

yieldCBS:: Monad m => String -> ConduitT i (Flush Builder) m ()
yieldCBS = yield . Chunk . fromByteString . fromString

responseSourceRes status headers res_conduit
  = responseStream status200 headers
      (\send flush -> runConduitRes $ res_conduit
          .| mapM_ (\e -> liftIO $
                case e of
                  Chunk c -> send c
                  Flush -> flush ))
```

The various flushes in there were attempts to make something come out.

> But I _can_ explain how best to do something like this.
> To frame this: why is something like ResourceT needed here? The issue is we
> want to ensure exception safety around the open file handle, and guarantee
> that the handle is closed regardless of any exceptions being thrown.
> ResourceT solves this problem by letting you register cleanup actions. This
> allows for solving some really complicated dynamic allocation problems, but
> for most cases it's overkill. Instead, a simple use of the bracket pattern
> is sufficient.
You can do that with `withSourceFile`:
>
> ```
> #!/usr/bin/env stack
> -- stack --resolver lts-11.10 script
> import Network.Wai
> import Network.Wai.Handler.Warp
> import Network.Wai.Conduit
> import Network.HTTP.Types
> import Conduit
> import Data.ByteString.Builder (byteString)
>
> main :: IO ()
> main = run 3000 app
>
> app :: Application
> app _req respond =
>   withSourceFile "Main.hs" $ \src ->
>   respond $ responseSource status200 []
>     $ src .| mapC (Chunk . byteString)

I don’t think that will work for what I’m trying to do as the decision to open which file is made within the conduit.

> You can also do this by sticking with ResourceT, which requires jumping
> through some hoops with monad transformers to ensure the original ResourceT
> context is used. I don't recommend this approach unless you really need it:
> it's complicated, and slightly slower than the above. But in case you're
> curious:

Thanks. I think that may be what I want, but it’ll take a while to digest

--
Jón Fairbairn
[hidden email]
http://haskell.1045720.n5.nabble.com/New-to-Conduits-mixing-lazy-lists-and-Conduits-td5878361.html
Often you may be interested in finding the max value of one or more columns in a pandas DataFrame. Fortunately you can do this easily in pandas using the max() function. This tutorial shows several examples of how to use this function.

Example 1: Find the Max Value of a Single Column

Suppose we have the following pandas DataFrame:

import pandas as pd
import numpy as np

#create DataFrame
df = pd.DataFrame({'player': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                   'points': [25, 20, 14, 16, 27, 20, 12, 15, 14, 19],
                   'assists': [5, 7, 7, 8, 5, 7, 6, 9, 9, 5],
                   'rebounds': [np.nan, 8, 10, 6, 6, 9, 6, 10, 10, 7]})

#view DataFrame
df

  player  points  assists  rebounds
0      A      25        5       NaN
1      B      20        7       8.0
2      C      14        7      10.0
3      D      16        8       6.0
4      E      27        5       6.0
5      F      20        7       9.0
6      G      12        6       6.0
7      H      15        9      10.0
8      I      14        9      10.0
9      J      19        5       7.0

We can find the max value of the column titled “points” by using the following syntax:

df['points'].max()

27

The max() function will also exclude NA’s by default. For example, if we find the max of the “rebounds” column, the first value of “NaN” will simply be excluded from the calculation:

df['rebounds'].max()

10.0

The max of a string column is defined lexicographically, i.e. it is the value that comes last in alphabetical order:

df['player'].max()

'J'

Example 2: Find the Max of Multiple Columns

We can find the max of multiple columns by using the following syntax:

#find max of points and rebounds columns
df[['rebounds', 'points']].max()

rebounds    10.0
points      27.0
dtype: float64

Example 3: Find the Max of All Columns

We can also find the max of all numeric columns by using the following syntax:

#find max of all numeric columns in DataFrame
df.max()

player       J
points      27
assists      9
rebounds    10
dtype: object

Example 4: Find Row that Corresponds to Max

We can also return the entire row that corresponds to the max value in a certain column.
For example, the following syntax returns the entire row that corresponds to the player with the max points: #return entire row of player with the max points df[df['points']==df['points'].max()] player points assists rebounds 4 E 27 5 6.0 If multiple rows have the same max value, each row will be returned. For example, suppose player D also scored 27 points: #return entire row of players with the max points df[df['points']==df['points'].max()] player points assists rebounds 3 D 27 8 6.0 4 E 27 5 6.0 You can find the complete documentation for the max() function here.
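As a hedged aside (not part of the original tutorial): if you only need the *position* of the max rather than the matching rows, pandas also offers idxmax(), which returns the index label of the first row holding the max value. The shortened DataFrame below is illustrative:

```python
import pandas as pd

#shortened version of the tutorial's DataFrame
df = pd.DataFrame({'player': ['A', 'B', 'C', 'D', 'E'],
                   'points': [25, 20, 14, 16, 27]})

#idxmax() returns the index label of the first row holding the max
ix = df['points'].idxmax()
print(ix)          # -> 4
print(df.loc[ix])  # the full row for player E
```

Unlike the boolean-mask approach in Example 4, idxmax() returns only the first matching index when there are ties.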
The first is submodules. They are meant to be used in cases where you mainly consume the content: git sees the project in a submodule as a single file which gets updated. The other is subtrees. They are intended for a similar purpose as submodules, but are more write-oriented: git sees the entire subdirectory and knows the history of the project, allowing you to commit more easily. I’m not going to go into depth on those solutions; you can read more about them by clicking on the links listed at the bottom. Both of them have certain problems:

- Changes in the source code of the library require recompilation of the project. It might be a better idea to compile the library once and then only reuse it. This is especially important when we’re using a scripting language and have some binary code for it. For example, in Python we might want to write some code in C for efficiency (compare with “Writing Python Modules in C“). In this case we would like to pull a precompiled package instead of its source code.
- You have separate ways of pulling normal dependencies and the submodules/subtrees.
- Last, but not least (probably the most important problem), you are tempted to get to know the internals of the projects. Since it’s assumed that it’s a big dependency with a dedicated team working on it, it would be wise to separate concerns and deliver a solution which requires familiarity only with the public API, not with the way it’s installed, built, compiled and so on. The same applies to the “write” side: bug fixes and new features are meant to be tracked using a dedicated approach and versioning. They should be thought of as a separate project, not a part of something else.

GitLab’s Package Registry

An answer to these issues might be a dedicated package registry.
Let’s imagine a library with simple math functions (the entire project is linked below):

```python
def add(a: int, b: int) -> int:
    return a + b

def sub(a: int, b: int) -> int:
    return a - b

def mul(a: int, b: int) -> int:
    return a * b

def div(a: int, b: int) -> float:
    return a / b
```

Now, in order to turn this into a functional library, we need a setup.py file:

```python
import setuptools

setuptools.setup(
    name="mymath",
    version="0.0.1",
    author="gonczor",
    author_email="",
    description="A small example package",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.7',
)
```

This defines the name of the module, the author, the version, and the required Python version (so if one team using our project has an outdated Python version, we can still prepare a dedicated library just for them). The project structure is the following:

```
$ tree -I '*pycache*'
.
├── Makefile
├── README.md
├── my_math
│   ├── __init__.py
│   └── my_math.py
└── setup.py

2 directories, 5 files
```

Now we can run the make command, which invokes python3 setup.py sdist bdist_wheel, and see the built files:

```
$ ls -l
total 24
-rw-r--r--  1 wgonczaronek  staff   42 11 paź 22:02 Makefile
-rw-r--r--  1 wgonczaronek  staff   11 11 paź 21:41 README.md
drwxr-xr-x  4 wgonczaronek  staff  128 11 paź 22:11 build
drwxr-xr-x  4 wgonczaronek  staff  128 11 paź 22:11 dist
drwxr-xr-x  5 wgonczaronek  staff  160 11 paź 21:41 my_math
drwxr-xr-x  6 wgonczaronek  staff  192 11 paź 22:11 mymath.egg-info
-rw-r--r--  1 wgonczaronek  staff  404 11 paź 21:41 setup.py
```

What we actually need is in the dist directory. It can be uploaded using:

```
python3 -m twine upload --repository gitlab dist/*
```

which I’ve also included in the Makefile for your convenience. To deploy, you’ll need a .pypirc file with some data about the project and a personal access token (read this to learn how to get one).
My .pypirc content:

```
[distutils]
index-servers =
    gitlab

[gitlab]
repository = 
username = __token__
password = S0M3S3(RE7
```

OK, we can try it out. And voilà: we’re ready to download the library.

Let’s Test It!

Let’s create an example project and add this library as a dependency:

```
$ cat requirements.txt
-i 
mymath==0.0.1
```

And the code we’re integrating with is:

```python
$ cat main.py
from my_math import *

print(add(1, 2))
print(sub(2, 3))
print(mul(3, 4))
print(div(4, 5))
```

Note the underscore in the module name: what we’re importing has a different name from what we’re about to install. OK, let’s get down to business:

```
$ pip3 install -r requirements.txt
Looking in indexes: 
Collecting mymath==0.0.1
  Using cached  (1.5 kB)
Installing collected packages: mymath
Successfully installed mymath-0.0.1
$ python3 main.py
3
-1
12
0.8
```

Success!

Summary

Today we’ve learnt how to manage project dependencies in GitLab by using it as a package registry. Furthermore, we’ve seen why we shouldn’t use git in a way it’s not really intended to be used. Managing dependencies should be done with tools for managing dependencies, not version control systems. Moreover, we could extend this example with automated CI builds and uploads, but that’d be enough for a separate blog post. Last, but not least, please also note that in GitLab’s docs you’ll see how to use these libraries in private projects. I’m not doing this because I wanted to show a simplified process and also let you download the example, so I open-sourced it. Consider signing up for the newsletter to get updated about the latest posts.
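The summary mentions automating the upload in CI. As a sketch only (the job name, stage, and image are my assumptions; the registry URL format and the gitlab-ci-token/CI_JOB_TOKEN authentication follow GitLab's generic PyPI-registry documentation, not this article), a minimal .gitlab-ci.yml might look like:

```yaml
# Hypothetical CI job: build the wheel and push it to this project's
# PyPI package registry using the per-job token.
upload:
  stage: deploy
  image: python:3.9
  script:
    - pip install build twine
    - python -m build
    - >
      TWINE_USERNAME=gitlab-ci-token
      TWINE_PASSWORD=$CI_JOB_TOKEN
      python -m twine upload
      --repository-url "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi"
      dist/*
```

With something like this in place, the manual `make` + `twine upload` step becomes a push-triggered pipeline.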
NAME

pypy3 - fast, compliant alternative implementation of the Python 3 language

SYNOPSIS

pypy3 [options] [-c cmd|-m mod|file.py|-] [arg...]

OPTIONS

- -i - Inspect interactively after running script.
- -O - Skip assert statements.
- -OO - Remove docstrings when importing modules in addition to -O.
- -c CMD - Program passed in as CMD (terminates option list).
- -S - Do not import site on initialization.
- -s - Don't add the user site directory to sys.path.
- -u - Unbuffered binary stdout and stderr.
- -h, --help - Show a help message and exit.
- -m MOD - Library module to be run as a script (terminates option list).
- -W ARG - Warning control (arg is action:message:category:module:lineno).
- -E - Ignore environment variables (such as PYTHONPATH).
- -B - Disable writing bytecode (.pyc) files.
- -X track-resources - Produce a ResourceWarning whenever a file or socket is closed by the garbage collector.
- --version - Print the PyPy version.
- --info - Print translation information about this PyPy executable.
- --jit ARG - Low level JIT parameters. Mostly internal. Run --jit help for more information.

ENVIRONMENT

- PYTHONPATH - Add directories to pypy3's module search path.
- PYPYLOG - Enable logging; the value is fname or +fname - logging for profiling: includes all debug_start/debug_stop but not any nested debug_print. fname can be - to log to stderr. The +fname form can be used if there is a : in fname.
- PYPY_IRC_TOPIC - If set to a non-empty value, print a random #pypy IRC topic at startup of interactive mode.

PyPy's default garbage collector is called incminimark - it's an incremental, generational moving collector. Here we hope to explain a bit how it works and how it can be tuned to suit the workload.

Incminimark first allocates objects in a so-called nursery - a place for young objects, where allocation is very cheap, being just a pointer bump.
The nursery size is a very crucial variable - depending on your workload (one or many processes) and cache sizes, you might want to experiment with it via the PYPY_GC_NURSERY environment variable. When the nursery is full, a minor collection is performed. Freed objects are no longer referencable and simply die by not being referenced any more; on the other hand, objects found to still be alive must survive and are copied from the nursery to the old generation: either to arenas, which are collections of objects of the same size, or directly allocated with malloc if they're larger. (A third category, the very large objects, are initially allocated outside the nursery and never move.)

Since incminimark is an incremental GC, the major collection is incremental: the goal is not to have any pause longer than 1ms, but in practice it depends on the size and characteristics of the heap: occasionally, there can be pauses between 10-100ms.

Semi-manual GC management

If there are parts of the program where it is important to have a low latency, you might want to control precisely when the GC runs, to avoid unexpected pauses. Note that this has an effect only on major collections, while minor collections continue to work as usual.

As explained above, a full major collection consists of N steps, where N depends on the size of the heap; generally speaking, it is not possible to predict how many steps will be needed to complete a collection.

gc.enable() and gc.disable() control whether the GC runs collection steps automatically. When the GC is disabled the memory usage will grow indefinitely, unless you manually call gc.collect() and gc.collect_step(). gc.collect() runs a full major collection. gc.collect_step() runs a single collection step. It returns an object of type GcCollectStepStats, the same that is passed to the corresponding GC hooks.
The following code is roughly equivalent to a gc.collect():

```python
while True:
    if gc.collect_step().major_is_done:
        break
```

For a real-world example of usage of this API, you can look at the 3rd-party module pypytools.gc.custom, which also provides a `with customgc.nogc()` context manager to mark sections where the GC is forbidden.

Fragmentation

Before we discuss issues of "fragmentation", we need a bit of precision. There are two kinds of related but distinct issues:

- If the program allocates a lot of memory, and then frees it all by dropping all references to it, then we might expect the RSS to drop. (RSS = Resident Set Size on Linux, as seen by "top"; it is an approximation of the actual memory usage from the OS's point of view.) This might not occur: the RSS may remain at its highest value. This issue is more precisely caused by the process not returning "free" memory to the OS. We call this case "unreturned memory".
- After doing the above, if the RSS didn't go down, then at least future allocations should not cause the RSS to grow more. That is, the process should reuse unreturned memory as long as it has some left. If this does not occur, the RSS grows even larger and we have real fragmentation issues.

gc.get_stats

There is a special function in the gc module called get_stats(memory_pressure=False). memory_pressure controls whether or not to report memory pressure from objects allocated outside of the GC, which requires walking the entire heap, so it's disabled by default due to its cost. Enable it when debugging mysterious memory disappearance.
An example call looks like this:

```
>>> gc.get_stats(True)
Total memory consumed:
    GC used:            4.2MB (peak: 4.2MB)
       in arenas:            763.7kB
       rawmalloced:          383.1kB
       nursery:              3.1MB
    raw assembler used: 0.0kB
    memory pressure:    0.0kB
    -----------------------------
    Total:              4.2MB

Total memory allocated:
    GC allocated:            4.5MB (peak: 4.5MB)
       in arenas:            763.7kB
       rawmalloced:          383.1kB
       nursery:              3.1MB
    raw assembler allocated: 0.0kB
    memory pressure:         0.0kB
    -----------------------------
    Total:                   4.5MB
```

In this particular case, which is just at startup, the GC consumes relatively little memory and there is even less unused, but allocated, memory. In case there is a lot of unreturned memory or actual fragmentation, the "allocated" can be much higher than "used". Generally speaking, "peak" will more closely resemble the actual memory consumed as reported by RSS. Indeed, returning memory to the OS is a hard and not solved problem. In PyPy, it occurs only if an arena is entirely free---a contiguous block of 64 pages of 4 or 8 KB each. It is also rare for the "rawmalloced" category, at least for common system implementations of malloc().

The details of the various fields:

- GC in arenas - small old objects held in arenas. If the amount "allocated" is much higher than the amount "used", we have unreturned memory. It is possible but unlikely that we have internal fragmentation here. However, this unreturned memory cannot be reused for any malloc(), including the memory from the "rawmalloced" section.
- GC rawmalloced - large objects allocated with malloc. This gives the current (first block of text) and peak (second block of text) memory allocated with malloc(). The amount of unreturned memory or fragmentation caused by malloc() cannot easily be reported. Usually you can guess there is some if the RSS is much larger than the total memory reported for "GC allocated", but do keep in mind that this total does not include malloc'ed memory not known to PyPy's GC at all.
If you guess there is some, consider using jemalloc as opposed to the system malloc.

- nursery - the amount of memory allocated for the nursery; fixed at startup, controlled via an environment variable.
- raw assembler allocated - the amount of assembler memory that the JIT feels responsible for.
- memory pressure, if asked for - the amount of memory we think got allocated via external malloc (e.g. loading a cert store in SSL contexts) that is kept alive by GC objects, but not accounted for in the GC.

GC Hooks

GC hooks are user-defined functions which are called whenever a specific GC event occurs, and can be used to monitor GC activity and pauses. You can install the hooks by setting the following attributes:

- gc.hooks.on_gc_minor - Called whenever a minor collection occurs. It corresponds to gc-minor sections inside PYPYLOG.
- gc.hooks.on_gc_collect_step - Called whenever an incremental step of a major collection occurs. It corresponds to gc-collect-step sections inside PYPYLOG.
- gc.hooks.on_gc_collect - Called after the last incremental step, when a major collection is fully done. It corresponds to gc-collect-done sections inside PYPYLOG.

To uninstall a hook, simply set the corresponding attribute to None. To install all hooks at once, you can call gc.hooks.set(obj), which will look for methods on_gc_* on obj. To uninstall all the hooks at once, you can call gc.hooks.reset().

The functions called by the hooks receive a single stats argument, which contains various statistics about the event. Note that PyPy cannot call the hooks immediately after a GC event, but has to wait until it reaches a point at which the interpreter is in a known state and calling user-defined code is harmless. It might happen that multiple events occur before the hook is invoked: in this case, you can inspect the value stats.count to know how many times the event has occurred since the last time the hook was called.
Similarly, stats.duration contains the total time spent by the GC for this specific event since the last time the hook was called. On the other hand, all the other fields of the stats object are relative only to the last event of the series.

The attributes for GcMinorStats are:

- count - The number of minor collections that occurred since the last hook call.
- duration - The total time spent inside minor collections since the last hook call, in seconds.
- duration_min - The duration of the fastest minor collection since the last hook call.
- duration_max - The duration of the slowest minor collection since the last hook call.
- total_memory_used - The amount of memory used at the end of the minor collection, in bytes. This includes the memory used in arenas (for GC-managed memory) and raw-malloced memory (e.g., the content of numpy arrays).
- pinned_objects - The number of pinned objects.

The attributes for GcCollectStepStats are:

- count, duration, duration_min, duration_max - See above.
- oldstate, newstate - Integers which indicate the state of the GC before and after the step.
- major_is_done - Boolean which indicates whether this was the last step of the major collection.

The value of oldstate and newstate is one of these constants, defined inside gc.GcCollectStepStats: STATE_SCANNING, STATE_MARKING, STATE_SWEEPING, STATE_FINALIZING, STATE_USERDEL. It is possible to get a string representation of it by indexing the GC_STATES tuple.

The attributes for GcCollectStats are:

- count - See above.
- num_major_collects - The total number of major collections which have been done since the start. Contrary to count, this is an always-growing counter and it's not reset between invocations.
- arenas_count_before, arenas_count_after - The number of arenas used before and after the major collection.
- arenas_bytes - The total number of bytes used by GC-managed objects.
- rawmalloc_bytes_before, rawmalloc_bytes_after - The total number of bytes used by raw-malloced objects, before and after the major collection.
Note that GcCollectStats has not got a duration field. This is because all the GC work is done inside gc-collect-step: gc-collect-done is used only to give additional stats, but doesn't do any actual work. Here is an example of GC hooks in use:

```python
import sys
import gc

class MyHooks(object):
    done = False

    def on_gc_minor(self, stats):
        print('gc-minor: count = %02d, duration = %d' % (stats.count,
                                                         stats.duration))

    def on_gc_collect_step(self, stats):
        old = gc.GcCollectStepStats.GC_STATES[stats.oldstate]
        new = gc.GcCollectStepStats.GC_STATES[stats.newstate]
        print('gc-collect-step: %s --> %s' % (old, new))
        print('    count = %02d, duration = %d' % (stats.count,
                                                   stats.duration))

    def on_gc_collect(self, stats):
        print('gc-collect-done: count = %02d' % stats.count)
        self.done = True

hooks = MyHooks()
gc.hooks.set(hooks)

# simulate some GC activity
lst = []
while not hooks.done:
    lst = [lst, 1, 2, 3]
```

Environment variables

PyPy's default incminimark garbage collector is configurable through several environment variables:

- PYPY_GC_NURSERY - The nursery size. Defaults to 1/2 of your last-level cache, or 4M if unknown. Small values (like 1 or 1KB) are useful for debugging.
- PYPY_GC_NURSERY_DEBUG - If set to non-zero, will fill the nursery with garbage, to help debugging.
- PYPY_GC_MAX_PINNED - The maximal number of pinned objects at any point in time. Defaults to a conservative value depending on nursery size and maximum object size inside the nursery. Useful for debugging by setting it to 0.
Problem 36

Expected time to find a Delawarean

Due: February 9
Points: 3

The State of Delaware sports 3 representatives in the U.S. Congress, out of a total of 541 voting and non-voting members. Based on this, we might expect, on average, 3 in every 541 Midshipmen to be from Delaware. (The number is probably even higher than this, based on the vice president and the general quality of people who grew up in Delaware. If this ratio of approximately 0.55% seems small, compare it to the state's share of the total U.S. population, which is 917092/315487000, or about 0.29%. Take that, Texas!)

Consider the following algorithm to find a Delawarean midshipman:

```
def findBlueHen():
    M = chooseRandomMidshipman()
    if M is from Delaware:
        return M
    else:
        return findBlueHen()
```

Part 1: Write a recurrence to describe the running time of the recursive algorithm. There should be 2 cases, and a probability for each case.

Part 2: Solve your recurrence to determine the expected running time of the algorithm.
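Not part of the assignment: before solving the recurrence analytically, you can sanity-check the expected number of recursive calls with a quick simulation. The function below is a hypothetical stand-in for findBlueHen, treating each draw as independent with success probability 3/541:

```python
import random

def find_blue_hen_calls(p=3/541, rng=None):
    """Count calls to the recursive search until a Delawarean is drawn."""
    rng = rng or random.Random()
    calls = 1
    while rng.random() >= p:  # this draw was not from Delaware: recurse
        calls += 1
    return calls

# The number of calls follows a geometric distribution with success
# probability p, so averaging many runs estimates the expected running time.
rng = random.Random(0)
n = 100_000
mean = sum(find_blue_hen_calls(rng=rng) for _ in range(n)) / n
print(round(mean, 1))
```

The printed average should agree with whatever closed form you derive in Part 2.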
Comment on Tutorial - append() in Java, by Jagan

Comment added by: Mohamed firnaz
Comment added at: 2011-12-07 04:54:27

Excellent. I want you to publish the memory allocation for every sample program side by side, as it plays a major role in live projects. For example:

```java
public class StringMemory {
    public static void main(String[] args) {
        String str = "Sri seshaa";
        String str1 = " Sri seshaa";
        String str2 = str + str1 + " technologies";
        StringBuffer s;
        String d;
        StringBuffer bufr = new StringBuffer(str);
        s = bufr.append(str2);
        d = String.valueOf(str);
        System.out.println("......");
        System.out.println(str2);
        System.out.println("......");
        System.out.println(d);
    }
}
```

In this example, the memory allocated for concatenation with the String class is more than for the append operation on StringBuffer. Like this, there are many advantages and disadvantages. Please publish them too; it will be helpful for young programmers, like Java developers and those interested in the SCJP exam.
While debugging performance issues in a Spark program, I've found a simple way to slow down Spark 1.6 significantly by filling the RDD memory cache. This seems to be a regression, because setting "spark.memory.useLegacyMode=true" fixes the problem. Here is a repro that is just a simple program that fills the memory cache of Spark using a MEMORY_ONLY cached RDD (but of course this comes up in more complex situations, too):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel

object CacheDemoApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Cache Demo Application")
    val sc = new SparkContext(conf)
    val startTime = System.currentTimeMillis()

    val cacheFiller = sc.parallelize(1 to 500000000, 1000)
      .mapPartitionsWithIndex {
        case (ix, it) =>
          println(s"CREATE DATA PARTITION ${ix}")
          val r = new scala.util.Random(ix)
          it.map(x => (r.nextLong, r.nextLong))
      }
    cacheFiller.persist(StorageLevel.MEMORY_ONLY)
    cacheFiller.foreach(identity)

    val finishTime = System.currentTimeMillis()
    val elapsedTime = (finishTime - startTime) / 1000
    println(s"TIME= $elapsedTime s")
  }
}
```

If I call it the following way, it completes in around 5 minutes on my laptop, while often stopping for slow full GC cycles. I can also see with jvisualvm (Visual GC plugin) that the old generation of the JVM is 96.8% filled:

```
sbt package
~/spark-1.6.0/bin/spark-submit \
  --class "CacheDemoApp" \
  --master "local[2]" \
  --driver-memory 3g \
  --driver-java-options "-XX:+PrintGCDetails" \
  target/scala-2.10/simple-project_2.10-1.0.jar
```

If I add any one of the flags below, then the run time drops to around 40-50 seconds, and the difference comes from the drop in GC times:

```
--conf "spark.memory.fraction=0.6"
```
OR
```
--conf "spark.memory.useLegacyMode=true"
```
OR
```
--driver-java-options "-XX:NewRatio=3"
```

All the other cache types except for DISK_ONLY produce similar symptoms.
It looks like the problem is that the amount of data Spark wants to store long-term ends up being larger than the old generation size in the JVM, and this triggers full GC repeatedly. I did some research:

- In Spark 1.6, spark.memory.fraction is the upper limit on cache size. It defaults to 0.75.
- In Spark 1.5, spark.storage.memoryFraction is the upper limit on cache size. It defaults to 0.6, and the documentation even says that it shouldn't be bigger than the size of the old generation.
- On the other hand, OpenJDK's default NewRatio is 2, which means an old generation size of 66%. Hence the default value in Spark 1.6 contradicts this advice.

The tuning documentation recommends that if the old generation is running close to full, then setting spark.memory.storageFraction to a lower value should help. I have tried with spark.memory.storageFraction=0.1, but it still doesn't fix the issue. This is not a surprise: the configuration documentation explains that storageFraction is not an upper limit but a lower-limit-like thing on the size of Spark's cache. The real upper limit is spark.memory.fraction.

To sum up my questions/issues:

- At least the documentation should be fixed. Maybe the old generation size should also be mentioned in configuration.html near spark.memory.fraction.
- Is it a goal for Spark to support heavy caching with default parameters and without GC breakdown? If so, then better default values are needed.
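To make the arithmetic behind the research notes concrete (a sketch of my own, not from the original report): with -XX:NewRatio=R, the old generation occupies R/(R+1) of the heap, which shows why the default NewRatio=2 (old gen ≈ 66.7%) cannot hold a cache allowed to grow to 75% of the heap, while NewRatio=3 (old gen = 75%) can:

```python
def old_gen_fraction(new_ratio):
    # With -XX:NewRatio=R, the old generation is R times as large as the
    # young generation, i.e. it occupies R / (R + 1) of the heap.
    return new_ratio / (new_ratio + 1)

spark_memory_fraction = 0.75  # Spark 1.6 default cap on cache size

for r in (2, 3):
    frac = old_gen_fraction(r)
    print(f"NewRatio={r}: old gen = {frac:.1%}, "
          f"can hold the cache: {frac >= spark_memory_fraction}")
```

This matches the observed fix: either lower spark.memory.fraction below 2/3, or raise NewRatio to 3.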
Using Vue.js

WordPress MVC (WPMVC) flexibility allows developers to freely add progressive frameworks, like Vue or React, into its assets compilation cycle. This article will describe shortly how to configure your WordPress MVC project to work with Vue.js.

Step 1: Install dependencies

You will need to add several NPM package dependencies. The first one is webpack; this package will let you compile .vue components. The following commands will install the webpack packages in your project and save them as dependencies:

```
npm install webpack --save
npm install webpack-cli --save
```

The second one will be vue. The following command will install the vue package in your project and save it as a dependency:

```
npm install vue --save
```

The third group of dependencies will be those needed by webpack to compile .vue components, compile SASS or SCSS styles, and generate the .js and .css files needed for the browser. The following commands will install the packages in your project and save them as dependencies:

```
npm install @babel/core --save
npm install @babel/preset-env --save
npm install babel-loader --save
npm install css-loader --save
npm install mini-css-extract-plugin --save
npm install node-sass --save
npm install sass-loader --save
npm install vue-loader --save
npm install vue-style-loader --save
npm install vue-template-compiler --save
```

Step 2: Create assets folder structure

Vue components and applications are “raw” and need to be included in the WPMVC build cycle. To preserve the structure given in WPMVC, you will create your uncompiled Vue files in the /assets/raw/vue folder. Inside, you will create an empty init.js file and the folders /apps, /components and /mixins. The following example shows the structure described above:

```
+ /assets
  + /raw
    - /css
    - /js
    - /sass
    + /vue
      - /apps
      - /components
      - /mixins
      - init.js
```

/apps: Holds all the Vue applications that you might create for your project.
/components: Holds all the .vue components that you might create and might want to provide to your applications.

/mixins: Holds any Vue mixins that you might create for your components.

init.js: The script used to initialize a global variable and any other global settings needed in your project.

Step 3: Define init.js

Open /assets/raw/vue/init.js and paste the following code:

```js
/**
 * Vue init script.
 * @version 1.0.0
 */

/**
 * myApp will hold all your Vue applications.
 * @var {object}
 */
window.myApp = {};
```

window.myApp will hold all your Vue applications during execution, meaning that you will be able to access them through the browser. You can change myApp to your namespace or something that suits you and your project better.

Note: init.js isn’t required to run your applications; the benefit of this script is to help us configure your Vue commons in just one place.

Step 4: Create the HelloWorld.vue component

Create a new file named HelloWorld.vue inside /assets/raw/vue/components and paste the following code in it:

```
<template>
    <div class="hello-world">
        {{message}}
    </div>
</template>

<style lang="scss">
.hello-world {
    color: red;
}
</style>

<script>
export default {
    name: 'hello-world',
    props: {
        message: {
            required: true,
            type: String,
        },
    },
};
</script>
```

The code above is a simple “Hello world” component that shows the message passed as a property (or HTML attribute).

Step 5: Create the demo.js application

The component created in the step above needs an application in order to be used. Create a new file named demo.js inside /assets/raw/vue/apps and paste the following code in it:

```js
import HelloWorld from './../components/HelloWorld.vue';

/**
 * Demo application.
 * @version 1.0.0
 */
window.myApp.demo = new Vue( {
    el: '#demo',
    components: {
        HelloWorld,
    },
} );
```

Notice how the example above creates a new Vue application in window.myApp.demo; if you changed myApp in “Step 3”, then you will need to change it here as well. Notice how HelloWorld.vue is imported and added as a component.
Step 6: Add webpack compilation

Create a new file named webpack.config.js at the root of your project (the same path where package.json is located) and paste the following code in it:

```js
// webpack.config.js
const VueLoaderPlugin = require( 'vue-loader/lib/plugin' );
const MiniCssExtractPlugin = require( 'mini-css-extract-plugin' );
const webpack = require( 'webpack' );
const path = require( 'path' );

/**
 * Webpack configuration file.
 * @version 1.0.0
 */
module.exports = {
    mode: 'production',
    entry: {
        'vue-init': './assets/raw/vue/init.js',
        'vue-demo': './assets/raw/vue/apps/demo.js',
    },
    output: {
        filename: '[name].js',
        path: path.resolve( __dirname, 'assets/js' ),
        publicPath: './assets/js',
    },
    module: {
        rules: [
            {
                test: /\.vue$/,
                loader: 'vue-loader',
            },
            {
                test: /\.js$/,
                loader: 'babel-loader',
            },
            {
                test: /\.css$/,
                use: [
                    'vue-style-loader',
                    MiniCssExtractPlugin.loader,
                    'css-loader',
                ]
            },
            {
                test: /\.scss$/,
                use: [
                    'vue-style-loader',
                    MiniCssExtractPlugin.loader,
                    'css-loader',
                    'sass-loader',
                ]
            },
        ]
    },
    plugins: [
        // make sure to include the plugin!
        new VueLoaderPlugin(),
        new MiniCssExtractPlugin( {
            filename: '[name].css',
            chunkFilename: '[id].css',
            path: path.resolve( __dirname, 'assets/css' ),
            publicPath: './assets/css',
        } ),
    ],
    resolve: {
        alias: {
            'vue$': 'vue/dist/vue.common.js',
        },
    },
};
```

This webpack configuration file will compile your .vue components, applications, and dependencies into JavaScript and CSS (browser-ready) files. The compiled files will be put inside the /assets/js and /assets/css folders, the same as what WPMVC does with other asset files.

Entries

The entry property inside the webpack configuration file indicates which files need compilation:

```js
module.exports = {
    ...
    entry: {
        'vue-init': './assets/raw/vue/init.js',
        'vue-demo': './assets/raw/vue/apps/demo.js',
    },
    ...
};
```

The example above indicates that the files you want webpack to compile are the init.js file and the demo.js application file.
The expected output would be one JavaScript file for init.js (this file does not output any styling), and one JavaScript file and one CSS file for demo.js:

```
/assets/js/vue-init.js
/assets/js/vue-demo.js
/assets/css/vue-demo.css
```

Step 7: Add to the WPMVC compilation cycle

You need to add the following entry in the scripts section of your package.json file in order for NPM and WPMVC to know that webpack is executable:

```json
{
    ...,
    "scripts": {
        "webpack": "webpack --hide-modules"
    }
}
```

WPMVC will detect that the webpack.config.js file exists and will execute the script line defined above during its compilation cycle.

Step 8: Add the Vue dependency and a watch

Step 7 will compile your Vue assets, but the browser will need Vue in order to run them, therefore you will need to include it in your project as an asset. Paste the following code in your project's gulpfile.js file (after the // START - CUSTOM TASKS line):

```js
config.prescripts = ['vendor-js'];

/**
 * Vendor JS
 */
gulp.task('vendor-js', function() {
    return gulp.src([
        './node_modules/vue/dist/vue.min.js',
    ])
        .pipe(gulp.dest('./assets/js'));
});
```

The task vendor-js will copy dependencies from NPM's /node_modules folder into your project's /assets for visibility. The config.prescripts = ['vendor-js'] line adds the task into the WPMVC compilation process (before any JavaScript file in /assets/raw is processed).

Paste the following code in your project's gulpfile.js file (after the code above):

```js
/**
 * Vue watch.
 */
gulp.task('watch-vue', async function () {
    gulp.watch([
        './assets/raw/vue/**/*.vue',
        './assets/raw/vue/components/**/*.vue',
        './assets/raw/vue/**/*.js',
    ], gulp.series('webpack'));
});
```

The task watch-vue adds a new watch command that will allow gulp to watch for any changes in your Vue code and proceed to compile it automatically.
Run the following command at the root of your project to watch vue:

gulp watch-vue

Step 9: Run WPMVC compilation cycle

Run the following command at the root of your project:

gulp dev

The command above should run the WPMVC compilation process, including the customizations you added to support Vue. The following files should be created as output:

/assets/js/vue.min.js
/assets/js/vue-init.js
/assets/js/vue-demo.js
/assets/css/vue-demo.css

Note: gulp build and gulp deploy will also consider these customizations.

Step 10: Enqueue assets

Finally, you need to enqueue the assets into WordPress. You will use WPMVC's auto-enqueue system for this. Add the following to your /app/Config/app.php configuration file:

return [
    ...,
    'autoenqueue' => [
        'assets' => [
            [
                'id' => 'vue',
                'asset' => 'js/vue.min.js',
                'version' => '2.6.14',
                'footer' => true,
            ],
            [
                'id' => 'vue-init',
                'asset' => 'js/vue-init.js',
                'dep' => ['vue'],
                'footer' => true,
            ],
            [
                'id' => 'vue-demo',
                'asset' => 'js/vue-demo.js',
                'dep' => ['vue-init'],
                'footer' => true,
            ],
            [
                'id' => 'vue-demo',
                'asset' => 'css/vue-demo.css',
                'footer' => 'all',
            ],
        ],
    ],
];

The configuration above will auto-enqueue all the assets; you should now be ready to view your application in a browser.
https://www.wordpress-mvc.com/2021/09/05/using-vue-js/?utm_source=rss&utm_medium=rss&utm_campaign=using-vue-js
Fractran is insanely difficult to program in, but based on one of the most bizarrely elegant concepts of computation.

A Fractran program is an ordered list of positive fractions together with an initial positive integer input. The program is run by updating the accumulator.

Any number that can't be divided by any other number, apart from itself and one, is prime. Since primes can't be divided, we can think of them as the DNA of other numbers. In Fractran, each prime is a register and its exponent is its value.

The Accumulator

The state of the accumulator is held as a single number, whose prime factorization holds these registers (2, 3, 5, 7, 11, 13, 17, ...). If the state of the accumulator is 1008 (2⁴ × 3² × 7), r2 has the value 4, r3 has the value 2, r7 has the value 1, and all other registers are unassigned.

The Operators

A Fractran operation is a positive fraction; each fraction represents an instruction that tests one or more registers, represented by the prime factors of its denominator. The Fractran computer goes through each fraction in order, in terms of our current accumulator value.

18 (2¹ × 3²)  2/3  =  8 (2³)  addition r2+r3->r2

To run the adder operation (2/3), we take the state of the accumulator. If multiplying it by this fraction gives us an integer, we do so and start again at the beginning of the program; otherwise, we stop and consider the program complete. We do this repeatedly until we can no longer produce an integer with this method.

To add the values 1 and 2, we store the values in registers 2 and 3; our starting state is therefore 18 (2¹ × 3²). For each step of the program, we multiply our state by the program (18 × 2/3 = 12, 12 × 2/3 = 8, ...) until our working value cannot be reduced to a whole number (16/3), at which point we have exhausted the program.

Alternatively, the program 3/2 will do the same operation but store the result in register 3.
576 (2⁶ × 3²)  1/6  =  16 (2⁴)  subtraction r2-r3->r2

Operations become more readable when broken down into their primes: if the registers hold at least the values specified in the denominator, you subtract those values from the registers, add all the values specified in the numerator, and then jump back to the first instruction. Otherwise, if any register is less than the value specified in the denominator, continue to the next fraction.

The Programs

A Fractran program is a list of fractions together with an initial positive integer input n. The program is run by updating the integer n as follows:

- For each fraction in the list for which the multiplication of the accumulator and the fraction is an integer, replace the accumulator by the result of that multiplication.
- Repeat this rule until no fraction in the list produces an integer when multiplied by the accumulator, then halt.

Let's put together an adder program similar to the one above (2/3) but which writes to a third register. The following program first moves the content of r2 to r3, and then the content of r3 to r5.

18 (2¹ × 3²)  3/2 5/3  =  125 (5³)  addition r2+r3->r5 (9 steps)

Alternatively, a faster way to do this would be to directly move powers of 2 over to 5, then powers of 3.

18 (2¹ × 3²)  5/2 5/3  =  125 (5³)  addition r2+r3->r5 (7 steps)

Each of the 7 steps of this last program looks like:

18 5/2 5/3

                    [18] r2=01 r3=02
------------------  -----------------
18 × 5/2 = 45/1     [45] r3=02 r5=01
45 × 5/2 = 225/2
45 × 5/3 = 75/1     [75] r3=01 r5=02
75 × 5/2 = 375/2
75 × 5/3 = 125/1    [125] r5=03
125 × 5/2 = 625/2
125 × 5/3 = 625/3   [125] r5=03

Both of these programs are destructive, meaning that they drain the registers of their original values. We can make (2/3) less destructive with (10/3) by storing a copy of r3 in r5.
And we can create a non-destructive adder, but this requires coming into the program with the flag r7 set:

126 (2¹ × 3² × 7¹)  7/11 715/14 935/21 1/7 2/13 3/17  =  2250 (2¹ × 3² × 5³)

As an extra demonstration, let us consider the following programs representing all the logic gates:

Interpreter

A simple Fractran interpreter, written in ANSI C, showing the value in the registers as it steps through the program.

#include <stdio.h>

/* Copyright (c) 2020 Devine Lu Linvega. */

typedef struct Fraction {
	unsigned int num, den;
} Fraction;

typedef struct Machine {
	int len;
	Fraction acc, program[256];
} Machine;

int gcd(int a, int b) {
	if(b == 0)
		return a;
	return gcd(b, a % b);
}

Fraction Frac(unsigned int num, unsigned int den) {
	Fraction f;
	unsigned int d = gcd(num, den);
	f.num = num / d;
	f.den = den / d;
	return f;
}

void printstate(Machine *m) {
	unsigned int fac = 2, num = m->acc.num;
	printf("[%d] ", num);
	while(num > 1) {
		if(num % fac == 0) {
			unsigned int pow = 1;
			printf("r%02u=", fac);
			num /= fac;
			while(!(num % fac)) {
				num /= fac;
				pow++;
			}
			printf("%02u", pow);
			if(num != 1)
				putchar(' ');
		} else
			fac++;
	}
	putchar('\n');
}

void run(Machine *m) {
	int i = 0, steps = 0;
	while(i < m->len && m->acc.num) {
		Fraction res, *f = &m->program[i++];
		res = Frac(m->acc.num * f->num, m->acc.den * f->den);
		printf("%u × %u/%u = %u/%u \n", m->acc.num, f->num, f->den, res.num, res.den);
		if(res.den == 1) {
			m->acc = res;
			printstate(m);
			i = 0;
		}
		steps++;
	}
	if(steps) {
		printstate(m);
		printf("Completed in %d steps.\n", steps);
	}
}

void push(Machine *m, char *w) {
	Fraction f;
	if(!m->acc.den) {
		if(sscanf(w, "%u", &m->acc.num) > 0)
			m->acc.den = 1;
		return;
	}
	if(sscanf(w, "%u/%u", &f.num, &f.den) > 0)
		m->program[m->len++] = f;
}

Machine m;

int main(void) {
	int len = 0, c;
	char word[64];
	while((c = fgetc(stdin)) != EOF) {
		if(c == ' ' || c == '\n') {
			word[len] = '\0';
			len = 0;
			push(&m, word);
		} else
			word[len++] = c;
		if(c == '\n')
			break;
	}
	printstate(&m);
	run(&m);
	return 0;
}

A common man marvels at uncommon
things; a wise man marvels at the commonplace. —Confucius

- Fractran Interpreter (C89)
- Fractran Interpreter (Web)
- Intro to Fractran
- Article on Esolang
- Collatz function
- Register Machine
- OISC

incoming(1): firth

Last update on 14Y10, edited 5 times.
https://wiki.xxiivv.com/site/fractran.html
Red Hat Bugzilla – Bug 97636

LTC3150-RedHat EL3 Alpha 3: pthread_mutex_destroy() can destroy a locked mutex

Last modified: 2007-11-30 17:06:56 EST

The following has been reported by IBM LTC:

RedHat EL3 Alpha 3: pthread_mutex_destroy() can destroy a locked mutex

Please fill in each of the sections below.

Hardware Environment: 2-way POWER3
Software Environment: RedHat EL3 Alpha 3

Steps to Reproduce:
On Redhat EL3 Alpha 3, pthread_mutex_destroy() can successfully destroy a mutex while it is locked. The Linux man page of pthread_mutex_destroy() says:

    The pthread_mutex_destroy function returns the following error code on error:
    EBUSY  the mutex is currently locked.

The IEEE Std 1003.1, 2003 (POSIX) standard specifies:

    The pthread_mutex_destroy() function may fail if:
    [EBUSY]  The implementation has detected an attempt to destroy the object referenced by mutex while it is locked or ...

Test case and execution:

/home/xxue/work> cat destroymutex.c

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>

pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

int main()
{
    int rc;

    pthread_mutex_lock(&mymutex);
    rc = pthread_mutex_destroy(&mymutex);
    if (rc != 0) {
        printf("Error from pthread_mutex_destroy() <%s>.\n", strerror(rc));
    } else {
        printf("pthread_mutex_destroy() returns <%d>.\n", rc);
    }
}

/home/xxue/work> gcc destroymutex.c -lpthread
/home/xxue/work> a.out
pthread_mutex_destroy() returns <0>.

The same test case works as expected on other platforms, including SLES8:

/home/xxue/work> gcc destroymutex.c -lpthread
/home/xxue/work> a.out
Error from pthread_mutex_destroy() <Device or resource busy>.

Actual Results:
Expected Results:
Additional Information:

Mike - I know this is an alpha bug (against RHEL 3.0 Alpha), but I'd like to ask if you can check the relevant POSIX standard for this issue before we send it off to Red Hat.
Thanks.

I found a thread of discussion about this on a Red Hat-oriented message board, though none of the people in the thread are from Red Hat. Apparently this behaviour is undefined in POSIX and therefore optional. Error-checking of this nature is left optional to allow better performance in environments where the application can be trusted to be well-written. Here is the link to the discussion:

Here is a link to a detailed discussion of pthread_mutex_destroy() on the Open Group site:

It confirms this error-checking *may* be implemented, but it is optional (note the word *may* in Xing Xue's quote of the standard in the bug description above). *Shall* would be the word used for required implementation. This document references IEEE Std 1003.1, 2003 Edition, so we know this is current information.

I tried the test on RHAS 2.1 and it failed, so apparently this behaviour changed going from LinuxThreads in RHAS 2.1 to NPTL in RHEL Alpha 3. I verified the man page under RHAS 2.1 does indicate that this should return an error. If RHEL Alpha 3 is showing the same man page, perhaps the bug is that they have not updated the thread library man pages for NPTL. But as far as this being a POSIX incompatibility, I recommend rejecting this bug as NOTABUG.

Given the POSIX standard says it "may" fail, I don't think we should rush into the conclusion 'NOTABUG'. I'd suggest to verify with Red Hat to see if this is the designed behavior of NPTL.

Either NPTL is exhibiting the wrong behavior or Red Hat has not updated the man pages for RHEL Alpha 3 (more likely). In any case, a bug exists, so we will assign this bug to Red Hat.

This is indeed no bug and your test program will continue to not work. I added a test to detect *some* cases in which a mutex is still in use for error-checking mutexes. But not for any other.
The remaining cases can be handled only with major performance impacts, which is why it won't be done.

As for the man pages: they are written as part of LinuxThreads and document that implementation. Feel free to contribute edits.

------ Additional Comments From khoa@us.ibm.com 2003-19-06 18:59 -------
Glen - please specify in the RH bug report that the purpose of the bug report is to fix the man page. I think we already accepted that this is not a real functional bug. Thanks.

------ Additional Comments From sjmunroe@us.ibm.com 2003-23-06 14:00 -------
This is fixed in nptl-0.48. Still need to find out if this will be in RHEL 3 (RHEL 3 Alpha 3 was at nptl-0.38).

------ Additional Comments From xingxue@ca.ibm.com 2003-23-06 14:18 -------
Looks like the NPTL was fixed. Just to make sure, is it the man page or the NPTL that got fixed? Since Ulrich said it was not a bug...
https://bugzilla.redhat.com/show_bug.cgi?id=97636
JeffreyWay commented on Parsing Markdown

Taylor Otwell follows a convention for comments. Three lines total. Each new line contains three fewer characters than the one before it. Makes for a pretty comment block. But in that 4:42 case, the convention wasn't followed precisely. No big deal. I was only being silly.

JeffreyWay commented on Reflect Into Functions

Heads up, if you want to play around with PHP generators, you could alternatively do this:

public function test_load_users()
{
    $this->load(function (User $user, User $user2) {
        dump($user);
    });
}

protected function load($callback)
{
    $users = LazyCollection::make(function () use ($callback) {
        $usersRequired = (new ReflectionFunction($callback))
            ->getNumberOfParameters();

        for ($i = 1; $i <= $usersRequired; $i++) {
            yield new User;
        }
    });

    $callback(...$users);
}

JeffreyWay commented on At A Glance

That's the plan. 👍

JeffreyWay commented on At A Glance

It's an example you don't have to think about.

JeffreyWay commented on Lazy Collections

@foremostdigital Because ->first() is called on the LazyCollection instance. It's not part of the query builder.

JeffreyWay left a reply on Laracast Comments Bugs

JeffreyWay left a reply on Laracast Comments Bugs

however the orginal bug with editing threads still parsists.

That was fixed too.

JeffreyWay left a reply on Laracast Comments Bugs

JeffreyWay left a reply on The Best Way To Build A Table Of Files For A Variety Of Models

If I understand you correctly, the answer is: don't. Each Eloquent model should/will have its own table.

JeffreyWay commented on Frontend Scaffolding Has Been Moved To Laravel UI

It was redundant since Axios already attaches the appropriate header. More info here:

JeffreyWay left a reply on Npm Run Watch Issues

That's fine. ^4.0.7 will include 4.1.4.

JeffreyWay left a reply on Npm Run Watch Issues

This was fixed earlier today. Can you ensure that you have the latest version of Mix installed? It should be 4.1.4.
JeffreyWay commented on Lazy Collections

Absolutely.

JeffreyWay commented on Lazy Collections

Just hit the enter key to make a new line.

JeffreyWay commented on Lazy Collections

Yeah, thought that was a little strange too. For this video, I was still on the dev build of Laravel 6. The next episode covers Ignition, which will use the latest build.

JeffreyWay left a reply on Non-profit Pricing

Please use the contact support form instead for questions like this.

JeffreyWay commented on Explain Real-Time Facades From The Inside-Out

Ah yes, of course you're right about the $accessor variable.

JeffreyWay commented on Determine The Average Rentals Per Day

Likely a mix of indexing and caching.

JeffreyWay commented on Authorization Essentials

Have a look at this video. I answer your question directly:

JeffreyWay left a reply on Laravel 5.7 From Scratch Episode 27 - Not Working

Okay, I figured out why this was sporadically happening. All fixed now.

JeffreyWay commented on Fetch The Most Popular Authors

Here's the query we wrote in this episode:

select users.id, users.name, count(*) as readers
from users
left join post_reads on post_reads.post_id in (
    select id from posts where user_id = users.id
)
group by users.id
order by readers desc
limit 10

JeffreyWay commented on Authorization Essentials

\Gate means i should find a class called Gate at the root directory of my app, but there is none

No, it means to look for Gate in the global namespace.

Also, we are not importing the Gate class at the top of the file; i.e. there is no use \Path\to\Gate so why it still works

That's because of the backslash at the beginning: \Gate. This means to begin at the global namespace root. Without the backslash, PHP would look for the class within the namespace of the current file.

JeffreyWay commented on Test-Driving Threads

If you're working on a small website, then, sure, do that. Otherwise, without tests you're going to be refreshing the browser 800 times for every little change.
JeffreyWay commented on Reduce A Query From 12 Seconds To 1 Millisecond

You definitely don't want to do that. We'll talk more about this in future episodes, but an index can slow down write times (inserts and updates) considerably.

JeffreyWay commented on Core Concepts: Service Providers

Yep, done. 👍

JeffreyWay commented on One-to-Many

Thanks, everyone!

JeffreyWay left a reply on Noob Here

I agree. Ditch it if you can and write your forms manually.

JeffreyWay left a reply on Updating DOM After AJAX Inserts Updated Element

If I understand you correctly, it's because the event handlers are no longer attached after you update the DOM. Event delegation is the solution.

JeffreyWay left a reply on Updating DOM After AJAX Inserts Updated Element

I'm not sure what you mean. If it correctly updates the #myDiv element, then the DOM has been updated.

JeffreyWay left a reply on How Edit A Reply Fills Markdown In Laracasts Forum Model

@erikverbeek is correct. The Markdown conversion is done on the server-side. I use the CommonMark Composer package.

JeffreyWay left a reply on Node_modules Folder

Sure. You can always generate it again when/if you need to.

JeffreyWay left a reply on OOP Bootcamp - Lesson 5

The constructor method on your Staff class is protected. That's why you're seeing that error. Change it to:

public function __construct($members = [])
{
    $this->members = $members;
}

JeffreyWay commented on Filtering Aggregated Data

Here's the final SQL query from this episode:

select title, sum(amount) sales, count(*) rentals
from rental
join payment on payment.rental_id = rental.rental_id
join inventory on inventory.inventory_id = rental.inventory_id
join film on film.film_id = inventory.film_id
group by film.film_id
having sales > 200
order by sales desc

JeffreyWay commented on Multiple Joins In One Query

Yes, views will be covered later in the series.

JeffreyWay commented on Core Concepts: Service Container And Auto-Resolution

It's a cornerstone of building apps in Laravel.
You 100% need to know it.

JeffreyWay commented on The Strategy And Factory Patterns

At some point, you must write logic to decide which strategy to use. The key is to bubble up that logic as high as you can go. You can even move it to a dedicated factory class. Then, if you do have a new strategy, you simply update the factory class and you're good to go.

JeffreyWay commented on The Example MySQL Database

I think TablePlus is excellent, too. For the Sequel Pro issues, try pulling in the nightly builds.

JeffreyWay commented on You May Only View Your Projects

As you noted, Laravel recently introduced a change to make the primary key for all new table migrations a type of BIGINT. This means every foreign key you create should also have a type of BIGINT.

$table->unsignedBigInteger('post_id');

JeffreyWay commented on The Skeleton

You can use whatever you want. Yarn offered significant performance improvements at the time, which is why so many people switched over. That's not so much the case anymore. NPM is great.

JeffreyWay commented on Foreign Key Constraints

Nah - they're the perfect examples that anyone can instantly understand.

JeffreyWay left a reply on [Laracasts.com] Notifications Aren't Working

Should be fixed now. Thanks.

JeffreyWay commented on Workshop - FAQs

Subscribers don't see ads. Only guests.

JeffreyWay left a reply on Laracasts "Browse"

No design change. I only changed the nav link from Browse to Search. If you want the newest episodes first, either click "Episode" in the sidebar, or choose the "What's New" link under your avatar in the top right.

JeffreyWay left a reply on Upgrading From Yearly To Lifetime

@omario169 Send a quick support request and I'll set you up.

JeffreyWay left a reply on Laracast: Compact Series List Sorted By Last Updated

This page?

JeffreyWay left a reply on Print Tutorial

I don't do anything special for that.
At the bottom of the page, I have:

<script>
    window.print();
</script>

JeffreyWay commented on An Alternative To Magic Numbers

Yes. You're assigning a name to an important number in your system.

JeffreyWay left a reply on Laravel Mix Extremly Slow

@adrianwix Yeah, something odd is going on there. When using npm run watch on the Laracasts codebase, everything recompiles within a second or two. Can you, one by one, comment out each call in your webpack.mix.js file to pinpoint where the hangup is?

JeffreyWay commented on Tabs

Yep, I have that on my list for this series. :)

JeffreyWay commented on Tabs

It would return the first child that has the active prop set to true.
https://laracasts.com/@JeffreyWay
In my app I use entities as representations of database tables. I have an OrderEntity that has fields like ProductEntity and CustomerEntity; then CustomerEntity has fields like AddressEntity, etc.

Now I try to get an OrderEntity filled with all the entity-type properties, and so on. It looks like I have to load data from 8 tables. I just have no idea how to do it properly.

I have an OrderRepository with a Get method, where I want to return an OrderEntity. So should I create SQL with 7 joins, one class with all the columns from the SQL, and then, after executing the SQL, create the OrderEntity manually in this repository's Get method?

Using a repository is easy when I have to get/update 1 table, but when the model is built from more than 1-2 tables, it's becoming really tough for me.

Option 1: The approach I have used is to load each relationship individually (for a small N tables). If you have 8 tables, then 8 queries will provide all of the data you require. Here is a contrived example of 3 tables.

public class Person
{
    public int PersonID { get; set; }
    public string PersonName { get; set; }
    public Address[] Addresses { get; set; }
}

public class Address
{
    public int AddressID { get; set; }
    public int PersonID { get; set; }
    public string AddressLine1 { get; set; }
    public string City { get; set; }
    public string StateCode { get; set; }
    public string PostalCode { get; set; }
    public Note[] Notes { get; set; }
}

public class Note
{
    public int AddressID { get; set; }
    public int NoteID { get; set; }
    public string NoteText { get; set; }
}

You would query each of the tables.
var people = conn.Query<Person>("select * from Person where ...");

var personIds = people.Select(x => x.PersonID);
var addresses = conn.Query<Address>("select * from Address where PersonID in @PersonIds", new { personIds });

var addressIds = addresses.Select(x => x.AddressID);
var notes = conn.Query<Note>("select * from Note where AddressID in @AddressIds", new { addressIds });

Then, once you have all of the data, wire it up to fix the relationships between the records you have loaded.

// Group addresses by PersonID
var addressesLookup = addresses.ToLookup(x => x.PersonID);

// Group notes by AddressID
var notesLookup = notes.ToLookup(x => x.AddressID);

// Use the lookups above to populate addresses and notes
people.Each(x => x.Addresses = addressesLookup[x.PersonID].ToArray());
addresses.Each(x => x.Notes = notesLookup[x.AddressID].ToArray());

There are other ways, but a view may not satisfy all conditions, especially given complex relationships, where it can lead to an explosion of records.

Option 2: From the following link, you can use QueryMultiple. Code as follows, where your child queries will have to select all of the records.

var results = conn.QueryMultiple(@"
    SELECT Id, CompanyId, FirstName, LastName
    FROM dbo.Users
    WHERE LastName = 'Smith';

    SELECT Id, CompanyName
    FROM dbo.Companies
    WHERE CompanyId IN (
        SELECT CompanyId
        FROM dbo.Users
        WHERE LastName = 'Smith'
    );
");

var users = results.Read<User>();
var companies = results.Read<Company>();

Then you would fix the relationships as in Option 1.

OK (as requested above) - an example using a Tuple and Dapper. I've really quickly written this out, so if there are any mistakes let me know and I'll rectify. I'm 100% sure it can be optimised too!
Using the structure above as an example:

public class Person
{
    public int PersonID { get; set; }
    public string PersonName { get; set; }
    public IEnumerable<Address> Addresses { get; set; }
}

public class Address
{
    public int AddressID { get; set; }
    public int PersonID { get; set; }
    public string AddressLine1 { get; set; }
    public string City { get; set; }
    public string StateCode { get; set; }
    public string PostalCode { get; set; }
    public IEnumerable<Note> Notes { get; set; }
}

public class Note
{
    public int AddressID { get; set; }
    public int NoteID { get; set; }
    public string NoteText { get; set; }
}

string cmdTxt = @"SELECT p.*, a.*, n.*
    FROM Person p
    LEFT OUTER JOIN Address a ON p.PersonID = a.PersonID
    LEFT OUTER JOIN Note n ON a.AddressID = n.AddressID
    WHERE p.PersonID = @personID";

var results = await conn.QueryAsync<Person, Address, Note, Tuple<Person, Address, Note>>(
    cmdTxt,
    map: (p, a, n) => Tuple.Create((Person)p, (Address)a, (Note)n),
    param: new { personID = 1 });

if (results.Any())
{
    var person = results.First().Item1; // the person
    var addresses = results.Where(n => n.Item2 != null).Select(n => n.Item2); // the person's addresses
    var notes = results.Where(n => n.Item3 != null).Select(n => n.Item3); // all notes for all addresses

    if (addresses.Any())
    {
        person.Addresses = addresses.ToList(); // add the addresses to the person

        foreach (var address in person.Addresses)
        {
            var address_notes = notes.Where(n => n.AddressID == address.AddressID).ToList(); // get any notes
            if (address_notes.Any())
            {
                address.Notes = address_notes; // add the notes to the address
            }
        }
    }
}
https://dapper-tutorial.net/knowledge-base/58509339/how-to-load-entity-from-multiple-tables-with-dapper-
#abstract #and #as #AST #break #callable #cast #char #class #constructor #continue #def #destructor #do #elif #else #ensure #enum #event #except #failure #final #from #for #false #get #given #goto #if #import #in #interface #internal #is #isa #not #null #of #or #otherwise #override #namespace #partial #pass #public #protected #private #raise #ref #retry #return #self #set #super #static #struct #success #transient #true #try #typeof #unless #virtual #when #while #yield

"abstract" is used to designate a class as a base class. A derivative of the abstract class must implement all of its abstract methods and properties.

"and" is a logical operator that is applied to test whether two boolean expressions are both true.

The "as" keyword declares a variable's type.

"AST" is used to create AST objects for use with the Boo compiler.

"break" is a keyword used to exit a loop early. Typically break is used inside a loop and may be coupled with the "if" or "unless" keywords.

"callable" declares a type that can be invoked like a function or method (comparable to a delegate in C#).

"cast" is a keyword used to explicitly transform a variable from one data type to another.

"char" is a data type representing a single character. The char data type is distinct from a string containing a single character. char('t') refers to a System.Char type, whereas "t" or 't' is a System.String.

"class" is a definition of an object, including its properties and methods.

"constructor" is a method belonging to a class that is used to define how an instance of the class should be created. The constructor may include input parameters and may be overloaded. See the examples for the keyword "class".

"continue" is a keyword used to resume program execution at the end of the current loop. The continue keyword is used when looping. It will cause the position of the code to return to the start of the loop (as long as the condition still holds).

"def" is used to define a new function or method.

"destructor" is used to destroy objects.
Destructors are necessary to release memory used by non-managed resources in the .NET CLI. Destructors should never be called explicitly. They can be invoked by implementing the IDisposable interface.

"do" is synonymous with 'def' for closures. However, "do" reads as an imperative and therefore should be used in an active sense.

"elif" is the same as the "if" conditional statement in form, except that it needs to be preceded by an if statement or another elif statement, and it is only evaluated (checked) if the if/elif statement preceding it evaluates to false. If one of the preceding if/elif statements evaluates to true, the rest of the elifs will not be evaluated, thus sparing extra CPU power from a pointless task.

"else" defines a statement that will be executed should a preceding "if" condition fail.

"ensure" is used with the "try" and "except" keywords to guarantee a certain block of code runs whether the try/except block is successful or not. "ensure" is often used to add some post executions to an exception event.

"enum" is used to create a list of static values. Internally the names are assigned to an Int32 value.

"event" declares an event member of a class; callables may be subscribed to the event and are invoked when it is raised.

"except" is a keyword used to identify a block of code that is to be executed if the "try" block fails. See examples under the "ensure" keyword.

"failure" is not yet implemented in boo.

"final" is a keyword used to identify a class that cannot have subclasses. final may also be used to declare a field as a constant.

"from" is used with the "import" keyword to identify the assembly being imported from. Form usage is "import TARGET (from ASSEMBLY)". The "from" keyword is optional.

"for" is used to loop through items in a series. "for" loops are frequently used with a range or a list/array.

"false" represents a negative boolean outcome.

"get" is used to identify a field that is exposed for external access. Use "get" to make a field available as read-only. Use "set" to add write access.
"get" is suffixed by a colon when implemented and includes a return statement. It is possible to modify the value of the field being returned. See example 1. "get" is also used when defining an interface to define which fields should be implemented as accessible. When "get" is used to define an interface the colon and return statements are excluded. See example 2. "given" is used as the entry to a "given ... when" loop. "given" identifies a state. A series of "when" statements may be executed based on the identified state. _ The "given" keyword is currently not implemented. _ "goto" exits a line of code and moves to a named line in the code. The named line must be prefixed wtih a colon. Good programming practice eschews the use of "goto" The example below names two lines ":start" and "test". They are referenced in the code by separate goto statements. This example produces an endless loop. The "ensure" statement includes a Console.Readline() that prevents the loop from continuing without user input. "if" is a conditional statement, followed by a statement that either evaluates to true or false. In block form, the code within the block is executed only if the expression following the if evaluates to true. The if statement can be used to selectively execute a line of code by placing "if <expression>" at the very end of the statement. This form of the if conditional is useful in circumstances when you are only going to perform one operation based entirely on an expression: this makes the code cleaner to read than an unnecessary if block. "import" is used to include a namespace from other assemblies within your program. If the assembly is not automatically included, the "from" keyword must be included to identify the respective assembly. "in" is used in conjunction with "for" to iterate through items in a list. "in" may also be used to test items in a set. See examples for the keyword "for". 
"inteface" is used to define the fields and methods that may be implemented by a class. The implementation is never performed by the interface. Interfaces allow you to establish an API that is the basis for other classes. "internal" is a keyword that precedes a class definition to limit the class to the assembly in which it is found. "is" is an equvalence operator keyword that is used to test a value. "is" may not be used with ints, doubles, or boolean types. "is" is commonly used to test for null. "isa" determines if one element is an instance of a specific type. "not" is used with "is" to perform a negative comparison. "not" can also be used in logical expressions. "null" is a keyword used to specify a value is absent. "of" is used to specify type arguments to a generic type or method. "or" is a logical operator that is applied to test if either of two boolean expressions are true. "otherwise" is part of the conditional phrase "given ... when ... otherwise". The otherwise block is executed for a given state if none of the when conditions match. _ The otherwise keyword is not yet implemented _ See examples for "given". "override" is used in a derived class to declare that a method is to be used instead of the inherited method. "override" may only be used on methods that are defined as "virtual" or "abstract" in the parent class. "namespace" is a name that uniquely identifies a set of objects so there is no ambiguity when objects from different sources are used together. To declare a namespace place the namespace followed by the name you choose at the top of the file. "partial" is (insert text here) "pass" is a keyword used when you do not want to do anything in a block of code. "public" is used to define a class, method, or field as available to all. "public class" is never required because a defined class defaults to public. "protected" is a keyword used to declare a class, method, or field as visible only within its containing class. 
Fields are protected by default. Prefixing a field name with an underscore is recommended practice.

"private" is a keyword used to declare a class, method, or field as visible only within its containing class.

"raise" throws an exception, interrupting normal execution until the exception is handled by an "except" block; it is Boo's equivalent of C#'s "throw".

"ref" makes a parameter be passed by reference instead of by value. This allows you to change a variable's value outside of the context where it is being used.

"retry" is not yet implemented.

"return" is a keyword used to state the value to be returned from a function definition.

"self" is used to reference the current class instance. "self" is not required in Boo but may be used to add clarity to the code. "self" is synonymous with the C# keyword "this".

"set" is a keyword used to define a field as writeable.

"static" declares a member that belongs to the type itself rather than to instances of the type, so it can be accessed without creating an instance.

"struct" is short for structure. A structure is similar to a class except it defines value types rather than reference types. Refer to the Boo Primer for more information on structures.

"success" is not yet implemented.

"super" is used to reference a base class from a child class when one wants to execute the base behavior.

"transient" marks a member as not to be serialized. By default, all members in Boo are serializable.

"true" is a keyword used to represent a positive boolean outcome.

"try" is used with the "ensure" and "except" keywords to test whether a block of code executes without error.

"typeof" returns a Type instance. It is unnecessary in Boo, since you can pass a type directly.

"unless" is similar to the "if" statement, except that it executes the block of code unless the expression is true.

"virtual" is a keyword that may precede the "def" keyword when the developer wishes to provide the ability to override a defined method in a child class. The "virtual" keyword is used in the parent class.

"when" is used with the "given" keyword to identify the condition in which the "given" value may be executed. _ "when" is currently not implemented. _
see examples for the "given" keyword. "while" will execute a block of code as long as the expression it evaluates is true. It is useful in cases where a variable must constantly be evalulated (in another thread, perhaps) , such as checking to make sure a socket still has a connection before emptying a buffer (filled by another thread, perhaps). "yield" is similar to "return" only it can be called multiple times within a single method.
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=18437
I think we didn't get far from the mark. I would also like to take this chance to thank Southworks, our long-time partner on this kind of activity, for their great work on the ACS Extensions for Umbraco. Once again, I'll apply the technique I used yesterday for the ACS+WP7+OAuth2+OData lab post; I will paste here the documentation as is. I am going to break this into 3 parts, following the structure we used in the documentation as well.

Access Control Service (ACS) Extensions for Umbraco

'Click here for a video walkthrough of this tutorial'

Setting up the ACS Extensions in Umbraco is very simple. You can use the Add Library Package Reference dialog in Visual Studio to install the ACS Extensions NuGet package into your existing Umbraco 4.7.0 instance. Once you have done that, you just need to go to the Umbraco installation pages, where you will find a new setup step: there you will fill in a few fields describing the ACS namespace you want to use, and presto! You'll be ready to take advantage of your new authentication capabilities.

Alternatively, if you don't want the NuGet package to update Umbraco's source code for you, you can perform the required changes manually by following the steps included in the manual installation document found in the ACS Extensions package. Once you have finished all the install steps, you can go to the Umbraco install pages and configure the extension as described above. You should consider the manual installation procedure only in the case in which you really need fine control over the details of how the integration takes place, as the procedure is significantly less straightforward than the NuGet route.

In this section we will walk you through the setup process. For your convenience we are adding one initial section on installing Umbraco itself.
If you already have an instance, or if you want to follow a different installation route than the one we describe here, feel free to skip the first section below and go straight to the Umbraco.ACSExtensions NuGet install section.

Install Umbraco using the Web Platform Installer and Configure It

Launch the Microsoft Web Platform Installer from the Windows Web App Gallery.

Figure 1 - Windows Web App Gallery | Umbraco CMS

Click on the Install button. You will get to a screen like the one below:

Figure 2 - Installing Umbraco via WebPI

Choose Options. From there you'll have to select IIS as the web server (the ACS Extensions won't work on IIS7.5).

Figure 3 - Web Platform Installer | Umbraco CMS setup options

Click on OK, and back on the Umbraco CMS dialog click on Install. Select SQL Server as the database type. Please note that later in the setup you will need to provide the credentials for a SQL administrator user, hence your SQL Server needs to be configured to support mixed authentication.

Figure 4 - Choose database type

Accept the license terms to start downloading and installing Umbraco. Configure the web server settings with the following values and click on Continue.

Figure 5 - Site Information

Complete the database settings as shown below.

Figure 6 - Database settings

When the installation finishes, click on the Finish button and close the Web Platform Installer. Open Internet Information Services Manager and select the web site created in step 7. In order to properly support the authentication operations that the ACS Extensions will enable, your web site needs to be capable of protecting communications. On the Actions pane on the right, click Bindings… and add one https binding as shown below.

Figure 7 - Add Site Binding

Open the hosts file located in C:\Windows\System32\drivers\etc, and add a new entry pointing to the Umbraco instance you've created, so that you will be able to use the web site name on the local machine.
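The hosts entry simply maps the host name you chose for the IIS binding to the loopback address. For example (the host name below is a placeholder; use whatever name you bound in IIS):

```
127.0.0.1    myumbracosite.local
```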
Figure 8 - Hosts file entry

At this point you have all the bits you need to run your Umbraco instance. All that's left to do is some initial configuration: Umbraco provides you with one setup portal which enables you to do just that directly from a browser. Browse to http://{yourUmbracoSite}/install/; you will get to a screen like the one below.

Figure 9 - The Umbraco installation wizard

Please refer to the Umbraco documentation for a detailed explanation of all the options: here we will do the bare minimum to get the instance running. Click on the "Let's get started!" button to start the wizard. Accept the Umbraco license. Hit Install in the Database configuration step and click on Continue once done. Set a name and password for the administrator user in the Create User step. Pick a Starter Kit and a Skin (in this tutorial we use Simple and Sweet@s). Click on Preview your new website: your Umbraco instance is ready.

Figure 10 - Your new Umbraco instance is ready!

Install the Umbraco.ACSExtensions via NuGet Package

Installing the ACS Extensions via the NuGet package is very easy. Open the Umbraco website from Visual Studio 2010 (File -> Open -> Web Site…). Open the web.config file and set the umbracoUseSSL setting to true.

Figure 11 - umbracoUseSSL setting

Click on Save All to save the solution file. Right-click on the website project and select "Add Library Package Reference…" as shown below. If you don't see the entry in the menu, please make sure that NuGet 1.2 is correctly installed on your system.

Figure 12 - Add Library Package Reference

Select the Umbraco.ACSExtensions package from the appropriate feed and click Install. At the time at which you read this tutorial, the ACS Extensions NuGet will be available on the NuGet official package source: please select Umbraco.ACSExtensions from there. At the time of writing, the ACS Extensions are not published on the official feed yet, hence in the figure here we are selecting it from a local repository.
(If you want to host your own feed, see Create and use a NuGet local repository.)

Figure 13 - Installing the Umbraco.ACSExtensions NuGet package

If the installation takes place correctly, a green checkmark will appear in place of the Install button in the Add Library Package Reference dialog. You can close Visual Studio; from now on you'll do everything directly from the Umbraco management UI.

Configure the ACS Extensions

Now that the extension is installed, the new identity and access features are available directly in the Umbraco management console. You haven't configured the extensions yet: the administrative UI will sense that and direct you accordingly. Navigate to the management console of your Umbraco instance, at http://{yourUmbracoSite}/umbraco/. If you used an untrusted certificate when setting up the SSL binding of the web site, the browser will display a warning: dismiss it and continue to the web site. The management console will prompt you for a username and a password; use the credentials you defined in the Umbraco setup steps. Navigate to the Members section as shown below.

Figure 14 - The admin console home page

The ACS Extensions added some new panels here. In the Access Control Service Extensions for Umbraco panel you'll notice a warning indicating that the ACS Extensions for Umbraco are not configured yet. Click on the ACS Extensions setup page link in the warning box to navigate to the setup pages.

Figure 15 - The initial ACS Extensions configuration warning.

Figure 16 - The ACS Extensions setup step.

The ACS Extensions setup page extends the existing setup sequence, and lives at the address https://{yourUmbracoSite}/install/?installStep=ACSExtensions. It can be used both for the initial setup, as shown here, and for managing subsequent changes (for example when you deploy the Umbraco site from your development environment to its production hosting, in which case the URL of the web site changes). Click Yes to begin the setup.
Access Control Service Settings

Enter your ACS namespace and the URL at which your Umbraco instance is deployed. Those two fields are mandatory, as the ACS Extensions cannot set up ACS and your instance without them. The management key field is optional, but if you don't enter it, most of the extensions' features will not be available.

Figure 17 - Access Control Service Settings

The management key can be obtained through the ACS Management Portal. The setup UI provides you a link to the right page in the ACS portal, but you'll need to substitute the string {namespace} with the actual namespace you want to use.

Social Identity Providers

Decide which social identity providers you want to accept users from. This feature requires you to have entered your ACS namespace management key: if you didn't, the ACS Extensions will use whatever identity providers are already set up in the ACS namespace. Note that in order to integrate with Facebook you'll need to have a Facebook application properly configured to work with your ACS namespace. The ACS Extensions gather from you the Application Id and Application Secret that are necessary for configuring ACS to use the corresponding Facebook application.

Figure 18 - Social Identity Providers

SMTP Settings

Users from social identity providers are invited to gain access to your web site via email. In order to use the social provider integration feature you need to configure an SMTP server.

Figure 19 - SMTP Settings

Click on Install to configure the ACS Extensions.

Figure 20 - ACS Extensions Configured

If everything goes as expected, you will see a confirmation message like the one above. If you navigate back to the admin console and to the Members section, you will notice that the warning is gone. You are now ready to take advantage of the ACS Extensions.

Figure 21 - The Member section after the successful configuration of the ACS Extensions
https://docs.microsoft.com/en-us/archive/blogs/vbertocci/acs-extensions-for-umbraco-part-i-setup
#include <sys/usb/usba.h>

void usb_pipe_reset(dev_info_t *dip, usb_pipe_handle_t pipe_handle,
    usb_flags_t usb_flags,
    void (*callback)(usb_pipe_handle_t cb_pipe_handle,
        usb_opaque_t arg, int rval, usb_cb_flags_t flags),
    usb_opaque_t callback_arg);

Solaris DDI specific (Solaris DDI)

dip
    Pointer to the device's dev_info structure.

pipe_handle
    Handle of the pipe to reset. Cannot be the handle to the default control pipe.

usb_flags
    USB_FLAGS_SLEEP is the only flag recognized. Wait for completion.

callback
    Function called on completion if the USB_FLAGS_SLEEP flag is not specified. If NULL, no notification of completion is provided.

callback_arg
    Second argument to callback function.

Call usb_pipe_reset() to reset a pipe which is in an error state, or to abort a current request and clear the pipe. The usb_pipe_reset() function can be called on any pipe other than the default control pipe.

A pipe can be reset automatically when requests sent to the pipe have the USB_ATTRS_AUTOCLEARING attribute specified. Client drivers see an exception callback with the USB_CB_STALL_CLEARED callback flag set in such cases. Stalls on pipes executing requests without the USB_ATTRS_AUTOCLEARING attribute set must be cleared by the client driver. The client driver is notified of the stall via an exception callback. The client driver must then call usb_pipe_reset() to clear the stall.

The usb_pipe_reset() function resets a pipe as follows:

    Requests to reset the default control pipe are not allowed.

    No action is taken on a pipe which is closing.

    If USB_FLAGS_SLEEP is specified in flags, this function waits for the action to complete before calling the callback handler and returning. If not specified, this function queues the request and returns immediately, and the specified callback is called upon completion.

callback is the callback handler. It takes the following arguments:

cb_pipe_handle
    Handle of the pipe to reset.

arg
    callback_arg specified to usb_pipe_reset().

rval
    Return value of the reset call.

flags
    Status of the queueing operation.
Can be:

    USB_CB_NO_INFO — Callback was uneventful.

    USB_CB_ASYNC_REQ_FAILED — Error starting asynchronous request.

Status is returned to the caller via the callback handler's rval argument. Possible callback handler rval argument values are:

    Pipe successfully reset.

    pipe_handle specifies a pipe which is closed or closing.

    dip or pipe_handle arguments are NULL.

    USB_FLAGS_SLEEP is clear and callback is NULL.

    Called from interrupt context with the USB_FLAGS_SLEEP flag set.

    pipe_handle specifies the default control pipe.

    Asynchronous resources are unavailable. In this case, USB_CB_ASYNC_REQ_FAILED is passed in as the callback_flags arg to the callback handler.

Exception callback handlers of interrupt-IN and isochronous-IN requests which are terminated by these commands are called with a completion reason of USB_CR_STOPPED_POLLING. Exception handlers of incomplete bulk requests are called with a completion reason of USB_CR_FLUSHED. Exception handlers of unstarted requests are called with USB_CR_PIPE_RESET. Note that messages mirroring the above errors are logged to the console logfile on error. This provides status for calls which could not otherwise provide status.

May be called from user or kernel context regardless of arguments. May be called from any callback with the USB_FLAGS_SLEEP flag clear. May not be called from a callback executing in interrupt context if the USB_FLAGS_SLEEP flag is set.

If the USB_CB_ASYNC_REQ_FAILED bit is clear in usb_cb_flags_t, the callback, if supplied, can block because it is executing in kernel context. Otherwise the callback cannot block. Please see usb_callback_flags(9S) for more information on callbacks.

    void post_reset_handler(
        usb_pipe_handle_t, usb_opaque_t, int, usb_cb_flags_t);

    /*
     * Do an asynchronous reset on bulk_pipe.
     * Execute post_reset_handler when done.
     */
    usb_pipe_reset(dip, bulk_pipe, 0, post_reset_handler, arg);

    /* Do a synchronous reset on bulk_pipe. */
    usb_pipe_reset(dip, bulk_pipe, USB_FLAGS_SLEEP, NULL, NULL);

See attributes(5) for descriptions of the following attributes:

attributes(5), usb_get_cfg(9F), usb_pipe_bulk_xfer(9F), usb_pipe_close(9F), usb_get_status(9F), usb_pipe_ctrl_xfer(9F), usb_pipe_drain_reqs(9F), usb_pipe_get_state(9F), usb_pipe_intr_xfer(9F), usb_pipe_isoc_xfer(9F), usb_pipe_open(9F), usb_pipe_stop_intr_polling(9F), usb_pipe_stop_isoc_polling(9F), usb_callback_flags(9S)
http://docs.oracle.com/cd/E36784_01/html/E36886/usb-pipe-reset-9f.html
Adding a Back button in Python Webkit-GTK

I have a little browser script in Python, called quickbrowse, based on Python-Webkit-GTK. I use it for things like quickly calling up an anonymous window with full javascript and cookies, for when I hit a page that doesn't work with Firefox and privacy blocking; and as a quick solution for calling up HTML conversions of doc and pdf email attachments.

Python-webkit comes with a simple browser as an example -- on Debian it's installed in /usr/share/doc/python-webkit/examples/browser.py. But it's very minimal, and lacks important basic features like command-line arguments. One of those basic features I've been meaning to add is Back and Forward buttons.

Should be easy, right? Of course webkit has a go_back() method, so I just have to add a button and call that, right? Ha. It turned out to be a lot more difficult than I expected, and although I found a fair number of pages asking about it, I didn't find many working examples. So here's how to do it.

Add a toolbar button

In the WebToolbar class (derived from gtk.Toolbar): In __init__(), after initializing the parent class and before creating the location text entry (assuming you want your buttons left of the location bar), create the two buttons:

    backButton = gtk.ToolButton(gtk.STOCK_GO_BACK)
    backButton.connect("clicked", self.back_cb)
    self.insert(backButton, -1)
    backButton.show()

    forwardButton = gtk.ToolButton(gtk.STOCK_GO_FORWARD)
    forwardButton.connect("clicked", self.forward_cb)
    self.insert(forwardButton, -1)
    forwardButton.show()

Now create those callbacks you just referenced:

    def back_cb(self, w):
        self.emit("go-back-requested")

    def forward_cb(self, w):
        self.emit("go-forward-requested")

That's right, you can't just call go_back on the web view, because GtkToolbar doesn't know anything about the window containing it. All it can do is pass signals up the chain. But wait -- it can't even pass signals unless you define them.
There's a __gsignals__ object defined at the beginning of the class that needs all its signals spelled out. In this case, what you need is:

    "go-back-requested": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, ()),
    "go-forward-requested": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, ()),

Now these signals will bubble up to the window containing the toolbar.

Handle the signals in the containing window

So now you have to handle those signals in the window. In WebBrowserWindow (derived from gtk.Window), in __init__ after creating the toolbar:

    toolbar.connect("go-back-requested", self.go_back_requested_cb,
                    self.content_tabs)
    toolbar.connect("go-forward-requested", self.go_forward_requested_cb,
                    self.content_tabs)

And then of course you have to define those callbacks:

    def go_back_requested_cb (self, widget, content_pane):
        # Oops! What goes here?

    def go_forward_requested_cb (self, widget, content_pane):
        # Oops! What goes here?

But whoops! What do we put there? It turns out that WebBrowserWindow has no better idea than WebToolbar did of where its content is or how to tell it to go back or forward. What it does have is a ContentPane (derived from gtk.Notebook), which is basically just a container with no exposed methods that have anything to do with web browsing.

Get the BrowserView for the current tab

Fortunately we can fix that. In ContentPane, you can get the current page (meaning the current browser tab, in this case); and each page has a child, which turns out to be a BrowserView. So you can add this function to ContentPane to help other classes get the current BrowserView:

    def current_view(self):
        return self.get_nth_page(self.get_current_page()).get_child()

And now, using that, we can define those callbacks in WebBrowserWindow:

    def go_back_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_back()

    def go_forward_requested_cb (self, widget, content_pane):
        content_pane.current_view().go_forward()

Whew!
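Stripped of the GTK specifics, the whole signal-bubbling chain can be sketched in plain Python. The classes below are hypothetical minimal stand-ins for the GTK ones, just to show how the pieces hand off to each other:

```python
class Emitter:
    """Tiny stand-in for GObject signal plumbing (illustration only)."""
    def __init__(self):
        self._handlers = {}

    def connect(self, signal, callback, *extra):
        # Like gobject connect(): remember the callback and extra args.
        self._handlers.setdefault(signal, []).append((callback, extra))

    def emit(self, signal):
        for cb, extra in self._handlers.get(signal, []):
            cb(self, *extra)

class BrowserView(Emitter):
    def __init__(self):
        super().__init__()
        self.history = ["page1", "page2"]

    def go_back(self):
        # Pretend "going back" means popping the newest history entry.
        return self.history.pop()

class ContentPane:
    def __init__(self, view):
        self._view = view

    def current_view(self):
        return self._view

class Toolbar(Emitter):
    def back_cb(self, widget):
        # The toolbar knows nothing about the view; it just re-emits.
        self.emit("go-back-requested")

# Window-level wiring, as in the post:
view = BrowserView()
pane = ContentPane(view)
toolbar = Toolbar()
toolbar.connect("go-back-requested",
                lambda w, content_pane: content_pane.current_view().go_back(),
                pane)
toolbar.back_cb(None)  # simulate a click on the Back button
```

After the simulated click, the view's history has been popped: the toolbar never touched the view directly, which is exactly the hand-off the real code achieves through GObject signals.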
That's a lot of steps for something I thought was going to be just adding two buttons and two callbacks. [ 16:45 Aug 06, 2016 More programming | permalink to this entry | comments ]
http://shallowsky.com/blog/tags/gtk/
Security :: How To Create User Button In Create User Wizard (Aug 31, 2010)

May I know how I can change the position of the Create User button in the Create User Wizard, as I want to change it according to my requirements?

Does anyone know how to disable the create user button in the Create User Wizard if the Terms and Conditions checkbox is not checked? I have a CUW with additional fields (the data of which is stored in an additional table that I have added to the ordinary SQL membership database) and I want the user to check the Terms and Conditions checkbox before the user is created. By any chance, do you also know how to prevent the creation of the user if the additional fields have not been filled? I tried with JavaScript, code-behind and many other methods but it still doesn't work: the user is created even if the Terms and Conditions are not checked.

I am using a Create User Wizard for registering a user and have converted it to a template to capture additional information. In the CreatedUser event handler in code-behind I am getting the UserName of the new user and their unique GUID key. This is then added to a separate data table with additional info captured from the user. The textboxes within the Create User Wizard have validation controls. If the user has missed an entry these fire and stop the process of creating the user. But if I then complete the textboxes and click submit, it says the user name already exists! This is not what I want, as the user just needs to correct the errors in the form and should be able to keep their initial user name choice. Do I need to change the event handler, and if so am I still able to capture the UserName and key so that I can add the data to the other non-membership table?

I'm creating a multi-step create user wizard for new members but I run into a problem.
If the creation of an account is in step 3, how do I capture the values from steps 1 and 2? Should I try to pass the values via session state, or is there some other way? Here is some code-behind I tried so far (it did not work). [Code].... [Code]....

I have a question: can I edit the Create User Wizard to save info to my project's SQL database? And where does the normal Create User Wizard save registration info?

I'm trying to add a step into a Create User Wizard to set roles, but it doesn't work; it creates the user but doesn't set the role. [Code].... using System;

I'd like to add a payment step to the create user wizard so that it follows: Sign up; Pay (via PayPal or something similar); Complete (only if payment is successful). Has anyone done this before, and could they point me in the right direction? I've had a scour of the internet and not had too much luck yet. It's for a charity site I'm working on, if that makes a difference.

I'm modifying ScottGu's tutorial for adding profile information to a login. I have put in a provider tag and the profile tag parts in the web.config, and hooked it up to a connection to my SQL Server DB. I have modified the create user wizard step 1 to contain extra controls to capture forename and surname.
I then altered the tutorial code to match this. However, though the _CreatedUser event code fires, nothing is stored in the aspnet_Profiles table in my DB. I took the tutorial code for the display profile info page and modified it to display the user's name, but nothing appears on the page, even though the user's username appears using the LoginName control. All I would like to do is display a message in the master page that says 'welcome, John Doe, you are logged in as doej'.

How do I access the controls that exist in the create user step template of a CreateUserWizard?

After a user creates their account with the create new user wizard, I would like to have a confirmation email sent that requires them to click on a link in the email to confirm their account and verify their email address before the account is activated.

I use the ASP.NET default membership provider. In my register.aspx page I use a CreateUserWizard to create new users. In this wizard I want to customize the first step to show the user some rules with a checkbox; if the user checks it, it means he agrees with the rules and can create an account. I created the step and put some text and a checkbox in it. The problem is I don't know how to get the value of the checkbox, and how to activate the Next button in the first step of my wizard.

I am trying to collect extra information about a user when the user account is created using the Create User Wizard.
This info will be stored in a new table in the standard ASP.NET membership SQL database. I have read several books and loads of online tutorials on the subject, and they all take different approaches and seem to make the process hard work. Is there any reason why I can't add a SQL datasource and a number of textboxes to the wizard step, 'connect' the values from the textboxes to the insert parameters and then put an insert statement in a suitable event handler to cause the insert?

I'm facing major problems: I want to clear fields (i.e. username, password, first name) in the create user wizard. I have tried the following options but nothing is working: 1) emptying the username textbox, 2) setting the username viewstate to false, 3) setting the CreateUserWizard view step to false.

I have a create user wizard control on my page as below, with a mail definition set up to send a welcome email to newly registered users.

<asp:CreateUserWizard>
    <MailDefinition BodyFileName="~/EmailTemplates/CreateUserWizard.txt" From="myemailaddress" Subject="New User">
    </MailDefinition>
    <WizardSteps>
        <asp:CreateUserWizardStep>
            <ContentTemplate>
                layout content here
            </ContentTemplate>
        </asp:CreateUserWizardStep>
    </WizardSteps>
</asp:CreateUserWizard>

The problem is I am not receiving the welcome email. To test it, I placed another create user wizard on another page in its default form, as below:

<asp:CreateUserWizard>
    <MailDefinition BodyFileName="~/EmailTemplates/CreateUserWizard.txt" From="myemailaddress" Subject="New User">
    </MailDefinition>
    <WizardSteps>
        <asp:CreateUserWizardStep />
        <asp:CompleteWizardStep />
    </WizardSteps>
</asp:CreateUserWizard>

This one worked and I got the welcome email. The setup in web.config is correct. Is it because I have the oncreateduser="CreateUserWizard1_CreatedUser", or some other conflict issue?
I'm very new to ASP.NET and have no experience with it at all. I'm currently developing a website, and am confused about creating users. I'm using the create user wizard, and everything is working fine. However, I am worried about the security of user information. I believe the user information is stored in the App_Data folder; am I right in believing that this folder is secure? Exactly how is the user information stored when using the create user wizard? In a database in the App_Data folder? Is it encrypted automatically when a user signs up?

I want to show a 'user already exists' error in the create user wizard, but the wizard is displayed in a popup, so after clicking the create user button it shows the error and the popup gets closed. How can I show the error on the popup, or how can I skip the postback so that clicking the button will not close the popup?

Using the create user wizard: the user enters an email address; send mail to that email address when the submit button is clicked, and redirect to the login page.

I am using the create user wizard and capturing other information within the content template when a new user registers. Is it possible to hook all of these up, including the membership UserName and Password textboxes from the create user wizard, so I get one message box with all errors, including the membership ones?

I'm trying to add a new field named mobile in my create user wizard control. For that, in my web.config I added the profile properties (profile > properties > add). In my create user wizard, using the edit template, I added one textbox named 'txtMobile', and in my .cs I write:

CreateUserWizard2_CreatedUser
{
    ProfileCommon p = (ProfileCommon)ProfileCommon.Create(CreateUserWizard2.UserName, true);
    TextBox txtM = ((TextBox)CreateUserWizard2.CreateUserStep.ContentTemplateContainer.FindControl("txtMobile"));
    p.Mobile = txtM.Text;
    //p.Mobile = ((TextBox)CreateUserWizard2.CreateUserStep.ContentTemplateContainer.FindControl("txtMobile")).Text;
    p.Save();
}

But it throws an 'object reference' error; also, let me know how to store that
field in the DB?

May I know how I can handle client-side validation in the create user wizard on the Create User button?

I've built a new SharePoint site page using the example I found here: the purpose of the page is to add a new user to the aspnet membership database that serves as the authentication provider for my SharePoint site, which uses forms-based authentication. I've slightly customized the asp createuser control. The SharePoint site is forms-based, but the top-level site is accessible anonymously, and I've created a subsite for members (hence the user registration page). The site page is in the top-level site so that people can register. If I'm already logged in and fill out the form, the user is successfully added to the membership store. However, if I access the page anonymously and fill out the form, the user is successfully added to the membership database, but I can no longer navigate the website; I keep getting HTTP 500 "page cannot be displayed" errors until I clear the browser cache and cookies.

In the create user wizard control: forgot password and change password...

I have a Create User Wizard and I want to make the UserName textbox border red when a user doesn't enter text in the textbox. So I made a custom validator that looks like this: [Code].... When I click the Create User button inside of the Create User Wizard, it throws this error: Object reference not set to an instance of an object.

I want to access my textbox with id=UserName from JavaScript, but it's giving me an error (on the highlighted JS line) that UserName does not exist in the current context. Can anyone tell me how to access a control's value that resides in ASP.NET's CreateUserWizard control? Here is my code- [Code]....
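Regarding that last question: controls declared inside the wizard's template get auto-generated client-side ids, so document.getElementById('UserName') finds nothing. One common pattern is to resolve the id server-side with FindControl and ClientID. This is only a sketch; the wizard id and control path below are assumptions, not taken from the poster's code:

```aspx
<script type="text/javascript">
    // The template container mangles ids, so ask the server for the real one
    // (hypothetical wizard id and control path).
    var userBox = document.getElementById(
        '<%= CreateUserWizard1.CreateUserStep.ContentTemplateContainer
                 .FindControl("UserName").ClientID %>');
    if (userBox) alert(userBox.value);
</script>
```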
http://asp.net.bigresource.com/Security-How-to-Create-user-button-in-Create-user-wizard-t7X1cJQ6l.html
It looks like I'll be helping a friend with a contract he has, and I'd be the main developer, which is great. But it's a .NET / Winforms gig, and I'm a Python developer. So I need to get up to speed quickly. Can anyone name any good resources, aside from MSDN / Channel 9 / MVA? Not looking for hand-holding type stuff, a quick pace is fine. Cheers in advance.

Discussion

You'll have to jump quite widely from Python to C#. But chances are you'll stay. LINQ is a wonderful thing and I find it sad that no non-Lisp language has something that comes close.

Thanks. I did dabble in .NET Core 1.1 when it came out, so it's not entirely new to me. But I probably remember less than I think. 😂 Any resources you'd recommend?

MSDN, actually. The official docs are pretty good, especially anything at docs.microsoft.com; the old docs at MSDN are hit and miss.

dot.net or docs.microsoft.com. Those are good resources to start with .NET / C#.

Pluralsight has a lot of C# (and a ten-day free trial)... They have a couple of courses on WinForms, but one of them is in VB.NET.

Thanks. I have access to Pluralsight, so I'll definitely check them out further.
https://dev.to/endlesstrax/from-python-to-c-1c32
Optimization toolboxes under SAGE

Can we use SAGE to solve some optimization problems? Can we import optimization toolboxes such as YALMIP, MPT, CVX, TOMLAB in SAGE? Thanks in advance!

asked 2013-11-15 00:36:23 -0600

Seen: 348 times | Last updated: Nov 15 '13

Related questions:
- solving an iterated optimization problem
- How do I get solve() to use floats?
- import sage packages in python
- Executing python modules from package import module in SageMathCloud
- best_known_objective_bound()
- The Cake Eating Problem in Sage
- How to import the print function from __future__?
- How do I use a sage graph in LaTEX?
- How to import a module at startup?
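For what it's worth, Sage itself ships numerical optimization helpers (its minimize() function is built on scipy routines — worth confirming in the Sage docs for your version), so the MATLAB toolboxes listed are often unnecessary. As a self-contained illustration of the kind of problem being asked about, here is a plain-Python sketch (no Sage or scipy dependency; the function and names are mine, not from the question):

```python
# Minimal sketch: minimize a smooth function numerically with plain
# Python. Sage users would normally reach for minimize() instead; this
# shows the underlying idea with zero dependencies.

def minimize_gd(grad, x0, lr=0.1, steps=500):
    """Fixed-step gradient descent: repeatedly step against the gradient."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 3)^2 + (y + 1)^2 has its minimum at (3, -1)
grad_f = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]

print(minimize_gd(grad_f, [0.0, 0.0]))
```

With the step size above, each coordinate contracts toward the optimum by a factor of 0.8 per iteration, so 500 steps converge well past floating-point noise.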
https://ask.sagemath.org/question/10737/optimization-toolboxes-under-sage/?sort=oldest
What I need is to display the JLabel that asks the user if they want to play again and the two buttons "Yes" and "No". The two ways I see of doing this are to either clear the JFrame and add the JLabel and JButtons, or to make the JFrame invisible, dispose of it, and create a new JFrame with the three objects. I have tried searching on this forum and the Java API, but I am having no luck getting anything accomplished. Thanks for any assistance you can give me.

import javax.swing.*;
import java.awt.event.*;
import java.awt.FlowLayout;
import java.util.Random;

/**
 * Class GUI is a GUI representation of the classic number guessing game.
 *
 * @author Sam Lanzo
 * @version 0.1 (10/22/2009)
 */
public class GUI extends JFrame
{
    int lowNumber = 0;
    int highNumber = Integer.parseInt(JOptionPane.showInputDialog(null,
            "<html>Let's play a number guessing game!<P>"
            + "What should the highest number be?"));
    int numGuesses = 0;
    JLabel guessingGame = new JLabel("Guess a number between 1 and " + highNumber + ".");
    JLabel numberRange = new JLabel("The number is higher than " + lowNumber
            + ", but less than " + (highNumber + 1) + ".");
    JTextField guess = new JTextField("", 20);
    JButton guessButton = new JButton("Guess");
    JLabel correctness = new JLabel("");
    JLabel rightAnswer = new JLabel("");
    JButton yes = new JButton("Yes");
    JButton no = new JButton("No");

    /**
     * constructor for objects of class GUI
     */
    public GUI()
    {
        Random randomGenerator = new Random();
        final int RANDOM = randomGenerator.nextInt(highNumber) + 1;
        // the +1 is because it starts at 0 and goes to highNumber exclusive
        this.setVisible(true);
        this.setBounds(200, 150, 400, 150);
        this.setLayout(new FlowLayout(FlowLayout.CENTER));
        this.add(this.rightAnswer);
        this.add(this.guessingGame);
        this.add(this.numberRange);
        this.add(this.guess);
        this.add(this.guessButton);
        this.add(this.correctness);
        this.guessButton.addActionListener(new ActionListener()
        {
            /**
             * respond to a button press.
             *
             * @param theEvent an ActionEvent
             */
            public void actionPerformed(ActionEvent theEvent)
            {
                int userGuess = Integer.parseInt(guess.getText());
                if (userGuess != RANDOM)
                {
                    if (userGuess < RANDOM)
                    {
                        lowNumber = userGuess;
                        numGuesses++;
                        correctness.setText(userGuess + " was too low, try again.");
                    }
                    else
                    {
                        highNumber = userGuess;
                        numGuesses++;
                        correctness.setText(userGuess + " was too high, try again.");
                    }
                    numberRange.setText("The number is higher than " + lowNumber
                            + ", but less than " + highNumber + ".");
                    guess.setText(null);
                }
                else // userGuess == RANDOM
                {
                    /**
                     * this is where I want it to remove everything and make a new window.
                     */
                    guessingGame.setText("");
                    numberRange.setText("");
                    guess.setText("");
                    correctness.setText("");
                    rightAnswer.setText("<html> You are right!<P>"
                            + "The number was " + RANDOM + ".<P>"
                            + "It took you " + numGuesses + " attempts.<P> "
                            + "Would you like to play again?");
                }
            }
        });
    }

    public static void main(String[] args)
    {
        new GUI();
    }
}
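A third option, beyond clearing the frame or disposing and rebuilding it, is the common Swing pattern of keeping each "screen" in its own JPanel and flipping between them with a CardLayout. This is a hedged sketch I'm adding (not from the thread; class and card names are made up):

```java
import java.awt.CardLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;

// Sketch: two "screens" held in one container; CardLayout.show() swaps
// which panel is visible, so there is no need to dispose the JFrame.
// In the game above, "deck" would be added to the JFrame's content pane.
public class CardFlip {
    final CardLayout cards = new CardLayout();
    final JPanel deck = new JPanel(cards);
    final JPanel gamePanel = new JPanel();
    final JPanel playAgainPanel = new JPanel();

    public CardFlip() {
        gamePanel.add(new JLabel("Guess a number"));
        playAgainPanel.add(new JLabel("Would you like to play again?"));
        deck.add(gamePanel, "game");      // each card gets a name
        deck.add(playAgainPanel, "again");
        cards.show(deck, "game");         // start on the game screen
    }

    void showPlayAgain() {                // call when the user guesses right
        cards.show(deck, "again");
    }

    public static void main(String[] args) {
        CardFlip ui = new CardFlip();
        ui.showPlayAgain();
        System.out.println(ui.playAgainPanel.isVisible());
    }
}
```

The listener's "else" branch would then call showPlayAgain() instead of blanking each label individually.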
http://www.dreamincode.net/forums/topic/134767-gui-window-removal-help/
Re: question
- From: "Bill Cunningham" <nospam@xxxxxxxxx>
- Date: Sun, 13 Jan 2008 17:05:53 GMT

"Keith Thompson" <kst-u@xxxxxxx> wrote in message news:87fxx2s8al.fsf@xxxxxxxxxxxxxxxxxx

"Bill Cunningham" <nospam@xxxxxxxxx> writes:

I wrote this small program to read a 512 block of binary data and write the same to a file. My code compiled well. The only thing is when I ran the compiler's binary instead of a data file of 512 bytes I got one of 2048 bytes.

#include <stdio.h>
main(){

"int main(void) {"

int buf[512];
FILE *fp;
fp=fopen("r.dsk","rb");
if (fp==NULL) {printf("Error"); exit(0);}

You need "#include <stdlib.h>" for exit.

fread(buf,sizeof(int),512,fp);

No error checking.

fclose(fp);

No error checking (yes, fclose can fail).

I don't remember reading that in my literature. Thanks for the tip.

fp=fopen("dat","wb");
if (fp==NULL) {printf("Error");}

Above, you printed an error message and terminated the program. Here you print an error message and continue.

fwrite(buf,sizeof(int),512,fp);

No error checking.

fclose(fp);}

No error checking (yes, fclose can fail). Add "return 0;". *Please* put the closing "}" on a line by itself. It's very difficult to see.

Is it the code or some overhead from the compiler or linker?

And finally, the answer to your question: The program is doing exactly what it's supposed to do. Read the documentation for fread() and fwrite(). They both take two size_t arguments, the size in bytes of each element and the number of elements. You're asking fread() to read 512 elements, each of which is sizeof(int) bytes (in other words, 512 ints, not 512 bytes). If int is 4 bytes on your system, you'll read and write 2048 bytes (assuming there are no errors).

One more thing: it's conventional to print error messages to stderr, and to use the argument to exit() to indicate success or failure. Rather than

if (fp==NULL) {printf("Error"); exit(0);}

I'd write:

if (fp == NULL) {
    fprintf(stderr, "Error\n");
    exit(EXIT_FAILURE);
}

Okay. stderr.

I skipped ahead in the tutorial to write this. But I'm actually learning C! If I could get as good at C as I am at Basic, I'll be like Richard Heathfield or Ben Pfaff. Maybe even dmr. It's great to have a community.

Yes, I also changed the code layout. Whitespace is not in short supply; use as much as you need to make the code clear and readable.

This is a question of style. That I'll have to learn. All my code so far is snippets. I'll have to catch up on that :)

Bill
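Keith's point about fwrite's two size arguments is the crux of the thread, and it is easy to verify directly. The helper below is my own sketch (file name and function name are made up, not from the thread): it writes 512 elements of a given size and reports the resulting file length, reproducing Bill's 2048-byte result when the element size is sizeof(int) and the intended 512 bytes when it is 1.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: write 512 elements of elem_size bytes each to "path" and
 * return the resulting file length. Shows that fwrite's middle
 * argument is a per-element size, not a total byte count. */
long write_512_elems(const char *path, size_t elem_size)
{
    static const unsigned char zeros[512 * sizeof(int)] = {0};
    FILE *fp = fopen(path, "wb");
    long len;

    if (fp == NULL) {
        fprintf(stderr, "Error opening %s\n", path);
        exit(EXIT_FAILURE);
    }
    if (fwrite(zeros, elem_size, 512, fp) != 512) { /* 512 elements */
        fprintf(stderr, "Short write\n");
        exit(EXIT_FAILURE);
    }
    if (fclose(fp) != 0) {                          /* fclose can fail */
        exit(EXIT_FAILURE);
    }

    fp = fopen(path, "rb");
    fseek(fp, 0L, SEEK_END);        /* measure what landed on disk */
    len = ftell(fp);
    fclose(fp);
    return len;
}
```

On a system with 4-byte ints, the sizeof(int) call yields a 2048-byte file while the elem_size-of-1 call yields 512 — exactly the behavior Bill observed.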
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2008-01/msg01668.html
Some experience suggested different default stations & volume settings for the streamers in various rooms, so the Python code now parses its command line to determine how to configure itself:

import argparse as args
cmdline = args.ArgumentParser(description='Streaming Radio Player', epilog='KE4ZNU -')
cmdline.add_argument('Loc', help='Location: BR1 BR2 ...', default='any', nargs='?')
args = cmdline.parse_args()

I should definitely pick a different variable name to avoid the obvious clash. With that in hand, the customization takes very little effort:

CurrentKC = 'KEY_KP7'
MuteDelay = 8.5       # delay before non-music track; varies with buffering
UnMuteDelay = 7.5     # delay after non-music track
MixerVol = '15'       # mixer gain

Location = vars(args)['Loc'].upper()
print 'Player location: ', Location
logging.info('Player setup for: ' + Location)

if Location == 'BR1':
    CurrentKC = 'KEY_KPDOT'
    MixerVol = '10'
elif Location == 'BR2':
    MuteDelay = 6.0
    UnMuteDelay = 8.0
    MixerVol = '5'

The Location = vars() idiom returns a dictionary of all the variables and their values, of which there's only one at the moment. The rest of the line extracts the value and normalizes it to uppercase. Now we can poke the button and get appropriate music without having to think very hard. Life is good!

The Python source code, which remains in dire need of refactoring, as a GitHub Gist:
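For what it's worth, argparse also exposes parsed values directly as attributes on the result object, which sidesteps the vars() indirection and the module-name clash. A hedged Python 3 re-sketch of the same idea (names reused from the post; the 'br2' invocation is just an example):

```python
import argparse

# Same idea as the post, but the parser variable no longer shadows the
# argparse module, and the value is read as an attribute (args.Loc)
# instead of going through vars(args)['Loc'].
parser = argparse.ArgumentParser(description='Streaming Radio Player')
parser.add_argument('Loc', help='Location: BR1 BR2 ...', default='any', nargs='?')

args = parser.parse_args(['br2'])   # as if invoked as: player.py br2
location = args.Loc.upper()         # attribute access, no vars() needed
print(location)                     # prints BR2
```

With nargs='?' and a default, running with no argument still yields 'any', so the per-room dispatch below it works unchanged.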
https://softsolder.com/2017/01/16/raspberry-pi-streaming-radio-player-command-line-parsing/
INTRODUCTION TO JAVA THREADS

Multithreading has several advantages over multiprocessing, such as:
- Threads are lightweight compared to processes
- Threads share the same address space and therefore can share both data and code
- Context switching between threads is usually less expensive than between processes
- The cost of thread intercommunication is relatively low compared to that of process intercommunication
- Threads allow different tasks to be performed concurrently

The following figure shows the methods that are members of the Object and Thread classes.

THREAD CREATION
There are two ways to create a thread in Java:
- Implement the Runnable interface (java.lang.Runnable)
- Extend the Thread class (java.lang.Thread)

IMPLEMENTING THE RUNNABLE INTERFACE
The Runnable interface signature:

public interface Runnable {
    void run();
}

One way to create a thread in Java is to implement the Runnable interface and then instantiate an object of the class. We need to override the run() method in our class, which is the only method that needs to be implemented. The run() method contains the logic of the thread.

The procedure for creating threads based on the Runnable interface is as follows:
1. A class implements the Runnable interface, providing the run() method that will be executed by the thread. An object of this class is a Runnable object.
2. An object of Thread class is created by passing a Runnable object as argument to the Thread constructor. The Thread object now has a Runnable object that implements the run() method.
3. The start() method is invoked on the Thread object created in the previous step. The start() method returns immediately after a thread has been spawned.
4. The thread ends when the run() method ends, either by normal completion or by throwing an uncaught exception.

Below is a program that illustrates instantiation and running of threads using the Runnable interface instead of extending the Thread class.
To start the thread you need to invoke the start() method on your object.

class RunnableThread implements Runnable {
    Thread runner;

    public RunnableThread() {
    }

    public RunnableThread(String threadName) {
        runner = new Thread(this, threadName); // (1) Create a new thread.
        System.out.println(runner.getName());
        runner.start();                        // (2) Start the thread.
    }

    public void run() {
        // Display info about this particular thread
        System.out.println(Thread.currentThread());
    }
}

public class RunnableExample {
    public static void main(String[] args) {
        Thread thread1 = new Thread(new RunnableThread(), "thread1");
        Thread thread2 = new Thread(new RunnableThread(), "thread2");
        RunnableThread thread3 = new RunnableThread("thread3");

        // Start the threads
        thread1.start();
        thread2.start();
        try {
            // delay for one second
            Thread.currentThread().sleep(1000);
        } catch (InterruptedException e) {
        }
        // Display info about the main thread
        System.out.println(Thread.currentThread());
    }
}

Output
thread3
Thread[thread1,5,main]
Thread[thread2,5,main]
Thread[thread3,5,main]
Thread[main,5,main]

Download Runnable Thread Program Example

This approach of creating a thread by implementing the Runnable Interface must be used whenever the class being used to instantiate the thread object is required to extend some other class.

EXTENDING THREAD CLASS
The procedure for creating threads based on extending the Thread is as follows:
1. A class extending the Thread class overrides the run() method from the Thread class to define the code executed by the thread.
2. This subclass may call a Thread constructor explicitly in its constructors to initialize the thread, using the super() call.
3. The start() method inherited from the Thread class is invoked on the object of the class to make the thread eligible for running.

Below is a program that illustrates instantiation and running of threads by extending the Thread class instead of implementing the Runnable interface.
To start the thread you need to invoke the start() method on your object.

class XThread extends Thread {
    XThread() {
    }

    XThread(String threadName) {
        super(threadName); // Initialize thread.
        System.out.println(this);
        start();
    }

    public void run() {
        // Display info about this particular thread
        System.out.println(Thread.currentThread().getName());
    }
}

public class ThreadExample {
    public static void main(String[] args) {
        Thread thread1 = new Thread(new XThread(), "thread1");
        Thread thread2 = new Thread(new XThread(), "thread2");
        // The below 2 threads are assigned default names
        Thread thread3 = new XThread();
        Thread thread4 = new XThread();
        Thread thread5 = new XThread("thread5");

        // Start the threads
        thread1.start();
        thread2.start();
        thread3.start();
        thread4.start();
        try {
            // The sleep() method is invoked on the main thread to cause a one second delay.
            Thread.currentThread().sleep(1000);
        } catch (InterruptedException e) {
        }
        // Display info about the main thread
        System.out.println(Thread.currentThread());
    }
}

Output
Thread[thread5,5,main]
thread1
thread5
thread2
Thread-3
Thread-2
Thread[main,5,main]

Download Java Thread Program Example

When creating threads, there are two reasons why implementing the Runnable interface may be preferable to extending the Thread class. An example of an anonymous class below shows how to create a thread and start it:

( new Thread() {
    public void run() {
        for (;;) System.out.println("Stop the world!");
    }
} ).start();
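As a side note not in the original tutorial: on Java 8 and later, the Runnable approach shrinks further, because Runnable is a functional interface and the thread body can be a lambda. A minimal sketch (class name is mine):

```java
// Sketch (assumes Java 8+): Runnable is a functional interface, so the
// thread body can be a lambda instead of a separate named class.
public class LambdaThreadExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(
                () -> System.out.println(Thread.currentThread().getName()),
                "worker-1");
        worker.start();
        worker.join(); // wait for the worker to finish before main exits
    }
}
```

This is still "implementing Runnable" in the tutorial's terms; the lambda simply replaces the RunnableThread class.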
https://www.wideskills.com/java-tutorial/java-threads-tutorial
NAME
drbr, drbr_free, drbr_enqueue, drbr_dequeue, drbr_dequeue_cond, drbr_flush, drbr_empty, drbr_inuse — network driver interface to buf_ring

SYNOPSIS
#include <sys/param.h>
#include <net/if.h>
#include <net/if_var.h>

void drbr_free(struct buf_ring *br, struct malloc_type *type);
int drbr_enqueue(struct ifnet *ifp, struct buf_ring *br, struct mbuf *m);
struct mbuf * drbr_dequeue(struct ifnet *ifp, struct buf_ring *br);
struct mbuf * drbr_dequeue_cond(struct ifnet *ifp, struct buf_ring *br, int (*func)(struct mbuf *, void *), void *arg);
void drbr_flush(struct ifnet *ifp, struct buf_ring *br);
int drbr_empty(struct ifnet *ifp, struct buf_ring *br);
int drbr_inuse(struct ifnet *ifp, struct buf_ring *br);

DESCRIPTION
The drbr functions provide an API to network drivers for using buf_ring(9) for enqueueing and dequeueing packets. This is meant as a replacement for the IFQ interface for packet queuing. It allows a packet to be enqueued with a single atomic operation, and packet dequeue to be done without any per-packet atomics as it is protected by the driver's tx queue lock. If INVARIANTS is enabled, drbr_dequeue() will assert that the tx queue lock is held when it is called.

The drbr_free() function frees all the enqueued mbufs and then frees the buf_ring. The drbr_enqueue() function is used to enqueue an mbuf to a buf_ring, falling back to the ifnet's IFQ if ALTQ(4) is enabled. The drbr_dequeue() function is used to dequeue an mbuf from a buf_ring or, if ALTQ(4) is enabled, from the ifnet's IFQ. The drbr_dequeue_cond() function is used to conditionally dequeue an mbuf from a buf_ring based on whether func returns TRUE or FALSE. The drbr_flush() function frees all mbufs enqueued in the buf_ring and the ifnet's IFQ. The drbr_empty() function returns TRUE if there are no mbufs enqueued, FALSE otherwise. The drbr_inuse() function returns the number of mbufs enqueued.
Note to users that this is intrinsically racy as there is no guarantee that there will not be more mbufs when drbr_dequeue() is actually called. Provided the tx queue lock is held there will not be less. RETURN VALUESThe drbr_enqueue() function returns ENOBUFS if there are no slots available in the buf_ring and 0 on success. The drbr_dequeue() and drbr_dequeue_cond() functions return an mbuf on success and NULL if the buf_ring is empty.
http://www.yosbits.com/opensonar/rest/man/freebsd/man/en/man9/drbr_empty.9.html?l=en
MEAM.Design - MAEVARM - USB Communications

Overview
For either debugging or data communications, it is possible to send and receive data between the m2 and a computer via the same USB port that you use for programming.

Setup
To setup the m2 to communicate over USB, you will need to download the following mUSB-specific support files:

Be sure to include the C file in your project:
- For Windows OS users, if you're using Option 1, you will then need to right-click on the Source Files folder, and select Add Existing Source File(s)..., then select the m_usb.c file, and if you're using Option 2, place m_usb.c in src/.
- For Mac and Linux users, if you're using Option 1, edit your Makefile to add "m_usb.o" after "main.o" on the OBJECTS line, and if you're using Option 2, place m_usb.c in src/.

Also, place the H file next to your main file for Option 1 or place the H file in inc/ for Option 2, and include m_usb.h in your main routine.

Functions
NOTE - It is assumed that you are running the system clock at 16MHz, though it has been tested to work slower. As described in the header file, this will give you access to a number of public functions (note - datatype is void unless specified):

For example, the following code will initialize the USB subsystem, wait for a connection, then wait for a packet to be received and then echo it back to the computer as a decimal number:

#include "m_general.h"
#include "m_usb.h"

int main(void)
{
    unsigned int value;
    m_usb_init();
    while(!m_usb_isconnected()); // wait for a connection
    while(1)
    {
        if(m_usb_rx_available())
        {
            value = m_usb_rx_char();
            m_usb_tx_uint(value);
        }
    }
}

Computer-side connection options
On a PC, open a serial terminal and connect to the proper COM port. Options include: RealTerm

In Mac OS X, you can use terminal to send/receive packets. First, find the serial object ("ls /dev/tty.*"), then start the session ("screen /dev/tty.usbmodem###"). To end the session, press Ctrl-A then Ctrl-\.
In Linux, it's the same except for the name of the serial object ("ls /dev/ttyACM*").

You can stream data directly in/out of Matlab, like this:

handle = serial(port, 'Baudrate', 9600);  % port is either 'COM#' in Windows or '/dev/tty.usbmodem#' in OS X
fopen(handle);
fprintf(handle, message);  % message is a string
fwrite(handle, variable);  % variable is a value
fclose(handle);

Other thoughts on USB communications: AVR CDC Demo Code

Hacking AVR provides some demo code for communication device class (CDC) operation on the ATmega32u4. This code will compile with AVR-studio, or with a modified Makefile on OS X (change "avr-gcc.exe" to "avr-gcc"). When building, flashing, and running, this causes the MaEvArM to appear as a COM port on a connected host machine. The code is intended to be used with a specific development board that has various other widgets on it (temperature sensor, joystick, etc.), but can be modified to send whatever data one wants it to. It gives access to the user by overloading the standard printf() function.

Unfortunately (for the sake of simplicity), the asynchronous digital communication seems to require the use of a process scheduler. The scheduler mediates between the USB driver on the MCU and the user-supplied code that actually outputs the data. This means that the user defines two functions: The task code function will be called repeatedly in a loop by the scheduler, so it needs to return quickly, otherwise the USB communication will be blocked.

The code to build is at the path EVK527-series4-cdc-2_0_2-doc/demo/EVK527-series4-cdc while the rest is support code that we can go through and cut some fat from.
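Beyond Matlab and terminal programs, a short host-side script can consume the same stream. This is a hedged sketch I'm adding (not from the wiki): it assumes the third-party pyserial package and a made-up device name, and decodes the ASCII decimal numbers that m_usb_tx_uint() echoes back.

```python
# Hedged host-side sketch for reading the m2's echoed decimal values.
# Assumptions: the "pyserial" package is installed and the device name
# below matches your system -- neither comes from the wiki page.

def parse_uint(text):
    """Decode the ASCII decimal line that m_usb_tx_uint() sends."""
    if isinstance(text, bytes):
        text = text.decode("ascii")
    return int(text.strip())

def stream(port="/dev/tty.usbmodem411"):    # hypothetical device name
    import serial                            # pip install pyserial
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(b"A")                     # send one byte to the m2
        print(parse_uint(link.readline()))   # m2 echoes its decimal value

if __name__ == "__main__":
    print(parse_uint(b"65\r\n"))  # offline check of the parser; prints 65
```

The serial import lives inside stream() so the parser can be exercised without the hardware attached.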
https://alliance.seas.upenn.edu/~medesign/wiki/index.php/Guides/MaEvArM-usb
TweenFilterLite (AS2 Version) - Easily Tween Filters & Image Effects
- Compatibility: Flash Player 8 and later (ActionScript 2.0) (Click here for the AS3 version)
- File Size added to published SWF: About 6Kb

Join Club GreenSock to get updates and a lot more

RECENT VERSION HIGHLIGHTS
- IMPORTANT: syntax change! - instead of defining a "type" property that indicates the kind of filter, you define a property like "blurFilter", "glowFilter", "colorMatrixFilter", "dropShadowFilter", or "bevelFilter" and pass an object with whatever properties you want to tween. So OLD SYNTAX: TweenFilterLite.to(mc, 2, {type:"blur", blurX:20, blurY:20, delay:1}); NEW SYNTAX: TweenFilterLite.to(mc, 2, {blurFilter:{blurX:20, blurY:20}, delay:1}); Why make this change? Here are a few reasons:
- It allows you to tween multiple filters with a single call, like: TweenFilterLite.to(mc, 2, {blurFilter:{blurX:30}, colorMatrixFilter:{colorize:0xFF0000}});
- It eliminates problems with ambiguous properties like "alpha", mostly in the AS3 version. For example, some people wanted to be able to tween a MovieClip's alpha property AND a DropShadowFilter on that Sprite with the drop shadow's alpha changing too, but with the old syntax, you could only define one alpha
- It makes it more flexible to extend (e.g. my upcoming TweenMax class)
- Removed MovieClip limitation:
- Previously, TweenFilterLite only allowed you to tween MovieClips, but you can now use it for any object. It is still recommended that you use TweenLite directly to tween non-MovieClip objects because it'll perform slightly faster, but unless you're doing a LOT of tweens simultaneously, you'd probably never notice a difference.

DESCRIPTION:
The syntax is identical to the TweenLite class. If you're unfamiliar with TweenLite, I'd highly recommend that you check it out. It provides an easy way to tween multiple object properties over time including a MovieClip's position, alpha, volume, color, etc.
Just like the TweenLite class, TweenFilterLite allows you to build in a delay, call any function when the tween starts or has completed (even passing any number of parameters you define), automatically kill other tweens that are affecting the same object (to avoid conflicts), tween arrays, etc. One of the big benefits of this class (and the reason "Lite" is in the name) is that it was carefully built to minimize file size. There are several other Tweening engines out there, but in my experience, they required more than triple the file size which was unacceptable when dealing with strict file size requirements (like banner ads). I haven't been able to find a faster tween engine either. The syntax is simple and the class doesn't rely on complicated prototype alterations that can cause problems with certain compilers. TweenFilterLite is simple, very fast, and more lightweight (about 5k) than any other popular tweening engine. And if you want even more features and don't mind a few extra Kb, check out the new TweenMax class which adds bezier tweening, pause/resume, easier sequencing, and more. It can do everything TweenFilterLite does, plus more. OBJECTIVES - Minimize file size - Maximize flexibility and efficiency by extending the TweenLite class. 
That way, if you don't need to tween filters, you can just use TweenLite (about 3k); otherwise, this class will only add another 3k (6k total)
- Minimize the amount of code required to initiate a tween
- Maximize performance
- Allow for very flexible callbacks (onComplete, onUpdate, onStart, all with the ability to pass any number of parameters)

FILTERS & PROPERTIES:
- blurFilter - blurX, blurY, quality
- glowFilter - alpha, blurX, blurY, color, strength, quality
- colorMatrixFilter - colorize, amount, contrast, brightness, saturation, hue, threshold, relative, matrix
- dropShadowFilter - alpha, angle, blurX, blurY, color, distance, strength, quality
- bevelFilter - angle, blurX, blurY, distance, highlightAlpha, highlightColor, shadowAlpha, shadowColor, strength, quality

USAGE
- Description: Tweens the target's properties from whatever they are at the time you call the method to whatever you define in the variables parameter.
- Parameters:
- target : Object - Target whose properties are tweened. Pass in one object for each filter (named appropriately, like blurFilter, glowFilter, colorMatrixFilter, etc.) Filter objects can contain any number of properties specific to that filter, like blurX, blurY, contrast, color, distance, colorize, brightness, highlightAlpha, etc.

Special Properties:
- delay : Number - Number of seconds to delay before the tween begins. This can be very useful when sequencing tweens.
- ease : Function - You can specify a function to use for the easing with this variable. For example, mx.transitions.easing.Elastic.easeOut. The Default is Regular.easeOut.
- autoAlpha : Number - Same as changing the "alpha" property but with the additional feature of toggling the "visible" property to false if the alpha ends at 0. It will also toggle visible to true before the tween starts if the value of autoAlpha is greater than zero.
- volume : Number - To change a MovieClip's volume, just set this to the value you'd like the MovieClip to end up at (or begin at if you're using TweenFilterLite.from()).
- tint : Number - To change a MovieClip's color, set this to the hex value of the tint you'd like to apply. Before version 5.8, tint was called mcColor (which is now deprecated and will likely be removed at a later date although it still works)
- frame : Number - Use this to tween a MovieClip to a particular frame.
- onStartScope : Object - Use this to define the scope of your onStart function call
- onUpdateScope : Object - Use this to define the scope of your onUpdate function call
- onCompleteScope : Object - Use this to define the scope of your onComplete function call
- renderOnStart : Boolean - If you're using TweenFilterLite.from() with a delay and want to prevent the tween from rendering until it actually begins, set this to true. By default, it's false which causes TweenFilterLite.from() to render its values immediately, even before the delay has expired.
- overwrite : Boolean - If you do NOT want the tween to automatically overwrite any other tweens that are affecting the same target, make sure this value is false.
- blurFilter : Object - To apply a BlurFilter, pass an object with one or more of the following properties: blurX, blurY, quality
- glowFilter : Object - To apply a GlowFilter, pass an object with one or more of the following properties: alpha, blurX, blurY, color, strength, quality, inner, knockout
- colorMatrixFilter : Object - To apply a ColorMatrixFilter, pass an object with one or more of the following properties: colorize, amount, contrast, brightness, saturation, hue, threshold, relative, matrix
- dropShadowFilter : Object - To apply a DropShadowFilter, pass an object with one or more of the following properties: alpha, angle, blurX, blurY, color, distance, strength, quality
- bevelFilter : Object - To apply a BevelFilter, pass an object with one or more of the following properties: angle, blurX, blurY, distance, highlightAlpha, highlightColor, shadowAlpha, shadowColor, strength, quality

- Description: Exactly the same as TweenFilterLite.to(). (see above)
- [optional] An array of parameters to pass the onComplete function when it's called.
- scope : Object - [optional] Defines the scope of the function.

- Description: Provides an easy way to kill all tweens of a particular Object/MovieClip. You can optionally force it to immediately complete (which will also call the onComplete function if you defined one)
- Parameters:
- target : Object - Any/All tweens of this Object/MovieClip.
EXAMPLES

import gs.TweenFilterLite;
TweenFilterLite.to(clip_mc, 1.5, {blurFilter:{blurX:20, blurY:20}});

import gs.TweenFilterLite;
import mx.transitions.easing.Back;
TweenFilterLite.to(clip_mc, 5, {colorMatrixFilter:{saturation:0}, ...});

TweenFilterLite.from(clip_mc, 5, {colorMatrixFilter:{colorize:0xFF0000}});

TweenFilterLite.from(myButton, 2, {_alpha:0, overwrite:false});

var scaleTween:TweenFilterLite;
myButton.onRollOver = function():Void {
    TweenFilterLite.removeTween(scaleTween);
    scaleTween = TweenFilterLite.to(myButton, 0.5, {_xscale:120, _yscale:120, overwrite:false});
}
myButton.onRollOut = function():Void {
    TweenFilterLite.removeTween(scaleTween);
    scaleTween = TweenFilterLite.to(myButton, 0.5, {_xscale:100, _yscale:100, ...});
}

- Can I set up a sequence of tweens so that they occur one after the other? Of course! The following colorizes clip_mc over 2 seconds, then blurs it over the course of 1 second:

import gs.TweenFilterLite;
TweenFilterLite.to(clip_mc, 2, {colorMatrixFilter:{colorize:0xFF0000, amount:1}});
TweenFilterLite.to(clip_mc, 1, {blurFilter:{blurX:20, blurY:20}, delay:2, overwrite:false});

- Why aren't my filters working? If you're using a filter that has an alpha property, try setting it to 1. The default alpha value is zero, so the filter may be working just fine, but you're not seeing it. Also, property order within a filter object doesn't matter: TweenFilterLite.to(clip_mc, 1, {colorMatrixFilter:{colorize:0xFF0000, amount:1}}) is the same as TweenFilterLite.to(clip_mc, 1, {colorMatrixFilter:{amount:1, colorize:0xFF0000}});

- Can I use TweenFilterLite to tween things other than filters? Sure. It extends TweenLite, so you can tween any property you want. TweenFilterLite.to(my_mc, 1, {_x:200}) gives you the same result as TweenLite.to(my_mc, 1, {_x:200}).
However, I'd recommend using TweenLite to tween properties other than filters for two reasons:
- In order to accommodate the specialized nature of filters, TweenFilterLite's code is a bit lengthier, which translates into more work for the processor. It's doubtful that anyone would notice a performance hit unless you're tweening hundreds or thousands of instances simultaneously, but I'm a bit of an efficiency freak.
- TweenLite can tween any property of ANY object whereas TweenFilterLite tweens properties of MovieClips.

- Why are TweenLite and TweenFilterLite split into 2 classes instead of building all the functionality into one class?
- File size. Only a portion of projects out there require tweening of filters. Almost every project I work on uses TweenLite, but only a few require tweening filters (TweenFilterLite). TweenLite is 3k whereas TweenFilterLite is 6k. Again, one of the stated purposes of TweenLite is to minimize file size & code bloat. If someone only wants to use TweenFilterLite, fine. But I think many people appreciate being able to use the most lightweight option for their needs and shave off the 3k when possible.
- Speed. Tweening filters is a more complex task. There are additional if/else statements and calculations in the rendering loop inside TweenFilterLite which could potentially slow things down a bit, even for non-filter tweens (I doubt anyone would notice a difference unless they're running hundreds or thousands of simultaneous tweens, but I'm a big fan of keeping things as efficient & fast as possible)
- Do I have to purchase a license to use this code? Can I use it for commercial purposes?

on May 26th, 2007 at 3:44 am
Jack, I can only say that I wish you designed and developed vehicles, were in charge of the deficit, and ran an organically based fast food restaurant chain.
Because if you did, by now, we would all be flying around in our government issued hover cars that ran on raw sewage and emitted nothing but water vapor while consuming conveniently prepared, organic gourmet meals, which contributed to each of us becoming the healthy, happy, active centurions God intended us to be. Yes, these classes are that idealistically good. Adobe (Macromedia) should give you millions. I wish that I could, but perhaps a pittance of $10 will help set a precedent. This new class seems every bit as brilliant and compact as TweenLite (which I love). Can't wait to start playing with it. Thank you, thank you, and thank you.

on May 26th, 2007 at 3:47 am
P.S. Make it easy for people to give you money. Put up a direct PayPal link. Generosity and laziness aren't mutually exclusive traits :)

on June 7th, 2007 at 4:39 pm
Great class. I was about to write a function to do this and thought I'd give Google a try. Excellent!

on July 3rd, 2007 at 8:23 am
Wow!!! You got my full respect!! D@mn you know something! :D It's working great!! The only thing I don't know is how to apply a color and a shadow modify at once. Cool though!

on July 3rd, 2007 at 8:53 am
Actually, Eagle, it's pretty simple to apply a colorize filter tween and also a drop shadow tween (or any filter) - just use two tweens and make sure the second one sets the overwrite property to false, like:
TweenFilterLite.to(my_mc, 2, {colorMatrixFilter:{colorize:0xFF0000}});
TweenFilterLite.to(my_mc, 2, {dropShadowFilter:{blurX:5, blurY:5, color:0x00FF00}, overwrite:false});

on August 29th, 2007 at 10:09 am
Flash Genius again! Lovely thanks, Dan C

on September 3rd, 2007 at 1:20 pm
I've been teaching myself AS2 & AS3, as well as everything else that goes with interactive design… and there's just some things that take a bit more than understanding the logic to figure out. I've been trying to understand the blurs and other filters for DAYS. It's highly pertinent to a website I'm trying to create.
You sir, are a miracle man in my book. Every question I've had with this project, you answered in entirety with this one blog alone. You're a life saver… more importantly so, a sanity saver ;) Thank you VERY much for taking the time with this, C. W. Calabrese on October 26th, 2007 at 4:44 pm you are the man! I've been looking for one of these classes for a long time ;) kudos keep up the good work dude! on January 21st, 2008 at 7:59 am I have been using TweenLite for about a month now and all I can say is that your classes are absolutely brilliant! Thanks a lot! One thing I'm curious about is how you apply several filters at the same time to an object, let's say you want to move it while adding some glow and removing a drop shadow. on January 21st, 2008 at 8:41 am Alex, you should be able to do that without a problem - just remember that unless you set the "overwrite" property to false, TweenLite (and TweenFilterLite) will always overwrite existing tweens of the same object. So to move a MovieClip while adding a glow and removing a drop shadow, you could do something like: TweenFilterLite.to(my_mc, 2, {glowFilter:{color:0xFF0000, strength:2, blurX:10, blurY:10}, _x:100, _y:300}); TweenFilterLite.to(my_mc, 2, {dropShadowFilter:{alpha:0}, overwrite:false}); on January 23rd, 2008 at 10:17 am I can think of just two more major functions. One would be the ability to pause (all) tweens. This would be wildly useful in some cases. Right now I'm making a video banner which uses vector animations, but people are going to be pausing and playing the video. Another possibility would be the ability to tween through a timeline animation. I once made an animation of a sunrise which used several motion tweens; the entire animation timeline was then tweened from start to end in a certain time, using a standard quadratic equation. It looked great, and works really well if the animation is long enough (to provide enough frames for low-movement parts of the equation).
on January 23rd, 2008 at 10:53 am Michiel, I have avoided adding pause/resume functionality to TweenLite in order to keep file size way down, but TweenMax is now officially released and it includes that feature (and many more). Check it out at Also, it's very easy to do the frame tween you're talking about. As of version 6, frame tweening is built into TweenLite! on February 24th, 2008 at 4:31 am dude. I've been tweening things for years… I've been too complacent with the ol' lmc_tween.as because you could write two tweens on the one movieclip without the first overwriting the second which I found no other tweener did… I just noticed your little snippet of code: overwrite:false in your tweener… You're a F*&king legend!! I am a better deviner after visiting your website tonight. Radical on April 3rd, 2008 at 4:20 pm Hey - very cool! One question - is there an easy translation between the numbers you use for the color matrix and the numbers used in Flash CS3's built-in filter tweener? I have some tweens I want to make to a color, and I have their exact values in Flash (example: HUE: -63, SATURATION: 70) - but this doesn't really translate over to TweenFilterLite. How best to approximate those values? Nice work! on April 3rd, 2008 at 7:56 pm heaversm, here's an easy way to match exactly what you created in the Flash CS3 (or Flash 8) authoring environment in terms of the ColorMatrixFilter. All you need to do is get the necessary values in the matrix array and pass it to the new version of TweenFilterLite (7.04).
Here's some code that grabs those values, traces them to your output window (in case you want to copy/paste), clears the filter, and tweens it back into place after a 1 second delay:

import gs.*;
import flash.filters.*;

function getCurrentMatrix($mc:MovieClip):Array {
    var filters:Array = $mc.filters;
    for (var i:Number = 0; i < filters.length; i++) {
        if (filters[i] instanceof ColorMatrixFilter) {
            return filters[i].matrix;
        }
    }
}

var curMatrix:Array = getCurrentMatrix(mc);
trace("TweenFilterLite.to(mc, 3, {colorMatrixFilter:{matrix:[" + curMatrix + "]}})");
mc.filters = []; // clears filters
TweenFilterLite.to(mc, 3, {colorMatrixFilter:{matrix:curMatrix}, delay:1});
Using this step-by-step tutorial, you will discover how to integrate Arduino and Google cloud platform. You will learn how to send temperature, pressure and other data to a Google sheet using Arduino. This practical Arduino tutorial shows how to integrate Arduino and the Google cloud platform. In more detail, this Arduino tutorial describes how to implement an Arduino sketch that sends data to Google Sheets. This is a quite common scenario, where you have to send data acquired by Arduino to a remote IoT cloud platform. Let us suppose we have to monitor the room temperature and store these values somewhere in the cloud so that they can be processed later. This is a typical scenario where it is necessary to know how to send data from Arduino to the cloud. When it is necessary to send data from Arduino to the cloud, there must be an IoT cloud platform that accepts these values and stores them. Usually, an IoT cloud platform exposes a set of APIs to simplify the data exchange process. The cloud IoT platform protects these APIs using an authentication mechanism. Therefore, it is necessary to implement all the code required on the Arduino side to accomplish the task of sending data to the cloud. There is another way of achieving the same result. Using this guide about IoT with Arduino you can experience, by yourself, the power of Arduino and how to connect it to Google cloud, building your first IoT project. Quickly, in a few minutes and without knowledge of programming, it is possible to dive into the IoT and build an Arduino cloud data logger that uses Google Sheets to store data. Keep reading… As stated before, this Arduino tutorial describes how to integrate Arduino and Google cloud. What will you learn? You will learn: - How to connect Arduino to sensors - How to send data from Arduino to Google Cloud (Google sheets) - How to use Temboo with Arduino Introduction IoT (aka Internet of Things) is one of the most important technological trends nowadays.
In this post, we want to explore how to build an IoT project that uses Arduino and Google. The interesting aspect of the Internet of Things is that we can experiment with IoT using simple development devices and existing IoT cloud platforms that provide services. It is not necessary to spend a lot of money to jump into the IoT ecosystem. The purpose of this project is building a cloud data logger that records the temperature and pressure and stores these values in the Google cloud platform using a Google sheet. There are several scenarios where this Arduino project could be useful and where it is necessary to connect Arduino to Google cloud. As stated before, we might want to track the temperature and the humidity of our room and store this information in a worksheet in order to process the values somehow. Moreover, we could calculate the mean value of the temperature and the humidity over several days. We might want to plot these values using some charts. Therefore, this Arduino project has several applications and it is very useful. Nevertheless, it is a good starting point when you approach Arduino, and how to use Arduino in IoT, for the first time. To integrate Arduino and Google cloud and to help Arduino send data to Google Sheets, this IoT project uses Temboo. This is an IoT cloud platform that provides several integration services that simplify the process of integrating different systems. Other useful resources Internet of Things with Android and Arduino: Control remote Led How to notify event to your smartphone using Arduino This Arduino programming tutorial has three simple steps to reach the goal: - Authorize our device using OAuth 2 and get the token to exchange data with Google - Connect the sensor to the Arduino - Send the data to the Google Cloud (Google sheet) using Temboo Integrating Arduino and Google cloud platform using OAuth In this first step, we have to authorize our Arduino board to send data to the Google cloud platform using the OAuth2 mechanism.
To simplify this authorization process, we will use Temboo, which provides a set of services to get the OAuth token. We have covered how to integrate Arduino MKR1000 with Twitter using Temboo. During this step, we assume you already have a Google cloud account so that you can access the Google Developer console. Before digging into the details of using Arduino and Google cloud for this IoT project, it is necessary that you have created a Google API project. Moreover, it is important to enable the Google Sheets API in order to use this API with Arduino. Once you have your project configured correctly using the Google cloud console, you should have a Client ID and Secret key. These two keys are very important and we will use them later during the project. The final result is shown in the picture below: The Arduino software code is very simple. This is only a little piece of code to use. Create a new Arduino sketch using your Arduino IDE (or any IDE you like) and add these lines at the beginning to read the sensor data:

#include <Adafruit_Sensor.h>
#include <Adafruit_BMP280.h>

Adafruit_BMP280 bme; // sensor object (implied by the calls below but missing from the original snippet)

Then to read the values you have to add:

float temp = bme.readTemperature();
float pressure = bme.readPressure();

That's all. If you want to optimize the way Arduino uses the battery without draining it fast, you can read this article describing how to manage device power in IoT. Sending the data to Google cloud platform (Google Sheet) The last step is sending the data acquired by the sensor to the Google cloud platform. Therefore, we will use another Temboo choreo (AppendValue) under Google>Sheets. After clicking on this choreo, Temboo shows a form where you have to add all the information required, such as ClientId, Token and so on, as shown in the picture below: Summary At the end of this post, you have built an IoT system and explored how to integrate Arduino and Google cloud. Writing a few lines of code, you have built an IoT system that sends data to the Google sheet.
The interesting part of this project is the way you have implemented it. Configuring Google cloud platform, without much knowledge about IoT, you have developed your first IoT project. Is it possible to upload sensor values on Google cloud manually as we can do on ThingSpeak?
2019 Topcoder Open Algorithm Round 1A Editorials TCO19 Round 1A Saturday, April 20th, 2019 Match summary The 20th of April marked the start of the Algorithm track of the 2019 Topcoder Open with Round 1A. Out of the 1514 registered, a bit fewer than 600 didn't open any of the problems, making it fairly obvious that a non-zero score would be enough to qualify for round 2. In the end only 649 people managed to do so and will join the 250 byes there. For the rest of the people – both those who didn't participate and those that couldn't get a positive score – there will be Round 1B scheduled for the first of May (International Workers' Day, also known as Labour Day, which, ironically, is non-working in many countries). The problem set was prepared by me – espr1t – this being the 17th round I've given in Topcoder (either TCO or SRM). I've always liked problems which allowed multiple challenges (remember Sheep?), and today's problems were no exception. Both the 500 and the 1000 had to be implemented very carefully, the first taking care of the multiple corner cases, and the second avoiding possible problems with precision. In the end coming up with a good test case on the 1000 proved somewhat hard, thus not many people used the opportunity for a challenge. The 500, on the other hand, was much easier in that regard, which led to literally hundreds of people being challenged. After the dust settled, mrho888 claimed the top spot (not without the help of 5 successful challenges, on the 500 and 1000). Within a challenge of him was Ping_Pong who had a slightly faster 1000, but "only" 3 challenges. The user EveRy rounded out the top three with slightly slower times, but again 5 successful challenges. The Problems EllysAndXor The first problem was supposed to be rather trivial, but still, some people managed to fail it (some of them yellows, nonetheless!).
The task required putting bitwise AND (&) and XOR (^) operators between up to 10 numbers in such a way that the result of the expression is as large as possible. This was further simplified by stating that the operators have equal precedence, thus the expression is evaluated left to right. The low count of the numbers should be a clear indicator that a bruteforce solution might be viable here. Indeed, a simple backtrack that tests all possible expressions runs in milliseconds. Another option would be to use iterative bruteforce (testing all bitmasks below 2**N). Both of these solutions have complexity of O(2**N), which, for the given constraints, was more than enough. The recursion might have looked as follows:

int recurse(int idx, int num) {
    if (idx >= n) return num;
    return max(recurse(idx + 1, num & a[idx]), recurse(idx + 1, num ^ a[idx]));
}

EllysCodeConstants. Used as: Division One – Level Two: This was (expectedly) a very easy-to-get-wrong problem, which led to a bloodbath during the challenge phase. Out of the 670 submissions at the end of the coding phase, only 250 survived the challenge and testing phases. This isn't too bad, actually, as the vast majority (>90%) of the participants submitted a solution and around 1/3 of them got it right, thus still not a hard 500 – just deceivingly easy. The problem itself was, given a string, to express it as a hexadecimal literal (using the hex digits A-F, as well as 1 as 'I', 2 as 'Z', 5 as 'S', 7 as 'T', and the possible suffixes U, L, LL, UL, ULL, LU, and LLU). This way, for example, the word TASTEFUL could be expressed as the hexadecimal literal 0x7A57EFUL. The solution was to just implement the decomposition of the string to digits and a suffix and check whether the digits contain invalid characters. And do it very, very carefully. There are multiple things to get wrong here: - Having only a suffix, which violates the rule "A hexadecimal literal must have at least one valid digit (0-9, A-F)".
For example, 0xUL is invalid. - Having multiple suffixes. An example invalid literal would be 0xALLLL. - Having a suffix not at the end. An example invalid literal would be 0xBLUE. Implementing this carefully was the key to success in this problem. One need not have cared about efficiency, as the constraints were really low. Inefficient implementations could simplify the code and actually be a good thing here:

map <char, char> REP = {
    {'O', '0'}, {'I', '1'}, {'Z', '2'}, {'S', '5'}, {'T', '7'}, {'A', 'A'},
    {'B', 'B'}, {'C', 'C'}, {'D', 'D'}, {'E', 'E'}, {'F', 'F'}
};
set <string> SUF = {"", "L", "LL", "U", "UL", "ULL", "LU", "LLU"};

string getLiteral(string str) {
    string ret = "";
    while (!str.empty() && REP.find(str[0]) != REP.end()) {
        ret += REP[str[0]];
        str.erase(str.begin());
    }
    return (ret == "" || SUF.find(str) == SUF.end()) ? "" : "0x" + ret + str;
}

Alternative solutions and additional comments. An alternative solution would be to use regular expressions, which could make the code very short and tidy. See EveRy's solution for a reference implementation. EllysTicketPrices Used as: Division One – Level Three: The second deceivingly easy problem in the set was the 1000. It looked like a simple binary search, and the rounding to two digits after the decimal point seemed to take care of the possible rounding errors which often arise when dealing with floats. It turns out that exactly this rounding is the thing that leads to problems! As the rounding happens multiple times (approximately O(log N) times), the chance that it is computed wrong at least once becomes high. My various implementations show that a random test (which has a valid answer) has around a 1 in 4 chance of yielding wrong results with doubles. To make the problem fun, I chose such examples that none of my four implementations with floats fails on them. The problem, in short, is the following.
Assuming you have a number X, you are given rules for how to mutate it N-1 times, so you generate N floats with exactly two digits after the decimal point. You want the average of these N numbers to be a certain value. You are to find an X that leads to this target average. Since the mutation of X was fixed, it was fairly obvious that a larger X would yield a larger average, and a lower X would yield a lower average. This monotonically changing average was a hint towards binary search – which, in fact, was indeed the solution. However, one had to see that floats are likely to lead to precision errors, thus one should multiply all numbers by 100 and work with integers only. A very good article on the topic of floats is written by none other than the current Algorithm admin misof! You can find the relevant topics here and here. So, after converting floats to integers, the solution was as follows. Do a binary search over X (the answer). For each value of the binary search, apply the mutation, compute the average, and compare with the target. If it is lower, increase X. If it is higher – decrease it. And that's it!

long divNum(long num, long divisor) {
    if ((num % divisor) * 2 >= divisor) num += divisor;
    return num / divisor;
}

long eval(long price, int n, int[] change) {
    long average = price;
    for (int i = 0; i < n - 1; i++) {
        price = divNum(price * (100 + change[i]), 100);
        average += price;
    }
    return divNum(average, n);
}

public double getPrice(int N, int[] C, int target) {
    long left = 0, right = 1000000001L;
    while (left <= right) {
        long mid = (left + right) / 2;
        if (eval(mid, N, C) < target * 100) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return (right + 1) / 100.0;
}

Alternative solutions and additional comments. In terms of complexity, this was O(N * log(target * N)), since X was in the range [0, target * N], and for each iteration of the binary search we need to compute the average, which is an O(N) subroutine.
An interesting question is why X is in [0, target * N] – can you figure that out?
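As a closing aside, the double-precision pitfall described above is easy to demonstrate with a few lines of Java (the numbers below are my own illustration, not taken from the actual test data):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // 1.15 has no exact binary representation, so 1.15 * 100
        // evaluates to 114.99999999999999, and truncating it loses a cent.
        System.out.println((long) (1.15 * 100)); // prints 114, not 115

        // Working in integer "cents" from the start, as the editorial
        // suggests, sidesteps the problem completely.
        long priceInCents = 115;
        System.out.println(priceInCents); // prints 115
    }
}
```

This is exactly why the reference solution multiplies everything by 100 and does all rounding in integer arithmetic.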
Comparing Clojure IDEs — Emacs/Cider vs IDEA/Cursive Introduction Recently I edited a blog post in which I interviewed Metosinians regarding their favorite Clojure editors. It was quite interesting to see that there is a diverse group of editors in use. In that blog post, I mentioned that I have configured my Emacs/Cider setup to be as close to IDEA/Cursive as possible in terms of look-and-feel. Someone on Reddit asked about this and I got the idea to look at my latest Cursive and Dygma related configurations and check whether my Emacs setup needs some fine-tuning to make Cursive and Emacs as similar in look-and-feel as possible — and write a new blog post about this experience. There are a couple of earlier blog posts in which I have touched this topic a bit: Versions and Repository I experimented with Cursive and Emacs editors using my latest World Statistics exercise — you can find e.g. the deps.edn (e.g. emacs profile) and Justfile (e.g. commands backend-debug-reveal-kari-port for starting the repl for Cursive, and backend-debug-kari-emacs for starting the repl for Emacs). If you are interested in reading about the World Statistics exercise itself, I have another blog post about that: World Statistics Exercise - I just updated the above mentioned deps.edn and Justfile files for this new blog post. Versions used in this blog post: - IntelliJ IDEA & Cursive: IntelliJ IDEA 2020.3 Ultimate, Cursive 1.10.0–2020.3, and nrepl 0.8.2. - Emacs & Cider: Emacs 26.3, emacs-cider 1.0.0, and cider-nrepl 0.25.5. My workstation is Ubuntu 20. Emacs and Cider Emacs is an old editor — the first versions were created in the 1970's, the GNU Emacs development started in the 1980's. I started using Emacs back in my studies at the Helsinki University of Technology in the 1990's — and have been using Emacs ever since. I really like Emacs, though I'm not by any standard an Emacs or eLisp guru.
Emacs is implemented using Lisp (Emacs Lisp) which makes it a nice editor for writing Lisp code, e.g. Clojure. Having said that I must emphasize that Emacs is a general-purpose editor, not specifically meant for Lisp programming — you can find a so-called Emacs major mode basically for any programming language. The best way to describe Emacs is that it is an extensible, customizable editor — you can customize it any way you like using the eLisp language. And since eLisp is a Lisp you can add new editing commands while editing. There are also a lot of Emacs packages someone has written for you using the eLisp language — the best way to extend Emacs is first to search a melpa package, and if you can’t find one that suits your needs, only then write one for yourself. Cider is an Emacs package that extends Emacs into a full-blown Clojure/script development environment. Using Cider you can do the same REPL wizardry as with any Clojure REPL. The trick actually is that you run a nREPL server and then connect Emacs via Cider to this server — Cider is the nREPL client that communicates with the nREPL server. This way you can send any form inside your Clojure code for evaluation to the nREPL and it sends back the results to Emacs. IntelliJ and Cursive IntelliJ IDEA just turned 20 years old, but I haven’t been using IDEA that long. I have been programming with Java and Python for some 20 years, but I started programming Java / Python with Emacs (with Java and Python major modes), and a few years after that started using Eclipse for Java (and continued using Emacs for Python). I used Eclipse quite a few years but at some point, I started using IntelliJ IDEA for both Java and Python programming — kind of nice to use the same editor with the same look-and-feel for both programming languages, and I have stuck with IntelliJ IDEA ever since, though I still use Emacs for various other programming / editing tasks. 
(Lately, I have also started using Visual Studio Code, but that’s another story.) IntelliJ IDEA is a really good IDE, I really love it. It’s not so bloated as Eclipse, and I really like the layout of the IDE tools more in IDEA than in Eclipse (which I always found difficult to use). Cursive is an IDEA plugin and provides a really enjoyable Clojure programming environment. So, nowadays I can use IDEA for all of my favorite programming languages. At this point, I must admit that Clojure is my favorite programming language, no question about it. I used to use Python as a quick scripting language, but with Babashka you can write shell scripts also using Clojure without the long start-up time of the JVM. Java is a bit yesterday — I strongly recommend using Clojure in the JVM, and if Clojure is not an option, then Kotlin which is a kind of “Java done right”. Btw, you can read more about my experiences regarding different programming languages in my blog post Five Languages — Five Stories and Kotlin — Much More Than Just a Better Java — I really should update the “Five stories” blog post into “Six stories” blog post one day. What Is a Clojure REPL? The REPL is the secret weapon of the Lisp world — a way to interact with the Lisp program under development. Other programmers might say that their favorite language also has a “repl” — it’s nothing compared to a real Lisp REPL — you really need a homoiconic language to implement a real powerful REPL, and Lisp (and Clojure) is a homoiconic language. A REPL is an acronym for Read–eval–print loop. When programming Lisp (and Clojure) a seasoned programmer always keeps a REPL open, and evaluates various forms (the whole namespace, top level form, or some S-expression inside a top level form) in the source code. There are good introductions for the Clojure REPL — I strongly recommend reading them and learn how to use the REPL. All right! I guess that’s enough for the introduction. Let’s get into the business. 
Starting the REPLs for the Editors For Clojure newbies I explain that there are a couple of ways to use the REPL — either start the repl as part of your IDE, or start an external REPL and connect to it from your editor — I'm using this second way. I have in my deps.edn the Clojure, nrepl and cider-nrepl versions defined in the dedicated aliases:

{:paths ["resources"]
 :deps {org.clojure/clojure {:mvn/version "1.10.1"}}
 :aliases {...
           :backend {:extra-paths ["src/clj"]
                     :extra-deps {metosin/ring-http-response {:mvn/version "0.9.1"}
                                  ...
                                  nrepl/nrepl {:mvn/version "0.8.2"}
                                  ...}}
           ;; Emacs Cider specific.
           :emacs {:extra-deps {cider/cider-nrepl {:mvn/version "0.25.5"}}}}}

The backend alias was the actual alias for backend development - I created the emacs alias for this blog post. The Justfile provides the commands for starting the repls with the aliases: backend-debug-reveal-kari-port for starting the repl for Cursive, and backend-debug-kari-emacs for starting the repl for Emacs:

# For Cursive
@backend-debug-reveal-kari-port:
    # clj -J-Dvlaaad.reveal.prefs="{:theme :light}" -M:dev:test:common:backend:reveal:kari -m nrepl.cmdline --middleware '[com.gfredericks.debug-repl/wrap-debug-repl vlaaad.reveal.nrepl/middleware]' -p 44444 -i -C
    clj -M:dev:test:common:backend:reveal:kari -m nrepl.cmdline --middleware '[com.gfredericks.debug-repl/wrap-debug-repl vlaaad.reveal.nrepl/middleware]' -p 44444 -i -C

# Start backend repl with my toolbox for Emacs.
@backend-debug-kari-emacs:
    PROFILE=emacs clj -M:dev:test:common:backend:reveal:kari:emacs -m nrepl.cmdline --middleware '[com.gfredericks.debug-repl/wrap-debug-repl vlaaad.reveal.nrepl/middleware cider.nrepl/cider-middleware]' -p 55555 -i -C

I created both Just recipes for this blog post — the idea is that I added an explicit port for both repls so that I can be sure that Cursive and Emacs are connecting to the respective repls (Cursive port 44444, and Emacs port 55555).
The first command has another version with a light theme that I tried but didn't like that much. In both REPLs I start the nrepl ( -m nrepl.cmdline) and then add some middleware ( '[com.gfredericks.debug-repl/wrap-debug-repl vlaaad.reveal.nrepl/middleware]', and cider.nrepl/cider-middleware for Emacs). The first one is a debug helper and the next is a REPL output tool, more about that later.
Emacs also nicely echoes the REPL output in the editor buffer: => {:country_name "Finland", :country_code :FIN, :series-name "Hospital beds (per 1,000 people)", :series-code :SH.MED.BEDS.ZS, :year 2002, :value 7.4, :country-id 246} You can start the Reveal REPL output tool the same way as before (it's a REPL middleware and not connected to the editors) — I have the Reveal window next to Emacs. Now I have explained how to start the REPLs and how to connect to the REPLs using the editors. Let's next dive into the look-and-feel. Look-And-Feel — the Look The look is basically the theme of the editor — how colors are used, font ligatures, etc. IDEA/Cursive. When I started Clojure programming with IDEA/Cursive I wanted to find a light theme that would be nice to my eyes. I have never liked the dark themes that much. I actually thought that there was a Leuven theme in IntelliJ but now that I checked it I noticed that I have created a custom "KariLeuven" theme — possibly used some existing Leuven-like theme as a basis, can't remember anymore. Anyway, the color theme is rather simple: a light theme with just some coloring ( def, defn and language macros like comment using blue, keywords using violet, and strings using green). The picture below shows my theme customizations. Emacs. I took the initial setup for my Clojure Emacs configuration from flyingmachine's Emacs Cursive setup. Then I fine-tuned it a bit, e.g. using the Leuven theme which I initially got from fniessen, and fine-tuned it a bit further. Look-And-Feel — the Feel The Feel part is how one navigates in the editor and manipulates S-expressions. Since Clojure is a Lisp and therefore a homoiconic language all expressions are so-called S-expressions — S-expressions can be constructed from other S-expressions. That's why Lisps have a lot of parentheses — but this also makes the language really powerful (macros etc) and nice to edit.
There are two major schools related to how to edit Lisp code: the older paredit style and the newer parinfer style. Using paredit you have certain commands that you use to manipulate the S-expressions, e.g. move this S-expression inside the next S-expression, etc. When using parinfer you don’t have to remember any special commands but you achieve the same results by indenting Lisp code. Here are a couple of web pages that illustrate the difference nicely: I use paredit myself. I tried parinfer but it felt a bit odd. I have the same hotkeys for slurping and barfing for both IDEA/Cursive and Emacs and it is therefore pretty effortless for me to edit Lisp code using paredit. When interviewing my colleagues at Metosin most of the programmers used paredit. But if you are a newcomer to the Lisp / Clojure land I suggest using parinfer — it is easier to start editing the code when you don’t have to learn any special commands. Hotkeys for Slurping and Barfing To understand my slurping and barfing hotkeys the reader needs to read my previous blog post Dygma Raise Keyboard Reflections Part 1 first. In that blog post, I explain how I have configured the CapsLock key to function as AltGr key in order to use it to get the various parentheses without twisting my right thumb. Then the reader needs to understand the two layers I have configured in my Dygma Raise. I really recommend Dygma Raise — the best keyboard I have ever used — a perfect keyboard for a programmer. The following two pictures show my favorite hotkeys in IDEA/Cursive: The most used REPL hotkeys are the Integrant reset = Alt-J and Send Form Before Caret to REPL = Alt-L. 
Then the paredit manipulation hotkeys: Then the same settings in Emacs:

;; override the default keybindings in paredit
(eval-after-load 'paredit
  '(progn
     (define-key paredit-mode-map (kbd "C-M-j") nil)
     (define-key paredit-mode-map (kbd "C-M-l") nil)
     (define-key paredit-mode-map (kbd "C-M-j") 'paredit-backward-slurp-sexp)
     (define-key paredit-mode-map (kbd "M-<right>") 'paredit-forward-slurp-sexp)
     (define-key paredit-mode-map (kbd "C-M-l") 'paredit-backward-barf-sexp)
     (define-key paredit-mode-map (kbd "M-<left>") 'paredit-forward-barf-sexp)
     (define-key paredit-mode-map (kbd "C-<right>") 'right-word)
     (define-key paredit-mode-map (kbd "C-<left>") 'left-word)))
...
(eval-after-load 'cider
  '(progn
     ...
     (define-key cider-mode-map (kbd "M-l") 'cider-eval-last-sexp)
     (define-key cider-mode-map (kbd "M-ö") 'cider-eval-defun-at-point)
     (define-key cider-mode-map (kbd "M-n") 'cider-repl-set-ns)
     (define-key cider-mode-map (kbd "M-m") 'cider-load-buffer)
     (define-key cider-mode-map (kbd "M-{") 'cider-format-buffer)
     (define-key cider-mode-map (kbd "M-å") 'cider-test-run-ns-tests)
     (define-key cider-mode-map (kbd "M-ä") 'cider-test-run-test)))

As you can see it is pretty simple to configure the same REPL hotkeys and the same paredit hotkeys for both IDEA/Cursive and Emacs/Cider. When reading this you have to remember that I'm not using arrow keys but my arrow keys are defined as CapsLock + i/j/k/l, so when I'm forward barfing I actually have my left little finger in CapsLock and my left thumb in one of the Dygma thumb keys (which is Alt) and hit J with my right index finger. This may sound very complicated, but actually, it isn't - I have considered various layouts and the current layout is a kind of evolutionary result of my keyboard layout experiments.
I really like this system since it resembles playing the classical guitar a bit: I press the modifier (Ctrl, Shift, Alt, CapsLock) combinations with my left hand, and the navigation (arrow), manipulation (barfing/slurping), and evaluation (REPL) keys with my right hand. All these key combinations are in my muscle memory; I don't consciously think about them, I just edit code. Even though this system suits me nicely, I wouldn't recommend it to someone else as such: you have to experiment and find your own keyboard, your own keyboard layout, and finally the hotkeys for your favorite programming language in that layout.

Conclusions

Both IDEA/Cursive and Emacs/Cider are excellent editors and Clojure integrated development environments. If you want to switch from one to the other, you can quite easily configure both editors to have pretty much the same look-and-feel:
https://kari-marttila.medium.com/comparing-clojure-ides-emacs-cider-vs-idea-cursive-8852d0ccc7d2
Hi, I'm new to this forum and this is my first post, so I hope I'm posting in the correct spot and doing it correctly. I'm really new to programming but very keen to learn (I'm doing an IT degree). If anyone can tell me what I have done wrong here, that would be great. The program should count the number of vowels in a string entered by the user.

Code (Java):

import java.util.Scanner;

public class VowelCounter {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        char vowelCounter[] = new char[5];
        vowelCounter[0] = 'a';
        vowelCounter[1] = 'e';
        vowelCounter[2] = 'i';
        vowelCounter[3] = 'o';
        vowelCounter[4] = 'u';

        //Prompt user to input a string
        int counter = 0;
        String aString;
        System.out.println("Enter a string of text: ");
        aString = input.nextLine();
        if (aString == "") {
            aString = input.nextLine();
        }

        for (int i = 0; i < aString.length(); i++) {
            char charInPosition = aString.charAt(i);
            for (int j = i + 1; j < vowelCounter.length; j++) {
                if (charInPosition == vowelCounter[j]) {
                    counter++;
                }
            }
        }
        System.out.println(counter);
    }
}
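For comparison, here is a corrected sketch (this is editorial commentary, not from the original thread). The main bug is the inner loop: starting it at j = i + 1 compares each character against only a shrinking tail of the vowel array, so most vowels are missed; it should start at j = 0. Two smaller issues: aString == "" compares object references rather than string contents (use isEmpty() or equals("")), and uppercase vowels are never counted. The class name VowelCounterFixed and the countVowels helper are introduced here for illustration; the Scanner is dropped so the logic can be run directly.

```java
public class VowelCounterFixed {

    // Count lowercase vowels in s by comparing every character
    // against every vowel.
    static int countVowels(String s) {
        char[] vowels = { 'a', 'e', 'i', 'o', 'u' };
        int counter = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // Start at 0, not i + 1: each character must be checked
            // against the whole vowel array.
            for (int j = 0; j < vowels.length; j++) {
                if (c == vowels[j]) {
                    counter++;
                }
            }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(countVowels("hello world")); // prints 3
    }
}
```

To also count uppercase vowels, lowercase the input first with s.toLowerCase() before scanning it.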
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/9276-confused-printingthethread.html
Variables in C and C++ | A Complete Guide for Beginners

When we hear about variables, we tend to think of mathematical formulas in which a variable is an unknown value, written as a symbol such as x or y that we are supposed to find. In the C and C++ programming languages, variables mean something different: they are the basic units from which we build C programs. So, without wasting time, let's start exploring C/C++ variables.

What are Variables in C and C++?

A C/C++ program performs its tasks and operations by storing values in computer memory. But how do you refer to those values? A variable gives such a value a name: in simple words, a variable is a storage space associated with a unique name that identifies it. Could you remember raw memory addresses for every value you store? No, and that is exactly why we use variables. A variable in C/C++ reserves an amount of memory that depends on its type, and with variables we decide what amount and type of data to store. When you assign a data type and a name to some space in memory, the variable is defined; the variable then reserves memory in the storage space that you can access later in the program through that name, i.e. variable_name.
Naming a Variable in C/C++

You need to follow some rules when naming a variable in C and C++:
- A variable name must not start with a digit.
- A variable name can begin with a letter or an underscore.
- Variable names in C and C++ are case-sensitive: uppercase and lowercase characters are treated differently.
- A variable name must not contain any special character or symbol (other than the underscore).
- White spaces are not allowed in a variable name.
- Two variables should not have the same name in the same scope; within a scope, variable names must be unique.
- A variable name cannot be a keyword.

Let's see some examples of valid and invalid variable names.

Valid variable names:

ticketdata
_ticketdata
ticket_data

Invalid variable names:

56ticketdata
ticket@data
ticket data

Variable Definition

A variable definition in C and C++ gives the variable a name, associates a data type with it, and reserves space for it in computer memory. After its definition, the variable can be used in the program, depending on its scope. By defining a variable, you tell the compiler its name and data type, and the compiler allocates memory for it according to the size of that type.

Rules for Defining Variables in C and C++

- A definition must state the data type of the variable. Example:

int start;
float width;
char choice;

- The variable name should follow all the rules of the naming convention.
- Terminate the definition with a semicolon, otherwise the compiler reports an error. Example:

int sum;

- Variables of the same data type can be defined in a single line. Example:

float height, width, length;

Defining Variables in C and C++ with Example

int var;

Here, a variable of integer type with the name var is defined. This variable definition allocates memory in the system for var.
Another example:

char choice;

When we define the variable named choice, memory is allocated in the storage space according to its data type in C, i.e. one character.

Variable Declaration

There is an important difference between defining a variable and declaring one. By declaring a variable in C and C++, we simply tell the compiler that this variable exists somewhere in the program; the declaration does not allocate any memory for it. A declaration informs the compiler that a variable of a specific type and name exists. A definition allocates memory for that variable. So it is safe to say that a variable definition is a combination of a declaration and a memory allocation. A variable may be declared many times, but it must be defined exactly once in a program.

Variable Initialization

Variable initialization means assigning a value to the variable. Initialization and declaration can occur on the same line:

int demo = 23;

This initializes the variable demo for later use in the program.

Types of Variables in C and C++

There are 5 types of variables in C/C++; let's discuss each with an example.

1. Local Variables

The scope of a local variable is limited to the function or block of code in which it is defined. A local variable exists in memory only while its function or block is executing.

Example of a Local Variable in C

#include <stdio.h>

int main() {
    printf("Welcome to DataFlair tutorials!\n\n");
    int result = 5; //local variable
    printf("The result is %d \n", result);
    return 0;
}

Example of a Local Variable in C++

#include <iostream>
using namespace std;

int main() {
    cout << "Welcome to DataFlair tutorials!" << endl << endl;
    int result = 5; //local variable
    cout << "The result is: " << result << endl;
    return 0;
}

2.
Global Variables

A global variable has global scope: it is valid until the end of the program and is available to all the functions in the program. Let us look at an example to understand the difference between local and global variables in C and C++.

Example of Global Variables in C

#include <stdio.h>

int sumf();
int sum = 2; //global variable

int main () {
    printf("Welcome to DataFlair tutorials!\n\n");
    int result = 5; //local variable
    sumf();
    return 0;
}

int sumf() {
    printf("\n Sum is %d \n", sum);
    printf("\n The result is %d \n", result);
}

This program does not compile: result is local to main, so using it inside sumf is an error. Let us correct it by making result a global variable in C/C++:

#include <stdio.h>

int sumf();
int sum = 2;    //global variable
int result = 5; //global variable

int main () {
    printf("Welcome to DataFlair tutorials!\n\n");
    sumf();
    return 0;
}

int sumf() {
    printf("\n Sum is %d \n", sum);
    printf("\n The result is %d \n", result);
}

Example of Global Variables in C++

Let us take the same example in C++ to understand the difference between a local and a global variable:

#include <iostream>
using namespace std;

int sumf();
int sum = 2; // global variable

int main () {
    cout << "Welcome to DataFlair tutorials!" << endl << endl;
    int result = 5; //local variable
    sumf();
    return 0;
}

int sumf() {
    cout << "Sum is: " << sum << endl;
    cout << "The result is: " << result << endl;
}

This version fails to compile for the same reason. In this way, we can implement a global variable in C++:

#include <iostream>
using namespace std;

int sumf();
int sum = 2;    //global variable
int result = 5; //global variable

int main () {
    cout << "Welcome to DataFlair tutorials!" << endl << endl;
    sumf();
    return 0;
}

int sumf() {
    cout << "Sum is: " << sum << endl;
    cout << "The result is: " << result << endl;
}

3.
Static Variables

A static variable retains its value across function calls. The 'static' keyword defines a static variable. A static variable preserves its value between calls and is initialized only once.

Example of Static Variables in C

Let us see how a local variable and a static variable differ.

#include <stdio.h>

void statf();

int main () {
    printf("Welcome to DataFlair tutorials!\n\n");
    int i;
    for (i = 0; i < 5; i++) {
        statf();
    }
}

void statf() {
    int a = 20;        //local variable
    static int b = 20; //static variable
    a = a + 1;
    b = b + 1;
    printf("\n Value of a %d\t,Value of b %d\n", a, b);
}

Example of Static Variables in C++

The same comparison in C++:

#include <iostream>
using namespace std;

void statf();

int main () {
    cout << "Welcome to DataFlair tutorials!" << endl << endl;
    int i;
    for (i = 0; i < 5; i++) {
        statf();
    }
}

void statf() {
    int a = 20;        //local variable
    static int b = 20; //static variable
    a = a + 1;
    b = b + 1;
    cout << "Value of a: " << a << " ,Value of b: " << b << endl;
}

With every function call, the static variable keeps its preserved value and increments it, whereas the local variable is re-initialized every time the function is called.

4. Automatic Variables

A variable declared inside a block is called an automatic variable. An automatic variable is allocated upon entry to the block and its space is freed when control exits the block; local variables in C are automatic by default. The 'auto' keyword makes this explicit:

auto int var = 39;

5. External Variables

We use the 'extern' keyword to extend the visibility of a variable. An extern variable is available to other files too.
The difference between a global and an extern variable is that a global variable is available anywhere within the file in which it is declared, while an extern variable is also available to other files.

Note: variables with the keyword 'extern' are only declared; they are not defined. However, an extern declaration with an initializer counts as a definition.

extern int var;

This means that the extern variable var is valid in this program and in other files too. Note that this is a declaration of var, not a definition.

Summary

In this tutorial, we focused on all the significant points related to variables. Remember these rules for variables in C and C++:

- Every variable occupies memory in the storage space when it is defined, not when it is merely declared.
- A variable can be initialized along with its definition.
- C supports various types of variables, and these variables are categorized based on their scope.

Please provide suggestions for improvement and queries in the comment section.
https://data-flair.training/blogs/variables-in-c-and-c-plus-plus/
WTP JEE5 Test Scenarios - 2 Web Application using JSF 1.1 and JPA 1.0 - 8 EJB/EAR SCENARIOS

- Click Next. On the next wizard page, you have the option to specify the context root and content directories, and an option to create a deployment descriptor. Make sure that Generate Deployment Descriptor is checked. WTP will create the web.xml file in the WebContent/WEB-INF folder. web.xml should look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<web-app>
</web-app>

- Step 6 - Right-click test.jsp, choose Run As > Run on Server, and select Tomcat 6 as the server.
- Step 7 - Tomcat 6 starts and the results are correctly displayed (using current code in CVS, 3/28).

UC-1(b) ADD A SERVLET TO SIMPLE WEB APPLICATION USE CASE

This use case adds a servlet to the simple Web application of the previous use case and tests it using Run As > Run on Server.

- Step 1 - Use File > New to open the Servlet wizard. Complete the wizard pages to create a servlet class named HelloWorldServlet in a package named demo. The servlet should be a subclass of HttpServlet. Click Next to proceed to the next pages, give the servlet the name helloworld, set /helloworld as the URL mapping, and check the box to create the doGet method.
- Step 2 - The wizard will create the servlet class and add the required entries to the web.xml file.
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>helloworld</servlet-name>
    <servlet-class>demo.HelloWorldServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>helloworld</servlet-name>
    <url-pattern>/helloworld</url-pattern>
  </servlet-mapping>
</web-app>

- Step 3 - Edit the servlet class so that the code looks like the following:

public class HelloWorldServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().print("Hello World");
    }
}

- Step 4 - Right-click the servlet and choose Run As > Run on Server, selecting Tomcat 6 as the server; alternatively, after starting Tomcat, open the servlet's URL to test it.
- Step 5 - Tomcat 6 starts and the results are correctly displayed.
http://wiki.eclipse.org/index.php?title=WTP_JEE5_Test_Scenarios&oldid=36543
Speed Up Your Web Site with Varnish

Varnish Subroutines

The Varnish subroutines have default definitions, which are shown in default.vcl. Just because you redefine one of these subroutines doesn't mean the default definition will not execute. In particular, if you redefine one of the subroutines but don't return a value, Varnish will proceed to execute the default subroutine. All the default Varnish subroutines return a value, so it makes sense that Varnish uses them as a fallback.

The first subroutine to look at is called vcl_recv. This gets executed after receiving the full client request, which is available in the req object. Here you can inspect and make changes to the original request via the req object, and you can use the value of req to decide how to proceed. The return value is how you tell Varnish what to do; I'll put the return values in parentheses as they are explained. Here you can tell Varnish to bypass the cache and send the back end's response back to the client (pass). You also can tell Varnish to check its cache for a match (lookup).

Next is the vcl_pass subroutine. If you returned pass in vcl_recv, this is where you'll be just before sending the request to the back end. You can tell Varnish to continue as planned (pass) or to restart the cycle at the vcl_recv subroutine (restart).

The vcl_miss and vcl_hit subroutines are executed depending on whether Varnish found a suitable response in the cache. From vcl_miss, your main options are to get a response from the back-end server and cache it (fetch) or to get a response from the back end and not cache it (pass). vcl_hit is where you'll be if Varnish successfully finds a matching response in its cache. From vcl_hit, you have the cached response available to you in the obj object. You can tell Varnish to send the cached response to the client (deliver) or have Varnish ignore the cached response and return a fresh response from the back end (pass).
The vcl_fetch subroutine is where you'll be after getting a fresh response from the back end. The response will be available to you in the beresp object. You can either tell Varnish to continue as planned (deliver) or to start over (restart). From vcl_deliver, you can finish the request/response cycle by delivering the response to the client and possibly caching it as well (deliver), or you can start over (restart).

As previously stated, you express your caching policy within the subroutines in default.vcl. The return values tell Varnish what to do next. You can base your return values on many things, including the values held in the request (req) and response (resp) objects mentioned earlier. In addition to req and resp, there is also a client object representing the client, a server object, and a beresp object representing the back end's response. It's important to realize that not all objects are available in all subroutines. It's also important to return one of the allowed return values from subroutines.

One of the hardest things to remember when starting out with Varnish is which objects are available in which subroutines, and what the legal return values are. To make it easier, I've created a couple of reference tables. They will help you get up to speed quickly by not having to memorize everything up front or dig through the documentation every time you make a change.

Tip: Be sure to read the full explanation of VCL, available subroutines, return values and objects in the vcl(7) man page.

Let's put it all together by looking at some examples.

Normalizing the request's Host header:

sub vcl_recv {
    if (req.http.host ~ "^") {
        set req.http.host = "example.com";
    }
}

Notice you access the request's Host header by using req.http.host. You have full access to all of the request's headers by putting the header name after req.http. The ~ operator is the match operator, and it is followed by a regular expression.
If you match, you then use the set keyword and the assignment operator (=) to normalize the hostname to simply "example.com". A really good reason to normalize the hostname is to keep Varnish from caching duplicate responses. Varnish looks at the hostname and the URL to determine whether there's a match, so hostnames should be normalized if possible.

Here's a snippet from the default vcl_recv subroutine:

sub vcl_recv {
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    return (lookup);
}

You can see that if it's not a GET or HEAD request, Varnish returns pass and won't cache the response. If it is a GET or HEAD request, it looks it up in the cache.

Removing the request's cookies if the URL matches:

sub vcl_recv {
    if (req.url ~ "^/images") {
        unset req.http.cookie;
    }
}

That's an example from the Varnish Web site. It removes cookies from the request if the URL starts with "/images". This makes sense when you recall that Varnish won't cache a request with a cookie. By removing the cookie, you allow Varnish to cache the response.

Removing response cookies for image files:

sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg)$") {
        unset beresp.http.set-cookie;
        set beresp.ttl = 1h;
    }
}

That's another example from Varnish's Web site. Here you're in the vcl_fetch subroutine, which runs after fetching a fresh response from the back end. Recall that the response is held in the beresp object. Notice that here you're accessing both the request (req) and the response (beresp). If the request is for an image, you remove the Set-Cookie header set by the server and override the cached response's TTL to one hour. Again, you do this because Varnish won't cache responses with the Set-Cookie header.
http://www.linuxjournal.com/content/speed-your-web-site-varnish?page=0,2
PluralFormat and SelectFormat Message and i18n Tool - A JavaScript Implementation of the ICU standards.

The experience and subtlety of your program's text can be important. MessageFormat (PluralFormat + SelectFormat) is a mechanism for handling both pluralization and gender in your applications. It can also lead to much better translations, as it was built by ICU to help solve those two problems for all known CLDR languages - likely all the ones you care about.

There is a good slide deck on Plural and Gender in Translated Messages by Markus Scherer and Mark Davis. But, again, remember that many of these problems apply even if you're only outputting English. See just how many different pluralization rules there are.

MessageFormat in Java-land technically incorporates all other type formatting (and the older ChoiceFormat) directly into its messages; however, in the name of file size, messageformat.js only strives to implement SelectFormat and PluralFormat. There are plans to pull in locale-aware NumberFormat parsing as a "plugin" to this library, but as of right now, it's best to pass things in preformatted (as suggested in the ICU docs). We have also ported the Google Closure implementation of NumberFormat, but there is no direct integration of these two libraries. (They work well together!)

A progression of strings in programs:

There are 1 results.
There are 1 result(s).
Number of results: 5.

These are generally unacceptable in this day and age. Not to mention, the problem expands when you consider languages with 6 different pluralization rules. You may be using something like Gettext to solve this across multiple languages, but even Gettext falls flat.

ICU bills the format as easy to read and write. It may be, but I'd still suggest a tool for non-programmers. It looks a lot like Java's ChoiceFormat, but it is different in a few significant ways, most notably its addition of the plural keyword and a friendlier select syntax.
{GENDER, select,
  male {He}
  female {She}
  other {They}
} found {NUM_RESULTS, plural,
  one {1 result}
  other {# results}
} in {NUM_CATEGORIES, plural,
  one {1 category}
  other {# categories}
}.

Here are a few data sets against this message:

"GENDER": "male", "NUM_RESULTS": 1, "NUM_CATEGORIES": 2
> "He found 1 result in 2 categories."

"GENDER": "female", "NUM_RESULTS": 1, "NUM_CATEGORIES": 2
> "She found 1 result in 2 categories."

"GENDER": "male", "NUM_RESULTS": 2, "NUM_CATEGORIES": 1
> "He found 2 results in 1 category."

"NUM_RESULTS": 2, "NUM_CATEGORIES": 2
> "They found 2 results in 2 categories."

There is very little that needs to be repeated (until gender modifies more than one word), and there are equivalent/appropriate plural keys for every single language in the CLDR database. The syntax highlighting is less than ideal, but parsing a string like this gives you flexibility for your messages even if you're only dealing with English. UX++; filesize--;

> npm install messageformat

var MessageFormat = require('messageformat');

<!-- after the messageformat.js include, but before you need to use the locale -->

TODO:: In node, we can automatically pull in all known locales for you.

// Any time after MessageFormat is included
MessageFormat.locale["locale_name"] = function (n) { /* ... */ };
// Or during instantiation
var mf = new MessageFormat('locale_name');

These require node:

> make test
> make test-browser

You really should take advantage of this. It is much faster than parsing in real time. I will eventually release a Handlebars and Require.js (r.js) plugin to do this automatically, but if you would like to output the raw JavaScript function, the following does that:

var mf = new MessageFormat('en');
var js_string_representation = mf.precompile(
  mf.parse('Your {NUM, plural, one{message} other{messages}} go here.')
);
// This returns an unnamed - unreferenced function that needs to be passed the
// MessageFormat object. See the source of `MessageFormat.compile` for more details.
If you don't want to compile your templates programmatically, you can use the built-in CLI compiler. This tool is at an early stage. It was tested on Linux and Windows, but if you find a bug, please create an issue.

> [sudo] npm install -g messageformat
> messageformat

Usage: messageformat -l [locale] [INPUT_DIR] [OUTPUT_DIR]

--locale, -l      locale to use [mandatory]
--inputdir, -i    directory containing messageformat files to compile    $PWD
--output, -o      output where messageformat will be compiled            $PWD
--watch, -w       watch `inputdir` for changes                           false
--namespace, -ns  object in the browser containing the templates         window.i18n
--include, -I     glob patterns for files to include in `inputdir`       **/*.json
--stdout, -s      print the result to stdout instead of writing a file   false
--module, -m      create a CommonJS module instead of a window variable  false
--verbose, -v     print logs for debugging                               false

If you prefer looking at an example, there is one in the repository. messageformat will read every JSON file in inputdir and compile them to output. When using the CLI, the following commands work exactly the same:

> messageformat --locale en ./example/en
> messageformat --locale en ./example/en ./i18n.js
> messageformat --locale en --inputdir ./example/en --output ./i18n.js

or even shorter:

> cd example/en
> messageformat -l en

You can also do it with a Unix pipe:

> messageformat -l en --stdout > i18n.js

Take a look at the example inputdir and output. A watch mode is available with the --watch or -w option.

The original JSON files are simple objects, with a key and a messageformat string as value, like this one:

{
  "test": "Your {NUM, plural, one{message} other{messages}} go here."
}

The CLI walks into inputdir recursively, so you can structure your messageformat files with dirs and subdirs.

Now that you have compiled your messageformat files, you can use them in your HTML by adding <script src="index.js"></script>. In the browser, the global window.i18n is an object containing the compiled messageformat functions.
> i18n
Object
  colors: Object
    blue: [ Function ]
    green: [ Function ]
    red: [ Function ]
  "sub/folder/plural": Object
    test: [ Function ]

You could then use it:

$('<div>').text(
  window.i18n['sub/folder/plural'].test({ NUM: 1 })
).appendTo('#content');

The namespace window.i18n can be changed with the --namespace or -ns option. Subdirectory messageformat files are available in the window.i18n namespace, prefixed with their relative path:

> window.i18n['sub/folder/plural']
Object
  test: [ Function ]

sub/folder is the path, plural is the name of the JSON file, and test is the key used. A working example is available here.

The most simple case of MessageFormat would involve no formatting - just a string passthrough. This sounds silly, but often it's nice to always use the same i18n system when doing translations, and not everything takes variables.

// Instantiate a MessageFormat object for your locale
var mf = new MessageFormat('en');
// Compile a message
var message = mf.compile('This is a message.'); // returns a function
// You can call the function to get data out
> message();
"This is a message."
// NOTE:: if a message _does_ require data to be passed in, an error is thrown if you do not.
// Instantiate new MessageFormat object for your localevar mf = 'en';// Compile a messagevar message = mfcompile'His name is {NAME}.';// Then send that data into the function> message "NAME" : "Jed" ;"His name is Jed."// NOTE:: it's best to try and stick to keys that would be natively// tolerant in your JS runtimes (think valid JS variable names). SelectFormat is a lot like a switch statement for your messages. Most often it's used to select gender in a string. Here's an example: // Insantiate an instance with your language settingsvar mf = 'en';// Compile a message - returns a functionvar message = mfcompile'{GENDER, select, male{He} female{She} other{They}} liked this.';// Run your message function with your data> message"GENDER" : "male";"He liked this."> message"GENDER" : "female";"She liked this."// The 'other' key is **required** and in the case of GENDER// it should be phrased as if you are too far away to tell the gender of the subject.> message{};"They liked this." PluralFormat is a similar mechanism to SelectFormat (especially syntax wise), but it's specific to numbers, and the key that is chosen is generated by a Plural Function. // Insantiate a new MessageFormat objectvar mf = 'en';// You can use the provided locales in the `/locale` folder// (include the file directly after including messageformat.jsvar mf = 'sl' ;// OR - you can pass a custom plural function to the MessageFormat constructor function.var mf = 'requiredCustomName'if n === 42return 'many';return 'other';;// Then the numbers that are passed into a compiled message will run through this function to select// the keys. This is for the 'en' locale:var message = mfcompile'There {NUM_RESULTS, plural, one{is one result} other{are # results}}.';// Then the data causes the function to output:> message"NUM_RESULTS" : 0;"There are 0 results."> message"NUM_RESULTS" : 1;"There is one result."> message"NUM_RESULTS" : 100;"There are 100 results." 
ICU declares the 6 named keys that CLDR defines for plural form data: zero, one, two, few, many, and other. All of them are fairly straightforward, but do remember that for some languages they are more loose "guidelines" than exact rules. The only required key is other; your compilation will throw an error if you forget it.

In English, and many other languages, the logic is simple: if N equals 1, then one; otherwise other.

Other languages (take a peek at ar.js or sl.js) can get much more complicated. Remember, English only uses one and other, so including zero will never get called, even when the number is 0. The most simple (to pluralize) languages have no pluralization rules and rely solely on the other named key.

{NUM, plural,
  zero {There are zero - in a lang that needs it.}
  one {There is one - in a lang that has it.}
  two {There is two - in a lang that has it.}
  few {There are a few - in a lang that has it.}
  many {There are many - in a lang that has it.}
  other {There is a different amount than all the other stuff above.}
}

There also exists the capability to put literal numbers as keys in a select statement. These are delimited by prefixing them with the = character, and they match single, specific numbers. If there is a match, that branch will run immediately, and the corresponding named key will not also run. There are plenty of legitimate uses for this, especially when considering base cases and more pleasant language. But if you're a Douglas Adams fan, you might use it like so:

You have {NUM_TASKS, plural,
  one {one task}
  other {# tasks}
  =42 {the answer to the life, the universe and everything tasks}
} remaining.

When NUM_TASKS is 42, this outputs smiles. Remember, these have priority over the named keys.

ICU provides the ability to extend existing select and plural functionality, and the only official extension (that I could find) is the offset extension. It goes after the plural declaration, and it is used to generate sentences that break up a number into multiple sections.
For instance: You and 4 others added this to their profiles. In this case, the total number of people who added 'this' to their profiles is actually 5. We can use the offset extension to help us with this.

var mf = new MessageFormat('en');
// For simplicity's sake, let's assume the base case here isn't silly.
// The test suite has a bigger offset example at the bottom
// Let's also assume neutral gender for the same reason
// Set the offset to 1
var message = mf.compile(
  'You {NUM_ADDS, plural, offset:1' +
    '=0{didnt add this to your profile}' +   // Number literals, with a `=` do **NOT** use
    'zero{added this to your profile}' +     // the offset value
    'one{and one other person added this to their profile}' +
    'other{and # others added this to their profiles}' +
  '}.'
);
// Tip: I like to consider the `=` prefixed number literals as more of an "inductive step"
// e.g. in this case, since (0 - 1) is _negative_ 1, we want to handle that base case.
> message({ "NUM_ADDS": 0 });
"You didnt add this to your profile."
> message({ "NUM_ADDS": 1 });
"You added this to your profile."
> message({ "NUM_ADDS": 2 });
"You and one other person added this to their profile."
> message({ "NUM_ADDS": 3 });
"You and 2 others added this to their profiles."

Very simply, you can nest both SelectFormat blocks into PluralFormat blocks, and vice versa, as deeply as you'd like. Simply start the new block directly inside:

{SEL1, select,
  other {
    {PLUR1, plural,
      one {1}
      other {
        {SEL2, select,
          other {deep in the heart.}
        }
      }
    }
  }
}

messageformat.js tries to do a good job of being tolerant of as much as possible, but some characters, like the ones used in the actual MessageFormat syntax itself, must be escaped to be a part of your string. For {, } and # (only inside of a select value) literals, just escape them with a backslash. (If you are in a JS string, you'll need to escape the escape backslash so it'll look like two).
// Technically, it's just:
\{ \} \#
// But in practice, since you're often dealing with string literals, it looks more like
var msg = mf.compile("\\{ {S, select, other{# is a \\#}} \\}");
> msg({ S: 5 });
"{ 5 is a # }"

Gettext can generally go only one level deep without hitting some serious roadblocks. For example, two plural elements in a sentence, or the combination of gender and plurals:

He found 5 results in 2 categories.
She found 1 result in 1 category.
He found 2 results in 1 category.

It can likely be done with contexts/domains for gender and some extra plural forms work to pick contexts for the plurals, but it's less than ideal. Not to mention every translation must be completed in its entirety for every combination. That stinks too. You can easily mix Gettext and MessageFormat by storing MessageFormat strings in your .po files. However, I would stop using the built-in plural functions of Gettext. I tend to only use Gettext on projects that are already using it in other languages, so we can share translations; otherwise, I like to live on the wild side and use PluralFormat and SelectFormat. Most Gettext tools will look up the Plural Forms for a given locale for you. This is also the opinion of PluralFormat. The library should just contain the known plural forms of every locale, and not force translators to re-input this information each time.

0.1.8

You may use this software under the Apache License, version 2.0. You may contribute to this software under the Dojo CLA - Thanks to:
https://www.npmjs.com/package/messageformat
Introduction

The objective of this post is to explain how to change the speech rate and volume in the pyttsx module. If you need help installing the module, please check this previous "Hello World" tutorial. If you ran the mentioned "Hello World" program, which uses the default values for the speech rate and volume, you may have noticed that the speech is a little bit fast (although the volume is probably fine). So, we will check how to change these properties.

The code

The first portion of the code is similar to the "Hello World" example and consists of importing the pyttsx module and getting an instance of the voice engine.

import pyttsx

voiceEngine = pyttsx.init()

Then, we will get the default values for both the speech rate and the volume. To do so, we will call the getProperty method on the speech engine instance. This method receives as input the name of the property and returns its value [1]. You can check here the available properties. So, we will get the values for the speech rate and volume and print them. We will also get the currently defined voice, although we will not change this property.

rate = voiceEngine.getProperty('rate')
volume = voiceEngine.getProperty('volume')
voice = voiceEngine.getProperty('voice')

print rate
print volume
print voice

Now, to change a property, we just call the setProperty method, which receives as input the name of the property and the value [2]. Note that the rate is an integer which corresponds to the number of words per minute, and the volume is a floating point number between 0 and 1 [1]. You can check the full source code below, where we will iterate through multiple speech rates and volumes. Note that we are increasing the speech rate by 50 words per minute in each iteration (starting at 50) until 300 words per minute.
Then, we maintain the speech rate at 125 words per minute and iterate the volume from 0.1 to 1, increasing it by 0.3 in each iteration. Also, don't forget to call the runAndWait method at the end of each iteration, so the voice is synthesized.

import pyttsx

voiceEngine = pyttsx.init()

rate = voiceEngine.getProperty('rate')
volume = voiceEngine.getProperty('volume')
voice = voiceEngine.getProperty('voice')

print rate
print volume
print voice

newVoiceRate = 50
while newVoiceRate <= 300:
    voiceEngine.setProperty('rate', newVoiceRate)
    voiceEngine.say('Testing different voice rates.')
    voiceEngine.runAndWait()
    newVoiceRate = newVoiceRate + 50

voiceEngine.setProperty('rate', 125)

newVolume = 0.1
while newVolume <= 1:
    voiceEngine.setProperty('volume', newVolume)
    voiceEngine.say('Testing different voice volumes.')
    voiceEngine.runAndWait()
    newVolume = newVolume + 0.3

Testing the code

To test the code, just run it in your Python environment. In my case, I usually use IDLE. On the Python shell, you should get an output similar to figure 1, with the default values for the speech rate, volume and voice. Note that the voice will probably be different, depending on the Operating System, which may have different speech engines.

Figure 1 – Default values for speech rate, volume and voice.

Check in the video below the result for the synthesized speech.

References
[1]
[2]

Technical details
- Python version: 2.7.8
- Pyttsx version: 1.1

Hi! Great post. How can I add a Spanish voice?

Hi! Thank you 🙂 I haven't played with changing voices yet, but from what I've seen in the documents, you can iterate over the voices supported in your system: I've just run that example and it shows me a list of the voice synthesizers available.
Although none of them has the languages array defined, one of them just happens to have the following description (I'm on Windows 8): "name=Microsoft Helena Desktop – Spanish (Spain)". So it seems like each synthesizer has a different voice, rather than a single synthesizer having different choices. It also seems that we can install new voices if needed. Let me know if it helps 🙂
https://techtutorialsx.com/2017/05/06/python-pyttsx-changing-speech-rate-and-volume/
Peter Torr's blog on HD DVD, JScript, the Pet Shop Boys, and anything else he feels like blogging. Normal disclaimers apply. I am not responsible for anything, and neither is Microsoft.

It's traditional to introduce any new programming language or environment with a "Hello, World" program, so we'll do something similar here for iHD. Writing applications in iHD is a bit like writing web pages; the basic layout and styling is described in an HTML-like dialect of XML, and the behaviour of the application is coded in ECMAScript (aka JScript or JavaScript). In addition to the markup-plus-script model of basic HTML, iHD includes a rich animation engine based on SMIL (Synchronised Multimedia Integration Language) that enables dynamic changes in layout to be described declaratively. So without further ado, let's look at a very basic iHD application. Now, unfortunately you folks at home won't be able to play along with the code samples here because there aren't any tools available yet... but at least you can get an idea for what it's like. And maybe if you're a web developer looking for something new, you embark on a new career! :-)

[Note: I apologise for the poor formatting; I'll try to fix it soon]

An XMU is an "XML Mark Up File" and contains the markup portion of an iHD application. Here's one way "Hello, World" could look:

<?xml version="1.0"?>
<root xml:lang="en" xmlns="" xmlns:style="" xmlns:
  <head>
    <styling />
    <timing clock="page"/>
  </head>
  <body>
    <div style:position="absolute" style:x="822px" style:y="551px" style:
      <p style:font="Verdana.ttf" style:fontSize="48px" style:Hello, World</p>
    </div>
  </body>
</root>

This will place the white text "Hello, World" on a blue background in the middle of a 1920 x 1080 screen. Let's look at the elements one by one; anyone familiar with HTML should be able to see the resemblance already. This standard tag identifies the version of XML used in this file.
This is the root node of the document; think of it like the <html> element in HTML. We declare three namespaces that we're going to use (this is a standard XML practice): This is similar to the <head> element in HTML in that it defines things that aren't part of the visible document. For now, we won't worry about the two child elements, <styling> and <timing>; suffice it to say that the former is for defining styles (similar in concept to CSS) and the latter is for performing animations. The clock="page" attribute simply tells iHD that if we have any times listed for our animations, they are based on the lifetime of this page, not on the timecode of the underlying video. But we'll get to that later... Again, this is similar to <body> in HTML in that it defines the things that are visible in the document. In this case, the text "Hello, World" and its background. Once more, we find a familiar HTML-like element, <div>. It is a container for other elements, and we have given it a blue background. We have chosen to position the container at co-ordinates (822, 551) with a width and height of (250, 50), which will place it in the middle of a high-def screen. This is very similar to what you might do with CSS inside an HTML element, but we are using an XML syntax instead of the CSS syntax. And, our final HTML-alike element, <p>. It defines a paragraph of text (in this case, "Hello, World") that will be in 48-pixel-high white Verdana. And that's it, for now. Next time, we'll add some simple animation.

So, you've been tasked with creating a menu for an HD DVD and don't know where to start. Let's start out
http://blogs.msdn.com/ptorr/archive/2006/03/29/564548.aspx
Hello, I'm looking to learn to create a dedicated server application. For example, a "cmd.exe" window where I can start the server, see if there is a client connected, send a message to a client... Something like that. I saw a few topics of people trying to do something like that, but all the answers said "Use Photon". I don't want something in the cloud. My goal is to create a small game where players can connect to a server, spawn, shoot, die and leave. The server can be hosted by anyone, but I repeat, I want to learn how to make a standalone server. If someone has anything that can help me, it would be very nice. Thanks

Answer by Dev_Max · Aug 27, 2016 at 11:04 AM

I made a small "server", where I can get a Transform position. I found scripts on. First, I made a Console C# application with Visual Studio.

/*
C# Network Programming
by Richard Blum
Publisher: Sybex
ISBN: 0782141765
*/
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class SimpleUdpSrvr
{
    public static void Main()
    {
        int recv;
        byte[] data = new byte[1024];
        IPEndPoint ipep = new IPEndPoint(IPAddress.Any, 9050);
        Socket newsock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        newsock.Bind(ipep);
        Console.WriteLine("Waiting for a client...");
        IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        EndPoint Remote = (EndPoint)(sender);
        recv = newsock.ReceiveFrom(data, ref Remote);
        Console.WriteLine("Message received from {0}:", Remote.ToString());
        Console.WriteLine(Encoding.ASCII.GetString(data, 0, recv));
        string welcome = "Welcome to my test server";
        data = Encoding.ASCII.GetBytes(welcome);
        newsock.SendTo(data, data.Length, SocketFlags.None, Remote);
        while (true)
        {
            data = new byte[1024];
            recv = newsock.ReceiveFrom(data, ref Remote);
            Console.WriteLine(Encoding.ASCII.GetString(data, 0, recv));
            newsock.SendTo(data, recv, SocketFlags.None, Remote);
        }
    }
}

Second, I made in Unity a game object called UDPClient with this script attached:

using UnityEngine;
using UnityEngine.Networking;
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class UDPClient : NetworkBehaviour
{
    public Transform Cube;

    void Start()
    {
        print(Cube.position);
        byte[] data = new byte[1024];
        IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 9050);
        Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        string welcome = "Hello, what's up?";
        data = Encoding.ASCII.GetBytes(welcome);
        server.SendTo(data, data.Length, SocketFlags.None, ipep);
        IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        EndPoint tmpRemote = (EndPoint)sender;
        data = new byte[1024];
        int recv = server.ReceiveFrom(data, ref tmpRemote);
        Debug.Log(String.Format("Message received from {0}:", tmpRemote.ToString()));
        Debug.Log(String.Format(Encoding.ASCII.GetString(data, 0, recv)));
        server.SendTo(Encoding.ASCII.GetBytes(Cube.position.ToString()), tmpRemote);
        Console.WriteLine("Stopping client");
        server.Close();
    }
}

This is a small modification of the original script. And this is the result! This is not amazing, but it is a start!

I've added a Username string that is transmitted along with the Vector3, and I've set up a list of accepted Usernames. How would I disconnect players that don't have a Username that is on the list?

Answer by NFMynster · Aug 25, 2016 at 04:55 PM

A standalone server for UNet is on the roadmap for Unity. If you don't want to wait, you can create a simple version of it yourself. The network manager has the option to only host a server, and you can build functionality from there (if server host, create player list and chat, etc.)

Answer by ElementalVenom · Aug 25, 2016 at 04:58 PM

Another option would be to use TCP or UDP connections directly to do networking. It's rather simple and much more flexible than UNET. UDP: TCP: Good luck

It looks like what I'm looking for, I'm going to try!
https://answers.unity.com/questions/1231279/custom-dedicated-server-application.html
03 June 2008 22:35 [Source: ICIS news]

TORONTO (ICIS news)--Private equity firm CVC Capital Partners has bought a 25.01% stake in Germany's Evonik Industries for about €2.4bn ($3.75bn), Evonik owner RAG-Stiftung said on Tuesday, confirming earlier newspaper reports that a deal had been reached. Evonik Industries includes the former Degussa specialty chemicals business. RAG-Stiftung also said it agreed with CVC to list Evonik Industries on the stock exchange over the mid-term. CVC had offered a good price, the company had comprehensive expertise in the chemical industry, and it was experienced in listing firms in its portfolio on the stock exchange, RAG chief executive Wilhelm Bonse-Geuking said in a brief statement.
http://www.icis.com/Articles/2008/06/03/9129306/cvc-buys-25.01-stake-in-evonik-for-2.4bn.html
If we want to serve JSON data and want it to be cross-domain accessible, we can implement JSONP. This means that if we have a Groovlet with JSON output and want users to be able to access it with an AJAX request from the web browser, we must implement JSONP. JSONP isn't difficult at all, but if we implement it our Groovlet is much more useful, because AJAX requests can be made from the web browser to our Groovlet. The normal browser security model only allows calls to be made to the same domain as the web page from which the calls are made. One of the solutions to overcome this is the use of the script tag to load data. JSONP uses this method and basically lets the client decide on a bit of text to prepend to the JSON data and enclose it in parentheses. This way our JSON data is encapsulated in a JavaScript function call and is valid to be loaded by the script element! The following code shows a simple JSON data structure:

{
  "title" : "Simple JSON Data",
  "items" : [
    { "source" : "document", "author" : "mrhaki" },
    { "source" : "web", "author" : "unknown" }
  ]
}

If the client decides to use the text jsontest19201 to make it JSONP, we get:

jsontest19201({
  "title" : "Simple JSON Data",
  "items" : [
    { "source" : "document", "author" : "mrhaki" },
    { "source" : "web", "author" : "unknown" }
  ]
})

Okay, so what do we need to have this in our Groovlet? The request for the Groovlet needs to be extended with a query parameter. The value of this query parameter is the text the user decided on to encapsulate the JSON data in.
We will use the query parameter callback or jsonp to get the text and prepend it to the JSON data (notice we use Json-lib to create the JSON data):

import net.sf.json.JSONObject

response.setContentType('application/json')

def jsonOutput = JSONObject.fromObject([title: 'Simple JSON data'])
jsonOutput.accumulate('items', [source: 'document', author: 'mrhaki'])
jsonOutput.accumulate('items', [source: 'web', author: 'unknown'])

// Check query parameters callback or jsonp (just wanted to show off
// the Elvis operator - so we have two query parameters)
def jsonp = params.callback ?: params.jsonp

if (jsonp) print jsonp + '('
jsonOutput.write(out)
if (jsonp) print ')'

We deploy this Groovlet to our server. For this blog post I've uploaded the Groovlet to Google App Engine. The complete URL is. So if we get this URL without any query parameters we get:

{"title":"Simple JSON data","items":[{"source":"document","author":"mrhaki"},{"source":"web","author":"unknown"}]}

Now we get this URL again but append the query parameter callback=jsontest90210 and get the following output:

jsontest90210({"title":"Simple JSON data","items":[{"source":"document","author":"mrhaki"},{"source":"web","author":"unknown"}]})

We would have gotten the same result if we used. The good thing is users can now use, for example, jQuery's getJSON() method to get the results from our Groovlet from any web page served on any domain. The following is generated with jQuery.getJSON() and the following code:

$(document).ready(function() {
  $.getJSON('?', function(data) {
    $.each(data.items, function(i, item) {
      $("<p/>").text("json says: " + item.source + " - " + item.author).appendTo("#jsonsampleitems");
    });
  });
});

JSONP output:
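As an aside, the wrapping logic at the heart of the Groovlet above is tiny. Here is a hedged plain-Java sketch of the same idea, useful if you are not on Groovy; the class and method names are mine, not from the post:

```java
// Sketch (mine, not from the post): the core of JSONP is simply wrapping the
// JSON payload in the client-chosen callback name when one is supplied.
public class JsonpWrapper {

    // Returns the JSON untouched when no callback is given, otherwise callback(json)
    static String wrap(String json, String callback) {
        if (callback == null || callback.isEmpty()) {
            return json;
        }
        return callback + "(" + json + ")";
    }

    public static void main(String[] args) {
        String json = "{\"title\":\"Simple JSON data\"}";
        System.out.println(wrap(json, null));            // plain JSON
        System.out.println(wrap(json, "jsontest90210")); // JSONP-wrapped
    }
}
```

In a real servlet you would call something like wrap(body, request.getParameter("callback")) before writing the response, mirroring the Elvis-operator check in the Groovlet.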
http://mrhaki.blogspot.com/2009/08/serving-json-data-jsonp-way-with.html
COCI 2016/2017 round 6

Join us today on the 6th round of this year's COCI! Click to see where the coding begins in your timezone. We can discuss the problems here after the contest.

Why isn't the main site working? UPD: Fixed now

How to solve 140? Edit: wrong algorithm. It's actually possible to end up with a cycle.

Not 100% sure, but can you explain how your idea works on something like 2 3 6, where it's optimal to choose edges 2-6 and 3-6? Wouldn't your idea only consider edge 2-3 instead?

No, I'd increase cnt[6] 3 times, so 6 would find an edge with 2*3 and cost 0, and then with 3*2 and also cost 0.

When will the results be published? Currently there is a backlog of submissions waiting to be judged. Results will be published when the submission queue is empty. That will take approximately 1h.

I think in less than half an hour, because in the judge they published this: "Currently there is a backlog of submissions waiting to be judged. Results will be published when the submission queue is empty. That will take approximately 1h."

It seems that the problemsetter likes the sieve of Eratosthenes. How to solve the last problem? 120 can be solved by prime factorization, yes? Yes. Still no idea why they would put this as a 120-pointer. I used a modified sieve of Eratosthenes: What's the complexity of that algorithm? O(M log M), where M = 10^7. Just decided to elaborate more on it.

for (i = 1 .. M)
  for (j = i .. M, j += i)

This works in O(M/1 + M/2 + ... + M/M). The sum is known as the Harmonic series 1, and it is known that partial sums of these series have a "... logarithmic growth" and the "... sum of the first 10^43 terms is less than 100". So, you can assume that this works, as pllk said, in O(M log M). 1 (just copy the link)

Pretty cool, thanks for the explanation :D

My solution is not that fast, but whatever... Calculate DP[i][j] = min cost to make number j into number 1, using i steps of the operation (no lucky number). This takes O(maxA * lg(maxA)^2) time.
For each query from a to b, we use at most one lucky number when making a move. With that observation, we can reduce each of the Q * M queries to a minimum y-intercept query at position L_i, which can be solved with CHT. This whole operation takes O(maxA * lg(maxA)^2 + QM + QT * lg(maxA)^2). It runs under 1.21s in analysis mode. code (In the contest it got 0 points because my code was quite bugged)

How to solve 100? My algorithm is: count the elements which are equal to or less than arr[i], then print log2(cnt+1). I could do it in O(n) time, but I started 1 hour late, so I did it in O(n^2). But I don't know if my solution is correct or not. Sorry for bad English.

It should be log2(cnt). Also, you can calculate cnt by sorting.

If you are counting arr[i] too, then it is log2(cnt).

RESULTS ARE OUT NOW!!

LOOOOOOOOOL! First place :D, how could it be? Achievement unlocked :P

I think the tests for the fifth problem are weak. How did you solve it then? Code

For N ≤ 10^4 I did Prim's O(N^2) algo (for safety), and for bigger N I just connected all components (via DSU) with edges of cost 0, then with edges of cost 1, ... until the graph gets connected. :P Look at the code. Can someone create an anti-test?

As your array size of bool e[] was too small (M + 9, which is 10^7 + 9), it is quite easy to hack your solution with a case where the maximum weight of an edge in the MST is greater than 10; your solution may connect some incorrect vertices, like 10^7 + 13, as e[j + r] may return TRUE when j + r is greater than your array size.

Meanwhile, I originally wanted to make your solution TLE on some cases instead of WA, so I changed your array size of bool e[] to M + M, which is sufficient.

Finally, your code will TLE in cases where the number of unique elements is greater than 10^4 and the weight of an edge in the MST is quite large, as the time complexity of your solution is . For instance, your code will TLE on this case:

100000
9999999
7999999
5000000
4999999
.
.
4900003

Anyway, I found it not easy to create a case where your code (assuming the problem of illegal access to the array is fixed) will TLE. But I think it is quite disappointing that the official data set does not contain any max cases with big edge weights.

Yeah, I got TL on your test. :P I thought that the tests might be weak and not include such a test. :D

Hello, I need your help guys. Please give me any ideas how this code for problem C and this code for problem B could get SIGSEGV on some of the tests?! I am just sick of getting this verdict, I get it almost every contest!

In the code for problem C, you have a bad limit, because you have maxn = 500009 instead of maxn = 1048576, that is 2^20.

Thank you. What can you say about the B problem? Does a seg tree require a limit of N*4? I thought N*3 was more than enough.. Even then, it won't pass, because utilizing a map here takes more memory than intended. He should remove his map and use another way to compress. I sent the code with the correct limit, and it works perfectly.

For problem B, change the size of the s array to maxn*5 and get AC. I know, that's sad :'( Maybe maxn*4 also works.

maxn*4 is enough.

Guys, can anyone prove that N*3 is not enough? Because in one tutorial I have read that N*2 is already enough (it wasn't a seg tree built by loop, it was the same recursive approach..). see

I think that to find a correct limit you need to know that the Segment Tree will have a height of ⌈log2(N)⌉; now, you know that for every level in the Segment Tree the number of nodes is equal to 2^level, so the total number of nodes in the Tree is 2^(⌈log2(N)⌉ + 1) - 1. See this link

There's actually a really good implementation of segment tree that uses 2*N memory: click. It's iterative, and therefore much faster than the recursion-based segment tree. There are some problems where recursive segment trees are required. However, this should work on most problems. For this problem you actually didn't need a segment tree.
There exists a very simple greedy solution (< 15 lines lol).

I'd like to share my (approximately) O(N) solution to 120. Omg!!! o_O

O(N) is compilation time here :D

Will there be any editorial?

That feeling when one if costs you 120 points... and 11th place.

I feel dumb anyway, since I didn't use a basic Sieve of Eratosthenes in D. What I did was: for every number in the interval I used O(log N) prime factorization and used the formula f(N) = product of f(p^i), where p is every prime factor and i is its exponent. f(p^i) can be calculated in logarithmic time, so the algorithm is about O(N log N). (The solution passed in under 1s in analysis mode.)

My approach was the same as yours. After reading pllk's solution I also feel extremely stupid.

How to solve E? I used DSU to group the elements and applied a greedy approach to find the minimum result. First group all the elements by finding whose modulus gives 0, then go for 1, then 2 and so on.. till you have grouped all. My Code

Weak tests, or is there some legit proof that this works fast? Won't your code TLE on this case? UPD: By submitting your code and the input to, your code results in TLE:

I have a solution for problem Sirni that can solve up to Subtask 3 (n ≤ 10^5, p ≤ 10^6), but I don't know how to deal with p ≤ 10^7. First, remove all duplicates from array P. Then, call next_x the index of the smallest number in P that is larger than or equal to x. Now, consider index i. For each integer k, let m = k*P[i]; we will only add the edge (i, next_m). Finally, we build the MST for the graph. Why can we ignore all edges that connect index i and all indices a1, a2, ..., ak such that P[next_m] + 1 ≤ P[a1] ≤ P[a2] ≤ ... ≤ P[ak] ≤ m + P[i] - 1? Because the algorithm will eventually add edges (next_m; a1), (a1, a2), (a2, a3), ..., (ak-1, ak). So, for each j in [1; k], instead of using edge (i; aj) with cost P[aj] - m, we can use edges (i; next_m), (next_m; a1), (a1, a2), ..., (aj-1, aj) with the same cost and more benefit. So, we only need to consider edge (i, next_m).
So, we only need to consider edge (i, nextm). The maximum number of edge in the graph is , which is about . UDP: My code It's enough to change std::sort to countsort. I've modified your code a bit and it got accepted. Modified code I used the same approach and got AC(worked for p ≤ 107). It seems that you're sorting the edges in order to create the MST(please correct me if I'm wrong), this works in which wont work for p ≤ 107, but since the weight of the edges are ≤ 107, you can create an array of vectors of size 107 and add the edges to the corresponding vector, this way the complexity is . #include<bits/stdc++.h> using namespace std; #define ll long long #define f(i, x, n) for(int i = x; i < (int)(n); ++i) int x[100000], nxt[10000001], p[10000001]; vector<pair<int, int> > w[10000001]; int P(int v){ if (p[v])return p[v] = P(p[v]); return v; } int main(){ int n; scanf("%d", &n); f(i, 0, n)scanf("%d", x + i), nxt[x[i]] = x[i]; sort(x, x + n); n = unique(x, x + n) - x; for (int i = 9999999; i >= 0; --i)if (!nxt[i])nxt[i] = nxt[i + 1]; f(i, 0, n){ int t = x[i], z = nxt[t + 1]; if (z && z - t < t)w[z - t].push_back(make_pair(t, z)); for (int j = t << 1; j <= 10000000; j += t){ z = nxt[j]; if (!z)break; if (z - j < t)w[z - j].push_back(make_pair(t, z)); } } ll an = 0; int k = 0; f(i, 0, 10000000){ vector<pair<int, int> > &v = w[i]; int s = v.size(); f(j, 0, s){ int x = P(v[j].first), y = P(v[j].second); if (x == y)continue; an += i; p[y] = x; if (++k + 1 == n)break; } if (k + 1 == n)break; } printf("%lld\n", an); } With counting sort it can get accepted, UPD: Actually, I was bit late :(
http://codeforces.com/blog/entry/50241
Creates a new, empty map object. am_map_create() creates an instance of am_map_t and returns a pointer back to the caller.

#include "am_map.h"
AM_EXPORT am_status_t am_map_create(am_map_t *map_ptr);

This function takes the following parameter:

map_ptr
    Pointer specifying the location of the new map object. Be sure not to pass map_ptr as a valid am_map structure, as the reference will be lost.

This function returns one of the following values of the am_status_t enumeration (defined in the <am_types.h> header file):

- If the map object was successfully created.
- If unable to allocate memory for the new map object.
- If the map_ptr argument is NULL.
http://docs.oracle.com/cd/E19528-01/819-4676/adocx/index.html
Before we get started with some hands-on web development with Java, let's take the time to clarify a couple of important Java concepts before we move on. As mentioned in the prework for week one, we will be discussing general programming-related topics as well as teaching you applicable skills as part of this course on Java. This is slightly more complex stuff, so give this a proper readthrough!

Imagine that we created a data model (AKA class definition) that looked like this:

public class Rectangle {

  private int length;
  private int width;

  public Rectangle(int length, int width) {
    this.length = length;
    this.width = width;
  }

  public int getLength() {
    return length;
  }

  public int getWidth() {
    return width;
  }

  public boolean isSquare() {
    return length == width;
  }

  public int area() {
    return length * width;
  }
}

This is a simple class definition with two properties that we get from our user and then use to make a simple object through a constructor. We also have two getters to retrieve information about our instantiated objects, and two methods that we can call to get information about our object. One returns a boolean, one returns an int.

Note: If the above is feeling confusing, please stop, and revisit relevant lessons on objects and encapsulation from week 1 before you proceed.

The above is called a POJO in Java slang. This stands for Plain Old Java Object. As you get more experience in writing Java, writing POJOs will become second nature. Everything contained in the class is either an attribute of an instance of the class, or a method to do something with an instance. For example, getLength() returns a Rectangle's private length attribute, and isSquare() evaluates the Rectangle's length and width properties to determine if it is also a square. Essentially, all code here revolves around individual instances, or objects, of the class. This has been the case with all classes we've created thus far. So far, so good. Cool.
Consider the following: Imagine that we created a data model... no, let's stay with what we have. Pretend we're a nerd, and want to know how many Toyota Camry cars have been produced. Would we seek out this information by examining an individual instance of a Toyota Camry? Or would we attempt to gather this information by asking the factory where these vehicles are manufactured? You'd ask the manufacturer, right? After all, the red Toyota Camry down the street doesn't "know" how many other Camrys have been manufactured. But the factory responsible for creating them most likely does. Now, let's consider the Rectangle class from the example above. Pretend we're an even bigger nerd and need to know how many Rectangle objects have been created. Do we ask an individual instance of the Rectangle class? Or do we ask the class itself? Similar to the Toyota Camry example above, it makes far more sense to ask the class itself. Just like the Toyota factory, it is the entity responsible for manufacturing each of these objects. One individual Rectangle doesn't "know" how many other Rectangles have been constructed; it only knows its own length and width properties. Methods and variables that store or manipulate information about an entire class are declared static. Static means that instead of referring to or interacting with an individual instance/object, they interact with and refer to the class as a whole. It's an important distinction. To revisit our metaphor, static methods and variables are similar to the Toyota factory. They have access to information about each individual object that factory creates. Non-static methods and variables (every method and variable we've written so far) are like the specific instance of a red Toyota Camry. While it was created at the factory, it only has access to its own properties. It doesn't know about all the other Camrys ever produced.
We could declare a static property that maintains a list of all Rectangle objects we create like this:

    import java.util.List;
    import java.util.ArrayList;

    public class Rectangle {
        private int length;
        private int width;
        private static List<Rectangle> instances = new ArrayList<Rectangle>();
        ...

As you can see, we've added a new property to the Rectangle class called instances. It is:

- An ArrayList that will eventually contain all Rectangle objects created by this class.
- private in order to follow the best practices of encapsulation.
- static because the information it holds is not about a singular Rectangle, but relates to the class as a whole.

Similar to other class attributes, we must include a corresponding public getter method to retrieve the information stored in instances. But before we do that, we'll write a test:

    ...
    @Test
    public void all_returnsAllInstancesOfRectangle_true() throws Exception {
        Rectangle firstRectangle = new Rectangle(10, 20);
        Rectangle secondRectangle = new Rectangle(5, 5);
        assertTrue(Rectangle.all().contains(firstRectangle));
        assertTrue(Rectangle.all().contains(secondRectangle));
    }
    ...

A few things to note regarding the test above. First, you may notice the last two lines look different than assertions we've written previously. There are many ways to construct our JUnit assert statements. assertTrue(Rectangle.all().contains(firstRectangle)); simply asserts that the statement we pass to it returns true. This is the same as writing:

    assertEquals(true, Rectangle.all().contains(firstRectangle));

We use two assertions here because we want to ensure that the ArrayList returned by Rectangle.all() includes both test rectangles, so we write two lines to check for each one individually.

Next, we now need to add every Rectangle object we create to the instances ArrayList. But where is the best place to do this? Because we want to include every new Rectangle in instances, the logic for adding them to the ArrayList must run every time we create a Rectangle, without fail.
What method is always called when we create instances of a class? The constructor! We'll add the following code to our Rectangle constructor:

    public class Rectangle {
        private int length;
        private int width;
        private static List<Rectangle> instances = new ArrayList<Rectangle>(); // all Rectangle types "share" this list

        public Rectangle(int length, int width) {
            this.length = length;
            this.width = width;
            instances.add(this); // automatically adds itself on creation!
        }
        ...

When we are working inside of the constructor, we can use the keyword this to reference the object being constructed. This is similar to the manner in which we used this in JavaScript during Intro to Programming. So, each time we call our constructor method, it constructs a new Rectangle. Now, it also adds this new Rectangle to instances.

Next, we can define the corresponding getter method to retrieve the instances list:

    ...
    public static List<Rectangle> all() {
        return instances; // this is all rectangles, ever!
    }
    ...

Like any getter method, it returns the private instances attribute. However, unlike other getter methods, it is declared static. Both variables and methods that deal with entire classes must be declared static. After adding the code above, our test passes.

Additionally, because the method we've just written is static, it must be called on the class itself and not a particular instance. We saw this in the JUnit test we wrote above. Calling a method directly on a class looks like this:

    Rectangle.all(); // note the uppercase: calling the method on the CLASS ITSELF!

NOT like this:

    Rectangle testRectangle = new Rectangle(15, 13);
    testRectangle.all();

As you can see, all() is called directly on the capitalized class name. Remember, static methods and variables deal with the entire class, not a particular instance. In upcoming lessons, we'll practice creating and using static variables and methods further, including integrating them into a Spark user interface when we add them to our To Do List application.
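Putting the pieces of this lesson together, here is a minimal, self-contained sketch of the Rectangle class with its static instances list. The class and method names follow the lesson; the main method is purely an illustrative driver we added and is not part of the lesson's code:

```java
import java.util.ArrayList;
import java.util.List;

public class Rectangle {
    private int length;
    private int width;

    // Shared by every Rectangle; tracks each instance ever constructed.
    private static List<Rectangle> instances = new ArrayList<Rectangle>();

    public Rectangle(int length, int width) {
        this.length = length;
        this.width = width;
        instances.add(this); // every new Rectangle registers itself
    }

    public int getLength() { return length; }
    public int getWidth() { return width; }
    public boolean isSquare() { return length == width; }
    public int area() { return length * width; }

    // Static getter: called on the class itself, e.g. Rectangle.all()
    public static List<Rectangle> all() {
        return instances;
    }

    public static void main(String[] args) {
        Rectangle first = new Rectangle(10, 20);
        Rectangle second = new Rectangle(5, 5);
        System.out.println(Rectangle.all().size());          // 2
        System.out.println(Rectangle.all().contains(first)); // true
        System.out.println(second.isSquare());               // true
    }
}
```

Note that main asks the class (Rectangle.all()) for the list of instances, but asks an individual object (second.isSquare()) about its own properties — exactly the factory-versus-car distinction from the Toyota metaphor.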
Example GitHub Repo for Rectangle Application

Remember, m stands for member, referring to a particular instance of a class. Static variables hold information about the class itself, not a particular instance.
Feature #15230 (closed): RubyVM.resolve_feature_path

Description

I'd like a feature to know what will be loaded by require(feature) without actually loading it.

    $ ./local/bin/ruby -e 'p RubyVM.resolve_feature_path("set")'
    [:r, "/home/mame/work/ruby/local/lib/ruby/2.6.0/set.rb"]
    $ ./local/bin/ruby -e 'p RubyVM.resolve_feature_path("etc")'
    [:s, "/home/mame/work/ruby/local/lib/ruby/2.6.0/x86_64-linux/etc.so"]

This feature is useful for a static analysis tool of Ruby programs. It might also be useful to check $LOAD_PATH configuration. I don't think that RubyVM is the best place to have this method, but it is a good place to experiment with the new feature. Kernel#resolve_feature_path looks too aggressive.

    diff --git a/load.c b/load.c
    index ddde2baf3b..dd609105ee 100644
    --- a/load.c
    +++ b/load.c
    @@ -942,6 +942,26 @@ load_ext(VALUE path)
         return (VALUE)dln_load(RSTRING_PTR(path));
     }
     
    +VALUE
    +rb_resolve_feature_path(VALUE klass, VALUE fname)
    +{
    +    VALUE path;
    +    int found;
    +    char s[2];
    +
    +    fname = rb_get_path_check(fname, 0);
    +    path = rb_str_encode_ospath(fname);
    +    found = search_required(path, &path, 0);
    +
    +    if (!found) {
    +        load_failed(fname);
    +    }
    +
    +    s[0] = found;
    +    s[1] = 0;
    +    return rb_ary_new_from_args(2, ID2SYM(rb_intern2(s, 1)), path);
    +}
    +
     /*
      * returns
      *  0: if already loaded (false)
    diff --git a/vm.c b/vm.c
    index fababaa2ec..2a72d16f47 100644
    --- a/vm.c
    +++ b/vm.c
    @@ -2834,6 +2834,8 @@ static VALUE usage_analysis_operand_stop(VALUE self);
     static VALUE usage_analysis_register_stop(VALUE self);
     #endif
     
    +VALUE rb_resolve_feature_path(VALUE klass, VALUE fname);
    +
     void
     Init_VM(void)
     {
    @@ -3140,6 +3142,8 @@ Init_VM(void)
     
         /* vm_backtrace.c */
         Init_vm_backtrace();
    +
    +    rb_define_singleton_method(rb_cRubyVM, "resolve_feature_path", rb_resolve_feature_path, 1);
     }
     
     void

Related issues

Updated by Eregon (Benoit Daloze) over 2 years ago

What's the leading one-letter Symbol in the return value? That seems fairly cryptic. Do you need it? I would expect such a method to return a path, i.e., a String.
Updated by mame (Yusuke Endoh) over 2 years ago

:r means .rb and :s means .so, I guess :-) It is not absolutely necessary. But it would be somewhat useful for a static analysis tool of Ruby programs because such a tool typically needs to skip .so files.

Updated by Eregon (Benoit Daloze) over 2 years ago

If it's reliable enough (I think it is) to detect native extensions by the file extension (.so, .dylib) then I think that should be preferred, as it would simplify the method's result.

Updated by duerst (Martin Dürst) over 2 years ago

mame (Yusuke Endoh) wrote:

> :r means .rb and :s means .so, I guess :-)

If this information is kept, please make it easier to understand. :rb/:so or :'.rb'/:'.so' at a minimum.

Updated by mame (Yusuke Endoh) over 2 years ago

- Status changed from Assigned to Closed

Applied in changeset trunk|r66237.

load.c (RubyVM.resolve_feature_path): New method. [Feature #15230]

Updated by znz (Kazuhiro NISHIYAMA) over 2 years ago

It returns false as the path when the feature is already loaded. Is this intentional?

    % rbenv exec irb --simple-prompt -r irb/completion
    >> RubyVM.resolve_feature_path('set')
    => [:rb, false]

Updated by Eregon (Benoit Daloze) about 2 years ago

mame (Yusuke Endoh), should the behavior that znz (Kazuhiro NISHIYAMA) reports be fixed? resolve_feature_path sounds like it should always resolve a feature, no matter whether the feature is loaded or not.

Updated by Eregon (Benoit Daloze) about 2 years ago

- Related to Feature #15903: Move RubyVM.resolve_feature_path to Kernel.resolve_feature_path added

Updated by mame (Yusuke Endoh) about 2 years ago

Eregon (Benoit Daloze) wrote:

> mame (Yusuke Endoh), should the behavior that znz (Kazuhiro NISHIYAMA) reports be fixed?

I hear nobu (Nobuyoshi Nakada) has already fixed the issue. (Thanks!)

Updated by Eregon (Benoit Daloze) about 2 years ago

Great!
Overview

In this tutorial we will show you how to set up an Arduino Uno to become a GPS-enabled air pollution monitor. Then we will walk you through augmenting the generated data to include the street address nearest to each GPS position using the HERE Reverse Geocoding API, and finally we will show you how to visualize the results in HERE XYZ Studio to publish an interactive map like this.

View this map with HERE XYZ Studio.

Our Hackathon Problem

Many air quality devices are statically mounted. These devices typically cannot travel. We wondered, what if we made the tools and information needed to understand the severity of air pollution available on the go? How could we create a system that could give live updates or live readings of the pollution level in a given area? Thus came the idea to prototype a hardware-based solution to collect the data and then use the HERE XYZ platform to provide a visual representation and interpretation of the pollution levels.

In this tutorial we will show you how to:

- Set up and configure an Arduino Uno with an Air Quality Sensor and GPS
- Calibrate the Air Quality Sensor
- Collect the data
- Augment the data with HERE's Reverse Geocoding API
- Visualize the results using HERE XYZ Studio

Hardware

Many of these components can be found readily available through many outlets. We are not affiliated with any of the links below.
- Arduino UNO (microcontroller)
- GPS Module (NMEA) with Active GPS Antenna 3-5V 28dB 5-meter SMA
- Gas Sensor (MQ-135)
- Battery Adapter and 9V Battery

Software

- Arduino Studio
- TinyGPS++
- MQ135 Calibration
- Eclipse

Calibrating the gas sensor

By default, the gas sensor gives out raw analog/digital values; however, to get PPM values we need to calibrate the gas sensor. To calibrate the sensor, we need to use a library (MQ135.h) which can be downloaded from the following link.

Next, we connected the gas sensor to the Arduino board as shown below and then ran the following code using Arduino Studio. We monitored the results and let it run for a minute using the Serial Console. We copied the last result (in our case 494.63) into the MQ135.h library file. This value is then used to help the library produce PPM values that we will use for the visualization part of this tutorial.

    #include "MQ135.h"

    void setup() {
        // Baud rate might differ for your GPS module
        Serial.begin(9600);
    }

    void loop() {
        MQ135 gasSensor = MQ135(A0); // Attach sensor to pin A0
        float rzero = gasSensor.getRZero();
        Serial.println(rzero);
        // Setting 1 sec delay
        delay(1000);
    }

Once you get a value, insert it into the MQ135 library header file (MQ135.h). This will provide a calibrated reading in Parts Per Million (PPM).

    #define RZERO 494.63

Once we had the gas sensor calibrated, we connected the hardware, then used the following code and deployed it on the Arduino board using the Arduino IDE (please see the hardware wiring diagram above).
Arduino Code:

    // This code is based on the TinyGPS++ Library example named "DeviceExample"
    #include <TinyGPS++.h>
    #include <SoftwareSerial.h>
    #include "MQ135.h"

    TinyGPSPlus gps;
    SoftwareSerial ss(4, 3); // Assigning pins 3 & 4 of arduino as TX & RX

    void setup() {
        Serial.begin(9600); // setting baud rates
        ss.begin(9600);
        Serial.println("Latitude, Longitude, Date, PPM");
    }

    void loop() {
        while (ss.available() > 0) {
            // This section will print out the GPS Coordinates
            if (gps.encode(ss.read())) {
                Serial.print(gps.location.lat(), 6);
                Serial.print(F(","));
                Serial.print(gps.location.lng(), 6);
                Serial.print(F(","));
                if (gps.date.isValid()) {
                    Serial.print(gps.date.year());
                    Serial.print(F("/"));
                    Serial.print(gps.date.month());
                    Serial.print(F("/"));
                    Serial.print(gps.date.day());
                    Serial.print(" ");
                    Serial.print(gps.time.hour());
                    Serial.print(":");
                    Serial.print(gps.time.minute());
                    Serial.print(":");
                    Serial.print(gps.time.second());
                    Serial.print(",");
                } else {
                    Serial.print(F("INVALID"));
                }
                // This section will print out the PPM Values from the MQ135
                MQ135 gasSensor = MQ135(A0); // Read values from A0 pin of arduino
                float air_quality = gasSensor.getPPM(); // Using MQ135.h library to get PPM values
                Serial.print(air_quality);
                Serial.println();
            }
        }
    }

Working with captured readings and preparing them for HERE XYZ Studio

This completes the hardware setup; then began a little legwork. To begin collecting data, we used CoolTerm to output the values from the Arduino Uno to a CSV file. You can configure CoolTerm to do this by capturing the serial output to a file.

Next, we connected to the Arduino and started recording our journey. We travelled around for about 30 minutes to get live readings of the pollution level along with latitude & longitude. Once complete, we disconnected from the Arduino and the file was saved to our Desktop.

GPS locations are useful, but we wanted to see the relative address of each data point. To get the addresses we need to do a "Reverse Geocode Lookup", so we created a REST client (Java code below).
This program calls HERE's Reverse Geocoding API to conduct a reverse lookup and give us address information. To use these APIs, you need to obtain a HERE App ID & App Code. You will need to create a HERE Developer Account. Sign up here.

Reverse Geocoding Java Program

Using your HERE Developer Account, insert your App ID & App Code below and then run this file from Eclipse. For a HERE Developer Account go to. You will also need to hardcode the location of the file you outputted while traveling around collecting data.

    package geocoder;

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.json.JSONArray;
    import org.json.JSONObject;

    /*
     * About Geocoder
     *
     * This is an application that reads in a CSV file where the column format is as follows:
     * [Latitude],[Longitude],[Date],[PPM]
     *
     * This application reads the file in and then for each line creates a request to HERE's
     * reverse Geocoder API and then waits for a response.
     *
     * REQUIREMENTS to run this code:
     * - You will need to hard code the location of the input file ("csvFile" variable below).
     * - You will also need to hard code the location of the output file ("outfile" variable below).
     * - You will also need to insert your HERE Developer AppID and AppCode in the "url" variable below.
     *
     * Potential Extensions:
     * - Turn the REST calls into an asynchronous process
     * - Try using HERE's Batch Reverse Geocoder API
     * - Open up a serial connection to the Arduino board and process the incoming data in
     *   real time while asynchronously emitting the data to XYZ.
     */
    public class Geocoder {

        private static final String USER_AGENT = "Mozilla/5.0";

        // Your csv file with captured data
        static String csvFile = "[INSERT PATH TO INPUT CSV]";

        public static void main(String[] args) {
            String line = "";
            String cvsSplitBy = ",";
            String[] capturedData = null;

            writeToCsv("Latitude", "Longitude", "Date & Time", "PPM Values", "Address");

            try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
                // will skip first line
                String hearderLine = br.readLine();

                while ((line = br.readLine()) != null) {
                    // use comma as separator
                    capturedData = line.split(cvsSplitBy);

                    // URL that we are calling
                    String url = "" + capturedData[0] + "," + capturedData[1]
                            + "&mode=retrieveAddresses" + "&app_id=[APP ID]" + "&app_code=[APP CODE]";

                    String latitude = capturedData[0];
                    String longitude = capturedData[1];
                    String datetime = capturedData[2];
                    String ppmValues = capturedData[3];

                    // Creating an object of URL Class
                    URL obj = new URL(url);
                    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

                    // optional, default is GET
                    con.setRequestMethod("GET");

                    // add request header
                    con.setRequestProperty("User-Agent", USER_AGENT);

                    // Print this response code if you are facing any issues
                    int responseCode = con.getResponseCode();

                    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
                    String inputLine;
                    StringBuffer response = new StringBuffer();
                    while ((inputLine = in.readLine()) != null) {
                        response.append(inputLine);
                    }

                    // Getting the response in String
                    String jsonobj = response.toString();

                    // Reading JSON output using json.org library.
                    // Its recommended to study the JSON response from server prior to
                    // extracting the required output. Refer link
                    JSONObject json = new JSONObject(jsonobj);
                    JSONObject response1 = json.getJSONObject("Response");
                    JSONArray View = response1.getJSONArray("View");
                    JSONObject obj1 = View.getJSONObject(0);
                    JSONArray result = obj1.getJSONArray("Result");
                    JSONObject obj2 = result.getJSONObject(0);
                    JSONObject location =
                            obj2.getJSONObject("Location");
                    JSONObject address = location.getJSONObject("Address");
                    String label = address.getString("Label");

                    in.close();

                    // print result
                    System.out.println(label);

                    // write to output file
                    writeToCsv(latitude, longitude, datetime, ppmValues, label);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        private static void writeToCsv(String latitude, String longitude, String datetime,
                String ppmValues, String label) {
            try {
                // Your output file path
                File outfile = new File("[INSERT PATH TO OUTPUT CSV]");
                FileWriter fw = new FileWriter(outfile, true);
                BufferedWriter bw = new BufferedWriter(fw);
                PrintWriter pw = new PrintWriter(bw);
                pw.println(latitude + "," + longitude + "," + datetime + "," + ppmValues + "," + "\"" + label + "\"");
                pw.flush();
                pw.close();
                fw.close();
                bw.close();
            } catch (Exception E) {
                System.out.println(E);
            }
        }
    }

Once the program completes, your data file will have been updated to include the addresses nearest to the GPS coordinates captured while traveling. Next, we will show you how to upload the data into HERE XYZ Studio and configure the visualization so that it is more easily interpreted.

About the Data Recorded

The MQ135 sensor is recording Benzene, Alcohol & NH3 to produce a single reading. We don't know how much of each gas was detected, but we can still visualize the results with a few calculations. The sensor needs to be calibrated so that a PPM value can be used (this was done in the calibration step above). For our visualization we determined the minimum, maximum and average PPM values from the recorded data to produce the color scheme that you see in the map below. We used Excel to find these values. In our case the low was 250, the high was 2270 and the average was 500. With this information we can complete our map.

Visualizing with XYZ Studio

After we obtained, assembled and augmented the data, the next step was to feed it into HERE XYZ Studio for visualization.
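The article uses Excel to find the minimum, maximum and average PPM readings; the same summary can be computed with a few lines of code. The sketch below is a hypothetical helper of our own (the class and method names are not from the original project), assuming the CSV layout produced above — Latitude, Longitude, Date, PPM — with a header row:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class PpmStats {
    // Scans a CSV with header "Latitude,Longitude,Date,PPM" and returns
    // {min, max, average} of the 4th (PPM) column.
    public static double[] summarize(BufferedReader reader) throws IOException {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
        int count = 0;
        String line = reader.readLine(); // skip the header row
        while ((line = reader.readLine()) != null) {
            String[] cols = line.split(",");
            double ppm = Double.parseDouble(cols[3]);
            min = Math.min(min, ppm);
            max = Math.max(max, ppm);
            sum += ppm;
            count++;
        }
        return new double[] { min, max, sum / count };
    }

    public static void main(String[] args) throws IOException {
        // Illustrative in-memory sample; point this at your captured CSV instead.
        String csv = "Latitude,Longitude,Date,PPM\n"
                   + "19.07,72.87,2018/09/01 10:00:00,250.0\n"
                   + "19.08,72.88,2018/09/01 10:00:01,500.0\n"
                   + "19.09,72.89,2018/09/01 10:00:02,2270.0\n";
        double[] stats = summarize(new BufferedReader(new StringReader(csv)));
        System.out.println("min=" + stats[0] + " max=" + stats[1] + " avg=" + stats[2]);
    }
}
```

The min and max feed directly into the "Minimum" and "Maximum" point styles in XYZ Studio below, and the average anchors the below-average/average/above-average color rules.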
HERE XYZ supports CSV and other formats, so all we had to do was create a new map and upload our dataset. Go ahead and log into HERE XYZ Studio (if you don't have an account, sign up — it is free), click "Create new project" and personalize the project's Name and Description. Next click "Add".

Next, we want to upload our dataset. Click "Upload Files", navigate to your dataset, and once complete click "Done" to return to your map. Your map will look something like ours does in the image below. What you are seeing is a bunch of dots all bunched together without any color coding. Our next step is to add color to represent the Low and High readings. Then we will add some rules to represent the range of data between our High, Average and Low readings.

On the next screen tap "Points" to reveal the point configuration menu. Next click "Add new point style". This will reveal the style configuration menu where we will define our "Maximum" value like this. Once you click "Confirm" your map will look like this. What you are seeing here is a raw data representation where none of the data points have been styled.

Next, we will change the colors on the Maximum value to purple so that we can easily see the data. (If you don't see a color chooser like below, click on "Data1" to reveal the Point Configuration menu, then click Points and then "Maximum" to reveal the color chooser below.) Click on the colors and choose purple. You will want to do this for the minimum value of your dataset as well.

Next, we created more complex rules for color coding to represent ranges of data (below average, average and above average). The average configuration looks like this. For below and above average you would repeat this step with different values. Soon your map will start coming together and look like this.
(Depending on the order of the Color Configuration rules you may need to re-sort your rules to look like this, so that the Minimum and Maximum rules are executed first.)

You can also inspect each point by selecting it to see the Address, GPS position and PPM values, but first you'll need to select the "Cards" pulldown and then drag the "Address" field into the area with "PPM and Date".

Now your map is complete! Let's show you how to publish and share it with your friends. Click on the "Publish" button in the bottom right of the main map screen, then tap the button to the right of Publish. Next make any configurations you like; we tend to turn on Description and Legend. Once complete, scroll to the bottom and tap the "Copy" link next to the URL. This will copy the link to your clipboard and you are now complete.

Conclusion

The goal of our project was to showcase the possibilities we can achieve by combining a simple mobile hardware setup with HERE APIs and HERE XYZ Studio to develop a solution that could be helpful to a lot of sectors and individuals.

About the Authors

Harishjitu Saseendran, based out of HERE's Mumbai office, holds engineering degrees in Electronics and Telecommunication. He also has experience in Java, PHP, MySQL, Python, Android and Arduino programming. Most of his spare time is spent creating Android-based applications and IoT projects. He likes to get his hands on different tech, leading him to create and evolve innovative solutions like this Air Pollution Monitoring System.

Mayuresh Sarang is an engineer in Computer Science. He spends most of his time developing web applications for HERE Technologies, based out of Mumbai. His experience excels in Java EE, JSP Servlets, Spring Boot and Hibernate. He has sound knowledge of building RESTful web services and REST clients, and he likes creating smart IoT-based solutions, which gave him the idea of monitoring air pollution using Arduino.
Aamer Khan, based out of HERE's Mumbai office, has a bachelor's in computer science and is a strong believer in how modern technologies can better the world. He has experience in Visual Basic and excels in creating advanced Excel macros for automation. He enjoys writing and sharing experiences with his readers. He assisted in documenting this project and providing input to the team.

Editor's Note: This is a guest post from a HERE team based out of Mumbai, India. This project came from an internal Hackathon presentation and we would like to share it with you. If you would like to contribute to the HERE Developer Blog, please reach out to us via DM @HEREDev on Twitter.
Can I add the SQLite.NET component to my cross-platform Xamarin.Forms project? I don't see any option to add components; I can only do that for the native Android/iOS projects.

Answers

There is a PCL sqlite library you can use.

@ken_tucker actually, I've just found out that the SQLite.NET component suggested by Xamarin does have a PCL package on NuGet. Thanks a lot

@ken_tucker nevermind, the one I posted doesn't work, I will try what you suggested. I'm an idiot. There is a PCL version: Edit: Which doesn't seem to be working:

    System.Exception: Something went wrong in the build configuration. This is the bait assembly, which is for referencing by portable libraries, and should never end up part of the app. Reference the appropriate platform assembly instead.

@SpaceMonkey Have you been able to resolve the exception? I'm getting the same thing. From what I understand it's because the platform-specific sqlite implementation isn't being loaded, but I haven't been able to figure out how to fix that.

@ScottMacRitchie I don't even remember why I said what I said in the previous comment. Anyway: Add SQLite-Net-PCL to the PCL project and the native project too. In your PCL, when you need to initialise a new connection you'll need an ISQLitePlatform implementation. You do that using dependency injection: So in the PCL you're doing: and in the native project (Android in my case) you're doing:

Thanks @SpaceMonkey I receive an SQLite exception when creating a connection. This worked before I installed VS2015 RTM. Client (PCL): Android project: I receive an exception when executing: NOTE: SQLitePCL.raw_basic is on 0.7.1. I get errors whenever I attempt to upgrade the version to 0.8.1. Again, this all worked before I installed VS2015 RTM. Any suggestions?

@ScottNimrod Are you using this: The way I explained in the previous comment?

@SpaceMonkey I followed your steps. But I am confused. I just don't know what to do with the platform object.
    var platform = DependencyService.Get().GetSqlite();

It's not apparent to me how I am to perform any CRUD operation with this platform interface. Any suggestions?

@ScottNimrod Creating a connection requires an object implementing ISQLitePlatform. You should pass the object you get using DependencyService.

Refer to the link below to get a working sample about SQLite.NET: C-Sharp Corner: Interacting With Local Database in Xamarin.Forms

After installing SQLite-net-pcl, I do not see the SQLite.Net namespace, only the SQLite namespace — why? thanks

jeffchen.5589 Did you figure out where SQLite.Net is?

For me the fix was to uninstall the package SQLite-net-pcl and install SQLite-Net-PCL. Notice the difference in punctuation. I did this in the PCL and platform projects.
Don't understand this generic method

I am working through the O'Reilly book "Java Generics and Collections". In the "Comparison and Bounds" chapter, they provide an example method that takes a Comparator of a given type as an argument, and returns a Comparator of lists of that type. Here's the code:

    import java.util.*;

    public class Test {
        public static <E> Comparator<List<E>> listComparator(final Comparator<E> cmp) {
            return new Comparator<List<E>>() {
                public int compare(List<E> list1, List<E> list2) {
                    int n1 = list1.size();
                    int n2 = list2.size();
                    for (int i = 0; i < Math.min(n1, n2); i++) {
                        int k = cmp.compare(list1.get(i), list2.get(i));
                        if (k != 0) return k;
                    }
                    return (n1 < n2) ? -1 : (n1 == n2) ? 0 : 1;
                }
            };
        }
    }

I'm trying to understand how this method works... I think maybe there are some language components/rules in use here that I'm just not familiar with. Anyway, the part that has me scratching my head is the nested compare method - where are the list1 and list2 arguments coming from? Can anyone break down for me line by line what is going on here? Thanks in advance!
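Regarding where list1 and list2 come from: nobody passes them at the point where listComparator is called. The method only builds and returns an anonymous Comparator object; list1 and list2 are supplied later, by whatever code eventually invokes compare on that object — typically a sort routine. Here is a hypothetical usage sketch (the demo class name and sample data are ours, not from the book):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class ListComparatorDemo {
    // Same shape as the book's method: build a Comparator for lists
    // out of a Comparator for individual elements.
    public static <E> Comparator<List<E>> listComparator(final Comparator<E> cmp) {
        return new Comparator<List<E>>() {
            public int compare(List<E> list1, List<E> list2) {
                int n1 = list1.size(), n2 = list2.size();
                for (int i = 0; i < Math.min(n1, n2); i++) {
                    int k = cmp.compare(list1.get(i), list2.get(i));
                    if (k != 0) return k;
                }
                return (n1 < n2) ? -1 : (n1 == n2) ? 0 : 1;
            }
        };
    }

    public static void main(String[] args) {
        Comparator<Integer> byValue = new Comparator<Integer>() {
            public int compare(Integer a, Integer b) { return a.compareTo(b); }
        };
        Comparator<List<Integer>> byList = listComparator(byValue);

        // Collections.sort is the caller that repeatedly passes pairs of
        // lists into the anonymous class's compare(list1, list2) method.
        List<List<Integer>> lists = new ArrayList<List<Integer>>();
        lists.add(Arrays.asList(1, 2, 3));
        lists.add(Arrays.asList(1, 2));
        lists.add(Arrays.asList(0, 9));
        Collections.sort(lists, byList);
        System.out.println(lists); // [[0, 9], [1, 2], [1, 2, 3]]
    }
}
```

The ordering is lexicographic: compare element by element with cmp, and if one list is a prefix of the other, the shorter list sorts first.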
In this blog post, Oracle Linux kernel developers Alexandre Chartre and Konrad Rzeszutek Wilk give an update on the Spectre v1 and L1TF software solutions.

In August of 2018 the L1TF speculative execution side channel vulnerabilities were presented (see Foreshadow – Next Generation (NG)). However, the story is more complicated. In particular, an explanation in L1TF - L1 Terminal Fault mentions that if hyper-threading is enabled and the host is running an untrusted guest, there is a possibility of one thread snooping on the other thread. Also, the recent Microarchitectural Data Sampling disclosures (aka Fallout, aka RIDL, aka ZombieLoad) demonstrated that there are more low-level hardware resources shared between hyperthreads.

Guests on Linux are treated the same way as any application in the host kernel, which means that the Completely Fair Scheduler (CFS) does not distinguish whether guests should only run on specific threads of a core. Patches for core scheduling provide such a capability but, unfortunately, their performance is rather abysmal and, as Linus mentions:

Because performance is all that matters. If performance is bad, then it's pointless, since just turning off SMT is the answer.

However, turning off SMT (hyperthreading) is not a luxury that everyone can afford. But then, with hyperthreading enabled, a malicious guest running on one hyperthread can snoop on the other hyperthread if the host kernel is not hardened.

There are two pieces of exploitation technologies combined:

In fact, a Proof of Concept has been posted (RFC x86/speculation: add L1 Terminal Fault / Foreshadow demo) which does exactly that.
The reason this is possible is that hyperthreads share CPU resources, and a well-timed attack can occur in the window between the time we exit to the hypervisor and the time we go back to running the guest:

This is how an attacker can leak kernel data: by using a combination of Spectre v1 code gadgets and an L1TF attack in the small VMEXIT windows that a guest can force.

As mentioned, disabling hyperthreading automatically solves the security problem, but that may not be a solution as it halves the capacity of a cluster of machines.

All the solutions revolve around the idea of allowing code gadgets to exist, but making them either unable to execute in the speculative path, or able to execute while only collecting non-sensitive data.

The first solution that comes to mind is: can we inhibit the secondary thread from executing code gadgets? One naive approach is to simply always kick the other sibling whenever we enter the kernel (or hypervisor) and have the other sibling spin until we are done in a safe space. Not surprisingly, the performance was abysmal. Several other solutions that followed this path have been proposed, including:

Both of those follow the same pattern - lock-step entering the kernel (or hypervisor) when needed on both threads. This mitigates the Spectre v1 issue because the guest or user space program is not able to leverage it - but it comes with unpleasant performance characteristics (on some workloads, worse performance than turning hyperthreading off).

Another solution involves proactively patching the kernel for Spectre v1 code gadgets, along with meticulous nanny-sitting of the scheduler to never schedule one customer's guests alongside another customer's guests on sibling threads, and other low-level mitigations not explained in this blog. However, that solution does not solve the problem of the host kernel being leaked using Spectre v1 code gadgets and an L1TF attack (see "Details of data leak exploit" above).
But what if we just remove sensitive data from being mapped into that virtual address space to begin with? This would mean that even if the code gadgets were found, they would never be able to bridge the gap to the attacker-controlled signal array.

One idea that has been proposed in order to reduce sensitive data is to remove from kernel memory those pages that solely belong to a userspace process and that the kernel doesn't currently need to access. This idea is implemented in a patch series called XPFO that can be found here: Add support for eXclusive Page Frame Ownership, earlier explained in 2016 in Exclusive page-frame ownership and the original author's patches Add support for eXclusive Page Frame Ownership (XPFO).

Unfortunately, this solution does not help with protecting the hypervisor from having data leaked; it just protects user space data. For guest-to-guest protection that is enough - even if a naughty guest caused the hypervisor to speculatively execute Spectre v1 code gadgets while spilling hypervisor data using an L1TF attack, the hypervisor at that point has only the naughty guest's memory mapped on the core and not the other guests' memory on the same core - so only hypervisor data and other guests' vCPU register data are leaked.

"Only" is not good enough - we want better security. And if one digs deeper there are also other issues, such as a non-trivial performance hit as a result of TLB flushes, which makes it slower than just disabling hyperthreading. Also, if vhost is used then XPFO does not help at all, as each guest's vhost thread ends up mapping the guest memory in the kernel virtual address space, re-opening the can of worms.

Process-local memory allocations (v2) addresses this problem a bit differently - mainly in that each process has a kernel virtual address space slot (local, or more of a secret area) in which the kernel can squirrel away sensitive data on behalf of the process.
The patches focus on only one module (KVM), which would save guest vCPU registers in this secret area. Each guest is considered a separate process, which means that each guest is precluded from touching the other guests' secret data. The "goal here is to make it harder for a random thread using cache load gadget (usually a bounds check of a system call argument plus array access suffices) to prefetch interesting data into the L1 cache and use L1TF to leak this data." However, issues remain: all of the guests' memory is still globally mapped inside the kernel, and kernel memory itself can still be leaked to the guest. This is similar to XPFO in that it is a black-list approach: we decide on specific items in the kernel virtual address space and remove them. And it falls short of what XPFO does (XPFO removes the guest memory from the kernel address space). Combining XPFO with process-local memory allocations would provide much better security than using either separately.

Address Space Isolation is a new solution which isolates restricted/secret and non-secret code and data inside the kernel. This effectively introduces a firewall between sensitive and non-sensitive kernel data while retaining performance (we hope). The design is inspired by Microsoft's Hyper-V HyperClear Mitigation for L1 Terminal Fault. Liran Alon, who sketched out the idea, described it as follows:

The most naive approach to prevent the SMT attack vector is to force sibling hyperthreads to exit every time one hyperthread exits. But it introduces an impractical performance hit. Therefore, the next thought was to just remove what could be leaked to begin with. We assume that everything that could be leaked is mapped into the virtual address space that the hyperthread is executing in after it exits to the host, because we assume that leakable CPU resources are only loaded with sensitive data from the virtual address space. This is an important assumption.
Going forward with this assumption, we need techniques to remove sensitive information from the host virtual address space. The XPFO and kernel process-local memory patch series go with a black-list approach, explicitly removing specific parts of the virtual address space that we consider sensitive. The problem with this approach is that we may be missing something, so a white-list approach is preferable. At this point, inspired by Microsoft HyperClear, KVM ASI came about. The unique distinction of KVM ASI is that it creates a separate virtual address space, built with a white-list approach, for most exits to the host: we map only the minimum information necessary to handle these exits and do not map sensitive information. Some exits may require more, or sensitive, information; in those cases we kick the sibling hyperthreads and switch to the full address space.

QEMU and the KVM kernel module work together to manage a guest, and each guest is associated with a QEMU process. From userspace, QEMU uses the KVM_RUN ioctl (#1 and #2) to request that KVM run the VM (#3) from the kernel using Intel Virtual Machine Extensions (VMX). When an event causes the VM to return to KVM (VM-Exit, step #4), KVM handles the VM-Exit (#5) and then transfers control to the VM again (VM-Enter). See below:

However, most of the KVM VM-Exit handlers only need to access per-VM structures and KVM/vmlinux code and data that is not sensitive. These KVM VM-Exit handlers can therefore run in an address space different from the standard kernel address space. So we can define a KVM address space, separate from the kernel address space, which maps only the code and data required to run these KVM VM-Exit handlers (#5, see below). This provides a white-list of exactly what could be leaked while running the KVM VM-Exit code (yellowish in the picture below).
When the KVM VM-Exit code (#5a, see below) reaches a point where it architecturally needs to access sensitive data (data therefore not mapped in this isolated virtual address space), it kicks all sibling hyperthreads out of the guest and switches to the full kernel address space. This kicking guarantees that no untrusted guest code is running on sibling hyperthreads while KVM brings data into the L1 cache with the full kernel address space mapped. This happens, for example, when KVM needs to return to QEMU or the host needs to run an interrupt handler. Note that KVM flushes the L1 cache before VM-Enter back into guest code, to ensure nothing is leaked via the L1 cache back to the guest.

In effect, we have made the KVM module a less privileged kernel module. That has three fantastic side-effects: The guest already knows about the guest data on which KVM operates most of the time, so leaking it to the guest is acceptable. If an attacker does exploit a code gadget, it will only be able to run in the KVM module address space, not outside of it. A nice side-effect of ASI is that it can also help against ROP exploitation and architectural (not speculative) info-leak vulnerabilities, because much less information is mapped in the exit-handler virtual address space. If the KVM module needs to access restricted data or routines, it must switch to the full kernel page table and also bring the sibling back into the kernel, so that the other thread cannot insert code gadgets and slurp data.

The first version, posted back in May, RFC KVM 00/27 KVM Address Space Isolation, received many responses from the community. These patches, RFC v2 00/27 Kernel Address Space Isolation posted by Alexandre Chartre, are the second step. They are posted as a Request For Comments, soliciting guidance from the Linux kernel community on how they would like this to be done.
The framework is more generic, with the first user being KVM, but it could very well be extended to other modules. We would also like to thank the following folks for help with this article:
https://blogs.oracle.com/linux/improve-security-with-address-space-isolation-asi
i forgot to put DO in my program. since i was instructed to use DO-WHILE LOOP. i still have the same output if i use this code. But better to have the DO inserted.

output: 0 0 2 0 2 4 0 2 4 6 0 2 4 6 8 0 2 4 6 8 10

#include <iostream>
using namespace std;

int main() {
    int i = 10;
    int x = 2;
    int y = 0;
    while (x <= i) {
        y = 0;
        while (y <= x) {
            cout << y << " ";
            y += 2;
        }
        cout << endl;
        x += 2;
    }
    return 0;
}

Edited 5 Years Ago by WaltP: Added CODE Tags again for the third time
https://www.daniweb.com/programming/software-development/threads/355551/check-my-code-for-do-while-loop-something-missing
Contents

TL;DR: I’m working on a lightweight lens library for Elixir. The library is on Hex and the project is on GitHub.

Introduction

As far as definitions for lenses go, this one from the Racket documentation is very straightforward1:

A lens is a value that composes a getter and a setter function to produce a bidirectional view into a data structure. This definition is intentionally broad—lenses are a very general concept, and they can be applied to almost any kind of value that encapsulates data. – Racket ‘lens’ documentation

Generally, a lens provides a way to both get and set some piece of data inside of a data structure. Lenses can do three primary things to the data onto which they focus:

For a lens to be considered ‘well-behaved’, there are three laws that it must obey:

- Put-Get: If you set a value, you should be able to get it back out: get l (put l v s) == v
- Get-Put: If you get a value and set it to the same thing, there is no change: put l (get l s) s == s
- Put-Put: If you set two things in succession, the final value is the result of the second setting: put l x (put l y s) == put l x s

Focus currently implements versions of lenses and prisms. The functionality is inspired by Edward Kmett’s lens library and the Racket lens library.

Mutable and Immutable Data Structures

Lenses are particularly useful when working with immutable, deeply nested data structures. In languages with mutable data structures, changing values in deeply nested structures is easy:

marge = { address: { street: { number: 742, name: "Evergreen Terrace" } } }
marge[:address][:street][:name] = "Fake St."
marge
# {
#   address: {
#     street: {
#       number: 742,
#       name: "Fake St."
#     }
#   }
# }

To update a value three levels deep in a nested Ruby hash, we just had to assign the chain of keys/accessors to a new value. This updated the marge data structure in place with the new street name value. In a language with immutable data structures, e.g.
Elixir, updating deeply nested data is another story:

marge = %{
  address: %{
    street: %{
      number: 742,
      name: "Evergreen Terrace"
    }
  }
}

%{marge | address: %{marge.address | street: %{marge.address.street | name: "Fake St."}}}

marge
# %{
#   address: %{
#     street: %{
#       number: 742,
#       name: "Evergreen Terrace"
#     }
#   }
# }

Updating the street name in this nested map is much more involved than chaining the accessors and assigning a new value3. Additionally, because we’re working with immutable data, the data structure bound to marge is not actually modified. The update (%{marge | address: %{...}}) returns a copy of the data structure with the change made to the street name. I would argue that this is a good thing, as immutability makes it easier to avoid unintended side-effects. This isn’t directly relevant to the current discussion of lenses (and lenses won’t behave any differently in this respect). It is, however, something to be aware of: to do anything with the data structure after it has been updated in Elixir, it must be bound to a variable (it can be rebound to marge, but it can also be bound to any valid name).

What difference can lenses make?

With lenses we can make these sorts of updates less verbose:

import Focus
alias Lens

marge = %{address: %{street: %{number: 742, name: "Evergreen Terrace"}}}

Lens.make_lens(:address)
~> Lens.make_lens(:street)
~> Lens.make_lens(:name)
|> Focus.set(marge, "Fake St.")

We can also bind lenses to variables and reuse them to operate on data throughout the data structure:

import Focus
alias Lens

marge = %{address: %{street: %{number: 742, name: "Evergreen Terrace"}}}

# binding the lenses
address = Lens.make_lens(:address)
street = Lens.make_lens(:street)
name = Lens.make_lens(:name)

# using them to set the same value as before
address ~> street ~> name |> Focus.set(marge, "Fake St.")
# %{
#   address: %{
#     street: %{
#       number: 742,
#       name: "Fake St."
#     }
#   }
# }

# viewing a piece of the structure
address ~> street |> Focus.view(marge)
# %{
#   number: 742,
#   name: "Fake St."
# }

Focus’ API4

Optic creation

To make a lens or prism, focus provides5 the following functions:

Lens.make_lens/1: given v (an atom, string, or integer), returns a lens focused on v. Note that atoms and strings are intentionally not interchangeable.

Lens.make_lens(:username)
Lens.make_lens("address")
Lens.make_lens(42)

Lens.make_lenses/1: given a map, m, returns a map l from key(m) => Lens(m)

bart = %{
  name: "Bart",
  age: 10,
  friends: ["Milhouse"],
  pets: ["Santa's Little Helper"]
}

lenses = Lens.make_lenses(bart)
# %{
#   name: %Lens{…},
#   age: %Lens{…},
#   friends: %Lens{…},
#   pets: %Lens{…}
# }

Lens.idx/1: given i, an integer representing an index, returns a lens focused on index i

Lens.idx(0)
Lens.idx(42)

Prism.ok/0: returns a prism focused on the {:ok, val} tuple
Prism.error/0: returns a prism focused on the {:error, reason} tuple

Prism.ok
Prism.error

Composition

Optics can be composed together to build up more complex lenses/prisms that focus deeper into a structure:

Focus.compose/2: given a lens f and a lens g, returns a new lens, f(g)

marge = %{address: %{street: %{number: 742, name: "Evergreen Terrace"}}}

address = Lens.make_lens(:address)
street = Lens.make_lens(:street)
Focus.compose(address, street)
# %Lens{…} that focuses into street through address

~>/2: drill, an infix operator for Focus.compose/2; this operator signifies drilling from one lens to another, deeper into the data structure

# Focus.compose(address, street) can be written infix as:
address ~> street

# This syntax is even more useful as more lenses are composed:
name = Lens.make_lens(:name)
address ~> street ~> name

Use

The core functionality is exposed via the Focus module:

Focus.view/2: given a lens l and a data structure s, view the value l focuses on in s

# Given marge and the address, street, and name lenses previously defined:
address ~> street ~> name |> Focus.view(marge)
# "Evergreen Terrace"

Focus.over/3: given a lens l, a data structure s, and a function f, apply f to the value v that l focuses on in s and replace v with f(v)

address ~> street ~> name |> Focus.over(marge, &String.reverse/1)
# %{
#   address: %{
#     street: %{
#       number: 742,
#       name: "ecarreT neergrevE"
#     }
#   }
# }

Focus.set/3: given a lens l, a data structure s, and a new value y, replace the value v that l focuses on in s with y

address ~> street ~> name |> Focus.set(marge, "Fake St.")
# %{
#   address: %{
#     street: %{
#       number: 742,
#       name: "Fake St."
#     }
#   }
# }

There are a few additional functions in focus, but these are the core feature set.

Conclusion

Functional lenses are an interesting concept that can help facilitate working with nested data. In the near term, I intend to continue working on this library, adding additional combinators and optic generators. I’m also planning a follow-up post to this one demonstrating how focus can be used to work with a JSON API.

References

- The Haskell lens package - Edward Kmett, et al.
- A Little Lens Starter Tutorial - Joseph Abrahamson.
- Overloading Functional References - Twan van Laarhoven.

Footnotes

- Compare with e.g. “Lenses are the coalgebras for the costate comonad”
- Setting can be seen as a special case of function application in which the value the lens focuses on (the function’s argument) is ignored and the value to be set is the returned value.
- This is slightly disingenuous for the purpose of illustration; put_in/3 would achieve the same result in a more compact form: put_in(marge, [:address, :street, :name], "Fake St.").
https://travispoulsen.com/blog/posts/2017-02-19-Focus-and-Functional-Lenses.html
Things used in this project

Story

This is a really fun and easy project that can be done in about an hour. On the bottom of the skateboard is an accelerometer/gyro with an Arduino board that transmits the angular motion of the board via Bluetooth to a little VR game I made in Unity for Android phones. So, when you turn on your Arduino and the Bluetooth connects with your phone, you start moving forward. Lean left and you go left, lean right and you go right. Lift up the front wheels and your character will jump. This only works for Android phones, and your phone must be compatible with Google Cardboard. So, if you have an old skateboard lying around, turn it into a virtual reality snowboard. Here's how:

You will need:
- A Google Cardboard style virtual reality headset
- A skateboard
- 4 tennis balls (to keep from rolling away)
- An Arduino Leonardo (or Uno)
- Some jumper wires
- A mini breadboard
- An HC-06 Bluetooth module
- An MPU-6050 accelerometer/gyro
- A 9V battery with battery box that has an on/off switch and a barrel plug (to power the Arduino board)
- A soldering iron and maybe a hot glue gun

The Android app, Arduino code, and parts list with links can be found here:

Step 1: Create your Arduino Device/Transmitter/Input/Thing

Assemble your device as shown in the picture above. You will need to solder the header pins onto the MPU-6050; if your soldering iron is new and the tip is clean, this can be done in about 11 seconds. If your soldering iron tip is dirty and old, this will quickly become the hardest thing you have ever done (speaking from experience). Make sure the orientation of everything is exactly as shown in the pictures above so you do not have to get into the code and edit anything.

Attach everything to your skateboard EXACTLY as shown (really, just make sure the MPU-6050 is facing the same direction as the picture). Here I glued the battery box to the board with hot glue and screwed the Arduino board down.
I used an old-school board because a wider board is better for the purposes of this experience. It is pretty hard to balance on a board while in virtual reality.

NOTE: The Leonardo works much better for this... however... if you are using an Arduino Uno, all the connections are the same except: SDA goes to A4 and SCL goes to A5.

Step 2: Upload the Arduino code to your Device / Transmitter / Input Thing

IMPORTANT: Unplug the RX and TX pins before uploading the code to your board.

Step 3: Almost done! Download the app to your phone!

Follow this link and download the app to your phone: Google Cardboard Virtual Reality Skateboard/Snowboard App. You will need to go into your security settings and allow apps to be installed from unknown developers. Make sure to pair the Bluetooth module to your phone; do not change the default name (HC-06). The password should be 1234.

Notes: This is my first Unity game... so it's pretty basic. If enough people make this I can try to make a better snowboard game, or make an actual skateboard game, depending on what I get requests for. When you turn on your Arduino device, make sure the skateboard is flat on the ground and then open the app. As soon as the Bluetooth connects you will drop in. If the Bluetooth becomes disconnected, restart the app (it is set to connect at the start of the app only, for right now).
Schematics

Code

Arduino Code (Arduino)

#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"
#if I2CDEV_IMPLEMENTATION == I2CDEV_ARDUINO_WIRE
#include "Wire.h"
#endif

MPU6050 mpu;

bool dmpReady = false;
uint8_t mpuIntStatus;
uint8_t devStatus;
uint16_t packetSize;
uint16_t fifoCount;
uint8_t fifoBuffer[64];

Quaternion q;
VectorInt16 aa;
VectorInt16 aaReal;
VectorInt16 aaWorld;
VectorFloat gravity;
float euler[3];
float ypr[3];

volatile bool mpuInterrupt = false;

void setup() {
#if I2CDEV_IMPLEMENTATION == I2CDEV_ARDUINO_WIRE
    Wire.begin();
    TWBR = 24;
#elif I2CDEV_IMPLEMENTATION == I2CDEV_BUILTIN_FASTWIRE
    Fastwire::setup(400, true);
#endif
    Serial.begin(9600);   // For use with Arduino Uno
    Serial1.begin(9600);  // For use with Leonardo
    // ... (DMP initialization code garbled in the original source) ...
}

void sendData(int x, int y, int z) {
    if (z < -10) {
        // forward
        Serial1.write("f");  // Write to Leonardo
        Serial1.write(10);   // Stop bit
        Serial.write("f");   // Write to Uno
        Serial.write(10);    // Stop bit
    } else if (z > 0) {
        // backward
        Serial1.write("b");
        Serial1.write(10);
        Serial.write("b");
        Serial.write(10);
    } else if (y > 5) {  // To make more sensitive change value to 4 or less
        // right
        Serial1.write("r");
        Serial1.write(10);
        Serial.write("r");
        Serial.write(10);
    } else if (y < -5) {  // To make more sensitive change to -4 or greater
        // left
        Serial1.write("l");
        Serial1.write(10);
        Serial.write("l");
        Serial.write(10);
    } else {
        // stop
        Serial1.write("s");
        Serial1.write(10);
        Serial.write("s");
        Serial.write(10);
    }
}

void loop() {
    if (!dmpReady) return;
    // ... (FIFO/packet read code garbled in the original source) ...
    sendData(ypr[0] * 180/M_PI, ypr[1] * 180/M_PI, ypr[2] * 180/M_PI);
}

void dmpDataReady() {
    mpuInterrupt = true;
}

Credits

Matthew Hallberg

My name is Matthew and I attend the University of Pittsburgh for Info Sci and CS. I need motivated friends; serious inquiries, send me an email.

Replications

Did you replicate this project? Share it! I made one.

Love this project? Think it could be improved? Tell us what you think!
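The wire protocol used by sendData() is simple: each update is one command byte ('f', 'b', 'l', 'r', or 's') followed by a stop byte of 10. A hypothetical receiver (here in Python, standing in for the Unity side, which is not shown in the article) only needs to split the stream on the stop byte and map each command to a movement:

```python
STOP_BYTE = 10

# Command characters as sent by sendData() in the Arduino sketch.
COMMANDS = {
    b"f": "forward",
    b"b": "backward",
    b"l": "left",
    b"r": "right",
    b"s": "stop",
}

def parse_stream(data: bytes):
    """Split a raw Bluetooth byte stream on the stop byte and
    translate each command byte into a movement name. Unknown
    chunks are ignored, since a real receiver should tolerate
    line noise and partial reads."""
    moves = []
    for chunk in data.split(bytes([STOP_BYTE])):
        if chunk in COMMANDS:
            moves.append(COMMANDS[chunk])
    return moves

print(parse_stream(b"f\nf\nl\ns\n"))  # ['forward', 'forward', 'left', 'stop']
```

In a real app the bytes would arrive incrementally from the Bluetooth socket, so a receiver would buffer until it sees the stop byte before parsing.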
https://www.hackster.io/MatthewHallberg/diy-virtual-reality-skateboard-097bf4?ref=channel&ref_id=3165_trending___&offset=35
Visual Introduction to UML for Object-Oriented Design, Page 2

A UML formal parameter that is preceded by in, or by nothing, has the value of its actual parameter passed into the operation. However, the operation cannot modify the actual parameter's value. This is similar to a parameter of a Java method, or a VB.NET parameter that is preceded by ByVal.

A UML formal parameter that is preceded by inout has the value of its actual parameter passed into the operation. Any changes that the operation makes to the value of its formal parameter cause the value of the actual parameter to be modified. This is similar to a VB.NET parameter that is preceded by ByRef. The usual way to implement this in Java is to make the value of a method's parameter a single-element array. The value of the array element is used as the parameter value. The method can change the value of the actual parameter by modifying the value of the array's element.

public class Util {
    /**
     * Increment the argument and check for overflow.
     *
     * @param x
     *            An array with a single element whose value is to be
     *            incremented.
     * @return true if the value of the element overflowed.
     */
    public static boolean incrementCheckingOverflow(short[] x) {
        short i = x[0];
        x[0] = (short)(i + 1);
        return i > 0 && x[0] < 0;
    }
}

A UML formal parameter that is preceded by out is used only to pass a value out of the operation.

The operations shown in the class in Figure 1 are preceded by a word in guillemets (double angle brackets), like this: «constructor». In a UML drawing, a word in guillemets is called a stereotype. A stereotype is used like an adjective to modify what comes after it. Some stereotypes have predefined meanings. The «constructor» stereotype indicates that the operations following it are constructors. The «misc» stereotype indicates that the operations following it are regular operations.

One last element that appears in Figure 1 is an ellipsis (…).
If an ellipsis appears in the bottom compartment of a class, the class has additional operations that the diagram does not show. If an ellipsis appears in the middle compartment of a class, it means that the class has additional variables that the diagram does not show. Often, it is not necessary or helpful to show as many details of a class as were shown in the preceding class in Figure 1. We may choose to omit details because we have not decided them yet or because they seem extraneous. As is shown in Figure 2, a class may be drawn with only two compartments. Figure 2: A Two Compartment Class It is common in UML diagrams not to include all possible details. The usual reason for leaving a detail out of a diagram is either that the detail is not relevant or that the detail has not yet been decided. UML syntax allows most details of a class, other than its name, to be omitted. When a class is drawn with only two compartments, as shown in Figure 2, its top compartment contains the class's name and its bottom compartment shows the class's operations. Leaving out the compartment that contains the attributes just means that the class's attributes are not shown. It does not mean that the class has no attributes. Visibility indicators may be omitted. When an operation or attribute is shown without a visibility indicator, it means there is no indication of the operation's or attribute's visibility. It does not imply that they are public, protected, or private. An operation's parameters may be omitted if its return values also are omitted. Omitting an operation's parameters is common in a high-level design that just identifies operations. In Figure 3, for example, the return values and parameters are omitted from the class. Figure 3: A Simplified Class The simplest form of a class has just one compartment that contains the class name, as shown in Figure 4. Figure 4: One Compartment Class
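Returning to the parameter-direction discussion above: Java can express a UML out parameter with the same single-element-array idiom shown there for inout, the only difference being that the method never reads the incoming element value. The class and method names below are hypothetical, not from the article:

```java
public class OutParamExample {
    /**
     * UML: divide(in a : int, in b : int, out remainder : int) : int
     * remainder[0] acts as an out parameter: its incoming value is
     * ignored and simply overwritten before the method returns.
     */
    public static int divide(int a, int b, int[] remainder) {
        remainder[0] = a % b;   // value is only passed out, never read
        return a / b;
    }

    public static void main(String[] args) {
        int[] rem = new int[1];
        int quotient = divide(17, 5, rem);
        System.out.println(quotient + " remainder " + rem[0]); // 3 remainder 2
    }
}
```

Unlike the inout example, the caller need not initialize the array element with a meaningful value before the call.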
http://www.developer.com/design/article.php/10925_3790731_2/Visual-Introduction-to-UML-for-Object-Oriented-Design.htm
In this post, I’ll show you how to use custom fonts in Flutter applications. To use custom fonts in your Flutter application, you must include them in your pubspec.yaml file under the fonts heading. I think it is a good time to save a copy of your application so that you can restore it if something goes wrong while editing the code.

How to use custom fonts in Flutter?

To use custom fonts in the application, we must:
- Download the fonts and import them into our project.
- Specify them in the pubspec.yaml file.
- Use them in our application.

In this tutorial, I’ll be using two fonts. First, let’s download the fonts for our application. You can download the fonts from this site.

Importing custom fonts

To download a font, visit this site and select a font:
- Click on the ➕ icon near the font.
- From the box that appears at the bottom of the screen, click on the ➖ icon.
- Click the download icon ⬇ to download the font.
- Extract the downloaded file.

Next, we should import the font files into our application. For that, open the folder where the files are extracted. Come back to Android Studio and right-click on the project folder (my_flutter_app) -> New -> Directory and name it fonts to create a new directory. Now copy the font files from the extracted folder. Right-click on the fonts folder in Android Studio -> Paste to import the fonts into our project.

Adding fonts to pubspec.yaml

After importing the fonts, they should be added to pubspec.yaml. Open pubspec.yaml and scroll down to the bottom of the file. You will see these lines there.

# fonts:
#   - family: Schyler
#     fonts:
#       - asset: fonts/Schyler-Regular.ttf
#       - asset: fonts/Schyler-Italic.ttf
#         style: italic
#   - family: Trajan Pro
#     fonts:
#       - asset: fonts/TrajanPro.ttf
#       - asset: fonts/TrajanPro_Bold.ttf
#         weight: 700

This is the section where you can add all the custom fonts you are using in the application. To add the fonts, uncomment these lines by selecting them and pressing Ctrl + / .
Now remove the additional space from each line.

# example:
fonts:
  - family: Schyler
    fonts:
      - asset: fonts/Schyler-Regular.ttf
      - asset: fonts/Schyler-Italic.ttf
        style: italic
  - family: Trajan Pro
    fonts:
      - asset: fonts/TrajanPro.ttf
      - asset: fonts/TrajanPro_Bold.ttf
        weight: 700

Replace the default font with the custom font.

# example:
fonts:
  - family: Montserrat
    fonts:
      - asset: fonts/Montserrat-Regular.ttf
      - asset: fonts/Montserrat-Italic.ttf
        style: italic
  - family: Sofia
    fonts:
      - asset: fonts/Sofia-Regular.ttf

Click on Packages get. Make sure the code has a proper indentation of two spaces. Otherwise the code won't work. Here's my pubspec.yaml file:

name: my_flutter_app
description: A new Flutter application.

dependencies:
  flutter:
    sdk: flutter

flutter:
  fonts:
    - family: Montserrat
      fonts:
        - asset: fonts/Montserrat-Regular.ttf
        - asset: fonts/Montserrat-Italic.ttf
          style: italic
    - family: Sofia
      fonts:
        - asset: fonts/Sofia-Regular.ttf
  # For details regarding fonts from package dependencies,
  # see the Flutter documentation.

Using custom fonts

Now we can use the custom fonts in our application. To keep it simple, let's apply the custom font to the Text widget which we created in the previous tutorials. Modify main.dart as shown below.

import 'package:flutter/material.dart';

void main() {
  runApp(
    MaterialApp(
      title: "My App",
      home: Container(
        color: Colors.amber,
        alignment: Alignment.center,
        child: Text(
          "Using Custom Fonts",
          textDirection: TextDirection.ltr,
          style: TextStyle(
            decoration: TextDecoration.none,
            fontFamily: "Sofia",
            fontWeight: FontWeight.bold,
            color: Colors.white,
          ),
        ),
      ),
    ),
  );
}

Run the project to see the output. Happy coding. 👍
https://www.geekinsta.com/using-custom-fonts-in-flutter/
Dear Java gurus,

I've been having an odd problem with the JScrollBar. I've made an example class that illustrates the issue. Imagine I have a panel containing a JScrollPane, and within that a component that is likely to be wider than the width of its container (hence the need for the scrollbar); in this instance, I've put a JLabel with a very long text.

Now, for this app, if the component within the scrollpane is wider, I would like to have the component centered within the scrollpane. I have written a method to achieve this by essentially setting the "value" of the scrollbar so that it is in the middle of its range. I added a JButton to the bottom of the main panel with an ActionListener that will run the centering method. Testing this reveals that it does in fact work nicely.

However, what I really want is for the scroll pane to be centered without the need for the user to click the button (i.e. done automatically after the JScrollPane is added to the panel). Well, no problem I thought, as I have the method to center: I can just call it directly after I add the scrollpane. So I did, and this is where the problem occurs. It doesn't center!

Ok, let's have a look at some screenshots. Here is the default app where you see the long label within the scrollpane. No code has been run to try and center the scrollbar. If I hit the button to center, this is what happens: As you can see, the scroll bar is central. Now, if I amend the code so that I run the same method to center the scroll bar after it's been added to the panel, this is what it looks like: The scrollbar has shifted to the left slightly, but I can't see why it's not gone all the way. I can't see why this has happened. Is it some sort of threading issue?

Many thanks.
The code is as follows:

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;

public class ScrollBarProblem extends JPanel {

    private JScrollPane sp;

    public ScrollBarProblem() {
        super(new BorderLayout());
        initialise();
    }

    private void initialise() {
        sp = new JScrollPane(new JLabel("A really really really really really really really really really really really really really really really really really really really really really long string!"));
        add(sp, BorderLayout.CENTER);

        JButton center = new JButton("Center scrollbar");
        center.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                centerScrollBar();
            }
        });
        add(center, BorderLayout.SOUTH);

        centerScrollBar(); // <-- why does this work here?!
    }

    private void centerScrollBar() {
        if (sp.getHorizontalScrollBar() != null) {
            int mid = ((sp.getHorizontalScrollBar().getMaximum()
                    - sp.getHorizontalScrollBar().getVisibleAmount())
                    - sp.getHorizontalScrollBar().getMinimum()) / 2;
            sp.getHorizontalScrollBar().setValue(mid);
            //concScroll.revalidate();
        }
    }

    /**
     * Create the GUI and show it. For thread safety,
     * this method should be invoked from the
     * event-dispatching thread.
     */
    private static void createAndShowGUI() {
        JFrame frame = new JFrame("ScrollBar Problem");
        frame.getContentPane().add(new ScrollBarProblem());
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(300, 100);
        frame.setVisible(true);
    }

    public static void main(String[] args) {
        // Schedule a job for the event-dispatching thread:
        // creating and showing this application's GUI.
        javax.swing.SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                createAndShowGUI();
            }
        });
    }
}

Offline
This happens because the JScrollPane has not been 'realized'. Because the frame has not been shown, the layout has not been set yet.

v/r Suds

Thanks for the reply. You are right, it's a threading issue. I did in fact remedy my problem about 10 minutes after posting (that's always the case with me!) by wrapping my call to centerScrollBar with the SwingUtilities.invokeLater() method (like I did in main()).

Offline
https://bbs.archlinux.org/viewtopic.php?id=11639
Dec 04, 2011 09:16 PM | LINK

It is likely that MyUser is defined in a namespace that is not included in the View. See what the name of the namespace is and add @using namespaceName

Dec 04, 2011 10:21 PM

09:25 AM | LINK

See if roles is also null. If not, there is a type mismatch between allRoles and roles; I suspect the first one is just an IEnumerable, while the second one is an IEnumerable<MembershipUser>.

Dec 05, 2011 08:18 PM | LINK

Create a List<MyUser>, say MyList, and in a foreach loop on roles add every MyUser contained in roles to MyList, with something like MyList.Add(currItem); then put this list as the value of AllUsers.

Dec 05, 2011 08:32 PM

08:40 PM

26 replies Last post Dec 06, 2011 12:01 PM by JonLWright
http://forums.asp.net/p/1746171/4714752.aspx/1?Re+showing+user+roles+for+users+in+database
This class implements the following algorithms used to create a ConicalSurface from Geom. More...

#include <GC_MakeConicalSurface.hxx>

This class implements the following algorithms used to create a ConicalSurface from Geom. The "ZAxis" is the symmetry axis of the ConicalSurface; it gives the direction of increasing parametric value V. The apex of the surface is on the negative side of this axis. For a ConicalSurface the U and V directions of parametrization are such that at each point of the surface the normal is oriented towards the "outside region".

A2 defines the local coordinate system of the conical surface. Ang is the conical surface semi-angle, in ]0, PI/2[. Radius is the radius of the circle Viso in the placement plane of the conical surface, defined with "XAxis" and "YAxis". The "ZDirection" of A2 defines the direction of the surface's axis of symmetry. If the location point of A2 is the apex of the surface, Radius = 0. At creation the parametrization of the surface is defined such that the normal vector (N = D1U ^ D1V) is oriented towards the "outside region" of the surface. Status is "NegativeRadius" if Radius < 0.0, or "BadAngle" if Ang < Resolution from gp or Ang >= PI/2 - Resolution.

Creates a ConicalSurface from a non-persistent Cone from package gp.

Make a ConicalSurface from Geom <TheCone> passing through the four points <P1>, <P2>, <P3>, <P4>. Its axis is <P1P2> and the radius of its base is the distance between <P3> and <P1P2>. The distance between <P4> and <P1P2> is the radius of the section passing through <P4>. An error is raised if <P1>, <P2>, <P3>, <P4> are colinear, or if <P3P4> is perpendicular to <P1P2>, or if <P3P4> is colinear to <P1P2>.

Make a ConicalSurface with two points and two radii. The axis of the solution is the line passing through <P1> and <P2>. <R1> is the radius of the section passing through <P1> and <R2> the radius of the section passing through <P2>.

Returns the constructed cone. Exceptions: StdFail_NotDone if no cone is constructed.
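In the constructors above, each radius is defined as the distance from a point to the axis line <P1P2>. As a plain-Python illustration of that underlying geometry (this is not the Open CASCADE API, and the coordinates are invented for the example), the perpendicular distance from a point to the line through P1 and P2 can be computed with a cross product:

```python
def point_line_distance(p, p1, p2):
    """Perpendicular distance from 3D point p to the infinite line through p1 and p2."""
    # Direction vector of the axis and vector from p1 to the point
    d = [p2[i] - p1[i] for i in range(3)]
    v = [p[i] - p1[i] for i in range(3)]
    # Cross product v x d; its length equals |v| * |d| * sin(angle)
    c = [v[1] * d[2] - v[2] * d[1],
         v[2] * d[0] - v[0] * d[2],
         v[0] * d[1] - v[1] * d[0]]
    norm = lambda x: sum(t * t for t in x) ** 0.5
    # |v x d| / |d| is the perpendicular distance to the line
    return norm(c) / norm(d)

# Radius of the base: distance between P3 and the axis <P1P2>
print(point_line_distance((3.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 5.0)))  # 3.0
```

The same computation gives the radius of the section through <P4>, just with <P4> in place of <P3>.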
https://dev.opencascade.org/doc/refman/html/class_g_c___make_conical_surface.html
Hey folks, let us understand the basics of Akka Streams. I hope you have a basic understanding of Akka Actors.

What is Akka Streams

Akka Streams is a library to process and transfer a sequence of elements. It is built on top of Akka Actors to make the ingestion and processing of streams easy, and because it is built on Akka Actors, it provides a higher-level abstraction over Akka's existing actor model.

Features of Akka Streams

- Akka Streams is very useful for fast streaming data.
- It avoids lots of the boilerplate code required to manage actors.
- It is best suited for big-data applications.
- As it is built on the Akka Toolkit, we get all of the Akka Toolkit's benefits, such as reactiveness, distribution, location transparency, clustering, remoting, etc.
- It provides reusability: once we design a data flow graph, we can reuse it any number of times.

Terminology in Akka Streams

1. Source: This is the entry point to your stream; there must be at least one source in every stream. It takes two type parameters: the first one represents the type of data it emits, and the second one is the type of the auxiliary value it can produce when run. If we don't produce any, we use the NotUsed type provided by Akka. It has only one output point. A Source can be considered a publisher.

```scala
val source: Source[Int, NotUsed] = Source(1 to 1000)
```

2. Sink: This is the exit point of your stream; there must be at least one sink in every stream. The Sink is the last element of our stream, basically a subscriber of the data sent/processed by a source. Usually it outputs its input to some system IO. It is the endpoint of a stream and therefore consumes data. A Sink has a single input channel and no output channel. Sinks are especially needed when we want to specify the behaviour of the data collector in a reusable way, without evaluating the stream. A Sink can be considered a subscriber.

```scala
val sink: Sink[Int, Future[Done]] = Sink.foreach(println)
```

3.
Flow: The flow is a processing step within the stream. It combines one incoming channel and one outgoing channel, as well as some transformation of the messages passing through it. If a flow is connected to a source, a new source is the result. Likewise, a flow connected to a sink creates a new sink, and a flow connected to both a source and a sink results in a RunnableGraph. Flows therefore sit between the input and the output channel, but by themselves do not correspond to either flavour as long as they are not connected to a Source or a Sink. Here the Flow sits between the Source and the Sink, as it is the transformation applied to the Source data.

```scala
val flow: Flow[Int, Int, NotUsed] = Flow[Int].map(_ + 1)
```

4. RunnableGraph: A Flow that has both ends attached to a Source and a Sink respectively is ready to be run() and is called a RunnableGraph. Even after constructing the RunnableGraph by connecting the source, the sink and the different operators, no data will flow through it. This is where materialization comes into action!

5. Materializer: Flows and graphs in Akka Streams are like preparing a blueprint/execution plan. Stream materialization is the process of taking a stream description and allocating all the resources it needs in order to run. This means starting up actors which power the processing, and much more under the hood, depending on what the stream needs. After running (materializing) the RunnableGraph, we get back the materialized value of the specified type. Every stream operator can produce a materialized value. Akka has .toMat to indicate that we want to transform the materialized value of the source and sink.

Now we have an idea of what Akka Streams is, how it works, etc. So let's see Akka Streams in action.
Akka Streams in action

```scala
import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

import scala.concurrent.Future

object Application extends App {
  implicit val system: ActorSystem = ActorSystem("akka-streams-demo")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val numberSource: Source[Int, NotUsed] = Source(1 to 100)
  val sink: Sink[Int, Future[Done]] = Sink.foreach(println)
  val flow: Flow[Int, Int, NotUsed] = Flow[Int].filter(number => isPrime(number))

  numberSource.via(flow).to(sink).run()

  private def isPrime(number: Int): Boolean = {
    if (number <= 1) false
    else if (number == 2) true
    else !(2 until number).exists(i => number % i == 0)
  }
}
```

Output: the prime numbers between 1 and 100 are printed to the console.

- We have created an ActorSystem and an ActorMaterializer in scope to materialize the graph.
- We create a Source with the range 1 to 100.
- A Flow filters out everything but prime numbers.
- We create a Sink that prints its input to the console using println.
- Finally, we connect numberSource via flow to sink and run it using run().

Conclusion

In this article we looked at the akka-stream library: what Akka Streams is, its features, and a very basic example of Akka Streams in action. We defined a process that combines Flows to filter prime numbers, then defined a Source that is the entry point of the stream processing and a Sink that triggers the actual processing.

If you find this article interesting, please check out our other articles.
https://blog.knoldus.com/introduction-to-akka-streams/
JBoss.org Community Documentation

Version: 5.1.0.trunk

Drools is a business rule management system (BRMS) with a forward-chaining, inference-based rules engine, more correctly known as a production rule system, using an enhanced implementation of the Rete algorithm. In this guide we are going to get you familiar with the Drools Eclipse plugin, which provides development tools for creating, executing and debugging Drools processes and rules from within Eclipse. It is assumed that you have some familiarity with rule engines and Drools in particular; if not, we suggest that you look carefully through the Drools Documentation.

Drools Tools come bundled with the JBoss Tools set of Eclipse plugins. How to install JBoss Tools is described in the Getting Started Guide. The following table lists all the valuable features of the Drools Tools. The latest JBoss Tools/JBDS documentation builds can be found on the documentation release page.

In this chapter we are going to show you how to set up an executable sample Drools project so you can start using rules immediately. First, we suggest that you use the Drools perspective, which is aimed at working with Drools-specific resources.

To create a new Drools project go to File > New > Drools Project. This opens the New Drools Project wizard, like in the figure below. On the first page type the project name and click Next. Next you have a choice to add some default artifacts to the project, such as sample rules, decision tables or ruleflows, and Java classes for them. Let's select the first two check boxes and press Next.

The next page asks you to specify a Drools runtime. If you have not yet set one up, you should do this now by clicking the Configure Workspace Settings link. You should see the Preferences window, where you can configure the workspace settings for Drools runtimes. To create a new runtime, press the Add button. The dialog that appears prompts you to enter a name for the new runtime and a path to the Drools runtime on your file system.
A Drools runtime is a collection of jars on your file system that represent one specific release of the Drools project jars. While creating a new runtime, you must either point to the release of your choice, or you can simply create a new runtime on your file system from the jars included in the Drools Eclipse plugin. Let's simply create a new Drools 5 runtime from the jars embedded in the Drools Eclipse plugin: press the Create a new Drools 5 runtime button, select the folder where you want this runtime to be created, and hit OK. You will see the newly created runtime show up in your list of Drools runtimes. Check it and press OK. Now press Finish to complete the project creation. This will set up a basic structure, a classpath, and sample rules and a test case to get you started.

Now let's look at the structure of the organized project. In the Package Explorer you should see the following: the newly created project contains an example rule file Sample.drl in the src/main/rules directory, an example Java file DroolsTest.java (in the com.sample package under src/main/java) that can be used to execute the rules in a Drools engine, and all the required jars.

Now we are going to add a new Rule resource to the project. You can either create an empty text .drl file or make use of the special New Rule Resource wizard. To open the wizard go to File > New > Rule Resource or use the menu with the JBoss Drools icon on the toolbar. On the wizard page, first select /rules as the top-level directory to store your rules and type the rule name. Next it is mandatory to specify the rule package name; it defines a namespace that groups rules together. As a result the wizard generates a rule skeleton to get you started.

This chapter describes how to debug rules during the execution of your Drools application. At first, we'll focus on how to add breakpoints in the consequences of your rules.
Whenever such a breakpoint is encountered during the execution of the rules, execution is halted. It is then possible to inspect the variables known at that point and use any of the default debugging actions to decide what should happen next (step over, continue, etc.). To inspect the contents of the working memory and agenda, the Debug views can be used.

You can add/remove rule breakpoints in .drl files in two ways, similar to adding breakpoints to Java files:

Double-click the ruler in the Rule editor.

Right-click the ruler and select the Toggle Breakpoint action in the popup menu that appears. Clicking the action will add a breakpoint at the selected line, or remove it if there is one already.

The Debug perspective contains a Breakpoints view which can be used to see all defined breakpoints, get their properties, enable/disable or remove them, etc. You can switch to it by navigating to Window > Perspective > Others > Debug.

Drools breakpoints are only enabled if you debug your application as a Drools Application. To do this you should perform one of the following actions:

Select the main class of your application, right-click it and select Debug As > Drools Application.

Alternatively, you can also go to Debug As > Debug Configuration to open a new dialog for creating, managing and running debug configurations. Select the Drools Application item in the left tree and click the New launch configuration button (the leftmost icon in the toolbar above the tree). This will create a new configuration and already fill in some of the properties (like the Project and Main class) based on the main class you selected at the beginning. All properties shown here are the same as for any standard Java program. Remember to change the name of your debug configuration to something meaningful. Next click the Debug button at the bottom to start debugging your application.
After enabling debugging, the views can also be used to determine the contents of the working memory and agenda at that time (you don't have to select a working memory now; the currently executing working memory is automatically shown).

A domain-specific language is a set of custom rules created specifically to solve problems in a particular domain, and not intended to solve problems outside it. A DSL's configuration is stored in plain text. In Drools this configuration is represented by .dsl files, which can be created by right-clicking the project and selecting New > Other > Drools > Domain Specific Language. The DSL Editor is the default editor for .dsl files. The table below describes all the components of the DSL Editor page.

The Edit language mapping wizard can be opened by double-clicking a line in the table of language message mappings or by clicking the Edit button. The picture below shows all the options the Edit language mapping wizard allows you to change; their names and meanings correspond to the rows of the table. To change a mapping, edit the options you want and finally click OK. The Add language mapping wizard is equivalent to the Edit language mapping wizard and can be opened by clicking the Add button; the only difference is that instead of editing existing information you enter new information.

Drools tools also provide some functionality to define the order in which rules should be executed. A ruleflow file allows you to specify the order in which rule sets should be evaluated using a flow chart, so you can define which rule sets should be evaluated in sequence or in parallel, as well as specify the conditions under which rule sets should be evaluated. Ruleflows can only be created using the graphical Flow editor, which is part of the Drools plugin for Eclipse. Once you have set up a Drools project, you can start adding ruleflows.
Add a ruleflow file (.rf) by right-clicking the project and selecting New > Other... > Flow File:

By default these ruleflow files (.rf) are opened in the graphical Flow editor, shown in the picture below. The Flow editor consists of a palette, a canvas and an outline view. To add new elements to the canvas, select the element you would like to create in the palette and then add it to the canvas by clicking on the preferred location. Clicking on the Select option in the palette and then on an element in your ruleflow allows you to view and set the properties of that element in the Properties view.

The Outline view is useful for big, complex schemata where not all nodes are visible at one time; using the Outline view you can easily navigate between parts of a schema. The Flow editor supports three types of control elements. They are:

The Rule editor works on files that have a .drl extension (or .rule in the case of spreading rules across multiple rule files). The editor follows the pattern of a normal text editor in Eclipse, with all the usual features of a text editor:

While working in the Rule editor you can get content assistance in the usual way by pressing Ctrl + Space. Content Assist shows all possible keywords for the current cursor position, and Content Assist inside a Message suggests all available fields. Code folding is also available in the Rule editor; to hide/show sections of the file, use the icons with minus/plus on the left vertical line of the editor.

The Rule editor works in synchronization with the Outline view, which shows the structure of the rules and imports in the file, as well as globals and functions if the file has them. The view is updated on save. It provides a quick way of navigating by name around rules in a file which may have hundreds of them. The items are sorted alphabetically by default.

The Rete Tree view shows you the current Rete network for your .drl file. Just click on the Rete Tree tab at the bottom of the Rule editor.
Afterwards you can generate the current Rete network visualization. You can push and pull the nodes to arrange your optimal network overview. If you have hundreds of nodes, select some of them with a frame; then you can pull groups of them. You can zoom in and out of the Rete tree in case not all nodes are shown in the current view; for this, use the combo box or the "+" and "-" icons on the toolbar. The Rete Tree view works only in Drools Rule Projects where the Drools Builder is set in the project properties.

We hope this guide helped you to get started with the JBoss Drools Tools. For additional information, you are welcome to visit the JBoss forum.
http://docs.jboss.org/tools/3.1.0.CR2/en/drools_tools_ref_guide/html_single/index.html
I have a file that I want copied into a directory multiple times. It could be 100, it could be 1000; that's a variable. I came up with this:

```python
import shutil

count = 0
while (count < 100):
    shutil.copy2('/Users/bubble/Desktop/script.py', '/Users/bubble/Desktop/pics')
    count = count + 1
```

It puts 1 copy of the file in the directory, but only 1 file. My guess is that it doesn't automatically add a 2, 3, 4, 5 etc. onto the end of the file name as it would if you were copying and pasting. Any ideas how to do this? Regards.

Best answer

Use str.format to give each copy a distinct destination file name:

```python
import shutil

for i in range(100):
    shutil.copy2('/Users/bubble/Desktop/script.py',
                 '/Users/bubble/Desktop/pics/script{}.py'.format(i))
```

To make it even more useful, one can add the format specifier {:03d} (3-digit numbers, i.e. 001, 002 etc.) or {:04d} (4-digit numbers, i.e. 0001, 0002 etc.) as needed, as suggested by @Roland Smith.
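The zero-padded format specifiers mentioned in the answer produce file names like these (no files are copied here, this only shows the name generation):

```python
# Unpadded vs zero-padded counters for generated file names
names = ['script{}.py'.format(i) for i in range(3)]
padded = ['script{:03d}.py'.format(i) for i in range(3)]

print(names)   # ['script0.py', 'script1.py', 'script2.py']
print(padded)  # ['script000.py', 'script001.py', 'script002.py']
```

Zero-padding keeps the copies sorting correctly in a directory listing once the count passes 10.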
https://pythonquestion.com/post/python-making-copies-of-a-file/
We understand Machine Learning, a subset of Artificial Intelligence, as a computer being programmed with the ability to self-learn and improve itself on a particular task. Supervised Learning in Machine Learning allows one to produce or collect data based on previous experience; it helps one to optimize performance criteria using past experience and work on real-time computational problems. Great Learning brings you this tutorial on Classification using Decision Trees, where we understand how classification can be implemented with decision trees using the R language. This video discusses the advantages of using tree-based models, followed by a case study to better understand the topic. Then we look at the Gini index, entropy and misclassification error, followed by the concept of measuring impurity. Finally, we look at the types of decision tree algorithms! This video teaches Classification using Decision Trees and their key functions and concepts with a variety of demonstrations and examples.

Decision Tree is one of the most widely used machine learning algorithms. It is a supervised learning algorithm that can perform both classification and regression operations. As the name suggests, it uses a tree-like structure to make decisions on the given dataset. Each internal node of the tree represents a "decision" taken by the model based on one of our attributes. From this decision, we can separate classes or predict values. Let's look at classification and regression operations one by one.

In classification, each leaf node of our decision tree represents a class based on the decisions we make at the internal nodes. To understand this properly, let us look at an example. I have used the Iris Flower Dataset from the sklearn library; you can refer to the complete code on GitHub. A node's samples attribute counts how many training instances it applies to.
For example, 100 training instances have a petal width ≤ 2.45 cm. A node's value attribute tells you how many training instances of each class the node applies to: for example, the bottom-right node applies to 0 Iris-Setosa, 0 Iris-Versicolor, and 43 Iris-Virginica. And a node's gini attribute measures its impurity: a node is "pure" (gini = 0) if all the training instances it applies to belong to the same class. For example, since the depth-1 left node applies only to Iris-Setosa training instances, it is pure and its gini score is 0.

The Gini impurity formula is:

G = 1 − Σⱼ pⱼ²

where pⱼ is the ratio of instances of class j among all training instances at that node.

Based on the decisions made at each internal node, we can sketch decision boundaries to visualize the model. But how do we find these boundaries? We use the Classification And Regression Tree (CART) algorithm. CART is a simple algorithm that finds an attribute k and a threshold tₖ at which we get the purest subsets. "Purest subsets" means that each subset contains the maximum possible proportion of one particular class; for example, the left node at depth 2 has a maximum proportion of the Iris-Versicolor class, i.e. 49 of 54. In the CART cost function, we split the training set in such a way that we get minimum Gini impurity. The CART cost function is given as:

J(k, tₖ) = (m_left / m) · G_left + (m_right / m) · G_right

where G_left and G_right measure the impurity of the left and right subsets, and m_left and m_right are the numbers of instances in them.

After successfully splitting the dataset into two, we repeat the process on either side of the tree. We can implement a decision tree directly with the help of the scikit-learn library: it has a class called DecisionTreeClassifier which trains the model for us, and we can adjust the hyperparameters as per our requirements.

Decision tree is one of the popular machine learning algorithms and a stepping stone to understanding the ensemble techniques that use trees.
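The gini figures quoted earlier can be checked by hand. Here is a minimal pure-Python sketch of the Gini impurity computation, using the class counts quoted above for the depth-2 left node (0 Setosa, 49 Versicolor, 5 Virginica):

```python
def gini(counts):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Depth-2 left node: 0 Setosa, 49 Versicolor, 5 Virginica
print(round(gini([0, 49, 5]), 4))  # 0.168

# A pure node (all instances from one class) has impurity 0
print(gini([50, 0, 0]))  # 0.0
```

The 0.168 value matches the gini attribute a tree visualization reports for such a node.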
Also, the Decision Tree algorithm is a hot topic in many interviews conducted in the data science field.

Understanding Decision Trees: a Decision Tree is more of a management tool, used by many professionals to take decisions regarding resource costs and to make decisions on the basis of filters applied. The best part of a Decision Tree is that it is a non-parametric tool, which means that there are no underlying assumptions about the distribution of the errors or the data; the model is constructed based on the observed data alone. Decision trees are adaptable to solving any kind of problem at hand (classification or regression). Decision Tree algorithms are referred to as CART (Classification and Regression Trees).

Common terms used with decision trees: (figure: a classic example demonstrating a decision tree and how a decision tree works)

Main decision areas:

Nodes with a homogeneous class distribution are preferred.

2. Measures of node impurity. The measures of impurity are: (a) Gini index, (b) entropy, (c) misclassification error.

Understanding the terminology with an example: let us take the weather dataset; below is a snapshot of the header of the data. Now, according to the algorithm written above and the decision points to be considered, we need the feature offering the maximum information split possible.

Note: at the root node, the impurity level is maximal, with negligible information gain. As we go down the tree, the entropy reduces, maximizing the information gain. Therefore, we choose the feature with the maximum gain achieved.

Both Regression Trees and Classification Trees are part of the CART (Classification And Regression Tree) algorithm. As we mentioned in the Regression Trees article, a tree is composed of 3 major parts: root nodes, decision nodes and terminal/leaf nodes.
The criteria used here for node splitting differ from those used in Regression Trees. As before, we will run our example and then learn how the model is trained. Three measures are commonly used in attribute selection; the Gini impurity measure is the one used by the CART classifier. For more information on these, see Wikipedia.

Using the iris data set:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
from six import StringIO
from IPython.display import Image
# pip/conda install pydotplus
import pydotplus
from sklearn import datasets

iris = datasets.load_iris()
xList = iris.data  # Data will be loaded as an array
labels = iris.target
dataset = pd.DataFrame(data=xList, columns=iris.feature_names)
dataset['target'] = labels
targetNames = iris.target_names
print(targetNames)
print(dataset)
```

When an observation or row is passed to a non-terminal node, the row answers the node's question. If it answers yes, the row of attributes is passed to the leaf node below and to the left of the current node; if it answers no, the row of attributes is passed to the leaf node below and to the right of the current node. The process continues recursively until the row arrives at a terminal (that is, leaf) node, where a prediction value is assigned to the row. The value assigned by the leaf node is the mean of the outcomes of all the training observations that wound up in the leaf node.

Classification trees split a node into two sub-nodes. Splitting into sub-nodes increases the homogeneity of the resultant sub-nodes; in other words, the purity of the node increases with respect to the target variable. The decision tree splits the nodes on all available variables and then selects the split which results in the most homogeneous/pure sub-nodes.
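The split-selection rule just described, preferring the split whose sub-nodes are purest, can be sketched as a size-weighted impurity score. This is a toy illustration with made-up class counts, not the actual scikit-learn internals:

```python
def gini(counts):
    """Gini impurity of a node given its per-class instance counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def split_cost(left_counts, right_counts):
    """Size-weighted average impurity of the two sub-nodes of a candidate split."""
    m_left, m_right = sum(left_counts), sum(right_counts)
    m = m_left + m_right
    return (m_left / m) * gini(left_counts) + (m_right / m) * gini(right_counts)

# A split that isolates each class perfectly scores 0 (best possible)
print(split_cost([10, 0], [0, 10]))  # 0.0
# A split that leaves both sub-nodes maximally mixed scores 0.5 (worst for 2 classes)
print(split_cost([5, 5], [5, 5]))    # 0.5
```

The tree-growing algorithm evaluates this cost for every candidate attribute and threshold and keeps the split with the lowest value.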
There are several measures used to determine which attribute/feature should be used for splitting, and at which value within that attribute to split. We will start with the Gini index measure and try to understand it. The Gini index is an impurity measure used to evaluate splits in the dataset. It is calculated by taking the sum of the squared probabilities of each (target) class within a certain attribute/feature and subtracting it from one.

A classification tree is very similar to a regression tree, except that it is used to predict a qualitative response rather than a quantitative one. In a classification tree, we predict that each observation belongs to the most commonly occurring class of the training observations in the region to which it belongs. In classification, RSS cannot be used for making the binary splits; an alternative to RSS is the classification error rate. Since we assign an observation in a given region to the most commonly occurring class of training observations in that region, the classification error rate is simply the fraction of the training observations in that region that do not belong to the most common class:

E = 1 − maxₖ (p̂mk)

where p̂mk is the proportion of training observations in the mth region that are from the kth class. However, classification error is not sensitive enough for tree-growing, and in practice two other measures are favoured:

1. The Gini index, given by

G = Σₖ p̂mk (1 − p̂mk)

2. Cross-entropy, given by

D = −Σₖ p̂mk log p̂mk

Since 0 ≤ p̂mk ≤ 1, it follows that 0 ≤ −p̂mk log p̂mk. The cross-entropy will take on a value near zero if the p̂mk's are all near 0 or near 1; therefore, the cross-entropy will take on a small value if the mth node is pure. It turns out that the Gini index and the cross-entropy are quite similar numerically.
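The numerical similarity of the Gini index and cross-entropy is easy to check. Here is a small sketch computing all three node-impurity measures for a two-class node; the class proportions are illustrative:

```python
import math

def error_rate(p):
    """Classification error rate: 1 - max class proportion."""
    return 1.0 - max(p)

def gini_index(p):
    """Gini index: sum of p_k * (1 - p_k) over classes."""
    return sum(pk * (1.0 - pk) for pk in p)

def cross_entropy(p):
    """Cross-entropy: -sum of p_k * log(p_k), skipping empty classes."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

# Class proportions in a node, from nearly pure to maximally mixed
for p in [(0.9, 0.1), (0.7, 0.3), (0.5, 0.5)]:
    print(p, round(error_rate(p), 3), round(gini_index(p), 3), round(cross_entropy(p), 3))
```

All three measures shrink as the node gets purer, but the Gini index and cross-entropy track each other much more closely than the error rate does, which is why they are preferred for tree-growing.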
These two approaches are more sensitive to node purity than the classification error rate is. Any of the three approaches might be used when pruning the tree, but the classification error rate is preferable if the prediction accuracy of the final pruned tree is the goal.

Decision tree classifier using sklearn

To implement the decision tree classifier, we're going to use scikit-learn, and we'll import our DecisionTreeClassifier from sklearn.tree:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
```

Load the Data

Once the libraries are imported, our next step is to load the data: the iris dataset, a classic and very easy multi-class classification dataset available in sklearn's datasets. This dataset consists of 3 different types of irises (Setosa, Versicolour, and Virginica), whose petal and sepal measurements are stored in a 150x4 numpy.ndarray. The rows are the samples and the columns are: Sepal Length, Sepal Width, Petal Length and Petal Width.
https://morioh.com/p/df954018176b
Many systems boast of being 'powerful', and it sounds difficult to argue that this is a bad thing; almost everyone who uses the word assumes that it is always a good thing. The thesis of this post is that in many cases we need less powerful languages and systems.

Before I get going, I should say first of all that there is very little in this post by way of original insight. The train of thought behind it was set off by reading Hofstadter's book Gödel, Escher, Bach: an Eternal Golden Braid, which helped me pull together various things in my own thinking where I've seen the principle in action. Philip Wadler's post on the rule of least power was also formative, and most of all I've taken a lot from the content of this video from a Scala conference about everything that is wrong with Scala, which makes the following fairly central point:

Every increase in expressiveness brings an increased burden on all who care to understand the message.

My aim is simply to illustrate this point using examples that might be more accessible to the Python community than the internals of a Scala compiler.

I also need a word about definitions. What do we mean by "more powerful" or "less powerful" languages? In this article, I mean something roughly like this: "the freedom and ability to do whatever you want to do", seen mainly from the perspective of the human author entering data or code into the system. This roughly aligns with the concept of "expressiveness", though not perhaps with a formal definition. (More formally, many languages have equivalent expressiveness in that they are all Turing complete, but we still recognise that some are more powerful in that they allow a certain outcome to be produced with fewer words or in multiple ways, with greater freedoms for the author.)
The problem with this kind of freedom is that every bit of power you insist on having when writing in the language corresponds to power you must give up at other points of the process, namely when 'consuming' what you have written. I'll illustrate this with various examples which range beyond what might be described as programming, but which have the same principle at heart.

We'll also need to ask "Does this matter?" It matters to the extent that you need to be able to 'consume' the output of your system. Different players who might 'consume' the message are software maintainers, compilers and other development tools, which means you almost always care; this has implications both for performance and correctness as well as for human concerns.

Databases and schema

Starting at the low end of the scale in terms of expressiveness, there is what you might call data rather than language. But both "data" and "language" can be thought of as "messages to be received by someone", and the principle applies here.

In my years of software development, I've found that clients and users often ask for "free text" fields. A free text field is maximally powerful as far as the end user is concerned: they can put whatever they like in. In this sense, it is the "most useful" field, since you can use it for anything. But precisely because of this, it is also the least useful, because it is the least structured. Even search doesn't work reliably, because of typos and alternative ways of expressing the same thing.

The longer I do software development involving databases, the more I want to tightly constrain everything as much as possible. When I do so, the data I end up with is massively more useful. I can do powerful things when consuming the data only when I severely limit the power (i.e. the freedom) of the agents putting data into the system. In terms of database technologies, the same point can be made.
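As a toy sketch of what "tightly constraining" a field buys the consumer of the data (the field name and values here are invented for illustration), compare a value constrained to an enumeration with a free-text field:

```python
from enum import Enum

class Status(Enum):
    """A tightly constrained field: only these values can ever be stored."""
    NEW = "new"
    IN_PROGRESS = "in_progress"
    DONE = "done"

def set_status(record, value):
    # Validation happens at the point of entry: invalid input is rejected,
    # so every later consumer of the data can rely on the constraint holding.
    record["status"] = Status(value)  # raises ValueError for anything else
    return record

record = set_status({}, "new")
print(record["status"])  # Status.NEW

# A free-text field would happily accept "NEW ", "nw", "In progress!", ...
try:
    set_status({}, "nw")
except ValueError:
    print("rejected")
```

Giving up the freedom to type anything into the field is exactly what makes reliable search, grouping and reporting over it possible later.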
Databases that are "schema-less" give you great flexibility and power when putting data in, but are extremely unhelpful when getting it out. A key-value store is a more technical version of "free text", with the same drawbacks — it is pretty unhelpful when you want to extract info or do anything with the data, since you cannot guarantee that any specific keys will be there.

HTML

The success of the web has been partly due to the fact that some of the core technologies, HTML and CSS, have been deliberately limited in power. Indeed, you probably wouldn't call them programming languages, but markup languages. This, however, was not an accident, but a deliberate design principle on the part of Tim Berners-Lee, and it has become a W3C principle:

Good Practice: Use the least powerful language suitable for expressing information, constraints or programs on the World Wide Web.

Note that this is almost exactly the opposite of Paul Graham's advice (with the caveat that 'power' is often too informally defined to compare): if you have a choice of several languages, it is, all other things being equal, a mistake to program in anything but the most powerful one.

Python setup.py MANIFEST.in file

Moving up towards 'proper' programming languages, I came across this example — the MANIFEST.in file format used by distutils/setuptools. If you have had to create a package for a Python library, you may well have used it. The file format is essentially a very small language for defining what files should be included in your Python package (relative to the MANIFEST.in file, which we'll call the working directory from now on).
It might look something like this:

include README.rst
recursive-include foo *.py
recursive-include tests *
global-exclude *~
global-exclude *.pyc
prune .DS_Store

There are two types of directive: include type directives (include, recursive-include, global-include and graft), and exclude type directives (exclude, recursive-exclude, global-exclude and prune). The question is: how are these directives to be interpreted (i.e. what are the semantics)? You could interpret them in this way:

A file from the working directory (or sub-directories) should be included in the package if it matches at least one include type directive, and does not match any exclude type directive.

This would make it a declarative language. Unfortunately, that is not how the language is defined. The distutils docs for MANIFEST.in are specific about this — the directives are to be understood as follows (my paraphrase):

- Start with an empty list of files to include.
- Process the directives in order: each include type directive copies all files that match from the working directory into the list, and each exclude type directive removes all files that match from the list built up so far.

As you can see, this interpretation defines a language that is imperative in nature — each line of MANIFEST.in is a command that implies an action with side effects. The point to note is that this makes the language more powerful than my speculative declarative version above. For example, consider the following:

recursive-include foo *
recursive-exclude foo/bar *
recursive-include foo *.png

The end result of the above commands is that .png files that are below foo/bar are included, but all other files below foo/bar are not. If I'm thinking straight, to replicate the same result using the declarative language is harder — you would have to do something like the following, which is obviously sub-optimal:

recursive-include foo *
recursive-exclude foo/bar *.txt *.rst *.gif *.jpeg *.py ...

So, because the imperative language is more powerful, there is a temptation to prefer that one. However, the imperative version comes with significant drawbacks. The first is that it is much harder to optimise.
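As an aside, the declarative interpretation speculated above is simple enough to implement in a few lines. This is a hypothetical sketch of those semantics (not how distutils actually behaves): a file is included iff it matches at least one include pattern and no exclude pattern, so the order of the patterns is irrelevant.

```python
import fnmatch

def declarative_manifest(files, includes, excludes):
    """Declarative semantics: membership in the package depends only on
    which patterns a file matches, never on the order of directives."""
    return [
        f for f in files
        if any(fnmatch.fnmatch(f, pat) for pat in includes)
        and not any(fnmatch.fnmatch(f, pat) for pat in excludes)
    ]

files = ["README.rst", "foo/a.py", "foo/a.pyc", "notes.txt~"]
result = declarative_manifest(
    files,
    includes=["README.rst", "foo/*.py"],
    excludes=["*~", "*.pyc"],
)
```

Because each file's fate is independent of directive order, an optimiser is free to prune the file-system walk using the exclude patterns up front — exactly the freedom the imperative semantics take away.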
When it comes to interpreting the MANIFEST.in and building a list of files to include in the package, one fairly efficient solution for a typical case is to first build an immutable list of all files in the directory and its sub-directories, and then apply the rules: addition rules involve copying from the full list to an output list, and subtraction rules involve removing from the output list. This is how the Python implementation currently does it.

This works OK, unless you have many thousands of files in the full list, most of which are going to get pruned or not included, in which case you can spend a lot of time building up the full list, only to ignore most of it. An obvious shortcut is to not recurse into directories that would be excluded by some exclude directive. However, you can only do that if the exclude directives come after all include directives. This is not a theoretical problem — I've found that doing setup.py sdist and other commands can take 10 minutes to run, due to a large number of files in the working directory, if you use the tool tox for instance. This means that runs of tox itself (which uses setup.py) become very slow.

I am currently attempting to fix this issue, but it is looking like it will be really hard. Adding the optimised case might not look that hard (you can shortcut the file system traversal using any exclude directives that come after all include directives), but it adds sufficiently to the complexity that a patch is unlikely to be accepted — it increases the number of code paths and the chances of mistakes, to the point of it not being worth it. It might be that the only practical solution is to avoid MANIFEST.in altogether and optimise only the case when it is completely empty.

The power has a second cost — MANIFEST.in files are harder to understand. First, in understanding how the language works — the docs for this are considerably longer than for the declarative version I imagined.
Second, in analysing a specific MANIFEST.in file — you have to execute the commands in your head in order to work out what the result will be, rather than being able to take each line on its own, or in any order that makes sense to you. This actually results in packaging bugs. For instance, it would be easy to believe that a directive like:

global-exclude *~

at the top of a MANIFEST.in file would result in any file name ending in ~ (temporary files created by some editors) being excluded from the package. In reality it does nothing at all, and the files will be erroneously included if other commands include them. Examples I've found of this mistake (exclude directives that don't function as intended or are useless) include:

- hgview (exclude directives at the top do nothing)
- django-mailer (global-exclude at the top does nothing)

Another result is that you cannot group lines in the MANIFEST.in file in any way you please, for clarity, since re-ordering changes the meaning of the file.

In addition, virtually no-one will actually use the additional power. I'm willing to bet that 99.99% of MANIFEST.in files do not make use of the additional power of the imperative language (I downloaded 250 and haven't found any that do). So we could have been served much better by a declarative language here instead of an imperative one. But backwards compatibility forces us to stick with this. That highlights another point — it is often possible to add features to a language to make it more powerful, but compatibility concerns usually don't allow you to make it less powerful, for example by removing features or adding constraints.

URL reversing

One core piece of the Django web framework is URL routing. This is the component that parses URLs and dispatches them to the handler for that URL, possibly passing some components extracted from the URL. In Django, this is done using regular expressions.
For an app that displays information about kittens, you might have a kittens/urls.py with the following:

from django.conf.urls import url

from kittens import views

urlpatterns = [
    url(r'^kittens/$', views.list_kittens, name="kittens_list_kittens"),
    url(r'^kittens/(?P<id>\d+)/$', views.show_kitten, name="kittens_show_kitten"),
]

The corresponding views.py file looks like:

def list_kittens(request):
    # ...

def show_kitten(request, id=None):
    # ...

Regular expressions have a capture facility built in, which is used to capture parameters that are passed to the view functions. So, for example, if this app were running on cuteness.com, a URL like http://www.cuteness.com/kittens/23/ results in calling the Python code show_kitten(request, id="23").

Now, as well as being able to route URLs to specific functions, web apps almost always need to generate URLs. For example, the kitten list page will need to include links to the individual kitten page i.e. show_kitten. Obviously we would like to do this in a DRY way, re-using the URL routing configuration. However, we would be using the URL routing configuration in the opposite direction. When doing URL routing, we are doing:

URL path -> (handler function, arguments)

In URL generation, we know the handler function and arguments we want the user to arrive at, and want to generate a URL that will take the user there, after going through the URL routing:

(handler function, arguments) -> URL path

In order to do this, we essentially have to predict the behaviour of the URL routing mechanism. We are asking "given a certain output, what is the input?" In the very early days Django did not include this facility, but it was found that with most URLs, it was possible to 'reverse' the URL pattern. The regex can be parsed, looking for the static elements and the capture elements. Note, first of all, that this is only possible at all because the language being used to define URL routes is a limited one — regular expressions.
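To see why the limited language makes reversing possible at all, here is a deliberately simplified sketch (my own invention, nothing like Django's real implementation): split the pattern into static text and named capture groups, then substitute arguments for the groups. It only works for patterns whose wildcards all sit inside named groups without nested parentheses — which is exactly the point.

```python
import re

# Matches a named capture group such as (?P<id>\d+).
# Assumes no nested parentheses inside the group.
GROUP_RE = re.compile(r'\(\?P<(\w+)>[^)]+\)')

def reverse_pattern(pattern, **kwargs):
    """Run a restricted URL regex 'backwards': keep the static text,
    replace each named group with the supplied argument."""
    path = pattern.lstrip('^').rstrip('$')  # drop the anchors
    def substitute(match):
        return str(kwargs[match.group(1)])
    return GROUP_RE.sub(substitute, path)

url = reverse_pattern(r'^kittens/(?P<id>\d+)/$', id=23)
```

A wildcard outside a capture group defeats this immediately — there is no argument to substitute for it — which is the "too powerful" failure mode discussed below.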
We could easily have defined URL routes using a more powerful language. For example, we could have defined them using functions that:

- take a URL path as input
- raise NoMatch if they do not match
- return a truncated URL and an optional set of captures if they do match.

Our kittens urls.py would look something like this:

from django.conf.urls import url, NoMatch

def match_kitten(path):
    KITTEN = 'kitten/'
    if path.startswith(KITTEN):
        return path[len(KITTEN):], {}
    raise NoMatch()

def capture_id(path):
    part = path.split('/')[0]
    try:
        id = int(part)
    except ValueError:
        raise NoMatch()
    return path[len(part)+1:], {'id': id}

urlpatterns = [
    url([match_kitten], views.list_kittens, name='kittens_list_kittens'),
    url([match_kitten, capture_id], views.show_kitten, name="kittens_show_kitten"),
]

Of course, we could provide helpers that make things like match_kitten and capture_id much more concise:

from django.conf.urls import url, m, c

urlpatterns = [
    url([m('kitten/')], views.list_kittens, name='kittens_list_kittens'),
    url([m('kitten/'), c(id=int)], views.show_kitten, name="kittens_show_kitten"),
]

Now, this language for URL routing is actually a lot more powerful than our regex based one, assuming that m and c are returning functions as above. The interface for matching and capturing is not limited to the capabilities of regexes — for instance, we could do database lookups for the IDs, or many other things.

The downside, however, is that URL reversing would be entirely impossible. For general, Turing complete languages, you cannot ask "given this output, what is the input?". We could potentially inspect the source code of the function and look for known patterns, but it quickly becomes totally impractical. With regular expressions, however, the limited nature of the language gives us more options.

In general, URL configuration based on regexes is not reversible — a regex as simple as "." cannot be reversed uniquely.
(Since we want to generate canonical URLs normally, a unique solution is important. As it happens, for this wild card, Django currently picks an arbitrary character, but other wild cards are not supported). But as long as wild cards of any sort are only found within capture groups (and possibly some other constraints), the regex can be reversed.

So, if we want to be able to reliably reverse the URL routes, we actually want a language less powerful than regular expressions. Regular expressions were presumably chosen because they were powerful enough, without realising that they were too powerful.

Additionally, in Python defining mini-languages for this kind of thing is quite hard, and requires a fair amount of boilerplate and verbosity, both for implementation and usage — much more than when using a string based language like regexes. In languages like Haskell, relatively simple features like easy definitions of algebraic data types and pattern matching make these things much easier.

Regular expressions

The mention of regexes as used in Django's URL routing reminds me of another problem: many usages of regexes are relatively simple, but whenever you invoke a regex, you get the full power whether you need it or not. One consequence is that for some regular expressions, the need to do backtracking to find all possible matches means that it is possible to construct malicious input that takes a huge amount of time to be processed by the regex implementation. This has been the cause of a whole class of Denial of Service vulnerabilities in many web sites and services, including one in Django due to an accidentally 'evil' regex in the URL validator — CVE-2015-5145.

Django templates vs Jinja templates

The Jinja template engine was inspired by the Django template language, but with some differences in philosophy and syntax. One major advantage of Jinja2 over Django is that of performance.
Jinja2 has an implementation strategy which is to compile to Python code, rather than run an interpreter written in Python, which is how Django works, and this results in a big performance increase — often 5 to 20 times. (YMMV etc.)

Armin Ronacher, the author of Jinja, attempted to use the same strategy to speed up Django template rendering. There were problems, however. The first of these he knew about when he proposed the project — namely that the extension API in Django makes the approach taken in Jinja very difficult. Django allows custom template tags that have almost complete control over the compilation and rendering steps. This allows some powerful custom template tags, like addtoblock in django-sekizai, that seem impossible at first glance. However, if a slower fallback was provided for these less common situations, a fast implementation might still have been useful.

However, there is another key difference that affects a lot of templates, which is that the context object that is passed in (which holds the data needed by the template) is writable within the template rendering process in Django. Template tags are able to assign to the context, and in fact some built-in template tags like url do just that. The result of this is that a key part of the compilation to Python that happens in Jinja is impossible in Django.

Notice that in both of these cases, it is the power of Django's template engine that is the problem — it allows code authors to do things that are not possible in Jinja2. However, the result is that a very large obstacle is placed in the way of attempts to compile to fast code. This is not a theoretical consideration. At some point, performance of template rendering becomes an issue for many projects, and a number have been forced to switch to Jinja because of that. This is far from an optimal situation!
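A tiny invented sketch (nothing like Django's real code) makes the writable-context problem concrete: one "tag" node assigns into the context as a side effect, and a later variable lookup silently depends on that assignment, so no compiler can safely resolve the lookup to a fast local variable ahead of time.

```python
# A node is just a callable taking the (mutable) context dict and
# returning a string of output.
def render(nodes, context):
    out = []
    for node in nodes:
        out.append(node(context))
    return "".join(out)

# A Django-style tag that *writes* into the context as a side effect...
def set_url(context):
    context["the_url"] = "/kittens/23/"
    return ""

# ...which a later variable lookup silently depends on:
def show_url(context):
    return context["the_url"]

html = render([set_url, show_url], {})
```

Since any node may mutate the context, every lookup must stay a dynamic dictionary access at render time — which is precisely the optimisation Jinja's compiled output performs and Django cannot.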
Often the issues that make optimisation difficult are only clear with the benefit of hindsight, and it isn't true to say that simply adding restrictions to a language is necessarily going to make it easier to optimise. There are certainly languages which somehow manage to hit a "sour spot" of providing little power to either the authors or the consumers! You might also say that for the Django template designers, allowing the context object to be writable was the obvious choice, because Python data structures are typically mutable by default. Which brings us to Python...

Python

There are many ways that we could think about the power of the Python language, and how it makes life hard for every person and program that wants to make sense of Python code. Compilation and performance of Python is an obvious one. The unrestricted effects that are possible at any point, including writable classes and modules etc., not only allow authors to do some very useful things, they make it extremely difficult to execute Python code quickly. PyPy has made some impressive progress, but looking at the curve from PyPy 1.3 onward, which shows diminishing returns, makes it clear that they are unlikely to make much bigger gains in the future. And the gains that have been made in terms of run time have often been at the expense of memory usage. There is simply a limit to how well you can optimise Python code.

(Please note, to all who continue reading this — I'm not a Python basher, or a Django basher for that matter. I'm a core developer of Django, and I use Python and Django in almost all my professional programming work. The point of this post is to illustrate the problems caused by powerful languages.)

However, rather than focus on the performance problems of Python, I'm going to talk about refactoring and maintenance. If you do any serious work in a language, you find yourself doing a lot of maintenance, and being able to do it quickly and correctly often becomes very important.
So, for example, in Python, and with typical VCS tools (Git or Mercurial, for instance), if you re-order functions in a module e.g. move a 10 line function to a different place, you get a 20 line diff, despite the fact that nothing changed in terms of the meaning of the program. And if something did change (the function was both moved and modified), it's going to be very difficult to spot. This happened to me recently, and set me off thinking just how ridiculously bad our toolsets are. Why on earth are we treating our highly structured code as a bunch of lines of text? I can't believe that we are still programming like this, it is insane!

At first, you might think that this could be solved with a more intelligent diff tool. But the problem is that in Python, the order in which functions are defined can in fact change the meaning of a program (i.e. change what happens when you execute it). Here are a few examples:

Using a previously defined function as a default argument:

def foo():
    pass

def bar(a, callback=foo):
    pass

These functions can't be re-ordered, or you'll get a NameError for foo in the definition of bar.

Using a decorator:

@decorateit
def foo():
    pass

@decorateit
def bar():
    pass

Due to unrestricted effects that are possible in @decorateit, you can't safely re-order these functions and be sure the program will do the same thing afterwards. Similarly, calling some code in the function argument list:

def foo(x=Something()):
    pass

def bar(x=Something()):
    pass

Similarly, class level attributes can't be re-ordered safely:

class Foo():
    a = Bar()
    b = Bar()

Due to unrestricted effects possible inside the Bar constructor, the definitions of a and b cannot be re-ordered safely. (This might seem theoretical, but Django, for instance, actually uses this ability inside Model and Form definitions to provide a default order for the fields, using a cunning class level counter inside the base Field constructor).
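That class-level counter trick can be sketched in a few lines (names invented here; Django's Field does something similar but not identical): every instance records a counter value at construction time, so the class body's declaration order can be recovered afterwards.

```python
import itertools

class Field:
    # Shared counter: each Field instance notes when it was created.
    _counter = itertools.count()

    def __init__(self):
        self.creation_order = next(Field._counter)

class Form:
    name = Field()
    email = Field()

# Recover the declaration order of the fields from the counters:
ordered = sorted(
    ((attr, f) for attr, f in vars(Form).items() if isinstance(f, Field)),
    key=lambda pair: pair[1].creation_order,
)
field_names = [attr for attr, _ in ordered]
```

This is exactly why swapping the two attribute lines is an observable change: the constructor runs once per line, in source order, and the program's output depends on it.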
Ultimately, you have to accept that a sequence of function statements in Python is a sequence of actions in which objects (functions and default arguments) are created, possibly manipulated, etc. It is not a re-orderable set of function declarations as it might be in other languages. This gives Python an amazing power when it comes to writing it, but imposes massive restrictions on what you can do in any automated way to manipulate Python source code.

Above I used the simple example of re-ordering two functions or class attributes. But every single type of refactoring that you might do in Python becomes virtually impossible to do safely because of the power of the language e.g. duck typing means you can't do method renames, and the possibility of reflection/dynamic attribute access (getattr and friends) means you can't in fact do any kind of automated renames safely.

So, if we are tempted to blame our crude VCS or refactoring tools, we actually have to blame the power of Python — despite the huge amount of structure in correct Python source code, there is very little that any software tool can do with it when it comes to manipulating it, and the line-based diffing that got me so mad is actually a reasonable approach.

Now, 99.99% of the time, we don't write Python decorators which mean that the order of function definitions makes a difference, or silly things like that — we are responsible "adults", as Guido put it, and this makes life easier for human consumers. But the fact remains that our tools are limited by what we do in the 0.01% of cases. For some consumers, we can optimise on the basis of the common case, and detect when that fails e.g. a JIT compiler using guards. But with others e.g. VCS or refactoring tools, the "runtime" information that you hit the unlucky case comes far too late — you might have released your subtly-broken code by the time you find out, so you have to be safe rather than sorry.
In an ideal world, with my dream language, when you rename a function, the entire "diff" in your VCS should simply be "Function foo renamed to bar". (And this should be exportable/importable, so that when you upgrade a dependency to a version in which foo is renamed to bar, it should be exactly zero work to deal with this.) In a "less powerful" language, this would be possible, but the power given to the program author in Python has taken power from all the other tools in the environment.

Does this matter? It depends on how much time you spend manipulating your code, compared to using code to manipulate data. At the beginning of a project, you may be tempted to desire the most powerful language possible, because it gives you the most help and freedom in terms of manipulating data. But later on, you spend a huge amount of time manipulating code, and often using an extremely basic tool to do so — a text editor. This treats your highly structured code as one of the least structured forms of data — a string of text — exactly the kind of manipulation you would avoid at all costs inside your code. But all the practices you would choose and rely on inside your program (manipulating all data inside appropriate containers) are no longer available to you when it comes to manipulating the program itself.

Some popular languages make automated refactoring easier, but more is needed: to actually make use of the structure of your code, you need an editor and VCS that understand your code properly. Projects like Lamdu and Unison are steps in the right direction, but still in their infancy, and they unfortunately involve re-thinking the entire software development stack :-(

Summary

When you consider the total system and all the players (whether software or human), including the need to produce efficient code, and long term maintainability, less powerful languages are actually more powerful — "slavery is freedom". There is a balance between expressiveness and reasonability.
The more powerful a language, the greater the burden on software tools, which either need to be more complicated in order to work, or are forced to do less than they could. This includes:

- compilers — with big implications for performance
- automated refactoring and VCS tools — with big implications for maintenance.

Similarly, the burden also increases for humans — for anyone attempting to understand the code or modify it. A natural instinct is to go for the most powerful solution, or a solution that is much more powerful than is actually needed. We should try to do the opposite — find the least powerful solution that will do the job. This won't happen if creating new languages (which might involve parsers etc.) is hard work. We should prefer software ecosystems that make it easy to create very small and weak languages.
http://lukeplant.me.uk/blog/posts/less-powerful-languages/
What I have: A basic program for infinitely looping and testing for an active process. It has a delay system built in so that it is not constantly iterating.

What I need: A way to get process IDs from processes other than my program, with a way to use that ID to detect whether that process is active on the computer or not.

Parts of my code that need changing:

/* Code for handling the process would go here */
/* Code for detecting the target would go here */

What the goal of my program is: Perform operations to terminate the process of cmd.exe when it is active on the user's computer. Then output the status of the process and the time it took to find that process to the user.

Link(s) of tutorials that did not help: Process Group Functions - The GNU C Library. It contained functions that I needed, but I need more information on how to apply them to processes other than the parent of my program and its children.

Here is the code I prewrote for this so far. I would appreciate the help for making this program; I am doing this as a learning experience and not homework. Thanks for your time.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ZERO 0
#define INFINITE 1000000000000000

#ifdef _WIN32
#define SYSTEM "Windows"
#endif // _WIN32
#ifdef linux
#define SYSTEM "Linux"
#endif // linux

#define TARGET "cmd.exe"

void Delay( int seconds );
int FindProcess(void);

int main()
{
    time_t start, stop;
    start = time(NULL);
    do {
        int status = FindProcess(); /* re-check the process on every iteration */
#ifdef _WIN32
        system("cls");
#endif // _WIN32
#ifdef linux
        system("clear");
#endif // linux
        printf("%s is currently %s \n \n", TARGET,
               ( status==1 ) ? "ENABLED" : "DISABLED" );
        Delay(2);
        if ( status==1 ) {
            /* Code for handling the process would go here */
            stop = time( NULL );
            /* difftime(end, start): end time first, so the result is positive */
            printf("Process was found after %.0f seconds \n", difftime( stop, start ) );
            start = time( NULL );
            Delay(2);
        }
    } while ( ZERO < INFINITE ); /* Creates an almost infinite loop */
#ifdef _WIN32
    system("pause>nul");
#endif // _WIN32
    return 0;
}

int FindProcess(void)
{
    int PROCESS = 0;
    /* Code for detecting the target would go here */
    return PROCESS;
}

void Delay( int seconds )
{
    clock_t end_of_delay = ( clock() + ( seconds * CLOCKS_PER_SEC ) );
    while ( clock() < end_of_delay ) {};
}
http://cboard.cprogramming.com/c-programming/155337-c-procceses-process-id-help-printable-thread.html
blekko 0.1.1

bindings for the Blekko search engine API

This module provides simple bindings to the Blekko API. To use the API, contact Blekko for an API key. This module currently only supports search queries and page statistics. The API also provides tools for manipulating slashtags, but this library doesn't support that yet. The library is internally rate-limited to one query per second in accordance with Blekko's guidelines.

Searching

To use the API, first create a Blekko object using your "source" or "auth" API key:

import blekko
api = blekko.Blekko(source='my_api_key')

Then, to perform searches, use the query method. Its arguments are the search terms (as a string) and, optionally, the page number:

results = api.query('peach cobbler')

The returned object is a sequence containing Result objects, which themselves have a number of useful fields:

for result in results:
    print result.url_title
    print result.url
    print result.snippet

Errors in communicating with the server are raised as BlekkoError exceptions, so you'll want to handle these exceptions when making calls to the API.

An Example

Putting it all together, here's a short script that gets a single link for search terms on the command line:

import blekko
import sys

_api = blekko.Blekko(source='my_api_key')

def get_link(terms):
    try:
        res = _api.query(terms + ' /ps=1')
    except blekko.BlekkoError as exc:
        print >>sys.stderr, str(exc)
        return None
    if len(res):
        return res[0].url

if __name__ == '__main__':
    link = get_link(' '.join(sys.argv[1:]))
    if link:
        print(link)
    else:
        sys.exit(1)

Page Statistics

Blekko provides an API for getting SEO-related statistics for a URL.
Use the pagestats method, which takes a URL as its only parameter, to get a dictionary containing information about a page:

>>> api.pagestats('')
{u'cached': True, u'ip': u'82.94.164.162', u'host_rank': 3835.107267, u'host_inlinks': 467267, u'adsense': None, u'dup': True, u'rss': u''}

Credits

These bindings were written by Adrian Sampson and modeled after the Perl bindings by Greg Lindahl. The source is made available under the MIT license.

- Author: Adrian Sampson
- License: MIT
- Platform: ALL
- Package Index Owner: Adrian
- DOAP record: blekko-0.1.1.xml
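The one-query-per-second rate limiting mentioned above can be implemented with a pattern like the following. This is an invented sketch, not the library's actual code: remember the time of the last request and sleep out whatever remains of the minimum interval.

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive calls to wait()."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)  # block until the interval has passed
        self._last = time.monotonic()

# Demonstration with a short interval: three calls enforce two gaps.
limiter = RateLimiter(min_interval=0.1)
start = time.monotonic()
for _ in range(3):
    limiter.wait()
elapsed = time.monotonic() - start
```

A wrapper like this would typically be called at the top of each HTTP request method, so callers never have to think about the API's pacing rules.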
https://pypi.python.org/pypi/blekko
Cats are cute and fun to have around...until they're not, like when they're stray cats chasing away the birds in your yard and using your lawn as a litter box. Plus, stray cats can pass on illnesses to your domestic pets, which you definitely don't want. Don't worry though—whether you're dealing with an occasional stray or an entire cat colony, there are simple steps you can take to keep them out of your yard and out of your life.

Steps

Method 1 of 3: Removing Sources of Food and Shelter

1. Remove the feral cats' food sources. Start by making sure your trash isn't overflowing from the can, and that you secure the can with a tight-fitting lid. Make sure that you're not leaving any organic food scraps sitting around outside. Also ask your neighbors to use tight-fitting lids to seal their trash cans.
- Keep in mind, though, that cats can subsist on very little, so it may be impossible to completely remove their food sources in your area.
- If you do feed cats, put the food at least 30 feet (9.1 m) away from your house. Don't place it at your front door unless you want to encourage cats to collect there.

2. Remove or block sources of shelter to prevent cats from moving in. Cats seek out warm, dry spaces for shelter from the elements. If they're unable to find suitable places, they will move on to the next neighborhood. So, fence off any small openings under your porch or deck, and make sure your shed door is tightly closed. Remove woodpiles and trim thick brush so cats can't make their homes in these areas.[1]
- If you notice cats gathering in a particular area of your property, figure out what they're using for shelter. Then block the cats' access to it.

Tip: Plywood and chicken wire are inexpensive and effective materials for covering openings. Staple or nail the plywood or wire over the openings to make them inaccessible to cats.

3. Spray your yard with a commercial cat-repellant.
Various companies produce cat-deterring chemical sprays. These sprays contain ingredients and smells (whether natural or synthetic) that cats find unpleasant. Follow the directions printed on the packaging as far as how often to spray the repellant. Spray areas of your yard in which the cats frequently spend time.
- You can purchase cat repellants at most pet stores and home-improvement stores.
- These products are safe and non-toxic to both feral and domesticated cats.

4. Call animal control if you can't control the population on your own. If your property is being overrun by feral and stray cats, you may need to call your county animal control office. They'll take steps to help remove the cats. Be aware, though, that animal control agencies usually trap the cats and euthanize them.
- Removing a community of cats from where they are living creates a vacuum effect. New cats quickly move into the vacant area and start using the resources to thrive and survive.

Method 2 of 3: Repelling Cats from Your Garden

1. Install a motion-sensing sprinkler to spray encroaching cats. It's a well-known fact that cats and water do not mix, so felines will stay out of the water's range and off of your lawn. Set the sprinkler to go off at night when an animal comes within about 4 feet (1.2 m) of it to avoid soaking passers-by on a sidewalk.
- An added bonus is that your grass and flowers will get a nice watering in the process.

2. Toss citrus fruit peels directly into your garden plot. Cats dislike the smell and taste of citrus fruits like orange, lemon, lime, and grapefruit. So, the next time you're eating or juicing one of these fruits, throw the peels and rinds out into your garden. The cats should give the area a wide berth.

Note: Planting citrus trees will not be effective in keeping cats out of a garden, since the smell won't be as strong.

3. Lay chicken wire over the soil in a garden where cats dig.
If you find that stray cats keep digging up your garden or gnawing on exposed plant roots, you can block them with chicken wire. Purchase a length that's sufficient to cover your garden. Lay the wire directly on the ground, and put stones on the 4 corners so cats won't be able to move the wire.[2]
  - You can purchase any length of chicken wire at a local hardware store or a home-improvement store.
- 4. Plant herbs and botanicals that cats find unpleasant. The idea here is similar to the citrus peels. If you fill your garden or planter with herbs that cats can't stand, they'll be much less likely to dig through the soil. Put at least 3-4 cat-deterring plants in your garden to keep the pests away. Plants that will deter cats include:[3]
  - Lavender
  - Lemon thyme
  - Rue
  - Pennyroyal
- 5. Sprinkle ground black pepper around areas where cats congregate. The cats will be bothered by their spicy paws at grooming time. If you routinely apply the pepper to your yard, cats will soon learn that your property is the source of the irritation. Sprinkle pepper under your porch, in your shed, on your back patio, or wherever you see cats playing or napping.
  - Pepper works to keep cats off of a grass-covered lawn, too. But you will have to reapply it frequently, especially after heavy rains.

Method 3 of 3: Employing a Trap-Neuter-Return System

- 1. Trap feral cats on your property so they can be neutered and returned.[4]
  - When you catch a cat, don't let it out of the box trap. Cover the trap with a blanket to calm the cat down.
  - You can purchase humane cat box traps at a local pet store, animal shelter, or home-improvement store.
- 2. Don't take cats to the animal shelter. Most shelters don't accept feral cats, since they are most often not adoptable. Feral cats are often shy and unsociable, so they shouldn't be invited into people's homes. Feral cats that enter shelters are almost always euthanized.
- 3. Take the cats to a veterinarian who can neuter and tag them. Many vets have programs that allow them to spay or neuter feral cats at no cost, since the feral cat population is known to be a problem. Call around to vets and shelters in your area to find a program suitable for your situation. Explain that you'd like to bring in a feral cat for neutering. Most vets will also clip the cat's ear as a sign that it's already been caught and neutered.[6]
  - Make sure the vet you use is aware in advance that you're bringing a feral cat in, as they may not handle feral animals.
  - Spaying or neutering the cat is a humane way to keep it from reproducing and control local cat populations.
- 4. Take the cat back home with you and allow it to recuperate. Once you bring the cat to the vet, you'll be responsible for its well-being in the short term. Take the cat home with you and make sure it has healed enough to live in the wild once the procedure is complete.[7]
  - Never release a cat that is injured or anesthetized into the wild.
- 5. Release the cat at the location where you trapped it. The cat is likely already feeling traumatized and will adjust best to familiar turf. Additionally, male cats keep strange males away from their colonies. This keeps non-spayed females from additional opportunities to mate, which helps to control the population. The end goal of the trap-neuter-return strategy is to prevent the continual breeding of free-roaming cats.
  - In order for the trap-neuter-return method to be an effective way to control a cat population, most or all of the cats in the population need to be trapped and neutered.

Community Q&A

- Question: I have a cat that is coming through the doggie door and I cannot shut it. I need a natural herb to repel it so it will go away. What can I do?
  Amelia Ashton, Community Answer: Try spraying the outside of the doggie door with white or apple cider vinegar.
All cats are different, but most cats hate the smell, and it is nontoxic.
- Question: Why does a stray cat keep attacking my pet rabbit?
  Community Answer: Cats naturally attack smaller animals. The cat will attack, kill, and eat your pet rabbit. Make sure that your rabbit is in an enclosed area or inside at times when you are not around him.
- Question: How do I keep cats from mating in my garage?
  Community Answer: I recommend you close your garage, or close off the space where the cats are coming in.
- Question: Will orange peel or garlic deter cats?
  Community Answer: Garlic will not, but orange peels or lemon skins are good options because cats hate citrus. Most cats also don't like fruit smells.
- Question: How do I get stray cats out of the attic in my garage?
  Community Answer: We had the same problem; there is usually a hole or gap in the attic where cats can enter. We went up to our roof, which scared the cats out, and then closed the hole up.
- Question: I have 2 fixed, male, outside cats of my own. Recently, other cats are coming around and having cat fights. How do I get rid of them and protect my cats?
  Community Answer: Try spraying the other cats with a hose if possible, or have a spray bottle ready to spray them. Consider bringing your outside cats inside if you sense a fight will break out.
- Question: My sister has taken in over 25 cats. She must get rid of all of them or get fined. How can we get rid of them humanely? We cannot afford the fee the ASPCA is asking.
  Community Answer: If she can't afford to surrender them to a shelter, I suggest trying to adopt them out yourselves. Use local Facebook groups, local free advertising platforms, town bulletin boards, posters, craigslist, etc. Just include a phone number and pictures and offer the cats for free to good homes.
- Question: Is it mean to not give stray cats some help?
  Community Answer: It's not mean. You might feel bad about leaving them out there, but many strays could bring diseases and fleas into the house.
Most strays are pretty capable of taking care of themselves.
- Question: How do I get rid of fleas on my cat?
  Community Answer:
- Question: Who do I contract to remove a trapped cat from my roof?
  Community Answer: You can either call animal control, or have a house inspector open up the roof and help remove the cat.

Tips

- Stray cats are cats that have become separated from their owners, while feral cats are non-domesticated cats that were born, and survive, in the wild.
- A trap-neuter-return (TNR) approach should only be used for truly feral cats. Stray cats that have been domesticated but no longer live with their owners should be taken to a shelter where they can be cleaned and re-homed.[8]
- If the offending cat is a legally registered pet, contact the owner and request that they keep the cat indoors. If the owner is uncooperative, contact your local animal control or police department to file a complaint.
- The most effective method of reducing the population of cats in an area is TNR. If you're uncomfortable trapping cats by yourself, you may also be able to ask animal control to help you employ the trap-neuter-return method.

Warnings

- Do not attempt to trap or corner a feral cat, as they can be quite ferocious. If you're bitten or scratched by a feral cat, seek medical treatment to ensure you can be properly immunized.
- Never attempt to harm or injure a trespassing cat. Not only is this inhumane and cruel, it is also illegal in most states.

References

- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑

About This Article

To get rid of stray cats, keep the lids on your garbage cans closed so the cats don't have a food source. Also, block off any shelters the cats may be sleeping in with plywood or chicken wire. If the cats stick around, try sprinkling pepper flakes around your property, which will irritate the cats' paws and make them leave.
If there are a lot of stray cats, you may want to contact your local animal rescue to have the cats trapped and relocated. For information on trapping the cats yourself, read on!
https://www.wikihow.com/Get-Rid-of-Cats
import android
droid = android.Android()
code = droid.scanBarcode()
isbn = int(code['result']['SCAN_RESULT'])
url = "" % isbn
droid.startActivity('android.intent.action.VIEW', url)

🙂

52 Responses to Android barcode scanner in 6 lines of Python code

Hmmm, perhaps I should add an iPhone to my shopping list instead of that barcode scanner you recommended, eh Matt? Can I tell my wife you're in favour of this purchase?! 😀

That is very cool, I can't wait to try this!!

"make phone calls" are you mad 😉

text-to-speech is cool, but I much prefer speech-to-text on cellphones. Any S60 apps you know of Matt? I mean, Symbian is the Linux of the mobile world. You must be interested in that OS as well.

Android phones are becoming more useful compared to the iPhone; the applications are more industrialized. I salute the Android developers for this.

Hate to admit it – but I'm seriously tempted by an Android after the #o2fail pricing of the new iPhone. It's great, but I am a poor broke SEO bloke with a family... Excellent that I can get my hands 'mucky' enough without breaking it...

6 lines of Python backed by a STRONG built-in API! Without the heavy lifting of having droid.scanBarcode() built in, it definitely would be a much different proof of concept.

I'm with Gerry on the iPhone, absolutely love it, but 1) AT&T (on this side of the pond) really shot themselves in the foot with their $499/$699 pricing and 2) Trying to code for the iPhone is like trying to learn Chinese from a cow.

Gotta say that I'm looking forward to this: It's going to be really useful to write quick scripts, rather than having to take on the Android SDK, since testing even very basic applications is ridiculously laborious when you're only running a Linux netbook.
That said, I only bought a G1 a week or so ago, and aside from the usual battery issues (for which I've already ordered a larger battery), I'm noticing that I'm getting multiple issues with freezing and rebooting, sometimes just after loading the OS. I'm fairly sure it's an app or widget I'm using causing it, since booting into Safe Mode seems to alleviate any problems, but since there seems to be no obvious form of logging or system monitoring apps available through the OS, it's impossible to tell which might be the cause without individually removing each app, or making a clean start, then reinstalling each app one by one and testing each time. I don't suppose you, or any of your readers, have any tips for tracking down the cause of my woes? It'd be nice to have a stable phone before I start working on scripts 🙂

When I saw your post about the barcode scanner I thought to myself, there has got to be a way to get my G1 to do this. It has all sorts of other barcode scanning applications. To those above excited about the iPhone pricing, I got my G1 from T-Mobile for $99 with a 2-year commitment. I think you may have to be a new customer, but we easily talked the guy at the mall into coming down $150. You just have to be willing to haggle. And if the T-Mobile store you go to tells you no, try an independent reseller at a kiosk. I can't say enough good things about my G1, though. I've played with the iPhone, and you couldn't get me to switch. 🙂

Matt, so when are you going to get to the post where Android writes the books for me and I can retire? That's what I call displacement technology :-) Morris

I like the bar code scanner app for the iPhone, I wind up using it at Best Buy.

Damn. Cool. Not as cool as the garage door opening automagically, but… Damn. Any chance that tech will come to the iPhone soon?
Seems it's time to buy a G1 now :)) and start using bar code applications.

This only emphasizes my point from a blog article I wrote: In the future everybody develops…

Does this work with those ISBN barcodes that end in an `X' rather than a digit? Only I note you're using `%d' as a format specifier for the Google URL.

Some guys have used the barcode reader built into Android (I believe it's the zxing library) to scan barcodes into Beep My Stuff (, disclaimer, I coded and run BMS). I don't think it's in the app store yet but the code is open source.

"Symbian is the Linux of the mobile world"? Symbian is not Linux. Android is closest to Linux. iPhone has a *nix heart (BSD). Symbian is its own thing. And the hardest of the 3 to program for. But it's been around a very long time.

I wouldn't try to coerce it to an integer — you're using it as an unmodified string value.

isbn = int(code['result']['SCAN_RESULT'])
url = "" % isbn

If you had an exception handler, I suppose it might make some sense. Anyways, knock that "6 lines" down to 5. 🙂

Yea, go buy a G1 so you can buy more things "more easy".

I would like to see Intel's OpenCV (Computer Vision) API implemented on Android. So you could put your face on a Muscle Man or a Seal or something… Sony Camcorder style.

That code doesn't tell me much; those are all encapsulated functions.

Since I renewed my site with Google Apps, it has been giving me hell: not showing up in SERPs, then reappearing like nothing; then saying that the site has expired, then going back to normal; now it's sending me to a GoDaddy parked free page. The site is a solid music blog, musicandartsblog.com.

My friend has one and he loves it. Runs all kinds of crazy stuff on it. I'd love to get one as well.

Alex, that's pretty weird. Having the same issue for my main term, there one day, gone the next. Perhaps Matt could enlighten us?

I've been in technology and business for about 20 years now, and I love how new technologies continually appear.
There is always something getting faster and better yet cheaper, and there is always someone finding a new way to apply it. Gotta love that!

Totally off subject, but I read an article that Google is looking at Twitter and may even display Twitter returns in the SERPs. If that's true, all I've got to say is "are you kidding me"? Many people, myself included, would not like to search through useless 140-character posts by narcissistic people when looking for something. If I did, I would go to Twitter and search. If Google does this, you will be playing right into the hands of MS. I'll stop using Google, as I'm sure many others who find Twitter useless will.

Looking forward to trying this out! Thanks Matt

Is it possible to access the Speech Recognition engine in a similar manner using ASE? Specifically "RecognizerIntent"?

It's pretty oversized. Let's start with removing the useless formatting code:

import android
droid=android.Android()
code=droid.scanBarcode()
url=""+code['result']['SCAN_RESULT']
droid.startActivity('android.intent.action.VIEW', url)

We still have some useless assignments:

import android
droid.startActivity('android.intent.action.VIEW', ""+android.Android().scanBarcode()['result']['SCAN_RESULT'])

Well, we still have an import. Let's make it into a nice one-liner:

droid.startActivity('android.intent.action.VIEW', ""+__import__('android').Android().scanBarcode()['result']['SCAN_RESULT'])

(Nathan)

Wow, Nathan, you took an easily comprehensible script and turned it into a real mess. Missing the entire point of writing a script. Congrats. Please stay away from all code in the future.

HI, Can someone post the complete working sample code of using the same script above for scanning the barcode?

I get a syntax error on line 2: Syntax error: "(" unexpected. I also get "import: permission denied" just before that. Any ideas?

I cannot make these six lines of code work.. It always comes with a syntax error for the isbn that says something about a tuple.
I'm using ZXing's barcode scanner.. is that the problem?

Not able to start an activity. I have followed the same code. Can anyone tell how to start a simple activity?

This is what worked for me: I have installed the ZXing barcode scanner from the Market. Then I have this Python code:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['SCAN_RESULT'])
url = "" % isbn
droid.view(url)

And it works! THANK YOU for this AWSOME POST!

hmm.. a new bug… Problem is that my browser does not work correctly at .. When I add the book to a shelf it's not really done, though I've pressed save… I don't believe that this is something that ASE can bypass..

Okay.. this is not a six-line code. But I am totally new at coding.. From the six lines and the ASE API for Android I have made this 77-line code that will allow you to scan books without starting the application all the time. You can find the code here : I know it can be shorter and prettier, but I think it has a good layout that explains everything. If you can help me debug how to get the sendMail() working it would be awesome.

Here's what I've got now. Nothing above was working for me, and I figured it was the result of the scanBarcode function call, and I was right (see fix below). I'm on an HTC Incredible, on Android 2.1. I hope this helps someone down the line:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['extras']['SCAN_RESULT'])
url = "" % isbn
droid.view(url)

hmm.. don't know what happened at pastebin… you can download the script to your phone from this link:

Some things that helped me get it working: #1 I had to have the interpreter set to Python. Change it by going to Menu > View > Interpreters; hit the menu button again to add Python. Somehow in the mad copying and pasting that I was doing, some things got erased. Go through and double check to make sure everything's there.
I ended up using Carl M's code and changing the url to a different url, but it works perfectly for me on a Droid running 2.2 after about an hour of tinkering. Thanks for the article!

The original version failed to work for me, with a complaint about using a string as an index for a tuple. The versions suggested by Carl M and Lasse Nørfeldt work for me… this may have something to do with the way particular barcode applications return their information?

Does anyone know if there is a URL parameter for adding a book straight to your Google library?

I've expanded on your script to allow adding the books automatically to your library through the gdata.books API. It isn't quite 6 lines any more but it is still pretty simple. Cheers, Craig

Okay, so here's my question…..I've been wondering if anyone has made a scanner or application for phones that would scan the bar code of a book and tell you that you had already read or bought this book. I cannot tell you how many times I have bought the same book, because they've brought it out again, but in a new cover!!!! My mother, who's 80, and her friends are also curious about this. It has to be simple, though. I've been reading about the other phones that will scan bar codes, but I don't understand half of what they're saying, so…..is there an item (that is portable…you don't hook it up to your computer to complete the scanning process) that will do this for us older people who are technically unable to mess around with apps and codes and whatever….and if not, Christmas is coming and boy, wouldn't that be a great thing to sell to us older people?

Moira: There is a reference to beepmystuff.com in this thread, but they are closing the site. They recommend these services as more complete and crucially better supported products. Give them a try: Delicious Library, Library Thing and Shelfworthy

@ Peter (IMC) ewww Symbian is the Linux of phones? how about Symbian is the swamp of eternal despair of phones?
can't install: not signed
can't install: certificate expired
pre-installed PDF reader takes up 30% of the screen for the UI with no full-screen option
ever tried to make a playlist in the audio player? ouch… stone age
70 MB PC sync software that is absolute crap
oh and the occasional OS crashes that suddenly reset everything including the date to 1980 or something?

Was it really that iPhone and Android are so amazing, or was there just such an incredible vacuum that anything would have been good enough to fill the void? Well, I guess Android is pretty amazingly well thought out…

thanks for the light

This is actually going to be really useful for me to scan books at thrift stores to check their prices, to see if I should buy them for resale on eBay/Amazon. Very neat tool!

Anyone had luck with decoding ITF (Interleaved 2 of 5) barcodes using Python scripting? I need to scan some lengths that the scanner does not read. Is there a way to send DecodeHints like "Allowed_Lengths" with Python scripting?

I have started learning Android programming with Python! You can't survive in the future without Android programming!
I get the following error: TypeError: list indices must be integers, not str. I used the following code:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
isbn = int(result['extras']['SCAN_RESULT'])
url = "" % isbn
droid.view(url)

The error comes up at the line isbn = int(result['extras']['SCAN_RESULT']). I'm using a Nexus with Android 2.3.4 (build GRJ22) and I have QR Droid as the barcode scanner app. The QR Droid intent opens when I run the script, but when the intent returns, this error is thrown and the script stops. Thanks, Vidhuran

In PHP:

<?php
require_once("Android.php");
$droid = new Android();
$code = $droid->scanBarcode();
$isbn = $code["result"]->extras->SCAN_RESULT;
$url = "".$isbn;
$droid->startActivity("android.intent.action.VIEW", $url);
?>

I had the same problem, Vidhuran. Switch lines 3 and 4 from your code to:

code = droid.scanBarcode()
isbn = int(code.result['extras']['SCAN_RESULT'])

I solved the problem:

import android
droid = android.Android()
(id, result, error) = droid.scanBarcode()
barcode = (result['extras'])
droid.view('' + barcode['SCAN_RESULT'])
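Reading across the whole thread, the failures come down to the scan result having different shapes depending on the SL4A release and scanner app: `code['result']['SCAN_RESULT']`, `result['extras']['SCAN_RESULT']`, or an `(id, result, error)` tuple. A small defensive helper can normalize all three shapes seen above. This is my own sketch, not from the post; the function name `extract_scan_result` is made up, and the `android` module of course only exists on the device:

```python
def extract_scan_result(response):
    """Return the SCAN_RESULT string from any of the response shapes
    reported in the comment thread."""
    # Newer SL4A releases return an (id, result, error) tuple.
    if isinstance(response, tuple):
        response = response[1]
    # Older/other variants wrap the payload in 'result' and/or 'extras'.
    for key in ('result', 'extras'):
        if isinstance(response, dict) and isinstance(response.get(key), dict):
            response = response[key]
    return response['SCAN_RESULT']

# Hypothetical on-device usage (URL elided, as in the original post):
#   import android
#   droid = android.Android()
#   isbn = extract_scan_result(droid.scanBarcode())
#   droid.view("" % isbn)
```

With this helper, the same script should survive the variations between the G1, the HTC Incredible, and the Nexus reported above, instead of hard-coding one key path.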
https://www.mattcutts.com/blog/android-barcode-scanner/
Pure functions are quite similar to mathematical functions. They are the reason that Haskell is called a pure functional programming language. I compare pure and impure functions in the table. Pure functions have a crucial drawback: they cannot communicate with the outside world, because functions for input and output, functions for building a state, or functions for creating random numbers cannot be pure. The only effect that a pure function can have is, according to Simon Peyton Jones, to warm up the room. Haskell solves this dead end by embedding impure, imperative subsystems in the pure functional language. These imperative subsystems are called monads. I will write more about monads in a few seconds. What is the story of purity in C++? This story is based - similar to immutable data - on the discipline of the programmer. In the following program I present a function, a meta-function, and a constexpr function. All three are pure functions.

 1 // pureFunctions.cpp
 2
 3 #include <iostream>
 4
 5 int powFunc(int m, int n){
 6   if (n == 0) return 1;
 7   return m * powFunc(m, n-1);
 8 }
 9
10 template<int m, int n>
11 struct PowMeta{
12   static int const value = m * PowMeta<m,n-1>::value;
13 };
14
15 template<int m>
16 struct PowMeta<m,0>{
17   static int const value = 1;
18 };
19
20 constexpr int powConst(int m, int n){
21   int r = 1;
22   for(int k=1; k<=n; ++k) r*= m;
23   return r;
24 }
25
26 int main(){
27   std::cout << powFunc(2,10) << std::endl;         // 1024
28   std::cout << PowMeta<2,10>::value << std::endl;  // 1024
29   std::cout << powConst(2,10) << std::endl;        // 1024
30 }

Although the three functions give the same result, they are completely different. powFunc (lines 5 - 8) is a classical function. It will be executed at the run time of the program and can take non-constant arguments. In contrast, PowMeta (lines 10 - 18) is a meta-function that is executed at the compile time of the program. Therefore, PowMeta needs constant expressions as arguments.
The constexpr function powConst (lines 20 - 24) can run at compile time and at run time. To be executed at compile time, powConst needs - like the meta-function PowMeta - constant expressions as arguments. And now to something completely different. In the next section I will introduce monads. I will refer to this first definition of monads in later posts, and I will show you examples of monads in future C++: std::optional in C++17, the ranges library from Eric Niebler in C++20, and the extended futures in C++20. But now, I'm talking about the future concept in C++. Or, to say it in the words of Bartosz Milewski: I See a Monad in Your Future. I strongly encourage you to read the post or watch his talk. Haskell as a pure functional language has only pure functions. The key property of these pure functions is that they will always give the same result when given the same arguments. Thanks to this property, which has the name referential transparency, a Haskell function cannot have side effects. Therefore, Haskell has a conceptual issue. The world is full of calculations that have side effects. These are calculations that can fail, that can return an unknown number of results, or that are dependent on the environment. To solve this conceptual issue, Haskell uses monads and embeds them in the pure functional language. The classical monads each encapsulate one side effect. The concept of the monad is from the category theory. The category theory is a part of mathematics that deals with objects and mappings between the objects. Monads are abstract data types (type classes), which transform simple types into enriched types. Values of these enriched types are called monadic values. Once in a monad, a value can only be transformed by function composition into another monadic value. This composition respects the special structure of a monad. Therefore, the error monad will interrupt its calculation if an error occurs, or the state monad builds its state.
To make this happen, a monad consists of three components: a type constructor, an identity function, and a bind operator. In order for the error monad to become an instance of the type class Monad, the error monad has to support the identity function and the bind operator. Both functions define how the error monad deals with an error in the calculation. If you use the error monad, the error handling is done in the background. A monad consists of two control flows: the explicit control flow for calculating the result and the implicit control flow for dealing with the specific side effect. A few months ago, after I published this post in German, a reader said: Hey, the definition of a monad is quite simple: "A monad is just a monoid in the category of endofunctors." I hope you get it. Pure functional languages have no mutable data. Therefore, they use recursion instead of loops. So you know what the next post will be.

At the last line, did you mean to say, "Pure functional languages have no MUTABLE data"?

Thanks. I will fix it.
https://www.modernescpp.com/index.php/pure-functions
---------------------------------------------------------------------------
Debian Weekly News
Debian Weekly News - September 10th, 2002
---------------------------------------------------------------------------

Welcome to this year's 35th issue of DWN, the weekly newsletter for the Debian community. The most interesting news this week is probably the removal of Qmail from Debian's [1]list server. Thanks to the admin and listmaster team, the [2]server now happily runs Postfix. Additionally, those who own an X-Box may want to run [3]Debian on it. 1. 2. 3.

Placement of PHP Files. Matthew Palmer wondered where [4]libraries and [5]programs for PHP packages should be installed. There is a mini policy in [6]development that will probably document the correct location for PHP extension libraries which are written in PHP. Installing the files into /var/www may end up in the wrong web space; however, installing them into another directory and linking it into the real web space may not [7]work with all web servers. 4. 5. 6. 7.

Handling of Task Packages. Javier Fernández-Sanguino Peña [8]asked how tasks are currently handled in Debian. Joey Hess . 8. 9.

CPU optimized OpenSSL packages? Christoph Martin [10]wondered whether there is an opinion or policy on optimized library versions. Mike Stone [11]added that OpenSSL has processor-specific assembly routines that are selected at compile time, and Christoph [12]explained that optimizing for 80486 instead of 80386 causes a [13]speedup of 2 times and optimizing for sparcv8 instead of sparcv7 even results in a [14]speedup of 8 times. Selecting some optimization at run time would probably be worth it. 10. 11. 12. 13. 14.

Download of non-US illegal in US? Richard Atterer [15]noticed that the [16. 15. 16.

Input from Donald Knuth on TeX License Discussion. David Carlisle found a [17]statement from Donald Knuth on the distribution of modified Computer Modern TeX fonts, which [18]heats up the discussion.
Even though the fonts are placed in the public domain, modified versions should not be named as the original, which would cause a [19]violation of Debian's guidelines if this is required. 17. 18. 19.

Debian Trademark in Spain. Back in May, a person associated with a Spanish training company apparently registered the term [20]Debian as a trademark. Jacobo Tarrio [21]found out that there are three such applications. Ignacio García Fernández [22]added an explanation by the company in question. 20. 21. 22.

Java Policy Discussion. Ola Lundqvist [23]wrote that now that woody is released he would like to propose that the proposed [24]Java Policy be made official. Ola is seeking comment on it and requests a discussion. The proposed policy talks about virtual machines, Java libraries, programs and compilers. 23. 24.

Renaming Boot Script Utilities. Henrique de Moraes Holschuh [25, [26]considers it a waste of time, for no technical benefit. 25. 26.

Monitorless Installation. Mario Lang [27]tries to figure out the best way to integrate accessibility support into the debian-installer. The goal is to allow installation with completely different display types than a normal monitor. This will allow easier installations for the visually impaired. 27.

Graphical Installer? Michael Cardenas [28]released his patch to cdebconf that adds a gtk2.0 frontend. It still required a little bit of work, but others finished it and Tollef Fog Heen already [29]committed it. This is an important step forward in the direction of a graphical installer for Debian. 28. 29.

Bug Reports as a Mailbox. Adam Heath [30]announced that he installed a new CGI program for the [31. 30. 31.

Evaluating Package Integrity. Jérôme Marant [32]reminded developers of a talk Martin Michlmayr gave at [33]Debian Conference 1 on regression testing of packages. Regression tests are tests that are made to ensure that the behaviour of a given program has not changed across releases.
Testing the [34]installation could be done by using [35]pbuilder. Additionally, an existing [36]framework for testing the behaviour of a package is already included in Debian. 32. 33. 34. 35. 36.

On Moving Configuration Files. Joey Hess [37]exhorted that it is the duty of a package, or its maintainer scripts respectively, to deal with moving a configuration file if the file was moved between updates. The [38]policy mentions that the maintainer should check for an upgrade to a version in which the conffile no longer exists, and use debconf to ask the user whether or not they would like the conffile removed. 37. 38.

New DebianEdu Subproject. Raphaël Hertzog [39]announced the birth of the DebianEdu subproject. This subproject aims to make Debian the best distribution available for educational use. He hopes that this subproject will cooperate with similar initiatives like the French [40]Debian Education distribution (French only) and [41]SkoleLinux in Norway. 39. 40. 41.

Technical Review for Debian Securing Manual. Javier Fernández-Sanguino Peña is [42]seeking people for a technical review of the [43]Debian Securing Manual. Some sections require a rewrite, especially the configuration checklist, which no longer reflects the current state of the manual. Also, not all translations are up-to-date. 42. 43.

Changing the Documentation Structure. Rob Bradford [44]proposed to tidy up the way the [45]Debian Documentation Project implements its namespace. Currently there doesn't seem to be consistent [46]content negotiation. 44. 45. 46.

Reviewing Policy. Manoj Srivastava [47]started to review pending bug reports against [48]Debian Policy. He commented on twelve such reports. They cover perl module [49]naming, postscript file [50]requirements, [51]adding the GNU [52]Free Documentation License to the list of free licenses, the [53]menu policy and others. 47. 48. 49. 50. 51. 52. 53.
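The conffile-moving duty mentioned above can be sketched as a small shell fragment of the kind a maintainer script might contain. This is a hypothetical illustration, not code from any Debian package: the package name and paths are made up, and a real preinst would additionally consult dpkg's version comparison and debconf as the policy suggests. For the sake of being runnable anywhere, the sketch works in a scratch directory instead of /etc:

```shell
#!/bin/sh
set -e

# Scratch root so the sketch does not touch the real /etc.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/foo"
echo "option=1" > "$ROOT/etc/foo/foo.conf"   # old location, with local edits

move_conffile() {
    old="$ROOT/etc/foo/foo.conf"   # conffile location in the old version
    new="$ROOT/etc/foo.conf"       # conffile location in the new version
    # Move only if the old file exists and the new one is not yet present,
    # so the user's local changes are carried over exactly once.
    if [ -e "$old" ] && [ ! -e "$new" ]; then
        mv "$old" "$new"
    fi
}

move_conffile
cat "$ROOT/etc/foo.conf"
```

Because the move is guarded, running the fragment a second time is a no-op, which matters since maintainer scripts can be re-invoked after a failed upgrade.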
* [54]Mantis -- Privilege escalation.
* [55]ethereal -- Buffer overflow.
* [56]mhonarc -- Cross-site scripting.
* [57]cacti -- Arbitrary code execution.

New or Noteworthy Packages. The following packages were added to the Debian archive recently or contain important updates.

* [58]aseqview -- ALSA Sequencer Event Viewer.
* [59]avview -- TV viewing and capture software for ATI video cards.
* [60]blackbook -- GTK+ Address Book Applet.
* [61]blackhole-exim -- Spam/virus blocking and general email filtering.
* [62]carpaltunnel -- Configuration helper for OpenVPN.
* [63]eterm-themes -- Themes for Eterm, the Enlightened Terminal Emulator.
* [64]jlint -- A Java program checker.
* [65]keylookup -- A tool to fetch keys from keyservers.
* [66]lpairs -- The classical memory card game.
* [67]mairix -- Indexes and searches email in Maildir and MH formats.
* [68]mp32ogg -- Converts MP3 files to Ogg Vorbis.
* [69]mpeg2dec -- Simple libmpeg2 video decoder application.
* [70]slash -- The code that runs Slashdot.
* [71]statslog -- An IRC channel logger.
* [72]tdfsb -- A 3D filesystem browser.
* [73]terminatorx -- A realtime audio synthesizer.
* [74]totem -- A simple movie player for the GNOME desktop based on xine.
* [75]xdx -- DX-cluster client for amateur radio.

Orphaned Packages. Two packages were orphaned this week and require a new maintainer. This makes a total of 113 orphaned packages. Many thanks to the previous maintainers who contributed to the Free Software community. Please see the [76]WNPP pages for the full list, and please add a note to the bug report and retitle it to ITA: if you plan to take over a package.

* [77]kde-theme-plessky -- Matte family of themes for KDE. ([78]Bug#159406)
* [79]kleandisk -- A file cleanup and backup tool for KDE. ([80]Bug#159405)

Want to continue reading DWN? Please help us create this newsletter.
Currently, it's mostly a one-man show, which cannot be sustained in the long term. We urgently need volunteer writers who prepare items. Please see the [81]contributing page to find out how to help. We're looking forward to receiving your mail at [82]dwn@debian.org.
https://lists.debian.org/debian-news/2002/msg00038.html
The context structure. More... #include <context.h>

The context structure contains two pipes for the async service:
* qq: write queries to the async service pid/tid.
* rr: read results from the async service pid/tid.

Documented members:
* The context has been finalized. This is after config, when the first resolve is done; the modules are inited (module-init()) and shared caches are created.
* List of alloc-cache-id points per threadnum for not-in-use threads. Simply the entire struct alloc_cache, with the 'super' member used to link a singly linked list. Reset the super member to the superalloc before use.
* Tree of outstanding queries, indexed by querynum. Used when results come in for async lookup, used when a cancel is done (for lookup and delete), and used to see if a querynum is free for use. Content is of type ctx_query. Referenced by add_bg_result().
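The querynum bookkeeping described above can be sketched in C++ with an ordered map. Note that this is an illustrative sketch only: QueryTable, QueryState, and the method names are our own assumptions, not Unbound's actual ctx_query API.

```cpp
#include <map>
#include <string>

// Sketch of a table of outstanding queries indexed by querynum.
struct QueryState {
    std::string name;   // the query being resolved (illustrative field)
};

class QueryTable {
    std::map<int, QueryState> queries;  // indexed by querynum
    int next_id = 0;
public:
    // Find a querynum that is free for use and register an outstanding query.
    int add(const std::string& name) {
        while (queries.count(next_id)) ++next_id;
        queries[next_id] = QueryState{name};
        return next_id++;
    }
    // Lookup when a result comes in for an async query.
    QueryState* lookup(int querynum) {
        auto it = queries.find(querynum);
        return it == queries.end() ? nullptr : &it->second;
    }
    // Lookup-and-delete when a query is cancelled.
    bool cancel(int querynum) { return queries.erase(querynum) == 1; }
};
```

An ordered map keeps the "tree of outstanding queries" flavor of the original struct; lookups, cancels, and free-id checks are all O(log n).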
http://www.unbound.net/documentation/doxygen/structub__ctx.html
* I have written a small piece of code for printing: BufferedWriter out = null; try { out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(FileDescriptor.out), "ASCII"), 512); out.write(msg + '\n'); out.flush(); } catch (Unsupport
* This is my first time setting up and writing an Android app, and also my first time setting up an ad. I've been googling and testing for almost 2 days, but I still can't seem to make the ad appear and it still gives me the error "There was a problem g
* I got an old version of an RTD server, which notifies and updates Excel after changing values in my program. I "just" want to bring this code snippet to a newer version. Starting the server and loading the initial data in the Excel sheet works fine.
* I'm having problems with the relation @RelationshipEntity(type = RelTypes.Tag.TAG_ON_OBJECT_EVALUATION) public class TagOnObjectEvaluation { @StartNode private Mashup taggableObject; @EndNode private Tag tag; // Other fields, getters and setters } In bot
* package com.shaun.spring.config; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan("com.shaun.spring") @EnableTransactionManagement public clas
* I created this project with a PhoneGap / Node.js tutorial. I am having a problem with Android Studio: the emulator doesn't start when I try to run it. I created one virtual device with this configuration: Android 5.1.1 - API 22, 512MB RAM, API 22 3.3" WQVGA 240
* This question already has an answer here: How to parse a dynamic JSON key in a Nested JSON result. I have been looking for ways to parse JSON data in Java/Android. Unfortunately, there is no JSON the same as mine. I have JSON data that include we
* Python's collections.deque has a maxlen argument, such that [...] the deque is bounded to the specified maximum length. Once a bounded-length deque is full, when new items are added, a corresponding number of items are discarded from the opposite end
* I haven't done any programming for a while, so I could be missing something obvious here. I am trying to run the following code, which should create an empty JFrame and put it in the center of the screen: public class MainGUI { // This initilizes the
* I need to have SSRS reports deployed to a report server. I am trying to generate reports using Java web services. As an example, I tried to generate one successfully by referring to the tutorial.
* I am working on an NTLM implementation with Java. I am trying to access shared folders on my own machine, but I get the following exception: jcifs.smb.SmbAuthException: Logon failure: unknown user name or bad password. I got the machine name and wor
* I have created a file watcher using org.apache.commons.io.monitor.FileAlterationMonitor. The file changes are captured correctly, but I want to stop the monitor task using a separate method. It's not working. The source code is as below: import java.io
* At the moment I have a single scene with multiple MediaViews, each with their own play and pause buttons, in FXML. I was wondering if there is a way to play/pause whichever MediaView had its button clicked without making a play/pause controller for
* I am building a Spring RESTful service and I have the following method that retrieves a Place object based on a given zipcode: @RequestMapping(value = "/placeByZip", method = RequestMethod.GET) public Place getPlaceByZipcode(@RequestParam(value
* Hello, I want to do a really simple thing: just make a template function for any numbers. I actually want as little as the ability to "add". In C++ it would be really trivial, like this: template <typename T> inline T add (T a, T b) { return a +
* This question already has an answer here: Java: Can't to generic List<? extends Parent> mylist. public static void main(String... args) { List<Child> list1 = new ArrayList<Child>(); method2(list1); } public static void method2(L
* I have a list of objects: List<MyObject> myList; This list is populated in the beginning and no update is done after that. The order of objects in the list is significant because I need to iterate over this list in that order later. During execution
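One of the questions above quotes the maxlen behavior of Python's collections.deque: once the deque is full, appending discards from the opposite end. That semantics is easy to mirror in C++ with std::deque; a minimal sketch (the BoundedDeque class is ours, not a standard type):

```cpp
#include <cstddef>
#include <deque>

// Minimal sketch of Python's collections.deque(maxlen=...) semantics:
// once full, appending on one end discards an item from the opposite end.
template <typename T>
class BoundedDeque {
    std::deque<T> d;
    std::size_t maxlen;
public:
    explicit BoundedDeque(std::size_t maxlen) : maxlen(maxlen) {}
    void push_back(const T& v) {
        if (d.size() == maxlen)
            d.pop_front();          // discard the oldest element
        d.push_back(v);
    }
    std::size_t size() const { return d.size(); }
    const T& front() const { return d.front(); }
    const T& back() const { return d.back(); }
};
```

Pushing 1, 2, 3, 4 into a BoundedDeque of capacity 3 leaves {2, 3, 4}: the oldest element was silently dropped, just as Python's bounded deque does.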
http://www.pcaskme.com/category/java/10/
Welcome! This is the first installment in a series called "Objective Viewpoint" that will teach you about C++ and Java. You can go to an index of the series by clicking on the banner immediately above, or you can follow the tour at the bottom of this document. Enjoy!

An Introduction to C++
by Saveen Reddy and G. Bowden Wise

Welcome to the inaugural edition of the ObjectiveViewPoint column! Here we will touch on many aspects of object-orientation. The word object has surfaced in more ways than you can count. There are OOPLs (Object-Oriented Programming Languages) and OODBs (Object-Oriented Databases), OOA (object-oriented analysis), and OOD (object-oriented design). We are sure you can come up with some OOisms of your own. Our goal in this column is to explore these aspects of object-orientation. Our intended audience consists of humble beginners to seasoned hackers. We assume that you have programmed in at least one procedural language, such as C or Pascal. Even if you are familiar with C++, please stay with us; you may learn some interesting new language features. Also, we will illustrate our points with many self-contained examples that you may later wish to incorporate into your own programs.

C++: A Historical Perspective

We begin our journey of C++ with a little history. C, the predecessor to C++, has become one of the most popular programming languages. Originally designed for systems programming, C enables programmers to write efficient code and provides close access to the machine. C compilers, found on practically every Unix system, are now available with most operating systems. During the 1980s and into the 1990s, an explosive growth in object-oriented technology began with the introduction of the Smalltalk language. Object-Oriented Programming (OOP) began to replace the more traditional structured programming techniques. This explosion led to the development of languages which support programming with objects.
Many new object-oriented programming languages appeared: Object-Pascal, Modula-2, Mesa, Cedar, Neon, Objective-C, LISP with the Common Lisp Object System (CLOS), and, of course, C++. Although many of these languages appeared in the 1980s, many ideas of OOP were taken from Simula-67. Yes! OOP has been around since 1967. C++ extends C with OOP capability. Note that using C++ does not imply that you are doing OOP. C++ does not force you to use its OOP features. You can simply create structured code that uses only C++'s non-OOP features.

C++: A Better C

The designers of C++ wanted to add object-oriented mechanisms without compromising the efficiency and simplicity that made C so popular. One of the driving principles for the language designers was to hide complexity from the programmer, allowing her to concentrate on the problem at hand. Because C++ retains C as a subset, it gains many of the attractive features of the C language, such as efficiency, closeness to the machine, and a variety of built-in types. A number of new features were added to C++ to make the language even more robust, many of which are not used by novice programmers. By introducing these new features here, we hope that you will begin to use them in your own programs early on and gain their benefits. Some of the features we will look at are the role of constants, inline expansion, references, declaration statements, user-defined types, overloading, and the free store. Most of these features can be summarized by two important design goals: strong compiler type checking and a user-extensible language. By enforcing stricter type checking, the C++ compiler makes us acutely aware of data types in our expressions. Stronger type checking is provided through several mechanisms, including function argument type checking, conversions, and a few other features we will examine below. C++ also enables programmers to incorporate new types into the language through the use of classes. A class is a user-defined type.
The compiler can treat new types as if they are one of the built-in types. This is a very powerful feature. In addition, the class provides the mechanism for data abstraction and encapsulation, which are key to object-oriented programming. As we examine some of the new features of C++ we will see these two goals resurface again and again.

A NEW FORM FOR COMMENTS. It is always good practice to provide comments within your code so that it can be read and understood by others. In C, comments were placed between the tokens /* and */ like this:

/* This is a traditional C comment */

C++ supports traditional C comments and also provides an easier comment mechanism, which only requires an initial comment delimiter:

// This is a C++ comment

Everything after the // and to the end of the line is a comment.

THE CONST KEYWORD. In C, constants are often specified in programs using #define. The #define is essentially a macro expansion facility; for example, with the definition:

#define PI 3.14159265358979323846

the preprocessor will substitute 3.14159265358979323846 wherever PI is encountered in the source file. C++ allows any variable to be declared a constant by adding the const keyword to the declaration. For the PI constant above, we would write:

const double PI = 3.14159265358979323846;

A const object may be initialized, but its value may never change. The fact that an object will never change allows the compiler to ensure that constant data is not modified and to generate more efficient code. Since each const element also has an associated type, the compiler can also do more explicit type checking. A very powerful use of const is found when it is combined with pointers. By declaring a ``pointer to const'', the pointer cannot be used to change the pointed-to object. As an example, consider:

int i = 10;
const int *pi = &i;
*pi = 15;   // Not allowed! pi points to a const int!

It is not possible to change the value of i through the pointer because *pi is constant.
A pointer used in this way can be thought of as a read-only pointer; the pointer can be used to read the data to which it points, but the data cannot be changed via the pointer. Read-only pointers are often used by class member functions to return a pointer to private data stored within the class. The pointer allows the user to read, but not change, the private data. Unfortunately, the user can still modify the data pointed at by the read-only pointer by using a type cast. This is called ``casting away the const-ness''. Using the above example, we can still change the value of i like this:

// Cast away the constness of the pi pointer and modify i
*((int*) pi) = 15;

By returning a const pointer we are telling users to keep their hands off of internal data. The data can still be modified, but only with extra work (the type cast). So, in most cases users will realize they are not to modify that data, but can do so at their own risk. There are two ways to add the const keyword to a pointer declaration. Above, when const comes before the *, what the pointer points to is constant. It is not possible to change the variable that is pointed to by the pointer. When const comes after the *, like this:

int i = 10;
int j = 11;
int* const ptr = &i;   // Pointer initialized to point to i

the pointer itself becomes constant. This means that the pointer cannot be changed to point to some other variable after it has been initialized. In the above example, the pointer ptr must always point at the variable i. So, statements such as:

ptr = &j;   // Not allowed, since the pointer is const!

are not allowed and are caught by the compiler.
However, it is possible to modify the variable that the pointer points to:

*ptr = 15;   // This is ok, what is pointed at is not const

If we want to prevent modification of what the pointer points to and prevent the value of the pointer from being changed, we must provide a const on both sides of the * like this:

const int * const ptr = &i;

Remember that adding const to a declaration simply invokes extra compile-time type checking; it does not cause the compiler to generate any extra code. Another advantage of using the const mechanism is that the C++ construct will be available to a symbolic debugger, while the preprocessing symbols generally are not.

INLINE EXPANSION. Another common use of the C #define macro expansion facility is to avoid function call overhead for small functions. Some functions are so small that the overhead of invoking the function call takes more time than the body of the function itself. C++ provides the inline keyword to inform the compiler to place the function inline rather than generate the code for calling the routine. For example, the macro

#define max(x, y) ((x)>(y)?(x):(y))

can be replaced for integers by the C++ inline function

inline int max (int x, int y) { return (x > y ? x : y); }

When a similar function is needed for multiple types, the C++ template mechanism can be used. Macro expansion can lead to notorious results when encountering an expression with side effects, such as

max (f(x), z++);

which, after macro expansion, becomes:

((f(x)) > (z++) ? (f(x)) : (z++));

The variable z will be incremented once or twice, and f(x) may be evaluated a second time, depending on which branch of the conditional is taken. Such errors are avoided when using the inline mechanism. When defining a C++ class, the body of a class member function can also be specified. This code is also treated as inline code provided it does not contain any loops (e.g., while).
For example:

class A {
    int a;
public:
    A() { }                     // inline
    int Value() { return a; }   // inline
};

Since the code for both the constructor A() and the member function Value() are specified as part of the class definition, the code between the braces will be expanded inline whenever these functions are invoked.

REFERENCES. Unlike C, C++ provides true call-by-reference through the use of reference types. A reference is an alias, or a name, for an existing object. References are similar to pointers in that they refer to another object, but a reference must be initialized before it can be used. For example, let's declare an integer:

int n = 10;

and then declare a reference to it:

int& r = n;

Now r is an alias for n; both identify the same object and can be used interchangeably. Hence, the assignment

r = -10;

changes the value of both r and n to -10. It is important to note that initialization and assignment are completely different for references. A reference must have an initializer. Initialization is an operation that applies only to the reference itself. The initialization

int& r = n;

establishes the correspondence between the reference and the data object that it names. Assignment behaves like we expect an operation to, and operates through the reference on the object referred to. The assignment

r = -10;

is the same for references as for any other lvalue, and simply assigns a new value to the designated data object. C programmers know that C uses the call-by-value parameter mechanism. In order to enable functions to modify the values of their parameters, pointers to the parameters must be used as the ``value'' which is passed. For example, a routine Swap(), which swaps its parameters, would be written like this in C:

void Swap (int* a, int* b)
{
    int tmp;
    tmp = *a;
    *a = *b;
    *b = tmp;
}

The routine would be invoked like this:

int x = 1;
int y = 2;
Swap (&x, &y);

C programmers are all too familiar with what happens when one of the ampersands is forgotten; the program usually ends with a core dump!
Now consider the C++ version of Swap(), which makes use of true call-by-reference:

void Swap (int& a, int& b)
{
    int tmp;
    tmp = a;
    a = b;
    b = tmp;
}

The routine would be invoked like this:

int x = 1;
int y = 2;
Swap (x, y);

The compiler ensures that the parameters of Swap() will be passed by reference. In C, a run-time error often results if the value of a parameter is passed instead of its address. References eliminate these errors and are syntactically more pleasing. Another use for references is as return types. Consider this routine:

int& FindByIndex (int* theArray, int index)
{
    return theArray[index];
}

Note that FindByIndex() returns a reference to the element in the array rather than its value. The expression FindByIndex (A, i) yields a reference to the ith element of the array A. Now, because a reference is an lvalue, it can be used on the left-hand side of an expression; we can write:

FindByIndex(A, i) = 25;

which will assign 25 to the ith element of the array A. Note that if FindByIndex() is made inline, the overhead due to the function call is eliminated. Inline functions that return references are attractive for the sake of efficiency.

DECLARATIONS AS STATEMENTS. In a C++ program, a declaration can be placed wherever a statement can appear, which can be anywhere within a program block. Any initializations are done each time their declaration statement is executed. Suppose we are searching a linked list for a certain key:

int IsMember (const int key)
{
    int found = 0;
    if (NotEmpty()) {
        List* ptr = head;           // Declaration
        while (ptr && !found) {
            int item = ptr->data;   // Declaration
            ptr = ptr->next;
            if (item == key)
                found = 1;
        }
    }
    return found;
}

By putting declarations closer to where the variables are used, you write more legible code.

IMPROVED TYPE SYSTEM. Through the use of classes, user-defined types may be created, and if properly defined, C++ will treat them as if they are one of the built-in types: int, char, float, and double.
It is possible to define a Vector type and perform operations such as addition and multiplication just as easily as is done with ints:

// Define some arrays of doubles
double a[3] = { 11, 12, 13 };
double b[3] = { 21, 22, 23 };

// Initialize vectors from the double arrays
Vector v1 = a;
Vector v2 = b;

// Add the two vectors.
Vector v3 = v1 + v2;

The Vector class has been defined with all of the appropriate arithmetic operations so that it can be treated as a built-in type. It is even possible to define conversion operators so that when we convert the Vector to a double, we get the magnitude, or norm, of the Vector:

double norm = (double) v3;

OVERLOADING. One of the many strengths of C++ is the ability to overload functions and operators. By overloading, the same function name or operator symbol can be given several different definitions. The number and types of the arguments supplied to a function or operator tell the compiler which definition to use. Overloading is most often used to provide different definitions for member functions of a class. But overloading can also be used for functions that are not a member of any class. Suppose we need to search different types of arrays for a certain value. We can provide implementations for searching arrays of integers, floats, and doubles:

int Search ( const int* data, const int key);
int Search ( const float* data, const float key);
int Search ( const double* data, const double key);

The compiler will ensure that the correct function is called based on the types of the arguments passed to Search(). When arguments do not exactly match the formal parameter types, the compiler will perform implicit type conversions (e.g., int to float) in an attempt to find a match. Overloading is most often used for member functions and operators of classes. Most classes have overloaded constructors, for there is often more than one way to create a given object.
All of the built-in types also have operators such as addition, subtraction, multiplication, and division. In fact, we can mix different types and still add them together:

int i = 1;
char c = 'a';
float f = -1.0;
double d = 100.0;
int result = i + c + f + d;

The compiler applies the type conversions appropriate for the above calculation. When we define our own types, we can inform the compiler which operations and type conversions can be applied to our type. The compiler will allow our type to blend in with the built-in types. We will see more examples of this when we look at classes in detail.

A FREE STORE IS PROVIDED. In C, variables are placed in the free store by using the sizeof() macro to determine the needed allocation size and then calling malloc() with that size. Variables are removed from the free store by calling free(). With classes, using malloc() and free() becomes tedious. C++ provides the operators new and delete, which can allocate not only built-in types but also user-defined types. This provides a uniform mechanism for allocating and deallocating memory from the free store. For example, to allocate an integer:

int *pi;
pi = new int;
*pi = 1;

and to allocate an array of 10 ints:

int *array = new int [10];
for (int i = 0; i < 10; i++)
    array[i] = i;

Just as with malloc(), the memory returned by new is not initialized; only static memory has a default initial value of zero. Suppose we have defined a type for complex numbers, called complex. We can dynamically allocate a complex number as follows:

complex* pc = new complex (1, 2);

In this case, the complex pointer pc will point to the complex number 1 + 2i. All memory allocated using new should be deallocated using delete. However, delete takes on different forms depending on whether the variable being deleted is an array or a simple variable. For the complex number above, we simply call delete:

delete pc;

Delete calls the destructor for the object to be deleted.
However, to delete each element of an array, you must explicitly inform delete that an array is to be deleted:

delete [] array;

The C++ compiler maintains information about the size and number of objects in an array and retrieves this information when deleting an array. The empty bracket pair informs the compiler to call the class destructor for each element in the array. Be careful: attempting to delete a pointer that has not been initialized by new results in undefined program behavior. However, it is safe to apply the delete operator to a null pointer. New and delete are global C++ operators and can be redefined (e.g., if it is desirable to trap every memory allocation). This is useful in debugging, but is not recommended for general programming. More often, the operators new and delete are overridden by providing new and delete operators for a specific class. When C++ allocates memory for a user-defined class, the new operator for that class is used if it exists; otherwise the global new is used. Most often, programmers define new for certain classes to achieve improved memory management (e.g., reference counting for a class).

The Class: Data Encapsulation, Data Hiding, and Objects

Like a C structure, a C++ class is a data type. An object is simply an instantiation of a class. C++ classes have additional capabilities, as the following example should show:

Vector v1(1,2), v2(2,3), vr;
vr = v1 + v2;

Vector is a class. v1, v2, and vr are objects of class Vector. v1 and v2 are given initial values through their constructor. vr is also initialized through its constructor to certain default values. The example illustrates a major strength of C++: namely, we can define functions on a class as well as data members.
Here, we have an overloaded addition operator which makes our expression involving Vectors seem much more natural than the equivalent C code:

Vector v1, v2, vr;
add_vector( &vr, &v1, &v2 );

The ability to define these member functions allows us to have a constructor for Vector: code that creates an object of class Vector. The constructor ensures proper initialization of our Vectors. Though not illustrated in the above example, a class can limit the use of its data members and member functions by non-member code. This is encapsulation. If class K defines member M as private, then only members of class K can use M. Defining M as public means any other class or function can use M. Let's take a look at a trivial implementation of Vector that will show us a little about constructors, operators, and references.

#include <iostream.h>

class Vector {
public:
    Vector(double new_x = 0.0, double new_y = 0.0)
    {
        if ((new_x < 100.0) && (new_y < 100.0)) {
            x = new_x;
            y = new_y;
        } else {
            x = 0;
            y = 0;
        }
    }

    Vector operator + (const Vector& v)
    {
        return (Vector (x + v.x, y + v.y));
    }

    void PrintOn (ostream& os)
    {
        os << "[" << x << ", " << y << "]";
    }

private:
    double x, y;
};

int main()
{
    Vector v1, v2, v3(0.0, 0.0);
    v1 = Vector(1.1, 2.2);
    v2 = Vector(1.1, 2.2);
    v3 = v1 + v2;
    cout << "v1 is "; v1.PrintOn (cout); cout << endl;
    cout << "v2 is "; v2.PrintOn (cout); cout << endl;
    cout << "v3 is "; v3.PrintOn (cout); cout << endl;
}

Encapsulation of x and y means that they cannot be altered without the help of specific member functions. Any member function or data member of Vector can use x and y freely. For everyone else, the member functions provide a strict interface. They ensure a particular behavior in our objects. In the example above, no Vector can be created that has an x or y component that exceeds 100. If at some point code tries to do this, then the constructor performs bounds-checking and sets x and y both to zero.
In a normal C structure we can simply do the following:

Vector v1;
InitVector( &v1, 99, 99 );
v1.x = 1000;

InitVector() closely approximates a C++ constructor. Assume it tries to behave like the constructor Vector() in the example above. This C code demonstrates how, without encapsulation, we can easily violate the rules set up in our pseudo-constructor. With class Vector, both x and y are private. As a result, they can only be accessed by member functions. If our goal is to prevent x and y from exceeding 100, we simply have all accessor functions perform bounds-checking. In fact, once created, and outside of the addition operation, there is no way to modify x or y. They are private, and no member function outside of the constructor sets their values. Notice how the constructor Vector() limits our Vector component values. By returning a new object, the addition operator uses the constructor to check for overflow. We could have made `+' do multiplication instead. Though such manipulation is atypical, it can be quite useful. For example, C++ comes standard with a streams library which uses the << operator to provide output. There is one useful thing about the addition operator: we don't have to pass the addresses of arguments. The arguments for the addition operator are declared as references (using the reference operator &, which is not the same as the address-of operator &). Recall that a reference parameter allows us to use the same calling syntax as call-by-value and yet modify the value of an argument. Passing by reference also avoids the overhead of copying the argument object, so we can avoid a lot of indirection. However, the most powerful OOP extension C++ provides is probably inheritance. Classes can inherit data and functions from other classes. For this purpose we can declare members in the base class as protected: not usable publicly, but usable by derived classes. In conclusion, we looked at some of the features that make C++ a better C.
C++ provides stronger type checking by checking arguments to functions, and it reduces syntactic errors through the use of reference types. Programmers can also add new types to the language by defining classes. Although we have only taken a brief look at classes, we will see more in-depth discussion of C++ object-orientation, as well as general OOP concepts, in upcoming columns.

Crossroads 1.1, September 1994

Want to learn more about C++? You can go to an index or the next installment of this series.
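A footnote on the protected access level mentioned near the end of the article: derived classes may use a base class's protected members, while outside code cannot. A minimal sketch of our own (these classes do not appear in the article):

```cpp
class Shape {
protected:
    double x, y;   // usable by derived classes, but not by outside code
public:
    Shape(double new_x, double new_y) : x(new_x), y(new_y) {}
};

class Circle : public Shape {
    double radius;
public:
    Circle(double new_x, double new_y, double r)
        : Shape(new_x, new_y), radius(r) {}

    // A derived class may freely read the protected members it inherits.
    double DistanceFromOriginSquared() const { return x * x + y * y; }
    double Radius() const { return radius; }
};

// Outside code cannot touch x or y directly:
//     Circle c(3, 4, 1);
//     c.x = 5;   // compile error: 'x' is protected
```

As with private members, the compiler enforces this at compile time, so the base class retains control over its data for everyone except its own descendants.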
http://www.acm.org/crossroads/xrds1-1/ovp.html
I have been working on my game for a month, and this problem just occurred. I did google this, but I can't solve it. It does not occur when I try to build an empty scene in another project, only in my current project. I need help. This is the error in the console (I don't get much of it):

Error building Player: CommandInvokationFailure: Failed to re-package resources. See the Console for details.
C:\adt-bundle-windows-x86_64-20140702\sdk\build-tools\android-4.4W\aapt.exe package --auto-add-overlay -v -f -m -J gen -M AndroidManifest.xml -S "res" -I "C:/adt-bundle-windows-x86_64-20140702/sdk/platforms/android-20\android.jar" -F bin/resources.ap_
stderr[
AndroidManifest.xml:9: error: Error: No resource found that matches the given name (at 'value' with value '@integer/google_play_services_version').
]
stdout[
Configurations: (default) hdpi ldpi xhdpi xxhdpi
Files: Type values values\strings.xml Src: () res\values\strings.xml
Including resources from package: C:\adt-bundle-windows-x86_64-20140702\sdk\platforms\android-20
Processing image: res\drawable\app_icon.png
Processing image: res\drawable-hdpi\app_icon.png
Processing image: res\drawable-ldpi\app_icon.png
Processing image: res\drawable-xhdpi\app_icon.png
(processed image res\drawable-ldpi\app_icon.png: 114% size of source)
Processing image: res\drawable-xxhdpi\app_icon.png
(processed image res\drawable\app_icon.png: 104% size of source)
(processed image res\drawable-hdpi\app_icon.png: 96% size of source)
(processed image res\drawable-xhdpi\app_icon.png: 94% size of source)
(processed image res\drawable-xxhdpi\app_icon.png: 97% size of source)
(new resource id app_icon from drawable\app_icon.png #generated)
(new resource id app_icon from hdpi\drawable\app_icon.png #generated)
(new resource id app_icon from ldpi\drawable\app_icon.png #generated)
(new resource id app_icon from xhdpi\drawable\app_icon.png #generated)
(new resource id app_icon from xxhdpi\drawable\app_icon.png #generated)
]

Do you have a custom
AndroidManifest.xml in your project? Nope, nothing like that. I didn't touch anything to do with SDKs. Can you please list your Assets/Plugins/Android folder, if there's anything there? There was an AndroidManifest.xml and a unity-plugin-library.jar file. I just deleted the xml file and it worked. I never made that; I don't know how it got there. Thanks :) and sorry, I'm very new to Android development. Thanks a lot, it works just by deleting it. :)

Answer by liortal · Oct 20, 2014 at 01:17 PM

The issue is that you have an AndroidManifest.xml file (probably under Plugins/Android, or a plugin in a subfolder of this folder) that tries to access a resource by name (google_play_services_version), but this value is never defined anywhere. Usually this value is added via some method such as adding it to the AndroidManifest.xml or to an xml under res/values. For example, see the documentation here (under "Add the Google Play services version to your app's manifest").

NOTE: Sometimes it can get a bit tricky to find out why the build fails. If all else fails, I recommend you check out this link - it is a professional service to help fix Android related build issues (due to manifest merging, conflicting plugins, etc). Check it out if you're unable to resolve your issues!

Sorry that I hadn't said that liortal's answer was correct. And yours is too. Thanks to both of you. I implemented the fix posted by liortal... it sounds good but... I now get this error (from the fix code): Error: No resource found that matches the given name (at 'value' with value '@integer/google_play_services_version'). What does this mean? Can Unity not find the Google Play services libraries or something?

Answer by khayamgondal · Apr 07, 2015 at 04:28 AM

To fix this error, you have to copy the version.xml file from android-sdk/extras/google/google_play_services/libproject/google-play-services_lib/res/values/ into Assets/Plugins/Android/res/values/ of your Unity project's folder.
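For reference, the version.xml file being copied defines a single integer resource, roughly like the fragment below (the integer here is only a placeholder — the real value is the Play services revision shipped with your SDK, so copy the file rather than typing it in):

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Placeholder value: use the version.xml from your own SDK copy -->
    <integer name="google_play_services_version">6587000</integer>
</resources>
```

Once this resource is visible under Assets/Plugins/Android/res/values/, the @integer/google_play_services_version reference in the manifest can be resolved by aapt.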
There isn't a google folder in my sdk/extras, only an android folder. Sorry for my bad English, I am Chinese. So, what happened? I didn't have a res folder in google-play-services_lib, but there were some res folders inside OTHER folders inside google-play-services_lib. I pasted that version.xml file into each of them and it solved the problem :D I did it (created a folder named "res" and inside it a folder named "values") but it didn't work... wow, this magically worked, thank you. I have the same problem as you all have. But I tried all the options possible and I can't Build and Run the app. I can't find the version.xml file, and I've deleted the AndroidManifest.xml. If there is something I could do and you know how, it would be of great help.

CommandInvokationFailure: Failed to re-package resources. See the Console for details.
C:\Users\Bautista\AppData\Local\Android\sdk\build-tools\24.0.0\aapt.exe package --auto-add-overlay -v -f -m -J gen -M AndroidManifest.xml -S "res" -I "C:/Users/Bautista/AppData/Local/Android/sdk\platforms\android-24\android.jar" -F bin/resources.ap_
stderr[
]
stdout[
]
UnityEditor.Android.Command.Run (System.Diagnostics.ProcessStartInfo psi, UnityEditor.Android.WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg)
UnityEditor.Android.PostProcessAndroidPlayer.Exec (System.String command, System.String args, System.String workingdir, System.String[] progress_strings, Single progress_value, System.String errorMsg)
UnityEditor.Android.PostProcessAndroidPlayer.CompileResources (System.String stagingArea, System.String packageName, UnityEditor.Android.AndroidLibraries androidLibraries)
UnityEditor.Android.PostProcessAndroidPlayer.PostProcessInternal (System.String stagingAreaData, System.String stagingArea, System.String playerPackage, System.String installPath, System.String companyName, System.String productName, BuildOptions options, UnityEditor.RuntimeClassRegistry usedClassRegistry).HostView:OnGUI()

This issue seems to be
related to the build-tools version. I had the same issue today after Android Studio upgraded build-tools to 24.0.0. Downgrading back to 23.0.2 made it work again. I had the exact same issue, and downgrading back to 23.0.2 worked for me. Thank you!

Answer by pophead2 · Jun 22, 2016 at 08:36 PM

Hi! I had this issue when I was trying to push my project to Android. Here is the fix (found on reddit) that worked for me. 1) Find and open your SDK Manager.exe in the android-sdk folder. 2) Under tools, have Android SDK Tools 25.1.7 and Android SDK Platform-tools 24 installed. 3) Uninstall Android SDK Build-tools 24 and install 23.0.1 instead. Done.

Thanks a lot! This indeed works! And by the way: if someone encounters the "URL Not Found" issue in the SDK Manager while installing 23.0.1 on Win10, you can try running "SDK Manager.exe" as administrator. If this doesn't work, you can still try running "C:\Program Files (x86)\Android\android-sdk\tools\android.bat" as administrator on Win 10; this worked for me. I logged in just to upvote this and thank you. I have been going through every possible solution for hours now and this finally worked. Thanks a bunch! I did it but I have the same error with 23.0.1. Please help me! CommandInvokationFailure: Failed to re-package resources. See the Console for details. /Users/yusufguven/Documents/Android/sdk/build-tools/23.0.1/aapt package --auto-add-overlay Hard to tell since this is not the full log. I can assist with fixing Android related issues, see here: I did, please help. Awesome! Worked for me too! This also solved the problem I was having.

Answer by chaurasiyapawan · Oct 18, 2014 at 05:08 AM

I found a better solution: copy the google-play-services_lib folder to Plugins/Android, so you don't have to hard-code Google Play Services' value. This worked great; just want to underline that it HAS to go in the Plugins/Android folder (got me once hah).
I don't have the google-play-services_lib folder and I can't find where Plugins/Android is! Help me on what steps I should take to fix the error!

Answer by mubasherikram · Mar 31, 2015 at 03:08 PM

RESOLVED!!! I was facing this issue while integrating AdMob and ChartBoost via Prime[31]. The issue was resolved by the following steps: delete all plugin directories; import the first plugin; import the second plugin, but remember to uncheck the google-play-services_lib folder. It's done.

Thanks a ton mubasherikram (how to pronounce :p), this helped me a lot. Same here, if you do it right it works... THANKS!!! Thanks a lot! Otherwise, my steps were: close the In-App Purchase service, remove the Plugins folder, install the AdMob package, install the ChartBoost package, enable the In-App Purchase service. What kind of plugins? I don't have anything. I am SO confused! Julian, things might have changed since this thread started. I also didn't find those folders. Thing is, with the new Android SDK, things might be done in a different way; check newer posts for a different approach.
08 November 2010 08:02 [Source: ICIS news]

SINGAPORE (ICIS)--Oil and gas giant Shell announced on Monday its plan to sell its 29.18% stake in Woodside Petroleum Ltd for A$3.31bn ($3.28bn) as part of its strategy to refocus its business in

The stake up for sale comprised 78.24m shares at A$42.23 each, leaving Shell with a 24.27% interest in the Australian firm held by subsidiary Shell Energy Holdings Australia Ltd (SEHAL), the company said.

UBS was tapped as the underwriter for the share sale of Woodside, which has a 930,000 tonne/year liquefied natural gas (LNG) plant in

“With Shell’s recent portfolio progress in

“We will manage our remaining position in Woodside over time in the context of our global portfolio,” he added.

SEHAL has a commitment to hold the remaining stake in Woodside for at least a year, with limited exceptions, Shell said. The exceptions included “a sale to a strategic third party of an interest greater than 3% in Woodside provided the purchaser agrees to be bound by the same escrow restrictions to which SEHAL is subject or in pursuit of an acceptance to a bona fide takeover offer for Woodside,” the company said.

Shell said it hoped to keep Woodside as a partner on growth projects, while it expands its directly-owned LNG assets in

“Shell’s directly-owned Australia LNG capacity is around 2.7m tonnes/year today, and is forecast to more than double to some 6.5m tonnes/year by 2015, as Gorgon comes on line,” said Shell Australia country chair Ann Pickard. The oil and gas giant owns a quarter of the 15m tonne/year Gorgon LNG project.

“Our directly-owned assets in

As of end-2009, Shell’s global LNG capacity was at 18.5m tonnes/year, with interests in seven LNG plants.

($1 =
On Thursday 10 June 2010 23:38:15, Martin Drautzburg wrote:

Here, we define (<*>) for the type

  (<*>) :: (Named (a -> b)) -> (Named a) -> (Named b)

(redundant parentheses against ambiguity errors). A 'Named' thing is a thing together with a name. So how do we apply a function with a name to an argument with a name? What we get is a value with a name. The value is of course the function applied to the argument, ignoring names. The name of the result is the textual representation of the function application, e.g.

  Named "sin" sin <*> Named "pi" pi  ~>  Named "sin(pi)" 1.2246063538223773e-16

(<*>) is application of named functions to named values, or 'lifting function application to named things'.

Upgrade. We're at 6.12 now! Lots of improvements. permutations was added in 6.10, IIRC.

> "base", whatever that is.
>
> guard $ namedPure 42 == f' <*> g' <*> h' <*> namedPure 42

Ah, the 42 needs namedPure. Simplest way, it could be

  Named "answer to Life, the Universe and Everything" 42

Again this <*> operator... I believe the whole thing is using a List Monad.

> return $ show f' ++ " . " ++ show g' ++ " . " ++ show h'
>
> I wonder if the thing returns just one string or a list of strings. I

A list, one string for every permutation satisfying the condition.

> guess "return" cannot return anything more unwrapped than a List, so it
> must be a List. But does it contain just the first match or all of them?

All of them! And how many brackets are around them?

  do x <- list
     guard (condition x)
     return (f x)

is syntactic sugar for

  concat (map (\x -> if condition x then [f x] else []) list)
I still get the problem, but I've been able to use the compile-config.xml to simplify the build.xml; it was enough to add, outside the compiler tag:

<include-namespaces>
  <uri></uri>
</include-namespaces>

I guess it's missing in the Apache lib too; that's the reason why it does not compile what's in the manifest file and run without errors:

<include-namespaces>
  <uri></uri>
</include-namespaces>

- Fred

-----Original Message-----
From: Frédéric THOMAS
Sent: Sunday, December 16, 2012 2:44 AM
To: flex-dev@incubator.apache.org
Subject: Re: New Spark components

@Justin,

The compile target of the Apache build.xml is simpler because it delegates the detail of the config to compile-config.xml. I tried to do so, but I ran into the problem that the manifest.xml wasn't taken into account, and I had to add all the classes manually to ExperimentalClasses.as to solve this issue, and got to the same point I'm at the moment with the <s:SolidColorStroke>.

BTW, if you look at the catalog.xml of the apache.swc, there's nothing inside relative to the exposed classes, so I'm not sure the swc exposes any classes.

- Fred

-----Original Message-----
From: Justin Mclean
Sent: Sunday, December 16, 2012 2:16 AM
To: flex-dev@incubator.apache.org
Subject: Re: New Spark components

Hi,

The compile target is looking a bit more complex than needed; for instance, if you look at the Apache build.xml one (which relies on mx classes), it's much simpler. Is it perhaps just the order in which compile targets are compiled?

Thanks,
Justin
Calculating RPA correlation energies

The Random Phase Approximation (RPA) can be used to derive a non-local expression for the ground state correlation energy. The calculation requires a large number of unoccupied bands and is significantly heavier than a standard DFT calculation using semi-local exchange-correlation functionals. However, when combined with exact exchange, the method has been shown to give a good description of van der Waals interactions and a decent description of covalent bonds (slightly worse than PBE). For more details on the theory and implementation we refer to RPA correlation energy. Below we give examples on how to calculate the RPA atomization energy of \(N_2\) and the correlation energy of graphene on a Co(0001) surface. Note that some of the calculations in this tutorial will need a lot of CPU time and are essentially not possible without a supercomputer.

Example 1: Atomization energy of N2

The atomization energy of \(N_2\) is overestimated by typical GGA functionals, and the RPA functional seems to do a bit better. This is not a general trend for small molecules, however; typically the HF-RPA approach yields too small atomization energies when evaluated at the GGA equilibrium geometry. See for example Furche 1 for a table of atomization energies for small molecules calculated with the RPA functional.

Ground state calculation

First we set up a ground state calculation with lots of unoccupied bands.
This is done with the script:

from __future__ import print_function
from ase.optimize import BFGS
from ase.build import molecule
from ase.parallel import paropen
from gpaw import GPAW, PW
from gpaw.xc.exx import EXX

# N
N = molecule('N')
N.cell = (6, 6, 7)
N.center()
calc = GPAW(mode=PW(600, force_complex_dtype=True),
            nbands=16,
            maxiter=300,
            xc='PBE',
            hund=True,
            txt='N_pbe.txt',
            parallel={'domain': 1},
            convergence={'density': 1.e-6})
N.calc = calc
E1_pbe = N.get_potential_energy()
calc.write('N.gpw', mode='all')

exx = EXX('N.gpw', txt='N_exx.txt')
exx.calculate()
E1_hf = exx.get_total_energy()

calc.diagonalize_full_hamiltonian(nbands=4800)
calc.write('N.gpw', mode='all')

# N2
N2 = molecule('N2')
N2.cell = (6, 6, 7)
N2.center()
calc = GPAW(mode=PW(600, force_complex_dtype=True),
            nbands=16,
            maxiter=300,
            xc='PBE',
            txt='N2_pbe.txt',
            parallel={'domain': 1},
            convergence={'density': 1.e-6})
N2.calc = calc
dyn = BFGS(N2)
dyn.run(fmax=0.05)
E2_pbe = N2.get_potential_energy()
calc.write('N2.gpw', mode='all')

exx = EXX('N2.gpw', txt='N2_exx.txt')
exx.calculate()
E2_hf = exx.get_total_energy()

with paropen('PBE_HF.dat', 'w') as fd:
    print('PBE: ', E2_pbe - 2 * E1_pbe, file=fd)
    print('HF: ', E2_hf - 2 * E1_hf, file=fd)

calc.diagonalize_full_hamiltonian(nbands=4800)
calc.write('N2.gpw', mode='all')

which takes on the order of 3-4 CPU hours. The script generates N.gpw and N2.gpw, which are the input to the RPA calculation. The PBE and non-selfconsistent Hartree-Fock energies are also calculated and written to the file PBE_HF.dat.

Converging the frequency integration

We will start by making a single RPA calculation with extremely fine frequency sampling. The following script returns the integrand at 2000 frequency points from 0 to 1000 eV, using a cutoff of 50 eV.
from __future__ import print_function
from ase.parallel import paropen
from gpaw.xc.rpa import RPACorrelation
import numpy as np

dw = 0.5
frequencies = np.array([dw * i for i in range(2000)])
weights = len(frequencies) * [dw]
weights[0] /= 2
weights[-1] /= 2
weights = np.array(weights)

rpa = RPACorrelation('N2.gpw',
                     txt='frequency_equidistant.txt',
                     frequencies=frequencies,
                     weights=weights)
Es = rpa.calculate(ecut=[50])
Es_w = rpa.E_w

with paropen('frequency_equidistant.dat', 'w') as fd:
    for w, E in zip(frequencies, Es_w):
        print(w, E.real, file=fd)

The correlation energy is obtained as the integral of this function divided by \(2\pi\) and yields -6.62 eV. The frequency sampling is dense enough so that this value can be regarded as “exact” (but not converged with respect to cutoff energy, of course). We can now test the Gauss-Legendre integration method with different numbers of points using the same script, but now specifying the Gauss-Legendre parameters instead of a frequency list:

rpa = RPACorrelation(calc,
                     nfrequencies=16,
                     frequency_max=800.0,
                     frequency_scale=2.0)

These are the default parameters for Gauss-Legendre integration. The nfrequencies keyword specifies the number of points, the frequency_max keyword sets the value of the highest frequency (but the integration is always an approximation for the infinite integral), and the frequency_scale keyword determines how densely the frequencies are sampled close to \(\omega=0\). The integrals for different numbers of Gauss-Legendre points are shown below, as well as the integrand evaluated at the fine equidistant frequency grid. It is seen that using the default value of 16 frequency points gives a result which is very well converged (to 0.1 meV). Below we will simply use the default values, although we could perhaps use 8 points instead of 16, which would halve the total CPU time for these calculations.
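Why so few Gauss-Legendre points suffice can be illustrated on a toy problem. The sketch below is not GPAW code: a Lorentzian stands in for the (smooth, decaying) RPA integrand, and a simple rational map from [-1, 1] to [0, ∞) stands in for GPAW's internal frequency transform, with s playing a role analogous to the frequency_scale knob. It compares a 16-point mapped Gauss-Legendre rule with the 2000-point equidistant reference against the known value of the infinite integral:

```python
import numpy as np

def integrand(w):
    # Lorentzian model of the smooth, decaying RPA integrand
    return 1.0 / (1.0 + w**2)

exact = np.pi / 2  # integral of the model from 0 to infinity

# Equidistant trapezoidal rule, mirroring the 2000-point reference run
dw = 0.5
w_eq = dw * np.arange(2000)
weights = np.full(2000, dw)
weights[0] /= 2
weights[-1] /= 2
trapz = np.sum(weights * integrand(w_eq))

# 16-point Gauss-Legendre rule mapped to [0, inf) via w = s*(1+x)/(1-x)
x, wgt = np.polynomial.legendre.leggauss(16)
s = 2.0
w_gl = s * (1 + x) / (1 - x)
jacobian = 2 * s / (1 - x)**2
gauss = np.sum(wgt * jacobian * integrand(w_gl))

print('trapezoid error:', abs(trapz - exact))
print('Gauss-Legendre error:', abs(gauss - exact))
```

On this model the 16-point mapped rule is already far more accurate than the 2000-point equidistant reference (whose error is dominated by the truncated tail beyond 1000), which is why the default settings suffice in practice.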
In this particular case the result is not very sensitive to the frequency scale, but if there is a non-vanishing density of states near the Fermi level, there may be much more structure in the integrand near \(\omega=0\), and it is important to sample this region well. It should of course be remembered that these values are not converged with respect to the number of unoccupied bands and plane waves.

Extrapolating to infinite number of bands

To calculate the atomization energy we need to obtain the correlation energy as a function of cutoff energy and extrapolate to infinity, as explained in RPA correlation energy 2. This is accomplished with the script:
The result is written to rpa_N2.dat and can be visualized with the script: # Creates: extrapolate.png from ase.utils.extrapolate import extrapolate import numpy as np import matplotlib.pyplot as plt a = np.loadtxt('rpa_N2.dat') ext, A, B, sigma = extrapolate(a[:,0], a[:,1], reg=3, plot=False) plt.plot(a[:, 0]**(-1.5), a[:, 1], 'o', label='Calculated points') es = np.array([e for e in a[:, 0]] + [10000]) plt.plot(es**(-1.5), A + B * es**(-1.5), '--', label='Linear regression') t = [int(a[i, 0]) for i in range(len(a))] plt.xticks(a[:, 0]**(-1.5), t, fontsize=12) plt.axis([0., 150**(-1.5), None, -4.]) plt.xlabel('Cutoff energy [eV]', fontsize=18) plt.ylabel('RPA correlation energy [eV]', fontsize=18) plt.legend(loc='lower right') #show() plt.savefig('extrapolate.png') The figure is shown below Note that the extrapolate function can also be used to visualize the result by setting plot=True. The power law scaling is seen to be very good at the last three points and the extrapolated results is obtained using linear regression on the last three points (reg=3). We find an extrapolated value of -4.94 eV for the correlation part of the atomization energy. The results are summarized below (all values in eV) It should be noted that in general, the accuracy of RPA is comparable to (or worse) that of PBE calculations and N2 is just a special case where RPA performs better than PBE. The major advantage of RPA is the non-locality, which results in a good description of van der Waals forces. The true power of RPA thus only comes into play for systems where dispersive interactions dominate. Example 2: Adsorption of graphene on metal surfaces¶ As an example where dispersive interactions are known to play a prominent role, we consider the case of graphene adsorbed on a Co(0001) surface 3 and 4. 
First, the input .gpw files are generated with the following script: from __future__ import print_function import numpy as np from ase.dft.kpoints import monkhorst_pack from ase.parallel import paropen from ase.build import hcp0001, add_adsorbate from gpaw import GPAW, PW, FermiDirac, MixerSum from gpaw.xc.exx import EXX kpts = monkhorst_pack((16, 16, 1)) kpts += np.array([1/32., 1/32., 0]) a = 2.51 # Lattice parameter of Co slab = hcp0001('Co', a=a, c=4.07, size=(1,1,4)) pos = slab.get_positions() cell = slab.get_cell() cell[2,2] = 20. + pos[-1, 2] slab.set_cell(cell) slab.set_initial_magnetic_moments([0.7, 0.7, 0.7, 0.7]) ds = [1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75, 4.0, 5.0, 6.0, 10.0] for d in ds: pos = slab.get_positions() add_adsorbate(slab, 'C', d, position=(pos[3,0], pos[3,1])) add_adsorbate(slab, 'C', d, position=(cell[0,0]/3 + cell[1,0]/3, cell[0,1]/3 + cell[1,1]/3)) #view(slab) calc = GPAW(xc='PBE', eigensolver='cg', mode=PW(600), kpts=kpts, occupations=FermiDirac(width=0.01), mixer=MixerSum(beta=0.1, nmaxold=5, weight=50.0), convergence={'density': 1.e-6}, maxiter=300, parallel={'domain': 1, 'band': 1}, txt='gs_%s.txt' % d) slab.set_calculator(calc) E = slab.get_potential_energy() exx = EXX(calc, txt='exx_%s.txt' % d) exx.calculate() E_hf = exx.get_total_energy() calc.diagonalize_full_hamiltonian() calc.write('gs_%s.gpw' % d, mode='all') f = paropen('hf_acdf.dat', 'a') print(d, E, E_hf, file=f) f.close() del slab[-2:] Note that besides diagonalizing the full Hamiltonian for each distance, the script calculates the EXX energy at the self-consistent PBE orbitals and writes the result to a file. It should also be noted that the k-point grid is centered at the Gamma point, which makes the q-point reduction in the RPA calculation much more efficient. In general, RPA and EXX is more sensitive to Fermi smearing than semi-local functionals and we have set the smearing to 0.01 eV. 
Due to the long range nature of the van der Waals interactions, a lot of vacuum have been included above the slab. The calculation should be parallelized over spin and irreducible k-points. The RPA calculations are done with the following script from ase.parallel import paropen from gpaw.xc.rpa import RPACorrelation ds = [1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75, 4.0, 5.0, 6.0, 10.0] ecut = 200 for d in ds: rpa = RPACorrelation('gs_%s.gpw' % d, txt='rpa_%s_%s.txt' % (ecut, d)) E_rpa = rpa.calculate(ecut=[ecut], frequency_scale=2.5, skip_gamma=True, filename='restart_%s_%s.txt' % (ecut, d)) f = paropen('rpa_%s.dat' % ecut, 'a') print(d, E_rpa, file=f) f.close() The calculations are rather time consuming (~ 1000 CPU hours per distance point), but can be parallelized very efficiently over bands, k-points (default) and frequencies (needs to be specified). Here we have changed the frequency scale from the default value of 2.0 to 2.5 to increase the density of frequency points near the origin. We also specify that the Gamma point (in q) should not be included since the optical limit becomes unstable for systems with high degeneracy near the Fermi level. The restart file contains the contributions from different q-points, which is read if a calculation needs to be restarted. In principle, the calculations should be performed for a range of cutoff energies and extrapolated to infinity as in the example above. However, energy differences between systems with similar electronic structure converges much faster than absolute correlation energies and a reasonably converged potential energy surface can be obtained using a fixed cutoff of 200 eV for this system. The result is shown in the Figure below along with LDA, PBE and vdW-DF results. The solid RPA line was obtained using spline interpolation. Both LDA and PBE predicts adsorption at 2.0 A from the metal slab, but do not include van der Waals attraction. 
The van der Waals functional shows a significant amount of dispersive interactions far from the slab and predicts a physisorbed minimum 3.75 A from the slab. RPA captures both covalent and dispersive interactions and the resulting potential energy surface is a delicate balance between the two types of interactions. Two minima are seen and the covalent bound state at 2.2 A is slightly lower that the physisorbed state at 3.2 A, which is in good agreement with experiment.