This sample shows how to “Remove warning/pop-up when modifying type parameter values within a schedule” using the DialogBoxShowing event. Please remove the warning/pop-up that appears when editing scheduled fields that are tied to type parameters. This is EXTREMELY annoying when entering data for our MEP schedules, i.e. Plumbing Fixtures, Air Devices, Lighting Fixtures, etc. The popup reads: “This change will be applied to all elements of type _______.” In instances where we are filling out a schedule, we are forced to enter our data, press enter or click out of the cell, and then respond to this popup. EVERY TIME. 13 thoughts on “#BILTNA2018 Wish granted: Remove warning when modifying type parameter values in schedule” Hi Harry, Could you show us how to open a family and change subcategories within a nested array that contains generic forms? I have seen posts on modifying parameters in groups, but I was wondering if this is possible within a family document. Attempts within the project environment have been met with frustration, as the groups tend to move. Could you send an RVT/RFA sample (boostyourbim@gmail.com)? If there is a Revit bug causing the groups to move, that will probably affect the API too. Can you, in .NET, create ceilings and floors made from rooms that cut into the door openings to the door mid-point? The API can create floors, but not ceilings. What does it mean to cut into the door opening at the midpoint? Oh, the floor outline is to the boundary of the room, and then where there is a door the floor extends into the door opening about half the wall thickness, or even better to where the door leaf centre line is located. Another one I just thought of: can you create and modify subcategories within a family document when the generic forms are inside of a group? Further, can you swap the linestyle subcategories inside of a group? Typically these would be arrayed groups, so the other problem is that they might be nested,
although most families we deal with have only one nesting level. Could you send an RVT/RFA sample (boostyourbim@gmail.com)? What is the problem you have when doing this through the UI that you’d like to solve with the API? Thank you for making this post. I came here through the Ideas forum. I’m trying to get this running but my macro won’t run (this is my first time making a Revit macro). Would you mind sharing the steps for how you created the macro (the video seems to skip that part)? Hi, if you are new to Revit macros, please take a look at my online video course. Regards, Harry. Hey Harry, in the long term I definitely want to look into macros, but today I was just hoping to get this going, as I need to edit a schedule with thousands of elements with tons of type properties 🙂 I think I’m a little bit further now, but I’m still getting a compile error on this line: "uiapp.DialogBoxShowing += new EventHandler(dismissTaskDialog);" The error is: No overload for 'dismissTaskDialog' matches delegate 'System.EventHandler' (CS0123). Any ideas? 🙂 This is the code of my macro, anyone any idea what I’m doing wrong? using System; using Autodesk.Revit.UI; using Autodesk.Revit.DB; using Autodesk.Revit.UI.Selection; using System.Collections.Generic; using System.Linq; using Autodesk.Revit.DB.Visual; using Autodesk.Revit.ApplicationServices; using Autodesk.Revit.UI.Events; namespace test { [Autodesk.Revit.Attributes.Transaction(Autodesk.Revit.Attributes.TransactionMode.Manual)] [Autodesk.Revit.DB.Macros.AddInId("739D5A0F-22AE-43D4-8F69-93CE63CAEC56")] public partial class ThisApplication { … } } Try changing this line to include DialogBoxShowingEventArgs: uiapp.DialogBoxShowing += new EventHandler<DialogBoxShowingEventArgs>(dismissTaskDialog); That worked! I changed it to uiapp.DialogBoxShowing += new EventHandler<DialogBoxShowingEventArgs>(dismissTaskDialog); and it compiled and ran fine! VERY helpful, thanks!
https://boostyourbim.wordpress.com/2018/08/08/biltna2018-wish-granted-remove-warning-when-modifying-type-parameter-values-in-schedule/
CC-MAIN-2019-47
en
refinedweb
I tried to migrate a largish fileserver (5.5TB) to vSAN on Friday, but it failed twice, and I'm wondering if my issue is due to multiple VMDKs with the same name. The server has 5 disks:

disk one is located on datastore A and is called fileserver.vmdk
disk two is located on datastore A and is called fileserver_1.vmdk
disk three is located on datastore B and is called fileserver.vmdk
disk four is located on datastore C and is called fileserver.vmdk
disk five is located on datastore D and is called fileserver_5.vmdk

Is that an issue for vSAN, or should it be able to handle this during a migration? tl;dr: 5 disks, 3 of which have the exact same VMDK name. Is it an issue to migrate to vSAN?

The VM having 3 disks with the same name is only a non-issue due to them being on different datastores (and different sub-directories) - when you migrate them to vSAN they are all in the same namespace, and the 2nd one to start copying should fail (you *should* get an error or log message indicating the file name already exists). Change the names: You could easily validate this as the cause by trying XSvMotion of just disks 1, 2 & 5, then trying to XSvMotion either 3 or 4 (or even just disk 1, then try disk 3/4). That being said, this is a relatively large VM, so there is more potential for vMotion failure due to timeout/access disruption, so XSvMotion of some disks at a time may be a good idea regardless. Bob

Thank you for the reply. Unfortunately my vSAN hosts do not have access to the old storage, so I cannot migrate a few disks at a time. I'll look into the KB.
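The clash Bob describes can be sketched in a few lines of Python (illustrative only: the datastore and file names come from the post, the check itself is mine). Names that are unique while qualified by datastore collide once only the file name survives the move into a single namespace:

```python
# (datastore, file name) pairs as listed in the post
disks = [
    ("datastoreA", "fileserver.vmdk"),
    ("datastoreA", "fileserver_1.vmdk"),
    ("datastoreB", "fileserver.vmdk"),
    ("datastoreC", "fileserver.vmdk"),
    ("datastoreD", "fileserver_5.vmdk"),
]

# Unique while qualified by datastore...
assert len(set(disks)) == 5

# ...but colliding once everything lands in one namespace
names = [name for _, name in disks]
dupes = {n for n in names if names.count(n) > 1}
print(dupes)  # -> {'fileserver.vmdk'}
```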
https://communities.vmware.com/thread/610817
#include <math.h> int isfinite(real-floating x); The isfinite() macro shall determine whether its argument has a finite value (zero, subnormal, or normal, and not infinite or NaN). First, an argument represented in a format wider than its semantic type is converted to its semantic type. Then determination is based on the type of the argument. The following sections are informative. The Base Definitions volume of POSIX.1-2008, <math.h> Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
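Python's math.isfinite performs the same classification as this macro, which makes the definition easy to demonstrate (a quick sketch, not part of the POSIX page itself):

```python
import math

# Finite values: zero, subnormal, and normal numbers
assert math.isfinite(0.0)
assert math.isfinite(5e-324)   # smallest positive subnormal double
assert math.isfinite(1.5)

# Non-finite values: infinities and NaNs
assert not math.isfinite(float("inf"))
assert not math.isfinite(float("-inf"))
assert not math.isfinite(float("nan"))
```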
https://man.linuxreviews.org/man3p/isfinite.3p.html
This visualization makes it seem like selection sort is fast, but in reality the algorithm iterates through all remaining elements (gray bars) to select the smallest value. Selection sort is not stable. With an always-quadratic runtime, selection sort may not present itself as the best sorting algorithm. However, this algorithm could be useful if the cost of swapping items is high: it performs O(n²) comparisons but only O(n) swaps. Here's a sample implementation written in Java. Note that it extends the Sort.java class.

```java
import java.util.Arrays;

public class SelectionSort extends Sort {
    public static void main(String[] args) {
        int[] testInt = {1, 6, 2, 3, 6, 7, 4, 2, 5};
        selectionSort(testInt);
        System.out.println(Arrays.toString(testInt));
    }

    public static void selectionSort(int[] input) {
        for (int i = 0; i < input.length; i++) {
            int minIndex = i;
            int min = input[i];
            // Iterate through array to find minimal value
            for (int j = i; j < input.length; j++) {
                if (input[j] < min) {
                    // Replace with new min if new min found
                    minIndex = j;
                    min = input[j];
                }
            }
            // Swap with index at start of search
            swap(input, minIndex, i);
        }
    }
}
```
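To make the "O(n²) comparisons, O(n) swaps" claim concrete, here is an instrumented sketch (written in Python rather than the article's Java; the counting logic is mine, added for illustration):

```python
def selection_sort_counted(a):
    """Selection sort that reports (comparisons, swaps)."""
    comparisons = swaps = 0
    n = len(a)
    for i in range(n):
        min_index = i
        for j in range(i + 1, n):
            comparisons += 1          # one comparison per inner step
            if a[j] < a[min_index]:
                min_index = j
        if min_index != i:
            a[i], a[min_index] = a[min_index], a[i]
            swaps += 1                # at most one swap per outer step
    return comparisons, swaps

data = [1, 6, 2, 3, 6, 7, 4, 2, 5]
comps, swaps = selection_sort_counted(data)
# comparisons is always n*(n-1)/2; swaps never exceed n-1
print(data, comps, swaps)
```

For n = 9, this always performs 36 comparisons regardless of input order, while the swap count is bounded by 8, which is why the algorithm can pay off when swaps are expensive.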
https://code.snipcademy.com/tutorials/algorithms/sorting/selection
This is part 3 of a 4-part project to build a generic TCP proxy. This proxy will be capable of handling any TCP-based protocol, not just HTTP. So far, the fake DNS server that we built in part 2 is tricking your smartphone into sending TCP data to your laptop. In this section we’re going to build our actual TCP proxy server, and run it on your laptop. Its job will be to listen for data sent from your phone; forward it to the real, remote server; and finally forward any responses from the server back to your phone. In other words, act like a proxy. Our proxy will run on your laptop and listen for incoming TCP connections on port 80 (by convention, the unencrypted HTTP port). When it receives one, presumably from your smartphone, it will first make another TCP connection, this time with our target hostname’s remote server. Second, it will take any data that it receives over the connection with your smartphone, and re-send it over its new connection with the remote server. Third, it will listen for response data coming back from the server. Fourth and finally, it will relay this response data back to your smartphone, completing a 4-step loop. Your smartphone will be able to talk to the remote server as normal, taking only a slight detour via our proxy. We will use HTTP for testing instead of some other TCP-based protocol because it’s easier. To get your phone to make an HTTP request, all you have to do is visit a website. Our proxy isn’t “non-HTTP” - it’s “non-HTTP-specific”. In addition, the first version of our TCP proxy will not be capable of handling TLS encryption. We will therefore have to take care to test using websites that use unencrypted HTTP, not HTTPS. We will add TLS support in part 4. Let’s take a closer look at each stage of our 4-step loop: from smartphone, to laptop, to remote server, and back again. The first step is almost completely taken care of by our DNS server from the previous section of the project.
Your phone has already been tricked into sending its TCP connections for our target hostname to your laptop, and all that remains is to ensure that our proxy receives them safely. The second step, from laptop to remote server, is handled by our proxy. When our proxy receives a TCP connection from your phone, it will turn around and initiate a second TCP connection, this time with the target remote server. It will then re-send any data that it receives from your phone to the remote server. As we discussed in part 2, we will hardcode the hostname of the remote server that our proxy should make its second connection with. We will also take care to ensure that this hardcoded hostname matches the hardcoded hostname in our fake DNS server. Once our proxy has sent the remote server the data that it received from your smartphone, all that remains for it to do is to send the response data that it receives from the remote server back to your phone. In this third, server-to-laptop stage we will make sure that our proxy can receive response data from the remote server. Finally, our proxy will send this response data back to your phone. Your phone will receive the data in exactly the same form as if it had been talking to the remote server directly, and it will assume that everything that just happened was completely normal. I’ve written us an example proxy using Python’s twisted networking framework. I found that twisted gave the right amount of control over the innards of the proxy, whilst requiring very little boilerplate. In order to achieve this it introduces some of its own, new abstractions. These abstractions make twisted code very terse, but also a little cryptic for the uninitiated. Twisted is designed around “event-driven callbacks”. This means that it automatically runs particular methods (or “callbacks”) whenever a specific event occurs. The events that we are interested in are “connection made” and “data received”.
We can tell twisted what to do when these events occur by defining a Protocol class with methods called connectionMade and dataReceived. When twisted sees a “connection made” event it runs our connectionMade method, and you can probably guess what it does when it sees a “data received” event. Here’s my code. It’s followed by a more detailed explanation of the different components. (This code is also on GitHub)

```python
from twisted.internet import protocol, reactor
from twisted.internet import ssl as twisted_ssl
import dns.resolver
import netifaces as ni


# Adapted from
class TCPProxyProtocol(protocol.Protocol):
    """
    TCPProxyProtocol listens for TCP connections from a client (eg. a phone)
    and forwards them on to a specified destination (eg. an app's API server)
    over a second TCP connection, using a ProxyToServerProtocol.

    It assumes that neither leg of this trip is encrypted.
    """
    def __init__(self):
        self.buffer = None
        self.proxy_to_server_protocol = None

    def connectionMade(self):
        """
        Called by twisted when a client connects to the proxy. Makes a
        connection from the proxy to the server to complete the chain.
        """
        print("Connection made from CLIENT => PROXY")
        proxy_to_server_factory = protocol.ClientFactory()
        proxy_to_server_factory.protocol = ProxyToServerProtocol
        proxy_to_server_factory.server = self

        reactor.connectTCP(DST_IP, DST_PORT, proxy_to_server_factory)

    def dataReceived(self, data):
        """
        Called by twisted when the proxy receives data from the client.
        Sends the data on to the server.

        CLIENT ===> PROXY ===> DST
        """
        print("")
        print("CLIENT => SERVER")
        print(FORMAT_FN(data))
        print("")
        if self.proxy_to_server_protocol:
            self.proxy_to_server_protocol.write(data)
        else:
            self.buffer = data

    def write(self, data):
        self.transport.write(data)


class ProxyToServerProtocol(protocol.Protocol):
    """
    ProxyToServerProtocol connects to a server over TCP. It sends the server
    data given to it by a TCPProxyProtocol, and uses the TCPProxyProtocol to
    send data that it receives back from the server on to a client.
    """

    def connectionMade(self):
        """
        Called by twisted when the proxy connects to the server. Flushes any
        buffered data on the proxy to the server.
        """
        print("Connection made from PROXY => SERVER")
        self.factory.server.proxy_to_server_protocol = self
        self.write(self.factory.server.buffer)
        self.factory.server.buffer = ''

    def dataReceived(self, data):
        """
        Called by twisted when the proxy receives data from the server.
        Sends the data on to the client.

        DST ===> PROXY ===> CLIENT
        """
        print("")
        print("SERVER => CLIENT")
        print(FORMAT_FN(data))
        print("")
        self.factory.server.write(data)

    def write(self, data):
        if data:
            self.transport.write(data)


def _noop(data):
    return data


def get_local_ip(iface):
    ni.ifaddresses(iface)
    return ni.ifaddresses(iface)[ni.AF_INET][0]['addr']


FORMAT_FN = _noop

LISTEN_PORT = 80
DST_PORT = 80
DST_HOST = "nonhttps.com"
local_ip = get_local_ip('en0')

# Look up the IP address of the target
print("Querying DNS records for %s..." % DST_HOST)
a_records = dns.resolver.query(DST_HOST, 'A')
print("Found %d A records:" % len(a_records))
for r in a_records:
    print("* %s" % r.address)
print("")
assert(len(a_records) > 0)

# The target may have multiple IP addresses - we
# simply choose the first one.
DST_IP = a_records[0].address
print("Choosing to proxy to %s" % DST_IP)

print("""
#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
-#-#-#-#-#-RUNNING TCP PROXY-#-#-#-#-#-
#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#

Dst IP:\t%s
Dst port:\t%d
Dst hostname:\t%s
Listen port:\t%d
Local IP:\t%s
""" % (DST_IP, DST_PORT, DST_HOST, LISTEN_PORT, local_ip))

factory = protocol.ServerFactory()
factory.protocol = TCPProxyProtocol
reactor.listenTCP(LISTEN_PORT, factory)
reactor.run()
```

Let’s take a closer look at this code. You might find it useful to open the code on GitHub. TCPProxyProtocol is our main Protocol class. It handles communicating with your phone, and delegates communicating with the remote server to the ProxyToServerProtocol class. We initialize our proxy server by instantiating one of these TCPProxyProtocol objects, and telling twisted to use it to listen on port 80 - by convention, the unencrypted HTTP port. Next, nothing happens until your laptop receives a TCP connection on port 80 (presumably from your phone). When twisted sees this “connection made” event, it invokes the connectionMade callback on our TCPProxyProtocol. At this point our proxy has made a connection with your smartphone, and step 1 of our 4-step process is complete. Step 2, from proxy to remote server, is handled by the ProxyToServerProtocol class. When our TCPProxyProtocol#connectionMade method is called, it creates an instance of a ProxyToServerProtocol, and instructs this instance to connect to our target remote server on port 80. If our TCPProxyProtocol receives any data from your phone before the ProxyToServerProtocol’s connection to the remote server is complete, it adds the data to a buffer to make sure it doesn’t get dropped.
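That buffer-until-connected behaviour can be isolated from twisted and sketched on its own (the class and method names below are mine, for illustration; in the real proxy the buffer lives on TCPProxyProtocol):

```python
class BufferedForwarder:
    """Queue client data until the server-side connection exists,
    then flush the queue and forward subsequent data directly."""

    def __init__(self):
        self.buffer = b""
        self.send = None   # becomes a callable once the server leg is up

    def data_received(self, data):
        if self.send is not None:
            self.send(data)          # server connection ready: forward now
        else:
            self.buffer += data      # not ready yet: queue the bytes

    def server_connected(self, send):
        self.send = send             # remember how to reach the server
        if self.buffer:
            send(self.buffer)        # flush anything queued while waiting
            self.buffer = b""

sent = []
fwd = BufferedForwarder()
fwd.data_received(b"GET / HTTP/1.1\r\n")     # arrives before the server leg is up
fwd.server_connected(sent.append)            # connecting flushes the queue
fwd.data_received(b"Host: example.com\r\n")  # now forwarded immediately
print(sent)
```

Nothing is lost even though the client started talking before the proxy-to-server connection existed, which is exactly the race the twisted code has to handle.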
Once the connection is ready, ProxyToServerProtocol sends any data that the buffer has collected to the remote server. At this point our proxy has opened separate connections with both your smartphone and the remote server, and is sending data from your smartphone on to the remote server. Step 2 complete. Finally, when the ProxyToServerProtocol receives data back from the remote server, twisted invokes the ProxyToServerProtocol’s own dataReceived callback. The code in this callback instructs the original TCPProxyProtocol to send the data that the ProxyToServerProtocol received from the remote server back to your phone. Steps 3 and 4 complete. Since we have not yet implemented TLS support for our proxy, we need to test our proxy using a website that does not have HTTPS enabled. I recommend nonhttps.com, a handy development hostname that, as promised, does not use HTTPS. Before you begin testing, make sure that: Then start both scripts and visit nonhttps.com on your phone. You should see your fake DNS server spoof the DNS request and return the IP address of your laptop. You should then see your TCP proxy receive HTTP data from your smartphone, and log its contents to the terminal. Next, it should log the corresponding HTTP response that comes back from nonhttps.com. Finally, nonhttps.com should load in your phone’s browser, as though nothing at all miraculous had just happened. If this doesn’t work then it’s time for some debugging: try sniffing the traffic on tcp port 80. Do you see anything that looks like an error? Do you see anything at all? Now you can proxy any TCP request that doesn’t use TLS encryption. Even though we have been testing using HTTP requests for simplicity, notice that nowhere in our code do we even mention HTTP. We see only a generic, TCP-transported stream of bytes that can have any structure and use any application protocol that it likes. All that remains is for us to make our proxy capable of handling TCP requests that do use TLS encryption.
That’s in the fourth and final section of this project. Read on - Part 4: Fake Certificate Authority
https://robertheaton.com/2018/08/31/how-to-build-a-tcp-proxy-3/
Source code for django.db.models.fields.files

```python
import datetime
import posixpath

from django.utils.translation import gettext_lazy as _

# ...


class FieldFile(File):
    def __init__(self, instance, field, name):
        # ...

    def open(self, mode='rb'):
        self._require_file()
        if getattr(self, '_file', None) is None:
            self.file = self.storage.open(self.name, mode)
        else:
            self.file.open(mode)
        return self
    # open() doesn't alter the file's contents, but it does reset the pointer
    open.alters_data = True

    # In addition to the standard File API, FieldFiles have extra methods
    # to further manipulate the underlying file, as well as update the
    # associated model instance.

    @property
    def closed(self):
        file = getattr(self, '_file', None)
        return file is None or file.closed

    # ...


class FileDescriptor:
    """
    The descriptor for the file attribute on the model instance. Return a
    FieldFile when accessed so you can write code like::

        >>> from myapp.models import MyModel
        >>> instance = MyModel.objects.get(pk=1)
        >>> instance.file.size
    """
    # ...


class FileField(Field):
    # ...

    def __init__(self, verbose_name=None, name=None, upload_to='', storage=None, **kwargs):
        # ...
        self.storage = storage or default_storage
        self.upload_to = upload_to
        kwargs.setdefault('max_length', 100)
        super().__init__(verbose_name, name, **kwargs)

    def check(self, **kwargs):
        return [
            *super().check(**kwargs),
            *self._check_primary_key(),
            *self._check_upload_to(),
        ]

    def _check_primary_key(self):
        if self._primary_key_set_explicitly:
            return [
                checks.Error(
                    "'primary_key' is not a valid argument for a %s."
                    % self.__class__.__name__,
                    obj=self,
                    id='fields.E201',
                )
            ]
        else:
            return []

    def _check_upload_to(self):
        if isinstance(self.upload_to, str):
            # ...

    def get_prep_value(self, value):
        value = super().get_prep_value(value)
        # Need to convert File objects provided via a form to string for database insertion
        if value is None:
            return None
        return str(value)

    def pre_save(self, model_instance, add):
        file = super().pre_save(model_instance, add)
        if file and not file._committed:
            # Commit the file to storage prior to saving the model
            file.save(file.name, file.file, save=False)
        return file

    def contribute_to_class(self, cls, name, **kwargs):
        super().contribute_to_class(cls, name, **kwargs)
        setattr(cls, self.name, self.descriptor_class(self))

    def formfield(self, **kwargs):
        return super().formfield(**{
            'form_class': forms.FileField,
            'max_length': self.max_length,
            **kwargs,
        })


class ImageField(FileField):
    # ...

    def check(self, **kwargs):
        return [
            *super().check(**kwargs),
            *self._check_image_library_installed(),
        ]

    def deconstruct(self):
        name, path, args, kwargs = super().deconstruct()
        if self.width_field:
            kwargs['width_field'] = self.width_field
        if self.height_field:
            kwargs['height_field'] = self.height_field
        return name, path, args, kwargs

    def formfield(self, **kwargs):
        return super().formfield(**{
            'form_class': forms.ImageField,
            **kwargs,
        })
```
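The contribute_to_class call above attaches a descriptor to the model class (`setattr(cls, self.name, self.descriptor_class(self))`), which is plain Python's descriptor protocol at work. A minimal stdlib-only sketch of the same pattern, with illustrative names rather than Django's real API:

```python
class WrappingDescriptor:
    """Store a raw value on the instance, wrap it on access -
    loosely how accessing a FileField attribute yields a FieldFile."""

    def __init__(self, wrapper):
        self.wrapper = wrapper

    def __set_name__(self, owner, name):
        self.attr = "_" + name          # where the raw value is stored

    def __set__(self, instance, value):
        setattr(instance, self.attr, value)

    def __get__(self, instance, owner=None):
        if instance is None:
            return self                 # class access returns the descriptor
        return self.wrapper(getattr(instance, self.attr, None))

class Upload:
    # Accessing .file wraps the stored string, like FieldFile wrapping a name
    file = WrappingDescriptor(lambda name: f"<FieldFile: {name}>")

u = Upload()
u.file = "photos/cat.png"
print(u.file)   # -> <FieldFile: photos/cat.png>
```

The stored value stays a plain string (as in the database column), while every attribute access hands back a richer wrapper object.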
https://docs.djangoproject.com/en/2.2/_modules/django/db/models/fields/files/
import "github.com/cockroachdb/cockroach/pkg/util/uuid"

codec.go generator.go sql.go uuid.go uuid_wrapper.go

```go
const (
	V1 byte // Version 1 (date-time and MAC address)
	V3      // Version 3 (namespace name-based)
	V4      // Version 4 (random)
	V5      // Version 5 (namespace name-based)
)
```

UUID versions. UUID layout variants. Size of a UUID in bytes.

```go
var (
	NamespaceDNS  = Must(FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))
	NamespaceURL  = Must(FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8"))
	NamespaceOID  = Must(FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8"))
	NamespaceX500 = Must(FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8"))
)
```

Predefined namespace UUIDs. Nil is the nil UUID, as specified in RFC-4122, that has all 128 bits set to zero. Gen is a reference UUID generator based on the specifications laid out in RFC-4122 and DCE 1.1: Authentication and Security Services. This type satisfies the Generator interface as defined in this package. For consumers who are generating V1 UUIDs, but don't want to expose the MAC address of the node generating the UUIDs, the NewGenWithHWAF() function has been provided as a convenience. See the function's documentation for more info. The authors of this package do not feel that the majority of users will need to obfuscate their MAC address, and so we recommend using NewGen() to create a new generator. NewGen returns a new instance of Gen with some default values set. Most people should use this. NewGen by default uses crypto/rand.Reader as its source of randomness. func NewGenWithHWAF(hwaf HWAddrFunc) *Gen NewGenWithHWAF builds a new UUID generator with the HWAddrFunc provided. Most consumers should use NewGen() instead. This is used so that consumers can generate their own MAC addresses, for use in the generated UUIDs, if there is some concern about exposing the physical address of the machine generating the UUID. The Gen generator will only invoke the HWAddrFunc once, and cache that MAC address for all the future UUIDs generated by it.
If you'd like to switch the MAC address being used, you'll need to create a new generator using this function. NewGenWithReader returns a new instance of gen which uses r as its source of randomness.. type Generator interface { NewV1() (UUID, error) // NewV2(domain byte) (UUID, error) // CRL: Removed support for V2. NewV3(ns UUID, name string) UUID NewV4() (UUID, error) NewV5(ns UUID, name string) UUID } Generator provides an interface for generating UUIDs. DefaultGenerator is the default UUID Generator used by this package. type HWAddrFunc func() (net.HardwareAddr, error) HWAddrFunc is the function type used to provide hardware (MAC) addresses. NullUUID can be used with the standard sql package to represent a UUID value that can be NULL in the database. MarshalJSON marshals the NullUUID as null or the nested UUID Scan implements the sql.Scanner interface. UnmarshalJSON unmarshals a NullUUID Value implements the driver.Valuer interface. ShortStringer implements fmt.Stringer to output Short() on String(). func (s ShortStringer) String() string String is part of fmt.Stringer. Timestamp is the count of 100-nanosecond intervals since 00:00:00.00, 15 October 1582 within a V1 UUID. This type has no meaning for V2-V5 UUIDs since they don't have an embedded timestamp. TimestampFromV1 returns the Timestamp embedded within a V1 UUID. Returns an error if the UUID is any version other than 1. Time returns the UTC time.Time representation of a Timestamp UUID is an array type to represent the value of a UUID, as defined in RFC-4122. FastMakeV4 generates a UUID using a fast but not cryptographically secure source of randomness. FromBytes returns a UUID generated from the raw byte slice input. It will return an error if the slice isn't 16 bytes long. FromBytesOrNil returns a UUID generated from the raw byte slice input. Same behavior as FromBytes(), but returns uuid.Nil instead of an error. FromString returns a UUID parsed from the input string. 
Input is expected in a form accepted by UnmarshalText. FromStringOrNil returns a UUID parsed from the input string. Same behavior as FromString(), but returns uuid.Nil instead of an error. FromUint128 delegates to FromBytes and wraps the result in a UUID. MakeV4 calls Must(NewV4). Must is a helper that wraps a call to a function returning (UUID, error) and panics if the error is non-nil. It is intended for use in variable initializations such as var packageUUID = uuid.Must(uuid.FromString("123e4567-e89b-12d3-a456-426655440000")) NewPopulatedUUID returns a populated UUID. DeterministicV4 overwrites this UUID with one computed deterministically to evenly fill the space of possible V4 UUIDs. `n` represents how many UUIDs will fill the space and `i` is an index into these `n` (and thus must be in the range `[0,n)`). The resulting UUIDs will be unique, evenly-spaced, and sorted. Equal returns true iff the receiver equals the argument. This method exists only to conform to the API expected by gogoproto's generated Equal implementations. GetBytes returns the UUID as a byte slice. It incurs an allocation if the return value escapes. GetBytesMut returns the UUID as a mutable byte slice. Unlike GetBytes, it does not necessarily incur an allocation if the return value escapes. Instead, the return value escaping will cause the method's receiver (and any struct that it is a part of) to escape. Use only if GetBytes is causing an allocation and the UUID is already on the heap. MarshalBinary implements the encoding.BinaryMarshaler interface. MarshalJSON returns the JSON encoding of u. MarshalText implements the encoding.TextMarshaler interface. The encoding is the same as returned by the String() method. MarshalTo marshals u to data. Scan implements the sql.Scanner interface. A 16-byte slice will be handled by UnmarshalBinary, while a longer byte slice or a string will be handled by UnmarshalText. SetVariant sets the variant bits. SetVersion sets the version bits.
Short returns the first eight characters of the output of String(). Size returns the marshaled size of u, in bytes. String returns a canonical RFC-4122 string representation of the UUID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. StringBytes writes the result of String directly into a buffer, which must have a length of at least 36. ToUint128 returns the UUID as a Uint128. Unmarshal unmarshals data to u. UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. It will return an error if the slice isn't 16 bytes long. UnmarshalJSON unmarshals the JSON encoded data into u. UnmarshalText implements the encoding.TextUnmarshaler interface. Following formats are supported: "6ba7b810-9dad-11d1-80b4-00c04fd430c8", "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8", "6ba7b8109dad11d180b400c04fd430c8", "{6ba7b8109dad11d180b400c04fd430c8}", "urn:uuid:6ba7b8109dad11d180b400c04fd430c8". ABNF for supported UUID text representation follows:

```
URN      := 'urn'
UUID-NID := 'uuid'

hexdig   := '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' |
            'a' | 'b' | 'c' | 'd' | 'e' | 'f' |
            'A' | 'B' | 'C' | 'D' | 'E' | 'F'

hexoct   := hexdig hexdig
2hexoct  := hexoct hexoct
4hexoct  := 2hexoct 2hexoct
6hexoct  := 4hexoct 2hexoct
12hexoct := 6hexoct 6hexoct

hashlike  := 12hexoct
canonical := 4hexoct '-' 2hexoct '-' 2hexoct '-' 6hexoct

plain  := canonical | hashlike
uuid   := canonical | hashlike | braced | urn
braced := '{' plain '}' | '{' hashlike '}'
urn    := URN ':' UUID-NID ':' plain
```

Value implements the driver.Valuer interface. Variant returns the UUID layout variant. Version returns the algorithm version used to generate the UUID. Package uuid imports 20 packages (graph) and is imported by 191 packages. Updated 2019-10-14.
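The RFC-4122 namespace constants and version semantics documented above are language-independent; Python's stdlib uuid module (used here purely as a cross-check, since this page documents a Go package) exposes the same values:

```python
import uuid

# NamespaceDNS / NamespaceURL from the Go package are the RFC-4122 constants
assert uuid.NAMESPACE_DNS == uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
assert uuid.NAMESPACE_URL == uuid.UUID("6ba7b811-9dad-11d1-80b4-00c04fd430c8")

# V5 (namespace + name, SHA-1 based) is deterministic: same inputs, same UUID
a = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
assert a == b and a.version == 5

# V4 is random: two calls should virtually never collide
assert uuid.uuid4() != uuid.uuid4()
```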
https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/uuid
In this tutorial, you’ll learn about:

- Storing images on disk as .png files
- Storing images in lightning memory-mapped databases (LMDB)
- Storing images in hierarchical data format (HDF5)

You’ll also explore the following:

- Why alternate storage methods are worth considering
- What the performance differences are when you’re reading and writing single images
- What the performance differences are when you’re reading and writing many images
- How the three methods compare in terms of disk usage

If none of the storage methods ring a bell, don’t worry: for this article, all you need is a reasonably solid foundation in Python and a basic understanding of images (that they are really composed of multi-dimensional arrays of numbers) and relative memory, such as the difference between 10MB and 10GB. Let’s get started! If we were to use the full TinyImages dataset, then you would need about 400GB of free disk space, which would probably be a limiting factor. Credits for the dataset as described in chapter 3 of this tech report go to Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. If you’d like to follow along with the code examples in this article, you can download CIFAR-10 here, selecting the Python version. You’ll be sacrificing 163MB of disk space. When you download and unzip the folder, you’ll discover that the files are not human-readable image files. They have actually been serialized and saved in batches using cPickle. While we won’t consider pickle or cPickle in this article, other than to extract the CIFAR dataset, it’s worth mentioning that the Python pickle module has the key advantage of being able to serialize any Python object without any extra code or transformation on your part.
It also has a potentially serious disadvantage of posing a security risk and not coping well when dealing with very large quantities of data. The following code unpickles each of the five batch files and loads all of the images into a NumPy array:

```python
import numpy as np
import pickle
from pathlib import Path

# Path to the unzipped CIFAR data
data_dir = Path("data/cifar-10-batches-py/")

# Unpickle function provided by the CIFAR hosts
def unpickle(file):
    with open(file, "rb") as fo:
        dict = pickle.load(fo, encoding="bytes")
    return dict

images, labels = [], []
for batch in data_dir.glob("data_batch_*"):
    batch_data = unpickle(batch)
    for i, flat_im in enumerate(batch_data[b"data"]):
        im_channels = []
        # Each image is flattened, with channels in order of R, G, B
        for j in range(3):
            im_channels.append(
                flat_im[j * 1024 : (j + 1) * 1024].reshape((32, 32))
            )
        # Reconstruct the original image
        images.append(np.dstack((im_channels)))
        # Save the label
        labels.append(batch_data[b"labels"][i])

print("Loaded CIFAR-10 training set:")
print(f" - np.shape(images) {np.shape(images)}")
print(f" - np.shape(labels) {np.shape(labels)}")
```

All the images are now in RAM in the images variable, with their corresponding meta data in labels, and are ready for you to manipulate. Next, you can install the Python packages you’ll use for the three methods. Note: That last code block used f-strings. You can read more about them in Python 3’s f-Strings: An Improved String Formatting Syntax (Guide).

Setup for Storing Images on Disk

You’ll need to set up your environment for the default method of saving and accessing these images from disk. This article will assume you have Python 3.x installed on your system, and will use Pillow for the image manipulation:

$ pip install Pillow

Alternatively, if you prefer, you can install it using Anaconda:

$ conda install -c conda-forge pillow

Note: PIL is the original version of the Python Imaging Library, which is no longer maintained and is not compatible with Python 3.x.
If you have previously installed PIL, make sure to uninstall it before installing Pillow, as they can’t exist together. Now you’re ready for storing and reading images from disk. Getting Started With LMDB LMDB, sometimes referred to as the “Lightning Database,” stands for Lightning Memory-Mapped Database because it’s fast and uses memory-mapped files. It’s a key-value store, not a relational database. In terms of implementation, LMDB is a B+ tree, which basically means that it is a tree-like graph structure stored in memory where each key-value element is a node, and nodes can have many children. Nodes on the same level are linked to one another for fast traversal. Critically, key components of the B+ tree are set to correspond to the page size of the host operating system, maximizing efficiency when accessing any key-value pair in the database. Since LMDB’s high performance relies heavily on this particular point, LMDB’s efficiency has been shown to be dependent on the underlying file system and its implementation. Another key reason for the efficiency of LMDB is that it is memory-mapped. This means that it returns direct pointers to the memory addresses of both keys and values, without needing to copy anything in memory as most other databases do. Those who want to dive into a bit more of the internal implementation details of B+ trees can check out this article on B+ trees and then play with this visualization of node insertion. If B+ trees don’t interest you, don’t worry. You don’t need to know much about their internal implementation in order to use LMDB. We will be using the Python binding for the LMDB C library, which can be installed via pip: $ pip install lmdb You also have the option of installing via Anaconda: $ conda install -c conda-forge python-lmdb Check that you can import lmdb from a Python shell, and you’re good to go. Getting Started With HDF5 HDF5 stands for Hierarchical Data Format, a file format referred to as HDF4 or HDF5.
We don’t need to worry about HDF4, as HDF5 is the current maintained version. Interestingly, HDF has its origins in the National Center for Supercomputing Applications, as a portable, compact scientific data format. If you’re wondering if it’s widely used, check out NASA’s blurb on HDF5 from their Earth Data project. HDF files consist of two types of objects: - Datasets - Groups Datasets are multidimensional arrays, and groups consist of datasets or other groups. Multidimensional arrays of any size and type can be stored as a dataset, but the dimensions and type have to be uniform within a dataset. Each dataset must contain a homogeneous N-dimensional array. That said, because groups and datasets may be nested, you can still get the heterogeneity you may need. You can install the h5py package via pip: $ pip install h5py As with the other libraries, you can alternately install via Anaconda: $ conda install -c conda-forge h5py If you can import h5py from a Python shell, everything is set up properly. Storing a Single Image Now that you have a general overview of the methods, let’s dive straight in and look at a quantitative comparison of the basic tasks we care about: how long it takes to read and write files, and how much disk memory will be used. This will also serve as a basic introduction to how the methods work, with code examples of how to use them. When I refer to “files,” I generally mean a lot of them. However, it is important to make a distinction since some methods may be optimized for different operations and quantities of files. For the purposes of experimentation, we can compare the performance between various quantities of files, by factors of 10 from a single image to 100,000 images. Since our five batches of CIFAR-10 add up to 50,000 images, we can use each image twice to get to 100,000 images.
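To make the dataset/group distinction concrete, here is a minimal sketch; the file name, group name, and array contents are made up purely for illustration:

```python
import os
import tempfile

import h5py
import numpy as np

# Hypothetical file illustrating the structure described above
path = os.path.join(tempfile.mkdtemp(), "structure_demo.h5")

with h5py.File(path, "w") as file:
    # A dataset at the root of the file: a homogeneous N-dimensional array
    file.create_dataset("images", data=np.zeros((4, 32, 32, 3), dtype="uint8"))
    # A group, which can hold datasets (and further groups) of other shapes/types
    meta = file.create_group("meta")
    meta.create_dataset("labels", data=np.arange(4, dtype="uint8"))

with h5py.File(path, "r") as file:
    root_keys = sorted(file.keys())        # the root holds a dataset and a group
    label_shape = file["meta/labels"].shape
```

Nesting groups this way is how you get heterogeneous structure out of individually homogeneous datasets.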
To prepare for the experiments, you will want to create a folder for each method, which will contain all the database files or images, and save the paths to those directories in variables: from pathlib import Path disk_dir = Path("data/disk/") lmdb_dir = Path("data/lmdb/") hdf5_dir = Path("data/hdf5/") Path does not automatically create the folders for you unless you specifically ask it to: disk_dir.mkdir(parents=True, exist_ok=True) lmdb_dir.mkdir(parents=True, exist_ok=True) hdf5_dir.mkdir(parents=True, exist_ok=True) Now you can move on to running the actual experiments, with code examples of how to perform basic tasks with the three different methods. We can use the timeit module, which is included in the Python standard library, to help time the experiments. Although the main purpose of this article is not to learn the APIs of the different Python packages, it is helpful to have an understanding of how they can be implemented. We will go through the general principles alongside all the code used to conduct the storing experiments. Storing to Disk Our input for this experiment is a single image image, currently in memory as a NumPy array. You want to save it first to disk as a .png image, and name it using a unique image ID image_id. This can be done using the Pillow package you installed earlier: from PIL import Image import csv def store_single_disk(image, image_id, label): """ Stores a single image as a .png file on disk. Parameters: --------------- image image array, (32, 32, 3) to be stored image_id integer unique ID for image label image label """ Image.fromarray(image).save(disk_dir / f"{image_id}.png") with open(disk_dir / f"{image_id}.csv", "wt") as csvfile: writer = csv.writer( csvfile, delimiter=" ", quotechar="|", quoting=csv.QUOTE_MINIMAL ) writer.writerow([label]) This saves the image. In all realistic applications, you also care about the meta data attached to the image, which in our example dataset is the image label. 
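As a quick sanity check that the disk round trip is lossless, here is a self-contained sketch mirroring store_single_disk() above; a random dummy image and a throwaway directory stand in for images[0] and data/disk/:

```python
import csv
import tempfile
from pathlib import Path

import numpy as np
from PIL import Image

disk_dir = Path(tempfile.mkdtemp())  # throwaway stand-in for data/disk/

# A dummy 32x32 RGB image standing in for images[0]
image = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
label = 7

# Write the image and its label, as store_single_disk() does
Image.fromarray(image).save(disk_dir / "0.png")
with open(disk_dir / "0.csv", "w", newline="") as csvfile:
    csv.writer(csvfile).writerow([label])

# Read both back; PNG is lossless, so the arrays should match exactly
restored = np.array(Image.open(disk_dir / "0.png"))
with open(disk_dir / "0.csv", "r", newline="") as csvfile:
    restored_label = int(next(csv.reader(csvfile))[0])

roundtrip_ok = bool(np.array_equal(image, restored) and restored_label == label)
```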
When you’re storing images to disk, there are several options for saving the meta data. One solution is to encode the labels into the image name. This has the advantage of not requiring any extra files. However, it also has the big disadvantage of forcing you to deal with all the files whenever you do anything with labels. Storing the labels in a separate file allows you to play around with the labels alone, without having to load the images. Above, I have stored the labels in a separate .csv file for this experiment. Now let’s move on to doing the exact same task with LMDB. Storing to LMDB Firstly, LMDB is a key-value storage system where each entry is saved as a byte array, so in our case, keys will be a unique identifier for each image, and the value will be the image itself. Both the keys and values are expected to be strings, so the common usage is to serialize the value as a string, and then unserialize it when reading it back out. You can use pickle for the serializing. Any Python object can be serialized, so you might as well include the image meta data in the database as well. This saves you the trouble of attaching meta data back to the image data when we load the dataset from disk. You can create a basic Python class for the image and its meta data: class CIFAR_Image: def __init__(self, image, label): # Dimensions of image for reconstruction - not really necessary # for this dataset, but some datasets may include images of # varying sizes self.channels = image.shape[2] self.size = image.shape[:2] self.image = image.tobytes() self.label = label def get_image(self): """ Returns the image as a numpy array. """ image = np.frombuffer(self.image, dtype=np.uint8) return image.reshape(*self.size, self.channels) Secondly, because LMDB is memory-mapped, new databases need to know how much memory they are expected to use up.
This is relatively straightforward in our case, but it can be a massive pain in other cases, which you will see in more depth in a later section. LMDB calls this variable the map_size. Finally, read and write operations with LMDB are performed in transactions. You can think of them as similar to those of a traditional database, consisting of a group of operations on the database. This may already look significantly more complicated than the disk version, but hang on and keep reading! With those three points in mind, let’s look at the code to save a single image to an LMDB: import lmdb import pickle def store_single_lmdb(image, image_id, label): """ Stores a single image to an LMDB. Parameters: --------------- image image array, (32, 32, 3) to be stored image_id integer unique ID for image label image label """ map_size = image.nbytes * 10 # Create a new LMDB environment env = lmdb.open(str(lmdb_dir / f"single_lmdb"), map_size=map_size) # Start a new write transaction with env.begin(write=True) as txn: # All key-value pairs need to be strings value = CIFAR_Image(image, label) key = f"{image_id:08}" txn.put(key.encode("ascii"), pickle.dumps(value)) env.close() Note: It’s a good idea to calculate the exact number of bytes each key-value pair will take up. With a dataset of images of varying size, this will be an approximation, but you can use sys.getsizeof() to get a reasonable approximation. Keep in mind that sys.getsizeof(CIFAR_Image) will only return the size of a class definition, which is 1056, not the size of an instantiated object. The function will also not be able to fully calculate nested items, lists, or objects containing references to other objects. Alternately, you could use pympler to save you some calculations by determining the exact size of an object. You are now ready to save an image to LMDB. Lastly, let’s look at the final method, HDF5. Storing With HDF5 Remember that an HDF5 file can contain more than one dataset.
In this rather trivial case, you can create two datasets, one for the image, and one for its meta data: import h5py def store_single_hdf5(image, image_id, label): """ Stores a single image to an HDF5 file. Parameters: --------------- image image array, (32, 32, 3) to be stored image_id integer unique ID for image label image label """ # Create a new HDF5 file file = h5py.File(hdf5_dir / f"{image_id}.h5", "w") # Create a dataset in the file dataset = file.create_dataset( "image", np.shape(image), h5py.h5t.STD_U8BE, data=image ) meta_set = file.create_dataset( "meta", np.shape(label), h5py.h5t.STD_U8BE, data=label ) file.close() h5py.h5t.STD_U8BE specifies the type of data that will be stored in the dataset, which in this case is unsigned 8-bit integers. You can see a full list of HDF’s predefined datatypes here. Note: The choice of datatype will strongly affect the runtime and storage requirements of HDF5, so it is best to choose your minimum requirements. Now that we have reviewed the three methods of saving a single image, let’s move on to the next step. Experiments for Storing a Single Image Now you can put all three functions for saving a single image into a dictionary, which can be called later during the timing experiments: _store_single_funcs = dict( disk=store_single_disk, lmdb=store_single_lmdb, hdf5=store_single_hdf5 ) Finally, everything is ready for conducting the timed experiment. Let’s try saving the first image from CIFAR and its corresponding label, and storing it in the three different ways: from timeit import timeit store_single_timings = dict() for method in ("disk", "lmdb", "hdf5"): t = timeit( "_store_single_funcs[method](image, 0, label)", setup="image=images[0]; label=labels[0]", number=1, globals=globals(), ) store_single_timings[method] = t print(f"Method: {method}, Time usage: {t}") Note: While you’re playing around with LMDB, you may see a MapFullError: mdb_txn_commit: MDB_MAP_FULL: Environment mapsize limit reached error. 
It’s important to note that LMDB does not overwrite preexisting values, even if they have the same key. This contributes to the fast write time, but it also means that if you store an image more than once in the same LMDB file, then you will use up the map size. If you run a store function, be sure to delete any preexisting LMDB files first. Remember that we’re interested in runtime, displayed here in seconds, and also the memory usage: There are two takeaways here: - All of the methods are trivially quick. - In terms of disk usage, LMDB uses more. Clearly, despite LMDB having a slight performance lead, we haven’t convinced anyone why to not just store images on disk. After all, it’s a human readable format, and you can open and view them from any file system browser! Well, it’s time to look at a lot more images… Storing Many Images You have seen the code for using the various storage methods to save a single image, so now we need to adjust the code to save many images and then run the timed experiment. Adjusting the Code for Many Images Saving multiple images as .png files is as straightforward as calling store_single_method() multiple times. But this isn’t true for LMDB or HDF5, since you don’t want a different database file for each image. Rather, you want to put all of the images into one or more files. 
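Since LMDB never overwrites existing keys, it can help to clear out any databases left over from the single-image runs before storing many images. A minimal cleanup sketch, assuming the directory layout created earlier:

```python
import shutil
from pathlib import Path

lmdb_dir = Path("data/lmdb/")

# Remove any LMDB environments left over from earlier runs, then recreate
# the empty directory so the store functions can write fresh databases
if lmdb_dir.exists():
    shutil.rmtree(lmdb_dir)
lmdb_dir.mkdir(parents=True, exist_ok=True)

is_empty = not any(lmdb_dir.iterdir())
```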
You will need to slightly alter the code and create three new functions that accept multiple images, store_many_disk(), store_many_lmdb(), and store_many_hdf5(): def store_many_disk(images, labels): """ Stores an array of images to disk. Parameters: --------------- images images array, (N, 32, 32, 3) to be stored labels labels array, (N, 1) to be stored """ num_images = len(images) # Save all the images one by one for i, image in enumerate(images): Image.fromarray(image).save(disk_dir / f"{i}.png") # Save all the labels to the csv file with open(disk_dir / f"{num_images}.csv", "w") as csvfile: writer = csv.writer( csvfile, delimiter=" ", quotechar="|", quoting=csv.QUOTE_MINIMAL ) for label in labels: # This typically would be more than just one value per row writer.writerow([label]) def store_many_lmdb(images, labels): """ Stores an array of images to LMDB. Parameters: --------------- images images array, (N, 32, 32, 3) to be stored labels labels array, (N, 1) to be stored """ num_images = len(images) map_size = num_images * images[0].nbytes * 10 # Create a new LMDB DB for all the images env = lmdb.open(str(lmdb_dir / f"{num_images}_lmdb"), map_size=map_size) # Same as before — but let's write all the images in a single transaction with env.begin(write=True) as txn: for i in range(num_images): # All key-value pairs need to be Strings value = CIFAR_Image(images[i], labels[i]) key = f"{i:08}" txn.put(key.encode("ascii"), pickle.dumps(value)) env.close() def store_many_hdf5(images, labels): """ Stores an array of images to HDF5.
Parameters: --------------- images images array, (N, 32, 32, 3) to be stored labels labels array, (N, 1) to be stored """ num_images = len(images) # Create a new HDF5 file file = h5py.File(hdf5_dir / f"{num_images}_many.h5", "w") # Create a dataset in the file dataset = file.create_dataset( "images", np.shape(images), h5py.h5t.STD_U8BE, data=images ) meta_set = file.create_dataset( "meta", np.shape(labels), h5py.h5t.STD_U8BE, data=labels ) file.close() So that you could store more than one file to disk, the image files method was altered to loop over each image in the list. For LMDB, a loop is also needed since we are creating a CIFAR_Image object for each image and its meta data. The smallest adjustment is with the HDF5 method. In fact, there’s hardly an adjustment at all! HDF5 files have no limitation on file size aside from external restrictions or dataset size, so all the images were stuffed into a single dataset, just like before. Next, you will need to prepare the dataset for the experiments by increasing its size. Preparing the Dataset Before running the experiments again, let’s first double our dataset size so that we can test with up to 100,000 images: cutoffs = [10, 100, 1000, 10000, 100000] # Let's double our images so that we have 100,000 images = np.concatenate((images, images), axis=0) labels = np.concatenate((labels, labels), axis=0) # Make sure you actually have 100,000 images and labels print(np.shape(images)) print(np.shape(labels)) Now that there are enough images, it’s time for the experiment.
Experiment for Storing Many Images As you did with reading many images, you can create a dictionary handling all the functions with store_many_ and run the experiments: _store_many_funcs = dict( disk=store_many_disk, lmdb=store_many_lmdb, hdf5=store_many_hdf5 ) from timeit import timeit store_many_timings = {"disk": [], "lmdb": [], "hdf5": []} for cutoff in cutoffs: for method in ("disk", "lmdb", "hdf5"): t = timeit( "_store_many_funcs[method](images_, labels_)", setup="images_=images[:cutoff]; labels_=labels[:cutoff]", number=1, globals=globals(), ) store_many_timings[method].append(t) # Print out the method, cutoff, and elapsed time print(f"Method: {method}, Time usage: {t}") If you’re following along and running the code yourself, you’ll need to sit back a moment in suspense and wait for 111,110 images to be stored three times each to your disk, in three different formats. You’ll also need to say goodbye to approximately 2 GB of disk space. Now for the moment of truth! How long did all of that storing take? A picture is worth a thousand words: The first graph shows the normal, unadjusted storage time, highlighting the drastic difference between storing to .png files and LMDB or HDF5. The second graph shows the log of the timings, highlighting that HDF5 starts out slower than LMDB but, with larger quantities of images, comes out slightly ahead. While exact results may vary depending on your machine, this is why LMDB and HDF5 are worth thinking about. Here’s the code that generated the above graph: import matplotlib.pyplot as plt def plot_with_legend( x_range, y_data, legend_labels, x_label, y_label, title, log=False ): """ Displays a single plot with multiple datasets and matching legends. 
Parameters: -------------- x_range list of lists containing x data y_data list of lists containing y values legend_labels list of string legend labels x_label x axis label y_label y axis label """ plt.style.use("seaborn-whitegrid") plt.figure(figsize=(10, 7)) if len(y_data) != len(legend_labels): raise TypeError( "Error: number of data sets does not match number of labels." ) all_plots = [] for data, label in zip(y_data, legend_labels): if log: temp, = plt.loglog(x_range, data, label=label) else: temp, = plt.plot(x_range, data, label=label) all_plots.append(temp) plt.title(title) plt.xlabel(x_label) plt.ylabel(y_label) plt.legend(handles=all_plots) plt.show() # Getting the store timings data to display disk_x = store_many_timings["disk"] lmdb_x = store_many_timings["lmdb"] hdf5_x = store_many_timings["hdf5"] plot_with_legend( cutoffs, [disk_x, lmdb_x, hdf5_x], ["PNG files", "LMDB", "HDF5"], "Number of images", "Seconds to store", "Storage time", log=False, ) plot_with_legend( cutoffs, [disk_x, lmdb_x, hdf5_x], ["PNG files", "LMDB", "HDF5"], "Number of images", "Seconds to store", "Log storage time", log=True, ) Now let’s go on to reading the images back out. Reading a Single Image First, let’s consider the case for reading a single image back into an array for each of the three methods. Reading From Disk Of the three methods, LMDB requires the most legwork when reading image files back out of memory, because of the serialization step. Let’s walk through these functions that read a single image out for each of the three storage formats. First, read a single image and its meta from a .png and .csv file: def read_single_disk(image_id): """ Reads a single image from disk.
Parameters: --------------- image_id integer unique ID for image Returns: ---------- image image array, (32, 32, 3) to be stored label associated meta data, int label """ image = np.array(Image.open(disk_dir / f"{image_id}.png")) with open(disk_dir / f"{image_id}.csv", "r") as csvfile: reader = csv.reader( csvfile, delimiter=" ", quotechar="|", quoting=csv.QUOTE_MINIMAL ) label = int(next(reader)[0]) return image, label Reading From LMDB Next, read the same image and meta from an LMDB by opening the environment and starting a read transaction: 1 def read_single_lmdb(image_id): 2 """ Reads a single image from LMDB. 3 Parameters: 4 --------------- 5 image_id integer unique ID for image 6 7 Returns: 8 ---------- 9 image image array, (32, 32, 3) to be stored 10 label associated meta data, int label 11 """ 12 # Open the LMDB environment 13 env = lmdb.open(str(lmdb_dir / f"single_lmdb"), readonly=True) 14 15 # Start a new read transaction 16 with env.begin() as txn: 17 # Encode the key the same way as we stored it 18 data = txn.get(f"{image_id:08}".encode("ascii")) 19 # Remember it's a CIFAR_Image object that is loaded 20 cifar_image = pickle.loads(data) 21 # Retrieve the relevant bits 22 image = cifar_image.get_image() 23 label = cifar_image.label 24 env.close() 25 26 return image, label Here are a couple of points to note about the code snippet above: - Line 13: The readonly=True flag specifies that no writes will be allowed on the LMDB file until the transaction is finished. In database lingo, it’s equivalent to taking a read lock. - Line 20: To retrieve the CIFAR_Image object, you need to reverse the steps we took to pickle it when we were writing it. This is where the get_image() method of the object is helpful. This wraps up reading the image back out from LMDB. Finally, you will want to do the same with HDF5. Reading From HDF5 Reading from HDF5 looks very similar to the writing process.
Here is the code to open and read the HDF5 file and parse the same image and meta: def read_single_hdf5(image_id): """ Reads a single image from HDF5. Parameters: --------------- image_id integer unique ID for image Returns: ---------- image image array, (32, 32, 3) to be stored label associated meta data, int label """ # Open the HDF5 file in read-only mode file = h5py.File(hdf5_dir / f"{image_id}.h5", "r") image = np.array(file["/image"]).astype("uint8") label = int(np.array(file["/meta"]).astype("uint8")) return image, label Note that you access the various datasets in the file by indexing the file object using the dataset name preceded by a forward slash /. As before, you can create a dictionary containing all the read functions: _read_single_funcs = dict( disk=read_single_disk, lmdb=read_single_lmdb, hdf5=read_single_hdf5 ) With this dictionary prepared, you are ready for running the experiment. Experiment for Reading a Single Image You might expect that the experiment for reading a single image in will have somewhat trivial results, but here’s the experiment code: from timeit import timeit read_single_timings = dict() for method in ("disk", "lmdb", "hdf5"): t = timeit( "_read_single_funcs[method](0)", setup="image=images[0]; label=labels[0]", number=1, globals=globals(), ) read_single_timings[method] = t print(f"Method: {method}, Time usage: {t}") Here are the results of the experiment for reading a single image: It’s slightly faster to read the .png and .csv files directly from disk, but all three methods perform trivially quickly. The experiments we’ll do next are much more interesting. Reading Many Images Now you can adjust the code to read many images at once. This is likely the action you’ll be performing most often, so the runtime performance is essential. Adjusting the Code for Many Images Extending the functions above, you can create functions with read_many_, which can be used for the next experiments.
Like before, it is interesting to compare performance when reading different quantities of images, which are repeated in the code below for reference: def read_many_disk(num_images): """ Reads image from disk. Parameters: --------------- num_images number of images to read Returns: ---------- images images array, (N, 32, 32, 3) to be stored labels associated meta data, int label (N, 1) """ images, labels = [], [] # Loop over all IDs and read each image in one by one for image_id in range(num_images): images.append(np.array(Image.open(disk_dir / f"{image_id}.png"))) with open(disk_dir / f"{num_images}.csv", "r") as csvfile: reader = csv.reader( csvfile, delimiter=" ", quotechar="|", quoting=csv.QUOTE_MINIMAL ) for row in reader: labels.append(int(row[0])) return images, labels def read_many_lmdb(num_images): """ Reads image from LMDB. Parameters: --------------- num_images number of images to read Returns: ---------- images images array, (N, 32, 32, 3) to be stored labels associated meta data, int label (N, 1) """ images, labels = [], [] env = lmdb.open(str(lmdb_dir / f"{num_images}_lmdb"), readonly=True) # Start a new read transaction with env.begin() as txn: # Read all images in one single transaction, with one lock # We could split this up into multiple transactions if needed for image_id in range(num_images): data = txn.get(f"{image_id:08}".encode("ascii")) # Remember that it's a CIFAR_Image object # that is stored as the value cifar_image = pickle.loads(data) # Retrieve the relevant bits images.append(cifar_image.get_image()) labels.append(cifar_image.label) env.close() return images, labels def read_many_hdf5(num_images): """ Reads image from HDF5. 
Parameters: --------------- num_images number of images to read Returns: ---------- images images array, (N, 32, 32, 3) to be stored labels associated meta data, int label (N, 1) """ images, labels = [], [] # Open the HDF5 file in read-only mode file = h5py.File(hdf5_dir / f"{num_images}_many.h5", "r") images = np.array(file["/images"]).astype("uint8") labels = np.array(file["/meta"]).astype("uint8") return images, labels _read_many_funcs = dict( disk=read_many_disk, lmdb=read_many_lmdb, hdf5=read_many_hdf5 ) With the reading functions stored in a dictionary as with the writing functions, you’re all set for the experiment. Experiment for Reading Many Images You can now run the experiment for reading many images out: from timeit import timeit read_many_timings = {"disk": [], "lmdb": [], "hdf5": []} for cutoff in cutoffs: for method in ("disk", "lmdb", "hdf5"): t = timeit( "_read_many_funcs[method](num_images)", setup="num_images=cutoff", number=1, globals=globals(), ) read_many_timings[method].append(t) # Print out the method, cutoff, and elapsed time print(f"Method: {method}, No. images: {cutoff}, Time usage: {t}") As we did previously, you can graph the read experiment results: The top graph shows the normal, unadjusted read times, showing the drastic difference between reading from .png files and LMDB or HDF5. In contrast, the graph on the bottom shows the log of the timings, highlighting the relative differences with fewer images. Namely, we can see how HDF5 starts out behind but, with more images, becomes consistently faster than LMDB by a small margin.
Using the same plotting function as for the write timings, we have the following: disk_x_r = read_many_timings["disk"] lmdb_x_r = read_many_timings["lmdb"] hdf5_x_r = read_many_timings["hdf5"] plot_with_legend( cutoffs, [disk_x_r, lmdb_x_r, hdf5_x_r], ["PNG files", "LMDB", "HDF5"], "Number of images", "Seconds to read", "Read time", log=False, ) plot_with_legend( cutoffs, [disk_x_r, lmdb_x_r, hdf5_x_r], ["PNG files", "LMDB", "HDF5"], "Number of images", "Seconds to read", "Log read time", log=True, ) In practice, the write time is often less critical than the read time. Imagine that you are training a deep neural network on images, and only half of your entire image dataset fits into RAM at once. Each epoch of training a network requires the entire dataset, and the model needs a few hundred epochs to converge. You will essentially be reading half of the dataset into memory every epoch. There are several tricks people do, such as training pseudo-epochs to make this slightly better, but you get the idea. Now, look again at the read graph above. The difference between a 40-second and 4-second read time suddenly is the difference between waiting six hours for your model to train, or forty minutes! If we view the read and write times on the same chart, we have the following: You can plot all the read and write timings on a single graph using the same plotting function: plot_with_legend( cutoffs, [disk_x_r, lmdb_x_r, hdf5_x_r, disk_x, lmdb_x, hdf5_x], [ "Read PNG", "Read LMDB", "Read HDF5", "Write PNG", "Write LMDB", "Write HDF5", ], "Number of images", "Seconds", "Log Store and Read Times", log=False, ) When you’re storing images as .png files, there is a big difference between write and read times. However, with LMDB and HDF5, the difference is much less marked. Overall, even if read time is more critical than write time, there is a strong argument for storing images using LMDB or HDF5. 
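The back-of-the-envelope arithmetic behind those numbers looks like this; the epoch count of 540 is purely illustrative, standing in for "a few hundred epochs":

```python
# Total time spent reading data during training, assuming one full
# pass over the stored images per epoch
epochs = 540  # hypothetical; "a few hundred epochs"

slow_read_s = 40  # per-epoch read time with .png files (illustrative)
fast_read_s = 4   # per-epoch read time with LMDB/HDF5 (illustrative)

slow_hours = epochs * slow_read_s / 3600  # roughly six hours of waiting
fast_minutes = epochs * fast_read_s / 60  # versus well under an hour
```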
Now that you’ve seen the performance benefits of LMDB and HDF5, let’s look at another crucial metric: disk usage. Considering Disk Usage Speed is not the only performance metric you may be interested in. We’re already dealing with very large datasets, so disk space is also a very valid and relevant concern. Suppose you have an image dataset of 3TB. Presumably, you have them already on disk somewhere, unlike our CIFAR example, so by using an alternate storage method, you are essentially making a copy of them, which also has to be stored. Doing so will give you huge performance benefits when you use the images, but you’ll need to make sure you have enough disk space. How much disk space do the various storage methods use? Here’s the disk space used for each method for each quantity of images: I used the Linux du -h -c folder_name/* command to compute the disk usage on my system. There is some approximation inherent with this method due to rounding, but here’s the general comparison: # Memory used in KB disk_mem = [24, 204, 2004, 20032, 200296] lmdb_mem = [60, 420, 4000, 39000, 393000] hdf5_mem = [36, 304, 2900, 29000, 293000] X = [disk_mem, lmdb_mem, hdf5_mem] ind = np.arange(3) width = 0.35 plt.subplots(figsize=(8, 10)) plots = [plt.bar(ind, [row[0] for row in X], width)] for i in range(1, len(cutoffs)): plots.append( plt.bar( ind, [row[i] for row in X], width, bottom=[row[i - 1] for row in X] ) ) plt.ylabel("Memory in KB") plt.title("Disk memory used by method") plt.xticks(ind, ("PNG", "LMDB", "HDF5")) plt.yticks(np.arange(0, 400000, 100000)) plt.legend( [plot[0] for plot in plots], ("10", "100", "1,000", "10,000", "100,000") ) plt.show() Both HDF5 and LMDB take up more disk space than if you store using normal .png images. It’s important to note that both LMDB and HDF5 disk usage and performance depend highly on various factors, including operating system and, more critically, the size of the data you store. 
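If du isn’t available on your platform, a rough Python stand-in is easy to write. Note that, unlike du, this sums apparent file sizes rather than allocated blocks, so the numbers will differ slightly:

```python
import tempfile
from pathlib import Path

def folder_size_kb(folder):
    """Rough stand-in for `du -c`: total size of files under folder, in KB."""
    return sum(
        f.stat().st_size for f in Path(folder).rglob("*") if f.is_file()
    ) / 1024

# Quick check on a throwaway directory containing one 2048-byte file
tmp = Path(tempfile.mkdtemp())
(tmp / "blob.bin").write_bytes(b"\0" * 2048)
size = folder_size_kb(tmp)
```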
LMDB gains its efficiency from caching and taking advantage of OS page sizes. You don’t need to understand its inner workings, but note that with larger images, you will end up with significantly more disk usage with LMDB, because images won’t fit on LMDB’s leaf pages, the regular storage location in the tree, and instead you will have many overflow pages. The LMDB bar in the chart above will shoot off the chart. Our 32x32x3 pixel images are relatively small compared to the average images you may use, and they allow for optimal LMDB performance. While we won’t explore it here experimentally, in my own experience with images of 256x256x3 or 512x512x3 pixels, HDF5 is usually slightly more efficient in terms of disk usage than LMDB. This is a good transition into the final section, a qualitative discussion of the differences between the methods. Discussion There are other distinguishing features of LMDB and HDF5 that are worth knowing about, and it’s also important to briefly discuss some of the criticisms of both methods. Several links are included along with the discussion if you want to learn more. Parallel Access A key comparison that we didn’t test in the experiments above is concurrent reads and writes. Often, with such large datasets, you may want to speed up your operation through parallelization. In the majority of cases, you won’t be interested in reading parts of the same image at the same time, but you will want to read multiple images at once. With this definition of concurrency, storing to disk as .png files actually allows for complete concurrency. Nothing prevents you from reading several images at once from different threads, or writing multiple files at once, as long as the image names are different. How about LMDB? There can be multiple readers on an LMDB environment at a time, but only one writer, and writers do not block readers. You can read more about that at the LMDB technology website. 
Multiple applications can access the same LMDB database at the same time, and multiple threads from the same process can also concurrently access the LMDB for reads. This allows for even quicker read times: if you divided all of CIFAR into ten sets, then you could set up ten processes to each read in one set, and it would divide the loading time by ten.

HDF5 also offers parallel I/O, allowing concurrent reads and writes. However, in implementation, a write lock is held and access is sequential, unless you have a parallel file system. There are two main options if you are working on such a system, which are discussed in more depth in this article by the HDF Group on parallel IO. It can get quite complicated, and the simplest option is to intelligently split your dataset into multiple HDF5 files, such that each process can deal with one .h5 file independently of the others.

Documentation

If you Google lmdb, at least in the United Kingdom, the third search result is IMDb, the Internet Movie Database. That’s not what you were looking for! Actually, there is one main source of documentation for the Python binding of LMDB, which is hosted on Read the Docs. While the Python package hasn’t yet reached version 1.0 (the current release is 0.94), it is quite widely used and is considered stable.

As for the LMDB technology itself, there is more detailed documentation at the LMDB technology website, which can feel a bit like learning calculus in second grade unless you start from their Getting Started page.

For HDF5, there is very clear documentation at the h5py docs site, as well as a helpful blog post by Christopher Lovell, which is an excellent overview of how to use the h5py package. The O’Reilly book Python and HDF5 is also a good way to get started.

While neither is as well documented as perhaps a beginner would appreciate, both LMDB and HDF5 have large user communities, so a deeper Google search usually yields helpful results.
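Returning to the parallel-access strategy above: whether you split CIFAR into ten LMDB sets or shard a dataset across several .h5 files, the common step is partitioning the item indices so each process owns one contiguous slice. Here is a small standard-library sketch of that partitioning; the per-shard reader is a placeholder for whatever LMDB or HDF5 reading code you would plug in.

```python
def shard_indices(n_items, n_shards):
    """Partition range(n_items) into n_shards contiguous, near-equal slices."""
    base, extra = divmod(n_items, n_shards)
    shards, start = [], 0
    for i in range(n_shards):
        size = base + (1 if i < extra else 0)
        shards.append(range(start, start + size))
        start += size
    return shards

# Each worker process would then open its own LMDB environment or .h5 file
# and read only its slice, e.g. (read_shard is hypothetical):
#
#   from multiprocessing import Pool
#   with Pool(processes=10) as pool:
#       pool.map(read_shard, shard_indices(50_000, 10))

shards = shard_indices(50_000, 10)  # e.g. CIFAR split ten ways
assert len(shards) == 10 and sum(len(s) for s in shards) == 50_000
```

Contiguous slices (rather than round-robin) are deliberate here: as discussed later, both LMDB and HDF5 tend to read sequential ranges faster than scattered keys.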
A More Critical Look at Implementation

There is no utopia in storage systems, and both LMDB and HDF5 have their share of pitfalls.

A key point to understand about LMDB is that new data is written without overwriting or moving existing data. This is a design decision that allows for the extremely quick reads you witnessed in our experiments, and also guarantees data integrity and reliability without the additional need to keep transaction logs.

Remember, however, that you needed to define the map_size parameter for memory allocation before writing to a new database? This is where LMDB can be a hassle. Suppose you have created an LMDB database, and everything is wonderful. You’ve waited patiently for your enormous dataset to be packed into an LMDB. Then, later down the line, you remember that you need to add new data. Even with the buffer you specified in your map_size, you may well run into the lmdb.MapFullError. Unless you want to rewrite your entire database with an updated map_size, you’ll have to store that new data in a separate LMDB file. Even though one transaction can span multiple LMDB files, having multiple files can still be a pain.

Additionally, some systems have restrictions on how much memory may be claimed at once. In my own experience working with high-performance computing (HPC) systems, this has proved extremely frustrating, and has often made me prefer HDF5 over LMDB.

With both LMDB and HDF5, only the requested item is read into memory at once. With LMDB, key-value pairs are read into memory one by one, while with HDF5, the dataset object can be accessed like a Python array, with indexing dataset[i], ranges dataset[i:j], and other slicing dataset[i:j:interval]. Because of the way the systems are optimized, and depending on your operating system, the order in which you access items can impact performance.
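One mitigation for the lmdb.MapFullError problem described above: the py-lmdb binding exposes Environment.info() (which reports the current map_size) and Environment.set_mapsize(), so a write can be retried with a larger map instead of rebuilding the database. The wrapper below is a generic sketch of that pattern; the helper name, growth factor, and retry cap are my own choices, not part of the lmdb API.

```python
def write_with_resize(env, do_write, map_full_error, growth_factor=2, max_tries=8):
    """Run do_write(), growing the environment's map_size on failure.

    `env` needs lmdb-style .info() -> {"map_size": int} and .set_mapsize(int);
    `map_full_error` is the exception class signalling a full map
    (lmdb.MapFullError with the py-lmdb binding).
    """
    for _ in range(max_tries):
        try:
            return do_write()
        except map_full_error:
            # Double the map and retry the whole write transaction.
            env.set_mapsize(env.info()["map_size"] * growth_factor)
    raise RuntimeError("map_size still too small after retries")
```

With py-lmdb this might be called as `write_with_resize(env, store_batch, lmdb.MapFullError)`, where `store_batch` re-runs the failed transaction; note that resizing a live environment has its own caveats (it must not race with other writers), so check the py-lmdb documentation for your version.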
In my experience, it’s generally true that for LMDB you may get better performance when accessing items sequentially by key (key-value pairs being kept in memory ordered alphanumerically by key), and that for HDF5, accessing large ranges will perform better than reading every element of the dataset one by one:

```python
# Slightly slower
for i in range(len(dataset)):
    # Read the ith value in the dataset, one at a time
    do_something_with(dataset[i])

# This is better
data = dataset[:]
for d in data:
    do_something_with(d)
```

If you are considering a choice of file storage format to write your software around, it would be remiss not to mention Moving away from HDF5 by Cyrille Rossant on the pitfalls of HDF5, and Konrad Hinsen’s response On HDF5 and the future of data management, which shows how some of the pitfalls can be avoided in his own use cases, with many smaller datasets rather than a few enormous ones. Note that a relatively smaller dataset is still several GB in size.

Integration With Other Libraries

If you’re dealing with really large datasets, it’s highly likely that you’ll be doing something significant with them, so it’s worthwhile to consider deep learning libraries and what kind of integration they have with LMDB and HDF5.

First of all, all libraries support reading images from disk as .png files, as long as you convert them into NumPy arrays of the expected format. This holds true for all the methods, and we have already seen above that it is relatively straightforward to read in images as arrays.

Here are several of the most popular deep learning libraries and their LMDB and HDF5 integration:

- Caffe has a stable, well-supported LMDB integration, and it handles the reading step transparently. The LMDB layer can also easily be replaced with an HDF5 database.
- Keras uses the HDF5 format to save and restore models. This implies that TensorFlow can as well.
- TensorFlow has a built-in class LMDBDataset that provides an interface for reading in input data from an LMDB file and can produce iterators and tensors in batches. TensorFlow does not have a built-in class for HDF5, but one can be written that inherits from the Dataset class. I personally use a custom class altogether that is designed for optimal read access based on the way I structure my HDF5 files.
- Theano does not natively support any particular file format or database, but, as previously stated, can use anything as long as it is read in as an N-dimensional array.

While far from comprehensive, this hopefully gives you a feel for the LMDB/HDF5 integration of some key deep learning libraries.

A Few Personal Insights on Storing Images in Python

In my own daily work analyzing terabytes of medical images, I use both LMDB and HDF5, and have learned that, with any storage method, forethought is critical.

Often, models need to be trained using k-fold cross validation, which involves splitting the entire dataset into k sets (k typically being 10), with k models being trained, each using a different set as the test set. This ensures that the model is not overfitting the dataset, or, in other words, is able to make good predictions on unseen data. A standard way to craft a k-set is to put into each one an equal representation of each type of data in the dataset. Thus, saving each k-set into a separate HDF5 dataset maximizes efficiency. Sometimes, a single k-set cannot be loaded into memory at once, so even the ordering of data within a dataset requires some forethought.

With LMDB, I similarly am careful to plan ahead before creating the database(s). There are a few good questions worth asking before you save images:

- How can I save the images such that most of the reads will be sequential?
- What are good keys?
- How can I calculate a good map_size, anticipating potential future changes in the dataset?
- How large can a single transaction be, and how should transactions be subdivided?

Regardless of the storage method, when you’re dealing with large image datasets, a little planning goes a long way.

Conclusion

You’ve made it to the end! You’ve now had a bird’s-eye view of a large topic.

In this article, you’ve been introduced to three ways of storing and accessing lots of images in Python, and perhaps had a chance to play with some of them. All the code for this article is in a Jupyter notebook here or Python script here. Run at your own risk, as a few GB of your disk space will be overtaken by little square images of cars, boats, and so on.

You’ve seen evidence of how various storage methods can drastically affect read and write time, as well as a few pros and cons of the three methods considered in this article. While storing images as .png files may be the most intuitive, there are large performance benefits to considering methods such as HDF5 or LMDB.

Feel free to discuss in the comment section the excellent storage methods not covered in this article, such as LevelDB, Feather, TileDB, Badger, BoltDB, or anything else. There is no perfect storage method, and the best method depends on your specific dataset and use cases.

Further Reading

Here are some references related to the three methods covered in this article:

- Python binding for LMDB
- LMDB documentation: Getting Started
- Python binding for HDF5 (h5py)
- The HDF5 Group
- “Python and HDF5” from O’Reilly
- Pillow

You may also appreciate “An analysis of image storage systems for scalable training of deep neural networks” by Lim, Young, and Patton. That paper covers experiments similar to the ones in this article, but on a much larger scale, considering cold and warm caches as well as other factors.
https://realpython.com/storing-images-in-python/
Many organizations today are choosing to deploy their applications using a microservice architecture, so what exactly is a microservice architecture?

“Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications.” (Chris Richardson)

These microservices are normally deployed on a container engine, such as Kubernetes. Many cloud vendors offer managed Kubernetes services for the deployment of these microservice architectures, including Oracle, which recently released its managed Kubernetes product, Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE).

As the number of deployed microservice applications increases, the need to monitor, manage, and secure these applications becomes more important. Kubernetes provides limited capabilities in these areas; a more robust option, the Istio service mesh, is available. This discussion covers the installation and use of the Istio service mesh on the Oracle Container Engine for Kubernetes.

Let’s first cover the Istio service mesh at the 10,000-foot level. If you desire a much more in-depth understanding of the Istio service mesh, I recommend you visit the Istio website. Following the overview, we’ll cover the installation of Istio on the OKE platform, and finally deploy an application to demonstrate the configurations, dashboards, and features of the service mesh.

“Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code.” (Istio)

The service mesh consists of many moving parts.
One of the key components in the service mesh - and the component that is critical for the mesh to monitor, manage, and secure the microservices - is the sidecar, implemented by Envoy. Since the sidecar is such a key component of the service mesh and the telemetry data collector, let’s briefly digress and gain an understanding of what the sidecar does.

The sidecar pattern gets its name from the sidecar that is attached to a motorcycle. A sidecar application is deployed alongside and attached to each microservice that you have developed and deployed. With the Istio service mesh, the sidecar is an Envoy proxy that mediates all inbound and outbound traffic for all services in the service mesh. Envoy has many built-in features, such as:

- Dynamic service discovery
- Load balancing
- TLS termination
- HTTP/2 and gRPC proxies
- Circuit breakers
- Health checks
- Staged rollouts with %-based traffic split
- Fault injection
- Rich metrics

The Envoy deployment allows Istio to extract signals about traffic behavior as attributes. Istio in turn uses these attributes to enforce policy decisions, and sends them to monitoring systems to provide information about the behavior of the entire mesh.

Starting with the 0.8 release of the service mesh, you can configure Istio to do automatic sidecar injection. However, for the automatic sidecar injection to take place, you must enable it on the application’s namespace. I will discuss this later. Before I get too far ahead of myself, let’s get the service mesh installed in OKE. Then I will show you some of the features and explain why a service mesh is important.

1. If you have not already done so, create an OKE cluster.
2. Install kubectl on your local machine.
3. Install Helm on your local machine.
4. Do a quick upgrade of Tiller (just to be sure you are on the latest release):
   a. $ helm init --upgrade
5. Install and configure OCI-CLI to access OKE from the command line.
6.
Download the kubeconfig so that you can access the OKE cluster from the command line using kubectl.
   a. Use the following command to download the kubeconfig and store it in a local file:

      oci ce cluster create-kubeconfig --cluster-id <cluster ocid> --file kubeconfig

      i. cluster-id: the OCID of your Kubernetes cluster
      ii. file: the name of the file where you want to store the cluster configuration
   b. export KUBECONFIG=<location of the kubeconfig file>
7. Create a Role-Based Access Control policy:
   a. $ kubectl create clusterrolebinding <admin-binding> --clusterrole=cluster-admin --user=<user-OCID>

      i. admin-binding: any string that you want, such as “adminrolebinding”
      ii. user: your user OCID

There are four different methods that can be used to install Istio in OKE. I recommend using Helm charts to do the installation. Helm is a tool to help you manage Kubernetes applications; Helm Charts, as they are referred to, help you define, install, and upgrade complex Kubernetes applications. I suspect the use of Helm charts will be the preferred method going forward, and the Istio documentation also makes that recommendation today.

Once the prerequisites have been completed you can install Istio. You can download Istio by executing the command below:

$ curl -L | sh -

I will cover the installation of Istio using Helm below. Prior to performing the installation, let’s make some changes to the Istio “values.yaml” file. The “values.yaml” file informs Helm which components to install on the OKE platform. The file is located at:

/<istio installation directory>/install/kubernetes/helm/istio

In order to have the components Grafana, Prometheus, Servicegraph, and Jaeger deployed, the “values.yaml” file needs to be modified. For each of the components you want deployed, change the enabled property from “false” to “true”.
servicegraph:
  enabled: true
  replicaCount: 1
  image: servicegraph
  service:
    name: http
    type: ClusterIP
    externalPort: 8088
    internalPort: 8088

You’re now ready to install Istio. If you are using a version of Helm prior to 2.10.0, then you must install Istio’s Custom Resource Definitions (CRDs) via kubectl apply. After command execution you will have to wait a few seconds for the CRDs to be committed to the kube-apiserver:

$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

Once the CRDs have been deployed you can install the Istio service mesh:

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system

The helm install command will configure your cluster to do automatic sidecar injection; in fact, automatic sidecar injection is the default. To verify that your Istio installation was successful, execute the following kubectl command and ensure the expected containers are deployed to your cluster:

$ kubectl get pods -n istio-system

Since “values.yaml” was modified to enable the deployment of Grafana, Prometheus, Servicegraph, and Jaeger, you will see those components deployed as well.

While Istio states there is automatic sidecar injection, there is a caveat: automatic sidecar injection must be enabled per namespace. If you do not enable your namespace for automatic injection, then the sidecar will not be injected into your pods. I do not recommend enabling the default namespace for automatic sidecar injection; however, for this blog we will ignore my own recommendation. If you are wondering why I recommend not setting the default namespace for automatic injection, it is primarily a personal preference, but some components may get deployed to the default namespace and you don’t want the sidecar deployed alongside those components. It would be better to deploy your application to a specified namespace and then set this namespace for automatic injection.
In order to have sidecar injection at deployment you must enable the namespace for your application. To enable the namespace for automatic injection, execute the following command:

$ kubectl label namespace default istio-injection=enabled

You now have the Istio service mesh installed and are ready to begin monitoring, managing, and securing your services. In order to show some of these capabilities, let’s deploy an application, execute the application, and show what is available in the graphs provided out-of-the-box by the service mesh.

The easiest thing to do at this time is to deploy the “bookinfo” application. You can find this application in the samples directory from the Istio download, which was done earlier. Keep in mind that we previously enabled automatic sidecar injection during the installation of Istio and also enabled the default namespace for automatic sidecar injection. Therefore, when you deploy the bookinfo application, an Envoy sidecar proxy is deployed in each pod. Each of the black boxes in the diagram below is an instance of the Envoy proxy sidecar. When the “bookinfo” application is deployed to the Kubernetes cluster, Istio deploys the sidecar in the pod alongside the microservice.

Let’s deploy the bookinfo application:

$ kubectl apply -f /<istio installation directory>/samples/bookinfo/platform/kube/bookinfo.yaml

After the successful deployment, let’s take a look at the pods that were deployed:

$ kubectl get po

The two pods curl-775f9567b5-w7btf and oke-efk-sz-elasticsearch-0 are not part of the bookinfo deployment; these are pods that I had installed on a previous occasion. It is important to note that the “READY” column states 2/2, which means there are two containers in the pod and both are up and running. But wait - the application only deployed one image in the pod. The additional container is the sidecar proxy. Let’s do a describe on the pod to see which containers are in the pod.
$ kubectl describe po productpage-v1-f8c8fb8-gshrl

The name of your product page pod will be slightly different. A snippet of the describe output is shown. Take a look at the information within the Containers section: the data shows two images - the product page and the istio-proxy (Envoy proxy).

The last thing to do is to make the application accessible from outside your Kubernetes cluster. To do that, we need to create an Istio gateway:

$ kubectl apply -f /<istio installation directory>/samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl get gateway
$ kubectl get svc -n istio-system

The output will look as follows:

You can render the application from a browser. The IP address is provided by looking at the istio-ingressgateway’s external IP. Accessing the bookinfo application is as easy as providing the istio-ingressgateway’s external IP followed by the path /productpage. As you can see from the ports, the gateway will be listening on port 80. Continuously refreshing the browser will send traffic to the services. At this point we need to exercise the application so we can generate traffic and demonstrate the features of the dashboards.

When you install Istio with all of the dashboards enabled, there will be four dashboards available, in addition to the standard Kubernetes dashboard. Each dashboard provides its own unique features and will be key for managing and monitoring your Kubernetes cluster. Since each dashboard is a product in its own right, I will not cover each product in depth. To understand the key features of the dashboards, I recommend that you review each product’s documentation page. There are also several books that have been written on many of these products.

The Grafana add-on is a preconfigured instance of Grafana. The base image has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global metrics.
The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend. It provides:

1. A Mesh Summary View: a global summary view of the mesh, showing HTTP/gRPC and TCP workloads in the mesh.
2. Individual Services View: metrics about requests and responses for each individual service within the mesh (HTTP/gRPC and TCP), as well as metrics about client and service workloads for this service.
3. Individual Workloads View: metrics about requests and responses for each individual workload within the mesh (HTTP/gRPC and TCP), as well as metrics about inbound workloads and outbound services for this workload.

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &

Access the dashboard:

The Istio service mesh delivers six Grafana dashboards. It is not possible to cover the dashboards in depth as part of this blog; I will only show snapshots of each of them and leave it to the reader to dive deep into each one.

The Prometheus add-on scrapes several Istio endpoints, including:

2. mixer (istio-mixer.istio-system:9093): all Mixer-specific metrics, used to monitor Mixer itself.
3. envoy (istio-mixer.istio-system:9102): raw stats generated by Envoy (and translated from statsd to Prometheus).

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &

To access the Prometheus dashboard:

There are a number of predefined queries that you can run to investigate your applications. The query shown in the view was the “istio_requests_total” query.

Jaeger is used for monitoring and troubleshooting microservices-based distributed systems, including:

1. Distributed context propagation
2. Distributed transaction monitoring
3. Root cause analysis
4. Service dependency analysis
5.
Performance / latency optimization

$ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 &

To access the Jaeger dashboard: For one of the invocations, click on the “Span” button. This will provide the following view:

Servicegraph is a small app that generates and visualizes graph representations of your Istio service mesh. Servicegraph is dependent on the Prometheus add-on and the standard metrics configuration.

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &

Access the dashboard:

The Kubernetes dashboard is started the typical, old-fashioned way:

$ kubectl proxy &

Usually, to access the Kubernetes dashboard from a browser you would enter. However, this URL is now deprecated. To access the Kubernetes dashboard in OKE, access the following URL from a browser:

As you can see from the many dashboards, a large amount of metric data is captured as the services execute. Since each dashboard captures specific data, you should take the time to investigate each of the dashboards. Understanding each of the dashboards will help you monitor, manage, and investigate any issues that arise during your microservice application processing.

The Istio service mesh provides several features for monitoring, managing, and securing your deployed microservices. As your number of microservices increases, it will become more important to deploy a service mesh such as Istio. The service mesh provides visibility at the service edge but does not provide the ability to “peek” into the application. In order to have a better view of application behavior and for application troubleshooting, tools such as Elasticsearch, Fluentd, and Kibana are some products to consider. There are many others; therefore, I recommend you look at other offerings as well.
There are many open-source software (OSS) products to manage your application. The Istio service mesh is key to doing that and appears to be the front runner among the service mesh tools.
https://www.ateam-oracle.com/istio-on-oke
FmXDisposeListener Class Reference

#include <fmtools.hxx>

Definition at line 130 of file fmtools.hxx.

Member documentation:

- Definition at line 274 of file fmtools.cxx. References setAdapter(). Implemented in DisposeListenerGridBridge. Referenced by FmXDisposeMultiplexer::disposing().
- Definition at line 279 of file fmtools.cxx. References m_aMutex and m_pAdapter. Referenced by FmXDisposeMultiplexer::dispose(), FmXDisposeMultiplexer::disposing(), FmXDisposeMultiplexer::FmXDisposeMultiplexer(), and ~FmXDisposeListener().
- Definition at line 132 of file fmtools.hxx. Referenced by DisposeListenerGridBridge::DisposeListenerGridBridge().
- Definition at line 135 of file fmtools.hxx. Referenced by setAdapter().
- Definition at line 134 of file fmtools.hxx. Referenced by setAdapter().
https://docs.libreoffice.org/svx/html/classFmXDisposeListener.html
import "github.com/pingcap/tidb/util/memory"

Files: action.go, meminfo.go, tracker.go

const (
    // PanicMemoryExceed represents the panic message when out of memory quota.
    PanicMemoryExceed string = "Out Of Memory Quota!"
)

MemTotal returns the total amount of RAM on this system.

MemUsed returns the total used amount of RAM on this system.

type ActionOnExceed interface {
    // Action will be called when memory usage exceeds memory quota by the
    // corresponding Tracker.
    Action(t *Tracker)
    // SetLogHook binds a log hook which will be triggered and log a detailed
    // message for the out-of-memory sql.
    SetLogHook(hook func(uint64))
    // SetFallback sets a fallback action which will be triggered if itself has
    // already been triggered.
    SetFallback(a ActionOnExceed)
}

ActionOnExceed is the action taken when memory usage exceeds memory quota. NOTE: All the implementors should be thread-safe.

LogOnExceed logs a warning only once when memory usage exceeds memory quota.

func (a *LogOnExceed) Action(t *Tracker)
    Action logs a warning only once when memory usage exceeds memory quota.

func (a *LogOnExceed) SetFallback(ActionOnExceed)
    SetFallback sets a fallback action.

func (a *LogOnExceed) SetLogHook(hook func(uint64))
    SetLogHook sets a hook for LogOnExceed.

PanicOnExceed panics when memory usage exceeds memory quota.

func (a *PanicOnExceed) Action(t *Tracker)
    Action panics when memory usage exceeds memory quota.

func (a *PanicOnExceed) SetFallback(ActionOnExceed)
    SetFallback sets a fallback action.

func (a *PanicOnExceed) SetLogHook(hook func(uint64))
    SetLogHook sets a hook for PanicOnExceed.

Tracker is used to track the memory usage during query execution. It contains an optional limit and can be arranged into a tree structure such that the consumption tracked by a Tracker is also tracked by its ancestors. The main idea comes from Apache Impala.

By default, memory consumption is tracked via calls to "Consume()", either to the tracker itself or to one of its descendants.
A typical sequence of calls for a single Tracker is:

1. tracker.SetLabel() / tracker.SetActionOnExceed() / tracker.AttachTo()
2. tracker.Consume() / tracker.ReplaceChild() / tracker.BytesConsumed()

NOTE: We only protect concurrent access to "bytesConsumed" and "children", that is to say:

1. Only "BytesConsumed()", "Consume()" and "AttachTo()" are thread-safe.
2. Other operations on a Tracker tree are not thread-safe.

NewTracker creates a memory tracker:

1. "label" is the label used in the usage string.
2. "bytesLimit <= 0" means no limit.

AttachTo attaches this memory tracker as a child to another Tracker. If it already has a parent, this function will remove it from the old parent. Its consumed memory usage is used to update all its ancestors.

BytesConsumed returns the consumed memory usage value in bytes.

BytesToString converts the memory consumption to a readable string.

CheckBytesLimit checks whether the bytes limit of the tracker is equal to a value. Only used in tests.

Consume is used to consume a memory usage. "bytes" can be a negative value, which means this is a memory release operation. When the memory usage of a tracker exceeds its bytesLimit, the tracker calls its action, and so does each of its ancestors.

func (t *Tracker) FallbackOldAndSetNewAction(a ActionOnExceed)
    FallbackOldAndSetNewAction sets the action when memory usage exceeds bytesLimit, and sets the original action as its fallback.

Label gets the label of a Tracker.

MaxConsumed returns the max number of bytes consumed during execution.

ReplaceChild removes the old child specified in "oldChild" and adds a new child specified in "newChild". The old child's memory consumption will be removed and the new child's memory consumption will be added.

SearchTracker searches for the specific tracker under this tracker.

func (t *Tracker) SetActionOnExceed(a ActionOnExceed)
    SetActionOnExceed sets the action when memory usage exceeds bytesLimit.

SetBytesLimit sets the bytes limit for this tracker. "bytesLimit <= 0" means no limit.
SetLabel sets the label of a Tracker.

String returns the string representation of this Tracker tree.

Package memory imports 9 packages and is imported by 60 packages. Updated 2019-10-13.
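The tree semantics documented above — Consume() updates a tracker and every ancestor, and an action fires when a tracker's limit is exceeded — can be illustrated compactly. The sketch below is a simplified Python model of that documented behavior (no thread safety, no fallback actions), not the Go implementation:

```python
class Tracker:
    """Minimal model of the Tracker tree: consumption propagates to ancestors."""

    def __init__(self, label, bytes_limit=0, on_exceed=None):
        self.label = label
        self.bytes_limit = bytes_limit  # <= 0 means no limit
        self.on_exceed = on_exceed      # called when usage exceeds the limit
        self.parent = None
        self.consumed = 0
        self.max_consumed = 0

    def attach_to(self, parent):
        """Attach as a child; the parent now accounts for our current usage."""
        self.parent = parent
        parent.consume(self.consumed)

    def consume(self, n_bytes):
        """Add usage (negative = release) to this tracker and every ancestor."""
        node = self
        while node is not None:
            node.consumed += n_bytes
            node.max_consumed = max(node.max_consumed, node.consumed)
            if 0 < node.bytes_limit < node.consumed and node.on_exceed:
                node.on_exceed(node)  # analogous to ActionOnExceed.Action
            node = node.parent

root = Tracker("RootExecutor")
child = Tracker("HashJoin", bytes_limit=1 << 20)
child.attach_to(root)
child.consume(4096)
assert root.consumed == 4096  # consumption propagated up the tree
```

The label names ("RootExecutor", "HashJoin") are made up for the example; the point is only the propagation and limit-check shape described in the package documentation.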
https://godoc.org/github.com/pingcap/tidb/util/memory
Problem 2: When I open my project on Linux, I am faced with a "Compiler selection" dialog that says "The defined compiler for W32_Debug cannot be located (ID: msvc8). Please choose the compiler you want to use instead and click OK". Obviously this makes little sense. The compiler setting for that build target is correct, but the build target is not in use at all on Linux! Is there a way to switch off this message? It was not a problem with the older C::B and I don't think build 4711 on Windows complains in the same way for my UX_Debug build target. Any advice?

That's a bug; normally it should shut up about it, and during the build step tell you that the compiler is not available/installed and skip that target. I have similar projects with every target being built with a different compiler (a total of 8 different compilers). I will investigate in the future. The thing is, this happens only on Linux; on Windows it's no problem (there, when one of my compilers is missing, it just skips that target).

At last I found out what caused the crashes on close workspace I was still having. I experimented with a new workspace, with 2 new projects. In the projects I gradually added the files of my old projects, until the new workspace crashed on close. The file that caused the crash is a class definition file, and the class in this file has the same name as a class in the library. The class in the library is in another namespace, so this should be no problem. It seems to be a problem for the code completion, causing the crashes. I renamed the class in the app project, and the problems have gone!

wobien
http://forums.codeblocks.org/index.php?topic=7428.msg56435
CC-MAIN-2019-47
en
refinedweb
marble #include <GeoSceneMercatorTileProjection.h> Detailed Description Converts the x and y indices of tiles to and from geo coordinates. For tiles of maps in Mercator projection. Tiles do have the same width and the same height per zoomlevel. The number of tiles per dimension is twice that of the previous lower zoomlevel. The indexing is done in x dimension eastwards, with the first tiles beginning at -180 degree and an x value of 0 and the last tiles ending at +180 degree, in y dimension southwards with the first tiles beginning at +85.05113 degree and a y value of 0 and the last tiles ending at -85.05113 degree. NOTE: The method tileIndexes() handles any latitude value >= +85.0 degree as exactly +85.0 degree and any latitude value <= -85.0 as exactly -85.0 degree. So for higher zoomlevels the outermost tiles will be masked by that and not included in any results. Definition at line 45 of file GeoSceneMercatorTileProjection.h. Constructor & Destructor Documentation Construct a new GeoSceneMercatorTileProjection. Definition at line 31 of file GeoSceneMercatorTileProjection.cpp. Definition at line 36 of file GeoSceneMercatorTileProjection.cpp. Member Function Documentation Implements Marble::GeoSceneAbstractTileProjection. Definition at line 155 of file GeoSceneMercatorTileProjection.cpp. Implements Marble::GeoSceneAbstractTileProjection. Definition at line 139 of file GeoSceneMercatorTileProjection.cpp. Implements Marble::GeoSceneAbstractTileProjection. Definition at line 40 of file GeoSceneMercatorTileProjection.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2019 The KDE developers. Generated on Tue Nov 12 2019 04:27:29 by doxygen 1.8.7, written by Dimitri van Heesch, © 1997-2006. KDE's Doxygen guidelines are available online.
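The indexing scheme described above matches the common web-Mercator tiling convention. As a rough illustration (not Marble's actual implementation), the conversion from geo coordinates to tile indices, including the ±85.0 degree clamping the NOTE describes, can be sketched in Python:

```python
import math

# Latitude clamp described in the NOTE above: tileIndexes() treats any
# latitude beyond +/-85.0 degrees as exactly +/-85.0 degrees.
CLAMP_LAT = 85.0

def tile_indexes(lon_deg, lat_deg, zoom):
    """Return (x, y) tile indices for geo coordinates at a given zoomlevel.

    x runs eastwards from lon = -180 (x = 0); y runs southwards from the
    Mercator cutoff at lat = +85.05113 (y = 0), as described above.
    """
    lat_deg = max(-CLAMP_LAT, min(CLAMP_LAT, lat_deg))
    n = 2 ** zoom  # tiles per dimension double with each zoomlevel
    x = int((lon_deg + 180.0) / 360.0 * n)
    # Mercator y: asinh(tan(lat)) spans -pi..+pi between the cutoff latitudes
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    # keep the lon = +180 / southern-edge cases inside the valid index range
    return min(x, n - 1), min(y, n - 1)
```

For example, tile_indexes(0.0, 0.0, 1) yields (1, 1): longitude 0 lies in the eastern half and latitude 0 in the southern half of the four tiles at zoomlevel 1.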
https://api.kde.org/4.x-api/kdeedu-apidocs/marble/html/classMarble_1_1GeoSceneMercatorTileProjection.html
CC-MAIN-2019-47
en
refinedweb
Uniform Access Principle is a programming concept first introduced by Bertrand Meyer, which states that "All services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation." The principle simply means that the notation used to access a feature of a class shouldn't differ depending on whether it's an attribute or a method. For example, if you have an object Person and you want to find its age, you should use the same notation whether the age is a stored field or a computed value. The client should not know or care whether the age is calculated or stored. This gives the Person object the flexibility to change between the two, which is an unnecessary concern from the client's side. Implementation of UAP (Uniform Access Principle) in Scala Scala supports this principle by allowing parentheses to be omitted at call sites of parameterless functions. As a result, a parameterless function definition can be changed to a val, or vice versa, without affecting client code. Accessing methods and variables looks the same in Scala. In the example above, the methods and variables of the class Runnable can be altered without affecting the client code that calls them; the notation used to access them is the same for both. Java, however, does not support this principle: accessing the length of an array is different from accessing the length of a string. We cannot access both in the same way because the array length is a variable while String.length() is actually a method inside the String class. Scala suffers from a namespace collision problem because of how the principle works: code that compiles fine in Java can lead to a compilation error in Scala, since methods and values share a single namespace and the compiler rejects the resulting ambiguity at the language level. This principle provides a few advantages to Scala developers and the language itself: - Leads to better design patterns. - The client side remains unaffected, and the code logic can be altered easily. - Refactoring of code becomes easier. - The Scala Collections API benefits from this principle. - It leads to less confusion: in the case of array.length or string.length, both are accessed in the same way. - Writing unit tests becomes easier. Hope you find this blog interesting. Thanks for reading.
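Although the post's examples are in Scala, the same principle can be sketched in Python, where a stored attribute can later be replaced by a computed property without changing client code. The class and field names below are assumed for illustration:

```python
class PersonStored:
    """Age kept as a plain stored field."""
    def __init__(self, age):
        self.age = age


class PersonComputed:
    """Age obtained through computation, yet accessed with the same notation."""
    def __init__(self, birth_year, current_year):
        self._birth_year = birth_year
        self._current_year = current_year

    @property
    def age(self):
        # computed rather than stored; the client cannot tell the difference
        return self._current_year - self._birth_year


def describe(person):
    # client code: identical for both implementations
    return f"age is {person.age}"
```

describe(PersonStored(30)) and describe(PersonComputed(1990, 2020)) both return "age is 30"; swapping one implementation for the other never forces the caller to add or drop parentheses.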
https://blog.knoldus.com/what-is-uniform-access-principle/
CC-MAIN-2019-47
en
refinedweb
In my previous posts, I introduced you to Node.js and walked through a bit of its codebase. Now I want to get a simple, but non-trivial Node.js application running. My biggest problem with Node.js so far has been the lack of substantial examples: if I see one more Hello World or Echo Server, I'll flip my lid (side note: I found the same thing for Ruby's EventMachine so I created my evented repo). By far the best resource I've found for learning Node.js has been DailyJS, highly recommended! So I've spent the last few weeks building Chrono.js, an application metrics server. Chrono is still under development so it's really not appropriate for general usage, but it makes for a decent example app. Here's a basic request which fetches data from MongoDB and renders it via jqtpl:

app.get('/', function(req, res) {
  db.open(function(err, ignored) {
    if (err) console.log(err);
    db.collectionNames(function(err, names) {
      if (err) console.log(err);
      db.close();
      pro = _(names).chain()
        .pluck('name')
        .map(function(x) { return x.substr(x.indexOf('.') + 1); })
        .reject(function(x) { return (x.indexOf('system.') == 0); })
        .value();
      res.render('index.html.jqtpl', { locals: { metrics: pro } });
    });
  });
});

We are scanning MongoDB for the set of metric collections and displaying them in the selector so the user can switch between them. See the main source file for more examples about parameter handling, headers, JSON, etc. You will need MongoDB installed and running locally. You should already have Node.js and npm installed from Part I. Let's install the JavaScript libraries required and Chrono itself:

npm install express expresso jqtpl mongodb underscore tobi
git clone git://github.com/mperham/chrono.js.git

Express is a lightweight application server, similar to Sinatra. jqtpl is jQuery Templates, which allows us to dynamically render HTML. mongodb is the MongoDB client driver for Node.js. expresso is a TDD framework and tobi is an HTTP functional testing package.
Finally, underscore is a great little utility library which provides JavaScript with a much more sane functional programming API. You can run the tests:

expresso

or run the Chrono.js server:

node chrono.js

Click to load in some fake data and visit (select 'registrations') to see the results, graphed in living color for you! Expresso gives us a simple DSL for testing our endpoints, but I found it to be lacking basic setup/teardown functionality, which I had to roll myself in order to insert some test data. I would have preferred to use vows.js but it appears to be incompatible with tobi. Check out the Chrono.js test suite for what I came up with; here's a small sample:

exports['POST /metrics'] = function() {
  assert.response(app, {
    url: '/metrics',
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    data: JSON.stringify({ k: 'registrations', v: 4, at: parseInt(Number(new Date())/1000) })
  }, {
    status: 201,
    headers: { 'Content-Type': 'text/html; charset=utf8' },
    body: ''
  });
}

With expresso, we export the set of functions to run from our test module. Expresso runs them all in parallel and collects the results. The parallelism means that you must be careful with any test data you create. Since MongoDB doesn't support transactions, we can't use transactions to isolate each of our tests (e.g. see Rails's transactional fixtures), so you need to be careful about the data created or deleted by each test and how you assert the current state. While I've gotten much better in the last week, I'll admit I'm still uncomfortable with Node.js. Its asynchronous semantics mean that your code runs as part of a chain of callbacks; knowing when, where and how that chain works is still difficult for me to understand. Frequently you'll just get a black screen and a process that won't exit, or an error message that may or may not be related to the actual bug. That said, I still remember being very frustrated with Ruby, its frequent use of "magic" and poor documentation (e.g.
navigating this still befuddles me). I overcame those issues to be very comfortable with Ruby in general; I hope more time with Node.js will give me the same relief. Interested in a Career at Carbon Five? Check out our job openings.
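The callback chain the author struggles with (open, then query, then render, each step running only inside the previous step's callback) can be mimicked outside JavaScript. Here is a toy Python sketch with hypothetical names and no real database, mirroring the control flow of the app.get handler shown earlier:

```python
def open_db(callback):
    # pretend to connect asynchronously, then hand control to the callback
    callback(None, "connection")

def collection_names(callback):
    # pretend to fetch raw collection names, namespaced like MongoDB's
    callback(None, [{"name": "chrono.registrations"},
                    {"name": "chrono.system.indexes"}])

def handle_request(render):
    # each step only runs inside the previous step's callback
    def on_open(err, _conn):
        if err:
            raise err
        def on_names(err, names):
            if err:
                raise err
            # strip the "db." prefix and drop internal system.* collections,
            # like the underscore chain in the JavaScript handler
            metrics = [n["name"].split(".", 1)[1] for n in names]
            metrics = [m for m in metrics if not m.startswith("system.")]
            render(metrics)
        collection_names(on_names)
    open_db(on_open)
```

handle_request(print) would print ['registrations']. Even in this synchronous toy, following where control goes next requires reading inside-out, which is exactly the difficulty described above.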
https://blog.carbonfive.com/node-js-part-iii-full-stack-application/
CC-MAIN-2021-25
en
refinedweb
Closed Bug 1413976 Opened 4 years ago Closed 4 years ago Build docker images and toolchains (or optimize them) rather than trying to use indexes. Categories (Thunderbird :: Build Config, enhancement) Tracking (Not tracked) Thunderbird 59.0 People (Reporter: tomprince, Assigned: tomprince) References Details Attachments (1 file, 2 obsolete files) Rather than having to add support for looking up toolchain dependencies by index and dealing with things like Bug 1408574, have Thunderbird's taskgraph include building images and toolchains (which should be optimized away most of the time anyway). My one concern is that we currently use the same namespace as thunderbird. However, since we aren't defining any of these tasks, and are just referencing the upstream ones, we should never actually build these, and even if we did, the job definition will be identical to the one firefox would define anyway. Comment on attachment 8924614 [details] Bug 1413976: Support defining toolchain and docker-image tasks by reference to mozilla-centrals definitions; ::: commit-message-f76b3:1 (Diff revision 1) > +Bug 1413976: Support defining toolchains tasks by reference to mozilla-centrals definitions; r?dustin This is pretty clever! :) ::: build/virtualenv_packages.txt:1 (Diff revision 1) > +comm.pth:comm/taskcluster included by mistake? ::: taskcluster/comm_taskgraph/__init__.py:15 (Diff revision 1) > + > + if kind == 'toolchain': > + if job['run'].get('toolchain-alias'): > + aliases.add(job['run'].get('toolchain-alias')) > + > + return aliases This is a little icky, with toolchain-specific stuff in a general loader. But makes sense for now. Attachment #8924614 - Flags: review?(dustin) → review+ Comment on attachment 8924613 [details] Bug 1413976: Build docker images, rather than using indexed images; Why not do image tasks by reference, too?
Attachment #8924613 - Flags: review?(dustin) → review+ Comment on attachment 8924614 [details] Bug 1413976: Support defining toolchain and docker-image tasks by reference to mozilla-centrals definitions; > included by mistake? Nope. This is how the loader gets onto the path. Comment on attachment 8924613 [details] Bug 1413976: Build docker images, rather than using indexed images; Mostly because this was written before I wrote the reference code, but also partly because the kind definition was so easy. But it probably does make sense to change this too. As discussed on IRC, I'll land this when bug 1415619 merges. Pushed by mozilla@jorgk.com: Support defining toolchain and docker-image tasks by reference to mozilla-centrals definitions; r=dustin Status: NEW → RESOLVED Closed: 4 years ago Resolution: --- → FIXED Target Milestone: --- → Thunderbird 59.0 Pushed by mozilla@hocat.ca: Update sync exceptions for having our own virtualenv packages; r=me
https://bugzilla.mozilla.org/show_bug.cgi?id=1413976
CC-MAIN-2021-25
en
refinedweb
Image classification of Bird species using Keras in Python In this article, image classification for huge datasets is clearly explained, step by step, with the help of a bird species dataset. The major techniques used in this project are Data Augmentation and Transfer Learning, for improving the quality of our model. The VGG16 pre-trained model for transfer learning is a very efficient open-source model. It consists of various convolutional layers followed by pooling layers. Pooling layers are responsible for the narrowing of the layers. A few of the bird species in the dataset are the African Firefinch, Albatross, American coot, Anhinga, Bald Eagle, Bird of paradise, common loon, eastern bluebird, flamingo, golden ibis, hornbill, Javan magpie, killdeer, king vulture, northern jacana, pelican, puffin, ostrich, robin, roadrunner, sand martin, etc. A few specifications of this dataset: - A total of 31316 training images, 1125 test images (5 per species), and 1125 validation images (5 per species). - All images are 224 X 224 X 3 color images in jpg format (thus, no formatting from our side is required). - Images gathered from internet searches by species name. We are going to use the dataset for the classification of bird species with the help of the Keras TensorFlow deep learning API in Python. This is an image classification task where we will classify different species of birds. Note: it is always better to preprocess your dataset first and then feed it to the learning algorithm; otherwise, the preprocessing will run again on every epoch. Google Colaboratory is the preferred medium for machine learning algorithms because it supports free cloud service and free GPU service. Click here to go to the Google Colaboratory notebook. To enable GPU service, follow these steps after entering the Colab page: Go to Edit -> Notebook settings -> Enable GPU. Thank you and stay tuned!!!
INSTALLING DEPENDENCIES The following are the Python libraries this project depends on. Google Colaboratory has all the dependencies for this project already installed on the server, so if Google Colaboratory is the platform used for coding, skip this code and move on directly. !pip install tensorflow !pip install keras !pip install numpy IMPORTING THE REQUIRED LIBRARIES The Python libraries are imported depending on the needs of this project. import keras import numpy as np from keras.preprocessing.image import ImageDataGenerator from keras.applications.vgg16 import preprocess_input from google.colab import files Using TensorFlow backend. Keras already comes with TensorFlow. This is the deep learning API that is going to perform the main classification task. UPLOADING DATASET Datasets are procured from the Kaggle website, which is a large data science community with powerful tools and open-source datasets. The code activates the API token and downloads the dataset directly from the Kaggle website into the Colab notebook. Another method to use the dataset is as follows: - Press the download (1 GB) button on the web page. - Now, click here to go to Google Colab. - Press Upload to session storage and upload the downloaded dataset. files.upload() !mkdir -p ~/.kaggle !cp kaggle.json ~/.kaggle !chmod 600 ~/.kaggle/kaggle.json !kaggle datasets download -d gpiosenka/100-bird-species Saving kaggle.json to kaggle.json Downloading 100-bird-species.zip to /content 99% 1.27G/1.28G [00:21<00:00, 72.8MB/s] 100% 1.28G/1.28G [00:21<00:00, 63.2MB/s] EXTRACTING THE ZIP FILE Kaggle stores the dataset in zip format to keep all the related files together, thus making moving files from one place to another easier. import zipfile local_zip = '/content/100-bird-species.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/content/') zip_ref.close() CREATING GENERATORS Generators load the dataset while training deep learning models.
Data augmentation is a technique of artificially creating new data from existing training data. It helps in: - Increasing the size of the dataset. - Introducing variability in the dataset, without the use of additional data. train_datagen = ImageDataGenerator( preprocessing_function=preprocess_input, shear_range=0.1, zoom_range=0.1, horizontal_flip=True) train_generator = train_datagen.flow_from_directory('/content/train',target_size=(224, 224),batch_size=64,class_mode='categorical') #Creating generator for Validation DataSet val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) val_generator = val_datagen.flow_from_directory('/content/valid',target_size=(224, 224),batch_size=32,class_mode='categorical') #Creating generator for Test DataSet test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) test_generator = test_datagen.flow_from_directory('/content/test',target_size=(224, 224),batch_size=32,class_mode='categorical') Found 29544 images belonging to 215 classes. Found 1075 images belonging to 215 classes. Found 1075 images belonging to 215 classes. PRE-TRAINED MODEL The VGG16 model loads weights pre-trained on ImageNet. The VGG16 network's bottom layers, which are closer to the image, are wide, whereas the top layers are deep. base_model=keras.applications.VGG16(include_top=False, weights="imagenet", input_shape=(224,224,3)) Downloading data from 58892288/58889256 [==============================] - 3s 0us/step FREEZING BASE LAYER Freeze all layers in the base model. The advantages of freezing layers are: - Reduces the model training time. - Backpropagates and updates weights only for a couple of layers, thus saving computational time. base_model.trainable = False ADDING LAYERS In this step, new layers are architected on top of the last layer of the pre-trained model. The dropout layer prevents the model from overfitting. The 215 in the last dense layer indicates the total number of possible classes in the dataset.
from keras.models import Sequential from keras.layers import Dense,Flatten,Dropout model=Sequential() model.add(base_model) model.add(Flatten()) model.add(Dense(2048,activation='relu',kernel_initializer='he_normal')) model.add(Dropout(0.35)) model.add(Dense(2048,activation='relu',kernel_initializer='he_normal')) model.add(Dropout(0.35)) model.add(Dense(215,activation='softmax',kernel_initializer='glorot_normal')) SUMMARY OF THE MODEL The model summary shows the layers and their specifics, i.e. the architecture of the neural network, thus making the network easier to understand. It displays all the layers, including the pre-trained layers and the new layers added above. model.summary() Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vgg16 (Model) (None, 7, 7, 512) 14714688 _________________________________________________________________ flatten_2 (Flatten) (None, 25088) 0 _________________________________________________________________ dense_4 (Dense) (None, 2048) 51382272 _________________________________________________________________ dropout_3 (Dropout) (None, 2048) 0 _________________________________________________________________ dense_5 (Dense) (None, 2048) 4196352 _________________________________________________________________ dropout_4 (Dropout) (None, 2048) 0 _________________________________________________________________ dense_6 (Dense) (None, 215) 440535 ================================================================= Total params: 70,733,847 Trainable params: 56,019,159 Non-trainable params: 14,714,688 _________________________________________________________ COMPILATION Compile defines the loss function, the metrics, the learning rate, and the optimizer.
The parameters defining the compile function are: - The loss function specifies how well the machine learns from a given algorithm with the given data. Binary cross-entropy is for multi-label classification, whereas categorical cross-entropy is for multi-class classification where each example belongs to a single class; categorical cross-entropy is therefore the appropriate choice here. - The learning rate is the tuning parameter in an optimization algorithm. It determines the size of the step at each iteration while moving toward a minimum of the loss function. Adjust the learning rate during training and check for optimal solutions. - Optimizers are algorithms or methods that change the attributes of the neural network, such as weights and learning rates, in order to reduce the losses. They help to get results faster. model.compile(optimizer=keras.optimizers.Adam(1e-4),loss='categorical_crossentropy',metrics=['accuracy']) TRAINING The model.fit / model.fit_generator call does all the training for the model using various parameters, which include the number of epochs, multiprocessing workers, batch size, etc.
history=model.fit(train_generator,epochs=40,validation_data=val_generator,workers=10,use_multiprocessing=True) Epoch 1/40 462/462 [==============================] - 397s 859ms/step - loss: 9.0988 - accuracy: 0.0749 - val_loss: 2.8479 - val_accuracy: 0.3405 Epoch 2/40 462/462 [==============================] - 361s 780ms/step - loss: 3.7011 - accuracy: 0.2908 - val_loss: 1.3898 - val_accuracy: 0.6214 Epoch 3/40 462/462 [==============================] - 365s 791ms/step - loss: 2.5986 - accuracy: 0.4805 - val_loss: 1.2391 - val_accuracy: 0.7647 Epoch 4/40 462/462 [==============================] - 364s 787ms/step - loss: 2.0046 - accuracy: 0.5929 - val_loss: 0.8035 - val_accuracy: 0.8140 Epoch 5/40 462/462 [==============================] - 363s 786ms/step - loss: 1.6436 - accuracy: 0.6625 - val_loss: 0.5808 - val_accuracy: 0.8493 Epoch 6/40 462/462 [==============================] - 365s 789ms/step - loss: 1.4380 - accuracy: 0.7080 - val_loss: 0.2312 - val_accuracy: 0.8781 Epoch 7/40 462/462 [==============================] - 367s 793ms/step - loss: 1.2481 - accuracy: 0.7436 - val_loss: 0.1301 - val_accuracy: 0.8930 Epoch 8/40 462/462 [==============================] - 367s 795ms/step - loss: 1.2016 - accuracy: 0.7653 - val_loss: 0.5041 - val_accuracy: 0.8977 Epoch 9/40 462/462 [==============================] - 364s 788ms/step - loss: 1.0705 - accuracy: 0.7850 - val_loss: 0.2209 - val_accuracy: 0.9060 Epoch 10/40 462/462 [==============================] - 363s 786ms/step - loss: 1.0148 - accuracy: 0.8008 - val_loss: 0.1227 - val_accuracy: 0.9153 Epoch 11/40 462/462 [==============================] - 366s 793ms/step - loss: 0.9520 - accuracy: 0.8155 - val_loss: 0.0078 - val_accuracy: 0.9144 Epoch 12/40 462/462 [==============================] - 370s 801ms/step - loss: 0.8859 - accuracy: 0.8280 - val_loss: 0.2111 - val_accuracy: 0.9181 Epoch 13/40 462/462 [==============================] - 369s 798ms/step - loss: 0.8242 - accuracy: 0.8424 - val_loss: 0.0025 - 
val_accuracy: 0.9172 Epoch 14/40 462/462 [==============================] - 370s 801ms/step - loss: 0.7976 - accuracy: 0.8503 - val_loss: 0.3693 - val_accuracy: 0.9293 Epoch 15/40 462/462 [==============================] - 370s 801ms/step - loss: 0.7753 - accuracy: 0.8587 - val_loss: 0.3846 - val_accuracy: 0.9191 Epoch 16/40 462/462 [==============================] - 369s 800ms/step - loss: 0.7194 - accuracy: 0.8680 - val_loss: 0.6372 - val_accuracy: 0.9274 Epoch 17/40 462/462 [==============================] - 369s 798ms/step - loss: 0.7251 - accuracy: 0.8702 - val_loss: 0.4891 - val_accuracy: 0.9340 Epoch 18/40 462/462 [==============================] - 369s 800ms/step - loss: 0.6661 - accuracy: 0.8784 - val_loss: 0.0439 - val_accuracy: 0.9284 Epoch 19/40 462/462 [==============================] - 381s 826ms/step - loss: 0.6404 - accuracy: 0.8857 - val_loss: 0.2181 - val_accuracy: 0.9247 Epoch 20/40 462/462 [==============================] - 382s 827ms/step - loss: 0.6016 - accuracy: 0.8938 - val_loss: 0.0015 - val_accuracy: 0.9247 Epoch 21/40 462/462 [==============================] - 381s 824ms/step - loss: 0.6419 - accuracy: 0.8917 - val_loss: 0.4428 - val_accuracy: 0.9284 Epoch 22/40 462/462 [==============================] - 370s 802ms/step - loss: 0.5791 - accuracy: 0.8995 - val_loss: 0.4855 - val_accuracy: 0.9377 Epoch 23/40 462/462 [==============================] - 370s 801ms/step - loss: 0.5506 - accuracy: 0.9033 - val_loss: 0.0011 - val_accuracy: 0.9367 Epoch 24/40 462/462 [==============================] - 374s 809ms/step - loss: 0.5470 - accuracy: 0.9063 - val_loss: 0.0406 - val_accuracy: 0.9414 Epoch 25/40 462/462 [==============================] - 373s 808ms/step - loss: 0.5218 - accuracy: 0.9119 - val_loss: 3.8196e-04 - val_accuracy: 0.9367 Epoch 26/40 462/462 [==============================] - 372s 804ms/step - loss: 0.5487 - accuracy: 0.9087 - val_loss: 0.0682 - val_accuracy: 0.9488 Epoch 27/40 462/462 [==============================] - 372s 
805ms/step - loss: 0.5054 - accuracy: 0.9155 - val_loss: 0.4439 - val_accuracy: 0.9386 Epoch 28/40 462/462 [==============================] - 372s 805ms/step - loss: 0.5257 - accuracy: 0.9180 - val_loss: 0.1204 - val_accuracy: 0.9442 Epoch 29/40 462/462 [==============================] - 371s 803ms/step - loss: 0.4760 - accuracy: 0.9224 - val_loss: 0.0936 - val_accuracy: 0.9395 Epoch 30/40 462/462 [==============================] - 368s 798ms/step - loss: 0.4884 - accuracy: 0.9214 - val_loss: 0.0071 - val_accuracy: 0.9330 Epoch 31/40 462/462 [==============================] - 367s 795ms/step - loss: 0.4349 - accuracy: 0.9284 - val_loss: 2.4745e-04 - val_accuracy: 0.9451 Epoch 32/40 462/462 [==============================] - 372s 805ms/step - loss: 0.4470 - accuracy: 0.9296 - val_loss: 3.8900e-07 - val_accuracy: 0.9423 Epoch 33/40 462/462 [==============================] - 372s 805ms/step - loss: 0.4205 - accuracy: 0.9332 - val_loss: 0.2065 - val_accuracy: 0.9451 Epoch 34/40 462/462 [==============================] - 368s 795ms/step - loss: 0.4177 - accuracy: 0.9347 - val_loss: 0.0225 - val_accuracy: 0.9526 Epoch 35/40 462/462 [==============================] - 363s 786ms/step - loss: 0.3954 - accuracy: 0.9351 - val_loss: 0.9441 - val_accuracy: 0.9442 Epoch 36/40 462/462 [==============================] - 367s 793ms/step - loss: 0.4037 - accuracy: 0.9355 - val_loss: 0.1522 - val_accuracy: 0.9414 Epoch 37/40 462/462 [==============================] - 367s 795ms/step - loss: 0.3815 - accuracy: 0.9394 - val_loss: 1.3292e-04 - val_accuracy: 0.9488 Epoch 38/40 462/462 [==============================] - 371s 803ms/step - loss: 0.4154 - accuracy: 0.9369 - val_loss: 0.1124 - val_accuracy: 0.9423 Epoch 39/40 462/462 [==============================] - 374s 810ms/step - loss: 0.3617 - accuracy: 0.9415 - val_loss: 0.5891 - val_accuracy: 0.9507 Epoch 40/40 462/462 [==============================] - 372s 806ms/step - loss: 0.3659 - accuracy: 0.9434 - val_loss: 0.0092 - 
val_accuracy: 0.9507 The 40th epoch is the best in terms of training accuracy and validation loss. It has a training accuracy of 94.34%, a validation loss of 0.0092, and a validation accuracy of 95.07%, which is considered a well-trained model. More of the analysis can be studied with the help of visualization using the Matplotlib library. VISUALIZATION Visualization is a technique that makes sense of the data being poured out of the model, thus enabling an informed decision about the changes that need to be made to the parameters or hyperparameters that affect the machine learning model. import matplotlib.pyplot as plt #Loss plt.plot(history.history['loss'],label='loss') plt.plot(history.history['val_loss'],label='val_loss') plt.legend() plt.show() #Accuracy plt.plot(history.history['accuracy'],label='acc') plt.plot(history.history['val_accuracy'],label='val_acc') plt.legend() plt.show() SAVING MODEL Saving the model is one of the vital steps in machine learning, as the model can later be loaded from the local machine. To load the saved model, the keras.models.load_model function can be used in the workspace. model.save("/content/drive/My Drive/yolov3/birds.h5") EVALUATION The evaluate function predicts the output for the given input, thus bringing in a clear understanding of our trained model. It then computes the metrics specified in the compile function and returns the computed metric values as the output. model.evaluate(test_generator,use_multiprocessing=True,workers=10) 34/34 [==============================] - 12s 358ms/step [8.5635492723668e-06, 0.9655814170837402] FINAL THOUGHTS It's no secret that machine learning is the future, and gaining an understanding of it might determine whether you will succeed in it. In this article, we have discussed in detail various methods used in training models, including transfer learning and data augmentation. With the power of deep learning algorithms, we can create value on top of these huge datasets (31,316 images, to be precise).
Here, I tried to give the readers a very clear understanding, with an example, of how to train on a huge bird species dataset and classify the species using Keras in Python. For more information about the basics of Keras, feel free to refer to the Keras documentation. And for learning more about such projects, view the valueml blog page. If you have any queries regarding the article, feel free to drop a line.
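The choice of categorical cross-entropy made in the compilation step can be made concrete with a tiny hand computation. A sketch in pure Python with made-up probabilities (this is the textbook formula, not Keras internals):

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy for one sample: -sum(t * log(p)) over all classes.

    With a one-hot target this reduces to -log of the probability the
    model assigned to the correct class.
    """
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

# Hypothetical 3-class example where the true class is index 1.
target = [0.0, 1.0, 0.0]
confident = [0.1, 0.8, 0.1]  # most probability on the correct class
unsure = [0.4, 0.3, 0.3]     # probability spread across classes

# The confident prediction incurs a much smaller loss (-log 0.8 vs. -log 0.3),
# which is exactly the pressure that drives training toward sharper outputs.
```

This also shows why accuracy alone can hide progress: both predictions above pick the right class, but the loss still distinguishes them.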
https://valueml.com/image-classification-of-bird-species-using-keras-in-python/
CC-MAIN-2021-25
en
refinedweb
Heart Attack Detection Using TensorFlow | Python In this post, we will predict heart attacks using deep neural networks in Python with the help of TensorFlow and the Keras deep learning API. To achieve this goal, I am going to use an open-source dataset, and I will create a deep neural network model with the help of the Keras deep learning API. You can download the dataset from the link: Dataset. Download the dataset and look at it carefully; you will find that it contains a categorical target variable in the form of 0's and 1's. Now let's move forward and implement it with the help of TensorFlow and the Keras deep learning API. NOTE: Please install all the required libraries on your system if possible; otherwise, please follow the tutorial with me using Google Colab's runtime environment. import numpy as np import pandas as pd %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras In the above two code snippets, you can see that I am importing the required libraries. Two important ones are Keras and TensorFlow, which are going to play the main role in building our model for heart attack detection. Let's move forward and load our dataset. df=pd.read_csv("/content/drive/My Drive/Internship/heart.csv") df.tail() OUTPUT: The above code loads the dataset into our notebook; I am using tail() to look at the dataset from the bottom. df.head() OUTPUT: If you want to see the dataset from the top, you can use head(), and it will show you the dataset from the top. Now we will get information about our dataset; for this purpose we will use info(). df.info() OUTPUT: df.describe() The above code describes the dataset; for this purpose I am using describe(). You can verify the output below. Your small task is to look at the dataset's statistics shown below, such as the mean, std, min, and max values.
OUTPUT: Now moving forward to look at the correlation matrix of our dataset. mpl.rcParams['figure.figsize'] = 20, 14 plt.matshow(df.corr()) plt.yticks(np.arange(df.shape[1]), df.columns) plt.xticks(np.arange(df.shape[1]), df.columns) plt.colorbar() OUTPUT: You can visualize the correlation matrix of the given dataset. df.hist() The above piece of code plots the histograms of our dataset. You can verify the output below. OUTPUT: Hope you are enjoying and following the tutorial with me. Now we will look at our dataset more closely and in a precise manner. dataset=df mpl.rcParams['figure.figsize'] = 8,6 plt.bar(dataset['target'].unique(), dataset['target'].value_counts(), color = ['pink', 'green']) plt.xticks([0, 1]) plt.xlabel('Target Classes') plt.ylabel('Count') plt.title('Count of each Target Class') OUTPUT: You can visualize and count the target classes of the given dataset. from sklearn.preprocessing import StandardScaler df = pd.get_dummies(df, columns = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal']) standardScaler = StandardScaler() columns_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak'] df[columns_scale] = standardScaler.fit_transform(df[columns_scale]) df.head() OUTPUT: As we have seen in our dataset, there are lots of categorical values like 0's and 1's among our input features, so it is always a good idea to create dummy variables for those categorical variables. For this purpose, you can use a function provided by pandas; as you can see in the above code, I am using pd.get_dummies() to get dummy variables for each categorical input feature. If you have followed the last tutorial on diabetes prediction, you must know that we also have to convert our dataset into standard scaler format. And as I have told you, the standard scaler removes the mean and scales each feature to unit variance.
So in the above piece of code I am doing two things: first creating the dummy variables, and then converting our input features into standard scaled format. Moving forward to the next important step. from sklearn.model_selection import train_test_split y = df['target'] X = df.drop(['target'], axis = 1) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 0) In the above code, I am splitting my dataset into a train and a test set using the utility provided by sklearn. np.random.seed(42) tf.random.set_seed(42) As you can see above, I am seeding the pseudo-random number generators of both NumPy and TensorFlow so that the results are reproducible. Let's collect the shape of our input features and create our model. X_train.shape OUTPUT: (203, 30) from keras.models import Sequential from keras.layers import Dense, Dropout model = Sequential() model.add(Dense(15, input_dim=30, activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dropout(.2)) model.add(Dense(1, activation='sigmoid')) As you can see in the above snippet of code, I am using a sequential model. In the previous block of code we collected the shape of our training input, which was 203 rows and 30 columns. So if you look closely at the input layer, you will find that I am setting input_dim to 30; now you get my point. I am using the relu activation function in the hidden layers and sigmoid activation in the output layer. I am also using a 20% dropout layer. model.summary() Looking at the summary of our model, we have a total of 722 trainable parameters. model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy']) model_history = model.fit(X_train, y_train, epochs=200, validation_data=(X_test, y_test)) OUTPUT: You can see in the above code that I am compiling my model with the binary cross-entropy loss function and the Adam optimizer, and fitting it for 200 epochs.
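The 722 trainable parameters reported by model.summary() can be checked by hand: each Dense layer contributes inputs × units weights plus one bias per unit, and the Dropout layer adds none.

```python
# Dense stack from the model above: 30 -> 15 -> 10 -> 8 -> 1.
layer_sizes = [30, 15, 10, 8, 1]

# Each Dense layer has n_in * n_out weights plus n_out biases.
params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 722
```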
At the end I have also given a link to the code where I have used the SGD optimizer; its output looks like this. You can see the difference between the two outputs: the model above looks overfitted, while the one below does not. model.compile(loss="binary_crossentropy", optimizer="SGD", metrics=['accuracy']) OUTPUT: model_history.history OUTPUT: You can inspect the training history of the created model using the above code. Now let's move forward and predict values using our model. y_pred = model.predict(X_test) print (y_pred) OUTPUT: You can see the predicted output and verify it against the actual values. THANK YOU CONCLUSION: I hope you enjoyed and followed the tutorial with me; you are welcome to make further suggestions. You can modify the code further to increase the accuracy, for example by trying another optimizer or increasing the number of epochs. As you have seen, the two optimizers influence our model differently, so always keep your requirements in mind when choosing parameters. You can also use sigmoid instead of relu to obtain a final probability between 0 and 1. Make some changes in the code and see how the parameters influence the model's accuracy; you can get the code with the help of the given link Heart_attack_detection Thanks for your time.
https://valueml.com/heart-attack-detection-using-tensorflow-python/
CC-MAIN-2021-25
en
refinedweb
Class SceneOrbiter - java.lang.Object - com.openinventor.inventor.Inventor - com.openinventor.inventor.misc.SoBase - com.openinventor.inventor.fields.SoFieldContainer - com.openinventor.inventor.nodes.SoNode - com.openinventor.inventor.nodes.SoGroup - com.openinventor.inventor.nodes.SoSeparator - com.openinventor.inventor.viewercomponents.nodes.SceneInteractor - com.openinventor.inventor.viewercomponents.nodes.SceneOrbiter - All Implemented Interfaces: SafeDisposable public class SceneOrbiter extends SceneInteractor (Preview Feature) Tool class for easily building an Open Inventor application whose main behavior is an orbiting displacement around the center of the scene, without using the existing viewer classes. SceneOrbiter is an extension of the SceneInteractor node that provides camera and headlight manipulations such as panning, zooming and orbiting. The 'headlight', i.e. an SoDirectionalLight node, is automatically aligned with the camera's view direction. Additionally, a viewing cube, i.e. an SoViewingCube node, is added to the scene. Contrary to the classic Open Inventor viewers, the selection mode does not need to be explicitly requested by a key press. See the parent class SceneInteractor for more details about the structure of the internal scene graph. The SceneOrbiter uses an instance of SoCameraInteractor to manipulate the camera in response to Open Inventor events. Notes: - Window system integration The SceneOrbiter needs a component that integrates the Open Inventor 3D rendering window with the native window system. System dependent tasks include creating a window, placing the window in the application's user interface, initializing OpenGL and processing the native event/message loop. System independent support for this is provided by the SoRenderAreaCore class. Example components are provided for AWT and SWT toolkits. - Event handling The SceneOrbiter needs a component that builds Open Inventor events (SoEvent) from native system events.
System independent support for this is provided by the SoEventBuilder class. Example components are provided for AWT and SWT toolkits: AWTEventToSoEvent and SWTEventToSoEvent. - Library A basic version of SceneOrbiter is a supported part of the Open Inventor API and a prebuilt jar is provided. - Source code The basic version of SceneOrbiter is also provided as source code in the sources folder to allow applications to customize and build their own interactive tool class. See $OIVJHOME/source/com/openinventor/inventor/viewercomponents/nodes. - Scene graph The application scene graph should be the last child of the SceneOrbiter. The initial application scene graph can be added by simply calling the inherited method addChild(). But note that if you need to replace the application scene graph, for example when loading a new data set, do not call removeAllChildren(). That would also remove the SceneOrbiter's camera, headlight and event handler nodes. Instead, add an SoSeparator to the SceneOrbiter to serve as the "scene graph holder", then add and remove the application scene graph from this node. - Clip planes SceneOrbiter automatically adjusts the 'near' and 'far' clipping planes when events modifying the camera are handled. This adjustment, based on the bounding box of the scene, ensures that shapes will not be clipped as the camera orbits and also that depth buffer precision is maximized. Note: Updating the clipping planes after a camera move may not be sufficient. If the scene graph is modified, or if a dragger or a rendered shape is moved, they can disappear or become partially clipped. A classic implementation of a render area must adjust the clipping planes before each rendering by calling the provided method SceneInteractor.adjustClippingPlanes(SbViewportRegion). See the render area implementations available in the $OIVJHOME/examples/inventor/viewercomponents/awt and $OIVJHOME/examples/inventor/viewercomponents/swt folders for examples of adjustClippingPlanes use.
Compatibility Please note that some interaction behaviors differ from the classic Open Inventor viewer classes: - Left Mouse + Shift now zooms in/out. - The mouse wheel performs a dolly relative to the cursor position, not the center of the viewport. - The classic Alt key behavior is not implemented. This key is reserved for application use. - The Right Mouse button does not display a popup menu. This button is reserved for application use. Usage: - Left Mouse: Rotate the scene. - Left Mouse + Ctrl: Pan the scene. - Left Mouse + Shift: Zoom in/out the scene. - Mouse Wheel: Zoom in/out (the zoom center is the mouse cursor location). Nested Class Summary Nested classes/interfaces inherited from class com.openinventor.inventor.viewercomponents.nodes.SceneInteractor SceneInteractor.CameraMode Nested classes/interfaces inherited from class com.openinventor.inventor.nodes.SoSeparator SoSeparator.Cachings, SoSeparator.FastEditings, SoSeparator.RenderUnitIds Nested classes/interfaces inherited from class com.openinventor.inventor.nodes.SoNode SoNode.RenderModes Nested classes/interfaces inherited from class com.openinventor.inventor.Inventor Inventor.ConstructorCommand Field Summary Fields inherited from class com.openinventor.inventor.nodes.SoSeparator boundingBoxCaching, fastEditing, pickCulling, renderCaching, renderCulling, renderUnitId Fields inherited from class com.openinventor.inventor.nodes.SoGroup boundingBoxIgnoring Fields inherited from class com.openinventor.inventor.Inventor VERBOSE_LEVEL, ZeroHandle Method Summary Methods inherited from class com.openinventor.inventor.viewercomponents.nodes.SceneInteractor adjustClippingPlanes, enableHeadlight, getCamera, getCameraInteractor, getCameraMode, getRenderEngineMode, isHeadlightEnabled, viewAll, viewAxis Methods inherited from class com.openinventor.inventor.nodes.SoGroup addChild, findChild, getChild, getNumChildren, insertChild, removeAllChildren, removeChild, removeChild, replaceChild, replaceChild Methods
inherited from class com.openinventor.inventor.nodes.SoNode affectsState, callback, copy, copy, distribute, doAction, getAlternateRep, getBoundingBox, getByName, getMatrix, getPrimitiveCount, Method Detail enableViewingCube public void enableViewingCube(boolean enabled) Enables or disables the viewing cube. isViewingCubeEnabled public boolean isViewingCubeEnabled() Returns whether the viewing cube is enabled. getViewingCube public SoViewingCube getViewingCube() Returns a reference to the viewing cube. setCameraMode public void setCameraMode(SceneInteractor.CameraMode mode) Description copied from class SceneInteractor: Set the camera mode to perspective or orthographic. - Overrides: setCameraMode in class SceneInteractor
https://developer.openinventor.com/refmans/latest/RefManJava/com/openinventor/inventor/viewercomponents/nodes/SceneOrbiter.html
def cleanHtml(i):
    i = str(i)  # Convert the Beautiful Soup Tag to a string
    bS = BeautifulSoup(i)  # Pass the string to Beautiful Soup to strip out html
    # Find all of the text between paragraph tags and strip out the html
    i = bS.find('p').getText()
    # Strip ampersand codes and WATCH:
    i = re.sub('&\w+;', '', i)
    i = re.sub('WATCH:', '', i)
    return i

def cleanHtmlRegex(i):
    i = str(i)
    regexPatClean = re.compile(r'<[^<]*?/?>')
    i = regexPatClean.sub('', i)
    # Strip ampersand codes and WATCH:
    i = re.sub('&\w+;', '', i)
    return re.sub('WATCH:', '', i)

# Copy all of the content from the provided web page
webpage = urlopen('').read()

# Grab everything that lies between the title tags using a REGEX
titleString = '<title>(.*)</title>'
patFinderTitle = re.compile(titleString)

# Grab the link to the original article using a REGEX
origArticleLink = '<link rel.*href="(.*)" />'
patFinderLink = re.compile(origArticleLink)

# Store all of the titles and links found in 2 lists
findPatTitle = re.findall(patFinderTitle, webpage)
findPatLink = re.findall(patFinderLink, webpage)

# Create an iterator that will cycle through the first 16 articles and skip a few
listIterator = []
listIterator[:] = range(2, 16)

# Print out the results to screen
for i in listIterator:
    print "<h3>" + findPatTitle[i] + "</h3><br />"  # The title
    print "<a href='" + findPatLink[i] + "'>Original Article</a><br />"  # The link to the original article

    articlePage = urlopen(findPatLink[i]).read()  # Grab all of the content from original article

    divBegin = articlePage.find('<div>')  # Locate the div provided
    article = articlePage[divBegin:(divBegin + 1000)]  # Copy the first 1000 characters after the div

    # Pass the article to the Beautiful Soup Module
    soup = BeautifulSoup(article)

    # Tell Beautiful Soup to locate all of the p tags and store them in a list
    paragList = soup.findAll('p')

    # Print all of the paragraphs to screen
    for i in paragList:
        # i = cleanHtml(i)
        i = cleanHtmlRegex(i)
        print i + "
https://www.newthinktank.com/2010/11/python-2-7-tutorial-pt-17/
Performance Effects of Exceptions in Java Last modified: March 29, 2021 1. Overview In Java, exceptions are generally considered expensive and shouldn't be used for flow control. This tutorial will prove that this perception is correct and pinpoint what causes the performance issue. 2. Setting Up Environment Before writing code to evaluate the performance cost, we need to set up a benchmarking environment. 2.1. Java Microbenchmark Harness Measuring exception overhead isn't as easy as executing a method in a simple loop and taking note of the total time. The reason is that a just-in-time compiler can get in the way and optimize the code. Such optimization may make the code perform better than it would actually do in a production environment. In other words, it might yield false positive results. To create a controlled environment that can mitigate JVM optimization, we'll use Java Microbenchmark Harness, or JMH for short. The following subsections will walk through setting up a benchmarking environment without going into the details of JMH. For more information about this tool, please check out our Microbenchmarking with Java tutorial. 2.2. Obtaining JMH Artifacts To get JMH artifacts, add these two dependencies to the POM: <dependency> <groupId>org.openjdk.jmh</groupId> <artifactId>jmh-core</artifactId> <version>1.28</version> </dependency> <dependency> <groupId>org.openjdk.jmh</groupId> <artifactId>jmh-generator-annprocess</artifactId> <version>1.28</version> </dependency> Please refer to Maven Central for the latest versions of JMH Core and JMH Annotation Processor. 2.3.
Benchmark Class We'll need a class to hold benchmarks: @Fork(1) @Warmup(iterations = 2) @Measurement(iterations = 10) @BenchmarkMode(Mode.AverageTime) @OutputTimeUnit(TimeUnit.MILLISECONDS) public class ExceptionBenchmark { private static final int LIMIT = 10_000; // benchmarks go here } Let's go through the JMH annotations shown above: - @Fork: Specifying the number of times JMH must spawn a new process to run benchmarks. We set its value to 1 to generate only one process, avoiding waiting for too long to see the result - @Warmup: Carrying warm-up parameters. The iterations element being 2 means the first two runs are ignored when calculating the result - @Measurement: Carrying measurement parameters. An iterations value of 10 indicates JMH will execute each method 10 times - @BenchmarkMode: This is how JMH should collect execution results. The value AverageTime requires JMH to count the average time a method needs to complete its operations - @OutputTimeUnit: Indicating the output time unit, which is the millisecond in this case Additionally, there's a static field inside the class body, namely LIMIT. This is the number of iterations in each method body. 2.4. Executing Benchmarks To execute benchmarks, we need a main method: public class MappingFrameworksPerformance { public static void main(String[] args) throws Exception { org.openjdk.jmh.Main.main(args); } } We can package the project into a JAR file and run it at the command line. Doing so now will, of course, produce an empty output as we haven't added any benchmarking method. For convenience, we can add the maven-jar-plugin to the POM.
This plugin allows us to execute the main method inside an IDE: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>3.2.0</version> <configuration> <archive> <manifest> <mainClass>com.baeldung.performancetests.MappingFrameworksPerformance</mainClass> </manifest> </archive> </configuration> </plugin> The latest version of maven-jar-plugin can be found here. 3. Performance Measurement It's time to have some benchmarking methods to measure performance. Each of these methods must carry the @Benchmark annotation. 3.1. Method Returning Normally Let's start with a method returning normally; that is, a method that doesn't throw an exception: @Benchmark public void doNotThrowException(Blackhole blackhole) { for (int i = 0; i < LIMIT; i++) { blackhole.consume(new Object()); } } The blackhole parameter references an instance of Blackhole. This is a JMH class that helps prevent dead code elimination, an optimization a just-in-time compiler may perform. The benchmark, in this case, doesn't throw any exception. In fact, we'll use it as a reference to evaluate the performance of those that do throw exceptions. Executing the main method will give us a report: Benchmark Mode Cnt Score Error Units ExceptionBenchmark.doNotThrowException avgt 10 0.049 ± 0.006 ms/op There's nothing special in this result. The average execution time of the benchmark is 0.049 milliseconds, which is per se pretty meaningless. 3.2.
Creating and Throwing an Exception Here's another benchmark that throws and catches exceptions: @Benchmark public void throwAndCatchException(Blackhole blackhole) { for (int i = 0; i < LIMIT; i++) { try { throw new Exception(); } catch (Exception e) { blackhole.consume(e); } } } Let's have a look at the output: Benchmark Mode Cnt Score Error Units ExceptionBenchmark.doNotThrowException avgt 10 0.048 ± 0.003 ms/op ExceptionBenchmark.throwAndCatchException avgt 10 17.942 ± 0.846 ms/op The small change in the execution time of method doNotThrowException isn't important. It's just the fluctuation in the state of the underlying OS and the JVM. The key takeaway is that throwing an exception makes a method run hundreds of times slower. The next few subsections will find out what exactly leads to such a dramatic difference. 3.3. Creating an Exception Without Throwing It Instead of creating, throwing, and catching an exception, we'll just create it: @Benchmark public void createExceptionWithoutThrowingIt(Blackhole blackhole) { for (int i = 0; i < LIMIT; i++) { blackhole.consume(new Exception()); } } Now, let's execute the three benchmarks we've declared: Benchmark Mode Cnt Score Error Units ExceptionBenchmark.createExceptionWithoutThrowingIt avgt 10 17.601 ± 3.152 ms/op ExceptionBenchmark.doNotThrowException avgt 10 0.054 ± 0.014 ms/op ExceptionBenchmark.throwAndCatchException avgt 10 17.174 ± 0.474 ms/op The result may come as a surprise: the execution time of the first and the third methods are nearly the same, while that of the second is substantially smaller. At this point, it's clear that the throw and catch statements themselves are fairly cheap. The creation of exceptions, on the other hand, produces high overheads. 3.4. 
Throwing an Exception Without Adding the Stack Trace Let's figure out why constructing an exception is much more expensive than creating an ordinary object: @Benchmark @Fork(value = 1, jvmArgs = "-XX:-StackTraceInThrowable") public void throwExceptionWithoutAddingStackTrace(Blackhole blackhole) { for (int i = 0; i < LIMIT; i++) { try { throw new Exception(); } catch (Exception e) { blackhole.consume(e); } } } The only difference between this method and the one in subsection 3.2 is the jvmArgs element. Its value -XX:-StackTraceInThrowable is a JVM option, keeping the stack trace from being added to the exception. Let's run the benchmarks again: Benchmark Mode Cnt Score Error Units ExceptionBenchmark.createExceptionWithoutThrowingIt avgt 10 17.874 ± 3.199 ms/op ExceptionBenchmark.doNotThrowException avgt 10 0.046 ± 0.003 ms/op ExceptionBenchmark.throwAndCatchException avgt 10 16.268 ± 0.239 ms/op ExceptionBenchmark.throwExceptionWithoutAddingStackTrace avgt 10 1.174 ± 0.014 ms/op By not populating the exception with the stack trace, we reduced execution duration by more than 100 times. Apparently, walking through the stack and adding its frames to the exception bring about the sluggishness we've seen. 3.5.
Throwing an Exception and Unwinding Its Stack Trace Finally, let's see what happens if we throw an exception and unwind the stack trace when catching it: @Benchmark public void throwExceptionAndUnwindStackTrace(Blackhole blackhole) { for (int i = 0; i < LIMIT; i++) { try { throw new Exception(); } catch (Exception e) { blackhole.consume(e.getStackTrace()); } } } Here's the outcome: Benchmark Mode Cnt Score Error Units ExceptionBenchmark.createExceptionWithoutThrowingIt avgt 10 16.605 ± 0.988 ms/op ExceptionBenchmark.doNotThrowException avgt 10 0.047 ± 0.006 ms/op ExceptionBenchmark.throwAndCatchException avgt 10 16.449 ± 0.304 ms/op ExceptionBenchmark.throwExceptionAndUnwindStackTrace avgt 10 326.560 ± 4.991 ms/op ExceptionBenchmark.throwExceptionWithoutAddingStackTrace avgt 10 1.185 ± 0.015 ms/op Just by unwinding the stack trace, we see a whopping increase of some 20 times in the execution duration. Put another way, the performance is much worse if we extract the stack trace from an exception in addition to throwing it. 4. Conclusion In this tutorial, we analyzed the performance effects of exceptions. Specifically, we found that the performance cost lies mostly in adding the stack trace to the exception. If this stack trace is unwound afterward, the overhead becomes much larger. Since throwing and handling exceptions is expensive, we shouldn't use them for normal program flow. Instead, as the name implies, exceptions should only be used for exceptional cases. The complete source code can be found over on GitHub.
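The benchmarks above are Java-specific, but the same create-vs-throw comparison can be sketched with Python's standard-library timeit module (the absolute numbers are platform-dependent and purely illustrative; the function names below are mine, not from the article):

```python
import timeit

LIMIT = 10_000

def do_not_throw():
    for _ in range(LIMIT):
        object()  # plain object creation, the baseline

def create_without_throwing():
    for _ in range(LIMIT):
        Exception()  # construct the exception but never raise it

def throw_and_catch():
    for _ in range(LIMIT):
        try:
            raise Exception()
        except Exception:
            pass

for fn in (do_not_throw, create_without_throwing, throw_and_catch):
    print(fn.__name__, timeit.timeit(fn, number=10))
```

In CPython, too, the try/except machinery itself is cheap when no exception occurs; the cost concentrates in constructing and raising.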
https://www.baeldung.com/java-exceptions-performance
In this quick tutorial, we'll learn how to quickly search for a part of a string in an array. We'll demonstrate the examples in Java, Python, and Swift. The straightforward way to check whether a substring exists in any of the array elements is to loop through them. But in the next sections, we'll use different approaches which are far shorter, cleaner and more readable. Table of Contents Substring in an Array Using Java Using the Java 8 Stream API we can look up whether any substring is present in the array. import java.util.Arrays; import java.util.Optional; public class SubStringChecker{ public static void main(String []args){ System.out.println("Hello World"); String[] array = {"abc","def", "ghi"}; String input = "abcdefghi"; boolean stringExists = substringExistsInArray(input, array); System.out.println(stringExists); String input2 = "acdfhi"; stringExists = substringExistsInArray(input2, array); System.out.println(stringExists); System.out.println(getFirstMatchingSubstring(input, array)); System.out.println(getFirstMatchingSubstring(input2, array)); } public static boolean substringExistsInArray(String inputStr, String[] items) { return Arrays.stream(items).parallel().anyMatch(inputStr::contains); } public static Optional<String> getFirstMatchingSubstring(String inputStr, String[] items) { return Arrays.stream(items).parallel().filter(inputStr::contains).findAny(); } } In the above code, we've created two methods: one to check if a matching substring exists, and another to return the matching substring. anyMatch returns true as soon as any array element is contained in the input string, in no particular order. Similarly, findAny returns any one of the array elements that matched. The value is returned as an Optional String.
The output of the above is: (output screenshot: Check For Substrings In Array Java) In Python, the same check can be written with any() and a generator expression: input1 = "abcdefghi" input2 = "acdfhi" array = ["abc", "def", "ghi"] result = any(sub in input1 for sub in array) print("substring exists in the list for input1: " + str(result)) result = any(sub in input2 for sub in array) print("substring exists in the list for input2: " + str(result)) matchingString = next(substring for substring in array if substring in input1) print("substring that matched was "+matchingString) #Output """ substring exists in the list for input1: True substring exists in the list for input2: False substring that matched was abc """ any returns True if any element of the array is a substring of the input string. To print the matched substring we use next, which throws StopIteration if the condition was never matched. Using Swift to check if array contains substring Swift has been increasingly gaining popularity. The below code snippet is a validation of that. import UIKit let input1 = "abcdefghi" let array = ["abc", "def", "ghi"] let input1Matches = array.contains(where: input1.contains) print("array contains input1 \(input1Matches)") let input2 = "acdfhi" let input2Matches = array.contains(where: input2.contains) print("array contains input2 \(input2Matches)") array.contains(where: string.contains) returns a boolean indicating whether the array contains a substring of the string. The output of the above code is: (output screenshot: Check For Substrings In Array Swift) That sums up this tutorial. We have covered an interesting problem in Java, Python, and Swift.
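One practical note on the Python approach: the StopIteration that next() raises when nothing matches can be avoided by passing a default value (the variable names below are illustrative):

```python
array = ["abc", "def", "ghi"]
input1 = "abcdefghi"
input2 = "acdfhi"

# With a default, next() returns None instead of raising StopIteration.
first_match = next((sub for sub in array if sub in input1), None)
no_match = next((sub for sub in array if sub in input2), None)
print(first_match, no_match)  # abc None
```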
https://www.journaldev.com/32677/check-for-substring-in-an-array
umask - Man Page set and get the file mode creation mask Prolog This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. Synopsis #include <sys/stat.h> mode_t umask(mode_t cmask); Description. Return Value. Errors No errors are defined. The following sections are informative. Examples None. Application Usage None. Rationale-2017. Future Directions None. See Also creat(), exec, mkdir(), mkfifo(), mknod(), mq_open(), open(), sem_open() The Base Definitions volume of POSIX.1-2017, exec(3p), mkdir(1p), mkdir(3p), mkfifo(3p), mknod(3p), open(3p), posix_typed_mem_open(3p), sh(1p), shm_open(3p), sys_stat.h(0p), umask(1p).
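Python exposes the same interface as os.umask, which makes the set-and-return-previous behavior described above easy to observe:

```python
import os

# os.umask installs cmask as the new file mode creation mask and
# returns the previous mask, mirroring mode_t umask(mode_t cmask).
previous = os.umask(0o022)    # a common default: clear group/other write bits
current = os.umask(previous)  # restore the old mask, reading back our value
print(oct(current))  # 0o22
```

Note there is no way to read the mask without setting it, which is why the read-back above sets it twice.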
https://www.mankier.com/3p/umask
This article is a brief yet concise introduction to multiprocessing in the Python programming language. What is multiprocessing? Multiprocessing refers to the ability of a system to support more than one processor at the same time. Applications in a multiprocessing system are broken into smaller routines that run independently. The operating system allocates these routines to the processors, improving the performance of the system. Why multiprocessing? Consider a computer system with a single processor. If it is assigned several processes at the same time, it will have to interrupt each task and switch briefly to another, to keep all of the processes going. This situation is just like a chef working alone in a kitchen. He has to do several tasks like baking, stirring, kneading dough, etc. So the gist is that: the more tasks you must do at once, the more difficult it gets to keep track of them all, and keeping the timing right becomes more of a challenge. This is where the concept of multiprocessing arises! A multiprocessing system can have: - a multiprocessor, i.e. a computer with more than one central processor. - a multi-core processor, i.e. a single computing component with two or more independent actual processing units (called "cores"). Here, the CPU can easily execute several tasks at once, with each task using its own processor. It is just like the chef in the last situation being assisted by his assistants. Now they can divide the tasks among themselves, and the chef doesn't need to switch between his tasks. Multiprocessing in Python In Python, the multiprocessing module includes a very simple and intuitive API for dividing work between multiple processes. Let us consider a simple example using the multiprocessing module: Square: 100 Cube: 1000 Done! Let us try to understand the above code: - To import the multiprocessing module, we do: import multiprocessing - To create a process, we create an object of the Process class.
It takes the following arguments: - target: the function to be executed by the process - args: the arguments to be passed to the target function Note: The Process constructor takes many other arguments as well, which will be discussed later. In the above example, we created 2 processes with different target functions: p1 = multiprocessing.Process(target=print_square, args=(10, )) p2 = multiprocessing.Process(target=print_cube, args=(10, )) - To start a process, we use the start method of the Process class. p1.start() p2.start() - Once the processes start, the current program also keeps on executing. In order to stop execution of the current program until a process is complete, we use the join method. p1.join() p2.join() As a result, the current program will first wait for the completion of p1 and then p2. Once they are completed, the next statements of the current program are executed. Let us consider another program to understand the concept of different processes running on the same Python script. In the example below, we print the IDs of the processes running the target functions: ID of main process: 28628 ID of process running worker1: 29305 ID of process running worker2: 29306 ID of process p1: 29305 ID of process p2: 29306 Both processes finished execution! Process p1 is alive: False Process p2 is alive: False - The main Python script has a different process ID, and the multiprocessing module spawns new processes with different process IDs as we create Process objects p1 and p2. In the above program, we use the os.getpid() function to get the ID of the process running the current target function. Notice that it matches the process IDs of p1 and p2, which we obtain using the pid attribute of the Process class. - Each process runs independently and has its own memory space. - As soon as the execution of the target function is finished, the processes get terminated. In the above program, we used the is_alive method of the Process class to check if a process is still active or not.
Consider the diagram below to understand how new processes are different from the main Python script: So, this was a brief introduction to multiprocessing in Python. The next few articles will cover the following topics related to multiprocessing: - Sharing data between processes using Array, Value and queues. - Lock and Pool concepts in multiprocessing
https://www.geeksforgeeks.org/multiprocessing-python-set-1/
CCoinsView that brings transactions from a mempool into view. More... #include <txmempool.h> CCoinsView that brings transactions from a mempool into view. It does not check for spendings by memory pool transactions. Instead, it provides access to all Coins which are either unspent in the base CCoinsView, or are outputs from any mempool transaction! This allows transaction replacement to work as expected, as you want to have all inputs "available" to check signatures, and any cycles in the dependency graph are checked directly in AcceptToMemoryPool. It also allows you to sign a double-spend directly in signrawtransactionwithkey and signrawtransactionwithwallet, as long as the conflicting transaction is not yet confirmed. Definition at line 863 of file txmempool.h. Definition at line 919 of file txmempool.cpp. Definition at line 921 of file txmempool.cpp. Definition at line 866 of file txmempool.h.
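As a conceptual illustration only (hypothetical Python names, not Bitcoin Core's C++ API), the lookup rule described above — a coin is visible if it is unspent in the base view or is an output of any mempool transaction — can be modeled as a two-level dictionary lookup:

```python
# Hypothetical model: outpoints map (txid, output_index) -> coin description.
base_view = {("tx_confirmed", 0): "confirmed unspent output"}
mempool_view = {("tx_unconfirmed", 0): "output of an in-mempool tx"}

def get_coin(outpoint):
    # Prefer outputs created by mempool transactions, then fall back
    # to coins that are unspent in the base view.
    if outpoint in mempool_view:
        return mempool_view[outpoint]
    return base_view.get(outpoint)

print(get_coin(("tx_unconfirmed", 0)))  # visible even though unconfirmed
```

This overlay behavior is what lets signing and replacement logic treat unconfirmed outputs as spendable inputs.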
https://doxygen.bitcoincore.org/class_c_coins_view_mem_pool.html
As of WordPress 5.0, Gutenberg comes built-in. In this post, I'll give you the basics of what Gutenberg is, why it's awesome, and how to set up your environment to start creating your own custom Gutenberg blocks. While at least some knowledge of React will be useful, it is not totally required. Before getting into building custom Gutenberg blocks, I think it will be helpful to know what Gutenberg is. It may also be useful to understand the history of the editor and why WordPress added it to their core codebase. Without further ado, let's get into it! What is Gutenberg? Before WordPress 5.0, users were able to edit their content using a WYSIWYG (which stands for "What You See Is What You Get") editor. This allowed content creators to write blog posts and static pages with no coding skills. At the same time, it also severely limited what they could do with their site. The theme would control what the header and footer looked like, but for any sort of custom layout, a developer would have to create a custom template and hardcode stuff in (bad) or do a bunch of crazy stuff to make things more changeable for the user (also bad). In 2011, the Advanced Custom Fields plugin was released which made a lot of these things easier. It allows developers to create custom fields for a given content type (post or page) and then render them in a template with minimal code. It makes custom templates for a home page or other special pages much easier to change for both developers and end-users. This has been my go-to for years now and it's been a great experience. I've even used it when creating sites with WordPress and Gatsby! While this solution is still a great solution and offers many different use cases, I have been using Gutenberg to build sites lately. As I mentioned before, Gutenberg now comes built-in to WordPress as the default editor although it started out as a plugin. So why did it get added to core?
I presume it's largely an effort to keep up with site-builders such as SquareSpace and Wix. What are Gutenberg blocks? Gutenberg (named after Johannes Gutenberg, who invented the first printing press) allows users to select pre-styled sections, or "blocks", for each page and fill in the content. This makes for a much more fluid user experience when creating pages or blog posts. WordPress provides some default blocks which will probably work for a lot of casual users, but what if you need a special block for a particular page or you want a block with some different styles? Rest assured, it is totally possible to create custom blocks. I will admit: at this time some of the documentation isn't great for creating blocks but hopefully this post will help anyone getting started with Gutenberg to have a better understanding of the block development process. Blocks in the theme or plugin? Pretty much all of the tutorials I have seen about block creation address doing so in a plugin. In addition, many of them are creating a plugin for a single block. By following these tutorials, you'd need 30 separate plugins if you needed 30 custom blocks! I have created multiple blocks in a plugin and can definitely see the value in doing so if you have a lot of existing sites to add those blocks to. Doing so would allow you to update the plugin, push it to a remote git repository, then pull your changes into whatever sites needed the update. When I was searching the other day, I couldn't find any tutorials that explained how to set up custom blocks as a part of a theme. I believe there are some benefits to having the blocks in a theme rather than a plugin though, including (but not limited to) fewer dependencies to manage, keeping proprietary code for blocks specific to a website private, and not having to worry about a user accidentally disabling the plugin and breaking things.
Custom Gutenberg block theme setup When I'm building a new WordPress site, I tend to use the Underscores theme which is made by Automattic. It's a starter theme with very minimal styling. Although it can be downloaded with Sass structures in place, no bundling tool is included. I will be using Gulp to allow me to write JSX in my custom blocks. Before you can start developing the custom blocks, you need to add some code to the theme to handle it. Blocks directory for custom blocks To help keep things organized, I like to place all of my custom blocks into a directory in the root of my theme called blocks. This directory can be called whatever you like, but I'd recommend naming it something that is easily recognizable as custom blocks. In my case, the following command will create the directory: # terminal $ mkdir blocks Now that my blocks directory has been created, I need to create a php file inside which will enqueue my blocks and register my custom block types. I usually give mine the appropriate name of blocks.php though, again, you can call this whatever you like. The following command will create the file in my blocks directory and open it in the default code editor: # terminal $ touch blocks/blocks.php && open $_ Create a function to register custom Gutenberg blocks The first thing you need to do in your blocks.php file (after the opening php tags) is create a function which will take care of adding the block scripts as well as registering the custom block type. I'll take this step-by-step so it's easy to follow. The empty function should look like this: <?php // blocks/blocks.php /** * Enqueue scripts for custom blocks */ function custom_block_scripts() { // Do something... } add_action('enqueue_block_assets', 'custom_block_scripts'); After creating the function, you'll use a hook to call the function. Since adding Gutenberg to WordPress core, a new hook has been added called enqueue_block_assets which exists exactly for this purpose.
Enqueue the scripts and styles for the custom blocks The next thing you need to do is include the scripts for the custom blocks you're creating. This can be done using wp_enqueue_script() just like you'd do in a custom theme. This should go inside the custom_block_scripts() function like so: <?php // blocks/blocks.php function custom_block_scripts() { // Enqueue block scripts wp_enqueue_script( 'custom-block-scripts', get_template_directory_uri() . '/blocks/dist/blocks.js', array('wp-blocks', 'wp-components', 'wp-element', 'wp-i18n', 'wp-editor'), '1.0.0', true ); } add_action('enqueue_block_assets', 'custom_block_scripts'); In the code above, you may notice that I have listed an array of dependencies. This is required for any WordPress components you want to use in your blocks. The ones I have listed here are the ones I find myself using most often. A full list of packages that are available can be found here. At a minimum, you need wp-blocks to register a block. The rest of the wp_enqueue_script() function should look pretty familiar if you've done theme development before. In case you haven't, here's a quick breakdown of the arguments: <?php // wp_enqueue_script() wp_enqueue_script( $nickname, $location, $dependencies, $version, $in_footer ); Register the actual custom block types Now that you have the scripts added, you need to use register_block_type() to tell WordPress what to do with the code. It should be noted that the $args array will use the nickname you chose in the previous step to identify the script or styles you want to use. Again, WordPress added a custom function to do this called register_block_type() with the following arguments: <?php // register_block_type() register_block_type( $namespace, $args ); Based on the way you have set up the blocks so far, this is how your register_block_type() function will look: <?php // register_block_type() register_block_type( 'iamtimsmith/blocks', array( 'editor_script' => 'custom-block-scripts', // The script you enqueued earlier ) ); The code above should go in the same custom_block_scripts() function where you are enqueuing your scripts.
After you have set this up, your custom function should look like this: <?php // blocks/blocks.php function custom_block_scripts() { // Enqueue block scripts wp_enqueue_script( 'custom-block-scripts', get_template_directory_uri() . '/blocks/dist/blocks.js', array('wp-blocks', 'wp-components', 'wp-element', 'wp-i18n', 'wp-editor'), '1.0.0', true ); // Register custom block types register_block_type( 'iamtimsmith/blocks', array( 'editor_script' => 'custom-block-scripts', ) ); } add_action('enqueue_block_assets', 'custom_block_scripts'); Telling functions.php about the custom blocks The final step for registering blocks in your theme is to add a call to the functions.php file. This will simply tell your theme that the file exists in the blocks directory and the content should be pulled in. While this step is relatively easy, it is also required for this to work. If you are running into issues with your custom blocks not showing up at all, I'd double check and make sure you added the call to your functions.php file. Adding the code below will tell your theme about the registered custom blocks: <?php // functions.php /** * Add custom blocks for Gutenberg */ require get_template_directory() . '/blocks/blocks.php'; Although it doesn't matter where in your functions.php file you place the code, I tend to put it at the bottom. Especially if you're using the Underscores theme, it helps to keep your code separated from the default theme code. Wrapping Up That's as much as I'm going to cover in this article. You have now registered the namespace and scripts where your custom blocks will live. In the next post in the series, I'll be going over a gulp setup which allows you to use JSX when building your custom blocks. Using JSX makes blocks easier to read and can make your life easier as a developer. If you're not familiar with gulp, I'll teach you some basics to get your custom blocks up and running and provide you with a jumping off point to add more optimizations. Have thoughts or questions? You can reach me on Twitter at @iam_timsmith.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/iam_timsmith/creating-custom-gutenberg-blocks-with-react-and-wordpress-part-1-1njb
# Query Operations All of the READ operations above used a simple DB path to query the database, but of course Firebase provides many tools to fine-tune what we want the server to return. All of the parameters you're used to from the Firebase Query API are available from a separate SerializedQuery class, which is exported as a named export of abstracted-admin. You would use it like so: import { SerializedQuery } from "abstracted-admin"; const recentTransactionsEU = SerializedQuery.path("/transactions") .orderByChild("date") .limitToLast(20) .equalTo("europe", "region"); As a convenience, you can also access SerializedQuery directly off your abstracted-admin object with the query property: import DB from "abstracted-admin"; const db = new DB(); const recentTransactionsEU = db.query .path("/transactions") .orderByChild("date") .limitToLast(20) .equalTo("europe", "region"); As you can see, the Query class provides a fluent interface that any Firebase developer should feel right at home with. Once you've defined your query, you can use any of the above READ operations and, instead of passing in the path, just pass in the query: const db = new DB(); const transactions = await db.getList<ITransaction>(recentTransactionsEU);
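The chaining in these examples works because each query method records its constraint and returns the builder itself. As a rough sketch of that design choice, here is a hypothetical mini-builder in Python (illustration only, not the abstracted-admin API):

```python
# A minimal sketch of the fluent/builder pattern SerializedQuery uses:
# each method appends a constraint and returns self, so calls chain.
# Hypothetical names, for illustration only.

class SerializedQuery:
    def __init__(self, path):
        self._path = path
        self._constraints = []

    @classmethod
    def path(cls, path):
        return cls(path)

    def order_by_child(self, child):
        self._constraints.append(("orderByChild", child))
        return self  # returning self is what makes chaining possible

    def limit_to_last(self, n):
        self._constraints.append(("limitToLast", n))
        return self

    def equal_to(self, value, key=None):
        self._constraints.append(("equalTo", (value, key)))
        return self

    def serialize(self):
        """Flatten the recorded constraints into a plain dict."""
        return {"path": self._path, "constraints": list(self._constraints)}

query = (SerializedQuery.path("/transactions")
         .order_by_child("date")
         .limit_to_last(20)
         .equal_to("europe", "region"))
print(query.serialize())
```

Returning self from every method is the whole trick: the final object carries the accumulated constraints, which a consumer such as a getList()-style read operation can then serialize into a real server-side query.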
https://abstracted-admin.com/query/
Rails comes with a router in config/routes.rb. This file contains all the URLs your application will respond to. This Rails guide is a good reference to using Rails' router. Let's create some routes that our Angular application will eventually interface with. Rails' router comes with a useful resources function which we can use to specify RESTful routes, and which we can also nest in blocks to create nested routes. Edit routes.rb to declare resources for :posts. We only need some of the routes that resources provides us with by default, so we'll use the only: option to specify the actions we want. We'll need to create our own put routes for upvoting. Putting them in a member block makes it so our URL parameters map to :id instead of :post_id. root to: 'application#angular' resources :posts, only: [:create, :index, :show] do resources :comments, only: [:show, :create] do member do put '/upvote' => 'comments#upvote' end end member do put '/upvote' => 'posts#upvote' end end Run rake routes to see all the routes you created. Once you've seen the routes we've configured, let's generate our controllers for posts and comments. We'll need to use the --skip-assets and --skip-template-engine flags since we'll be creating our own JavaScript files and templates. rails generate controller Posts --skip-assets --skip-template-engine rails generate controller Comments --skip-assets --skip-template-engine Since we'll be responding with the json format, we need to add the following respond_to statement in ApplicationController: protect_from_forgery with: :exception respond_to :json Let's start adding actions to our controllers. We'll be using the respond_with method in our actions to return json to our endpoints. Don't forget to permit the data coming from the user with strong parameters. We'll need an index, create, show, and upvote action to correspond with the routes we just created.
Since we're using Rails 4, we'll need to also specify which parameters we want permitted in our controllers. Permit the :link and :title attributes in PostsController: def post_params params.require(:post).permit(:link, :title) end end Add the index, create, show, and upvote actions to PostsController: def index respond_with Post.all end def create respond_with Post.create(post_params) end def show respond_with Post.find(params[:id]) end def upvote post = Post.find(params[:id]) post.increment!(:upvotes) respond_with post end private def post_params params.require(:post).permit(:link, :title) end end Permit the :body attribute for comments in CommentsController: def comment_params params.require(:comment).permit(:body) end end Add the create and upvote actions to CommentsController: def create post = Post.find(params[:post_id]) comment = post.comments.create(comment_params) respond_with post, comment end def upvote post = Post.find(params[:post_id]) comment = post.comments.find(params[:id]) comment.increment!(:upvotes) respond_with post, comment end private def comment_params params.require(:comment).permit(:body) end We respond with both post and comment in CommentsController because we are using a nested resource, although only the last object is returned when responding to json. Our Rails backend is now ready to be wired up to our Angular app!
https://thinkster.io/tutorials/angular-rails/creating-api-routes-and-controllers
How to Analyze Tweet Sentiments with PHP Machine Learning This article was peer reviewed by Wern Ancheta. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! As of late, it seems everyone and their proverbial grandma is talking about Machine Learning. Your social media feeds are inundated with posts about ML, Python, TensorFlow, Spark, Scala, Go and so on; and if you are anything like me, you might be wondering, what about PHP? The main goals of this post are: - Explore the general concepts around Machine learning and Sentiment Analysis - Review the capabilities and shortcomings of PHP-ML - Define the problem we are going to work on - Prove that trying to do Machine learning in PHP isn’t a completely crazy goal (optional) What is Machine Learning? Machine learning is a subset of Artificial Intelligence that focuses on giving “computers the ability to learn without being explicitly programmed”. This is achieved by using generic algorithms that can “learn” from a particular set of data. For example, one common usage of machine learning is classification. Classification algorithms are used to put data into different groups or categories. Some examples of classification applications are: - Market segmentation - Fraud detection Machine learning is something of an umbrella term that covers many generic algorithms for different tasks, and there are two main algorithm types classified by how they learn – supervised learning and unsupervised learning. Supervised Learning In supervised learning, we train our algorithm using labelled data in the form of an input object (vector) and a desired output value; the algorithm analyzes the training data and produces what is referred to as an inferred function which we can apply to a new, unlabelled dataset.
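To make the train-then-infer idea concrete, here is a toy supervised learner, sketched in Python rather than PHP and not tied to any library. The "training" step computes one centroid per label, and the inferred function classifies new points by their nearest centroid:

```python
# Toy illustration of supervised learning (hypothetical helpers, not PHP-ML):
# "training" computes a per-class centroid from labelled vectors, and the
# inferred function classifies new points by the nearest centroid.

def train(samples, labels):
    """Learn one centroid per label from labelled (vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in zip(samples, labels):
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    """Apply the inferred function: pick the label of the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

model = train([[1, 1], [2, 2], [8, 8], [9, 9]],
              ["small", "small", "large", "large"])
print(predict(model, [1.5, 1.0]))   # -> "small"
```

The shape is the same as any supervised pipeline: labelled examples go in, a model comes out, and the model is then applied to data it has never seen.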
For the remainder of this post we will focus on supervised learning, just because it's easier to see and validate the relationship; keep in mind that both approaches are equally important and interesting; one could argue that unsupervised learning is more useful because it does away with the labelled-data requirement. Unsupervised Learning This type of learning, on the other hand, works with unlabelled data from the get-go. We don't know the desired output values of the dataset, and we let the algorithm draw inferences from it; unsupervised learning is especially handy when doing exploratory data analysis to find hidden patterns in the data. PHP-ML Meet PHP-ML, a library that claims to be a fresh approach to Machine Learning in PHP. The library implements algorithms, neural networks, and tools to do data pre-processing, cross validation, and feature extraction. I’ll be the first to admit PHP is an unusual choice for machine learning, as the language’s strengths are not that well suited for Machine Learning applications. That said, not every machine learning application needs to process petabytes of data and do massive calculations – for simple applications, we should be able to get away with using PHP and PHP-ML. The best use case that I can see for this library right now is the implementation of a classifier, be it something like a spam filter or even sentiment analysis. We are going to define a classification problem and build a solution step by step to see how we can use PHP-ML in our projects. The Problem To exemplify the process of implementing PHP-ML and adding some machine learning to our applications, I wanted to find a fun problem to tackle, and what better way to showcase a classifier than by building a tweet sentiment analysis class. One of the key requirements needed to build successful machine learning projects is a decent starting dataset. Datasets are critical since they will allow us to train our classifier against already classified examples.
As there has recently been significant noise in the media around airlines, what better dataset to use than tweets from customers to airlines? Fortunately, a dataset of tweets is already available to us thanks to Kaggle. The Twitter US Airline Sentiment dataset can be downloaded from their site using this link. The Solution Let’s begin by taking a look at the dataset we will be working on. The raw dataset has the following columns: - tweet_id - airline_sentiment - airline_sentiment_confidence - negativereason - negativereason_confidence - airline - airline_sentiment_gold - name - negativereason_gold - retweet_count - text - tweet_coord - tweet_created - tweet_location - user_timezone It looks like the following example (side-scrollable table): The file contains 14,640 tweets, so it’s a decent dataset for us to work with. Now, with the current amount of columns we have available, we have way more data than we need for our example; for practical purposes we only care about the following columns: - text - airline_sentiment Here, text will become our feature and airline_sentiment becomes our target. The rest of the columns can be discarded as they will not be used for our exercise. Let’s start by creating the project and initializing Composer using the following file: { "name": "amacgregor/phpml-exercise", "description": "Example implementation of a Tweet sentiment analysis with PHP-ML", "type": "project", "require": { "php-ai/php-ml": "^0.4.1" }, "license": "Apache License 2.0", "authors": [ { "name": "Allan MacGregor", "email": "amacgregor@allanmacgregor.com" } ], "autoload": { "psr-4": {"PhpmlExercise\\": "src/"} }, "minimum-stability": "dev" } Then run composer install. If you need an introduction to Composer, see here. To make sure we are set up correctly, let’s create a quick script that will load our Tweets.csv data file and make sure it has the data we need.
Copy the following code as reviewDataset.php in the root of our project: <?php namespace PhpmlExercise; require __DIR__ . '/vendor/autoload.php'; use Phpml\Dataset\CsvDataset; $dataset = new CsvDataset('datasets/raw/Tweets.csv',1); foreach ($dataset->getSamples() as $sample) { print_r($sample); } Now, run the script with php reviewDataset.php, and let’s review the output: Array( [0] => 569587371693355008 ) Array( [0] => 569587242672398336 ) Array( [0] => 569587188687634433 ) Array( [0] => 569587140490866689 ) Now that doesn’t look useful, does it? Let’s take a look at the CsvDataset class to get a better idea of what’s happening internally: <?php public function __construct(string $filepath, int $features, bool $headingRow = true) { if (!file_exists($filepath)) { throw FileException::missingFile(basename($filepath)); } if (false === $handle = fopen($filepath, 'rb')) { throw FileException::cantOpenFile(basename($filepath)); } if ($headingRow) { $data = fgetcsv($handle, 1000, ','); $this->columnNames = array_slice($data, 0, $features); } else { $this->columnNames = range(0, $features - 1); } while (($data = fgetcsv($handle, 1000, ',')) !== false) { $this->samples[] = array_slice($data, 0, $features); $this->targets[] = $data[$features]; } fclose($handle); } The CsvDataset constructor takes 3 arguments: - A file-path to the source CSV - An integer that specifies the number of features in our file - A boolean to indicate if the first row is header If we look a little closer we can see that the class is mapping out the CSV file into two internal arrays: samples and targets. Samples contains all the features provided by the file and targets contains the known values (negative, positive, or neutral). Based on the above, we can see that the format our CSV file needs to follow is as follows: | feature_1 | feature_2 | feature_n | target | We will need to generate a clean dataset with only the columns we need to continue working. 
Let’s call this script generateCleanDataset.php: <?php namespace PhpmlExercise; require __DIR__ . '/vendor/autoload.php'; use Phpml\Exception\FileException; $sourceFilepath = __DIR__ . '/datasets/raw/Tweets.csv'; $destinationFilepath = __DIR__ . '/datasets/clean_tweets.csv'; $rows = []; $rows = getRows($sourceFilepath, $rows); writeRows($destinationFilepath, $rows); /** * @param $filepath * @param $rows * @return array */ function getRows($filepath, $rows) { $handle = checkFilePermissions($filepath); while (($data = fgetcsv($handle, 1000, ',')) !== false) { $rows[] = [$data[10], $data[1]]; } fclose($handle); return $rows; } /** * @param $filepath * @param string $mode * @return bool|resource * @throws FileException */ function checkFilePermissions($filepath, $mode = 'rb') { if (!file_exists($filepath)) { throw FileException::missingFile(basename($filepath)); } if (false === $handle = fopen($filepath, $mode)) { throw FileException::cantOpenFile(basename($filepath)); } return $handle; } /** * @param $filepath * @param $rows * @internal param $list */ function writeRows($filepath, $rows) { $handle = checkFilePermissions($filepath, 'wb'); foreach ($rows as $row) { fputcsv($handle, $row); } fclose($handle); } Nothing too complex, just enough to do the job. Let’s execute it with php generateCleanDataset.php. Now, let’s go ahead and point our reviewDataset.php script back to the clean dataset: Array ( [0] => @AmericanAir That will be the third time I have been called by 800-433-7300 an hung on before anyone speaks. What do I do now??? ) Array ( [0] => @AmericanAir How clueless is AA. Been waiting to hear for 2.5 weeks about a refund from a Cancelled Flightled flight & been on hold now for 1hr 49min ) BAM! This is data we can work with! So far, we have been creating simple scripts to manipulate the data. Next, we are going to start creating a new class under src/classification/SentimentAnalysis.php.
<?php namespace PhpmlExercise\Classification; /** * Class SentimentAnalysis * @package PhpmlExercise\Classification */ class SentimentAnalysis { public function train() {} public function predict() {} } Our Sentiment class will need two functions in our sentiment analysis class: - A train function, which will take our dataset training samples and labels and some optional parameters. - A predict function, which will take an unlabelled dataset and assign a set of labels based on the training data. In the root of the project create a script called classifyTweets.php. We will use this script to instantiate and test our sentiment analysis class. Here is the template that we will use: <?php namespace PhpmlExercise; use PhpmlExercise\Classification\SentimentAnalysis; require __DIR__ . '/vendor/autoload.php'; // Step 1: Load the Dataset // Step 2: Prepare the Dataset // Step 3: Generate the training/testing Dataset // Step 4: Train the classifier // Step 5: Test the classifier accuracy Step 1: Load the Dataset We already have the basic code that we can use for loading a CSV into a dataset object from our earlier examples. We are going to use the same code with a few tweaks: <?php ... use Phpml\Dataset\CsvDataset; ... $dataset = new CsvDataset('datasets/clean_tweets.csv',1); $samples = []; foreach ($dataset->getSamples() as $sample) { $samples[] = $sample[0]; } This generates a flat array with only the features – in this case the tweet text – which we are going to use to train our classifier. Step 2: Prepare the Dataset Now, having the raw text and passing that to a classifier wouldn’t be useful or accurate since every tweet is essentially different. Fortunately, there are ways of dealing with text when trying to apply classification or machine learning algorithms. For this example, we are going to make use of the following two classes: - Token Count Vectorizer: This will transform a collection of text samples to a vector of token counts.
Essentially, every word in our tweet becomes a unique number and keeps track of the number of occurrences of a word in a specific text sample. - Tf-idf Transformer: short for term frequency–inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. Let’s start with our text vectorizer: <?php ... use Phpml\FeatureExtraction\TokenCountVectorizer; use Phpml\Tokenization\WordTokenizer; ... $vectorizer = new TokenCountVectorizer(new WordTokenizer()); $vectorizer->fit($samples); $vectorizer->transform($samples); Next, apply the Tf-idf Transformer: <?php ... use Phpml\FeatureExtraction\TfIdfTransformer; ... $tfIdfTransformer = new TfIdfTransformer(); $tfIdfTransformer->fit($samples); $tfIdfTransformer->transform($samples); Our samples array is now in a format where it can easily be understood by our classifier. We are not done yet, we need to label each sample with its corresponding sentiment. Step 3: Generate the Training Dataset Fortunately, PHP-ML has this need already covered and the code is quite simple: <?php ... use Phpml\Dataset\ArrayDataset; ... $dataset = new ArrayDataset($samples, $dataset->getTargets()); We could go ahead and use this dataset and train our classifier. We are missing a testing dataset to use as validation, however, so we are going to “cheat” a little bit and split our original dataset into two: a training dataset and a much smaller dataset that will be used for testing the accuracy of our model. <?php ... use Phpml\CrossValidation\StratifiedRandomSplit; ... $randomSplit = new StratifiedRandomSplit($dataset, 0.1); $trainingSamples = $randomSplit->getTrainSamples(); $trainingLabels = $randomSplit->getTrainLabels(); $testSamples = $randomSplit->getTestSamples(); $testLabels = $randomSplit->getTestLabels(); This approach is called cross-validation.
The term comes from statistics and can be defined as follows: Cross-validation, sometimes called rotation estimation, is a model validation technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. — Wikipedia.com Step 4: Train the Classifier Finally, we are ready to go back and implement our SentimentAnalysis class. If you haven’t noticed by now, a huge part of machine learning is about gathering and manipulating the data; the actual implementation of the Machine learning models tends to be a lot less involved. To implement our sentiment analysis class, we have three classification algorithms available: - Support Vector Classification - KNearestNeighbors - NaiveBayes For this exercise we are going to use the simplest of them all, the NaiveBayes classifier, so let’s go ahead and update our class to implement the train method: <?php namespace PhpmlExercise\Classification; use Phpml\Classification\NaiveBayes; class SentimentAnalysis { protected $classifier; public function __construct() { $this->classifier = new NaiveBayes(); } public function train($samples, $labels) { $this->classifier->train($samples, $labels); } } As you can see, we are letting PHP-ML do all the heavy lifting for us. We are just creating a nice little abstraction for our project. But how do we know if our classifier is actually training and working? Time to use our testSamples and testLabels. Step 5: Test the Classifier’s Accuracy Before we can proceed with testing our classifier, we do have to implement the prediction method: <?php ... class SentimentAnalysis { ... public function predict($samples) { return $this->classifier->predict($samples); } } And again, PHP-ML is doing us a solid and doing all the heavy lifting for us. Let’s update our classifyTweets class accordingly: <?php ... 
$predictedLabels = $classifier->predict($testSamples); Finally, we need a way to test the accuracy of our trained model; thankfully PHP-ML has that covered too, and they have several metrics classes. In our case, we are interested in the accuracy of the model. Let’s take a look at the code: <?php ... use Phpml\Metric\Accuracy; ... echo 'Accuracy: '.Accuracy::score($testLabels, $predictedLabels); We should see something along the lines of: Accuracy: 0.73651877133106% Conclusion This article fell a bit on the long side, so let’s do a recap of what we’ve learned so far: - Having a good dataset from the start is critical for implementing machine learning algorithms. - The difference between supervised learning and unsupervised learning. - The meaning and use of cross-validation in machine learning. - That vectorization and transformation are essential to prepare text datasets for machine learning. - How to implement a Twitter sentiment analysis by using PHP-ML’s NaiveBayes classifier. This post also served as an introduction to the PHP-ML library and hopefully gave you a good idea of what the library can do and how it can be embedded in your own projects. Finally, this post is by no means comprehensive and there is plenty to learn, improve and experiment with; here are some ideas to get you started on how to improve things further: - Replace the NaiveBayes algorithm with the Support Vector Classification algorithm. - If you tried running against the full dataset (14,000 rows) you’d probably notice how memory-intensive the process can be. Try implementing model persistence so it doesn’t have to be trained on each run. - Move the dataset generation to its own helper class. I hope you found this article useful. If you have some application ideas regarding PHP-ML or any questions, don’t hesitate to drop them below into the comments area!
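As a recap of the pipeline above (tokenize, vectorize, train, predict, score), here is a condensed pure-Python sketch. The helper names are hypothetical and the implementation is deliberately simplified; it is not how PHP-ML works internally, but the shape of the flow is the same:

```python
# Condensed sentiment pipeline sketch: tokenize, count words per class,
# train a multinomial Naive Bayes with Laplace smoothing, score accuracy.
# Hypothetical mini-implementation for illustration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train_naive_bayes(samples, labels):
    """Count token frequencies per class and remember class priors."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for text, label in zip(samples, labels):
        for token in tokenize(text):
            word_counts[label][token] += 1
            vocab.add(token)
    return {"word_counts": word_counts, "class_counts": class_counts,
            "vocab": vocab, "total": len(labels)}

def predict(model, text):
    """Pick the class with the highest smoothed log-probability."""
    best_label, best_score = None, -math.inf
    v = len(model["vocab"])
    for label, n in model["class_counts"].items():
        score = math.log(n / model["total"])  # class prior
        denom = sum(model["word_counts"][label].values()) + v
        for token in tokenize(text):
            # +1 Laplace smoothing so unseen tokens don't zero things out
            score += math.log((model["word_counts"][label][token] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

def accuracy(model, samples, labels):
    hits = sum(predict(model, s) == l for s, l in zip(samples, labels))
    return hits / len(labels)

train_x = ["great flight thanks", "love the crew", "delayed again", "lost my bag"]
train_y = ["positive", "positive", "negative", "negative"]
model = train_naive_bayes(train_x, train_y)
print(predict(model, "the crew was great"))  # -> "positive"
```

The same three moves (turn text into counts, fit per-class statistics, score held-out samples) are what the TokenCountVectorizer, NaiveBayes, and Accuracy classes do for you in PHP-ML.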
https://www.sitepoint.com/how-to-analyze-tweet-sentiments-with-php-machine-learning/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=comments&utm_term=php/
Python client for parsing SCOTUS cases from the granted/noted and orders dockets. Project description Getting started pip install nyt-docket Using nyt-docket Command-line interface docket grants 2015 docket orders 2015 docket opinions 2015 Demo app Run the demo app. python -m docket.demo Modules Use the docket loader manually from within your Python script. Grants (new cases) Grants are cases that have been granted certiorari and will be heard by the Court in this term. The most interesting thing about a grant, besides its existence, is the question the Court will be deciding. This is associated as a separate PDF file on the Court’s site but the parser attaches it to the case as a text blob. from docket import grants g = grants.Load() g.scrape() for case in g.cases: print case.__dict__ Slip opinions (decisions) Slip opinions are decisions in cases the Court has either heard arguments on or has made a procedural decision on. These opinions are not final, but it’s the fastest way to know when the Court has acted on a case. The most important feature of a slip opinion is the opinion text, which is a separate PDF file. This is associated with the opinion as a hyperlink. from docket import slipopinions o = slipopinions.Load() o.scrape() for case in o.cases: print case.__dict__ Orders (all kinds of things) Orders are the daily business of the Court. Denials of certiorari as well as various other procedural motions are resolved in the orders list. This plugin grabs the long orders list itself as a PDF link and then parses it out into individual cases. WARNING: The individual cases rely on regex and tomfoolery. The methods for parsing them are fragile, so YMMV. from docket import orders z = orders.Load() z.scrape() z.parse() for order in z.orders: print order.__dict__ for case in z.cases: print "%s\t%s\t%s" % (case.docket, case.orders_type, case.casename)
https://pypi.org/project/nyt-docket/0.0.10/
CC-MAIN-2019-13
Probably the most obvious I/O capability is to interact with the file system. Before getting into details about creating or modifying a file, we discuss the API surface to list files and directories. This will also include macroscopic file- and directory-level operations, such as making copies or deleting files. The main classes exposing this kind of functionality are DriveInfo, DirectoryInfo, and FileInfo, all of which live in the System.IO namespace.
https://www.oreilly.com/library/view/c-50-unleashed/9780133391480/ch28lev1sec1.html
[OmniFaces utilities] The renderResponse() method signals JSF that, as soon as the current phase of the lifecycle has completed, control should be passed to the Render Response phase, bypassing any phases that have not yet been executed. Typical use cases of Faces#renderResponse() follow this scenario: at the end of the current phase, JSF should bypass the remaining phases and go directly to the Render Response phase. For example, suppose the current phase is Process Validations and, if validation succeeds, you want to skip the Update Model Values and Invoke Application phases and go directly to Render Response. Then you simply do this:

```java
import org.omnifaces.util.Faces;
...
// (e.g. at the end of the Process Validations phase invoke)
Faces.renderResponse();
```
http://omnifaces-fans.blogspot.com/2015/04/omnifaces-utilities-20-force-control-to.html
Clarango: A Clojure Driver for ArangoDB

Anybody working with or curious about ArangoDB might be interested in Stefan Edlich's work-in-progress Clojure driver for ArangoDB, Clarango. The current version is 0.3.0, and 1.0 is expected in late 2014, so obviously there is still a lot to be done, but according to the GitHub, the features list is already pretty interesting:

- various options for connecting
- document CRUD, including various options (for documentation on this, see document.clj)
- querying by example
- AQL queries (see the query namespace)
- collection management (see the collection namespace)
- database management (see the database namespace)
- graph functions (see the graph namespace)
- simple exception handling
- experimental Clojure-idiomatic collection methods like cla-assoc! and cla-conj! (see collection_ops.clj for details)

The GitHub includes instructions on installation and usage, and if you want to get really in-depth, you can check out the API docs. Clarango also has a website, but it looks like the GitHub and docs will cover most of what you need.
https://dzone.com/articles/clarango-clojure-driver
JavaFX in Spring Day 1 – Application Initialization

I spoke recently to the Spring Dallas User Group, which was a great audience, and in preparation for the talk I dusted off my Spring Framework skills (with a little help from Josh Long). Even though Spring is primarily targeted at server-side applications, you can actually do quite a nice integration between the two technologies on the client. In addition to the full talk (which you can find below), I will elaborate on some of the patterns for building Spring/JavaFX hybrid applications in this blog. To demonstrate how you can structure your application in JavaFX, I am going to build out a full Customer Data application over a 3-day blog series. For easy reference, you can flip to any of the blogs (as they are published) here:

- JavaFX in Spring Day 1 – Application Initialization
- JavaFX in Spring Day 2 – Configuration and FXML
- JavaFX in Spring Day 3 – Authentication and Authorization

And here is a small teaser shot of the application login page (the full source will be posted on GitHub on the 3rd day):

You may be thinking to yourself why you should bother learning (or applying) Spring Framework in your JavaFX applications. I am sure there are plenty of good use cases that I haven't even thought of, but here were some of my motivations:

- Modularizing the UI – Complicated JavaFX applications have many screens involved in the workflow, and it can be difficult to create a consistent structure for the pageflow of the application. By taking advantage of Spring configuration and some age-old MVC patterns, you can greatly simplify this, making it easy for others who maintain your application (maybe even yourself in 6 months' time) to follow the structure.
- Authentication and Authorization – No need to reinvent the wheel for user authentication and authorization. You can take advantage of Spring Security, the most widely used authentication system in the Java ecosystem, to also handle permissions for your JavaFX application.
- Dependency Injection – If you have UI classes with a ballooning number of constructor parameters or mandatory setter methods, then dependency injection can help you to manage the chaos. By taking advantage of Spring Bean dependencies and autowiring, you can have access to the model, controller, and other screens simply by declaring the relationships.

To start out with, let's cover the "safe" way to integrate Spring Framework into your application. Since you are not running in an application server environment, you need to manually bootstrap Spring, while also starting the JavaFX runtime. Also, the same restrictions about making UI changes on the JavaFX Application thread apply to Spring code injected into your application, so to be safe you should always execute your code on the UI thread. The following JavaFX Application main class meets all of these criteria:

```java
public class CustomerApp extends Application {
    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage stage) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(CustomerAppConfiguration.class);
        ScreensConfiguration screens = context.getBean(ScreensConfiguration.class);
        screens.setPrimaryStage(stage);
        screens.loginDialog().show();
    }
}
```

In this example I am using annotation-based config for Spring. This is my favorite way to write Spring applications for obvious reasons (and if you agree, please take a moment to sign my Freedom from XML Petition). It is important to start the AnnotationConfigApplicationContext inside of the start method, because this runs all the Spring initialization on the UI thread.
While it is possible (and possibly desirable) to run the Spring startup in a separate thread, if you happen to create a new Stage, Pop-up, or modify the Scene Graph in any small way, you will get exceptions, deadlocks, or worse! I highly recommend starting with this approach, and then selectively moving long-running operations onto worker threads if startup performance becomes an issue. Notice that the "Bean" that we are loading from the configuration is not actually a Bean, but a special ScreensConfiguration class that contains all the UI beans. This is a standard trick to allow lazy loading of Spring beans using Java Configuration (annotations), while letting you inject and access the beans directly. It isn't until we call screens.loginDialog() that the UI class will actually be instantiated. This should be enough to get you started in initializing the Spring context in your own applications. (In my Dallas UG talk I showed a simple media example configured entirely via Spring… a good experiment to try yourself.) In tomorrow's blog I will go into detail on the ScreensConfiguration class as well as share some tricks for modularizing your UI and doing dependency injection into FXML controllers. Until then, enjoy the presentation deck from the Dallas Spring User Group:

Published at DZone with permission of Stephen Chin, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/javafx-spring-day-1-%E2%80%93
Reading time: 5 – 8 minutes

Nowadays the latest versions of browsers support websockets, and it's a good idea to use them to open a permanent channel to the server and receive push notifications from it. In this case I'm going to use a Mosquitto (MQTT) server behind lighttpd with mod_websocket as the notifications server. Mosquitto is a lightweight MQTT server written in C and very easy to set up. The best advantage of using MQTT is the possibility of creating publish/subscribe queues, which is very useful when you want to have more than one notification channel. As is usual in pub/sub services, we can subscribe the client to a well-defined topic or use a pattern to subscribe to more than one topic. If you're not familiar with MQTT, now is a good moment to read a little about this interesting protocol. It's not the purpose of this post to explain MQTT basics.

A few weeks ago I set up the following architecture just for testing this idea:

The browser

Now it's time to explain this proof of concept. The HTML page contains simple Javascript code which calls the mqttws31.js library from Paho. This Javascript code connects to the server using secure websockets. It doesn't have any other security measure for now; maybe in future posts I'll explain some interesting ideas for authenticating the websocket. At the end of the post you can download all source code and configuration files. But now it's time to understand the most important parts of the client code.

```javascript
client = new Messaging.Client("ns.example.tld", 443, "unique_client_id");
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;
client.connect({onSuccess:onConnect, onFailure:onFailure, useSSL:true});
```

This part is very simple: the client connects to the server and links some callbacks to defined functions. Pay attention to the 'useSSL' connect option, which is used to force an SSL connection with the server.
There are two especially interesting functions linked to callbacks. The first one is:

```javascript
function onConnect() {
  client.subscribe("/news/+/sport", {qos:1, onSuccess:onSubscribe, onFailure:onSubscribeFailure});
}
```

As you can imagine, this callback is called when the connection is established. When that happens, the client subscribes to all channels matching '/news/+/sport', for example '/news/europe/sport' or '/news/usa/sport', etc. We could also use something like '/news/#', which says we want to subscribe to all channels starting with '/news/'. If you only want to subscribe to one channel, put the full name of the channel in that parameter. The next parameter is a dictionary with the quality of service to use, and it links two more callbacks.

The second interesting function to understand is:

```javascript
function onMessageArrived(message) {
  console.log("onMessageArrived:" + message.payloadString);
}
```

It's called when a new message is received from the server; in this example, the message is printed to the console with the log method.

The server

I used an Ubuntu 12.04 server with these extra repositories:

```
# lighttpd + mod_websocket
deb precise main
deb-src precise main

# mosquitto
deb precise main
deb-src precise main
```

With these new repositories you can install the required packages:

```
apt-get install lighttpd lighttpd-mod-websocket mosquitto mosquitto-clients
```

After installation it's very easy to run mosquitto in test mode: open a console and run the command mosquitto. You should see something like this:

```
# mosquitto
1379873664: mosquitto version 1.2.1 (build date 2013-09-19 22:18:02+0000) starting
1379873664: Using default config.
1379873664: Opening ipv4 listen socket on port 1883.
1379873664: Opening ipv6 listen socket on port 1883.
```
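The '+' and '#' wildcard semantics used in the subscription above can be made concrete with a short sketch. This is an illustrative re-implementation of MQTT topic-filter matching, not code from the Paho library:

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT topic matches a subscription filter.

    '+' matches exactly one topic level; '#' matches all remaining levels.
    """
    f_parts = topic_filter.split('/')
    t_parts = topic.split('/')
    for i, part in enumerate(f_parts):
        if part == '#':           # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):     # topic has fewer levels than the filter
            return False
        if part != '+' and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```

With this, both '/news/europe/sport' and '/news/usa/sport' match '/news/+/sport', while '/news/#' matches anything under '/news/'.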
The lighttpd configuration file for testing is:

```
server.modules = (
  "mod_websocket",
)
websocket.server = (
  "/mqtt" => (
    "host" => "127.0.0.1",
    "port" => "1883",
    "type" => "bin",
    "subproto" => "mqttv3.1"
  ),
)
server.document-root = "/var/www"
server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
server.errorlog = "/var/log/lighttpd/error.log"
server.pid-file = "/var/run/lighttpd.pid"
server.username = "www-data"
server.groupname = "www-data"
server.port = 80
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/etc/lighttpd/certs/sample-certificate.pem"
  server.name = "ns.example.tld"
}
```

Remember to change 'ssl.pemfile' to your real certificate file and 'server.name' to your real server name. Then restart lighttpd and validate the SSL configuration using something like:

```
openssl s_client -host ns.example.tld -port 443
```

You should see the SSL negotiation, and then you can try to send HTTP commands, for example "GET / HTTP/1.0". Now the server is ready.

The Test

Now load the HTML test page in your browser and check how the connection reaches the server, and how the mosquitto console shows that it received the connection. Of course, you can modify the Javascript code to print more log information and follow how the client connects to the MQTT server and subscribes to the topic pattern. If you want to publish something on the MQTT server, you can use the CLI command mosquitto_pub:

```
mosquitto_pub -h ns.example.tld -t '/news/europe/sport' -m 'this is the message about european sports'
```

Take a look at your browser's Javascript console: you should see the client print the message. If it fails, review the steps and debug each one to solve the problem. If you need help, leave me a message. Of course, you can publish messages in many different ways; for example, you could use Python code to publish messages to the MQTT server.
In the same way, you can subscribe more than just browsers to topics; for example, you could subscribe a Python client:

```python
import mosquitto

mqttc = mosquitto.Mosquitto("the_client_id")
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe
mqttc.connect("ns.example.tld", 1883, 60)
mqttc.subscribe("/news/+/sport", 0)

rc = 0
while rc == 0:
    rc = mqttc.loop()
```

Pay attention to the server port: it isn't the 'https' port (443/tcp), because now the code uses a real MQTT client. The websocket gateway isn't needed.

The files

- mqtt.tar.gz – inside this tar.gz you can find all referenced files
http://oriolrius.cat/blog/2013/09/25/server-send-push-notifications-to-client-browser-without-polling/
SPI controlling LED Strip UCS1903 - Paul Chabot

I am trying to control a strip of lights that I have using a UCS1903. I am not understanding things correctly. I understand that a square wave can be used to indicate a 1 or a 0, and that pauses in sending a signal can indicate other functions. How does this translate to sending information over SPI? I'm not sure. If I could watch what is happening on the MOSI as it is attached to the flash storage... if I connect the DIN of my light strip to MOSI and install software or boot up the machine, the lights start changing colours and lighting up... but plugged into CS1 and trying to send data... The lights take a 24-bit piece of data: r (8 bits), g (8 bits), and b (8 bits). Here is code showing how my brain is comprehending this:

```python
import onionSpi
import time

busnum = 1
deviceid = 32766
spi = onionSpi.OnionSpi(busnum, deviceid)
spi.speed = 400000
spi.delay = 1
spi.mosi = 18
spi.mode = 0
# spi.bitsPerWord = 1000
# spi.bitsPerWord = 24
# spi.modeBits = ['lsbfirst']
spi.modeBits = 0x8
spi.setupDevice()
spi.registerDevice()
# spi.bitsPerWord = 3

device_check = spi.checkDevice()
print device_check
print 'SPI MOSI GPIO: %d' % (spi.mosi)
print 'SPI Speed: %d Hz (%d kHz)' % (spi.speed, spi.speed/1000)
print 'Mode Bits: 0x%x' % (spi.modeBits)

# R7 R6 R5 R4 R3 R2 R1 R0 G7 G6 G5 G4 G3 G2 G1 G0 B7 B6 B5 B4 B3 B2 B1 B0
led_rgb = []
led_full = [0xff] * 24
led_red = [0xff] * 8
led_green = [0x00] * 8
led_blue = [0x00] * 8

# ret1 = spi.writeBytes(0x00, led_red)
# ret2 = spi.writeBytes(0x00, led_green)
# ret3 = spi.writeBytes(0x00, led_blue)
# print ret1
# print ret2
# print ret3

vals = [0x00, 0x1C, 0x07, 0x12, 0x37, 0x32, 0x29, 0x2D]
ret = spi.write(vals)
# spi.write(led_red)
# time.sleep(1)
# spi.write(led_green)
# time.sleep(1)
# spi.write(led_blue)
# time.sleep(1)
```

@Paul-Chabot: From the datasheet you linked, these UCS1903 chips are close relatives of the WS281x family of LED chain chips. To generate the required signal with SPI, you'd need to have a SPI interface available exclusively for the purpose (which you don't on the Omega2), and you need tight control of the SPI to send a long uninterrupted stream of data (which usually works on Arduino & Co, but is difficult on a Linux system). But luckily, there's hardware better suited for the task than the SPI in the Omega2's MT7688 chip: the PWM unit. It's still a bit tricky to meet all timing constraints, but definitely possible. I know because I wrote a kernel driver named p44-ledchain exactly for this purpose ;-) I needed it for Omega2-based projects like this (with 200 WS2813 LEDs). Now, the UCS1903 has slightly different timing specs than the chips already supported in p44-ledchain (WS2811, WS2812, WS2813, P9823, SK6812). Still, I think it should work with the P9823 setting. If not, just let me know and I can add another option for the UCS1903 chip to the driver. If you want to give it a try:

Install the driver (needs --force-depends because there's no 100% kernel version match):

```
cd /tmp
wget ledchain_4.4.61%2B0.9-2_mipsel_24kc.ipk
opkg install --force-depends kmod-p44-ledchain*
```

Then load the driver and configure it for a chain of 100 P9823 LEDs (see here for a detailed description of the module parameters):

```
insmod /lib/modules/4.4.61/p44-ledchain.ko ledchain0=0,100,3
```

Now you need to configure one of the GPIO pins to provide PWM0 output (GPIO 18 or 46 have that option):

```
omega2-ctrl gpiomux set pwm0 pwm;    # for GPIO 18
omega2-ctrl gpiomux set uart1 pwm01; # for GPIO 46
```

Finally, you need to connect the DIN of the first UCS1903 in the chain to the PWM0 output pin of the Omega2 (GPIO 18 or 46).
As the Omega2 is 3.3V and the UCS1903 is 5V powered, the high level coming from the Omega2 might not be sufficient for the UCS1903. In my experience with WS281x chips, it can work sometimes, but not reliably. So the right way to do it is to add a level shifter such as a 74AHCT1G125. Now you can just write a string of RGB values into /dev/ledchain0 to control the LEDs, for example:

```
echo -en '\xFF\x00\x00\x00\xFF\x00\x00\x00\xFF' >/dev/ledchain0
```

This makes the first LED red, the second green, and the third blue. There's more info and details in this forum thread.
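Building on the echo example, the same byte stream can be produced from Python — a minimal sketch, assuming the p44-ledchain driver is loaded and exposes /dev/ledchain0 as described above (the function names here are mine, not part of the driver):

```python
def build_frame(colors):
    """Pack a list of (r, g, b) tuples into the raw byte string
    that /dev/ledchain0 expects: one byte per channel, in RGB order."""
    return bytes(bytearray(channel for rgb in colors for channel in rgb))

def show(colors, device="/dev/ledchain0"):
    # Writing the whole frame in a single write() updates the chain at once.
    with open(device, "wb") as f:
        f.write(build_frame(colors))

# First LED red, second green, third blue -- the same frame as the echo example:
frame = build_frame([(0xFF, 0x00, 0x00), (0x00, 0xFF, 0x00), (0x00, 0x00, 0xFF)])
```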
http://community.onion.io/topic/2782/spi-controlling-led-strip-ucs1903
ROS

Introduction

There are three different modes of communication in ROS: topics, services, and actions. Topics and services are the most popular modes of communication in ROS. This section explains when ROS actions are a better choice than topics and services. ROS topics are used when the flow of data is continuous, like robot pose, sensor data, etc. Topics implement a publish/subscribe communication mechanism, one of the most common ways to exchange data in a distributed system, so this type of communication is a many-to-many connection. On the other hand, ROS services are used for synchronous remote procedure calls that terminate quickly. This mechanism is best suited when the remote function needs to be called infrequently and takes a limited amount of time to complete (like querying the state of a node or turning the robot on and off). Now coming to ROS actions: they are used when you send a request to a node to perform a task that takes a long time to execute and you also expect a response to that request. This sounds very similar to ROS services, but there is a major difference: services are synchronous and actions are asynchronous. Due to the asynchronous mechanism, ROS actions can be preempted, and preemption is always implemented by ROS action servers. Similar to services, an action uses a target to initiate a task and sends a result when the task is completed. But a ROS action also uses feedback to provide updates on the task's progress toward the target, and allows a task to be canceled or preempted. For example, suppose your task is to send a robot to an (x, y) position. You send a target waypoint, then move on to other running tasks while the robot moves toward the waypoint. Until your robot reaches the target, you will periodically receive ROS messages (like the current x, y position of the robot, the time elapsed, etc.).
Based on this information, you can check the status of the task: if it is not executing properly you can terminate it midway, or if something more important comes up (like a stop/brake command because an obstacle is detected), you can cancel this task and preempt it with the new one. The ROS actions architecture is explained in more detail here. Going through a few ROS tutorials on action servers will help you understand the upcoming sections better.

ROS Motion Server Framework

The ROS motion server framework makes writing a new task simple and easy. In this framework, the client sends a request via the ROS Simple Action Client, where action_type defines which task to run. There is a task manager which manages the lifecycle of the task by interfacing with the user-defined task config library to check and construct tasks. It also manages cancelation, abortion, failure, etc., and reports the task's ending state back to the Action Client. Now we will discuss the task config, where the user defines the task. The user needs to define a task handler with three key methods:

- wait_until_ready: blocks for a specified timeout until all dependencies are ready, or throws an exception if dependencies fail within the timeout.
- check_request: checks that the incoming request is valid; if so, returns an instance of the requested task.
- handle: handles the provided (by task) state and command.

The task config will become clearer when we look at an example in the last section. Additional implementation details can be found here.

Example

To set up the ROS motion server framework, clone the two repositories below into your catkin workspace and build the workspace:

```
git clone
git clone
catkin_make
```

All the required changes to add a new task will be done inside the example motion task config folder. Now we will explain how a new task can be added to this framework. Here, we are adding a new task for the PX4 flight controller. If you are not familiar with MAVROS or PX4 offboard mode, please look at this tutorial.
A high-level state machine is written inside the test_action_client.py file. Here, we will run a high-level command "uavTakeOff" by calling the SimpleActionClient server. The code snippet below requests a new task named uavTakeOff:

```python
def run(server_name):
    client = actionlib.SimpleActionClient(server_name, TaskRequestAction)
    rospy.loginfo('TaskActionClient: Waiting for server')
    client.wait_for_server()
    rospy.loginfo('TaskActionClient: server ready')
    time.sleep(60)
    request = TaskRequestGoal(action_type='uavTakeOff')
```

Add a new file highlevelcommands.py inside src, where we will write the abstract class for the "uavTakeOff" task. This class needs three methods: init, get_desired_command, and cancel. For simplicity, the cancel method is not implemented here. In the future, if you want to add a new task, just add a new class for that task with these three methods.

```python
import rospy
from task_states import TaskRunning, TaskDone
from task_commands import *
from motions_server.abstract_task import AbstractTask

class uavTakeOff(AbstractTask):
    def __init__(self, request):
        rospy.loginfo('Take off task being constructed')
        self.status = 0
        self.altitude = request.pose.pose.position.z

    def get_desired_command(self):
        if self.status == 0:
            self.status = 1
            return TaskRunning(), FlightMode(mode='OFFBOARD', arm=1, altitude=self.altitude)
        else:
            return TaskDone(), StringCommand(data='Task done')

    def cancel(self):
        return True
```

In the above class, we introduced an additional command, FlightMode. All the parameters defined in this class are self-explanatory if you are aware of the PX4 offboard mode. This class is defined in the file task_commands.py:

```python
class FlightMode(object):
    def __init__(self, mode='', arm=0, altitude=0):
        self.mode = mode
        self.arm = arm
        self.altitude = altitude
        self.sp = PositionTarget()
        self.sp.type_mask = int('010111111000', 2)  # LOCAL_NED
        self.sp.coordinate_frame = 1
```

The task manager is defined in task_config.py.
Here we will define the task handler and the required methods check_request, wait_until_ready, and handle_task_running, which were already explained in the previous section. In the init method we create all the required objects (subscribers/publishers/services).

```python
class CommandHandler(AbstractCommandHandler):
    def __init__(self):
        # Initializing the services and publishers/subscribers
        rospy.wait_for_service('mavros/cmd/arming')
        rospy.wait_for_service('mavros/cmd/takeoff')
        rospy.wait_for_service('mavros/set_mode')
        try:
            self._armService = rospy.ServiceProxy('mavros/cmd/arming', mavros_msgs.srv.CommandBool)
            self._flightModeService = rospy.ServiceProxy('mavros/set_mode', mavros_msgs.srv.SetMode)
        except rospy.ServiceException, e:
            rospy.logerr("Service call failed: %s" % e)
        self._sp_pub = rospy.Publisher('mavros/setpoint_raw/local', PositionTarget, queue_size=1)

    def check_request(self, request):
        if request.action_type == 'uavTakeOff':
            self._taskname = request.action_type
            return uavTakeOff(request)

    def wait_until_ready(self, timeout):
        return

    def _handle_task_running(self, command):
        rospy.loginfo("Handle task running: ")
        if self._taskname == "uavTakeOff":
            if command.arm > 0 and command.altitude > 0:
                # Arming the vehicle
                try:
                    self._armService(bool(command.arm))
                except rospy.ServiceException, e:
                    rospy.logerr("Service arming call failed: %s" % e)
                # Put the vehicle in offboard mode.
                # We need to send a few setpoint messages before activating
                # OFFBOARD mode for it to take effect.
                k = 0
                while k < 10:
                    self._sp_pub.publish(command.sp)
                    self._rate.sleep()
                    k = k + 1
                try:
                    self._flightModeService(custom_mode=command.mode)
                except rospy.ServiceException, e:
                    rospy.logerr("service set_mode call failed: %s. Offboard Mode could not be set." % e)
                command.sp.position.z = command.altitude
                while abs(self._local_pos.pose.pose.position.z - command.altitude) > 0.2:
                    self._sp_pub.publish(command.sp)
                    self._rate.sleep()
```

If you want to query the current state of the task, you can add that part of the code in task_states.py.
You can find the above code here as well.
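To close with the action semantics underlying this framework: the goal/feedback/result/preempt lifecycle from the introduction can be sketched in plain Python, with no ROS dependency. This toy class only mimics the shape of an actionlib server; none of these names come from the actionlib API:

```python
import threading

class ToyActionServer:
    """Toy illustration of an asynchronous action: goal in, feedback
    published during execution, result at the end, preemption honored."""

    def __init__(self, execute_cb):
        self._execute_cb = execute_cb
        self._preempted = threading.Event()
        self.feedback = []
        self.result = None

    def send_goal(self, goal):
        # The goal runs asynchronously, like a ROS action server.
        self._thread = threading.Thread(target=self._run, args=(goal,))
        self._thread.start()

    def _run(self, goal):
        self.result = self._execute_cb(goal, self)

    def publish_feedback(self, value):
        self.feedback.append(value)

    def preempt(self):
        self._preempted.set()

    def is_preempt_requested(self):
        return self._preempted.is_set()

    def wait(self):
        self._thread.join()

def count_to(goal, server):
    """Example task: count up to the goal, publishing feedback each step
    and honoring preemption between steps."""
    for i in range(goal):
        if server.is_preempt_requested():
            return "preempted"
        server.publish_feedback(i)
    return "done"
```

Sending a goal of 3 and waiting leaves the result "done" with feedback [0, 1, 2]; requesting preemption before the task checks its flag yields "preempted" instead, which is the key difference from a synchronous service call.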
https://roboticsknowledgebase.com/wiki/common-platforms/ros/ros-motion-server-framework/
CC-MAIN-2021-10
Usage

Typescript Import Format:

```typescript
// To import this class, use the format below.
import {urlPathAdapter} from "ojs/ojrouter";
```

For additional information visit: Oracle® JavaScript Extension Toolkit (JET) 10.0.0, F32683-01

Url adapter used by oj.Router to manage URLs in the form of /book/chapter2. The UrlPathAdapter is the default adapter used by the router, as it makes more human-readable URLs, is user-friendly, and is less likely to exceed the maximum character limit in the browser URL. Since this adapter generates path URLs, it's advisable that your application be able to restore the page should the user bookmark or reload it. For instance, given the URL /book/chapter2, your application server ought to serve up content for "chapter2" if the user should bookmark or reload the page. If that's not possible, then consider using the urlParamAdapter. There are two available URL adapters, this one and the urlParamAdapter. To change the URL adapter, use the urlAdapter property.
https://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.Router.urlPathAdapter.html
Use a TLS/SSL certificate in your code in Azure App Service

In your application code, you can access the public or private certificates you add to App Service. Your app code may act as a client and access an external service that requires certificate authentication, or it may need to perform cryptographic tasks. This how-to guide shows how to use public or private certificates in your application code.

This approach to using certificates in your code makes use of the TLS functionality in App Service, which requires your app to be in Basic tier or above. If your app is in Free or Shared tier, you can include the certificate file in your app repository. When you let App Service manage your TLS/SSL certificates, you can maintain the certificates and your application code separately and safeguard your sensitive data.

Prerequisites

To follow this how-to guide:

Find the thumbprint

In the Azure portal, from the left menu, select App Services > <app-name>. From the left navigation of your app, select TLS/SSL settings, then select Private Key Certificates (.pfx) or Public Key Certificates (.cer). Find the certificate you want to use and copy the thumbprint.

Make the certificate accessible

To access a certificate in your app code, add its thumbprint to the WEBSITE_LOAD_CERTIFICATES app setting by running the following command in the Cloud Shell:

```
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_LOAD_CERTIFICATES=<comma-separated-certificate-thumbprints>
```

To make all your certificates accessible, set the value to *.

Load certificate in Windows apps

The WEBSITE_LOAD_CERTIFICATES app setting makes the specified certificates accessible to your Windows-hosted app in the Windows certificate store, in Current User\My. In C# code, you access the certificate by the certificate thumbprint. The following code loads a certificate with the thumbprint E661583E8FABEF4C0BEF694CBC41C28FB81CD870.
```csharp
using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;

string certThumbprint = "E661583E8FABEF4C0BEF694CBC41C28FB81CD870";
bool validOnly = false;

using (X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUser))
{
    certStore.Open(OpenFlags.ReadOnly);

    X509Certificate2Collection certCollection = certStore.Certificates.Find(
        X509FindType.FindByThumbprint,
        // Replace below with your certificate's thumbprint
        certThumbprint,
        validOnly);

    // Get the first cert with the thumbprint
    X509Certificate2 cert = certCollection.OfType<X509Certificate2>().FirstOrDefault();

    if (cert is null)
        throw new Exception($"Certificate with thumbprint {certThumbprint} was not found");

    // Use certificate
    Console.WriteLine(cert.FriendlyName);

    // Consider calling Dispose() on the certificate after it's been used (available in .NET 4.6 and later)
}
```

In Java code, you access the certificate from the "Windows-MY" store using the Subject Common Name field (see Public key certificate). The following code shows how to load a private key certificate:

```java
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.PrivateKey;
...
KeyStore ks = KeyStore.getInstance("Windows-MY");
ks.load(null, null);
Certificate cert = ks.getCertificate("<subject-cn>");
PrivateKey privKey = (PrivateKey) ks.getKey("<subject-cn>", ("<password>").toCharArray());
// Use the certificate and key
...
```

For languages that don't support or offer insufficient support for the Windows certificate store, see Load certificate from file.

Load certificate from file

If you need to load a certificate file that you upload manually, it's better to upload the certificate using FTPS instead of Git, for example. You should keep sensitive data like a private certificate out of source control.
Note: ASP.NET and ASP.NET Core on Windows must access the certificate store even if you load a certificate from a file. To load a certificate file in a Windows .NET app, load the current user profile with the following command in the Cloud Shell:

az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_LOAD_USER_PROFILE=1

The following C# example loads a public certificate from a relative path in your app:

using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;
...
var bytes = File.ReadAllBytes("~/<relative-path-to-cert-file>");
var cert = new X509Certificate2(bytes);
// Use the loaded certificate

To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Java, or Ruby, see the documentation for the respective language or web platform.

Load certificate in Linux/Windows containers

The WEBSITE_LOAD_CERTIFICATES app setting makes the specified certificates accessible to your Windows or Linux container apps (including built-in Linux containers) as files. The files are found under the following directories: The certificate file names are the certificate thumbprints.

Note: App Service injects the certificate paths into Windows containers as the following environment variables: WEBSITE_PRIVATE_CERTS_PATH, WEBSITE_INTERMEDIATE_CERTS_PATH, WEBSITE_PUBLIC_CERTS_PATH, and WEBSITE_ROOT_CERTS_PATH. It's better to reference the certificate path with the environment variables instead of hardcoding the certificate path, in case the certificate paths change in the future. In addition, Windows Server Core containers load the certificates into the certificate store automatically, in LocalMachine\My. To load the certificates, follow the same pattern as Load certificate in Windows apps.
For Windows Nano based containers, use the file paths provided above to load the certificate directly from a file. The following C# code shows how to load a public certificate in a Linux app.

using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;
...
var bytes = File.ReadAllBytes("/var/ssl/certs/<thumbprint>.der");
var cert = new X509Certificate2(bytes);
// Use the loaded certificate

To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Java, or Ruby, see the documentation for the respective language or web platform.
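Since the file layout above is language-agnostic, here is a minimal Python sketch of the same file-based load for a Linux app. It is an illustration only: the helper name is ours, and it assumes the documented /var/ssl/certs location plus the WEBSITE_PUBLIC_CERTS_PATH variable from the note above (documented for Windows containers) as an optional override.

```python
import os
from pathlib import Path

def load_public_cert_der(thumbprint: str) -> bytes:
    """Read the DER bytes of a public certificate by thumbprint (sketch)."""
    # Prefer the injected environment variable over a hardcoded path,
    # falling back to the documented Linux location for public certs.
    base = os.environ.get("WEBSITE_PUBLIC_CERTS_PATH", "/var/ssl/certs")
    return Path(base, f"{thumbprint}.der").read_bytes()
```

The returned bytes can then be handed to your TLS library of choice, just as the C# sample passes them to X509Certificate2.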
https://docs.microsoft.com/en-us/azure/app-service/configure-ssl-certificate-in-code?WT.mc_id=AZ-MVP-5002040
CC-MAIN-2021-10
en
refinedweb
The Ultimate Guide of Feature Importance in Python

Want to share your content on python-bloggers? click here.

Feature Importance is a score assigned to the features of a Machine Learning model that defines how important a feature is to the model's prediction. It can help in feature selection, and we can get very useful insights about our data. We will show you how you can get it in the most common models of machine learning. We will use the famous Titanic Dataset from Kaggle.

import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.feature_extraction.text import CountVectorizer

# we used only the train dataset from Titanic
data = pd.read_csv('train.csv')
data = data[['Sex','Age','Embarked','Pclass','SibSp','Parch','Survived']]
data.dropna(inplace=True)

Feature Importance in Sklearn Linear Models

model = LogisticRegression(random_state=1)
features = pd.get_dummies(data[['Sex','Embarked','Pclass','SibSp','Parch']], drop_first=True)
features['Age'] = data['Age']
model.fit(features, data['Survived'])
feature_importance = pd.DataFrame({'feature': list(features.columns),
                                   'feature_importance': [abs(i) for i in model.coef_[0]]})
feature_importance.sort_values('feature_importance', ascending=False)

# if you don't want the absolute value
# feature_importance = pd.DataFrame({'feature': list(features.columns),
#                                    'feature_importance': [i for i in model.coef_[0]]})
# feature_importance.sort_values('feature_importance', ascending=False)

   feature     feature_importance
3  Sex_male    2.501471
0  Pclass      1.213811
4  Embarked_Q  0.595491
5  Embarked_S  0.380094
1  SibSp       0.336785
6  Age         0.042501
2  Parch       0.029937

As you can see, we took the absolute value of the coefficients because we want the importance of a feature whether its effect is negative or positive. If you want to keep this information, you can remove the absolute function from the code.
Keep in mind that you will not have this option when using tree-based models like Random Forest or XGBoost.

Feature Importance in Sklearn Ensemble Models

model = RandomForestClassifier()
model.fit(features, data['Survived'])
feature_importances = pd.DataFrame({'features': features.columns,
                                    'feature_importance': model.feature_importances_})
feature_importances.sort_values('feature_importance', ascending=False)

   features    feature_importance
6  Age         0.416853
3  Sex_male    0.288845
0  Pclass      0.145641
1  SibSp       0.063167
2  Parch       0.052152
5  Embarked_S  0.025383
4  Embarked_Q  0.007959

Feature Importance in Stats Models

model = smf.logit('Survived~Sex+Age+Embarked+Pclass+SibSp+Parch', data=data)
result = model.fit()
feature_importances = pd.DataFrame(result.conf_int()[1]).rename(columns={1:'Coefficients'}).eval("absolute_coefficients=abs(Coefficients)")
feature_importances.sort_values('absolute_coefficients', ascending=False).drop('Intercept')[['absolute_coefficients']]

               absolute_coefficients
Sex[T.male]    2.204154
Pclass         0.959873
Embarked[T.Q]  0.329163
Parch          0.192208
SibSp          0.103804
Embarked[T.S]  0.084723
Age            0.027517

Feature Importance in XGBoost

model = XGBClassifier()
model.fit(features, data['Survived'])
feature_importances = pd.DataFrame({'features': features.columns,
                                    'feature_importance': model.feature_importances_})
print(feature_importances.sort_values('feature_importance', ascending=False))

   features    feature_importance
3  Sex_male    0.657089
0  Pclass      0.163064
1  SibSp       0.067181
6  Age         0.041643
5  Embarked_S  0.029463
2  Parch       0.027073
4  Embarked_Q  0.014488

Feature Importance when using a Word Vectorizer

In most cases, when we are dealing with text we apply a word vectorizer like Count or TF-IDF. The features we feed our model are then a sparse matrix, not a structured data frame with column names. However, we can still get the feature importances using the following technique. We are using a dataset from Kaggle which is about spam or ham message classification.
This will be interesting because words with high importance are words that, if contained in a message, make the message more likely to be spam.

df = pd.read_csv('SPAM text message 20170820 - Data.csv')
df.head()

  Category  Message
0 ham       Go until jurong point, crazy.. Available only ...
1 ham       Ok lar... Joking wif u oni...
2 spam      Free entry in 2 a wkly comp to win FA Cup fina...
3 ham       U dun say so early hor... U c already then say...
4 ham       Nah I don't think he goes to usf, he lives aro...

v = CountVectorizer(ngram_range=(1,1))
x = v.fit_transform(df['Message'])
model = LogisticRegression()
model.fit(x, df['Category'])

# we are not taking the absolute value
feature_importance = pd.DataFrame({'feature': v.get_feature_names(),
                                   'feature_importance': [i for i in model.coef_[0]]})
feature_importance.sort_values('feature_importance', ascending=False).head(10)

      feature   feature_importance
2978  error     2.606383
7982  txt       2.178409
6521  ringtone  1.788390
7640  text      1.777959
8012  uk        1.717855
1824  call      1.709997
6438  reply     1.643512
1975  chat      1.528649
5354  new       1.441076
8519  won       1.436101

Here we can see how useful feature importance can be. From the example above we learn that the word "error" is very important when classifying a message. In other words, because we didn't take the absolute value, we can say that if this word is contained in a message, then the message is most likely spam.
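The sort-by-(absolute)-coefficient pattern used throughout this post doesn't actually require pandas; a minimal pure-Python sketch (the function name is ours) captures the idea:

```python
def rank_features(names, coefs, absolute=True):
    """Pair feature names with coefficients and sort by importance.

    With absolute=True, a large negative coefficient ranks as high as a
    large positive one, mirroring the abs() trick used above.
    """
    key = (lambda pair: abs(pair[1])) if absolute else (lambda pair: pair[1])
    return sorted(zip(names, coefs), key=key, reverse=True)
```

Called with something like feature column names and a fitted model's first coefficient row, it would reproduce the rankings shown in the tables above.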
https://python-bloggers.com/2021/02/the-ultimate-guide-of-feature-importance-in-python/
Love Meter is a silly, but fun, novelty app. If you can’t decide whether to start or continue a relationship with someone, the Love Meter app can tell you how much chemistry you and your potential mate have. All you need to do is to convince the other person to hold their finger on your phone’s screen for about 8 seconds while you do the same. While you do this, Love Meter shows a beating heart and progress bar as it performs its analysis and then reports how much love exists between you two, on a scale from 0 to 100%. A red heart fills a white heart border with the same percentage, to help you visualize the percentage somewhat like a pie chart. This app demonstrates a new category of animations: keyframe animations. Love Meter doesn’t actually measure anything! It should go without saying, but this app is for entertainment purposes only. Windows phones do not (yet!) have a sensor for measuring human chemistry. This app requires multi-touch! Therefore, you cannot test it as-is on the emulator unless your computer supports multi-touch.

Keyframe Animations

Sometimes the animation behavior you desire cannot be represented by linear interpolation or any of the built-in easing functions.
For example, Love Meter performs its heartbeat animation by animating the scale of a vector heart graphic marked with the following ScaleTransform:

[code]
<!-- The solid red heart with a complex geometry -->
<Path ...>
  <Path.RenderTransform>
    <!-- The target of the heartbeat and final animations -->
    <ScaleTransform x:Name="HeartScale" ScaleX="0" ScaleY="0"/>
  </Path.RenderTransform>
</Path>
[/code]

To make its scale grow in a heartbeat pattern (with alternating small and large "beats"), you might try to create a multi-animation storyboard as follows (for ScaleX, at least):

[code]
<!-- Horizontal stretching and shrinking -->
<Storyboard x:Name="HeartbeatStoryboard"
            Storyboard.TargetName="HeartScale"
            Storyboard.TargetProperty="ScaleX" RepeatBehavior="6x">
  <DoubleAnimation BeginTime="0:0:0" To="0"/>
  <DoubleAnimation BeginTime="0:0:.4" To="0"/>
  <DoubleAnimation BeginTime="0:0:.6" To=".5"/>
  <DoubleAnimation BeginTime="0:0:.8" To="0"/>
  <DoubleAnimation BeginTime="0:0:1" To="1"/>
  <DoubleAnimation BeginTime="0:0:1.4" To="0"/>
</Storyboard>
[/code]

However, this fails with an InvalidOperationException that explains, "Multiple animations in the same containing Storyboard cannot target the same property on a single element." You could split this up into multiple storyboards and start one when another one ends, but that's cumbersome to manage. Instead, you can use a keyframe animation that supports as many distinct segments as you want. Keyframe animations enable you to specify any number of keyframes—specific property values at specific times—rather than being limited to a single From value and a single To value.
For example, the preceding heartbeat animation can be correctly written as follows:

[code]
<!-- Horizontal stretching and shrinking -->
<Storyboard x:Name="HeartbeatStoryboard"
            Storyboard.TargetName="HeartScale"
            Storyboard.TargetProperty="ScaleX" RepeatBehavior="6x">
  <DoubleAnimationUsingKeyFrames>
    <LinearDoubleKeyFrame KeyTime="0:0:0" Value="0"/>
    <LinearDoubleKeyFrame KeyTime="0:0:.4" Value="0"/>
    <LinearDoubleKeyFrame KeyTime="0:0:.6" Value=".5"/>
    <LinearDoubleKeyFrame KeyTime="0:0:.8" Value="0"/>
    <LinearDoubleKeyFrame KeyTime="0:0:1" Value="1"/>
    <LinearDoubleKeyFrame KeyTime="0:0:1.4" Value="0"/>
  </DoubleAnimationUsingKeyFrames>
</Storyboard>
[/code]

This only animates ScaleX, so Love Meter uses an identical keyframe animation for ScaleY as well, to grow and shrink the heart's scale in sync. The use of keyframes requires a keyframe-enabled animation class. For this case, DoubleAnimation's companion DoubleAnimationUsingKeyFrames class is used. The other three animation classes have corresponding keyframe classes as well. The keyframe animation classes have the same properties as their counterparts except for From, To, and By, as that information is represented inside each child keyframe. Interpolation can be done between each keyframe, and the interpolation can be different between each pair. This is based on which type of keyframe is used, out of the four available:

- Linear keyframes—Perform basic linear interpolation.
- Easing keyframes—Perform interpolation based on the specified easing function.
- Spline keyframes—Perform interpolation based on a spline object that describes the desired motion as a cubic Bézier curve.
- Discrete keyframes—Perform no interpolation; the value jumps to the new value at the appropriate time.

Inside DoubleAnimationUsingKeyFrames, you choose from the four types of keyframes by using a LinearDoubleKeyFrame, EasingDoubleKeyFrame, SplineDoubleKeyFrame, or DiscreteDoubleKeyFrame. Inside ColorAnimationUsingKeyFrames, you choose by using a LinearColorKeyFrame, EasingColorKeyFrame, SplineColorKeyFrame, or DiscreteColorKeyFrame. And so on. The type of keyframe chosen affects the interpolation between the previous value and its own value. Linear and easing keyframes enable the same familiar capabilities as nonkeyframe animations, but on a per-keyframe basis.
Spline and discrete behavior is specific to keyframe animations. Figure 14.1 illustrates the motion enabled by applying the following storyboard to a heart on a canvas:

[code]
<Storyboard x:Name="Figure14_1_Storyboard" Storyboard.TargetName="Heart">
  <!-- Move the heart vertically in a complicated pattern -->
  <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(Canvas.Top)">
    <LinearDoubleKeyFrame Value="0" KeyTime="0:0:0"/>
    <LinearDoubleKeyFrame Value="200" KeyTime="0:0:1"/>
    <DiscreteDoubleKeyFrame Value="0" KeyTime="0:0:2"/>
    <LinearDoubleKeyFrame Value="0" KeyTime="0:0:3"/>
    <SplineDoubleKeyFrame Value="200" KeySpline="0,1 1,0" KeyTime="0:0:4"/>
  </DoubleAnimationUsingKeyFrames>
  <!-- Move the heart horizontally (linearly) at the same time -->
  <DoubleAnimation Storyboard.TargetProperty="(Canvas.Left)"
                   From="0" To="500" Duration="0:0:4"/>
</Storyboard>
[/code]

The type of the first keyframe never matters, as there's no previous value from which to interpolate. In this example, the type of the fourth keyframe is also irrelevant because the keyframe's value (0) is identical to the preceding value.

Spline Keyframes and Bézier Curves

The spline keyframe classes have a KeySpline property that defines the interpolation as a cubic Bézier curve. Bézier curves (named after engineer Pierre Bézier) are commonly used in computer graphics for representing smooth curves, and are even used by fonts to mathematically describe curves in their glyphs. The basic idea is that in addition to two endpoints, a Bézier curve has one or more control points that give the line segment its curve. These control points are not visible (and not necessarily on the curve itself) but rather are used as input to a formula that dictates where each point on the curve exists.
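The per-keyframe interpolation rules described above aren't XAML-specific. A small Python model (our own sketch, covering only linear and discrete keyframes) shows how a track's value is derived at a given time:

```python
def sample(keyframes, t):
    """Evaluate a keyframe track at time t.

    keyframes: time-ordered list of (time, value, kind) tuples, with
    kind being "linear" or "discrete". Before the first keyframe the
    first value holds; after the last keyframe, the last value holds.
    """
    prev_t, prev_v, _ = keyframes[0]
    if t <= prev_t:
        return prev_v
    for kt, kv, kind in keyframes[1:]:
        if t <= kt:
            if kind == "discrete":
                # No interpolation: jump to the new value exactly at kt.
                return kv if t == kt else prev_v
            # Linear interpolation between the surrounding keyframes.
            return prev_v + (kv - prev_v) * (t - prev_t) / (kt - prev_t)
        prev_t, prev_v = kt, kv
    return prev_v
```

For instance, the Canvas.Top track from the storyboard above (spline keyframe omitted) maps to [(0, 0, "linear"), (1, 200, "linear"), (2, 0, "discrete"), (3, 0, "linear")]: the value rises linearly to 200 during the first second, holds 200, then snaps to 0 at the two-second mark.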
Intuitively, each control point acts like a center of gravity, so the line segment appears to be “pulled” toward these points.The control points specified inside KeySpline are relative, where the start of the curve is 0 and the end is 1. Finding the right value for KeySpline that gives the desired effect can be tricky and almost certainly requires the use of a design tool such as Expression Blend. But several free tools can be found online that help you visualize Bézier curves based on the specified control points. The User Interface Love Meter has a single page besides its instructions and about pages, whose code isn’t shown in this chapter. Listing 14.1 contains the main page’s XAML. LISTING 14.1 MainPage.xaml—The Main User Interface for Love Meter [code] <phone:PhoneApplicationPage x:Class=”WindowsPhoneApp.MainPage” xmlns=”” xmlns:x=”” xmlns:phone=”clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone” xmlns:shell=”clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone” FontFamily=”{StaticResource PhoneFontFamilyNormal}” FontSize=”{StaticResource PhoneFontSizeNormal}” Foreground=”{StaticResource PhoneForegroundBrush}” SupportedOrientations=”Portrait” shell:SystemTray.IsVisible=”True”> <!– The application bar, for instructions and about –> <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar> > <!– Add three storyboards to the page’s resource dictionary –> <phone:PhoneApplicationPage.Resources> <!– The storyboard for the heartbeat scale animation (repeated 6 times) –> <Storyboard x:Name=”HeartbeatStoryboard” Storyboard.TargetName=”HeartScale” RepeatBehavior=”6x”> <!– The horizontal stretching and shrinking –> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty=”Scale> <!– The vertical stretching and shrinking (in sync) –> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty=”Scale> <!– Ensure the result text is hidden when beginning –> <DoubleAnimationUsingKeyFrames Storyboard.TargetName=”ResultTextBlock” 
Storyboard.TargetProperty=”Opacity”> <DiscreteDoubleKeyFrame KeyTime=”0:0:0” Value=”0”/> </DoubleAnimationUsingKeyFrames> </Storyboard> <!– The storyboard that animates the progress bar –> <Storyboard x:Name=”ProgressStoryboard” Completed=”ProgressStoryboard_Completed”> <!– Show the progress bar at the beginning and hide it at the end –> <DoubleAnimationUsingKeyFrames Storyboard.TargetName=”ProgressPanel” Storyboard.TargetProperty=”Opacity”> <DiscreteDoubleKeyFrame KeyTime=”0:0:0” Value=”1”/> <DiscreteDoubleKeyFrame KeyTime=”0:0:8.4” Value=”0”/> </DoubleAnimationUsingKeyFrames> <!– Animate its value from 0 to 100% –> <DoubleAnimation From=”0” To=”100” Duration=”0:0:8.4” Storyboard.TargetName=”ProgressBar” Storyboard.TargetProperty=”Value”/> </Storyboard> <!– A final random animation before displaying the result –> <Storyboard x:Name=”FinalStoryboard” Storyboard.TargetName=”HeartScale”> <!– Horizontal stretching and shrinking, set via code-behind –> <DoubleAnimationUsingKeyFrames x:Name=”FinalAnimationX” Storyboard.TargetProperty=”ScaleX”/> <!– Vertical stretching and shrinking, set via code-behind –> <DoubleAnimationUsingKeyFrames x:Name=”FinalAnimationY” Storyboard.TargetProperty=”ScaleY”/> <!– Show the result at the end of the animation –> <DoubleAnimationUsingKeyFrames Storyboard.TargetName=”ResultTextBlock” Storyboard.TargetProperty=”Opacity”> <DiscreteDoubleKeyFrame KeyTime=”0:0:1” Value=”1”/> </DoubleAnimationUsingKeyFrames> </Storyboard> </phone:PhoneApplicationPage.Resources> <!– Transparent background to receive touch input –> <Grid Background=”Transparent”> <!– Mini-header –> <TextBlock Text=”LOVE METER” Margin=”24,16,0,12” Style=”{StaticResource PhoneTextTitle0Style}”/> <!– The progress bar and corresponding text block –> <StackPanel x:Name=”ProgressPanel” Opacity=”0” Margin=”0,60,0,0”> <TextBlock Margin=”24,0” Text=”Measuring chemistry…”/> <ProgressBar x:Name=”ProgressBar” VerticalAlignment=”Top” Margin=”12,24”/> </StackPanel> <!– The solid red heart 
with a complex geometry –> <Path Width=”436” Stretch=”Uniform” Fill=”#E51400” Margin=”12,0” HorizontalAlignment=”Center” VerticalAlignment=”Center” RenderTransformOrigin=”.5,.5” Data=”F1 M 349.267,270.347C 374.787,266.867 401.253,269.427 425.267,278.92C 453.48,289.173 477.067,309.027 496.333,331.6C 507.533,345.013 516.68,360 524.547,375.56C 527.587,381.733 529.893,388.253 533.333,394.24C 537.573,386.76 540.2,378.52 544.467,371.08C 555.253,351.573 567.667,332.667 583.84,317.173C 597.32,303.027 613.707,291.773 631.36,283.467C 660.36,269.16 694.16,265.547 725.76,271.92C 746.72,276.547 766.8,285.627 783.72,298.92C 799.147,311.693 812.573,327.133 821.52,345.173C 831.867,366.267 837.827,389.773 837.373,413.333C 838.707,448.413 829.133,483.093 814.987,514.933C 793.107,563.24 760.693,606.053 724.373,644.413C 712.653,658 699.253,669.973 686.2,682.213C 640.48,724.373 590.373,761.52 538.667,795.96C 536.653,797.013 534.6,798.733 532.213,798.493C 528.067,796.613 524.6,793.573 520.84,791.067C 468.253,756.28 417.973,717.8 372.107,674.493C 356,659.96 341.453,643.813 326.933,627.733C 311.28,609.84 296.267,591.293 283.4,571.28C 265.067,544.44 250.013,515.24 239.92,484.307C 233.48,462.133 228.32,439.227 229.28,416C 228.64,403.027 230.867,390.187 233.533,377.547C 241.507,342.733 263.187,311.213 293.16,291.72C 310.107,280.76 329.293,273.32 349.267,270.347 Z”> <Path.RenderTransform> <!– The target of the heartbeat and final animations –> <ScaleTransform x:Name=”HeartScale” ScaleX=”0” ScaleY=”0”/> </Path.RenderTransform> </Path> <!– The same heart with no fill and outlined in white or black –> <Path Width=”456” Stretch=”Uniform” StrokeThickness=”12” Margin=”12,0” Stroke=”{StaticResource PhoneForegroundBrush}” Data=”…”/> <!– A text block for displaying the resulting percentage –> <TextBlock x:Name=”ResultTextBlock” Opacity=”0” FontSize=”108” HorizontalAlignment=”Center” VerticalAlignment=”Center”/> </Grid> </phone:PhoneApplicationPage> [/code] Notes: - HeartbeatStoryboard contains the 
keyframe animation shown earlier for the horizontal component of the beating visualization (ScaleX), as well as one for the vertical component (ScaleY). With RepeatBehavior on the storyboard, the beat pattern occurs six times.
- HeartbeatStoryboard also contains a keyframe animation that "animates" the result text (shown at the end of the whole process) to an opacity of 0. This is done for the benefit of subsequent runs during the same session, because the result text already has an opacity of 0 before the first run. Rather than making the text fade out, the animation instantly sets the opacity to 0 with a single discrete keyframe that takes effect at the start of the storyboard.
- The first animation inside ProgressStoryboard uses the same technique to instantly show the progress bar and its text block (inside ProgressPanel) at the start of the storyboard and instantly hide it at the end. The normal DoubleAnimation smoothly and linearly animates the progress bar's value from 0 to 100 over the course of 8.4 seconds, which is how long it takes for HeartbeatStoryboard to finish. The code-behind initiates HeartbeatStoryboard and ProgressStoryboard simultaneously when two fingers touch the screen. The progress UI inside ProgressPanel is shown in Figure 14.2.
- FinalStoryboard is started by code-behind after HeartbeatStoryboard and ProgressStoryboard finish. It randomly shrinks and stretches the heart for a second before revealing ResultTextBlock. This is done by adding keyframes with random values in code-behind.
- The grid uses a transparent background, so the fingers can be pressed anywhere on the screen and the appropriate event gets raised.
- The heart is vector-based, so it can scale to any size and still look crisp. Although not shown here, the outline's Data property is set to the same long string used for the heart's Data property.
- On the ScaleTransform, ScaleX is a multiplier for the element's width, and ScaleY is a multiplier for its height.
(A ScaleX value of 0.5 shrinks an element's rendered width in half, whereas a ScaleX value of 2 doubles the width.) Their default value is 1, but they are both initialized to 0 in this case, so the red heart is initially invisible.

How do transforms such as ScaleTransform affect the values returned by an element's ActualHeight and ActualWidth properties? Applying a transform never changes the values of these properties. Therefore, because of transforms, these properties can "lie" about the size of an element on the screen. For example, the red heart in Love Meter always reports 436 as its ActualWidth value, despite the initial ScaleTransform that makes its actual size 0. Such "lies" might surprise you, but they're for the best. First, it's debatable how such values should even be expressed for some transforms. More importantly, the point of transforms is to alter an element's appearance without the element's knowledge. Giving elements the illusion that they are being rendered normally enables custom elements to be plugged in and transformed without special handling.

The Code-Behind

Listing 14.2 contains the code-behind for the main page.

LISTING 14.2 MainPage.xaml.cs—The Code-Behind for Love Meter's Main Page

[code]
using System;
using System.Windows.Input;
using System.Windows.Media.Animation;
using System.Windows.Navigation;
using Microsoft.Phone.Controls;

namespace WindowsPhoneApp
{
  public partial class MainPage : PhoneApplicationPage
  {
    // The secret chemistry-measuring algorithm is just choosing a random number!
    Random random = new Random();

    public MainPage()
    {
      InitializeComponent();
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
      base.OnNavigatedTo(e);
      // This is application-wide, so only listen while on this page
      Touch.FrameReported += Touch_FrameReported;
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
      base.OnNavigatedFrom(e);
      // Unhook the handler attached in OnNavigatedTo
      Touch.FrameReported -= Touch_FrameReported;
    }

    void Touch_FrameReported(object sender, TouchFrameEventArgs e)
    {
      TouchPointCollection fingers = e.GetTouchPoints(this);

      // Stop the storyboards if there aren't two fingers currently on the screen
      if (fingers.Count != 2 ||
          (fingers.Count == 2 && (fingers[0].Action == TouchAction.Up ||
                                  fingers[1].Action == TouchAction.Up)))
      {
        this.HeartbeatStoryboard.Stop();
        this.ProgressStoryboard.Stop();
      }
      // Start the storyboards if two fingers are in contact and the second one
      // just made contact, AND if the storyboards aren't already running
      else if (fingers.Count == 2 &&
               (fingers[0].Action == TouchAction.Down ||
                fingers[1].Action == TouchAction.Down) &&
               this.HeartbeatStoryboard.GetCurrentState() != ClockState.Active)
      {
        this.HeartbeatStoryboard.Begin();
        this.ProgressStoryboard.Begin();
      }
    }

    // Called when the progress bar reaches 100%
    void ProgressStoryboard_Completed(object sender, EventArgs e)
    {
      this.FinalStoryboard.Stop(); // So we can clear its keyframes

      // Fill the X & Y animations with 10 random keyframes
      this.FinalAnimationX.KeyFrames.Clear();
      this.FinalAnimationY.KeyFrames.Clear();
      for (int i = 0; i < 10; i++)
      {
        this.FinalAnimationX.KeyFrames.Add(new LinearDoubleKeyFrame {
          KeyTime = TimeSpan.FromMilliseconds(i * 100),
          Value = (double)random.Next(0, 101) / 100 });
        this.FinalAnimationY.KeyFrames.Add(new LinearDoubleKeyFrame {
          KeyTime = TimeSpan.FromMilliseconds(i * 100),
          Value = (double)random.Next(0, 101) / 100 });
      }

      // Choose the result
      double finalPercentage = random.Next(0, 101);

      // Ensure that the otherwise-random
animations end up at the right value
      this.FinalAnimationX.KeyFrames.Add(new LinearDoubleKeyFrame {
        KeyTime = TimeSpan.FromMilliseconds(1100), Value = finalPercentage / 100 });
      this.FinalAnimationY.KeyFrames.Add(new LinearDoubleKeyFrame {
        KeyTime = TimeSpan.FromMilliseconds(1100), Value = finalPercentage / 100 });

      // Update the text block now, which still has an opacity of 0.
      // It will be shown when FinalStoryboard finishes.
      this.ResultTextBlock.Text = finalPercentage + "%";

      // Start the new random animations
      this.FinalStoryboard.Begin();
    }
  }
}
[/code]

Notes:
- This application has special handling to initiate the first two storyboards only when two fingers are simultaneously pressed on the screen, and to stop the storyboards otherwise. This is done with the multi-touch FrameReported event and corresponding functionality.
- We only want to start the first two storyboards if they aren't already in progress. This is especially important inside the FrameReported event handler because it gets called repeatedly while the two fingers are pressed down. To check the status of the storyboards, it calls the GetCurrentState method on one of them, which returns either Active, Stopped, or Filling. Filling represents the case for which an animation has completed, but the animated property values remain at their postanimation values. This always happens for a completed storyboard until its Stop method is called, unless its FillBehavior property is set to Stop to make it stop automatically on completion.
- Inside ProgressStoryboard_Completed, FinalStoryboard is filled with random-value keyframes before revealing the (also-randomly chosen) chemistry value in the final keyframe. Although the final keyframe uses the same value for both ScaleX and ScaleY, the intermediate keyframes do not. This produces a more interesting animation that morphs the heart in either dimension, as demonstrated in Figure 14.3.
The Finished Product

I’m a beginner in programming and I want to know how to make a specific path. (In your project there is a data for the heart path, how did you get that?) How can I get the data for that path or any path?

Can you send me a .zip file of the source code for Windows Phone
https://www.blograby.com/love-meter-keyframe-animations/
Hello, I'm currently working with the Intel compiler under Windows (Version 13.1.3) and trying to use alias templates as template template parameters. Essentially I would like to compile something like this:

[cpp]
#include <iostream>
using namespace std;

template<class A, class B>
struct Temp
{
    int foo() { return 42; }
};

template<class A>
using c_t = Temp<A, int>;

template<template<class> class TYPE>
struct Temp2
{
    TYPE<int> a;
    void foo() { cout << a.foo() << endl; }
};

int main()
{
    Temp2<c_t> a;
    a.foo();
    return 0;
}
[/cpp]

This code compiles fine e.g. with GNU gcc 4.7.1, but the Intel compiler complains that the alias template c_t is not compatible with the template template parameter TYPE. Is gcc here not restrictive enough, or is it an actual bug in icl? A post on stackoverflow (...) seems to suggest the latter... (I used the following command line arguments to compile: /Qstd=c++11 /Qlocation )

Best Regards, Raffael

I tried your test case on Windows with a later version of the compiler (13.1.*) and it compiles without errors, printing 42. Could you re-try after updating with the latest available?

Hi Melanie and Sergey, First of all thanks a lot for debugging the test case! The test case that I posted seems to compile indeed :) I found out last week that I made a stupid mistake while compiling the test case with the Intel Compiler: In reality my problem is way more complex and spans over 4 different (traits) classes, so I've tried to reduce the problem to the above test case, but I must have made an error while compiling it in order to get the error message, arrgh. I guess I was a bit confused because my real problem does indeed produce this error message. I tried today for four hours to reduce my problem to a form which I could post here but it's almost impossible. So I've decided to circumvent the problem with another helper class that substitutes the alias template. Thanks again for your efforts! Raffael
https://community.intel.com/t5/Intel-C-Compiler/Using-alias-templates-as-template-template-parameters/td-p/945981
An API-specific header for AV_HWDEVICE_TYPE_D3D11VA.

#include <d3d11.h>
#include <stdint.h>

An API-specific header for AV_HWDEVICE_TYPE_D3D11VA.

The default pool implementation will be fixed-size if initial_pool_size is set (and allocate elements from an array texture). Otherwise it will allocate individual textures. Be aware that decoding requires a single array texture.

Using sw_format==AV_PIX_FMT_YUV420P has special semantics, and maps to DXGI_FORMAT_420_OPAQUE. av_hwframe_transfer_data() is not supported for this format. Refer to MSDN for details.

av_hwdevice_ctx_create() for this device type supports a key named "debug" for the AVDictionary entry. If this is set to any value, the device creation code will try to load various supported D3D debugging layers.

Definition in file hwcontext_d3d11va.h.
https://ffmpeg.org/doxygen/trunk/hwcontext__d3d11va_8h.html
HOW BOSS Key feature?

I need a global shortcut to hide or show the Notepad++ main window, that is, a boss key. thx.

- PeterJones last edited by

I’ve never had to hide the fact that I’m editing a text file from my boss. Just what are you editing that’s so secret? ;-) (Just kidding: I don’t want to know.)

You cannot assign a shortcut command (keystroke) to the minimize button itself. But if you have a scripting language plugin or equivalent, it can be done by sending the Windows WM_SYSCOMMAND message with the wparam SC_MINIMIZE.

NppExec: npp_sendmsg 0x0112 0xF020 0
PerlScript: notepad->SendMessage(0x0112, 0xF020, 0);

Hmm, I’d forgotten that PythonScript doesn’t have the SendMessage feature directly attached to the notepad and editor objects, and I’m not immediately finding which pre-installed module gives access to the Win32 SendMessage directly. @Ekopalypse or another Python expert will have to chime in to get my list of example lines complete.

Anyway, if you’re willing to choose one of those routes, install the appropriate plugin or external module; if you need help binding a specific instance to a keystroke, let us know which flavor of scripting you chose, and we can walk you through the next steps.

- Ekopalypse last edited by

Translated to python:

import ctypes

SendMessage = ctypes.WinDLL('user32').SendMessageW
FindWindow = ctypes.WinDLL('user32').FindWindowW

npp_hwnd = FindWindow("Notepad++", None)
SendMessage(npp_hwnd, 0x0112, 0xF020, 0)

BUT why not use the Windows BOSS shortcuts? Assuming you have npp in the taskbar, let’s say in second position, then WindowsKey+2 will either start npp or, if already started, toggle the visibility/focus.

thx all @PeterJones windowskey+Number , Very Nice ~~~
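For a true hide/show toggle rather than a minimize, the same ctypes approach could use the Win32 IsWindowVisible and ShowWindow functions. The following is an untested sketch (not from the thread): the win32 calls are injected as parameters so the decision logic is visible, and testable, on any platform; on Windows you would pass user32.IsWindowVisible and user32.ShowWindow from ctypes.WinDLL('user32'). SW_HIDE (0) and SW_SHOW (5) are the standard WinAPI ShowWindow commands.

```python
SW_HIDE, SW_SHOW = 0, 5  # standard WinAPI ShowWindow command values

def toggle_window(hwnd, is_visible, show_window):
    """Hide the window if it is visible, otherwise show it.

    is_visible/show_window are callables matching the signatures of
    user32.IsWindowVisible and user32.ShowWindow; returns the command sent.
    """
    cmd = SW_HIDE if is_visible(hwnd) else SW_SHOW
    show_window(hwnd, cmd)
    return cmd
```

On Windows this would be wired up as toggle_window(FindWindow("Notepad++", None), user32.IsWindowVisible, user32.ShowWindow) and bound to a shortcut.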
https://community.notepad-plus-plus.org/topic/19750/how-boss-key-feature/4?lang=en-US
- How GitLab implements GraphQL
- Deep Dive
- GraphiQL
- Authentication
- Global IDs
- Types
- Feature flags
- Deprecating fields and enum values
- Enums
- JSON
- Descriptions
- Authorization
- Resolvers
- Mutations
- Building Mutations
- Naming conventions
- Arguments
- Object identifier arguments
- Fields
- The resolve method
- Mounting the mutation
- Authorizing resources
- Errors in mutations
- Aliasing and deprecating mutations
- Pagination implementation
- Validating arguments
- GitLab custom scalars
- Testing
- Notes about Query flow and GraphQL infrastructure
- Documentation and schema
- Include a changelog entry
- Laziness

GraphQL API style guide

This document outlines the style guide for the GitLab GraphQL API.

Deep Dive

A Deep Dive session (GitLab team members only) was held on the GitLab GraphQL API to share domain-specific knowledge with anyone who may work in this part of the codebase in the future. You can find the recording on YouTube, and the slides on Google Slides and in PDF. Everything covered in this deep dive was accurate as of GitLab 11.9, and while specific details may have changed since then, it should still serve as a good introduction.

GraphiQL

GraphiQL is an interactive GraphQL API explorer where you can play around with existing queries. You can access it in any GitLab environment on https://<your-gitlab-site.com>/-/graphql-explorer. For example, the one for GitLab.com.

Authentication

Authentication happens through the GraphqlController; right now this uses the same authentication as the Rails application, so the session can be shared. It's also possible to add a private_token to the query string, or add a HTTP_PRIVATE_TOKEN header.

Global IDs

The GitLab GraphQL API uses Global IDs (for example, "gid://gitlab/MyObject/123") and never database primary key IDs. Global ID is a convention used for caching and fetching in client-side libraries. We have a custom scalar type (Types::GlobalIDType) which should be used as the type of input and output arguments when the value is a GlobalID.
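The Global ID format is a URI-style string. As a minimal pure-Ruby sketch (the parse_gid helper below is hypothetical, not the globalid gem or Types::GlobalIDType), the model name and database id can be recovered from it like this:

```ruby
# Hypothetical helper illustrating the gid://<app>/<ModelName>/<id> shape.
# Not the `globalid` gem; shown only to make the convention concrete.
def parse_gid(gid)
  raise ArgumentError, 'not a Global ID' unless gid.start_with?('gid://')

  app, model, id = gid.delete_prefix('gid://').split('/', 3)
  { app: app, model: model, id: Integer(id) }
end

parse_gid('gid://gitlab/Project/123')
# => { app: "gitlab", model: "Project", id: 123 }
```

In the real API, this parsing is handled by the GlobalID library and the Types::GlobalIDType scalar, so client and server code never splits the string by hand.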
The benefits of using this type instead of ID are:

- It validates that the value is a GlobalID.
- It parses it into a GlobalID before passing it to user code.
- It can be parameterized on the type of the object (for example, GlobalIDType[Project]), which offers even better validation and security.

Consider using this type for all new arguments and result types. Remember that it is perfectly possible to parameterize this type with a concern or a supertype, if you want to accept a wider range of objects (for example, GlobalIDType[Issuable] vs GlobalIDType[Issue]).

Types

We use a code-first schema, and we declare what type everything is in Ruby. For example, app/graphql/types/issue_type.rb:

```ruby
graphql_name 'Issue'

field :iid, GraphQL::ID_TYPE, null: true
field :title, GraphQL::STRING_TYPE, null: true
# ...
```

Nullable fields

GraphQL allows fields to be "nullable" or "non-nullable". The former means that null may be returned instead of a value of the specified type. In general, you should prefer using nullable fields to non-nullable ones, for the following reasons:

- It's common for data to switch from required to not-required, and back again.
- Even when there is no prospect of a field becoming optional, it may not be available at query time. For instance, the content of a blob may need to be looked up from Gitaly; if the content is nullable, we can return a partial response, instead of failing the whole query.
- Changing from a non-nullable field to a nullable field is difficult with a versionless schema.

Non-nullable fields should only be used when a field is required, very unlikely to become optional in the future, and very easy to calculate. An example would be id fields. A non-nullable GraphQL schema field is an object type followed by the exclamation point (bang) !. Here's an example from the gitlab_schema.graphql file:

```graphql
id: ProjectID!
```

Here's an example of a non-nullable GraphQL array:

```graphql
errors: [String!]!
```

Further reading:

- GraphQL Best Practices Guide
- GraphQL documentation on Object types and fields
- Using nullability in GraphQL

Exposing Global IDs

In keeping with the GitLab use of Global IDs, always convert database primary key IDs into Global IDs when you expose them. All fields named id are converted automatically into the object's Global ID. Fields that are not named id need to be manually converted. We can do this using Gitlab::GlobalId.build, or by calling #to_global_id on an object that has mixed in the GlobalID::Identification module.

Using an example from Types::Notes::DiscussionType:

```ruby
field :reply_id, GraphQL::ID_TYPE

def reply_id
  ::Gitlab::GlobalId.build(object, id: object.reply_id)
end
```

To keep ordering stable for pagination, append an ordering on the primary key, in descending order. This is usually id, so we add order(id: :desc) to the end of the relation. A primary key must be available on the underlying table.

Shortcut fields

Sometimes it can seem easy to implement a "shortcut field", having the resolver return the first of a collection if no parameters are passed. These "shortcut fields" are discouraged because they create maintenance overhead. They need to be kept in sync with their canonical field, and deprecated or modified if their canonical field changes. Use the functionality the framework provides unless there is a compelling reason to do otherwise. For example, instead of latest_pipeline, use pipelines(last: 1).

- permission_field: Acts the same as graphql-ruby's field method but sets a default description and type, and makes them non-nullable. These options can still be overridden by adding them as arguments.
- ability_field: Exposes an ability defined in our policies. This behaves the same way as permission_field and the same arguments can be overridden.
- abilities: Allows exposing several abilities defined in our policies at once. The fields for these must all be non-nullable booleans with a default description.
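The tie-breaking rule in the pagination note above can be illustrated with plain Ruby: in-memory hashes stand in for an ActiveRecord relation, and order(id: :desc) becomes a descending id sort key appended after the requested ordering.

```ruby
# Illustrative only: when the requested ordering (here, by title) is not
# total, appending the primary key in descending order makes the result
# deterministic, which keyset pagination cursors rely on.
issues = [
  { id: 1, title: 'b' },
  { id: 2, title: 'a' },
  { id: 3, title: 'a' }
]

# Sort by title, then by id descending to break ties.
ordered = issues.sort_by { |i| [i[:title], -i[:id]] }
ordered.map { |i| i[:id] } # => [3, 2, 1]
```

Without the id tie-break, the two issues titled 'a' could be returned in either order between requests, so a cursor pointing between them would be ambiguous.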
Feature flags

Developers can add feature flags to GraphQL fields in the following ways:

- Add the feature_flag property to a field. This allows the field to be hidden from the GraphQL schema when the flag is disabled.
- Toggle the return value when resolving the field.

You can refer to these guidelines to decide which approach to use:

- If your field is experimental, and its name or type is subject to change, use the feature_flag property.
- If your field is stable and its definition doesn't change, even after the flag is removed, toggle the return value of the field instead. Note that all fields should be nullable anyway.

feature_flag property

The feature_flag property allows you to toggle the field's visibility within the GraphQL schema. This removes the field from the schema when the flag is disabled. A description is appended to the field indicating that it is behind a feature flag.

The feature_flag property does not allow the use of feature gates based on actors. This means that the feature flag cannot be toggled only for particular projects, groups, or users, but instead can only be toggled globally for everyone.

Example:

```ruby
field :test_field, type: GraphQL::STRING_TYPE,
      null: true,
      description: 'Some test field.',
      feature_flag: :my_feature_flag
```

Toggle the value of a field

This method of using feature flags for fields is to toggle the return value of the field. This can be done in the resolver, in the type, or even in a model method, depending on your preference and situation. When applying a feature flag to toggle the value of a field, the description of the field must:

- State that the value of the field can be toggled by a feature flag.
- Name the feature flag.
- State what the field returns when the feature flag is disabled (or enabled, if more appropriate).

Example:

```ruby
field :foo, GraphQL::STRING_TYPE,
      null: true,
      description: 'Some test field. Will always return `null` ' \
                   'if `my_feature_flag` feature flag is disabled.'

def foo
  object.foo if Feature.enabled?(:my_feature_flag, object)
end
```

Deprecating fields and enum values

The GitLab GraphQL API is versionless, which means we maintain backwards compatibility with older versions of the API with every change. Rather than removing a field or enum value, we need to deprecate it instead. The deprecated parts of the schema can then be removed in a future release in accordance with the GitLab deprecation process.

Fields and enum values are deprecated using the deprecated property. The value of the property is a Hash of:

- reason - Reason for the deprecation.
- milestone - Milestone that the field was deprecated.

Example:

```ruby
field :token, GraphQL::STRING_TYPE, null: true,
      deprecated: { reason: 'Login via token has been removed', milestone: '10.0' },
      description: 'Token for login.'
```

The original description of the things being deprecated should be maintained, and should not be updated to mention the deprecation. Instead, the reason is appended to the description.

Deprecation reason style guide

Where the reason for deprecation is due to the field or enum value being replaced, the reason must be:

Use `otherFieldName`

Example:

```ruby
field :designs, ::Types::DesignManagement::DesignCollectionType,
      null: true,
      deprecated: { reason: 'Use `designCollection`', milestone: '10.0' },
      description: 'The designs associated with this issue.'
```

```ruby
module Types
  class TodoStateEnum < BaseEnum
    value 'pending', deprecated: { reason: 'Use PENDING', milestone: '10.0' }
    value 'done', deprecated: { reason: 'Use DONE', milestone: '10.0' }
    value 'PENDING', value: 'pending'
    value 'DONE', value: 'done'
  end
end
```

If the field is not being replaced by another field, a descriptive deprecation reason should be given. See also Aliasing and deprecating mutations.

Enums

If the uppercase enum value maps to a class property in Ruby that is not an uppercase string, you can provide a value: option that adapts the uppercase value. In the following example:

- GraphQL inputs of OPENED are converted to 'opened'.
- Ruby values of 'opened' are converted to "OPENED" in GraphQL responses.

```ruby
module Types
  class EpicStateEnum < BaseEnum
    graphql_name 'EpicState'
    description 'State of a GitLab epic'

    value 'OPENED', value: 'opened', description: 'An open Epic.'
    value 'CLOSED', value: 'closed', description: 'A closed Epic.'
  end
end
```

Enum values can be deprecated using the deprecated keyword.

Defining GraphQL enums dynamically from Rails enums

If your GraphQL enum is backed by a Rails enum, then consider using the Rails enum to dynamically define the GraphQL enum values. Doing so binds the GraphQL enum values to the Rails enum definition, so if values are ever added to the Rails enum then the GraphQL enum automatically reflects the change.

Example:

```ruby
module Types
  class IssuableSeverityEnum < BaseEnum
    graphql_name 'IssuableSeverity'
    description 'Incident severity'

    ::IssuableSeverity.severities.keys.each do |severity|
      value severity.upcase, value: severity, description: "#{severity.titleize} severity."
    end
  end
end
```

JSON

When data to be returned by GraphQL is stored as JSON, we should continue to use GraphQL types whenever possible. Avoid using the GraphQL::Types::JSON type unless the JSON data returned is truly unstructured. If the structure of the JSON data varies, but is one of a set of known possible structures, use a union. An example of the use of a union for this purpose is !30129. Field names can be mapped to hash data keys using the hash_key: keyword if needed.

For example, given the following simple JSON data:

```json
{
  "title": "My chart",
  "data": [
    { "x": 0, "y": 1 },
    { "x": 1, "y": 1 },
    { "x": 2, "y": 2 }
  ]
}
```

We can use GraphQL types like this:

```ruby
module Types
  class ChartType < BaseObject
    field :title, GraphQL::STRING_TYPE, null: true,
          description: 'Title of the chart.'
    field :data, [Types::ChartDatumType], null: true,
          description: 'Data of the chart.'
  end
end

module Types
  class ChartDatumType < BaseObject
    field :x, GraphQL::INT_TYPE, null: true,
          description: 'X-axis value of the chart datum.'
    field :y, GraphQL::INT_TYPE, null: true,
          description: 'Y-axis value of the chart datum.'
  end
end
```

Descriptions

All fields and arguments must have descriptions. A description of a field or argument is given using the description: keyword. For example:

```ruby
field :id, GraphQL::ID_TYPE, description: 'ID of the resource.'
```

Descriptions of fields and arguments are viewable to users through the GraphiQL explorer and the generated API reference documentation.

Description style guide

- For time fields, describe the value as a "Timestamp of" the event, since the type is Time, rather than just Date.
- Descriptions must end with a period (.).

Example:

```ruby
field :id, GraphQL::ID_TYPE, description: 'ID of the issue.'
field :confidential, GraphQL::BOOLEAN_TYPE, description: 'Indicates the issue is confidential.'
field :closed_at, Types::TimeType, description: 'Timestamp of when the issue was closed.'
```

copy_field_description helper

Sometimes we want to ensure that two descriptions are always identical. For example, to keep a type field description the same as a mutation argument when they both represent the same property. Instead of supplying a description, we can use the copy_field_description helper, passing it the type and field name to copy the description of.

Example:

```ruby
argument :title, GraphQL::STRING_TYPE,
         required: false,
         description: copy_field_description(Types::MergeRequestType, :title)
```

Authorization

Authorizations can be applied to both types and fields using the same abilities as in the Rails app. If the:

- Currently authenticated user fails the authorization, the authorized resource is returned as null.
- Resource is part of a collection, the collection is filtered to exclude the objects that the user's authorization checks failed against.

Also see authorizing resources in a mutation. Authorization can also be applied to a field with a resolver; this requires explicitly passing a block to field:

```ruby
module Types
  class MyType < BaseObject
    field :project, Types::ProjectType, null: true, resolver: Resolvers::ProjectResolver do
      authorize [:owner_access, :another_ability]
    end
  end
end
```

If the field's type already has a particular authorization then there is no need to add that same authorization to the field.
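The two authorization behaviors above — a single unauthorized resource resolving to null, and collections being filtered — can be sketched in plain Ruby (hypothetical helpers and data, not the GitLab authorization framework):

```ruby
# Illustrative stand-ins: `authorized?` plays the role of an ability check,
# and the two resolve helpers show the null-vs-filter semantics.
def authorized?(user, object)
  object[:visible_to].include?(user)
end

def resolve_single(user, object)
  authorized?(user, object) ? object : nil   # unauthorized => null
end

def resolve_collection(user, objects)
  objects.select { |o| authorized?(user, o) } # unauthorized => filtered out
end

issues = [
  { id: 1, visible_to: [:alice] },
  { id: 2, visible_to: [:alice, :bob] }
]

resolve_single(:bob, issues.first)                    # => nil
resolve_collection(:bob, issues).map { |i| i[:id] }   # => [2]
```

Note how the collection case silently drops the unauthorized issue rather than returning an error, matching the "absence is indistinguishable from no access" rationale discussed later under error handling.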
Resolver arguments are defined in the same way as in a mutation. See the Mutation arguments section. To limit the amount of queries performed, we can use BatchLoader.

Writing resolvers

Our code should aim to be thin declarative wrappers around finders and services. You can repeat lists of arguments, or extract them to concerns. Composition is preferred over inheritance in most cases. Treat resolvers like controllers: resolvers should be a DSL that composes other application abstractions.

For example:

```ruby
class PostResolver < BaseResolver
  type Post.connection_type, null: true
  authorize :read_blog
  description 'Blog posts, optionally filtered by name'

  argument :name, [::GraphQL::STRING_TYPE], required: false, as: :slug

  alias_method :blog, :object

  def resolve(**args)
    PostFinder.new(blog, current_user, args).execute
  end
end
```

You should never re-use resolvers directly. Resolvers have a complex life-cycle, with authorization, readiness and resolution orchestrated by the framework, and at each stage lazy values can be returned to take advantage of batching opportunities. Never instantiate a resolver or a mutation in application code. Instead, the units of code reuse are much the same as in the rest of the application:

- Finders in queries to look up data.
- Services in mutations to apply operations.
- Loaders (batch-aware finders) specific to queries.

Note that there is never any reason to use batching in a mutation. Mutations are executed in series, so there are no batching opportunities. All values are evaluated eagerly as soon as they are requested, so batching is unnecessary overhead. If you are writing:

- A Mutation, feel free to look up objects directly.
- A Resolver or methods on a BaseObject, then you want to allow for batching.

Error handling

Resolvers may raise errors, which are converted to top-level errors as appropriate. All anticipated errors should be caught and transformed to an appropriate GraphQL error (see Gitlab::Graphql::Errors).
Any uncaught errors are suppressed and the client receives the message Internal service error.

The one special case is permission errors. In the REST API we return 404 Not Found for any resources that the user does not have permission to access. The equivalent behavior in GraphQL is for us to return null for all absent or unauthorized resources. Query resolvers should not raise errors for unauthorized resources. The rationale for this is that clients must not be able to distinguish between the absence of a record and the presence of one they do not have access to. To do so is a security vulnerability, since it leaks information we want to keep hidden.

In most cases you don't need to worry about this - it is handled correctly by the resolver field authorization we declare with the authorize DSL calls. If you need to do something more custom, however, remember: if you encounter an object the current_user does not have access to when resolving a field, then the entire field should resolve to null.

Deriving resolvers (BaseResolver.single and BaseResolver.last)

For some simple use cases, we can derive resolvers from others. The main use case for this is one resolver to find all items, and another to find one specific one. For this, we supply convenience methods:

- BaseResolver.single, which constructs a new resolver that selects the first item.
- BaseResolver.last, which constructs a resolver that selects the last item.

The correct singular type is inferred from the collection type, so we don't have to define the type here. Before you make use of these methods, consider if it would be simpler to either:

- Write another resolver that defines its own arguments.
- Write a concern that abstracts out the query.

Using BaseResolver.single too freely is an anti-pattern. It can lead to non-sensical fields, such as a Project.mergeRequest field that just returns the first MR if no arguments are given.
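Conceptually, a derived resolver reuses the collection resolver's query and picks one element from its result. A pure-Ruby sketch of that idea (these stand-in classes are illustrative, not GitLab's actual BaseResolver implementation):

```ruby
# Stand-in for a collection resolver: resolves to an ordered list of items.
class CollectionResolver
  def resolve(items:)
    items.sort
  end
end

# What `.single` means conceptually: same query, take the first item.
class SingleResolver < CollectionResolver
  def resolve(**args)
    super(**args).first
  end
end

# What `.last` means conceptually: same query, take the last item.
class LastResolver < CollectionResolver
  def resolve(**args)
    super(**args).last
  end
end

SingleResolver.new.resolve(items: [3, 1, 2]) # => 1
LastResolver.new.resolve(items: [3, 1, 2])   # => 3
```

The real framework derives these subclasses for you and also infers the singular GraphQL type; the sketch only shows why more restrictive arguments matter — without them, "first of the collection" is rarely a meaningful answer.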
Whenever we derive a single resolver from a collection resolver, it must have more restrictive arguments. To make this possible, use the when_single block to customize the single resolver. Every when_single block must:

- Define (or re-define) at least one argument.
- Make optional filters required.

For example, we can do this by redefining an existing optional argument, changing its type and making it required:

```ruby
class JobsResolver < BaseResolver
  type JobType.connection_type, null: true
  authorize :read_pipeline

  argument :name, [::GraphQL::STRING_TYPE], required: false

  when_single do
    argument :name, ::GraphQL::STRING_TYPE, required: true
  end

  def resolve(**args)
    JobsFinder.new(pipeline, current_user, args.compact).execute
  end
end
```

Here we have a simple resolver for getting pipeline jobs. The name argument is optional when getting a list, but required when getting a single job.

If there are multiple arguments, and neither can be made required, we can use the block to add a ready condition:

```ruby
class JobsResolver < BaseResolver
  alias_method :pipeline, :object

  type JobType.connection_type, null: true
  authorize :read_pipeline

  argument :name, [::GraphQL::STRING_TYPE], required: false
  argument :id, [::Types::GlobalIDType[::Job]],
           required: false,
           prepare: ->(ids, ctx) { ids.map(&:model_id) }

  when_single do
    argument :name, ::GraphQL::STRING_TYPE, required: false
    argument :id, ::Types::GlobalIDType[::Job],
             required: false,
             prepare: ->(id, ctx) { id.model_id }

    def ready?(**args)
      raise ::Gitlab::Graphql::Errors::ArgumentError, 'Only one argument may be provided' unless args.size == 1
    end
  end

  def resolve(**args)
    JobsFinder.new(pipeline, current_user, args.compact).execute
  end
end
```

Then we can use these resolvers on fields:

```ruby
# In PipelineType

field :jobs, resolver: JobsResolver,
      description: 'All jobs.'

field :job, resolver: JobsResolver.single,
      description: 'A single job.'
```

Correct use of Resolver#ready?
Resolvers have two public API methods as part of the framework: #ready?(**args) and #resolve(**args). We can use #ready? to perform set-up, validation or early return without invoking #resolve. Good reasons to use #ready? include:

- Validating mutually exclusive arguments (see validating arguments).
- Returning Relation.none if we know before-hand that no results are possible.
- Performing setup such as initializing instance variables (although consider lazily initialized methods for this).

Implementations of Resolver#ready?(**args) should return (Boolean, early_return_data) as follows:

```ruby
def ready?(**args)
  [false, 'have this instead']
end
```

For this reason, whenever you call a resolver (mainly in tests - as framework abstractions, resolvers should not be considered re-usable; finders are to be preferred), remember to call the ready? method and check the boolean flag before calling resolve! An example can be seen in our GraphqlHelpers.

Look-Ahead

The full query is known in advance during execution, which means we can make use of lookahead to optimize our queries, and batch load associations we know we need. Consider adding lookahead support in your resolvers to avoid N+1 performance issues. To enable support for common lookahead use-cases (pre-loading associations when child fields are requested), you can include LooksAhead. For example:

```ruby
# Assuming a model `MyThing` with attributes `[child_attribute, other_attribute, nested]`,
# where nested has an attribute named `included_attribute`.
class MyThingResolver < BaseResolver
  include LooksAhead

  # Rather than defining `resolve(**args)`, we implement:
  # `resolve_with_lookahead(**args)`
  def resolve_with_lookahead(**args)
    apply_lookahead(MyThingFinder.new(current_user).execute)
  end

  # We list things that should always be preloaded:
  # For example, if child_attribute is always needed (during authorization
  # perhaps), then we can include it here.
  def unconditional_includes
    [:child_attribute]
  end

  # We list things that should be included if a certain field is selected:
  def preloads
    {
      field_one: [:other_attribute],
      field_two: [{ nested: [:included_attribute] }]
    }
  end
end
```

The final thing that is needed is that every field that uses this resolver needs to advertise the need for lookahead:

```ruby
# in ParentType
field :my_things, MyThingType.connection_type, null: true,
      extras: [:lookahead], # Necessary
      resolver: MyThingResolver,
      description: 'My things.'
```

For an example of real world use, please see ResolvesMergeRequests.

Negated arguments

Negated filters can filter some resources (for example, find all issues that have the bug label, but don't have the bug2 label assigned). The not argument is the preferred syntax to pass negated arguments:

```graphql
issues(labelName: "bug", not: {labelName: "bug2"}) {
  nodes {
    id
    title
  }
}
```

To avoid duplicated argument definitions, you can place these arguments in a reusable module (or class, if the arguments are nested). Alternatively, you can consider adding a helper resolver method.

Metadata

When using resolvers, they can and should serve as the SSoT for field metadata. All field options (apart from the field name) can be declared on the resolver. These include:

- type (this is particularly important, and is planned to be mandatory)
- extras
- description

Example:

```ruby
module Resolvers
  class MyResolver < BaseResolver
    type Types::MyType, null: true
    extras [:lookahead]
    description 'Retrieve a single MyType'
  end
end
```

Pass a parent object into a child Presenter

Sometimes you need to access the resolved query parent in a child context to compute fields. Usually the parent is only available in the Resolver class as parent. To find the parent object in your Presenter class:

Add the parent object to the GraphQL context from within your resolver's resolve method:

```ruby
def resolve(**args)
  context[:parent_object] = parent
end
```

Declare that your resolver or fields require the parent field context.
For example:

```ruby
# in ChildType
field :computed_field, SomeType, null: true,
      method: :my_computing_method,
      extras: [:parent], # Necessary
      description: 'My field description.'

field :resolver_field, resolver: SomeTypeResolver

# In SomeTypeResolver
extras [:parent]
type SomeType, null: true
description 'My field description.'
```

Declare your field's method in your Presenter class and have it accept the parent keyword argument. This argument contains the parent GraphQL context, so you have to access the parent object with parent[:parent_object] or whatever key you used in your Resolver:

```ruby
# in ChildPresenter
def my_computing_method(parent:)
  # do something with `parent[:parent_object]` here
end

# In SomeTypeResolver
def resolve(parent:)
  # ...
end
```

For an example of real-world use, check the MR that added scopedPath and scopedUrl to IterationPresenter.

Mutations

Mutations are used to change any stored values, or to trigger actions. In the same way a GET request should not modify data, we cannot modify data in a regular GraphQL query. We can, however, in a mutation.

Building Mutations

Mutations are stored in app/graphql/mutations, ideally grouped per resource they are mutating, similar to our services. They should inherit Mutations::BaseMutation. The fields defined on the mutation are returned as the result of the mutation.

Update mutation granularity

The service-oriented architecture in GitLab means that most mutations call a Create, Delete, or Update service, for example UpdateMergeRequestService. For Update mutations, you might want to only update one aspect of an object, and thus only need a fine-grained mutation, for example MergeRequest::SetWip. It's acceptable to have both fine-grained mutations and coarse-grained mutations, but be aware that too many fine-grained mutations can lead to organizational challenges in maintainability, code comprehensibility, and testing. Each mutation requires a new class, which can lead to technical debt.
It also means the schema becomes very big, and we want users to easily navigate our schema. As each new mutation also needs tests (including slower request integration tests), adding mutations slows down the test suite.

To minimize changes:

- Use existing mutations, such as MergeRequest::Update, when available.
- Expose existing services as a coarse-grained mutation.

When a fine-grained mutation might be more appropriate:

- Modifying a property that requires specific permissions or other specialized logic.
- Exposing a state-machine-like transition (locking issues, merging MRs, closing epics, etc.).
- Accepting nested properties (where we accept properties for a child object).
- The semantics of the mutation can be expressed clearly and concisely.

See issue #233063 for further context.

Naming conventions

Each mutation must define a graphql_name, which is the name of the mutation in the GraphQL schema. Example:

```ruby
class UserUpdateMutation < BaseMutation
  graphql_name 'UserUpdate'
end
```

Our GraphQL mutation names are historically inconsistent, but new mutation names should follow the convention '{Resource}{Action}' or '{Resource}{Action}{Attribute}'.

Mutations that create new resources should use the verb Create. Example: CommitCreate.

Mutations that update data should use:

- The verb Update.
- A domain-specific verb like Set, Add, or Toggle if more appropriate.

Examples: EpicTreeReorder, IssueSetWeight, IssueUpdate, TodoMarkDone.

Mutations that remove data should use:

- The verb Delete rather than Destroy.
- A domain-specific verb like Remove if more appropriate.

Example: AwardEmojiRemove.

Note: If you need advice for mutation naming, canvass the Slack #graphql channel for feedback.

Arguments

Arguments for a mutation are defined using argument. Example:

```ruby
argument :my_arg, GraphQL::STRING_TYPE,
         required: true,
         description: "A description of the argument."
```

Each GraphQL argument defined is passed to the #resolve method of a mutation as keyword arguments.
Example:

```ruby
def resolve(my_arg:)
  # Perform mutation ...
end
```

graphql-ruby wraps up arguments into an input type. For example, the mergeRequestSetWip mutation defines these arguments (some through inheritance). These arguments automatically generate an input type called MergeRequestSetWipInput with the 3 arguments we specified and the clientMutationId.

Object identifier arguments

In keeping with the GitLab use of Global IDs, mutation arguments should use Global IDs to identify an object and never database primary key IDs. Where an object has an iid, prefer to use the full_path or group_path of its parent in combination with its iid as arguments to identify the object, rather than its id.

The resolve method

The resolve method receives the mutation's arguments as keyword arguments. From here, we can call the service that modifies the resource. The resolve method should then return a hash with the same field names as defined on the mutation, including an errors array of strings if the mutation failed after authorization:

```ruby
{
  merge_request: merge_request,
  # The `errors_on_object` helper collects `errors.full_messages`
  errors: errors_on_object(merge_request)
}
```

Mounting the mutation

To make the mutation available it must be defined on the mutation type that is stored in graphql/types/mutation_types. The mount_mutation helper method defines a field based on the GraphQL name of the mutation:

```ruby
module Types
  class MutationType < BaseObject
    include Gitlab::Graphql::MountMutation

    graphql_name "Mutation"

    mount_mutation Mutations::MergeRequests::SetWip
  end
end
```

This generates a field called mergeRequestSetWip on the mutation type.

Authorizing resources

Define a find_object method that loads the object on the mutation. This allows you to use the authorized_find! helper method. When a user is not allowed to perform the action, or an object is not found, we should raise a Gitlab::Graphql::Errors::ResourceNotAvailable error, which is correctly rendered to the clients.

Errors in mutations

We encourage following the practice of errors as data for mutations, which distinguishes errors by who they are relevant to, defined by who can deal with them.
Key points:

- All mutation responses have an errors field. This should be populated on failure, and may be populated on success.
- Consider who needs to see the error: the user or the developer.
- Clients should always request the errors field when performing mutations.
- Errors may be reported to users either at $root.errors (top-level error) or at $root.data.mutationName.errors (mutation errors). The location depends on what kind of error this is, and what information it holds.

Consider an example mutation doTheThing that returns a response with two fields: errors: [String], and thing: ThingType. The specific nature of the thing itself is irrelevant to these examples, as we are considering the errors. There are three states a mutation response can be in:

Success

In the happy path, errors may be returned, along with the anticipated payload, but if everything was successful, then errors should be an empty array, since there are no problems we need to inform the user of.

```javascript
{
  data: {
    doTheThing: {
      errors: [] // if successful, this array will generally be empty.
      thing: { .. }
    }
  }
}
```

Failure (relevant to the user)

An error that affects the user occurred. We refer to these as mutation errors. In this case there is typically no thing to return:

```javascript
{
  data: {
    doTheThing: {
      errors: ["you cannot touch the thing"],
      thing: null
    }
  }
}
```

Examples of this include:

- Model validation errors: the user may need to change the inputs.
- Permission errors: the user needs to know they cannot do this, they may need to request permission or sign in.
- Problems with application state that prevent the user's action, for example: merge conflicts, the resource was locked, and so on.

Ideally, we should prevent the user from getting this far, but if they do, they need to be told what is wrong, so they understand the reason for the failure and what they can do to achieve their intent, even if that is as simple as retrying the request.

It is possible to return recoverable errors alongside mutation data.
For example, if a user uploads 10 files and 3 of them fail and the rest succeed, the errors for the failures can be made available to the user, alongside the information about the successes.

Failure (irrelevant to the user)

One or more non-recoverable errors can be returned at the top level. These are things over which the user has little to no control, and should mainly be system or programming problems that a developer needs to know about. In this case there is no data:

```javascript
{
  errors: [
    {"message": "argument error: expected an integer, got null"},
  ]
}
```

This is the result of raising an error during the mutation. In our implementation, the messages of argument errors and validation errors are returned to the client, and all other StandardError instances are caught, logged and presented to the client with the message set to "Internal server error". See GraphqlController for details.

These represent programming errors, such as:

- A GraphQL syntax error, where an Int was passed instead of a String, or a required argument was not present.
- Errors in our schema, such as being unable to provide a value for a non-nullable field.
- System errors: for example, a Git storage exception, or database unavailability.

The user should not be able to cause such errors in regular usage. This category of errors should be treated as internal, and not shown to the user in specific detail. We need to inform the user when the mutation fails, but we do not need to tell them why, since they cannot have caused it, and nothing they can do fixes it, although we may offer to retry the mutation.

Categorizing errors

When we write mutations, we need to be conscious about which of these two categories an error state falls into (and communicate about this with frontend developers to verify our assumptions). This means distinguishing the needs of the user from the needs of the client. Never catch an error unless the user needs to know about it.
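The split between mutation errors and top-level errors can be sketched in plain Ruby (the run_mutation harness below is hypothetical, not GitLab's GraphqlController; ArgumentError stands in for the class of user-relevant failures):

```ruby
# Illustrative harness: user-relevant failures become `errors` inside the
# mutation payload; everything else is hidden behind a generic top-level error.
def run_mutation
  thing = yield
  { data: { doTheThing: { errors: [], thing: thing } } }
rescue ArgumentError => e
  # Relevant to the user: return the message as a mutation error.
  { data: { doTheThing: { errors: [e.message], thing: nil } } }
rescue StandardError
  # Irrelevant to the user: log it (omitted here) and hide the details.
  { errors: [{ message: 'Internal server error' }] }
end

run_mutation { raise ArgumentError, 'you cannot touch the thing' }
# => { data: { doTheThing: { errors: ["you cannot touch the thing"], thing: nil } } }

run_mutation { raise 'database unavailable' }
# => { errors: [{ message: "Internal server error" }] }
```

The key design point is the asymmetry: the user-facing branch preserves the message, while the internal branch deliberately discards it from the response.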
If the user does need to know about it, communicate with frontend developers to make sure the error information we are passing back is useful. See also the frontend GraphQL guide.

Aliasing and deprecating mutations

The `#mount_aliased_mutation` helper allows us to alias a mutation as another name within `MutationType`. For example, to alias a mutation called `FooMutation` as `BarMutation`:

```ruby
mount_aliased_mutation 'BarMutation', Mutations::FooMutation
```

This allows us to rename a mutation and continue to support the old name, when coupled with the `deprecated` argument. Example:

```ruby
mount_aliased_mutation 'UpdateFoo',
                       Mutations::Foo::Update,
                       deprecated: { reason: 'Use fooUpdate', milestone: '13.2' }
```

Deprecated mutations should be added to `Types::DeprecatedMutations` and tested for within the unit test of `Types::MutationType`. The merge request !34798 can be referred to as an example of this, including the method of testing deprecated aliased mutations.

Deprecating EE mutations

EE mutations should follow the same process. For an example of the merge request process, read merge request !42588.

Pagination implementation

To learn more, visit GraphQL pagination.

Validating arguments

For validations of single arguments, use the `prepare` option as normal.

Sometimes a mutation or resolver may accept a number of optional arguments, but we still want to validate that at least one of the optional arguments is provided. In this situation, consider using the `#ready?` method within your mutation or resolver to provide the validation. The `#ready?` method is called before any work is done within the `#resolve` method. Example:

```ruby
def ready?(**args)
  if args.values_at(:body, :position).compact.blank?
    raise Gitlab::Graphql::Errors::ArgumentError,
          'body or position arguments are required'
  end

  # Always remember to call `#super`
  super
end
```

In the future this may be able to be done using InputUnions if this RFC is merged.
Testing

Writing unit tests

Before creating unit tests, review the following examples:

It's faster to test as much of the logic from your GraphQL queries and mutations with unit tests, which are stored in `spec/graphql`. Use unit tests to verify that:

- Types have the expected fields.
- Resolvers and mutations apply authorizations and return expected data.
- Edge cases are handled correctly.

Writing integration tests

Integration tests check the full stack for a GraphQL query or mutation and are stored in `spec/requests/api/graphql`. For speed, you should test most logic in unit tests instead of integration tests. However, integration tests that check if data is returned verify the following additional items:

- The mutation is actually queryable within the schema (was mounted in `MutationType`).
- The data returned by a resolver or mutation correctly matches the return types of the fields and resolves without errors.

Integration tests can also verify the following items, because they invoke the full stack:

- An argument or scalar's `prepare` applies correctly.
- Logic in a resolver or mutation's `#ready?` method applies correctly.
- An argument's `default_value` applies correctly.
- Objects resolve performantly and there are no N+1 issues.

When adding a query, you can use the `a working graphql query` shared example to test if the query renders valid results. You can construct a query including all available fields using the `GraphqlHelpers#all_graphql_fields_for` helper. This makes it easy to add a test rendering all possible fields for a query.

If you're adding a field to a query that supports pagination and sorting, visit Testing for details.

To test GraphQL mutation requests, `GraphqlHelpers` provides two helpers: `graphql_mutation`, which takes the name of the mutation and a hash with the input for the mutation. This returns a struct with a mutation query and prepared variables.
You can then pass this struct to the `post_graphql_mutation` helper, which posts the request with the correct parameters, like a GraphQL client would do. To access the response of a mutation, you can use the `graphql_mutation_response` helper.

Testing tips and tricks

Avoid false positives: Authenticating a user with the `current_user:` argument for `post_graphql` generates more queries on the first request than on subsequent requests on that same user. If you are testing for N+1 queries using `QueryRecorder`, use a different user for each request. The below example shows how a test for avoiding N+1 queries should look:

```ruby
RSpec.describe 'Query.project(fullPath).pipelines' do
  include GraphqlHelpers

  let(:project) { create(:project) }

  let(:query) do
    %(
      {
        project(fullPath: "#{project.full_path}") {
          pipelines {
            nodes {
              id
            }
          }
        }
      }
    )
  end

  it 'avoids N+1 queries' do
    first_user = create(:user)
    second_user = create(:user)
    create(:ci_pipeline, project: project)

    control_count = ActiveRecord::QueryRecorder.new do
      post_graphql(query, current_user: first_user)
    end

    create(:ci_pipeline, project: project)

    expect do
      # use a different user to avoid a false positive from authentication queries
      post_graphql(query, current_user: second_user)
    end.not_to exceed_query_limit(control_count)
  end
end
```

Mimic the folder structure of `app/graphql/types`: For example, tests for fields on `Types::Ci::PipelineType` in `app/graphql/types/ci/pipeline_type.rb` should be stored in `spec/requests/api/graphql/ci/pipeline_spec.rb`, regardless of the query being used to fetch the pipeline data.

Notes about Query flow and GraphQL infrastructure

The GitLab GraphQL infrastructure can be found in `lib/gitlab/graphql`.

Instrumentation is functionality that wraps around a query being executed. It is implemented as a module that uses the `Instrumentation` class. Example: `Present`

```ruby
module Gitlab
  module Graphql
    module Present
      # ... some code above ...

      def self.use(schema_definition)
        schema_definition.instrument(:field, ::Gitlab::Graphql::Present::Instrumentation.new)
      end
    end
  end
end
```

Documentation and schema

Our schema is located at `app/graphql/gitlab_schema.rb`. See the schema reference for details.

This generated GraphQL documentation needs to be updated when the schema changes. For information on generating GraphQL documentation and schema files, see updating the schema documentation.

To help our readers, you should also add a new page to our GraphQL API documentation. For guidance, see the GraphQL API page.

Include a changelog entry

All client-facing changes must include a changelog entry.

Laziness

One important technique unique to GraphQL for managing performance is using lazy values. Lazy values represent the promise of a result, allowing their action to be run later, which enables batching of queries in different parts of the query tree. The main example of lazy values in our code is the GraphQL BatchLoader.

To manage lazy values directly, read `Gitlab::Graphql::Lazy`, and in particular `Gitlab::Graphql::Laziness`. This contains `#force` and `#delay`, which help implement the basic operations of creation and elimination of laziness, where needed.

For dealing with lazy values without forcing them, use `Gitlab::Graphql::Lazy.with_value`.
https://docs.gitlab.com/13.7/ee/development/api_graphql_styleguide.html
single-task training on CodeSearchNet Corpus javascript dataset.

Intended uses & limitations

The model could be used to generate the description for a javascript function, or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better.

How to use

Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript", skip_special_tokens=True),
    device=0
)

tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```

Run this example in colab notebook.

Training data

The supervised training tasks datasets can be downloaded on Link

Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn
https://huggingface.co/SEBIS/code_trans_t5_small_code_documentation_generation_javascript
I get this RUNTIME ERROR:

```cpp
#include <iostream>
using namespace std;

int main()
{
    int a[2][3] = { {1, 2, 3}, {4, 5, 6} };
    int *(*b)[3];

    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 3; j++)
            b[i][j] = &a[i][j];

    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 3; j++)
            cout << *b[i][j] << " ";
    cout << endl;

    return 0;
}
```

I want each element of b[][] to be a pointer to an element of a[][].

```
Program received signal SIGSEGV, Segmentation fault.
0x0040147c in main () at test-p.cpp:8
8           b[i][j] = &a[i][j];
```
https://onlinejudge.org/board/viewtopic.php?f=14&t=42452
python-isodate fails to build with Python 3.10.0a1.

```
======================================================================
ERROR: test_totimedelta (isodate.tests.test_duration.DurationTest)
Test conversion form Duration to timedelta.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/isodate-0.6.0/src/isodate/tests/test_duration.py", line 400, in test_totimedelta
    self.assertEqual(dur.totimedelta(datetime(1998, 2, 25)),
  File "/builddir/build/BUILD/isodate-0.6.0/src/isodate/duration.py", line 320, in totimedelta
    return (start + self) - start
  File "/builddir/build/BUILD/isodate-0.6.0/src/isodate/duration.py", line 183, in __add__
    newdt = other.replace(year=newyear, month=newmonth, day=newday)
TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

----------------------------------------------------------------------
Ran 278 tests in 0.070s

FAILED (errors=45)

Test failed: <unittest.runner.TextTestResult run=278 errors=45 failures=0>
```

For the build logs, see: For all our attempts to build python-isod.

Builtin and extension functions that take integer arguments no longer accept Decimals, Fractions and other objects that can be converted to integers only with a loss (e.g. that have the __int__() method but do not have the __index__() method).

With Python 3.9 (current rawhide) I see the DeprecationWarning about this in build.log:

Looks like we'll need to pull in ? Err, the direct link to the pull request is

Upstream has been inactive for over two years. I have backported upstream PR, see:

Builds alright in Copr with Python3.10a1. Permission to merge?

This bug appears to have been reported against 'rawhide' during the Fedora 34 development cycle. Changing version to 34.

Looks good to me. Thanks Tomáš and Miro!
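The quoted change is the key: `Decimal` implements `__int__` but not `__index__`, so Python 3.10 rejects it wherever a true integer is required — exactly what happens in `duration.py` when `replace()` is called with `Decimal` values. A minimal sketch of the kind of conversion the backported fix has to perform (illustrative only; the real change is in the upstream PR):

```python
from datetime import datetime
from decimal import Decimal

# Decimal implements __int__ but not __index__, so Python 3.10+ rejects it
# where a true integer is required (as in datetime.replace). Converting with
# int() is lossless here, because these Decimals hold whole numbers.
start = datetime(1998, 2, 25)
newyear, newmonth, newday = Decimal(1998), Decimal(3), Decimal(25)

newdt = start.replace(year=int(newyear), month=int(newmonth), day=int(newday))
print(newdt)  # 1998-03-25 00:00:00
```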
https://bugzilla.redhat.com/show_bug.cgi?id=1890444
Vector3 The point on the collider that is closest to the specified location.

Returns a point on the collider that is closest to a given location.

This method computes the point on the collider that is closest to a 3d location in the world. In the example below, closestPoint is the point on the collider and location is the point in 3d space. If location is in the collider, the closestPoint will be inside.

Note: The difference from ClosestPointOnBounds is that the returned point is actually on the collider instead of on the bounds of the collider. (bounds is a box that surrounds the collider.)

```csharp
using UnityEngine;

// Note that closestPoint is based on the surface of the collider
// and location represents a point in 3d space. The gizmos work in the editor.
//
// Create an origin-based cube and give it a scale of (1, 0.5, 3).
// Change the BoxCollider size to (0.8, 1.2, 0.8). This means that
// collisions will happen when a GameObject gets close to the BoxCollider.
// The ShowClosestPoint.cs script shows spheres that display the location
// and closestPoint. Try changing the BoxCollider size and the location values.

// Attach this to a GameObject that has a Collider component attached
public class ShowClosestPoint : MonoBehaviour
{
    public Vector3 location;

    public void OnDrawGizmos()
    {
        var collider = GetComponent<Collider>();

        if (!collider)
        {
            return; // nothing to do without a collider
        }

        Vector3 closestPoint = collider.ClosestPoint(location);

        Gizmos.DrawSphere(location, 0.1f);
        Gizmos.DrawWireSphere(closestPoint, 0.1f);
    }
}
```

Note: Same as Physics.ClosestPoint but doesn't allow passing a custom position and rotation. Instead, it uses the position of the collider.
https://docs.unity3d.com/kr/2020.1/ScriptReference/Collider.ClosestPoint.html
As part of the PyTorch/OpenMined grants we announced last December, the Web & Mobile team has been hard at work on developing 4 new libraries for model-centric federated learning:

- syft.js - a library for federated learning in the browser
- KotlinSyft - a library for federated learning on Android devices
- SwiftSyft - a library for federated learning on iOS devices
- Threepio - a library for translating commands from one deep learning framework to another

Each of these libraries is centrally coordinated by PyGrid. As an added bonus: we also released a 4th worker library within the PySyft project, the PySyft FL Worker, that allows for federated learning in any Python environment.

So, what is "model-centric" federated learning?

Federated learning, at its core, is any kind of machine learning that occurs when the model is brought to the data instead of the data to the model. There are two kinds of federated learning: model-centric and data-centric.

In "model-centric" federated learning, a model's API is pre-configured (in the weights, layers, etc.) and hosted in the cloud (in our case, PyGrid). Ephemeral workers then show up, download the model, improve it, and then upload a new version. This typically happens over a long period of time (days, weeks, even months). It is most commonly found when using edge devices such as smartphones to passively improve an AI model over time, such as a model within an app.

One great example of model-centric federated learning is how Google's GBoard mobile app learns your typing preferences and style over time. Naturally, if a user were to send their text messages to Google's central servers, this would be a terrible breach of privacy and would also come at great networking expense. For Google, it's also simply less expensive to use the computation power of a mobile phone than to train an equivalent model in one of their data centers every time someone sends a text message.
For these reasons, Google opted to leave the user's text messages on the device, train the model there, and then report the result back to Google's servers to update the global model - all without compromising a user's privacy.

In "data-centric" federated learning, a dataset's API is pre-configured (its schema, attributes, and security parameters) and is hosted in the cloud (in our case, PyGrid). Ephemeral models then show up and perform training locally in an ad-hoc, experimental way. While this form of federated learning is less common, it is more ideal for scientific exploration than model-centric federated learning, as it more closely reflects the standard data science workflow.

To use an example, let's say that you want to train a model to detect cancerous nodules in CT scans. If you don't work at a major hospital, it might be very difficult or simply impossible to obtain a dataset that would be sufficient for training your model. Fortunately, with data-centric federated learning, a data owner can host their datasets in PyGrid, and you (the data scientist) can submit requests for training and inference against that data. Even better, global differential privacy may be applied to the resulting model to protect the data from being stolen out of the trained model - how cool is that!?

Great, show me the demos!

Okay, okay... So, today marks the day that we have our initial releases of syft.js, KotlinSyft, and SwiftSyft. SwiftSyft will start in beta, while the other two are considered stable releases at 0.1.0. Currently, these libraries are locked in to PySyft 0.2.8, with the latest master branch of PyGrid. We're already working hard on version 0.3.0 to ensure more stable support in PyGrid, as well as providing support for a number of other features.
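Before the per-platform demos, the model-centric round-trip described above (download the model, train locally, report a diff, average the diffs) can be sketched in a few lines. This is a toy illustration only — not the PySyft or PyGrid API:

```python
def federated_round(global_params, worker_datasets, local_update):
    """One schematic model-centric round: every worker trains on its own
    data and reports a diff; the server averages the diffs and applies them."""
    diffs = [local_update(global_params, data) for data in worker_datasets]
    n = len(diffs)
    avg_diff = [sum(d[i] for d in diffs) / n for i in range(len(global_params))]
    # applying the averaged diff yields the next global model
    return [p - a for p, a in zip(global_params, avg_diff)]

def local_update(params, data):
    # Toy "training": move the single parameter toward the local mean,
    # then report the difference (old - new), mirroring job.report(diff).
    local_mean = sum(data) / len(data)
    return [params[0] - local_mean]

# Two workers with private data; neither dataset ever leaves the "device".
new_params = federated_round([10.0], [[2.0, 4.0], [6.0, 8.0]], local_update)
print(new_params)  # [5.0]
```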
There are a few similarities between all the libraries that we should address first:

- We are hoping to add support for secure multi-party computation and secure aggregation protocols using WebRTC in the near future, but at the moment this is unsupported in PySyft.
- All libraries support optional, but suggested, JWT authentication to protect against Sybil attacks.
- KotlinSyft and SwiftSyft both have support for training either in the foreground or background. However, due to limitations with the background task scheduler in iOS, you are not guaranteed to maintain a background process for a certain amount of time. This is variable and up to the operating system to determine.
- KotlinSyft and SwiftSyft have charge detection, wifi network detection, and the capability for sleep/wake detection. These are "smart defaults" that we included to ensure that the training process doesn't interfere with the user experience or run up the user's cell phone data plan. These options can each be configured as you desire.

Before we get into the demos, a quick note: all of the demos for syft.js, KotlinSyft, and SwiftSyft require the same setup process. You must run one Jupyter Notebook within PySyft. This involves having PySyft installed on your local machine, as well as having PyGrid installed on your local machine. Get those set up properly, and then run the following notebook before continuing.

syft.js

The syft.js library supports training and inference of a machine learning model inside a web browser.
Let's try some sample code to see what our API looks like:

```javascript
import * as tf from '@tensorflow/tfjs-core';
import { Syft } from '@openmined/syft.js';

const gridUrl = 'ws://pygrid.myserver.com:5000';
const modelName = 'my-model';
const modelVersion = '1.0.0';

// if the model is protected with authentication token (optional)
const authToken = '...';

const worker = new Syft({ gridUrl, authToken, verbose: true });
const job = await worker.newJob({ modelName, modelVersion });
job.start();

job.on('accepted', async ({ model, clientConfig }) => {
  const batchSize = clientConfig.batch_size;
  const lr = clientConfig.lr;

  // Load data.
  const batches = LOAD_DATA(batchSize);

  // Load model parameters.
  let modelParams = model.params.map(p => p.clone());

  // Main training loop.
  for (let [data, labels] of batches) {
    // NOTE: this is just one possible example.
    // Plan name (e.g. 'training_plan'), its input arguments and outputs
    // depend on FL configuration and actual Plan implementation.
    let updatedModelParams = await job.plans['training_plan'].execute(
      job.worker,
      data,
      labels,
      batchSize,
      lr,
      ...modelParams
    );

    // Use updated model params in the next iteration.
    for (let i = 0; i < modelParams.length; i++) {
      modelParams[i].dispose();
      modelParams[i] = updatedModelParams[i];
    }
  }

  // Calculate & send model diff.
  const modelDiff = await model.createSerializedDiff(modelParams);
  await job.report(modelDiff);
});

job.on('rejected', ({ timeout }) => {
  // Handle the job rejection, e.g. re-try after timeout.
});

job.on('error', err => {
  // Handle errors.
});
```

Is that it? Yep - that's it. The greatest part of all is that you can write your model and training plan in normal PyTorch and PySyft, and syft.js takes care of the rest. It's truly black magic (actually, it's not - check out the "But wait, there's more..." section for the juicy details).

KotlinSyft

Like its brother syft.js, KotlinSyft is a library for performing federated learning on Android devices.
Here's a code snippet for the same MNIST example we showed above:

```kotlin
val userId = "my Id"

// Optional: Make an http request to your server to get an authentication token
val authToken = apiClient.requestToken("")

// The config defines all the adjustable properties of the syft worker.
// The url entered here cannot define a connection protocol like https/wss, since the worker allots them on its own.
// `this` supplies the context. It can be an activity context, a service context, or an application context.
val config = SyftConfiguration.builder(this, "").build()

// Initiate Syft worker to handle all your jobs
val syftWorker = Syft.getInstance(authToken, configuration)

// Create a new Job
val newJob = syftWorker.newJob("mnist", "1.0.0")

// Define training procedure for the job
val jobStatusSubscriber = object : JobStatusSubscriber() {
    override fun onReady(
        model: SyftModel,
        plans: ConcurrentHashMap<String, Plan>,
        clientConfig: ClientConfig
    ) {
        // This function is called when KotlinSyft has downloaded the plans and protocols from PyGrid.
        // You are ready to train your model on your data.
        // param model stores the model weights given by PyGrid.
        // param plans is a HashMap of all the planIDs and their plans.
        // ClientConfig has hyperparameters like batchsize, learning rate, number of steps, etc.

        // Plans are accessible by the plan Id used while hosting them on PyGrid.
        // Eventually you would be able to use the plan name here.
        val plan = plans["plan id"]

        repeat(clientConfig.properties.maxUpdates) { step ->
            // Get the relevant hyperparams from ClientConfig.planArgs.
            // All the planArgs will be strings, and it is up to the user to deserialize them into the correct type.
            val batchSize = (clientConfig.planArgs["batch_size"]
                ?: error("batch_size doesn't exist")).toInt()
            val batchIValue = IValue.from(
                Tensor.fromBlob(longArrayOf(batchSize.toLong()), longArrayOf(1))
            )
            val lr = IValue.from(
                Tensor.fromBlob(
                    floatArrayOf(
                        (clientConfig.planArgs["lr"] ?: error("lr doesn't exist")).toFloat()
                    ),
                    longArrayOf(1)
                )
            )

            // Your custom implementation to read a data batch from your data
            val batchData = dataRepository.loadDataBatch(clientConfig.batchSize)

            // Get the model weights, and return if not set already
            val modelParams = model.getParamsIValueArray() ?: return

            // plan.execute runs a single gradient step and returns the output as a PyTorch IValue
            val output = plan.execute(
                batchData.first,
                batchData.second,
                batchIValue,
                lr,
                *modelParams
            )?.toTuple()

            // The output is a tuple with outputs defined by the pysyft plan, along with all the model params
            output?.let { outputResult ->
                val paramSize = model.modelState!!.syftTensors.size
                // The model params are always appended at the end of the output tuple
                val beginIndex = outputResult.size - paramSize
                val updatedParams =
                    outputResult.slice(beginIndex until outputResult.size - 1)

                // Update your model. You can perform any arbitrary computation and checkpoint creation with these model weights.
                model.updateModel(updatedParams.map { it.toTensor() })

                // Get the required loss, accuracy, etc. values just like you do in PyTorch Android
                val accuracy = outputResult[1].toTensor().dataAsFloatArray.last()
            } ?: return
            // This will happen when plan execution fails, most probably due to the
            // device state not fulfilling the syft config constraints.
            // You should not handle any error here; simply return to close the subscriber.
            // Failing to return from onReady will crash the application.
            // All error handling must be done with the `onError` listener.
        }

        // Once training finishes, generate the model diff
        val diff = newJob.createDiff()

        // Report the diff to PyGrid and finish the cycle
        newJob.report(diff)
    }

    override fun onRejected(timeout: String) {
        // Implement this function to define what your worker will do when it is rejected from the cycle.
        // timeout tells you after how much time you should try again for the cycle at PyGrid.
    }

    override fun onError(throwable: Throwable) {
        // Implement this function to handle errors during job execution
    }
}

// Start your job
newJob.start(jobStatusSubscriber)

// Voila! You are done.
```

SwiftSyft

And of course we have support for iOS via SwiftSyft. Let's see what some code samples look like for the last MNIST demo:

```swift
// plan - Use this to generate diffs using our training data
// clientConfig - contains the configuration for the training cycle (batchSize, learning rate) and
//                metadata for the model (name, version)
// modelReport - Used as a completion block and reports the diffs to PyGrid.
```
```swift
self.syftJob?.onReady(execute: { plan, clientConfig, modelReport in

    do {
        // This returns a lazily evaluated sequence for each MNIST image and the corresponding label.
        // It divides the training data and the label by batches.
        let (mnistData, labels) = try MNISTLoader.load(setType: .train, batchSize: clientConfig.batchSize)

        // Iterate through each batch of MNIST data and label
        for case let (batchData, labels) in zip(mnistData, labels) {

            // We need to create an autorelease pool to release the training data from memory after each loop
            try autoreleasepool {

                // Preprocess MNIST data by flattening all of the MNIST batch data as a single array
                let flattenedBatch = MNISTLoader.flattenMNISTData(batchData)

                // Preprocess the label (0 to 9) by creating one-hot features and then flattening the entire thing
                let oneHotLabels = MNISTLoader.oneHotMNISTLabels(labels: labels).compactMap { Float($0) }

                // Since we don't have native tensor wrappers in Swift yet, we use
                // `TrainingData` and `ValidationData` classes to store the data and shape.
                let trainingData = try TrainingData(data: flattenedBatch, shape: [clientConfig.batchSize, 784])
                let validationData = try ValidationData(data: oneHotLabels, shape: [clientConfig.batchSize, 10])

                // Execute the plan with the training data and validation data. `plan.execute()`
                // returns the loss and you can use it if you want to (`plan.execute()`
                // has the @discardableResult attribute).
                let loss = plan.execute(trainingData: trainingData,
                                        validationData: validationData,
                                        clientConfig: clientConfig)
            }
        }

        // Generate diff data and report the final diffs
        let diffStateData = try plan.generateDiffData()
        modelReport(diffStateData)

    } catch let error {
        // Handle any error from the training cycle
        debugPrint(error.localizedDescription)
    }
})
```

PySyft FL Worker

We promised you one more library. Technically, it's not a library - it's a worker within a library... but we'll go ahead and count that one.
After deprecating the TrainConfig class, we have added a federated learning worker class within PySyft to take its place. If you're interested in seeing this in action, just simply run the next notebook in the series - here's a link to that.

But wait, there's more...

We also managed to release a fifth library, Threepio, a helper library running internally within PySyft (or standalone) that converts all PyTorch Plans into a list of commands for TensorFlow.js. This is required by syft.js in order to be able to execute commands inside a browser.

Threepio is a library for converting commands from one deep learning framework to another. It does this by scraping documentation of popular frameworks like PyTorch, TensorFlow, TensorFlow.js, and Numpy and then mapping commands it can intelligently identify as equivalent in all other frameworks. When a match cannot be automatically determined by name, it can also be added manually within Threepio. In addition to command names being mapped, we also have support for argument and keyword argument (kwarg) reordering.

Threepio is currently available as a library in both Python and Javascript, and also supports multi-command translation, allowing commands to be mapped to other operations even when they don't directly exist in another framework. We hope to build out further support for Threepio and need your help to achieve 100% compatibility between major deep learning frameworks. If you're interested in getting started, check out the good first issues and join our Threepio Slack channel: #lib_threepio.

What about data-centric federated learning?

Funny you should mention it - we're also in the middle of some real big developments when it comes to data-centric federated learning. We've teamed up with the University of California, San Francisco to work on building out OpenMined's data-centric federated learning capabilities in PyGrid. Stay tuned to our roadmap for more updates on what's happening in that project.
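The name-mapping and argument-reordering Threepio performs can be illustrated with a toy translation table. Everything below is hypothetical and greatly simplified — it is not Threepio's actual API or data:

```python
# Hypothetical, greatly simplified translation table in the spirit of
# Threepio -- not its real data. Each entry maps a source command to the
# target framework's name plus the order its positional args expect.
TRANSLATIONS = {
    ("torch", "tfjs"): {
        "abs":    {"name": "abs",    "arg_order": [0]},
        "matmul": {"name": "matMul", "arg_order": [0, 1]},
        "sub":    {"name": "sub",    "arg_order": [0, 1]},
    },
}

def translate(cmd, args, source="torch", target="tfjs"):
    """Return the target-framework command name and reordered arguments."""
    rule = TRANSLATIONS[(source, target)][cmd]
    return rule["name"], [args[i] for i in rule["arg_order"]]

name, ordered_args = translate("matmul", ["a", "b"])
print(name, ordered_args)  # matMul ['a', 'b']
```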
Where do we go from here?

The sky is the limit! In short, here's what to look for in terms of model-centric federated learning's near-term roadmap:

- The ability to start, stop, and pause the federated learning process in the middle of training.
- The ability to persist training data to the device or browser's local storage in the event that training fails or is interrupted by the user.
- The ability to arbitrarily execute PySyft plans without participating in a cycle.
- Better, more complete documentation and testing across all our libraries, including PyGrid and PySyft.
- An easier-to-understand, and smarter, `server_config` in PyGrid that will allow for easier definition of cycles, their length, the number of workers allowed, and the algorithm for selection.
- Support for more commands and more frameworks within Threepio.
- Adding a compatibility table inside Threepio, including adding support for versioning and fuzzy finding of commands.
- Support for secure multi-party computation and secure aggregation protocols within our various worker libraries.
- Support for pre-defined averaging plans within PyGrid.

Of course, I'm sure you can think of many other suggestions on what we should do next. Matter of fact - drop us a line in Slack and tell us what you think! You can find us in the #lib_syft_js, #lib_kotlinsyft, #lib_swiftsyft, #lib_threepio, and #lib_syft_mobile channels.

How do I contribute?

We'd love your support and have over 100 open issues for beginners and newcomers to the community. We could really use your help in shaping future releases and stabilization. Here's a few places you can find issues to get started on:
https://blog.openmined.org/announcing-new-libraries-for-fl-on-web-and-mobile/
I have two SQL tables: Teams and Members. Each team contains 3 members; in the database the members' ids are stored. How could I map the Member objects into the Teams using the Dapper.NET ORM?

```csharp
public class Team
{
    public int? id { get; set; }
    public Member MemberA { get; set; }
    public Member MemberB { get; set; }
    public Member MemberC { get; set; }
}

public class Member
{
    public int? id { get; set; }
    public string Name { get; set; }
}

public IEnumerable<Team> GetTeams()
{
    string sql = "SELECT * FROM Teams t LEFT JOIN Members m ON t.MemberA=m.id AND t.MemberB=m.id AND t.MemberC=m.id";

    return m_connection.Query<Team, Member, Member, Member, Team>(sql, (t, m1, m2, m3) =>
    {
        t.MemberA = m1;
        t.MemberB = m2;
        t.MemberC = m3;
        return t;
    }, splitOn: "MemberA,MemberB,MemberC");
}
```

You need to fix your SQL query to have a proper join with the Members table - one join per member column, for example: `SELECT * FROM Teams t LEFT JOIN Members m1 ON t.MemberA = m1.id LEFT JOIN Members m2 ON t.MemberB = m2.id LEFT JOIN Members m3 ON t.MemberC = m3.id`. With that change your Dapper code will work as you expect, filling the three Member instances of every single Team retrieved. Notice that when you use multimapping, you need to place the splitOn elements in the proper place to have Dapper understand your requirement to create three different Member variables.
https://dapper-tutorial.net/knowledge-base/57492070/how-to-map-multiple-objects-from-the-same-table-with-dapper-net-
Add a new API to allocate and free memory that is guaranteed to be
addressable by a device, but which potentially is not cache coherent
for DMA.

To transfer ownership to and from the device, the existing streaming
DMA API calls dma_sync_single_for_device and dma_sync_single_for_cpu
must be used.

For now the new calls are implemented on top of dma_alloc_attrs just
like the old-noncoherent API, but once all drivers are switched to
the new API it will be replaced with a better working implementation
that is available on all architectures.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 Documentation/core-api/dma-api.rst | 75 ++++++++++++++----------------
 include/linux/dma-mapping.h        | 12 +++++
 2 files changed, 48 insertions(+), 39 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 90239348b30f6f..ea0413276ddb70 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -516,48 +516,56 @@ routines, e.g.::

 	}

-Part II - Advanced dma usage
-----------------------------
+Part II - Non-coherent DMA allocations
+--------------------------------------

-Warning: These pieces of the DMA API should not be used in the
-majority of cases, since they cater for unlikely corner cases that
-don't belong in usual drivers.
+These APIs allow to allocate pages in the kernel direct mapping that are
+guaranteed to be DMA addressable. This means that unlike dma_alloc_coherent,
+virt_to_page can be called on the resulting address, and the resulting
+struct page can be used for everything a struct page is suitable for.

-If you don't understand how cache line coherency works between a
-processor and an I/O device, you should not be using this part of the
-API at all.
+If you don't understand how cache line coherency works between a processor and
+an I/O device, you should not be using this part of the API.

 ::

 	void *
-	dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
-			gfp_t flag, unsigned long attrs)
+	dma_alloc_noncoherent(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, enum dma_data_direction dir,
+			gfp_t gfp)

+This routine allocates a region of <size> bytes of consistent memory. It
+returns a pointer to the allocated region (in the processor's virtual address
+space) or NULL if the allocation failed. The returned memory may or may not
+be in the kernels direct mapping. Drivers must not call virt_to_page on
+the returned memory region.

-Note: where the platform can return consistent memory, it will
-guarantee that the sync points become nops.
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.

-Warning: Handling non-consistent memory is a real pain. You should
-only use this API if you positively know your driver will be
-required to work on one of the rare (usually non-PCI) architectures
-that simply cannot make consistent memory.

 ::

 	void
-	dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
-			dma_addr_t dma_handle, unsigned long attrs)
+	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
+			dma_addr_t dma_handle, enum dma_data_direction dir)

-Free memory allocated by the dma_alloc_attrs(). All common
-parameters must be identical to those otherwise passed to dma_free_coherent,
-and the attrs argument must be identical to the attrs passed to
-dma_alloc_attrs().
+Free a region of memory previously allocated using dma_alloc_noncoherent().
+dev, size and dma_handle and dir must all be the same as those passed into
+dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
+the dma_alloc_noncoherent().

@@ -575,17 +583,6 @@ memory or doing partial flushes.

 Part III - Debug drivers use of the DMA-API
 -------------------------------------------

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index df0bff2ea750e0..4e1de194b45cbf 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -389,6 +389,18 @@ static inline unsigned long dma_get_merge_boundary(struct device *dev)
 }
 #endif /* CONFIG_HAS_DMA */

+static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
+{
+	return dma_alloc_attrs(dev, size, dma_handle, gfp,
+			DMA_ATTR_NON_CONSISTENT);
+}
+
+static inline void dma_free_noncoherent(struct device *dev, size_t size,
+		void *vaddr, dma_addr_t dma_handle, enum dma_data_direction dir)
+{
+	dma_free_attrs(dev, size, vaddr, dma_handle, DMA_ATTR_NON_CONSISTENT);
+}
+
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-- 
2.28.0
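To illustrate the calling convention the commit message describes, here is a hedged sketch of how a driver might pair the new allocator with the existing streaming sync calls. This fragment is not part of the patch: the function name, FOO_BUF_SIZE, and the surrounding driver are hypothetical, error paths are abbreviated, and it only compiles in a kernel build context.

```c
/* Hypothetical driver fragment (not from the patch) sketching the new API.
 * FOO_BUF_SIZE and foo_do_dma are made-up names for illustration. */
static int foo_do_dma(struct device *dev)
{
	dma_addr_t dma_handle;
	void *buf;

	/* Allocate DMA-addressable but potentially non-cache-coherent memory. */
	buf = dma_alloc_noncoherent(dev, FOO_BUF_SIZE, &dma_handle,
				    DMA_FROM_DEVICE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Transfer ownership to the device before it starts the transfer... */
	dma_sync_single_for_device(dev, dma_handle, FOO_BUF_SIZE,
				   DMA_FROM_DEVICE);

	/* ...the device DMAs into the buffer here... */

	/* ...and back to the CPU before reading the results. */
	dma_sync_single_for_cpu(dev, dma_handle, FOO_BUF_SIZE,
				DMA_FROM_DEVICE);

	/* use buf here */

	dma_free_noncoherent(dev, FOO_BUF_SIZE, buf, dma_handle,
			     DMA_FROM_DEVICE);
	return 0;
}
```

Because the returned memory may not be cache coherent, the two dma_sync_single_* calls are what transfer ownership between CPU and device, exactly as the commit message requires for the streaming DMA API.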
https://lkml.org/lkml/2020/9/15/1148
Theatre Essays – Jerome Robbins and Agnes de Mille Essay

An analysis of the life and works of the choreographers Jerome Robbins and Agnes de Mille, and the role of dance in musical theatre.

Agnes de Mille and Jerome Robbins worked in musical theatre in what is widely regarded as the industry's Golden Era. Many would say that it was their innovative approach to choreography in musical theatre that brought an energy and a dynamism to the musical, accounting for its surge in popularity. It is certainly true that they did much to promote the role of dance in musical theatre, which had previously been largely an accessory to the main dramatic event: pretty women with bare flesh parading around the stage. Robbins and de Mille regarded dance as a serious art form and endeavoured to portray it as such on the stage.

Musical theatre as we know it today did not come into being until the twentieth century, but song and dance have been a part of theatre for thousands of years. From as early as the 5th century BC the Ancient Greeks employed music and dance in many of their comedies and tragedies to entertain the public. The Romans carried on this tradition from the 3rd century BC, with many plays by Plautus including song and dance. They invented the first tap shoes by attaching metal plates to their shoes so that the entire audience, seated in an enormous open-air theatre, could hear the dance steps (1). In the Middle Ages travelling minstrels and companies of actors, dancers and singers performed popular songs and slapstick comedy. The religious dramas of the 12th and 13th centuries also included liturgical songs, although no dance. In the French court of the Renaissance, Louis XIV insisted that song and dance be incorporated into his entertainments.
In America, some of the first dramatic roles to be performed by dancers were in melodrama, which is unsurprising considering that the highly stylized movement of melodramatic actors lends itself more to dance than to anything else. Mlle Celeste, who was later to become one of the most celebrated dancers of the nineteenth century, was first billed in America as the famed melodramatic actress (2). Across the 19th century, circuses, showboats and pantomimes all included dance in some form. Stars such as Mlle Celeste and Fanny Essler helped create a popular demand for dance, and companies began to include more elaborate dances in their evening's bill. Melodramas and pantomimes would often incorporate complex ballets into their entertainments. In England the most popular form of entertainment for the working and middle classes was the music hall, which staged variety entertainment in the way of singers, dancers and specialty acts. Vaudeville was also extremely popular in America in the 19th century, and by the 1890s dance acts were ever more in demand. Dances were still, however, largely performed in between the acts of the main production or before the end-piece to fill the gaps. The role of dance in the theatre at that time was limited chiefly to entr'actes. They existed purely to pacify the audience, to showcase a star, or to titillate predominantly male audiences with the alluring spectacle of female limbs in tights (3). Jack Cole referred to the dances and the dancers in theatre at this time as wallpaper (4). It wasn't really until the thirties that dance began to be an important part of the musical. George Balanchine, who trained at the Russian Imperial Ballet School before working with Serge Diaghilev's Ballets Russes, regarded dance as a legitimate and important component in musical theatre.
He believed dance to be the greatest expressive medium and first introduced ballet onto the popular musical stage with the Ziegfeld Follies. Dancers in the theatre began to be taken seriously, rather than regarded merely as pretty girls baring a lot of leg: "Into a choreographic world that was a mélange of decorative movement, legs and taps, Balanchine opened the door and ballet leapt on to the popular musical stage, directed by a supreme artist" (5). Whereas previously mere routines had been performed on the theatrical stage, Balanchine choreographed dances. He refused to let his dances be merely bite-size pieces of entertainment sandwiched between the main attractions and insisted that they be part of the plot, integrated seamlessly into the action. For the first time in a musical, the dances in Balanchine's On Your Toes actually helped to advance the plot. When, in 1982, On Your Toes returned to Broadway, Carol Lawson of the New York Times wrote: "On Your Toes was a turning point in the history of musical comedy, for Mr. Balanchine's dances were more than mere interludes. Instead they served as essential facets of the plot, and were thoroughly integrated parts of the production." (6) Balanchine paved the way for Agnes de Mille and Jerome Robbins to wholly change the dynamics of dance in musical theatre, and thereby in musicals as a whole. De Mille introduced the concept of using dance as a vehicle for story-telling, and Robbins transformed the role of choreographer in a musical into director of the entire show, making dance the driving force.
Agnes De Mille

As a child, although she came from a theatrical family, de Mille was not permitted formal dance training, but would improvise pieces to perform for guests and nightly improvised to the accompaniment of her mother on the Orchestrelle (7). She would practise her melodramatic acting skills every night before performing flexibility exercises to limber up her body in readiness for the stage. When in Hollywood with her family her true dancer's instinct became apparent as she fell in love with the wide open spaces of the country surrounding the town; this would be a recurring theme in her later choreography. In her autobiography, Dance to the Piper, she exclaimed: "The descending grassy slopes filled me with a passion to run, to roll in frenzy, to wreck my body on the earth. Space means this to a dancer, or to a child! The descent through the air, the finding of earth-footage, the embrace and struggle with the fundamental ground. These are to a dancer what strong scents are to an animal." (8) The day de Mille first watched Anna Pavlova perform only increased her desire to become a dancer. She was enthralled, awed, and dumbstruck, and describes that moment with passion and relish (9). It was this that encouraged de Mille to organise her first dance show with a group of other girls, but she was still not allowed dance lessons and became frustrated with the limited dancing she could do. It wasn't until her sister was advised by an orthopaedist to take up ballet that she too was permitted to attend the Theodore Kosloff School of Imperial Russian Ballet. Whilst there she learnt technique and poise and trained her body into that of a dancer's.
She worked feverishly hard, perhaps even more so because her parents would not allow her to have lessons more than twice a week, leaving her lagging behind the rest of the class. She resorted to practising in her mother's bathroom, where she had a barre installed. By the time de Mille had finished high school, however, she had grown to loathe the rigours of daily practice and decided to abandon her classes and her solitary practices and go to college. During her time at UCLA de Mille occasionally staged dances for student rallies, and towards the end of her college life she started exercising with a mind to getting back up on her pointes. She decided to dance professionally after meeting Douglass Montgomery, who convinced her that she could. Things were never going to be easy for her, though. She moved to New York at a time when dancers "[were] hired on the shine of their stocking and the wink of their agent", and when the few dance companies that existed on Broadway were small and dedicated to the personal development of some star (10). I have mentioned earlier the limited opportunities a dancer had at this time, when no "pure" ballet was being performed in either musical shows or moving-picture shows and there was no such word as "choreography". When rehearsing for a concert of her own choreography, Montgomery taught de Mille how to act through her dance; "he taught me that every gesture must have some explicit meaning" (11). She decided to perform character studies whereby the dancing revealed personality and was natural in the course of the narrative. Right from the start she wanted to use dance as more than light entertainment: as a vital story-telling vehicle.
These first efforts, being only character sketches, were quite light by nature, and the style was folk rather than ballet, but it was different from what anybody else had done on the stage before. When she performed some of these at a concert she was well received, but when she auditioned for Charles Cochran and Noel Coward they told her that she was more suited to the concert hall, and that she would never make it in the theatre. After touring with Adolph Bolm, she was commissioned as a dancer-choreographer on Christopher Morley's revival of The Black Crook, but the drunken, noisy audience made her hand in her notice. It was in the mid-thirties that the dance scene in New York began to stir. Every Sunday a couple of dance concerts were given, with soloists experimenting with every dance form conceivable. De Mille remembers, "we were out to remodel our entire trade... there were no rules... we struck sparks from one another" (12). For five years de Mille taught herself to choreograph, but she was trying "to learn to compose dances, not dumb shows, nor dramatic stories, nor character studies, but planned sequences of sustained movement which would be original and compelling" (13). She viewed dance as a serious art form and wanted to choreograph dances that would present it as such, but with barely any formal training behind her she found this very difficult. After disastrously choreographing Flying Colours, de Mille and her mother moved to London where, as in New York, she choreographed and danced in her own recitals to critical acclaim but with no financial gain. At one recital, though, Marie Rambert and Arnold Haskell were amongst the audience and were impressed enough to ask her to stay in London to continue her recitals and be taught at The Ballet Club. It was at The Ballet Club that de Mille met Anthony Tudor and Frederick Ashton, both of whom would go on to become important choreographers and who, with her, would revolutionise the dance world.
In 1933 she choreographed the dances for Charles B. Cochran's Nymph Errant in London, but during the thirties de Mille returned to America several times, dancing in her uncle's production of Cleopatra in 1934 and choreographing Irving Thalberg's film version of Romeo and Juliet. On the latter project she had to endure her dances being cut to pieces, as the camera cut out most of the group work and showed only snippets of the rest. The custom at the time was not to show a whole dance but to provide light entertainment with cuttings of dances. On Hooray for What! de Mille came up against the type of men who insisted that dancers were hired for their sex appeal and that dances were performed to sell sex. These were the sort of men who were keeping dance from becoming a serious, important art form and who assigned it a merely decorative function in theatre and films. "The management wanted the girls exposed as much as possible, face front always, bosom bared, legs just visible to the waist," de Mille recalls (14). As she refused to conform exactly, wanting her own creative input, she was fired with one word, before her choreography was ripped to shreds. Without the security of Equity many of the dancers and actors were fired without warning as the Business Manager exacted his vision of a bosoms-and-legs chorus-line extravaganza. At this time on Broadway dances, at their best, were "slick and grammatical, but with no great moments of dramatic revelation" (15). When de Mille returned to Broadway some years later she was to change this notion dramatically. In 1940 Ballet Theatre was formed and de Mille was invited to become one of the choreographers, on the understanding that she was not to dance herself. It was a highly creative time for de Mille and she was able to work with some of the finest dancers and choreographers of the day.
It was at Ballet Theatre that de Mille created her first ballet, Black Ritual, a controversial piece with black dancers; the first time this had ever been attempted by a serious ballet company. Having had only brief and manic flurries with commercial troupes of assorted cocottes and chorus dancers, she had not had the experience of setting a schedule of choreographing and rehearsing and was extremely nervous. Her dancers did not help matters by being consistently late and by arriving unprepared. The ballet was not well received, but soon after she was hired by a successful booking manager for a national tour. De Mille and her dancers prepared for the tour through blood, sweat and tears, and it was a total success; de Mille discovered something vital: although the managers may not, the public liked and appreciated her work. Not long after returning to New York, de Mille was asked by Ballet Theatre to create Three Virgins and a Devil, which was a huge hit and débuted the young Jerome Robbins. In 1942 she was commissioned to create a ballet for the Ballet Russe de Monte Carlo. She extended a piece she had partly choreographed years earlier, and Rodeo was the result. The ballet formed the basis for a uniquely American dance style, using folk themes, tap dance and energetic, fast-paced movements, capturing the essence of a cowboy's manner. Teaching male dancers who were used to the precision and elegance of ballet proved to be difficult, so de Mille resorted to acting lessons to help her dancers find their characters. She wanted them to be cowboys; she wanted them to communicate dramatic meaning. Come opening night they were prepared, and the audience adored them. De Mille had created an entirely new and exciting dance style; "it was the first of its kind, and the moment was quick with birth" (16).
De Mille successfully turned ballet into musical comedy, and gave the form real energy and relish, with movements never before seen in this most precise of dance forms. "We had breached the ramparts," de Mille exclaims in Dance to the Piper (17). She, with a few choreographers before her, had created a new tradition, one with a different root impulse from traditional ballet. She asserts that to create a style that truly differs from ballet one must base that style on another technique. De Mille integrated folk dances into her work, without lowering the performances to comedy imitations. Her work, like that of fellow choreographer Anthony Tudor, conveyed theatrical meaning through dance steps; the line between actor and dancer was blurred. Rather than dancers using traditional technique and performing well-known steps, where the human bodies are used "merely as units of design, grouped, lumped, and directed into predetermined masses", de Mille strove for originality and dramatic communication in her choreography. She writes of Tudor's work: "Tudor developed the story-telling quality of his choreography to such a degree that each gesture, formed out of the emotional components of the moment, is almost as explicit as though the dancers spoke. The new choreography does not arrange old steps into new patterns; the emotion evolves steps, gestures, and rhythms." (18) Reading de Mille's explanation of her method for creating dance in Dance to the Piper, one is reminded of a director beginning to stage a play. She spends much time on characterisation; finding the right gestures and stance for each character acts as a stimulus for the choreographic process (19). De Mille did not create impersonal dancers but characters acting out, through dance, a story.
On the strength of Rodeo's success, as well as its all-American style and theme, de Mille was asked by Richard Rodgers and Oscar Hammerstein to choreograph the dances for their new production, Oklahoma! De Mille knew the project was going to be difficult as, unlike ballet, where the choreographer is the master and ruler of the show, many elements other than dance contribute to form musical theatre. The performers must take direction from the director, the composer, the author of the book, and the producer. The dance director got little say in the arrangement. Singing and acting were the main components in musical theatre at the time; dance was merely for decoration. When casting the dancers, de Mille insisted on talent and personality; "Rodgers wanted faces, although his idea of a face frequently had to do with the character in it", but Mamoulian, the director, wanted slender legs above all (20). It was assumed that the public, too, were far more interested in the singing and the drama than in the dance. The number of dances was therefore limited. De Mille insisted, however, that every dancer was hired for only one reason: that he or she was the best available performer for the role (21). She did not cave in to the whims of the director; she wanted her dancers to be serious professionals, and Rodgers agreed. Once, during rehearsals, a note was played out of tune and one of the chorus's faces winced with pain, but it was not annoyance or amusement, it was agonized concern. When Rodgers saw her expression, one he had never seen cross a chorus girl's face, he realised that "responsible artists had entered the ranks" (22). The chorus dancers were no longer pretty faces, good legs but nothing between the ears; every performer, including the dancers, knew their craft.
Another difficulty de Mille would have was that the dances would have to be created from the impetus of the book; they would have to "build the author's line and develop his action" (23), rather than being created from scratch around characters developed by her. De Mille was also faced with the problem of moving swiftly from dialogue, to song, to dance, and back to dialogue again without it looking ludicrous. As the choreographer she was going to have to learn "surgery, to graft and splice" (23). De Mille achieved all this and more. She succeeded in elevating her role as choreographer to one of equal importance with the playwright, the composer and the lyricist, and she did what no choreographer had successfully done before: she integrated the ballets into the story. Her dancers were not merely decoration but characters, and she worked with them to achieve depth of character, motivation and emotion. Dancers could no longer project their personal response to a piece of music. They needed to move as the characters they were portraying. Their reactions, their facial expressions, all needed to further the audience's understanding of their character. This required in-depth script readings and analysis of character motivations, just as a director would insist on for his or her actors. De Mille realised that this can really help the dancer. Whereas in ballet the dancer has to rely on what they feel to give the dance energy and dynamism, they now had the singing and acting to give them background and motivation to help give their dance, as these characters, expressive movement (24). If the role of dance in Oklahoma! was to communicate dramatic meaning to the audience, and to further the plot, the dancer had to become the character, and know it inside out.
As de Mille herself notes, it was Anthony Tudor who first shocked audiences into viewing a ballet dancer as an individual capable of dramatic communication through her body, by dressing them in long Edwardian dresses (25). No longer was the ballet dancer the stylized, typical image that made it acceptable for women to bare their legs and arms and wrap their limbs around a man. She was now familiar, like their mothers and aunts. They could now communicate human truths and take part in the telling of a story. Dressed as the characters of a South-western town, rather than in tights and a tutu, the audience could view the dancers as humans with a story to tell. The crowning glory of de Mille's choreography on Oklahoma! was without doubt the dream ballet which occurs at the end of Act 1. With this de Mille experimented with something entirely new in musical theatre, and for many years to come hardly a musical was made without it incorporating a dream ballet. In this extended ballet Laurie acts out her dilemma through dance: a highly imaginative method of moving the story forward. Dance was inextricably bound to the plot of the musical. Whereas in previous musicals dance was merely a side entertainment and could be cut without the story losing any of its meaning, one could not take the dream ballet out of Oklahoma! without ruining the plot. By using dance, the thoughts and feelings in the mind and the heart of Laurie could be conveyed and explored far more effectively than through straight dialogue. The dances were intended to strengthen the audience's understanding of the characters and further the plot, as well as complement the lyrics and the dialogue, and it worked. Now, as well as singing and acting, dancing added to the dramatic impact of the musical on the audience.
As Kislan notes, dance also adds to the important theme of open space in Oklahoma! It is the guiding metaphor for the promise of the American Dream and the limitless opportunities of the "brand new state" the lovers are destined to live in (26). The audience is always aware of the physical space on stage as the dancers never seem crowded, no matter how many occupy the space. In the dream ballet Curly lifts Laurie up in the air, reaching for the sky, and the balletic style it is danced in constantly opens the body up, extending arms and legs to give the impression of limitless space. In Dance to the Piper de Mille writes of the sense of space ballet dancers work with: "Every joint and sinew is pulled long, the arms are wide and free... the stretching up and out, the liberating jump, the racing over and away from the earth" (27). The feeling of space conveyed on stage through dance complements the songs, with lyrics such as "plenty of room to swing a rope/plenty of heart and plenty of hope" (28). At last dance as more than an accessory, as a serious art form, had arrived on the popular stage, and the audience were roaring. "They were roaring. People hadn't seen girls and boys dance like this in so long. Of course, they had been dancing like this, but not just where this audience could see them" (29). Possibly the most important accomplishment for dance in Oklahoma! was that de Mille was a choreographer on the show, not a dance director. The difference being that "dance directors worked for audience approval; choreographers work for audience enlightenment" (30). Her dances were integral to the story; they added and enlightened rather than decorated. This was a new role for dance in musical theatre.
De Mille went on to choreograph the dances for many more Broadway musicals in the 1940s and 1950s, including One Touch of Venus in 1943, Carousel in 1945, Brigadoon in 1947, Gentlemen Prefer Blondes in 1949, and Paint Your Wagon in 1951. Tally-Ho (1944) and Fall River Legend (1948) provided her with the opportunity to further her revolutionary style. She continued to cast dancers who were skilled at projecting character as well as performing the right steps. Kislan records that dancers who worked with de Mille have testified to her fantastic ability to sense even the smallest dramatic quality in their dancing and, together, manage to set it free and incorporate it into the choreography so that "the dance is always expressive of the drama" (31). De Mille was still responsible to the director, the lyricist and the author of the book, though. Her choreography had to fit the other elements of the musical, and dance was often of secondary importance to those elements. Choreographers such as Jerome Robbins were to change the role of the choreographer, and thus the role of dance in musical theatre, forever. Banished was the mindless aesthetic that enslaved dance to the colossal, opulent, and lavish demands of the producer, the star, or the specialty act (32). Dance was to be given the highest status in the production. The choreographer was to rule the show. Indeed, the choreographer would no longer be merely the dance creator, but the director-choreographer; "the dance-director follows, the choreographer adapts, but the director-choreographer leads" (32). Jerome Robbins was a pioneer of this change in status for the role of dance in musical theatre.

Jerome Robbins

Robbins was born into a devoutly Jewish family in 1918, but resented being Jewish, with its conservatism and old ways. His large family, however, provided him with many theatrical contacts and influences.
His uncle, Jack Silverman, started out as a dance-hall dancer with the two men he was living with, Bing Crosby and George Raft. Edward G. Robinson was also related, and another of Robbins' uncles, Daniel Davenport, owned a chain of vaudeville and burlesque theatres. Davenport's father and his brother performed on the vaudeville circuit under the name of the Davenport Brothers, staging athletic acts. It is to this part of the family that Robbins owes his gusto for vaudeville comedy. Robbins' parents ensured that both their children were educated in the arts, and this is where Jerome shone. He saw it as an escape route, a way by which he could gain access to the possibilities which lay beyond his community: "When I was a child art seemed like a tunnel to me. At the end of that tunnel, I could see light where the world opened up, waiting for me" (33). Both he and his sister, Sonia, were strongly encouraged by their mother to aspire to the stage. Sonia took dance lessons and Jerome music lessons, and by the time he was three and a half he was composing pieces and giving recitals on the piano. Indeed, he excelled in anything creative that he tried, but admitted that this was because "the only world that was really exciting for me was the world in which I could make believe that things were not the way they were" (33). The world of musical theatre was therefore the perfect world for him, later, to live in. Robbins had to keep his love of dance a secret from his parents, especially his father, and from his school friends, who were all into sports. As his sister danced her way into the limelight Jerome was left practising in private, often with the help of Sonia.
At the Weehawken schools he attended, Robbins performed in many school plays, but it was at his summer camps that he fell in love with Gilbert and Sullivan musicals, and played the comic leads in HMS Pinafore, The Mikado, and The Pirates of Penzance. Jerome's bent for comedy was made evident through his performances in these roles. A fellow camper later commented, "Jerry had a tremendous sense of humour in everything he did" (34). He still kept his dancing a secret, though. At one parents' day at the camp, however, Robbins performed a dance on the table-tennis table and, as another camper remembers, had the adults in tears. Furthermore, "This was a big audience and he was totally uninhibited" (34). Robbins eventually took lessons with Sonia's dance teacher in modern dance, the form that was the emerging trend in the Depression years of the 1930s, when people wanted a dance form that could more readily express the social realities of the time than could ballet. Jerome witnessed many pioneering greats of the dance stage, such as Martha Graham, Charles Weidman, and Doris Humphrey, but in 1932 he was to meet the man he would later call his "guru", Gluck Sandor (35). Sandor directed, choreographed and danced in many of the productions staged at the Dance Centre, at which Sonia danced. He worked in vaudeville and on Broadway in the 1920s and was a tremendously expressive dancer, manipulating every gesture for dramatic effect, which was to have a profound influence on Robbins' future work. As Robbins himself has said, "We dancers were taught to perform with the concentration of an actor" (36). Anzia Kubicek, a dancer, remembers that Sandor "preferred to do things with a story line... his imagination would just go a mile a minute... and he worked with the bodies he had to work with, which were sometimes very limited" (37).
Robbins would work with both principles in his choreography, starting with a story from which his dancers could develop their characters, and therefore their movements. After graduating from Woodrow Wilson High School in 1935 Robbins entered New York University to study chemistry, but in his second year his father's corset business was in danger of going bankrupt and he could no longer fully support Jerome's education financially. Jerome was by this point desperate to drop out and follow his dream of becoming a professional dancer and, through his sister, he managed to audition successfully for an apprenticeship with Sandor's company. With the help of Sandor, Jerome convinced his parents to let him try to make it as a dancer, and he left the university. Sandor persuaded an unconvinced Robbins to concentrate on ballet rather than modern dance, but it wasn't until he saw Alexandra Danilova perform with the Ballet Russes that Robbins agreed that ballet held many opportunities for him. Jerome progressed quickly and Sandor recognised him as a natural dancer, recalling years later: "Once he saw something, he could do it backward. Before I would do a thing he had it. He could anticipate what was to come. He was sensitive and he was musical." (38) In 1937 Robbins secured his first part in The Brothers Ashkenazi, which intensified his passion for the theatre. Throughout its run he would practise at the barre, much to the bemusement of the Yiddish cast of the play. His fellow performers recall him constantly dancing (39). After two years' training at the Dance Centre, and having procured roles in various plays, Robbins left the company in search of more commercial work. He found work in the chorus of a number of musicals which, in the mid-thirties, were largely comic.
Although Robbins went on to choreograph and dance in such musicals, he also wanted to take the medium further, and use musical theatre as a vehicle for exploration into the human psyche. He would later say, "Musicals tend to be bantering. No one has ever used them as a medium to depict deep personal struggle, and I think this can be done" (40). He would go on to do just that. As well as his brief encounters with Broadway, in the summer of 1937 Robbins started working as part of the entertainment staff at Camp Tamiment, a summer job he would hold for five years. The resort played host to many up-and-coming talents, such as Danny Kaye, Imogene Coca, and Carol Channing. It was a virtual breeding ground for musicians, comedians, singers and dancers. Robbins choreographed and danced in many of the performances held in the social hall. It was a very creative atmosphere, with new productions performed every week. Max Lieberman, director of the entertainment program at Tamiment, strove for Broadway-quality pieces, and with only a week to create and rehearse each one, ideas had to flow. Robbins' work was of two extremes: burlesque sketches on the one hand and socially serious dramatic dances such as Strange Fruit and Death of a Loyalist on the other. Some of his pieces were performed at the 92nd Street YMHA, under the auspices of the Theatre Arts Committee, as well as in the Straw Hat Revue, which Tamiment opened on Broadway in 1939. The revue was an amalgamation of many of the sketches performed at that summer's camp but, due to the sensitive atmosphere following the outbreak of war in Europe, they were only allowed to include the comedy sketches. Robbins suffered a huge blow to his ego when Jerome Andrews, who had been brought in by the angels to oversee the dances, was given sole credit on the bill for the choreography.
It did however give him the determination to be solely in charge of any choreography in future productions, and led to his later creation of the role of the all-controlling director-choreographer. In the summer of 1940 Robbins joined Ballet Theatre and was taught by some of the great choreographers of the day, including Tudor and De Mille. They trained Robbins and his fellow students to act as well as dance, and taught that dancers must not only be able to perform steps accurately but must also be able to express the dramatic content of dance. He danced in the corps in many ballets at this time, among them Antony Tudor's Goya Pastoral. Robbins learnt much from Tudor, whose ability as a story-teller through dance was his forte. This was the kind of dance Robbins wanted to see in musicals, for while other choreographers were interested in steps and decorative movement, Tudor was devoted to studying human passions and relationships (41). Robbins would take much of what he learnt whilst working with Tudor and use it as a starting-block for his own expressive choreography. Robbins was soon promoted to solo roles, his first as the Youth in De Mille's Three Virgins and a Devil. He was lauded for his expressive movements and gestures, which made him an incredibly amusing character to watch. De Mille's influence led Robbins further into the field of acting, by introducing him to a good friend of hers, Mary Hunter, who had recently established a theatre group called the American Actors Company. She showed him the process of improvisation, which greatly influenced and improved the theatricality of his dancing and choreography. He realised that, having started dancing relatively late in life, he didn't have the technical skills, but his acting experience provided him with a dramatic flair. The greatest solo performance of his early career was as the puppet in Petrouchka, and his preparation for the role was intense.
He studied images of the puppet in minute detail, trying desperately to capture the essence of Petrouchka in order to get every single gesture just right. He prepared as an actor would prepare for a role, trying to find the character's motivation, his emotions, thoughts and feelings, so that he could feed that into his movements. In this way every gesture conveyed dramatic meaning. The critics and the audience raved about him and he became one of the leading dancers of the company. Robbins desperately wanted to choreograph his own ballets for Ballet Theatre and eventually got his chance when an idea he pitched to them, of a ballet about three young sailors on leave in Manhattan, was given the go-ahead, and Fancy Free was born. He gave each of the dancers exact detail of the characters they were playing, and he expected an exact performance in return (42). Like De Mille on Oklahoma!, Robbins picked dancers that were the most appropriate for the roles, and the result was an incredibly tight, character-driven ballet which became a smash hit. John Martin reported in the Times, "He has managed to get into this light-hearted little piece of American genre the same quality of humour which has always characterised his personal dancing, the same actor's sense of the theatre" (43). Each character had a unique personality, which was portrayed brilliantly in the dance, especially in the individual dances where each tries to woo the girl. He integrated classical ballet with modern dance forms and images of contemporary American culture in a way that no one had seen on the stage before. Perhaps inevitably, Fancy Free was turned into a Broadway musical entitled On the Town, which contained the highest number of dances of any Broadway show yet. In Oklahoma! the dances moved the story forward, but in On the Town the dances actually hold the production together.
"The essence of the whole production," commented Leonard Bernstein, "is contained in these dances" (44). Importantly, there is no attempt to make the dances realistic; the characters in On the Town dance as naturally as they sing and speak, and the audiences accepted this. Robbins successfully further grounded dance in musical theatre as an essential story-telling element. Following On the Town, every musical Robbins worked on contained essential story-telling elements: a plot, characters, and a point. Billion Dollar Baby, High Button Shoes, and Look, Ma, I'm Dancin' were all conceived around a solid story with strong characters, and with the plot and the audience's understanding of the characters both furthered by the dances. In Look, Ma, I'm Dancin', a partly autobiographical show about an incredibly ambitious, hard-working dancer-choreographer and the rich heiress that backs his company, the two ballets Robbins creates illustrate the changes the protagonist's character makes. In the first he is cock-sure, loud and energetic and the ballet mirrors this, being fast-paced, complex, and full of youthful exuberance. The second ballet is calm, reflective and altogether more heartfelt, indicating his changed mood and the fact that he has come to reflect on his life and what he values most. The ballets have a more profound effect on the musical than any of his others as, "They grow out of the hero's personality and in that way they develop the story" (45). Throughout the 1940s Robbins had continued dancing at Ballet Theatre, but in 1949 he left to join Balanchine's fledgling New York City Ballet, where he was almost instantly appointed Associate Artistic Director. He danced with the company until the mid-1950s, but his choreography was his most important contribution to the company. His works contained his hallmark theatricality and were infused with modern dance forms and music.
His work in musical theatre continued alongside his residency at City Ballet, with his most important piece in the early 1950s being The King and I. This proved to be one of his toughest challenges yet, as his largely Western set of dancers had to learn and perform a variety of Eastern dance forms. His most important piece in The King and I is undoubtedly the ballet, 'The Small House of Uncle Thomas', into which was poured historical information, researched oriental dance forms, and personal creativity. Stylised gesture and movement, masks and mime, all feature but do not overwhelm the other aspects of the dance, such as the comedy of the ballet. The ballet also helped convey one of the central themes of the musical: that love and reason can overcome cultural differences and racism. In 1957 Robbins embarked on what was to become his greatest achievement in musical theatre yet: West Side Story. The challenge for the collaborators of West Side Story was, according to Robbins, "to see if all of us – Lenny [Bernstein] who wrote 'long-hair' music, Arthur [Laurents] who wrote serious plays, myself who did serious ballets, Oliver Smith who was a serious painter – could bring our acts together and do a work on the popular stage. The idea was to make the poetry of the piece come out of our best efforts as serious artists" (46). Robbins' motivating force was to completely integrate the book, score, choreography, and design of highbrow artists and bring it to the commercial stage. Although there were other major contributors to West Side Story, the musical was conceived, directed and choreographed by one man, Robbins, and as such was the first of its kind. West Side Story furthers the ideas that Oklahoma! first suggested: that musicals can be completely integrated so that every element works together to support the underlying themes, the plot and the characters.
For the first time in a musical, rather than cast a chorus and principal dancers, an ensemble of forty performers was cast who could all sing, act and dance, to enable West Side Story to be a truly integrated show. The dramatic communication inherent in all the dances was very important to Robbins. He undertook extensive research into gang culture, including in-depth observation of, and conversations with, teenage gang members on the west side. Once he knew what he wanted to portray he instilled it in his performers using Method Acting techniques. The rival gangs (the Sharks and the Jets) weren't allowed to socialise with each other even in the wings and in between rehearsals. He wanted to build a resentment and a mistrust between the gangs that would come out in their performances. He also wanted each performer to know their characters inside out. Chita Rivera, who played the lead female role, recalls Robbins talking to her about her character: "We used to sit and just talk about the character. I'd never talk about something I didn't know about before, a person, and he talked in colours and textures, that sort of thing. It was just a fascinating way to dissect a person and why they existed." (47) For each element to come together in performance, everything had to be tight, and Robbins took on the job of seeing that every aspect of the show was watertight. For the first time in musical theatre, dance was an absolutely equal partner to the words and the music. If anything, lyrics and song served the dance (48). The highly stylised movements and gestures of the dancers effectively communicated the tensions between the gangs and the personalities of the characters. Furthermore, dance is employed to move the plot forward in the least amount of time possible.
For example, dance initiates and introduces the audience to the conflict between the Jets and Sharks in the 'Prologue', it advances the conflict during the 'Dance at the Gym' and it concludes the action in 'The Rumble' (49). The 'Dance at the Gym' also provides an emotional aspect without stalling the action of the play. Much happens between Tony and Maria in a short amount of stage time. Through dance they meet and fall in love in only forty bars of music. If the scene were dialogue-led it would have taken triple the amount of time. Further, acting out this tender first love scene without words makes it far more emotional and sensitive. Robbins continued to choreograph and direct Broadway musicals, each completely integrated to create a tight, seamless production. His background in both musical theatre and ballet, combined with his many skills, gave him the ability to approach a show with the overall view needed to blend all the elements into a homogenous, seamless whole. He became a director-choreographer, with full control over the production, elevating dance to the highest status. Agnes De Mille and Jerome Robbins both contributed greatly to the changing role of dance in musical theatre. De Mille started the trend for integrated musicals, ensuring that dance furthered the plot, and provided her dancers with dramatic gestures and characteristics, studying character motivation and emotions with them. Robbins further advanced the importance of the role of choreographer to director-choreographer, making dance the essential element of the show. Dance became not merely the support for the main theatrical show, but the show itself. Not only did this change in role for dance benefit dancers by creating more opportunities and raising the importance of the medium they worked in, but musical theatre itself evolved into a far more creative art.
With a single director-choreographer overseeing the entire production, the actors, singers and dancers could far more easily work together. Hubert Saal, writing in Newsweek ten years after Robbins first started on Broadway, asserted: "dance remains the essence of the Broadway musical. Body English is an eloquent language all its own. It may only be heightened or stylised movement, or a means of changing pace, or a stageful of exuberant bodies displaying raw energy, but the excitement of Broadway rhythm is as strong as ever. The new breed of choreographers, following such ground-breakers as Jerome Robbins, Agnes De Mille and Bob Fosse, has gone to great pains in their efforts to integrate dance into the plot." (50) Thanks to the innovativeness of choreographers such as De Mille and Robbins, and their frustration with the limited role dance played in musical theatre, dance is now an important and fully integrated element of any musical. The pinnacle of their success was probably the occasion, for West Side Story, that the first box in a programme was posted with the words, "Entire production, direction, and choreography by Jerome Robbins" (51). From that moment on, dance would be forever integral to musical theatre.

Bibliography

Bell, Marty. (1994). Backstage on Broadway: Musicals and their Makers. London, Nick Hern.

Citron, Stephen. (1991). The Musical from the Inside Out. London, Hodder & Stoughton.

De Mille, Agnes. (1982). Dance to the Piper; And Promenade Home: A Two-Part Autobiography. New York, Da Capo Press.

Kislan, Richard. (1987). Hoofing on Broadway: A History of Show Dancing. London, Simon & Schuster.

Lawrence, Greg. (2001). Dance with Demons. New York, G. P. Putnam's Sons.

Lerner, Alan J. (1986). The Musical Theatre: A Celebration. London, Collins.

Mates, Julian. (1985). America's Musical Stage: Two Hundred Years of Musical Theatre. Westport & London, Greenwood Press.

Steyn, Mark.
(1997). Broadway Babies Say Goodnight: Musicals Then and Now. London, Faber and Faber.
https://studyhippo.com/theatre-essays-jerome-robbins-and-agnes-de-mille-4858/
CC-MAIN-2020-40
en
refinedweb
As we know, a class cannot access the private members of another class. Similarly, a class that doesn't inherit from another class cannot access its protected members.

Friend Class: A friend class is a class that can access the private and protected members of the class in which it is declared as a friend. This is needed when we want to allow a particular class to access the private and protected members of another class.

Friend Class Example

In this example we have two classes, XYZ and ABC. The XYZ class has two private data members, ch and num, and declares ABC as a friend class. This means that ABC can access the private members of XYZ, as demonstrated in the example where the function disp() of the ABC class accesses the private members num and ch. In this example we are passing the object as an argument to the function.

#include <iostream>
using namespace std;
class XYZ {
private:
   char ch = 'A';
   int num = 11;
   // declare ABC as a friend class so it can access ch and num
   friend class ABC;
};
class ABC {
public:
   void disp(XYZ obj) {
      cout << obj.ch << endl;
      cout << obj.num << endl;
   }
};
int main() {
   ABC obj;
   XYZ obj2;
   obj.disp(obj2);
   return 0;
}

Output:
A
11

Friend Function: Similar to a friend class, a friend function can access the private and protected members of another class. A global function can also be declared as a friend, as shown in the example below:

Friend Function Example

#include <iostream>
using namespace std;
class XYZ {
private:
   int num = 100;
   char ch = 'Z';
public:
   friend void disp(XYZ obj);
};
// Global function, declared above as a friend of XYZ
void disp(XYZ obj) {
   cout << obj.num << endl;
   cout << obj.ch << endl;
}
int main() {
   XYZ obj;
   disp(obj);
   return 0;
}

Output:
100
Z
https://beginnersbook.com/2017/09/friend-class-and-friend-functions/
Your API is wide open If you read part 1, you know now what a JWT is and how to issue one. We’ve provided a convenient way for clients to gain access to restricted areas of our API… … but, we haven’t actually restricted anything. If you leave your API open like this, your SPA can make requests to it, but so can anyone else. As a minimum we want to ensure the client making a request to our API has a valid token (which they got by calling our JWT-issuing action, with a valid username and password). The plan Our plan of attack:

- Require authentication for our API controllers
- Test that anonymous requests are rejected
- Configure JWT auth in startup.cs
- Test that requests with a valid JWT are accepted

Require Authentication With the [Authorize] attribute you can easily lock down your API to authorized users only.

[Authorize]
[Route("api")]
public class ApiController : Controller
{
    [HttpGet("Test")]
    public IActionResult Test()
    {
        return Ok("Super secret content, I hope you've got clearance for this...");
    }

    // rest of controller goes here
}

Now, unless otherwise specified, every action in this controller will require the request to be authenticated. Test anonymous requests Try navigating to your /api/Test endpoint and you’ll get a 401 Unauthorized response. Check your work Whilst it’s become a little bloated in recent times, Postman is still a handy tool for testing your site. In this case, if you want to try hitting your API, you can easily create a GET request, run it, then check what status code comes back (401 hopefully). So our API is definitely secure, but now we have the opposite problem to before: no one can get past this barrier. Authenticate JWTs In ASP.NET Core 2.0, configuring JWT auth is pretty straightforward. In most cases, you just need to configure it in startup.cs.
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidIssuer = "yourdomain.com",
                ValidAudience = "yourdomain.com",
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes(_configuration["SecurityKey"]))
            };
        });

    services.AddMvc();
}

These are the basic options for configuring JWT auth for your site. This code configures ASP.NET Core so that requests to API actions (which require authentication) will check for a token meeting these requirements:

- issued with the correct issuer and audience details
- signed with the same secret key that we’re using (from configuration)
- hasn’t expired

If you’re wondering where _configuration came from (it may be missing in your startup.cs file), you can easily bring it in via the constructor and let ASP.NET Core wire it up for you.

private readonly IConfiguration _configuration;

public Startup(IConfiguration configuration)
{
    _configuration = configuration;
}

Whilst you’re here you’ll also need to add one line to the Configure method in startup.cs.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    app.UseAuthentication();
    app.UseMvc();
    app.UseStaticFiles();
}

Your Configure method may vary but the key part is adding app.UseAuthentication();. This needs to be one of the first lines (and definitely before UseMvc) so it takes effect early in the pipeline and stops ASP.NET Core from serving the request as soon as possible if the user isn’t authorised. Test valid requests Now to the final hurdle, can we get to our test API method if we pass a valid token? First up, we can hit the API action we created in part 1, to get a JWT. Then modify the headers for our API GET request, to pass that token in the Authorization header. Authorization: Bearer <token goes here> So now what? You can issue a JWT and validate it. Your app is now protected from meddling by anonymous users.
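A side note that can help while testing: the header and payload segments of a JWT are just base64url-encoded JSON, so you can inspect a token locally before pasting it into Postman. Here is a small shell sketch; the sample segment below is a made-up, unsigned illustration, not a token from this app.

```shell
# Decode one base64url segment of a JWT (header or payload) to JSON.
decode_jwt_segment() {
  seg=$1
  # base64url uses '-' and '_' in place of '+' and '/', and drops padding
  seg=$(printf '%s' "$seg" | tr '_-' '/+')
  case $(( ${#seg} % 4 )) in
    2) seg="$seg==" ;;
    3) seg="$seg=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}

# Example with a hypothetical payload segment:
decode_jwt_segment 'eyJzdWIiOiJtYXJ5In0'   # prints {"sub":"mary"}
```

Decoding is not validating: anyone can read these segments, which is exactly why the server must verify the signature and expiry as configured above.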
This approach is known as Bearer Token Authentication where your app issues a token and then gives access to the bearer of that token. If you have a valid token, you’re in. Photo Credits: Jez B Fairly pointless… via photopin (license) Theo Crazzolara Stop, it’s winter via photopin (license) All posts in the Secure your ASP.NET Core Web API series.
https://jonhilton.net/security/apis/secure-your-asp.net-core-2.0-api-part-2-jwt-bearer-authentication/
To connect to a remote browser you need to use the ConnectAsync function: var options = new ConnectOptions() { BrowserWSEndpoint = $"wss://chrome.browserless.io?token={apikey}" }; var browser = await Puppeteer.ConnectAsync(options); var page = await browser.NewPageAsync(); The end. The second option was a bit more interesting: A Telegram WebPhotographer. A tele what? Requirements We need to implement an Azure Function which receives a URL and returns a screenshot of that URL. This Azure Function would help not only the Telegram photographer but also any other service we want to implement. We also want to implement a Telegram bot in .NET Core and deploy it using a Docker container. This bot would listen for screenshot requests, call our Azure Function and return that image to the client. Let's get started The Azure Function As we won't be able to execute Chrome inside an Azure Function, we will need to use a SaaS Chrome such as browserless.io. Once we know the URL of a Chrome instance, we can connect to this external Chrome process using the ConnectAsync function.
var options = new ConnectOptions()
{
    BrowserWSEndpoint = $"wss://chrome.browserless.io?token={apikey}"
};
var browser = await Puppeteer.ConnectAsync(options);

Our Azure Function will look something like this:

namespace ScreenshotFunction
{
    public static class TakeScreenshot
    {
        [FunctionName("TakeScreenshot")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequest req,
            TraceWriter log,
            Microsoft.Azure.WebJobs.ExecutionContext context)
        {
            var config = new ConfigurationBuilder()
                .SetBasePath(context.FunctionAppDirectory)
                .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables()
                .Build();

            string url = req.Query["url"];
            if (url == null)
            {
                return new BadRequestObjectResult("Please pass a url in the query string");
            }
            else
            {
                string apikey = config["browserlessApiKey"];
                var options = new ConnectOptions()
                {
                    BrowserWSEndpoint = $"wss://chrome.browserless.io?token={apikey}"
                };
                var browser = await Puppeteer.ConnectAsync(options);
                var page = await browser.NewPageAsync();
                await page.GoToAsync(url);
                var stream = await page.ScreenshotStreamAsync(new ScreenshotOptions { FullPage = true });
                byte[] bytesInStream = new byte[stream.Length];
                stream.Read(bytesInStream, 0, bytesInStream.Length);
                stream.Dispose();
                await page.CloseAsync();
                browser.Disconnect();
                return new FileContentResult(bytesInStream, "image/png");
            }
        }
    }
}

A piece of cake! Creating a bot on Telegram First, we need to create our Telegram bot. Let's chat with BotFather: The console app Implementing a Telegram bot is quite easy thanks to Telegram.Bot.
Let's connect with Telegram:

private static ManualResetEvent Wait = new ManualResetEvent(false);

static void Main(string[] args)
{
    string telegramApiKey = Environment.GetEnvironmentVariable("WEBPHOTOGRAPHER_APIKEY");
    var botClient = new TelegramBotClient(telegramApiKey);
    botClient.OnMessage += BotClient_OnMessage;
    botClient.StartReceiving(Array.Empty<UpdateType>());
    Console.WriteLine("Telegram Bot Started\npress any key to exit");
    Wait.WaitOne();
    botClient.StopReceiving();
}

If you are wondering about the ManualResetEvent class: it's just a trick to keep this process alive inside a Docker container. A simple Console.ReadLine won't work there. Then, in the Message event handler, we need to listen for URLs, call our Azure Function and send the image back to the client.

private static async void BotClient_OnMessage(object sender, Telegram.Bot.Args.MessageEventArgs e)
{
    var linkParser = new Regex(@"\b(?:https?://|www\.)\S+\b", RegexOptions.Compiled | RegexOptions.IgnoreCase);
    var bot = (TelegramBotClient)sender;
    if (!string.IsNullOrEmpty(e.Message.Text))
    {
        foreach (Match m in linkParser.Matches(e.Message.Text))
        {
            await bot.SendTextMessageAsync(e.Message.Chat.Id, "Prepping a screenshot for you my friend");
            var url = (m.Value.StartsWith("http") ? string.Empty : "https://") + m.Value;
            MemoryStream stream = null;
            try
            {
                var data = await new WebClient().DownloadDataTaskAsync(azureFunction + url);
                stream = new MemoryStream(data);
                await bot.SendPhotoAsync(
                    e.Message.Chat.Id,
                    new Telegram.Bot.Types.FileToSend("url", stream),
                    m.Value);
            }
            catch (Exception ex)
            {
                await bot.SendTextMessageAsync(e.Message.Chat.Id, "Unable to get a screenshot for you");
            }
            finally
            {
                stream?.Close();
            }
        }
    }
}

Deploy time! There is not much to say about deploying an Azure Function: right click, Publish, next, next, next and it will be up in Azure.
In order to deploy our console app in Docker we'll need these two files:

A Dockerfile file:

FROM microsoft/dotnet:2.0-sdk
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy and build everything else
COPY . ./
RUN dotnet publish -c Release -o out
ENTRYPOINT ["dotnet", "out/WebPhotographerBot.dll"]

And a docker-compose.yml file:

version: '2.1'
services:
  telegrambotfriend:
    image: telegramwebphotographer
    build: .
    environment:
      - WEBPHOTOGRAPHER_APIKEY=BOT_APIKEY
      - WEBPHOTOGRAPHER_AZUREFUNCTION=FUNCTIONENDPOINT

Here we would include the real Telegram API key and Azure Function endpoint. Finally, we run docker-compose up and we'll have our bot up and running. Let's check this out! Final Words First of all, this is a proof of concept. Don't use this code in production! Second, I know this solution was a little bit over-engineered. The idea was showing off how to connect to a remote Chrome instance and also playing a little bit with .NET Core and Docker. Wrapping up, you can find this code on Github as telegram-webphotographer, feel free to fork it and play with it. Don't stop coding! Originally posted on harkoded.com Posted on by: Darío Kondratiuk, Microsoft MVP, .NET developer with 15+ years working on web projects.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/hardkoded/using-chrome-in-azure-with-puppeteer-sharp-and-browserlessio-2odk
This document is complementary to the documentation on the external BuildSystem interface, and describes how the build system internally maps the client-facing model to the underlying Core build engine. Internally, the build system models all of its keys using the BuildKey type, which is a union over all possible keys, and which implements a simple binary encoding to and from the raw data used by the low-level build engine. Similarly, all of the values which can be produced are modelled via the BuildValue type, which is a similar union with binary coding support. Even though the input graph is bipartite, we model all keys and values in the same namespace to simplify encoding issues, and because the low-level build engine does not yet provide any facilities for enforcing the type safety of the keys and values with regard to changes in the key space (for example, should a build node's name ever change to become that of a build command, the engine has no facility to automatically invalidate any prior BuildValue associated with that key). Directory handling by the build system component is split across several coordinating tasks: The directory contents task transforms DirectoryContents key requests into a list of the filenames contained within that directory (if the path is present and is a directory), as well as the FileInfo for the directory itself, which is used as a proxy to validate its contents on incremental rebuilds. The directory tree signature task is responsible for computing the signature for an entire recursive directory tree structure. It does so incrementally and dynamically by recursively requesting the information for each subpath it discovers, as follows: The contents of the input path are requested. The build node for each path contained within the directory is requested, based on the information in #1.
An important side effect of this is that it establishes a strong dependency between the signature and any producers of the nodes contained in the directory, thus effectively dynamically discovering the commands which contribute content to the directory. .. note:: The signature task does *NOT* attempt to perform a global scan of the dependency graph to find nodes which **are not** currently present in the directory, but which **are** the output of some task. The client is currently responsible for ensuring that any commands which produce output in the directory should be strongly ordered before the signature node. However, clients can explicitly make additional dependencies on the directory tree by registering a phony command which produces the directory used by the tree signature, and which adds input dependencies on other nodes in the graph (for example, ones which may or may not produce content within the directory tree, and thus must be run before it). Recursively request directory signatures for any directories discovered as part of #2. The task accumulates the results from all of the dependency requests made as part of #2 and #3, and stores them in a deterministic order (based on the sorted names of the directory subpaths). Finally, the signature is computed as a hash of all of the accumulated values. Since the raw encoded value is deterministic, stable, and a complete representation of the node, we simply compute the aggregate hash for all of the encoded results.
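The "deterministic order plus aggregate hash" step described above can be sketched roughly as follows. This is a simplified stand-in, not llbuild's actual code: std::hash substitutes for the real hash function, and plain strings stand in for the encoded BuildValue results.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Combine the encoded result for each subpath, visited in sorted name
// order, into one stable signature. Because std::map iterates in sorted
// key order, the walk (and therefore the hash) is deterministic.
uint64_t computeTreeSignature(
    const std::map<std::string, std::string>& encodedResults) {
  const uint64_t prime = 1099511628211ULL;  // FNV-style mixing constant
  std::hash<std::string> hasher;
  uint64_t signature = 14695981039346656037ULL;
  for (const auto& entry : encodedResults) {
    // Mix in both the subpath name and its encoded value, so that either
    // a renamed entry or a changed value perturbs the signature.
    signature = (signature ^ hasher(entry.first)) * prime;
    signature = (signature ^ hasher(entry.second)) * prime;
  }
  return signature;
}
```

The key property, mirrored from the text, is that the signature depends only on the sorted (name, encoded-value) pairs, so it is stable across rebuilds when nothing underneath has changed.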
https://fuchsia.googlesource.com/third_party/swift-llbuild/+/refs/heads/upstream/master/docs/buildsystem-internals.md
Get Going with Go and Redis
SendGrid Team

Go is a promising language. It's a strong replacement for Java, it's similarly productive to writing Python, and it is an excellent tool for writing servers. I'm starting to dive into it in my spare time, and in this blog post I will show you the basics of getting started with Go and Redis together. This tutorial assumes you're running a flavor of Mac OS X and are comfortable with Terminal.

Setup

Install Redis

Run the following commands on most *nix command lines to download, unpack and install Redis:

    wget
    tar xvzf redis-stable.tar.gz
    cd redis-stable
    make
    sudo make install

Install Go

Install Go. As of writing, this was the latest version for Mac. (You can find the list of all available downloads on the Go download page.)

Install Mercurial

You must have Mercurial installed for the go get command to work. So let's install Mercurial, which is super easy if you manage packages with Homebrew.

    brew update
    brew doctor
    brew install mercurial

Setup GOPATH

Create your Go workspace. This is just the way Go works. Do the following:

    mkdir gocode

Open up .bashrc (or .zshrc if using zsh):

    vim .bashrc

On the last line, set the GOPATH:

    export GOPATH="$HOME/gocode"

Create your project workspace

Now, we can create our project workspace.

    cd gocode
    mkdir -p src/github.com/yourusername

Let's Code

Cool, now let's create the Redis Go project.

Create the hello-go-redis project

    cd src/github.com/yourusername
    mkdir hello-go-redis
    cd hello-go-redis
    vim hello-go-redis.go

Paste the following into hello-go-redis.go.

    package main

    import "fmt"
    import "github.com/garyburd/redigo/redis"

    func main() {
        //INIT OMIT
        c, err := redis.Dial("tcp", ":6379")
        if err != nil {
            panic(err)
        }
        defer c.Close()

        //set
        c.Do("SET", "message1", "Hello World")

        //get
        world, err := redis.String(c.Do("GET", "message1"))
        if err != nil {
            fmt.Println("key not found")
        }
        fmt.Println(world)
        //ENDINIT OMIT
    }

Get redis.
    go get github.com/garyburd/redigo/redis

Run it

    /usr/local/bin/redis-server
    go run hello-go-redis.go

You will get the message back "Hello World." Nice job, you just wrote your first Go program. Check out further tutorials on the Go site.
https://sendgrid.com/blog/get-going-go-redis/
What am I missing? I'm trying to read with NOLOCK using a TransactionScope like this:

var scopeOptions = new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted };

using (var scope = new TransactionScope(TransactionScopeOption.Required, scopeOptions))
{
    using (var db = new MyDbContext(ConnectionStringEntities))
    {
        // Simple read with a try catch block...
    }
    scope.Complete();
}

I expected to see WITH (NOLOCK) added to the SQL query (looking in SQL Profiler and also in a custom DbCommandInterceptor), but it's not there...

UPDATE: after some more research, I wonder if the selected isolation level is being used after all, just without the NOLOCK hint (which is SQL Server specific, and also specific to just one table). I found some code that gets the current transaction, and it seems to show the correct selected transaction isolation level (ReadUncommitted / Serializable etc.). I still want to test it, but let me know if you have any thoughts.

Get current .NET TransactionScope IsolationLevel:

Transaction trans = Transaction.Current;
System.Transactions.IsolationLevel level = trans.IsolationLevel;
LogService.Instance.Debug($"Transaction IsolationLevel = {level.ToString()}");

So it looks like Entity Framework does respect the IsolationLevel, only it does not use the NOLOCK hint (probably because it is too database specific). This, by the way, is my main complaint against EF: it is not very optimized for different database types. Another example is where the new Identity saves a GUID primary key for AspNetUsers as a string (again for lack of optimization). Other than that (and a few other things) EF is awesome!
I could not find a solution to my problem anywhere. I definitely didn't want to make all my queries use NOLOCK, just the uncommitted ones, so I ended up combining two solutions (with some changes):

NoLockInterceptor - for adding NOLOCK on the fly (Entity Framework with NOLOCK):

/// <summary>
/// Add "WITH (NOLOCK)" hint to SQL queries, SQL Server specific - may break queries on different databases.
/// (conditionally turn off with NoLockInterceptor.AddNoLockHintToSqlQueries = false to change on runtime)
/// </summary>
public class NoLockInterceptor : DbCommandInterceptor
{
    private static readonly Regex TableAliasRegex = new Regex(
        @"(?<tableAlias>AS \[Extent\d+\](?! WITH \(NOLOCK\)))",
        RegexOptions.Multiline | RegexOptions.IgnoreCase);

    /// <summary>
    /// Add "WITH (NOLOCK)" hint to SQL queries - unique to each thread
    /// (set to true only when needed and then back to false)
    /// </summary>
    [ThreadStatic]
    public static bool AddNoLockHintToSqlQueries;

    public NoLockInterceptor()
    {
        // Do not use by default for all queries
        AddNoLockHintToSqlQueries = false;
    }

    public override void ScalarExecuting(DbCommand command,
        DbCommandInterceptionContext<object> interceptionContext)
    {
        if (AddNoLockHintToSqlQueries)
        {
            command.CommandText = TableAliasRegex.Replace(command.CommandText,
                "${tableAlias} WITH (NOLOCK)");
        }
    }

    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        if (AddNoLockHintToSqlQueries)
        {
            command.CommandText = TableAliasRegex.Replace(command.CommandText,
                "${tableAlias} WITH (NOLOCK)");
        }
    }
}

TransactionWrapper - to invoke the NoLockInterceptor behaviour, and also useful for repeated use of transactions:

/// <summary>
/// Transaction wrapper for setting pre-defined transaction scopes
/// </summary>
public static class TransactionWrapper
{
    /// <summary>
    /// Set transaction scope and use NoLockInterceptor for adding the SQL Server
    /// specific "WITH (NOLOCK)" hint to ReadUncommitted isolation level
    /// transactions (not supported by Entity Framework)
    /// </summary>
    /// <param name="isolationLevel"></param>
    /// <param name="transactionScopeOption"></param>
    /// <param name="timeout"></param>
    /// <param name="action"></param>
    public static void SetScope(IsolationLevel isolationLevel,
        TransactionScopeOption transactionScopeOption, TimeSpan timeout, Action action)
    {
        var transactionOptions = new TransactionOptions
        {
            IsolationLevel = isolationLevel,
            Timeout = timeout
        };

        using (var transactionScope = new TransactionScope(transactionScopeOption, transactionOptions))
        {
            if (isolationLevel == IsolationLevel.ReadUncommitted)
                NoLockInterceptor.AddNoLockHintToSqlQueries = true;

            action();
            transactionScope.Complete();

            if (isolationLevel == IsolationLevel.ReadUncommitted)
                NoLockInterceptor.AddNoLockHintToSqlQueries = false;
        }
    }
}

Use it like this:

var timeout = TimeSpan.FromSeconds(ConfigVariables.Instance.Timeout_Transaction_Default_In_Seconds);

TransactionWrapper.SetScope(IsolationLevel.ReadUncommitted, TransactionScopeOption.Required, timeout, () =>
{
    using (var db = new MyDbContext(MyDbContextConnectionStringEntities))
    {
        // Do stuff...
    }
});

NOLOCK is now added just to queries within ReadUncommitted transaction isolation level scopes.

You can't get Entity Framework to render the NOLOCK hint. If you want to read uncommitted data, you have to do something different, like what you did by adding the TransactionScope with IsolationLevel.ReadUncommitted to the TransactionOptions. Writing your own command interceptor or your own EF provider would also work.
https://entityframeworkcore.com/knowledge-base/36386801/why-does-entity-framework-ignore-transactionscope--not-adding-with-nolock--
IOPL(2)                  Linux Programmer's Manual                  IOPL(2)

NAME
       iopl - change I/O privilege level

SYNOPSIS
       #include <sys/io.h>

       int iopl(int level);

RETURN VALUE
       On success, zero is returned. On error, -1 is returned, and errno is
       set appropriately.

ERRORS
       EINVAL level is greater than 3.

       ENOSYS This call is unimplemented.

       EPERM  The calling thread has insufficient privilege.

COLOPHON
       This page is part of release 5.08 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                            2020-08-13                         IOPL(2)

Pages that refer to this page: afs_syscall(2), break(2), fattach(2), fdetach(2), getmsg(2), getpmsg(2), gtty(2), inb(2), inb_p(2), inl(2), inl_p(2), insb(2), insl(2), insw(2), inw(2), inw_p(2), ioperm(2), systemd.exec(5), capabilities(7)
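As a hedged illustration (my own example, not part of the man page; it assumes an x86 Linux system whose glibc exports iopl()), the EINVAL case above can be observed without any special privilege, because the level check is applied before the permission check:

```python
import ctypes
import errno

# Load the C library with errno tracking enabled for foreign calls.
libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Levels above 3 are rejected with EINVAL before any privilege check,
# so this fails the same way for root and non-root callers alike.
ret = libc.iopl(4)
err = ctypes.get_errno()
print(ret, errno.errorcode.get(err))
```

Calling iopl(3) instead would exercise the EPERM path for an unprivileged caller.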
https://man7.org/linux/man-pages/man2/iopl.2.html
memory

This page contains tutorials about the memory package.

Calling a virtual function

This is a simple example for calling a virtual function in CS:S.

    import core

    from memory import DataType
    from memory import Convention
    from mathlib import Vector
    from entities.helpers import pointer_from_index
    from entities.helpers import index_from_pointer

    # CBaseEntity* CCSPlayer::GiveNamedItem(char const*, int)
    # Index on Windows is 400. On Linux it is 401.
    GIVE_NAMED_ITEM_INDEX = 400 if core.PLATFORM == 'windows' else 401

    def give_named_item(index, item, slot=0):
        # Retrieve the player's pointer.
        ptr = pointer_from_index(index)

        # Now, we can get and wrap the virtual function. To do this, we need
        # to pass the index of the virtual function in the virtual function
        # table of the CCSPlayer class. This index might change after a game
        # update.
        # We also need to specify the calling convention. Since it's a normal
        # member function, the calling convention is THISCALL.
        # For the third argument a tuple is required that defines the types of
        # the arguments of the function. In this case we need to pass a
        # pointer (the this-pointer), a string that specifies the item that
        # should be given and an integer that defines the slot/sub type of
        # the item.
        # For the fourth and last argument we need to define the return type
        # of the virtual function. In this case the function returns a
        # pointer to the created entity.
        GiveNamedItem = ptr.make_virtual_function(
            GIVE_NAMED_ITEM_INDEX,
            Convention.THISCALL,
            (DataType.POINTER, DataType.STRING, DataType.INT),
            DataType.POINTER
        )

        # Finally, we can call the virtual function! Since many functions in
        # Source.Python's API require an index instead of a pointer, we will
        # convert the returned pointer to an index.
        return index_from_pointer(GiveNamedItem(ptr, item, slot))

    # Usage example:
    # entity_index = give_named_item(my_player_index, 'weapon_awp')

Calling a virtual function with non-atomic arguments

This example will cover calling a virtual function which requires two non-atomic arguments. This example was made for CS:S.

    import core

    from memory import DataType
    from memory import Convention
    from mathlib import Vector
    from entities.helpers import pointer_from_index

    # void CBaseAnimating::GetVelocity(Vector*, Vector*)
    # Index on Windows is 140. On Linux it is 141.
    GET_VELOCITY_INDEX = 140 if core.PLATFORM == 'windows' else 141

    def get_velocity(index):
        # Retrieve the entity's pointer.
        ptr = pointer_from_index(index)

        # Now, get and wrap the virtual function. Again, we need to pass the
        # virtual function index and the calling convention.
        # The tuple that defines the types of the arguments will now contain
        # DataType.POINTER three times. The first pointer stands for the
        # this-pointer. The other two stand for the Vector* arguments.
        # This time the function doesn't return anything. Instead the return
        # value is "returned" by modifying the two Vector arguments. This is
        # a common practice in C/C++ to return multiple return values.
        GetVelocity = ptr.make_virtual_function(
            GET_VELOCITY_INDEX,
            Convention.THISCALL,
            (DataType.POINTER, DataType.POINTER, DataType.POINTER),
            DataType.VOID
        )

        # Since Source.Python exposes the Vector class, we don't even need to
        # reconstruct the memory structure of both vectors. So, we just need
        # to create two new Vector objects.
        velocity = Vector()
        angle = Vector()

        # Finally, call the function! But what's happening here?
        # Source.Python converts both Vector objects to a Pointer object at
        # first and then accesses the memory addresses of the Vector objects
        # which were allocated internally. Pretty cool, eh? This works with
        # every object whose class was exposed by Source.Python and every
        # object that is an instance of CustomType.
        GetVelocity(ptr, velocity, angle)

        # After the function modified the vectors just return them.
        return (velocity, angle)
http://wiki.sourcepython.com/developing/module_tutorials/memory.html
#include "global.h"
#include <stdlib.h>
#include <string.h>
#include "living.h"
#include "object.h"
#include "sounds.h"
#include "spells.h"
#include "sproto.h"

All these functions handle gods: give presents, punish, and so on.

Oct 3, 1995 - Code laid down for initial gods, priest alignment, and monster race initialization. b.t. Sept 1996 - moved code over to object-oriented gods -b.t.

Definition in file gods.c.

This function is called whenever a player has switched to a new god. It handles basically all the stat changes that happen to the player, including the removal of god-given items (from the former cult). Handles race restrictions on god, and will punish player if needed.

Definition at line 413 of file gods.c.

References add_string(), ARMOUR, ATNR_COLD, ATNR_ELECTRICITY, ATNR_FIRE, ATNR_POISON, BOOK, BOOTS, cast_magic_storm(), change_abil(), CLEAR_FLAG, create_archetype(), determine_god(), draw_ext_info_format(), find_god(), FLAG_APPLIED, FLAG_BLIND, FLAG_MAKE_INVIS, FLAG_REFL_MISSILE, FLAG_REFL_SPELL, FLAG_SEE_IN_DARK, FLAG_STARTEQUIP, FLAG_STEALTH, FLAG_UNDEAD, FLAG_USE_ARMOUR, FLAG_USE_SHIELD, FLAG_USE_WEAPON, FLAG_XRAYS, follower_remove_given_items(), liv::food, FOR_INV_FINISH, FOR_INV_PREPARE, FORCE, free_string(), get_archetype_by_type_subtype(), give_skill_by_name(), GLOVES, god_gives_present(), liv::grace, HELMET, liv::hp, treasureliststruct::items, obj::last_eat, obj::last_grace, obj::last_heal, obj::last_sp, obj::level, link_player_skills(), LOOSE_MANA, liv::luck, MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, obj::name, NDI_NAVY, NDI_UNIQUE, treasurestruct::next, NROFATTACKS, object_find_by_type_subtype(), object_free_drop_inventory(), object_present_in_ob_by_name(), object_remove(), give::op, obj::path_attuned, obj::path_denied, obj::path_repelled, player_unready_range_ob(), PREFER_LOW, QUERY_FLAG, random_roll(), obj::randomitems, remove_special_prayers(), obj::resist, SET_FLAG, SHIELD, SK_PRAYING, SKILL,
obj::slaying, liv::sp, SPELL, SPELLBOOK, obj::stats, stop_using_item(), obj::title, Ice::tmp, update_priest_flag(), nlohmann::detail::void(), WEAPON, and worship_forbids_use(). Referenced by command_setgod(), and pray_at_altar(). Determines if op worships a god. Returns the godname if they do or "none" if they have no god. In the case of an NPC, if they have no god, we try and guess who they should worship based on their race. If that fails we give them a random one. Definition at line 55 of file gods.c. References add_string(), find_god(), FLAG_ALIVE, get_god_for_race(), get_rand_god(), obj::name, object_find_by_type_subtype(), give::op, PLAYER, QUERY_FLAG, SK_PRAYING, SKILL, SPELL, SPELL_EFFECT, and Ice::tmp. Referenced by become_follower(), cast_bless(), cast_consecrate(), cast_curse(), cast_detection(), cast_smite_spell(), cast_spell(), cfapi_object_get_property(), hit_player(), hit_with_one_attacktype(), kill_player_not_permadeath(), mood_change(), perceive_self(), pets_summon_golem(), pets_summon_object(), pray_at_altar(), prayer_failure(), ring_bell(), show_skills(), and tailor_god_spell(). Determines the archetype for holy servant and god avatar. Possible monsters are stored as invisible books in god's inventory, one having the right name is selected randomly. Definition at line 675 of file gods.c. References BOOK, archt::clone, disinfect::count, treasurestruct::item, say::item, treasureliststruct::items, llevError, LOG(), treasurestruct::next, obj::randomitems, rndm(), and is_valid_types_gen::type. Referenced by CREMainWindow::onReportSummon(), and pets_summon_golem(). Checks for any occurrence of the given 'item' in the inventory of 'op' (recursively). Definition at line 169 of file gods.c. References FOR_INV_FINISH, FOR_INV_PREPARE, give::op, same_string(), and Ice::tmp. Referenced by god_gives_present(). Converts a level and difficulty to a magic/enchantment value for eg weapons. Definition at line 756 of file gods.c. References llevError, and LOG(). 
Referenced by improve_weapon_magic(). Removes from a player's inventory all items bestowed by a particular god. Intended mainly for use in punishing characters for switching gods. Definition at line 133 of file gods.c. References draw_ext_info_format(), FOR_INV_FINISH, FOR_INV_PREPARE, HUGE_BUF, MSG_TYPE_ITEM, MSG_TYPE_ITEM_REMOVE, give::name, obj::name, NDI_UNIQUE, object_free_drop_inventory(), object_get_value(), object_remove(), give::op, query_short_name(), and Ice::tmp. Referenced by become_follower(). God wants to enchant weapon. Affected weapon is the applied one (weapon or bow). It's checked to make sure it isn't a weapon for another god. If all is all right, update weapon with attacktype, slaying and such. Definition at line 820 of file gods.c. References add_string(), AT_PHYSICAL, obj::attacktype, BOW, buf, draw_ext_info(), draw_ext_info_format(), esrv_update_item(), liv::exp, find_skill_by_number(), FMT64, god_examines_item(), if(), improve_weapon_magic(), obj::item_power, llevError, LOG(), MAX_BUF, MAX_WEAPON_ITEM_POWER, MSG_TYPE_ITEM, MSG_TYPE_ITEM_CHANGE, MSG_TYPE_ITEM_INFO, obj::name, NDI_UNIQUE, object_find_by_type_applied(), object_get_value(), object_set_value(), give::op, Settings::personalized_blessings, PLAYER, settings, SK_PRAYING, obj::slaying, obj::stats, obj::title, TRUE, UPD_NAME, and WEAPON. Referenced by god_intervention(). God checks item the player is using. If you are using the item of an enemy god, it can be bad...-b.t. Definition at line 1181 of file gods.c. References buf, draw_ext_info_format(), MAX_BUF, MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, give::name, obj::name, NDI_NAVY, NDI_UNIQUE, query_name(), and obj::title. Referenced by god_enchants_weapon(), and god_examines_priest(). Checks and maybe punishes someone praying. All applied items are examined, if player is using more items of other gods, s/he loses experience in praying or general experience if no praying. Definition at line 1134 of file gods.c. 
References cast_magic_storm(), change_exp(), create_archetype(), draw_ext_info_format(), liv::exp, FLAG_APPLIED, FOR_INV_FINISH, FOR_INV_PREPARE, god_examines_item(), LOOSE_MANA, MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, obj::name, NDI_NAVY, NDI_UNIQUE, object_find_by_type_subtype(), give::op, PREFER_LOW, QUERY_FLAG, random_roll(), SK_PRAYING, SK_SUBTRACT_SKILL_EXP, SKILL, obj::skill, obj::stats, and Ice::tmp.

Referenced by god_intervention().

God gives an item to the player. Inform player of the present. Mark what god gave it, so it can be taken vengefully later!

Definition at line 195 of file gods.c.

References arch_to_object(), archt::clone, draw_ext_info_format(), fix_generated_item(), follower_has_similar_item(), GT_ONLY_GOOD, HUGE_BUF, treasurestruct::item, MSG_TYPE_ITEM, MSG_TYPE_ITEM_ADD, give::name, obj::name, NDI_UNIQUE, object_free(), object_insert_in_ob(), object_set_value(), give::op, query_short_name(), ROD, Ice::tmp, TRUE, and WAND.

Referenced by become_follower(), and god_intervention().

Every once in a while the god will intervene to help the worshiper. Later, this function can be used to supply quests, etc. for the priest. -b.t. Called from pray_at_altar() currently.

Definition at line 926 of file gods.c.
References altar_valkyrie::altar, apply_anim_suffix(), BOOK, cast_change_ability(), cast_heal(), treasurestruct::chance, check_spell_known(), archt::clone, create_archetype(), create_archetype_by_object_name(), create_treasure(), do_learn_spell(), draw_ext_info(), draw_ext_info_format(), find_treasurelist(), god_enchants_weapon(), god_examines_priest(), god_gives_present(), god_removes_curse(), GT_ONLY_GOOD, GT_STARTEQUIP, GT_UPDATE_INV, HOLY_POSSESSION, treasurestruct::item, say::item, treasureliststruct::items, obj::level, llevError, LOG(), say::max, MSG_TYPE_ITEM, MSG_TYPE_ITEM_ADD, MSG_TYPE_SKILL, MSG_TYPE_SKILL_PRAY, treasurestruct::name, obj::name, NDI_UNIQUE, NDI_WHITE, treasurestruct::next, object_free_drop_inventory(), give::op, PREFER_HIGH, random_roll(), obj::randomitems, remove_depletion(), SPELL, and Ice::tmp. Referenced by pray_at_altar(). God helps player by removing curse and/or damnation. Definition at line 723 of file gods.c. References CLEAR_FLAG, draw_ext_info(), esrv_update_item(), FLAG_CURSED, FLAG_DAMNED, FLAG_KNOWN_CURSED, FOR_INV_FINISH, FOR_INV_PREPARE, MSG_TYPE_SKILL, MSG_TYPE_SKILL_PRAY, NDI_UNIQUE, give::op, PLAYER, QUERY_FLAG, Ice::tmp, and UPD_FLAGS. Referenced by god_intervention(). Utility function for improving the magic on a weapon. Affected weapon is the applied one (weapon or bow). This utility function improves the weapon magic on a weapon being enchanted by a god. This was necessary because the same block of the code was being called from two places in the god_enchants_weapon(...) function. Definition at line 787 of file gods.c. References draw_ext_info(), esrv_update_item(), follower_level_to_enchantments(), obj::item_power, obj::level, obj::magic, MSG_TYPE_ITEM, MSG_TYPE_ITEM_CHANGE, NDI_UNIQUE, give::op, PLAYER, Ice::tmp, and UPD_NAME. Referenced by god_enchants_weapon(). Player prays at altar. Checks for god changing, divine intervention, and so on. Definition at line 258 of file gods.c. 
References absdir(), altar_valkyrie::altar, become_follower(), cast_magic_storm(), create_archetype(), determine_god(), draw_ext_info(), draw_ext_info_format(), EVENT_APPLY, events_execute_object_event(), find_god(), god_intervention(), obj::level, LOOSE_MANA, MAX, move_player(), MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, obj::name, archt::name, NDI_NAVY, NDI_UNIQUE, obj::other_arch, PREFER_LOW, random_roll(), SCRIPT_FIX_ALL, Ice::tmp, and try_leave_cult(). Removes special prayers given by a god. After this function, op shouldn't know any prayer granted by god. Prayers will be given when player prays on god's altar, so not handled now. Definition at line 350 of file gods.c. References archt::clone, draw_ext_info_format(), FLAG_STARTEQUIP, FOR_INV_FINISH, FOR_INV_PREPARE, treasurestruct::item, treasureliststruct::items, llevError, LOG(), MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, obj::name, NDI_NAVY, NDI_UNIQUE, treasurestruct::next, object_free_drop_inventory(), object_remove(), give::op, player_unready_range_ob(), QUERY_FLAG, obj::randomitems, SPELL, Ice::tmp, and obj::type. Referenced by become_follower(). Compares 2 strings. Definition at line 111 of file gods.c. Referenced by follower_has_similar_item(). Unapplies up to number worth of items of type type, ignoring curse status. This is used when the player gets forbidden to use eg weapons. Definition at line 620 of file gods.c. References AP_IGNORE_CURSE, AP_UNAPPLY, apply_special(), FLAG_APPLIED, FOR_INV_FINISH, FOR_INV_PREPARE, give::op, QUERY_FLAG, Ice::tmp, and is_valid_types_gen::type. Referenced by become_follower(). Changes the attributes of cone, smite, and ball spells as needed by the code. Definition at line 1222 of file gods.c. 
References add_string(), AT_GODPOWER, AT_HOLYWORD, obj::attacktype, buf, determine_god(), draw_ext_info(), find_god(), FREE_AND_COPY, free_string(), llevError, LOG(), MAX_BUF, MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, obj::name, obj::name_pl, NDI_UNIQUE, object_free_drop_inventory(), object_get_owner(), obj::race, obj::slaying, SPELL, SPELL_EFFECT, obj::title, and obj::type.

Referenced by cast_cone(), cast_smite_spell(), explode_bullet(), fire_arch_from_position(), and fire_swarm().

Try to leave a cult. Deducts experience from 'skill' proportional to 'angry'. Returns true if successful (only when 'angry' is 1) or false otherwise.

Definition at line 232 of file gods.c.

References change_exp(), liv::exp, obj::level, PREFER_LOW, random_roll(), random_roll64(), SK_SUBTRACT_SKILL_EXP, obj::skill, and obj::stats.

Referenced by pray_at_altar().

If the god does/doesn't have this flag, we give/remove it from the experience object if it doesn't/does already exist.

Definition at line 643 of file gods.c.

References CLEAR_FLAG, QUERY_FLAG, and SET_FLAG.

Referenced by become_follower(), and worship_forbids_use().

Forbids or lets the player use some item type.

Definition at line 590 of file gods.c.

References draw_ext_info_format(), MSG_TYPE_ATTRIBUTE, MSG_TYPE_ATTRIBUTE_GOD, NDI_UNIQUE, give::op, QUERY_FLAG, and update_priest_flag().

Referenced by become_follower().
https://crossfire.real-time.com/code/server/trunk/html/gods_8c.html
How to collect tweets from the Twitter Streaming API using Python

Twitter provides a comprehensive streaming API that developers can use to download data about tweets in real-time, if they can figure out how to use it effectively. In this tutorial, we're going to retrace the steps I took to set up a server to collect tweets about hate speech as they occur, to create a dataset we can use to learn more about patterns in hate speech.

The first step is to apply for a Twitter Developer account. You will have to link to your Twitter account and write a brief summary of what you plan to do with the data you extract using the Twitter API. You should receive approval quickly, but unfortunately you can't proceed with setup until your application is approved.

After you receive approval, you need to register your app. Within the Twitter Developer website, go to Apps and click "Create an app". The page will ask you for more information about what your app will do and where it will be hosted.

Now that we have all of the administrative stuff taken care of, we can get to the code. For this example, I'm using the Tweepy library to handle some of the streaming logistics.

Step 1: In Python, import Tweepy and set up your authentication and stream listener with API keys. Here we're using the authentication information Twitter provided when we registered our application.

Step 2: Create StreamListener class. In the same file, we're going to create a new class called StreamListener that inherits from tweepy's StreamListener class and overrides the on_status and on_error methods so that we can customize what we want to do with each tweet we encounter. For now, let's just print the id of each tweet. We'll add in more sophisticated functionality later.

Step 3: Initialize stream with filters. Finally, we can start the stream by specifying the tags we want to use to filter tweets.

Step 4: Process and store data.
We want to store the tweets we encounter in the stream so that we can refer back to them and perform offline analysis. Let's log tweet data in a csv for now. (We'll upload this file to a GitHub repository in Step 6 so that collaborators can access it!)

Step 5: Handling retweets, extended tweets, and quote tweets. When you run this code, you might notice the same text appearing over and over. In most cases this isn't because of bots, but rather because one tweet is being retweeted over and over, and the Twitter Streaming API doesn't immediately distinguish between original tweets and retweets. To fix this, we have to look for an attribute called "retweeted_status" in the status object. Let's add a flag to indicate if the current tweet is a retweet or not.

To further complicate things, the API automatically truncates the text attribute of tweets longer than 140 characters. If you want to access the full text, which you probably will need if you're doing any kind of natural language processing analysis on the data, you need to access the "extended_tweet" attribute of the tweet object. Let's check for this attribute to make sure we're getting the full text before recording each tweet.

Twitter has one more special case: the quote tweet. This occurs when a user retweets a post and adds a comment alongside the retweet. As before, this case is signified by a "quoted_status" attribute. Let's flag quote tweets and store the text of the tweet that was quoted. We will also remove any characters that might cause problems with the csv encoding, like newlines and commas.

Step 6: Share with collaborators. There are a lot of ways you could share your Twitter datasets with collaborators, like Google Drive or Dropbox, but I'm going to use GitHub because it has a really good version control system and it's easy to add collaborators. If you haven't used GitHub before, I would suggest looking at a tutorial to familiarize yourself with the workflow.
At a basic level, GitHub allows you to make changes to project files on your computer and push those changes to a "master" version of the project that is hosted on a remote server. GitHub keeps track of all the changes you make, so it's easy to revert back to previous versions if you make a mistake or accidentally overwrite a file. Also, GitHub allows multiple collaborators to edit the same file, with lots of precautions to prevent anyone from accidentally deleting or changing important data or code.

I have a GitHub repository called hate-speech and I want to push a file called "hatespeech-tweets.csv" where I recorded tweets with the keyword "hate speech." I can use the following commands to add the file to my repository, commit the changes I made, and finally push it to the remote version of my repository, where it can be viewed and downloaded by collaborators.

We now have a functional program that listens for and records Twitter activity on any topic you're interested in. There's always room for improvement, like using a database to store the records we're interested in, examining different user and tweet data attributes, or conducting some rudimentary sentiment analysis on tweets as they come in, but this is a good starting point. You're well on your way to incorporating live Twitter data in your application. Good luck!

import sys
import tweepy

# authorization tokens
consumer_key = "[insert your key here]"
consumer_secret = "[insert your secret here]"
access_key = "[insert your key here]"
access_secret = "[insert your secret here]"

# StreamListener class inherits from tweepy.StreamListener and overrides on_status/on_error methods.
class StreamListener(tweepy.StreamListener):

    def on_status(self, status):
        print(status.id_str)

        # if "retweeted_status" attribute exists, flag this tweet as a retweet.
        is_retweet = hasattr(status, "retweeted_status")

        # check if text has been truncated
        if hasattr(status, "extended_tweet"):
            text = status.extended_tweet["full_text"]
        else:
            text = status.text

        # check if this is a quote tweet.
        is_quote = hasattr(status, "quoted_status")
        quoted_text = ""
        if is_quote:
            # check if quoted tweet's text has been truncated before recording it
            if hasattr(status.quoted_status, "extended_tweet"):
                quoted_text = status.quoted_status.extended_tweet["full_text"]
            else:
                quoted_text = status.quoted_status.text

        # remove characters that might cause problems with csv encoding
        # (str.replace returns a new string, so the result must be assigned back)
        remove_characters = [",", "\n"]
        for c in remove_characters:
            text = text.replace(c, " ")
            quoted_text = quoted_text.replace(c, " ")

        with open("out.csv", "a", encoding='utf-8') as f:
            f.write("%s,%s,%s,%s,%s,%s\n" % (status.created_at, status.user.screen_name,
                                             is_retweet, is_quote, text, quoted_text))

    def on_error(self, status_code):
        print("Encountered streaming error (", status_code, ")")
        sys.exit()

if __name__ == "__main__":
    # complete authorization and initialize API endpoint
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    # initialize stream
    streamListener = StreamListener()
    stream = tweepy.Stream(auth=api.auth, listener=streamListener, tweet_mode='extended')

    with open("out.csv", "w", encoding='utf-8') as f:
        f.write("date,user,is_retweet,is_quote,text,quoted_text\n")

    tags = ["hate speech"]
    stream.filter(track=tags)

Comments - November 10, 2019

Ok, it was really interesting information, I also use data delivery from API – which is my personal project.

Dear Laura, I hope you are well. I was using your code to extract some information from Twitter. The code runs for many hours and then I get an error similar to:

ProtocolError: ('Connection broken: IncompleteRead(56 bytes read)', IncompleteRead(56 bytes read))

Is there a way to avoid that error please?
I replaced this piece of your code with this:

def on_error(self, status_code):
    # print("Encountered streaming error (", status_code, ")")
    # sys.exit()
    if status_code == 420:
        # returning False in on_data disconnects the stream
        return False

but it did not help. How can I avoid hitting the error when the code runs for a long time? Is there a way to pause the code for a few minutes to avoid hitting the rate limit? Thank you so much
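A common pattern for surviving this kind of disconnect (a sketch, not part of the original tutorial; the helper name, back-off scheme, and default values here are invented for illustration) is to wrap whatever call starts the stream in a reconnect loop, so a transient network error triggers a retry instead of a crash:

```python
import time

def run_with_retries(start_stream, max_retries=5, backoff_seconds=1,
                     retry_on=(Exception,)):
    # Restart the streaming call after a transient error, waiting a little
    # longer before each attempt; re-raise once retries are exhausted.
    attempts = 0
    while True:
        try:
            return start_stream()
        except retry_on:
            attempts += 1
            if attempts > max_retries:
                raise
            time.sleep(backoff_seconds * attempts)

# With the article's stream object, usage would look roughly like:
#   from urllib3.exceptions import ProtocolError
#   run_with_retries(lambda: stream.filter(track=tags),
#                    retry_on=(ProtocolError, ConnectionError))
```

This does not remove the rate-limit problem itself (tracking fewer terms, or honoring the 420 status as in the comment above, addresses that), but it keeps a long-running collector alive across dropped connections.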
https://www.storybench.org/how-to-collect-tweets-from-the-twitter-streaming-api-using-python/
CC-MAIN-2022-27
en
refinedweb
In the process of converting a lot of spaghetti code to organized script modules and multiple functions. One drawback I've noticed to this is that sometimes the error messages are truncated and not as helpful or specific as when the script was in the button's onAction script. I will just get errors that show a key I tried to access in a dictionary that doesn't exist, or a type error, but no line number or even what function this occurred in. I am trying to mediate that. What I am looking for is some function I can call at the beginning of other functions so that it will print out the current function and, if possible, what function called it. I tried doing it using the inspect module per this, doing the following in a function called from another function:

import inspect
print "running " + str(inspect.stack()[1][3]) + " called by " + str(inspect.stack()[2][3])

which just throws me an error like so. I assume this has to do with the odd runtime environment Ignition is (Java calling Python scripts, etc.), so I'm not sure if there is a Python stack to really access like there would be in a traditional all-Python application. Is there another way to do this? I would really like a function I could call as the first line of other functions that will just print "Running function (function name), called by (function name)"
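One way to get that, sketched in plain CPython (Python 3 syntax; the helper name is invented here, and whether frame introspection behaves identically under Ignition's Jython runtime is untested — in Jython 2.x the tuple-indexing form `stack[i][3]` would replace the `.function` attribute):

```python
import inspect

def announce_caller():
    # stack[0] is this frame; stack[1] is the function that called
    # announce_caller(); stack[2] is that function's own caller
    # (which is "<module>" when the call came from the top level).
    stack = inspect.stack()
    running = stack[1].function if len(stack) > 1 else "<top-level>"
    caller = stack[2].function if len(stack) > 2 else "<top-level>"
    return "running %s called by %s" % (running, caller)

def inner():
    return announce_caller()

def outer():
    return inner()
```

Calling `outer()` returns "running inner called by outer", which is the shape of message asked for above; in practice you would `print` the result as the first line of each function you want traced.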
https://forum.inductiveautomation.com/t/printing-the-name-of-the-function-and-who-called-it/37910
I am writing a Jest/RTL test for a React component that invokes another component. I want to mock that second component so I can have it do things like invoking callbacks that the first component passes to it. But there is nothing specific to React about this requirement: it comes up for non-React modules, too. I have an approach that I have shown to work using trivial modules and I want to document it here for myself and anyone else who finds it useful.

The module being tested is src/experiment/MyModule.js, and it reads as follows in its entirety:

import nameToGreet from './subdir/SubModule';

export default function greet() {
  return `Hello, ${nameToGreet()}`;
}

The second module that it includes, and which I want to mock, is of course src/experiment/subdir/SubModule.js, and it reads:

export default function nameToGreet() {
  return 'Mike';
}

Clearly calling greet() returns "Hello, Mike" — I have verified this. But from within my test-script I can mock the second module to change the greeting as follows (in src/experiment/MyModule.test.js):

import greet from './MyModule';
import nameToGreet from './subdir/SubModule';

jest.mock('./subdir/SubModule');
nameToGreet.mockImplementation(() => 'Fiona');

test('Greets Fiona instead of Mike', () => {
  expect(greet()).toBe('Hello, Fiona');
});

Obviously the second stanza (import / jest.mock / mockImplementation) is the clever part here. First you import the particular part of the included module that your tested module is going to use — in this case the default export, but it works fine with named exports, too. Then you use jest.mock to replace the module's exports with mockable stubs. Finally you use mockImplementation to specify how that function should behave in the context of tests.

And, yes, as you would expect: this works when the tested thing and/or the included module's export is a React component rather than a regular function. I just showed the regular-function here because it's easier to understand what's going on.
jest.mock('./subdir/SubModulenameToGreet.mockImplementation(() => 'Fiona'); Is this a typo? I don't think so?
https://reprog.wordpress.com/2021/10/21/testing-with-jest-how-to-mock-an-import-used-by-the-module-youre-testing/
Biking data from XML to analysis, revised

Am I getting slower every day? If you've ever been a bike commuter, you've probably asked yourself this question. Thanks to these little devices we can now attach to ourselves or our bicycles, we can use our own actual ride data to investigate these kinds of questions, as well as questions like these:
- If I'm going to work from home one day a week, which day would maximize my recovery?
- Do I tend to ride faster in the morning or the evening?

Last year, I wrote a few posts about learning how to parse a set of Garmin XML data from 2013 and analyze it using pandas, matplotlib, and seaborn. This year I redid the same analyses, with a new installment of data from 2014. This new and improved version is both prettier and more interesting, thanks to having more data, and more experience with pandas and seaborn (not to mention that pandas and IPython notebook have both been updated a few times). In this series of posts, I'll show what I did to improve on my initial attempts. In later posts, I'll present some new analyses of higher-density time series data, which include detailed 3D location information in the form of latitude, longitude, and altitude.

First, get a rough idea of what is going on

One of the first questions we wanted to ask was about relative speeds in the city vs. suburbs, and morning vs. evening. Last year I made this plot, roughly separating city vs. suburbs based on the length of the trips. This year I used time of day to get better separation.

sns.set(style="ticks")
sns.jointplot("hour", "average_mph", data=filtered)

And that gave me a clear idea of what might work, but this kind of plot is not very easy to look at if you're not a data person.

Clarify relevant questions

So to make it a little clearer, and be able to ask better questions, I wrote a simple helper function to add a flag for the four main categories I wanted: morning city, morning suburbs, evening city, evening suburbs.
On further revision, I added a couple of extra flags for two kinds of 'outliers': one I called 'evening_later' for when Caltrain was delayed, plus 'other' for those random data points that were probably due to the Garmin device being on or off when it shouldn't be, aka user error. Then I used a list comprehension to apply that to the 'hour' column in my dataframe, which I had previously generated using the awesome built-in datetime handling that pandas makes so easy. Then I made a better plot with seaborn.

def leg_definer(hour):
    """ (int) -> (str)
    Helper function to identify trip legs by hour, i.e.
    6 AM: first leg (to Caltrain) - morning_city
    7 AM: second leg (to work) - morning_suburb
    4 PM (16): 3rd leg (return to Caltrain) - evening_suburb
    5 PM (17): 4th leg (return home) - evening_city
    later: evening_later (Caltrain delays, etc.)
    other hour: other

    >>> leg_definer(6)
    'morning_city'
    """
    legref = {6: "morning_city", 7: "morning_suburb", 16: "evening_suburb",
              17: "evening_city", 9: "other", 11: "other", 12: "other",
              15: "other", 18: "evening_later"}
    return legref[hour]

filtered["leg_flag"] = [leg_definer(hour) for hour in filtered["hour"]]

g = sns.lmplot("day", "average_mph", filtered, hue="leg_flag", fit_reg=False)
g.set(xticks=[0, 5, 10, 15, 20, 25, 30], ylim=(5, 24))

And I think that's pretty cute, but it doesn't really show any meaningful patterns, because we're looking at all the months on top of each other (day here is coming from the date, aka 'day of the month').

Focus on how to see what you're looking for

I realized grouping by weekday was more likely to be interesting. And then I got caught by one of those random little pandas gotchas: weekday is a method, not a datetime attribute. So it's item.weekday(), not item.weekday (which makes it different from month, day, or hour). I also had to use a plotting trick of adding horizontal 'jitter' to make it easier to see all the dots on the scatterplot that would otherwise be overlapped.
filtered["weekday"] = [item.weekday() for item in filtered["zoned"]]

sns.set_context("talk")
g = sns.lmplot("weekday", "average_mph", filtered, hue="leg_flag",
               x_jitter=0.15, fit_reg=False)
g.set(xlim=(-1, 6), ylim=(5, 24))

This plot is starting to make more sense. Now, we can start making some observations and generating new hypotheses (which may or may not match up with our previously qualitative impressions).
- Suburbs have higher average speeds (gold and red dots) than city (hypothesis: traffic)
- Morning city is much faster than evening city (hypothesis: traffic)
- Morning suburb is a bit faster than evening suburb (hypotheses: tired? slope? wind?)
- Fewer rides on Thursdays (weekday 3) make sense, since this cyclist usually works from home on Thursdays (but not always). Interestingly, Thursday rides seem to cluster toward the faster end (hypothesis: taking Wednesday off to recover makes Thursday's ride easier)

Sometimes a different type of plot works better to get your point across

Now, it seemed like the relevant finding was that Mornings were generally faster than Evenings, and I decided to focus on the Suburbs rides since they were less vulnerable to the confounding (!) effects of traffic. I chose a violin plot to show the distributions more clearly. I hadn't done this quite this way before, but it's actually very easy to create a plot object, and add a couple of subplots. Then I tweaked it a little more: I adjusted the range of the axes, as well as getting rid of the tick-marks along the edge, to make it less cluttered. (Also, I didn't know how to add a title to a plot, so I had to look that up!)

f, ax = plt.subplots()
g = sns.violinplot(evening_suburb["average_mph"], evening_suburb["weekday"], color="Blues")
g = sns.violinplot(morning_suburb["average_mph"], morning_suburb["weekday"], color="Yellow")
g.set(ylim=(10, 24))
title("Morning(yellow) vs. Evening(blue) Suburb")
sns.despine()

At the end of the day, I think this plot makes it pretty clear that Morning rides have higher average speeds than Evening rides. Which made me wonder: if we plot the route on a map, can we find out if the road is actually slightly inclined, such that it's actually downhill in the morning, and uphill at night? The incline must be minimal, since this cyclist perceives it as being 'flat'. Sure enough, a rough estimate of the route using this awesome tool at cycleroute.org gives a plot that looks something like this:

Next time: Plotting the measured velocities over the distance of the actual route.
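(The weekday gotcha mentioned above is easy to check with a plain datetime, independent of the biking data — a minimal sketch:)

```python
from datetime import datetime

d = datetime(2014, 7, 7, 6, 30)  # 2014-07-07 was a Monday

# month, day, and hour are plain attributes...
assert (d.month, d.day, d.hour) == (7, 7, 6)

# ...but weekday is a method, so it has to be called (Monday == 0);
# forgetting the parentheses gives you a bound method, not a number
assert d.weekday() == 0
assert callable(d.weekday)
```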
https://szeitlin.github.io/posts/biking_data/biking-data-from-xml-to-plots-revised/
Request For Commits – Episode #8

Open Source and Business with David Cramer & Isaac Schlueter: raising venture capital, working with investors and with community, and different company approaches to developing open source projects.

Transcript

On today's show, Mikeal and I talk with David Cramer, CEO of Sentry, and Isaac Schlueter, CEO of npm. Sentry is a developer tool for detecting application errors, and npm is the default package manager for Node.js. Both started as open source projects, which David and Isaac have built into businesses. Our focus on today's episode with David and Isaac was around building businesses in open source. We talked about why they decided to turn their side projects into full-time work, and how they experimented with finding steady sources of revenue. We also talked about raising venture capital, working with investors and with community, and different company approaches to developing open source projects. So let's get into the history of Sentry and npm before we get started.

Both of you have pretty interesting backgrounds on how you built these businesses over a number of years. Sentry, David - I know that it started as a project that you built for work when you were at Disqus, and then it seemed like it was this side project, even though it was making money for years. It only went full-time a couple years ago, and you recently announced that you raised some venture capital. So in some ways it seemed, from reading about it, that you were reluctant to turn it into a full-time business. Can you tell us a little bit about that evolution, from going from a work project to a side project, to a full-time thing, and why you decided to double down on Sentry?

Yeah, absolutely. So Sentry is pretty old - it's about eight years, I think, at this point. When I joined Disqus, they were using what eventually turned into Sentry.
Terrible piece of code, so my first month there we kind of revamped it, brought it up to speed with the quality that we needed, and then kind of built it over the next couple of years. During that time it started getting traction. I guess one thing that always fascinated me with open source was you can kind of share results with the world, they provide feedback; the communities are very interactive, and that traction kind of led to us continuing to build it. That meant things like adding more support for other languages, for other platforms. Honestly, just adding things that we never needed ourselves at Disqus.

Then fast-forward a few years, we had built a little SaaS service on top of it, and it was making a little bit of money. The running joke when I kicked it off was, "It might cover beers." So not a lot of ambition there. And here we are, about four and a half years since that day - we decided to go full-time last year, at the start of the year, so it's been about a year and a half. The decision for us was we had a significant number of paying customers and it was really, really hard to have two jobs, so we had a decision to make. It was either to let Sentry kind of crumble, or take it to the next level. One of the forcing factors for us was just in the last few years there's been a huge amount of new competition in our space, and anybody who knows me knows that I'm not one to kind of let somebody step in and take over whatever I've been doing. So we really risked Sentry becoming obsolete if we didn't double down on it at the time. That also is kind of a lot of the reason we went down the venture capital route, just to continue to compete and do well. But yeah, happy to dive into more specifics if need be.

I'm fairly risk-averse, so I did not go full-time on Sentry until we were able to match salaries, and I was working at Dropbox before (Dropbox pays fairly well), so there was a little bit of that.
It was also just, we didn't really wanna go down the venture capital route. Honestly, it just wasn't interesting to us; it's a very different kind of company at that stage. Our goals now are very, very different than our goals were a couple years ago, but it is what it is.

At any time did the project kind of suffer from not having that kind of support?

I would say absolutely over the course of the years where it was purely a side project. I was building a lot of things at Dropbox during the prime of Sentry's life, and juggling those two full-time gigs was really hard. So what it meant is the iteration speed at Sentry was greatly reduced. That's kind of fixed today, but it took us a good year to really build up that momentum to start shipping things very aggressively again. And that was just because of the balance of building new things versus supporting new customers, versus supporting the infrastructure behind Sentry. Really tough to do when basically it was just me and my co-founder, who is a technical designer.

Isaac, does this sound at all familiar to you, or is your story a bit different?

It's not that different. I created npm in 2009; it was a big part of the Node community for a long time. Obviously, it's the thing that you do when you're doing Node. And by the end of 2013 it had been my side project for a little over four years, and it was still running on donated infrastructure and just my side project while I was running the Node project at Joyent. It just kind of grew to the point where that approach no longer worked. We had about two months of… I joked that we had one nine of uptime and it wasn't in the first digit, like it wasn't in the tens place. Stuff would fall over and I would start getting messages and emails and tweets and angry GitHub notifications, and eight hours later I'd wake up and actually deal with it.
I had some help from some very noble souls who basically said, “Look, you can either start paying for this thing or take it somewhere else, because we can’t do this for free anymore.” Around the same time, I was getting a lot of feedback about it from people working at pretty big companies, some of them saying, “Look, we really wanna use this thing, but a) we can’t trust it because it falls over all the time, and b) it’s all open source; we need a thing to host our private code and all the alternatives are kind of annoying to use and inconvenient, compared to how easy it is to manage open source code with this tool.” That seemed like a happy coincidence of events there, and we raised some money from True Ventures and a couple of angels and started the company at the beginning of 2014. It really was a shift from… You know, it wasn’t a business with paying customers that I was really only working on part-time, it was something that I was doing literally the minimum required effort, because I was so busy with Node, and it was just falling into disrepair and that was really very sad to see. So it seemed like the obvious choice was to start a company around it so that I could justify hiring some full-time, dedicated ops people and actually build some products to address the needs of companies that were using it to manage their JavaScript modules. [07:57] Since then, we’ve grown to about 25 people now, we have two products and they’re growing pretty well. It’s a ton of work running a company, but at least it’s actually full-time, front and center work. That’s been sort of nice, to be able to focus on it. I think we touched a little bit on your projects being open source, and that it was important to you. Do you think at all about how that might affect running a business while having an open source product and whether it might be a little more difficult? 
I think for npm it’s always been pretty clear to me that the client itself, the CLI program is not what people are ever gonna pay for. They’re willing to pay for the service that enables the workflows that they’re using, but there’s no benefit really to not having the npm client be open source, or even most of our infrastructure, because really what they’re paying for is access to this gigantic community and a service that’s always going to be up and running. If it was easy, I would have been able to do it without starting a company, and I think that that’s sort of the thing that puts us in that sort of strategic position in order to run a successful business. I have really no worries whatsoever about it being open source, and honestly the investors that I’ve talked to, some of them get it, some of them don’t. The ones that don’t are never gonna be a good partner anyway, so I don’t really care too much. How about you, David? Yeah, it’s kind of the same on our side. Sentry is an infrastructure service. A lot of people run it themselves. I would say they’re not overly thrilled with that idea, but due to the nature of data that Sentry works with, it’s kind of a requirement that we work on premise. But we get enough small businesses and even some larger ones that use our SaaS service, so it was kind of a natural fit, and it’s really allowed us to balance it out. The way we see open source is that it’s a thing that’s allowed Sentry to be accessible without us having a massive company backing a product. I think that was especially great early on in the project’s lifecycle, and today in terms of marketing and other aspects it’s very valuable for us to continue to grow. Honestly for us, we think of it a lot like a traditional freemium model. We don’t offer free accounts on our SaaS service, but if you wanna use Sentry for free, you can just run the same code that powers our SaaS. 
So Sentry and npm both started while you were at companies, or developed a lot while you were at Disqus and Isaac was at Joyent. Can you tell us a little bit about the relationship between the company and the project? Did having those projects help the company that you were at, or were they involved at all? Yeah, so at Disqus, like I mentioned, the first month I was there we kind of kicked off Sentry. We kind of had the “happy accident” that it was pre-licensed, because we just built on additional code, and the founders of Disqus were totally on board with open source. So I was able to get some help building out Sentry for our needs at Disqus, but I made a pretty clear point to never just build random features on top of Sentry during my time at Disqus. I would work on it in the evenings and things like that, but we would also build out features that we needed to make it better. I think the company saw a lot of value in it, not just Sentry but in open source in general. It helped us on recruiting fronts, it kind of gave some confidence to the technology team that what we were doing was useful, and I think in general it was a good situation for both sides, and today we still have a very good relationship with the founders of Disqus. And there was no tension when you left… Because if it was a recruiting tool, you have to imagine that that recruiting tool kind of goes away if now there’s a Sentry company and they’re not the Sentry company, right? As far as npm and Joyent, I think I got the job at Joyent in part because I had written npm. Not in part, I mean in whole. [laughter] Yeah, basically Ryan Dahl was like, “Hey, come work on Node. I need somebody to help with this thing that Joyent is paying me to do”, and I happened to be jobless at the time when he said that, so it worked out. But no, I always maintain that npm was my thing and not Joyent’s thing. 
Honestly, I think it’s sort of tricky – I don’t wanna speak for Joyent, obviously; it’s a whole different company than it was when I got hired, and it was a whole different company when I left than when I got hired. It’s a whole different company now. But I think for a lot of the time, the view of npm was that it was sort of this thing that’s attached to Node. Maybe it’s just like egotistical, but my point of view has always been Node is the thing that enables npm, and I was mostly trying to keep Node going good and working on it, because I think it’s important for the broader JavaScript community, and I think that npm is a central part of that. Obviously, I’m biased; this is my pet project, and it was my labor of love long before it was my company, but there was no bad blood as I was leaving. I honestly don’t think they really cared that much. It’s not like they ever hired people to work on it, or anything like that, and it was never seen as a Joyent product. There was some discussion early on, like “Why don’t you sell npm to Joyent and then we’ll put a team on it?” I think we were several orders of magnitude away from one another in what kind of value we assigned to this thing, so it was not likely to work out. Again, I don’t say that in any kind of disparaging way towards Joyent; they kind of saw it as a thing that enables Node, so they weren’t gonna throw a huge team of 50 people behind it or anything, whereas I kind of saw this as a product that works really well, but it had to be run in a very particular way. Joyent being first and foremost operating systems and infrastructure type of company, I think it just kind of didn’t have the right sort of DNA to be that supported for this particular product. Again, I really have nothing disparaging to say about them, it’s just there are different types of companies and different types of teams and different skill sets, and it just was not really ever gonna be a good fit, I don’t think. 
So that's a big part of why I made the decision to move or to leave. We also explored the option of putting npm in a foundation or creating a foundation for it, and really just focusing on the community/open source aspects of it. The reason why I didn't go that route is that it works out to being about the same amount of work for fundraising to create a foundation as to get a startup off the ground. You still have to go and make a pitch to a bunch of people and get them to put money in the foundation. The difference is that there's a different set of motivations and different sorts of companies that put it in for different reasons. A VC is putting money into something that they think is going to be a profitable enterprise, where they're going to get a positive return on their investment. A large company generally invests in a foundation because the success of that foundation is in their interest. Let's say they depend on Linux, and they wanna make sure that Linux keeps being healthy because they use it so much, so they're gonna put some money in the pot to kind of keep this thing alive.

[15:55] Another reason why they tend to do that is because it's a nice marketing boon. You see, "Oh, IBM supports WordPress. IBM must not be so bad, because I love WordPress." And because of the sort of background and very developer-centric role that npm occupies, and the fact that I hadn't spent four years marketing it, I spent four years making it something that people could use, developers loved it. But there was no real buzz around it, and there wasn't a huge marketing benefit to having IBM put a bunch of money in npm, or any company for that matter. So setting up a foundation for it would have been at least as much work and not as much benefit.
And then getting a foundation to say, “Look the real challenge here is that we need a private service, and it needs to be a thing that people pay for, because if you don’t pay for a thing that’s guarding your privacy then how can you trust it?” So we needed to build a business around it anyway, and doing that under the auspices of a foundation is just a much more challenging thing to kind of figure out how to fit. So yeah, in the end we’re still on good terms, we’re actually still a Joyent customer, for we use a couple of their services. Obviously, we use Node very heavily; we’re part of the Node Foundation. It’s all generally peaceful and good. And the last question, was there any concern around – if you’re talking about foundations versus businesses, that being a business and a startup, especially taking venture might be perceived as being at odds with what the community wanted, or just changing your incentives. Yeah, I think that there is always going to be some concern around incentives anytime you’re dealing with people, and the less well you know them, the more concerned you’re gonna be about their incentives. When somebody’s your friend, you can kind of have a little bit more of that sort of natural primate trust that comes from being in the same tribe. But in general, there are some really interesting things about the JavaScript community in particular. The JavaScript community in particular is pretty pro-business and they’re pretty open-minded about things like that. It’s a sort of interesting melting pot, because almost every application touches JavaScript, touches a frontend at some point. So you have developers who think of themselves as Rubyists, Python developers or Java developers, but they still need to use JavaScript. So there’s a sort of liberal attitude about a lot of these things, and I think also just sort of the place in history that we are made it a little bit of an easier sell. WordPress already sort of crossed a lot of these bridges for us. 
There is a bunch of other examples of relatively healthy open source projects that were in some way tied to a company or to a foundation, or just some sort of hodgepodge - a foundation made up of several companies. So having a profit motive - for a lot of people I think actually makes it easier for them to see what your incentives are. They can look at npm and say, “This is the product you’re selling. Okay, I get it. If I need that product, I’ll maybe come consider it, but I can see that you’re not trying to sell my email address. You’re not trying to sell advertising space on the command line, or weird tracking beacons or whatever.” It’s all open source, it’s all very clear what we’re doing. We’re trying to be as transparent as possible, which I think is key when you’re interacting with an open source community. David, what are your thoughts on that? I mean, Sentry obviously has a very different community. How did they respond to the venture announcement? [19:45] The venture announcement - it’s all been like “Thanks”, “Yeah, good luck!”, “It was awesome!” We have a few people - we’ll call them the Hacker News crowd, who always question everything. [laughter] But yeah, in general it’s been good. The reason we kicked off the SaaS fundamentally was people wanted to pay for stuff, and I think that mentality easily caters to a VC-backed company. The biggest response we got from people was like, “Oh, I hope Sentry doesn’t change”, and this is just often from people who are more a little bit ignorant about how businesses work; of course things are gonna change, but it doesn’t necessarily mean change is bad. We’re very different now than we were a few years ago. Mostly the product’s significantly better and we’re able to do a lot more, and we’re able to make it a lot more accessible. But you have a lot of the negative nancies that are like, “Oh, Sentry charges $10/month now. 
They're gonna start charging New Relic prices." So we had a little bit of that, but from our actual customers and our actual users, everybody was thrilled.

That's great. I like where this is going a lot. We're gonna take a short break and then when we come back we're gonna talk a little bit about how these projects make money, and I think I'm just gonna ask outright how much they both make, as in salary. [laughs] No, I'm not gonna do that. We'll be right back, everybody.

And we're back from the break. Now let's get into how y'all make money. How do Sentry and npm, the companies, make money? Really quickly, and then we'll kind of go from there.

Yeah, so Sentry is a simple SaaS service. We actually have a tiered business model… Nearly all of our revenue comes from SaaS. We have more free open source users than SaaS customers, and those free open source users happen to be the largest companies. There's some tradeoffs to be made there… Longer term we're looking at how we monetize the enterprise without creating what I call crippleware - that is, like an open core product. We're trying to prove that we can be successful without that, and I think we can. So we've started exploring some of that, but fundamentally the SaaS is what's gotten us off the ground.

For npm, we have two main products. One is npm Enterprise, which is a standalone registry that you can spin up really easily inside of your company's infrastructure, and it integrates with SAML or LDAP or whatever special snowflake authorization/authentication system you might have. Then we have a private modules SaaS system that you can use for organizations and teams. We shipped private modules for individuals and then about six months later added the organizational and team features, and what's really interesting is you can see from the registration graphs and the new customer signup graphs that the individual product was not the product, it's not what people actually want.
The folks are willing to pay or are paying for organization and team features. That’s about evenly split between our SaaS and enterprise products right now. In all of our early customer conversations when we were trying to figure out exactly what we should go build, what we found was about half of the people said, “No, no, no… We’re never gonna be able to put our code anywhere other than in our infrastructure, and multi-tenancy is absolutely a no-go for us.” [24:06] The other half were like “There is no way that we’re gonna install a thing. We only want a SaaS, we only use SaaS.” It’s kind of interesting how that’s born out in practice almost as if that was a perfectly represented example; it was almost exactly 50/50. As far as the open source users, we have about four million people using our service on a regular basis, four million humans. By several approximate measures that you can try and figure out, that’s about what it works out to. But much fewer than that are actually paying users, which is sort of fine. I think there’s actually something sort of morally good about – if you’re using something at a company and you’re using it for proprietary code and that proprietary code is gonna go make money, you should be paying for a service that supports the open source community; that’s doing things that are ultimately benefitting all of us. I know that the companies that you’re inspired by… I know that npm gets compared a lot to GitHub, some of what you’re describing with Sentry reminds me of Travis a little bit. Are there other models that you’re going after? I think GitHub is the one that we definitely get compared to the most. We’ve basically created our business model almost exactly from them. The funny thing is actually our org’s product, we’ve always charged per seat, because it just seemed like it was the easiest thing to reason about, even though that was a departure from how GitHub did it, where you paid per private repo that you have on GitHub. 
The most frequent pushback or complaint we got about it is “Why can’t you just charge us like GitHub?” and then GitHub changed their pricing model to be per seat, and… Now we do charge you just like GitHub, it’s great. The other company that I think is a pretty interesting or inspiring model is Wordpress, or Automattic, to be more specific. Just because they’ve taken something that was a vibrant open source community and really in some very creative ways have spun it into an extremely successful business. For Sentry, early on we straight up copy-pasted somebody else’s pricing model, and I think they put about as much thought into it as we did, so we’re still working on fixing that. But in terms of other companies, I actually think GitLab is very interesting. They are in a way similar to Sentry, where the entire product is open source. It’s licensed very differently, but they try to ship both. They have an enterprise offering and then they have the SaaS service. You can interact and contribute to both of their codebases, and it’s very compelling. But yeah, just like npm, we very much look at other tools and developer services in this space, GitHub being one of the big ones. We also look at a lot of infrastructure tooling, how does New Relic price, and actually we look at those more so like “This is not what we wanna do.” But there is a lot of variance in how things work, because often in the open source space you end up with a kind of support and services model, and I think I can probably speak for both of us in saying that that’s absolutely not what we wanna do. Those companies are Elastic, and potentially Docker, depending on which direction they go. And then a lot of the older database companies. It’s funny to hear both of you say that you basically started by copying someone else’s model and then tweaked it as you went along. A lot of people that I’ve talked to who have open source projects, they’re interested in monetizing them. 
They’re sort of like, “I have no idea where to start. I have no idea what a business looks like and I don’t have any great models for this.” How did you both learn and figure out how to run a business? Was it just watching other companies and learning? [28:09] I don’t actually know how to run a business… [laughter] That’s not exactly true. You just have to very aggressively learn as you go. I think that as far as pricing model, just to get to your first question about copying and pasting somebody else’s pricing model and then tweaking it a little bit, honestly I think that’s kind of a sensible way to go. I know this is sort of a controversial opinion, but I sort of think that the more time you spend worrying about your pricing model, chances are that’s time you should have spent going to market. It definitely pays to stop and do some slow thinking about it, and it’s been really interesting to hear about some of the thought that went into GitHub’s original pricing model and now how they’ve changed it for their organization’s SaaS. Any pricing model is applying a tax to some particular behavior. If you look at it that way, charging per private repo is saying “There is a tax on your private namespace, and the bigger that namespace gets, the more things that are in it, the more you pay us. But you can have as many people as you want involved in those projects.” So that was obviously sort of a play to make sure that they’re growing their user base as aggressively as possible. Once you have a big user base, once you have a big community, growing that community bigger is not gonna benefit you more. At this point, GitHub is probably close to nearing saturation of all the open source development that happens on Earth, right? GitLab is obviously a big one, there is also Stash - there are alternatives, but GitHub is the one that’s sort of the dominant player in that space. 
The benefit to them of having that be the tax was probably no longer the best thing for them to be doing. On the other hand, if you say “We’re gonna charge per user in your paid organization”, now the tax is on having a bunch of people who are not participating in the open source world. Participation in open source remains free, but you want an organization to be on your product and you want them to be using it in the same way that it’s used in open source. You don’t want a company to come in and say, “Gosh, we don’t wanna have to upgrade our plan, so let’s just stuff more stuff in this one mega-repo that has all of our code in it”, because it’s just not how it’s done in open source. Applying the tax that way ended up having some weird, perverse incentives. That being said, GitHub threw something together, they thought about it for a little while and then they went to market with it, saw how it did and saw what the impacts were. I think you can really easily get yourself into analysis paralysis around that decision and avoid ever building something useful. Yes, there is a lot of science and a lot of art around pricing models and stuff, but whatever; it’s fine. Figure out a way to charge it, work out a model that shows you getting profitable eventually, and then see how it does. It’s one of those things where the less thinking you do about it, in some cases, the better. Yeah, and I think from my side the best advice I have is make money on whatever you’re doing. This is not the first time I’ve attempted doing something on the side, and the lesson I learned each time was if I have to keep shelling out money to keep things running, I’m not gonna keep it going, no matter how interesting the product is. [31:56] From day zero, we’ve never put money into Sentry. We bootstrapped it, we got sponsorship from a couple companies, Heroku and SoftLayer, both hosting companies, and from there we were always cash flow positive.
Now, we weren’t paying ourselves any money at the time, but that didn’t matter because we were employed elsewhere. That’s really allowed us to take the time to figure out the business side, and that’s not saying we figured it out. By no means are we done. But it really allowed us to grow organically, and especially for open source… And anybody who’s asking those questions, it’s gonna be more like an individual or side project-esque kind of thing, and I think that’s a very good way to go about it. See where you can cut costs, see where you can get free sponsorship… For example, Sentry gives away free service to open source projects, and I think that there’s a lot of companies that will help you out and really help you get off the ground. That varies, because clearly for a SaaS service it’s a little bit more straightforward, but I definitely think looking at what other people do and how they’ve approached problems is the most valuable thing you can do. I’m incredibly jealous about the luxury of having something that was cash flow positive from day one. We pulled the business together around this ship that was more or less on fire at the time, and had not been cash flow positive. But it’s improving. Don’t worry, the first thing when you take VC funding is to burn it all. [laughter] We’ve since fixed that… Sometimes when you say the term “open source business”, people are thinking about old school enterprise companies. Then I look at Sentry or npm and I’m like, “Oh, these are the cool, new, shiny open source businesses.” I don’t know whether that’s just because y’all are just like newer businesses that they feel shinier, but I haven’t tried to pin down whether there’s something that’s actually changing in the way that people think about open source businesses, whether it’s more SaaS than enterprise stuff right now, or something. Are there any trends that you’re seeing around what an open source business is now, versus in the ’90s or something. 
I think the biggest difference is that the internet exists now, and it’s a real thing. People do things on the internet; they have SaaS systems, they do stuff online… It’s less of an assumption that what you’re buying is the kind of unlocked version of the crappy open source thing. I think you call it crippleware… You see that a lot in databases and to some extent in operating systems traditionally. The real value in an open source world, and the value in an internet-connected world is in communities and in not having to have the teamwork in your company to build a lot of these things. We use a ton of SaaS at npm. We use AWS, we use Joyent, we use Fastly… It would be completely impossible to build this business without leveraging all of those same things. I think when we imagine the stodgy old enterprises, even those are increasingly either selling services or selling very large support contracts. If you’re somebody like IBM or Ericsson and you’re selling something to a humongous, multinational oil company or the Department of Defense, or the water and power of Europe - these massive deals for huge infrastructure projects, like… Okay, yeah, that still happens. That’s gonna keep happening. But I think when we talk about open source business, we’re talking about a handful of different things that are somewhat dependent on what type of open source thing we’re talking about. npm is a business built around a workflow which has been pioneered in an open source community, so we’re not really selling our open source thing; what we’re selling is some tools to make it easier to use this open source workflow, which is a little bit of like a subtle thing to wrap your head around. I think Sentry is a similar kind of thing, it’s an open source product; you can use it, you can have it. 
What you’re really selling is the service, and that’s a thing that is like “Yes, of course I will pay for that, because the alternative is I pay even more for it.” Yeah, even the term “open source business” is not really actually a term; it’s like a business that might have a component to it that’s open source, but how that is monetized, or whether it’s even relevant to the business itself, seems to vary a lot. There are a lot of companies now saying they’re an open source business, or they open source a certain aspect of their products, and they do it for recruiting, and they do it because they’re trying to build a platform and attract all these developers to it and whatever. Those things are, I think, very different from what npm or Sentry does. Yeah, I agree with this whole shift that makes it more accessible - cloud services fundamentally - but I think the thing that really would make you think that npm or Sentry are these hot new open source ideas is because of the market that we’re going after. Both of us focus on the developer market. Not necessarily directly the enterprise market… We go through developers to get to the enterprise market, and that’s a significant change; that never used to exist. It never used to be that a developer at a big bank could run npm or Sentry internally and really drive the decision “We’re gonna pay for this.” It used to always be passed top-down. I think that’s the major change that we’re seeing, and that’s why there are companies like us, and there are going to be more companies like us going forward. I think that’s a really great point. It kind of coincides with a lot of other things around more like bottom-up everything, instead of top-down. Also, you mentioned you’re not sure if “open source business” is even a real term; that does not stop anybody from using it. [laughter] It absolutely is a thing that people say, and even use it in kind of ludicrous ways. Like, how can you open source-ify your business, which is a….
Oh, god… It’s a real sentence I’ve heard a real human being utter. It’s challenging, right? I mean, not every business makes sense, and the role that open source plays in some businesses might not make sense. There’s a lot to be said for proprietary software; there’s a lot of proprietary software in the world, and we use a lot of it. But I think increasingly what’s happening is where almost anything that you would give to a customer is probably mostly open source, and anything that you’re gonna charge for is probably behind, running on your own infrastructure anyway. It’s just easier to build a business around that, I think. Alright, we’re gonna take a short break and when we get back we’ll dig a little bit more into the open source side of things. And we’re back with Isaac Schlueter from npm and David Cramer from Sentry. Let’s get into the open source side of things with both of your businesses. For the projects that you’re stewarding as a company, do you think of your projects as something that the community builds and you’re sort of another actor in that community? Or is it something that your company is building and then open sourcing to the community? Sentry has never been kind of a community-built project, and we’re totally okay with that. Even today, which is probably not great for a CEO, if you look at the contribution graph, it’s like I wrote nearly everything, and I think this is valuable because it allowed myself to drive the direction of the project, which is atypical of a lot of open source projects where they kind of end up like this hydra going a bunch of different directions and you really have to wrangle them in. That’s been very beneficial to us. That said, on the other side what we have done is we really pushed a lot of “Here’s how it’s extensible, here’s how you can integrate or send data” and things like that. And what we saw early on is a lot of people were interested in that side. 
It’s much more accessible than a complex infrastructure project is, it’s much more appropriate for their situation. So what that means is we started building the project around the Python community. Somebody from that community went and started working at a Rails company, and they’re like, “You know, it would be really nice if I could send my Ruby data to Sentry” and it’s like “Oh, we have an API for that”, so they built our first Ruby client. Somebody else was like, “Hey, it’d be cool to be able to use GitHub to create issues from Sentry”, so they went and they built the GitHub integration. That’s really where a lot of our power has come over the years. For us especially, that’s been a very compelling story because while we might be very good at what we do, we definitely are not experts in every language and every platform, and we barely know what kind of tools exist in the ecosystem anymore. So opening it up to allow contributions where contributions make more sense has been really good. But then we also do now and then get bug fixes from companies. Back in the day, when this big gaming company started using Sentry, they started contributing these really compelling performance patches that were very specific to their needs but were very interesting. Fast-forward to today and we still have that same idea. Square has recently started contributing a bunch of small fixes here and there whenever they run into any issues, and often that’s because they’re running something in a slightly different situation - in this case they’re using MySQL instead of a Postgres database, and they have a very specific issue that comes up… We’d probably fix it for them, but the fact that it is open source and the fact that it is accessible really just brings in the contributions. But they’re never anything that’s really driving any kind of product features, and that’s actually worked out extremely well for us.
I think it helps because it caters more towards a classical business, rather than a big open source ecosystem. I’m interested to hear Isaac’s take on it, since npm is literally an ecosystem. [43:49] That’s always been a little bit interesting with the npm client, because it was… There are a couple of different things to talk about when we talk about our participation in open source. There’s the npm project itself, the CLI project. There’s the massive number of open source modules which are published and shared and installed on the npm registry, and then there’s the broader open source JavaScript community which sort of includes Node, React, Ember and all the rest. The npm client is open source - it always has been - but for a very long time it was essentially a single-author open source project, which is a very simple governance style. The governance was, “I make all the changes, and if you have an idea, I’ll either take your pull request or not”, but it was essentially just run by me. That’s since changed significantly. We have a team of people working on it - there are three individuals working on it today, and quite a bit of their actual day-to-day work is spent on communication with our open source users. We take issues on the open source GitHub issues list, they have a semi-regular call that they do as an open hangout where people can suggest things for their agenda to discuss for that week, and they do regular releases with release notes, and are very responsive to the community. That transparency has caused an increase in the number of pull requests and the quality of bug reports that they get, but they’ve also been working on making the codebase itself a little bit more accessible, which is a big and somewhat overlooked challenge in any open source project.
I think the social structures around most open source projects make it so that you never really address that, because all of the people who are working on it obviously are the ones who are capable of working on it, who are not intimidated by the codebase and who think it’s totally fine… Whereas newcomers can look at this, the way that it’s structured and the way that the architecture isn’t really well explained, and say, “Gosh, this seems kind of hard. I don’t know if I really wanna get involved.” So they don’t get involved, they don’t get a voice, and so it never changes. I’ve seen this in literally almost every single open source project I’ve ever been connected to - npm, Node, the PHP Core project, the Linux Kernel… Although the Linux Kernel probably does a better job of this particular aspect than most projects. It’s still pretty daunting, though. You can approach it in a couple of different ways, by breaking things up into smaller modules, breaking things up into sub-projects that people can contribute and be a part of, but it’s still just an ongoing, difficult, unsolved problem to make a codebase accessible. But in terms of our role in the community, that’s been a little bit more challenging. I feel a personal weight of responsibility to make sure that this community is a functional community and to make sure that our users are able to use the service and not be in too great a conflict with one another, and able to actually get what they expect out of it. As the community has grown, we’ve gone through several different stages where different sorts of governance approaches have made more or less sense. In the very early days actually, Michael wrote the first version of the registry, which had no authorization or authentication whatsoever. It was like, “You wanna publish a thing? Alright. You probably know what you’re doing if you know what this thing is.” That didn’t last very long.
That had people taking advantage of it almost from day one, so we scrambled to add some authorization in there, but it was the simplest possible thing and now we’re kind of grown up to this stage where we have private code, where we have teams, and you can specify which team has access to which modules, to read access, write access and so on. [48:08] Also, as a community, you go from the state where literally everybody has met everybody else. Where everybody was one or two degrees of separation from each other, to the point now where there’s more npm users than in some major American cities. This requires a different sort of policy, it requires different practices in terms of having things more well-documented and a little bit more regimented in how we approach certain types of conflict. While on the one hand it’s a little bit troubling to have a for-profit company - or any entity, really - in this position of authority and control, at the same time anarchy doesn’t really serve anybody. Anarchy just means that the loudest voices have the most control, and that’s really not any better. I feel like there does need to be some sort of governance structure, and a body with a dedicated interest in keeping this community healthy is in charge of keeping this community healthy. Otherwise it doesn’t happen, and you end up with a tragedy of the commons really quickly. At the same time, we try very hard not to abuse our position and to be as transparent as possible. Some of the things we do there - we have a support team, which if you email support@npmjs.com you will talk to them. They’re not there to do your Node homework, they are there to resolve issues that you might have with the service if there’s no other outlet that sort of makes sense. It’s sort of the frontlines of our support for the community. We also have a lot of time and effort and money and energy spent on keeping the service running, which is sort of the core thing that keeps the community healthy. 
We try very hard not to abuse our position as much as possible. We are trying to run a business, but the actual purpose of this business is to keep the community alive and running and healthy. My main goal with starting a company is to keep the npm registry running forever. I think that’s in the interest of most of our users, so I can sleep okay at night because of that. If two people want the same thing or are fighting over something, the one that you don’t agree with is gonna be very upset at you about it, and there’s sometimes just no way around that. Do you think it’s possible for a company to actually be in a community or part of a community, or is it always sort of this outside patron that is kind of facilitating the rest of the community? Because this is in some ways like talking about protecting the npm community… For either of you, do your employees represent Sentry, do they represent npm when they’re communicating within the projects, or is it like they represent themselves? I think that all too often we talk about a company as if it’s a single entity, and despite what politicians and Super PACs say, companies aren’t people; corporations aren’t people, they’re legal entities with interests, but they’re just sort of a bunch of people working together for some kind of common interest. That’s essentially what the company is - they’re a social fiction designed to make money. That’s not a bad thing inherently, but it can get things a little bit twisted when you start saying “Well, the company says this” or “The company says that.” It’s like, “No, you say that. You’re just hiding behind this weird logo.” [51:44] The brand of npm I think is very strong. Like I said, I have a large personal interest in continuing that and making sure that we’re a force of good. I think mostly that can very easily become a virtuous cycle or a vicious cycle, depending on how you spin it.
Most of the people who work for me have, as far as I can tell, also considered themselves part of this community individually, and would continue to be part of it if they stopped working here. It’s just that this is the job that they’re doing and they are doing it because they believe in it and also because we pay them. It’s kind of weird… I think frequently where you tend to get into trouble is when you try to have a company pretend that they’re part of the community when it’s pretty clear that they’re not. If they’re saying, “Look, we’re using Node, we’re a part of this community, so therefore you should like us, because you like Node.” It’s like, “Maybe I don’t. Maybe I don’t agree with what you’re doing.” I think it’s actually kind of rare that there are companies like npm or Automattic or Sentry where not only are they just kind of involved with this community in this sort of abstract sense, their products and services literally depend upon it and their success depends upon the success of the community, and the people working at the company consider themselves part of the community. David, how about Sentry? Are they Sentry employees or are they themselves? It’s very different for Sentry, just because we’re not a community. npm has tons and tons of users and they interact heavily with their own code, their own projects through npm, whereas Sentry it’s like you’re using our product, the product that we built. I very much believe that everyone at Sentry is acting on behalf of Sentry, but as themselves. Their interests are Sentry’s interests, that is the company and the project; we think of them as kind of one and the same. But it’s not a community as a whole. It’s very much like, “Hey, we’re all of the same mindset, we all wanna build this thing. This is our singular voice and how this thing is gonna be built.” Does this affect how you think about recruiting? Are you looking for people that are…? 
I mean, obviously, probably people that have already been involved with Sentry, or care about it, or think about it… But do you think that sort of strengthens that unified voice, if you’re hiring someone who is already involved with the project and already feels like they’re a part of it? It absolutely does. I think we’re a slightly non-traditional company, at least in the VC-funded startup world, in that the entire team is engineers right now. Most of the team has contributed to Sentry or runs Sentry, or at least used Sentry before joining the company. So it helps that a lot of people… It’s not even a vested interest, it’s just they had a genuine interest in Sentry before coming onto the team, and that helps a lot with how do we build the product, how do we think about the product, how are each of the members of the team individually involved in the product. It’s been very valuable for us to have that. I would say that everybody has a different idea for the small branches of how Sentry should work, but everybody has the mindset that this is super valuable what we’re doing, we agree with the direction we’re going, how can we contribute to make that a reality? I think there’s different tradeoffs and different challenges for a community-driven project versus – I kind of consider us more like a BDFL situation. There’s absolutely value in the community and I think it’s something we’re going to explore as time goes on, but we’ll see. So far, what we’ve done has worked well for just building a singular project. How do your investors think about the whole – having to work with communities or just outside users that might have a say in your projects, that aren’t actually a part of the company? [56:00] Speaking for Sentry, I’m not sure the investors really consider it much. I think a lot of the trust… Like, when we talk to them, the mentality is like, “You’ve done very well to get where you are. Whatever you’re doing must be right.
How do we push it to the next level?” So I think there’s a lot of explicit trust even there. This might be very different for npm because again, we are a very focused, single project; I wrote most of the server itself and pretty much everything is now built by us, the company. But it definitely just comes down to, “You guys are clearly the experts at doing what you’re doing. There must be something that’s correct there. We’re not gonna tell you what to do, since realistically you should know better. We’re betting on you, at the end of the day.” I think as far as the community interactions and advice from our investors, they… I don’t know, I don’t wanna say they don’t care; I mean, certainly they’re very excited that we have so many users. Both of our investors, if you look at their portfolio companies, what they tend to go for are companies that have some kind of a network effect, where the more people use this thing, the more other people will end up using this thing. True [Ventures] is in Automattic, they’re invested in a bunch of other SaaS companies; BVP, they just saw some really big success from Twilio, which just recently went public, and they’re sort of very focused on developer-led enterprise products with some kind of a community or network effect, and open source just sort of plays to that strength really well. That being said, as far as managing the community, they’re not really experts in that. I think they are betting on us being experts in that. Obviously, we’ve built this huge community, we have this huge top of funnel; they’re mostly just concerned with how well we can turn those eyeballs into dollars, so to speak, to fall back on awful dotcom parlance. It’s funny hearing both of your… I think it’s important hearing both of your experiences, because they are very different types of projects and the way that you’ll run them and monetize them and manage the community is naturally very different.
I think it just speaks again to the fact that not every open source project is the same; it’s almost like weird that we use this term that tries to encompass all these different cultures, where actually each one is quite different. Do you think that building businesses like these is different from building other types of startups at all? Is there anything about having the open source side of things that makes it different from any other startup wisdom that you might hear? I think for npm it kind of makes it a lot easier. One of the biggest hurdles is getting people to use your thing, and I guess it’s a little bit suspicious to say that it’s easier… I mean, we’ve traded one set of problems for another, but there are certainly some very big problems which are fatal problems for many companies, and we just sort of haven’t even had to worry about, which is sort of a luxury. For Sentry it’s fairly similar… There’s things that we could not have achieved without the open source aspects, at least not at our scale. For us, cross-platform is a true fundamental strength and requirement of our product, and there’s no way we could have done most of this without the community. But I think at the end of the day, outside of that, Sentry looks just like a normal company. Again, that’s very different from npm, but it’s interesting for us because – we don’t necessarily publicize it super well on the website, but we often get questions like “Is Sentry open source? Do I have to pay you? What does this mean?” Because we have for example an enterprise version, which is fundamentally just a support contract. So there’s still a lot of challenges around conveying what open source means to your users and how that affects your business and things like that. But I think it’s just another channel that you treat differently at the end of the day and each business does that differently. Open source is just one factor that influences that. 
There could be many other factors, depending on the industry or the type of product. That’s a great idea to end on. Thanks for talking with us, David and Isaac. We really enjoyed this. Yeah, thanks for having us. Thank you! Our transcripts are open source on GitHub. Improvements are welcome. 💚
https://changelog.com/rfc/8
# Geocoding in Python: A Complete Guide

A step-by-step tutorial on geocoding with Python.

## Introduction

When dealing with large datasets for machine learning, have you ever come across a messy free-text address column? Location data can be very messy and difficult to process, and addresses are difficult to encode because they have very high cardinality. If you try to encode a column like this with a technique like one-hot encoding, you will end up with a huge number of dimensions, and your machine learning model might not perform well. The easiest way to overcome this problem is to geocode the column.

## What is geocoding?

Geocoding is the process of converting addresses into geographical coordinates. In other words, you transform raw addresses into latitude/longitude pairs.

## Geocoding in Python

There are many libraries that can help you do this with Python. The fastest is the Google Maps API, which I recommend if you have more than 1,000 addresses to convert in a short period of time. However, the Google Maps API isn't free: it costs around $5 per 1,000 requests. A free alternative is the OpenStreetMap API, though it is a lot slower and slightly less accurate. In this article, I will walk you through the geocoding process with both APIs.

## Method 1: Google Maps API

Let's first use the Google Maps API to convert addresses into lat/long pairs. You will need to create a Google Cloud account and enter your credit card information. Although this is a paid service, Google gives you $200 in free credit when you first create a Google Cloud account, which covers roughly 40,000 calls to the geocoding API. As long as you stay under that limit, your account will not be charged. First, set up a free account with Google Cloud; then follow Google's instructions to generate a Google Maps API key.
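To make the dimensionality problem from the introduction concrete, here is a minimal sketch (the addresses are made up) showing how one-hot encoding grows one column per unique value:

```python
import pandas as pd

# Three made-up addresses standing in for a real high-cardinality column
addresses = pd.Series(['12 Main St', '99 Oak Ave', '7 Pine Rd'])

# One-hot encoding produces one indicator column per unique address
encoded = pd.get_dummies(addresses)
print(encoded.shape)  # (3, 3)
```

With three unique addresses you get three columns; with the thousands of unique addresses in a real dataset you would get thousands of columns, which is exactly why geocoding to two numeric columns is preferable.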
Once you've received your API key, you can start coding! Pre-requisites We are going to use the Zomato Restaurants Kaggle dataset for this tutorial. Make sure to have the dataset downloaded in your path. Then, install the googlemaps API package with this command: pip install -U googlemaps Imports Run the following lines of code to import the libraries you need to get started: import csv import pandas as pd import googlemaps Reading the dataset Now, let's read the dataset and check the head of the dataframe: data = pd.read_csv('zomato.csv',encoding="ISO-8859-1") df = data.copy() df.head() This dataframe has 21 columns and 9551 rows. We only need the address column for geocoding, so I'm going to drop all the other columns. Then, I am going to drop duplicates so we only get unique addresses: df = df[['Address']] df = df.drop_duplicates() Taking a look at the head of the dataframe again, we can see only the address column: Great! We can start geocoding now. Geocoding First, we need to access our API key with Python. Run the following line of code to do this: gmaps_key = googlemaps.Client(key="your_API_key") Now, let's try geocoding one address first, and take a look at the output. add_1 = df['Address'][0] g = gmaps_key.geocode(add_1) lat = g[0]["geometry"]["location"]["lat"] long = g[0]["geometry"]["location"]["lng"] print('Latitude: '+str(lat)+', Longitude: '+str(long)) The output of the above code looks like this: If you get the above output, great! Everything works. We can now replicate this process for the entire dataframe: # geocode the entire dataframe: def geocode(add): g = gmaps_key.geocode(add) lat = g[0]["geometry"]["location"]["lat"] lng = g[0]["geometry"]["location"]["lng"] return (lat, lng) df['geocoded'] = df['Address'].apply(geocode) Let's check the head of the dataframe again to see if this worked: df.head() If your output looks like the screenshot above, congratulations! You have successfully geocoded addresses in an entire dataframe.
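One thing the snippet above glosses over: the chained indexing into g[0]["geometry"]["location"] raises an IndexError whenever the API finds no match for an address. A small defensive helper is a safer pattern — the sample response below is a shortened, made-up dictionary in the same shape the code indexes into, not real API output:

```python
def extract_lat_lng(results):
    """Return (lat, lng) from a geocode result list, or None if the list is empty."""
    if not results:
        return None
    location = results[0]["geometry"]["location"]
    return (location["lat"], location["lng"])

# Shortened, made-up response shaped like the one indexed above.
sample_response = [{"geometry": {"location": {"lat": 28.63, "lng": 77.22}}}]
print(extract_lat_lng(sample_response))  # (28.63, 77.22)
print(extract_lat_lng([]))               # None
```

Using this helper inside the apply() call means unmatched addresses become None instead of crashing the whole loop.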
Method 2: OpenStreetMap API The OpenStreetMap API is completely free, but is slower and less accurate than the Google Maps API. This API was unable to locate many of the addresses in the dataset, so we will be using the locality column this time instead. Before we start with the tutorial, let's look at the difference between the address and locality columns. Run the following line of code to do this: print('Address: '+data['Address'][0]+'\n\nLocality: '+data['Locality'][0]) Your output will look like this: The address column is a lot more granular than the locality column, and it provides the exact location of the restaurant, including the floor number. This might be the reason the address isn't recognized by the OpenStreetMap API, but the locality is. Let's geocode the first locality and take a look at the output. Geocode Run the following lines of code: import urllib.parse import requests data = data[['Locality']] url = '' + urllib.parse.quote(data['Locality'][0]) +'?format=json' response = requests.get(url).json() print('Latitude: '+response[0]['lat']+', Longitude: '+response[0]['lon']) The output of the above code is very similar to the result generated by the Google Maps API: Now, let's create a function to find the coordinates for the entire dataframe: def geocode2(locality): url = '' + urllib.parse.quote(locality) +'?format=json' response = requests.get(url).json() if(len(response)!=0): return(response[0]['lat'], response[0]['lon']) else: return('-1') data['geocoded'] = data['Locality'].apply(geocode2) Great! Now, let's take a look at the head of the dataframe: data.head(15) Notice that this API was unable to come up with coordinates for many of the localities in the dataframe. Although it's a great free alternative to the Google Maps API, you risk losing a lot of data if you geocode with OpenStreetMap. That's all for this tutorial! I hope you learnt something new from here, and have a better understanding of dealing with geospatial data.
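Note that the base URL was lost from the extracted snippets above (url = '' + ...), so they will not run as-is. The quoting step itself — the reason urllib.parse is imported — can be sketched in isolation; the base URL below is a placeholder, not the real endpoint:

```python
import urllib.parse

def build_query(base_url, locality):
    """Build a geocoding request URL; spaces and commas must be percent-encoded."""
    return base_url + urllib.parse.quote(locality) + "?format=json"

# base_url is a placeholder; substitute the geocoding endpoint you use.
url = build_query("https://example.invalid/search/", "Connaught Place, New Delhi")
print(url)
```

The locality string comes out as Connaught%20Place%2C%20New%20Delhi — without quoting, the raw spaces and comma would produce a malformed request.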
Good luck with your data science journey, and thanks for reading!
https://www.natasshaselvaraj.com/a-step-by-step-guide-on-geocoding-in-python/
CC-MAIN-2022-27
en
Usual disclaimer - I don't know Haskell at all. What follows is my rambling experimentation with the Maybe type. Haskell has a useful type called Maybe. F# and probably most other functional languages have something similar. Even C# has Nullable. Maybe is a parameterised or polymorphic type that represents that we possibly have a value of the type parameter. So Maybe Int means maybe we have an Int, maybe we don't (ie we have Nothing). Because Haskell doesn't allow null values we might use Maybe for a function that returns its input if the input is an even number. giveIfEvan :: Int -> Maybe Int giveIfEvan n = if n `mod` 2 == 0 then Just n else Nothing Maybe is a monad, which, to me, means primarily that it provides a function for chaining computations over a Maybe ((>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b). Imagine I want a function that given a Maybe Int adds 1 to the value (if there is a value). One terrible way to write this is: addOne :: Maybe Int -> Maybe Int addOne n = if isJust n then Just (fromJust n + 1) else Nothing or with pattern matching: addOne :: Maybe Int -> Maybe Int addOne (Just a) = Just (a + 1) addOne Nothing = Nothing since Maybe is a monad we can use do notation: addOne n = do v <- n return (v + 1) The benefit here is that addOne handles the Nothing case without an explicit conditional. If you don't mind using a lambda you can use the de-sugared version of do, >>= (bind): addOne n = n >>= \v -> return (v + 1) Or instead of a lambda we could define an extra function: increment :: Int -> Maybe Int increment v = return (v + 1) addOne :: Maybe Int -> Maybe Int addOne n = n >>= increment Here is a complete program. Try changing the 8 to an odd number to see the Nothing case. import Data.Maybe giveIfEvan :: Int -> Maybe Int giveIfEvan n = if n `mod` 2 == 0 then Just n else Nothing addOne :: Maybe Int -> Maybe Int addOne n = do v <- n return (v + 1) main = print (addOne $ giveIfEvan 8)
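Since addOne only ever applies a plain function inside the Maybe, the Functor instance gives an even shorter spelling than do-notation or >>= — this is an alternative the post doesn't show, not the author's code:

```haskell
-- fmap lifts (+ 1) over the Maybe: Just 8 becomes Just 9, Nothing stays Nothing
addOne :: Maybe Int -> Maybe Int
addOne = fmap (+ 1)

main :: IO ()
main = do
  print (addOne (Just 8)) -- Just 9
  print (addOne Nothing)  -- Nothing
```

The monadic >>= is only strictly needed when the continuation itself can fail (returns a Maybe); for a pure function like (+ 1), fmap is the idiomatic choice.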
https://www.withouttheloop.com/articles/2013-05-19-maybe-haskell/
@ferdinand, YES!!! That solved it. Thank you so much. Forgot about the effectors order inside the Matrix. I was only changing the order in the Object Manager. @kbar, thank you for the reply. The problem with the Scene Hook is that it requires C++. I'm coding in Python. I would like to create an Object plugin and I would like the code to be able to show a bitmap, with optional alpha (transparency) adjustment, in the background of the viewports (perspective and orthogonal views). What is the best way to do it? I don't need actual code. Just what methods to override and what must I make to be sure it displays below all viewport objects. Thank you very much in advance for any reply. Rui Batista Thank you so much. That did it!! I have a CommandDataPlugin that creates a dialog with buttons and a checkbox. It is easy to get what gadget was clicked, inside the Command method. But how do I get the value of the checkbox? I need to know if it is on or off, and Command only tells me what gadget was clicked. You're welcome, Manuel. Actually, I usually prepare all the textures and mapping to be in UVW mapping, when texturing is required. But, mainly, what I need is exporting geometry that is animated. Thank you very much, Manuel. I will send the scene to the e-mail you provided. I'm trying to export a Dynamics MoGraph animation, with a Voronoi Fracture object. If I export as OBJ, it works fine.
If I export as C4D, all the saved files are the same, and have no animation. +".c4d") poly_doc=doc.Polygonize(keepanimation = False) if poly_doc != None: c4d.documents.SaveDocument(poly_doc,file_name,c4d.SAVEDOCUMENTFLAGS_DONTADDTORECENTLIST | c4d.SAVEDOCUMENTFLAGS_SAVECACHES,c4d.FORMAT_C4DEXPORT) But all the saved files are the same and I wanted each file to export as a different frame, already polygonized. Is this possible? Once again, thank you, Manuel. I had to do a few changes and it is working now. I even made it select the faces that are within a certain angle from the camera. Here is my code: for i,poly in enumerate(faces): a,b,c,d = poly.a,poly.b,poly.c,poly.d pta = points[a]*mg ptb = points[b]*mg ptc = points[c]*mg ptd = points[d]*mg v1 = pta-ptb v2 = ptb-ptc normal = v1.Cross(v2) normal.Normalize() direction = cam_off - center norm_dir = direction.GetNormalized() angle = NINETY - normal.Dot(norm_dir) if (angle > 0.0 and angle < max_ang): selection.Select(i) Thank you Manuel. Your code is for an object, not a face. So, I tried to adapt it and this is what I got (and it doesn't really work): def main(): selected=doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_0) if selected==[]: return bd = doc.GetActiveBaseDraw() cam = bd.GetSceneCamera(doc) if cam == None: cam = bd.GetEditorCamera() cam_mg = cam.GetMg() # in case I need it cam_off = cam_mg.off # in case I need it for op in selected: if op.GetType() != 5100: continue mg = op.GetMg() faces = op.GetAllPolygons() points = op.GetAllPoints() selection = op.GetPolygonS() selection.DeselectAll() for i,poly in enumerate(faces): a,b,c,d = poly.a,poly.b,poly.c,poly.d pta = points[a] ptb = points[b] ptc = points[c] ptd = points[d] v1 = pta-ptb v2 = ptb-ptc normal = v1.Cross(v2) normal.Normalize() normal = normal * mg center = center * mg is_vis = bd.BackfaceCulling(normal, center) if (is_vis == True): selection.Select(i) c4d.EventAdd() What could be wrong?
How can I determine if a polygon on a polygonal object is facing the current (user or editor) camera? I know there is a BackfaceCulling function but it always returns True. Great, I will do that. Since it is my serial protection code, I may have to edit things a bit, to protect my secrets. I will see if I can adjust the code to provide as much information as possible, without revealing too much, as soon as I get home. I still get the same error, even after adding the #include "maxon/string.h" I also replaced the DiagnosticOutput() with ApplicationOutput(), but the error persists. Actually, yes. Would it be better to use GePrint? Or was it discontinued? What is the best way to print to the Console now, in C++? I set the stylecheck.level to zero and the complaints about the indentation disappeared. I also tried that type of structure but the error still appeared, when the stylecheck.level was set to 3. Now I'm getting different types of errors.
https://plugincafe.maxon.net/user/rui_mac/posts
Functional factorization for matrices and tensors Project description FunFact: Build Your Own Tensor Decomposition Model in a Breeze FunFact is a Python package that aims to simplify the design of matrix and tensor factorization algorithms. It features a powerful programming interface that augments the NumPy API with Einstein notations for writing concise tensor expressions. Given an arbitrary forward calculation scheme, the package will solve the corresponding inverse problem using stochastic gradient descent, automatic differentiation, and multi-replica vectorization. Its application areas include quantum circuit synthesis, tensor decomposition, and neural network compression. It is GPU- and parallelization-ready thanks to modern numerical linear algebra backends such as JAX/TensorFlow and PyTorch. Quick start example: semi-nonnegative CP decomposition Install from pip: pip install -U funfact Package import: import funfact as ff import numpy as np Create target tensor: T = np.arange(60, dtype=np.float32).reshape(3, 4, 5); T Define abstract tensors and indices: R = 2 a = ff.tensor('a', T.shape[0], R, prefer=ff.conditions.NonNegative()) b = ff.tensor('b', T.shape[1], R) c = ff.tensor('c', T.shape[2], R) i, j, k, r = ff.indices('i, j, k, r') Create a tensor expression (only specifies the algebra but does not carry out the computation immediately): tsrex = (a[i, ~r] * b[j, r]) * c[k, r]; tsrex Find rank-2 approximation: >>> fac = ff.factorize(tsrex, T, max_steps=1000, nvec=8, penalty_weight=10) >>> fac.factors 100%|██████████| 1000/1000 [00:03<00:00, 304.00it/s] <'data' fields of tensors a, b, c> Reconstruction: >>> fac() DeviceArray([[[-0.234, 0.885, 2.004, 3.123, 4.243], [ 4.955, 5.979, 7.002, 8.025, 9.049], [10.145, 11.072, 12. 
, 12.927, 13.855], [15.335, 16.167, 16.998, 17.83 , 18.661]], [[20.025, 21.014, 22.003, 22.992, 23.981], [25.019, 26.01 , 27.001, 27.992, 28.983], [30.013, 31.006, 31.999, 32.992, 33.985], [35.007, 36.002, 36.997, 37.992, 38.987]], [[40.281, 41.14 , 41.999, 42.858, 43.716], [45.082, 46.04 , 46.999, 47.958, 48.917], [49.882, 50.941, 51.999, 53.058, 54.117], [54.682, 55.841, 56.999, 58.158, 59.316]]], dtype=float32) Examine factors: >>> fac['a'] DeviceArray([[1.788, 1.156], [3.007, 0.582], [4.226, 0.008]], dtype=float32) >>> fac['b'] DeviceArray([[-2.923, -4.333], [-3.268, -3.541], [-3.614, -2.749], [-3.959, -1.957]], dtype=float32) >>> fac['c'] DeviceArray([[-3.271, 3.461], [-3.341, 3.309], [-3.41 , 3.158], [-3.479, 3.006], [-3.548, 2.855]], dtype=float32) How to cite If you use this package for a publication (either in-paper or electronically), please cite it using the following DOI: Contributors Current developers: Previous contributors: FunFact Copyright (c) 2021. Funding Acknowledgment This work was supported by the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract No. DE-AC02-05CH11231. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
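The model fitted above, (a[i, ~r] * b[j, r]) * c[k, r], is the standard rank-R CP factorization: each entry of the reconstruction is a sum over the rank index r. Its forward calculation can be sketched in plain Python — this is an illustration of the math only, not FunFact's API:

```python
# Rank-R CP reconstruction: T[i][j][k] = sum over r of a[i][r] * b[j][r] * c[k][r]
def cp_reconstruct(a, b, c):
    R = len(a[0])  # number of rank-one components
    return [[[sum(a[i][r] * b[j][r] * c[k][r] for r in range(R))
              for k in range(len(c))]
             for j in range(len(b))]
            for i in range(len(a))]

# Tiny rank-1 example: the outer product of three vectors.
a = [[1.0], [2.0]]
b = [[3.0], [4.0]]
c = [[5.0]]
T = cp_reconstruct(a, b, c)
print(T)  # [[[15.0], [20.0]], [[30.0], [40.0]]]
```

This is exactly the computation the factorize() call inverts: it searches for factor matrices whose reconstruction is as close as possible to the target tensor.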
https://pypi.org/project/funfact/
std::sort, found in the standard library header algorithm, is a standard library algorithm for sorting a range of values, defined by a pair of iterators. std::sort takes as the last parameter a functor used to compare two values; this is how it determines the order. Note that std::sort is not stable. The comparison function must impose a Strict, Weak Ordering on the elements. A simple less-than (or greater-than) comparison will suffice. A container with random-access iterators can be sorted using the std::sort algorithm: #include <vector> #include <algorithm> std::vector<int> MyVector = {3, 1, 2}; //Default comparison of < std::sort(MyVector.begin(), MyVector.end()); std::sort requires that its iterators are random access iterators. The sequence containers std::list and std::forward_list (requiring C++11) do not provide random access iterators, so they cannot be used with std::sort. However, they do have sort member functions which implement a sorting algorithm that works with their own iterator types. #include <list> #include <algorithm> std::list<int> MyList = {3, 1, 2}; //Default comparison of < //Whole list only. MyList.sort(); Their member sort functions always sort the entire list, so they cannot sort a sub-range of elements. However, since list and forward_list have fast splicing operations, you could extract the elements to be sorted from the list, sort them, then stuff them back where they were quite efficiently like this: void sort_sublist(std::list<int>& mylist, std::list<int>::const_iterator start, std::list<int>::const_iterator end) { //extract and sort half-open sub range denoted by start and end iterators std::list<int> tmp; tmp.splice(tmp.begin(), mylist, start, end); tmp.sort(); //re-insert range at the point we extracted it from mylist.splice(end, tmp); }
https://www.notion.so/ec2e1b745c704e5f81f24b3358b4138f
Whether you have just started using Kendo UI or are already pretty proficient with our widgets and main concepts, you know as well as I do, as does anyone in our industry - things change. Every. Time. You. Blink. So in this blog post, I intend to show you some tips and tricks on how to make your users' experience more personal, more dynamic and most importantly - of a better quality. To deliver a personalized and improved experience, there are three key points to focus on when building a Kendo UI jQuery Grid with dynamic options: People love being in control so why not give it to them? Instead of decorating the Kendo UI Grid like a Christmas tree, with all the available features and functionalities, why not create some custom UI that allows the users to pick and choose? Not only will users be able to choose their configuration but it may bring them better performance because they will enable only what they will use. I like this point the best, because it is in line with the Extreme Programming principle You Ain't Gonna Need It (Y.A.G.N.I. for short). It is easy to forget that in the background a whole bunch of magic needs to take place, widgets initialized and handlers attached, when all one needs to type is reorderable:true. But why have a reorderable grid if it is not needed? One of the frequently asked questions about a Kendo UI Grid with dynamic options is: How to grant administrator rights to some users and deny them to others? The most straightforward way is to obtain the user role before creating the jQuery datagrid and, depending on the role, pass the desired configuration options. However, remember that user permissions should be handled on the server, so don't rely on the client user permissions alone. The Kendo UI Grid has a mobile* configuration option that makes working on smaller screens/touch enabled devices easier. The grid creates a separate view for editing and filtering the column menu, and enables native scrolling where possible.
If you have not seen our adaptable demos in action, you may do so here. They look the best on real mobiles and tablets but the browser's device mode can also give you a good idea. If you like the look and feel of the adaptive Kendo UI Grid, you can initiate it dynamically with the help of the nifty API of the kendo.support utility namespace. It can help determine the device type and OS version, the browser, scrollbar, transitions and transformations and other things which you may find helpful. *Before deciding whether to use the adaptive grid, visit the documentation. It may look like the web grid but it is quite different. The radio inputs store their values as JSON strings. Keep in mind that Boolean("false") === true, which is why JSON.parse is used to read the values instead: <input type="radio" name="selectable" id="selectable-multi" class="k-radio" value='"multiple,row"'> <label class="k-radio-label" for="selectable-multi">Multiple Selection</label> <input type="radio" name="selectable" id="selectable" class="k-radio" checked="checked" value="false"> <label class="k-radio-label" for="selectable">No Selection</label> So later you can obtain the radio selected options like this: var radioSelectedOptions = { selectable: JSON.parse($("[name='selectable']:checked").val()) }; The listbox data items have a text, the dataTextField the users see as the option, and a value, the dataValueField which matches the grid configuration options we will pass:
var listBoxDs = [{ text: "Column Menu", value: { columnMenu : true} }, { text: "Excel Export", value: { excel: { allPages: true } } }]; Next is the task to get the radio selected options and the listbox options and merge them. For example: var lbOptions = selectedOptions.dataItems() .toJSON() .reduce(function(optionsObj, item) { for (var key in item.value){ optionsObj[key] = item.value[key]; } return optionsObj; }, {}); var selectedGridOptions = kendo.deepExtend(lbOptions, radioSelectedOptions); Then we use the setOptions() method to change the options and reset the data source with the shorthand query() method: var grid = $("#grid").data("kendoGrid"); if(grid){ grid.dataSource.query({ filter: [], group:[], sort:[], page: 1, pageSize:20 }); grid.setOptions(mergedOptions); } else { $("#grid").kendoGrid(mergedOptions); } The setOptions() method makes an internal call to get the current options, then deep extends them with the new options. So if the user had set a pageable grid initially, then removed that setting, the pager will linger and will not go away unless you set it to false explicitly. Here is a list of the defaults used in the demo: var defaultOptions = { mobile: isMobile, toolbar: [], groupable:false, pageable:false, resizable:false, columnMenu:false, navigatable:false, reorderable:false, scrollable:true, persistSelection:false, sortable:false, dataSource: dataSource, height: 550, columns: [ { field:"ProductName", width: 200}, { field: "UnitPrice", title: "Unit Price", format: "{0:c}", width: 180 }, { field: "UnitsInStock", title: "Units In Stock", width: 150 }, { field: "Discontinued", width: 180 } ] } On mobile devices, multiple row selection is replaced with a selectable column.
if(isMobile && selectedGridOptions.selectable && selectedGridOptions.selectable === "multiple,row"){ selectedGridOptions.selectable = false; defaultOptions.columns.splice(0,0,{ selectable:true, width: 30 }); } if(selectedGridOptions.pdf){ defaultOptions.toolbar.push("pdf"); } if(selectedGridOptions.excel){ defaultOptions.toolbar.push("excel"); } if(!isMobile && selectedGridOptions.editable){ var editTools = ["create", "save", "cancel"]; defaultOptions.toolbar = defaultOptions.toolbar.concat(editTools); defaultOptions.columns.push({ command: "destroy", title: " ", width: 150 }); } // inline or popup editing provides better UX on a mobile if(isMobile && selectedGridOptions.editable){ selectedGridOptions.editable.mode = "inline"; selectedGridOptions.editable.confirmation = false; var editTools = ["create"]; defaultOptions.toolbar = defaultOptions.toolbar.concat(editTools); defaultOptions.columns.splice(0,0,{ command: ["edit", "destroy"], title: " ", width: 150 }); } Finally, before applying the new options, we reset the data source state: grid.dataSource.query({ filter: [], group:[], sort:[], page: 1, pageSize:20 }); I hope this blog will inspire you to look for ways to give your users a better and more personal experience of using the Kendo UI jQuery Grid. While the idea of "one size fits all scenarios and devices" seems like a fairy tale, we get one step closer by getting personal - using the information that we know - the user type, the device and browser they are using and allowing them to choose what they need. If there is anything in particular that you want our Kendo UI team to blog about, please mention it in the comments or in our Feedback Portal. We would love to hear from you. Alex Hajigeorgieva is a Technical Support Engineer II working on Kendo UI. She likes scuba diving, volleyball and studying new technologies.
https://www.telerik.com/blogs/dynamic-options-in-the-kendo-ui-grid
Awesome Array Driver - arenaudineau/AwesomeArray-PythonDriver Wiki Welcome to the AwesomeArray-PythonDriver wiki! Installation verification By following the quick installation guide from README, you should be able to import the library wherever you want: import aad Several errors can show up: Missing aad ModuleNotFoundError: No module named 'aad' The library has not been correctly installed. [...] Missing B1530driver ModuleNotFoundError: Failed to import B1530driver. Please make sure that the B1530driver files are in 'extlibs/B1530Driver'. The two licenced files B1530driver.py and B1530ErrorModule.py are missing, you must provide them in aad/extlibs/B1530Driver, aside the DRIVER_SCRIPTS_HERE file. Missing wgfmu DLL error occurs with wgfmu.dll wgfmu.dll failed to load, it must be in C:\B1530A_InstLib\. Alternatively, modify its path in aad/extlibs/B1530Driver/B1530driver.py with the variable wgfmu_link. If it still fails to load, then ¯\_(ツ)_/¯ Initialization driver = aad.AwesomeArrayDriver() Again, several errors may occur: Microcontroller not found Exception: µc not found, please verify its connection or specify its PID Make sure the microcontroller is connected. There are two USB port on the board: PWRis used to power and program it and is NOT used by the driver, it may be connected to the PC the driver is run from, or another one ; USERis used for serial communication and IS used by the driver, it must be connected to the PC the driver is run from ; By default, the driver is looking for a USB device with a Product IDentifier ( PID): 22336. For some reason the board may not have this one. You can either change it by reprogramming the microcontroller (in STM32Cube, in the project, ioc file, search for USB_DEVICE on the left panel, go down, Device Descriptor panel, PID (Product IDentifier)) or provide it to the driver: aad.print_ports() will show every device connected and its associated PID. 
(⚠️ This is NOT STMicroelectronics STLink Virtual COM Port) You can then use: driver = aad.AwesomeArrayDriver(uc_pid={correct_pid}) Microcontroller port already open serial.serialutil.SerialException: could not open port '{PORT}': PermissionError(13, 'Access is denied.', None, 5) The serial connection is already opened somewhere else, check that there are no other instances of AwesomeArrayDriver running (for example in another Jupyter Notebook; you may want to interrupt/restart the kernel) Lost microcontroller connection serial.serialutil.SerialException: WriteFile failed (PermissionError(13, 'The device does not recognize the command.', None, 22)) This error will not happen right after the initialization but can appear when using it. It means the USB cable has been unplugged or that the connection has timed out. B1530 not found Exception: -5: Error in WGFMU_openSession("GPIB0::18::INSTR"); viReadSTB returned -1073807304. Check the connection or that EasyEXPERT on the B1500A has been closed. B1530 already open Exception: -3: Error in WGFMU_openSession("GPIB0::18::INSTR"); A session has already been opened by WGFMU_openSession("GPIB0::18::INSTR"). The connection is already opened somewhere else, check that there are no other instances of AwesomeArrayDriver running (for example in another Jupyter Notebook; you may want to interrupt/restart the kernel) Usage For now, the driver configures the Awesome Array by default, and can only be used in CARAC mode. A memristor can be addressed by three parameters: col: its column index in the Test Array ; row: its row index in the Test Array ; bar: boolean to address the complementary memristor if True [False by default] On each of them you can perform the operations: form, set, reset and read. The first three do not return anything, the last one returns the memristor resistance in ohms (Ω). Words in the Awesome Array are 64 bits long (64x64 * 2 = 8192 memristors).
The constant aad.SR_WORD_SIZE is set to this value for more explicit code. Examples """ Forms each memristor ⚠️ it takes several **minutes** to run, the bottleneck being the calls to the B1530 underlying driver """ for col in range(aad.SR_WORD_SIZE): for row in range(aad.SR_WORD_SIZE): driver.form(col=col, row=row, bar=False) driver.form(col=col, row=row, bar=True) """Toggles the complementary memristor at column 6, row 42""" RES_LIMIT = 5e3 res = driver.read(col=6, row=42, bar=True) print("Before toggle:", res) if res > RES_LIMIT: driver.set(col=6, row=42, bar=True) else: driver.reset(col=6, row=42, bar=True) res = driver.read(col=6, row=42, bar=True) print("After toggle:", res)
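The toggle example above branches on the read resistance against a 5 kΩ limit; that decision can be factored into a small pure helper. This is illustrative only — the threshold value comes from the example, while the helper's name and the set/reset labels are made up, not part of the driver's API:

```python
RES_LIMIT = 5e3  # ohms; the threshold used in the toggle example above

def next_action(resistance_ohm, limit=RES_LIMIT):
    """Decide how to toggle a memristor given its read resistance.

    Above the limit the device is in its high-resistance state, so the
    next operation is a set; at or below the limit, a reset.
    """
    return "set" if resistance_ohm > limit else "reset"

print(next_action(12e3))   # set
print(next_action(1.2e3))  # reset
```

In the toggle snippet this corresponds to calling driver.set(...) when the helper returns "set" and driver.reset(...) otherwise.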
https://github-wiki-see.page/m/arenaudineau/AwesomeArray-PythonDriver/wiki/Awesome-Array-Driver
Minimize the sum of minimum and second minimum elements from all possible triplets Introduction This blog will discuss the problem of finding the minimum possible sum of the minimum and second minimum elements of all possible triplets in an array. Problem Statement We are given an array having n elements. Our task is to minimize the sum of minimum and second minimum elements from all possible triplets from the given array. A single element of the given array can only be a part of one triplet. Sample examples Input - 1: a[] = [ 4, 5, 6, 3, 1, 7, 9 ] Output - 1: 13 Explanation: There are only two possible triplets as there are only 7 elements in the given array. Triplet1: (1, 3, 9) Triplet2: (4, 5, 7) Sum of minimum and second minimum elements of all possible triplets = 1 + 3 + 4 + 5 = 13 Input - 2: a[] = [ 4, 5, 6, 7, 8, 9, 1, 2, 3 ] Output - 2: 21 Explanation: There are 3 possible triplets as there are 9 elements in the given array. Triplet1: (1,2,9) Triplet2: (3,4,8) Triplet3: (5,6,7) Sum of minimum and second minimum elements of all possible triplets = 1 + 2 + 3 + 4 + 5 + 6 = 21 Approach The idea is very simple: count the possible number of triplets from the given array. Now, as we need to minimize the sum of the minimum and second minimum elements for every possible triplet, we calculate the sum of the first k minimum numbers in the given array, where k = 2 * number of possible triplets. This works because the k smallest elements can always be made the minimum and second minimum of the triplets: pair each two of them with one of the larger remaining values. To find the sum of the first k minimum elements in the given array, sort the array in non-decreasing order and find the sum of the first k elements. Steps of algorithm - Calculate the possible number of triplets and store it in the possible_triplets variable, where possible_triplets = n/3. - Declare a variable k and set k = 2 * possible_triplets. - Now, sort the given array, sort(a, a+n). - Declare a variable ans and initialize it with 0. - Finally, calculate the sum of the first k elements in the given array and store it in the ans variable. - Return the value of the ans variable.
Let’s understand the above approach with an example: Given array: - possible_triplets = n/3 = 7/3 = 2 - k = 2*possible_triplets = 2*2 = 4 - Now, sort the given array. - ans = 0 - Calculate the sum of the first k elements. - ans = 1 + 3+ 4 + 5 = 13 Finally, return the value of ans variable. Implementation in C++ #include<bits/stdc++.h> using namespace std; int minm_sum(int a[], int n) { int possible_triplets = n / 3; // possible number of triplets int k = 2 * possible_triplets; sort(a, a + n); // sorting the given array int ans = 0; for (int i = 0; i < k; i++) //first k minimum elements sum { ans = ans + a[i]; } return ans; } int main() { int a[] = { 4, 5, 6, 8, 7 , 9, 1, 2, 3 }; //given array int n = sizeof(a) / sizeof(a[0]); // size of array cout << minm_sum(a, n); return 0; } Output: 21 Complexity Analysis Time complexity: We are using the sort function, so the time complexity is O(n*logn), where n is the size of the given array. Space complexity: O(1) as we have used constant extra space. Frequently asked questions Q1. Is STL's sort stable? Ans: Introsort is typically used by the sort() function. As a result, sort() may or may not preserve the physical order of semantically equivalent values. Mergesort is usually used by the stable sort() function. As a result, stable sort() guarantees that the physical order of semantically equivalent values is preserved. Q2. What is the C++ STL sort's asymptotic complexity? Ans: The sorting algorithm is not specified in the language standard and may vary between implementations, but the function's worst-case asymptotic complexity is: when applied to a range of N elements, a call to sort must perform O(N log N) comparisons. Q3. In C++, how do you find the smallest value in an array? Ans: One variable, minElement, should be set to the first element of the input array. Traverse the input array from index 0 to N -1 using a loop and compare each element to minElement. 
Update minElement with the current element if the current element is less than minElement. Key Takeaways This article discussed palindrome and the approach to minimize the sum of minimum and second minimum elements from all possible triplets with examples for a better understanding and its C++ code. If you are a beginner, interested in coding and want to learn DSA, you can look for our guided path for DSA, which is free! Thank you for reading!
https://www.codingninjas.com/codestudio/library/minimize-the-sum-of-minimum-and-second-minimum-elements-from-all-possible-triplets
CC-MAIN-2022-27
en
refinedweb
Building a responsive scheduler in Angular

Scheduling is a task that we do both for personal reasons and for work. Let’s see how to quickly set up weekly and daily schedule views in Angular that look good on mobile and desktop screens alike.

Prerequisites
The minimum required version is Angular 4 or newer (including the latest release). You will have to install Mobiscroll in your project for the examples to work. For this project we’ll be using a simple Angular CLI app. If you already have Mobiscroll installed, you can skip this step; otherwise follow this two-minute guide.

Install Mobiscroll
1. Set up your account: If you don’t use the licensed version or don’t have a running trial, you can start one for free. Make sure to remember your email address and set a password.
2. Install the Mobiscroll CLI from npm: Run the following command in the terminal. npm install -g @mobiscroll/cli
3. Install Mobiscroll: Run the config command in the root folder of your Angular app. If you don’t have one at hand, create a new app with the Angular CLI. mobiscroll config angular --version=5

When installing Mobiscroll you will be prompted to log in. Use the email address and password you’ve set up in the first step. Now that you have Mobiscroll installed, let’s set up the scheduler. There are three pieces to it: the Component, the Template and the Module.

Component
To get started, make sure to import MbscEventcalendarOptions and MbscCalendarEvent: import { MbscEventcalendarOptions, MbscCalendarEvent } from '@mobiscroll/angular'; After the import we will set up the options object that we’ll pass to the event calendar template. Alternatively, the options can be set inline when writing the template: <mbsc-eventcalendar [view]="{schedule: { type: 'week' }}" [clickToCreate]="true" [dragToCreate]="true" [dragToMove]="true" [dragToResize]="true"> </mbsc-eventcalendar> Now let’s dig into the options we pass.
- Setting the theme: The theme can be ios, material, windows or auto, where auto sets the theme based on the system. The themeVariant can be light, dark or auto, where auto sets the variant based on the system settings. Learn more about theming and play with the different settings.
- Configuring drag & drop: The four options clickToCreate, dragToCreate, dragToMove and dragToResize provide granular control over the drag & drop experience. See how turning the different options on and off influences the calendar.
- Setting up responsiveness: For a weekly schedule we’d only need to set the view option like view: { schedule: { type: 'week' } }; however, in our current example we are setting up a responsive behavior that switches between a weekly scheduler and a daily scheduler based on screen width. With the configuration above you’ll get a weekly scheduler on screens with a width of 600px and up. For smaller screens a daily schedule will be shown.

After the responsive view is configured, we will need to load some events. You can see a couple of different examples and sources here, but for now we’ll use a demo API and load the events into the myEvents array, which we’ll later pass in the template.

myEvents: MbscCalendarEvent[] = [];

ngOnInit(): void {
    this.http.jsonp<MbscCalendarEvent[]>('', 'callback').subscribe((resp) => {
        this.myEvents = resp;
    });
}

The API call will return an array of event objects that can be passed directly to the event calendar. start, end, text and color are base properties. Besides these, any other property can be passed and parsed.

{ "start": "2021-03-30T07:00:00.000Z", "end": "2021-03-30T08:00:00.000Z", "text": "Product team mtg.", "color": "#f67944" }

Learn more about the event object.

Template
The actual event calendar needs to be added to the template with all of its options. <mbsc-eventcalendar [data]="myEvents" [options]="eventSettings"></mbsc-eventcalendar> myEvents is passed to the data option while eventSettings is passed to options. Both variables are defined in the Component.
Alternatively, you can set the options individually in the template.

Module
Depending on your setup you will need to import the modules you’ll use in the accompanying module for your component. import { MbscModule } from '@mobiscroll/angular'; import { HttpClientModule, HttpClientJsonpModule } from '@angular/common/http'; The MbscModule is for Mobiscroll, and the HttpClientJsonpModule is for loading the events from an API using JSONP.

CSS
CSS is optional; it is not necessary to write any CSS for the scheduler to render correctly. There is, however, one thing we will add in order to get the desired outcome: making the scheduler fill the entire screen. For that to work, we’ll need to set all parent containers of the event calendar to height: 100% (including body and html). Since we started out with a basic Angular CLI app, we’ll also remove the margins from the top container, so our CSS will look something like this:

body, html {
    height: 100%;
    margin: 0;
}

The result
After all of that your component should look something like this 👇 And this is what your code should look like 👇

import { Component, OnInit } from '@angular/core';
import { MbscEventcalendarOptions, MbscCalendarEvent } from '@mobiscroll/angular';
import { HttpClient } from '@angular/common/http';

@Component({
    selector: 'app-root',
    templateUrl: './app.component.html'
})
export class AppComponent implements OnInit {
    constructor(private http: HttpClient) {}

    myEvents: MbscCalendarEvent[] = [];
    eventSettings: MbscEventcalendarOptions = { /* … */ };

    ngOnInit(): void {
        this.http.jsonp<MbscCalendarEvent[]>('', 'callback').subscribe((resp) => {
            this.myEvents = resp;
        });
    }
}

<mbsc-eventcalendar [data]="myEvents" [options]="eventSettings"></mbsc-eventcalendar>

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule, HttpClientJsonpModule } from '@angular/common/http';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { MbscModule } from '@mobiscroll/angular';
import { AppComponent } from './app.component';

@NgModule({
    declarations: [ AppComponent ],
    imports: [ BrowserModule, MbscModule, FormsModule, ReactiveFormsModule, HttpClientModule, HttpClientJsonpModule ],
    bootstrap: [AppComponent]
})
export class AppModule { }

Useful links
Mobiscroll scheduler for Angular:
Interactive demo of the responsive scheduler:
Angular templates and Admin Dashboards: 👈 If you are looking for templates to save you some serious time, check these guys out.
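As a language-neutral aside (not part of the original post), the responsive rule described above boils down to a single threshold check: week view at 600px and wider, day view below. Sketched in Python, with the function name purely illustrative:

```python
def schedule_type(screen_width_px):
    # 600px is the breakpoint used in the article's configuration:
    # weekly scheduler on wide screens, daily scheduler on narrow ones.
    return 'week' if screen_width_px >= 600 else 'day'

print(schedule_type(1024))  # → week
print(schedule_type(375))   # → day
```

In the real app, Mobiscroll evaluates this kind of breakpoint logic for you as the viewport resizes.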
https://blog.mobiscroll.com/building-a-responsive-scheduler-in-angular/
CC-MAIN-2022-27
en
refinedweb
I need to be able to time very short loops accurately (as in "see how long it takes to execute". I DON'T want a short-interval timer. There's a big difference). This is important in my code, because I'm writing a DLL that's a plug-in to a real-time multitrack HD audio recorder. The plug-in provides audio processing functions to the main program. A single file sometimes gets to 50MB, and this program works with multiple such files. You can see how a slow loop that operates on each sample can severely bring down the system. Anyway, I tried various ways to time the loops accurately, but haven't found anything accurate enough. The best I found is to use QueryPerformanceCounter(), which gives me a timing resolution of 838ns, quite small steps. The problem is not with the timer's resolution, but with the fact that every time I do the timing, I get different results. This varies from day to day, and appears to be connected with what mood the OS is in on that day, and also the position of the moon. I try to make the timing intervals small, so that my code isn't pre-empted before the timing period is over. I also tried putting a Sleep(0) or Sleep(10) call just before I start timing, to make sure the likelihood of my code being pre-empted is small. I also boost the PriorityClass and ThreadPriority to their max levels just before the timing starts, and restore them just after. These techniques do help to some extent, but I still get results that vary as much as 35% from one run to the next, and from day to day. Obviously I can't trust this to see whether one version of my code is 10% faster than another. It appears to me as if there are some low-level hardware interrupts happening that I have no control over. I also looked at the "Zen Timer", but it appears that it's for DOS only. So, my simple question is: how can I actually know how many cycles MY code takes to complete?
If I knew that, it wouldn't matter if my code was interrupted, because the cycle count would be for my code only. Imagine this feature in VC++: you compile your code with debug info on, but as a release build. Then you start debugging and place a breakpoint just before the code you want to time. When you start single-stepping, you open a debug window (just like the Watch, Memory, etc.), and there you have some options as to what type of CPU you want to simulate, and you can also reset the cycle counter. Then, as you single-step (or run up to the next breakpoint), the debugger adds cycles to the counter, depending on the selected CPU. I see no reason why it can't do this. It already knows the assembly instructions; it just needs to be taught how long each one takes, as well as about pairing, overlapped instructions, etc. This would make it SO easy to time critical code, because you could know EXACTLY how many cycles a piece of code will take to execute, as well as simulate running it on different CPUs. When can we expect such a cool feature in VC++? Anyone have any idea? And why would it NOT be possible to do it? Would it be possible to add a third-party plug-in to VC++ to do this, and are there any available at this point? Any insight, advice, comments welcome. Steven Schulze Concord, CA

> Then, as you single-step [...], the debugger adds
> cycles to the counter

You have correctly summarized the problems inherent in timing small sections of code, but I don't think your fix will work. The time to access a *single* aligned DWORD may vary from an apparent zero cycles (if run in parallel with, say, an FP instruction, with a memory barrier following) to some 10 million (!) cycles (cache miss, TLB miss, PTE faulted in from disk, data page faulted in from disk). Unless you run this on a box with no VM, you are out of luck. Oh, and I hope you unplugged the network card.
Even if we ignore all this, the debugger won't be able to accurately keep track, as a single-step or other sort of interrupted flow of execution means that the instruction queue is empty when your code is resumed. On the other hand, in real life, your DLL will also run in a real, busy, system ... so maybe it would make more sense to run a test series under heavy load and _then_ measure the total execution time, to arrive at the slack time left on that config under such and such a load. In that case, just run the code long enough to average out the asynchronous nature of interrupts and the like. -- Cheers, Felix. If you post a reply, kindly refrain from emailing it, too. Note to spammers: fel...@mvps.org is my real email address. No anti-spam address here. Just one comment: IN YOUR FACE! >I need to be able to time very short loops accurately Did you try GetThreadTimes? This should give you the amount of time that the thread spent executing, no matter how many times it was preempted by other threads. Now, I'm not sure who gets charged by NT for time spent in hardware interrupts, or exactly how and when NT charges the time (it should be doing it either at the end of a quantum, or when the thread goes to sleep/wait state), but I think you ought to try this function out and see if the results are more stable. Hope that helps. --------------------------------------------------------------------------- Dimitris Staikos dsta...@unibrain.com (Business mail only please), dsta...@softlab.ece.ntua.gr (Personal mail), ICQ UIN 169876 Software Systems & Applications Chief Engineer, UniBrain, --------------------------------------------------------------------------- Any sufficiently advanced bug is indistinguishable from a feature. Some make it happen, some watch it happen, and some say, "what happened?" How can we remember our ignorance, which our progress requires, when we are using our knowledge all the time? 
How a man plays a game shows something of his character; how he loses shows all of it.
---------------------------------------------------------------------------

Yes, I understand all this, but the idea is to simulate a "perfect" machine, so that what you see is the best possible performance of your code under the best possible conditions. This way, you can concentrate on your code to get ITS cycle count as low as possible. There's nothing you can do in your code to guard against a noisy system, but if you can get YOUR OWN code optimized as well as possible, then you've done all you can. BTW, I have a beta copy of Intel's VTune 3.0, and it actually has a similar feature, where it will show you in detail a selected range of code's cycle time, pairing, penalties, etc. It makes some basic assumptions, such as that the data is already in the cache, and then does a simulation on the CPU of your choice. Unfortunately it's a little buggy, and it's REALLY slow. And it's a pain to switch between VC++ and VTune for every change you make in your code (about a 10-minute turnaround on my computer). The fact that it's able to give me this kind of info in the first place tells me it's definitely possible. Also, you can give a listing of your disassembled code to an assembly programming guru, and in a short time he can tell you: "Under perfect conditions, this code will take xxx cycles to execute". Why can't the debugger do the same for me? Steven Schulze Concord, CA

Excellent suggestion! While this is definitely a good way to go, unfortunately I'm using W98, and, of course, it's not available under W98 :( Oh well... But thanks for telling me about this function. I was unaware of it. Sometime in the near future I should switch to NT, and then I can start playing with it. Until then, I guess I'm stuck in W98-land... Steven Schulze Concord, CA
> Why can't the debugger do the same for me? The debugger can't do it because there are not enough people asking for just that. And while I do not claim any honorific (except "slob", maybe), I can tell you that I hate cycle-counting. Passionately. I'd rather have a real result, with imprecision, from a profiler than a manual count from me. :-\ > Now, I'm not sure who gets charged by NT for time spent in hardware > interrupts I fear this won't help, as the time spent in ISRs and such is charged to the thread currently running. (Not 100% sure, though.) For a definitive answer, we lean back and wait if Jamie H. notices us. :-) If u r using a pentium or above chip, then you may want to use an opcode which just does what you need, it gets a cycle count from the chip. Lookup on a RDTSC in a pentium assembly coding guide. Here is some code which emits the correct assembly code. It compiles with VC 5.0 but i am sure you can modify it to work with any other compiler. /* cl -W3 -O2 -Ox rdtsc1.c */ #include <stdio.h> #define RDTSC __asm _emit 0x0f __asm _emit 0x31 static __inline unsigned __int64 get_clock () { unsigned long lo; unsigned long hi; _asm { RDTSC mov lo,eax mov hi,edx } return (((unsigned __int64) hi)<<32) + lo; } int main() { unsigned __int64 t0, t1; t0 = get_clock (); t1 = get_clock (); printf ("Cycles elapsed = %I64d\n", t1 - t0); return (0); } -bobby Steven Schulze wrote in message ... So do I, which is why I'd love such a feature. >Passionately. I'd >rather have a real result, with imprecision, from a profiler than a >manual count from me. :-\ Yes, but while it might be ok for some people, other people might need more precise info, since their projects might require it. BTW, I was thinking - the compiler DEFINATELY already know this info, since it needs it to do the optimizations. Why can't we be privy to this info as well, if we need it? Steven Schulze Concord, CA Thanks, I'll DEFINITELY look into. 
BTW, does the RDTSC use the same clock as QueryPerformanceCounter()? Steven Schulze Concord, CA

I believe you will still have some accuracy problems using this (RDTSC) instruction due to the context switches that can occur while you are timing code. I'm not sure how much this helps, but I recall reading in one of the programming journals (WDJ, MSJ or WinTech) about a VxD someone wrote to monitor context switches, so that you could account for the number of CPU cycles executed out of context from your timed code. The VxD allowed you to register your RDTSC counter variable and thread ID with it. The VxD would then subtract from your counter variable the number of CPU cycles executed outside of your thread's context. Perhaps someone else remembers this article more specifically. You might also want to take a look at for other issues that can affect the accuracy of this instruction. Hope this helps, -- Ian MacDonald

I have somewhat of an answer to this problem. What I did (using QueryPerformanceCounter()) was write a class that has the functions Reset(), Start(), Stop() and Show(). First you call Reset(), which resets all members; then you run your code multiple times (I run it up to 1000 times), and every time you first call Start(), then do the code, then call Stop(). The class then adds this time to the total time, as well as keeping track of the fastest and the slowest run. When you then call Show(), it shows the fastest, slowest and average times. This gives pretty good results, because there's GOT to be at least one run where the code wasn't pre-empted. Also, I try to keep the segments of code I time pretty short, so that it's possible to get through them without pre-emption at least once. The problem is simply that I think QueryPerformanceCounter is flaky. Sometimes I get a time of 0 (even WITH my code in between Start() and Stop()), which should be impossible, given that the resolution is so small (838ns).
It's almost as if the counter itself is updated from software that needs to be pre-empted first, although the info I have on it suggests it's a hardware item. But by combining the RDTSC with the method I describe above, I might get satisfactory results. Steven Schulze Concord, CA

Note that perfect conditions are: 1. No other code running while the test code is running. 2. No caches enabled. 3. No other hardware process, like DMA or refresh, working. 4. No weird programming tricks being done in interrupt service routines. In other words, it's not a real number, just a guess. The only way to make these decisions is to start with the cycle counts, program up a test case, and TEST IT! This will still be only an approximation of what happens when it gets out in the field.

RDTSC uses the clock speed of the processor, so if you are using a 400MHz processor, each cycle would be 2.5 nanoseconds. What I use this most for is measuring cycles of assembly code. Also, you may want to subtract the overhead of the RDTSC call itself so that it does not affect your measurements; to do this, call RDTSC twice in a row and make a note of the cycles elapsed: this is the overhead.

>I believe you will still have some accuracy problems using this (RDTSC)
>instruction due to the context switches that can occur while you are
>timing code.
>

To avoid context switches as much as possible, do a yield before you start timing an assembly code section. This ensures that you have a fresh timeslice from the OS when you wake up. -bobby

I think my original point is being lost here... Here's my main point. If the Cycle Guru can tell me: "This code of yours will execute in 755 cycles under perfect conditions", I can then rewrite my code and ask him again, and he says: "Now your code will execute in 621 cycles under perfect conditions". Now, that kind of info can help me immensely to speed up my code.
See, I don't need to worry about DMA interrupts, etc., because if I can get my code to perform as fast as possible under "perfect" conditions, I know it'll also perform better than my original code on a noisy system. The problem with "TEST IT!", as you put it, is that I can't get an accurate reading of how long MY code takes to execute, because the results vary too much. I can't make informed decisions as to whether version A or version B of my code is faster, for that very reason. BTW, Intel has an "Optimizing Tutor" that shows you how to optimize code, and shows how many cycles a certain version of code will take to execute vs. another version. Obviously this is a legitimate way to analyze code, but you insist that it is not. I wish I had a way to see my code in the same way as Intel shows in its examples. That's what I'm saying. Also, tell me, how do the people who hand-optimize code figure out which version of code will run faster? Steven Schulze Concord, CA

<snip>
> Also, tell me, how do the people who hand-optimize code figure out which
> version of code will run faster?

Steve, there is an ancient rule of thumb that suggests that 10% of an application's code will take 90% of the CPU time. My rule for optimization is to wait until the end of development; then, if the code executes too slowly for comfortable use, instrument the code with some performance tool, find the 10%, hand-optimize it, and repeat the process until the code executes "fast enough". If there is no 10% (or 20%), then there may be some basic problem in the design. I'm afraid that the thread is simply saying you can't get exactly what you want in an absolute sense. You can get what you want in a relative sense. IMO Gus Gustafson gus...@gte.net

I know EXACTLY where time is spent in my code. It's the loop that executes 50 million times for a 50-million-byte file. I can pinpoint it down to 7 lines in my code.
Also, FOR THIS SPECIFIC application, there's no "fast enough", there's only "faster is better". It's an application that processes multiple audio files (up to 44), each with a size of up to about 50MB for 5 minutes of audio (that's the extreme case, though). It's a program that tries to do stuff in real time. Any time you have the slightest performance bottleneck, you could lose the ability to process one or two more of those 44 files in real time. So, this isn't a simple case of having the spell-checker take 5 instead of 6 seconds to finish (how nice); this is a REAL case where speed makes a difference in how the program can be applied.

>I'm afraid that the thread is simply saying you can't get exactly what you want
>in an absolute sense. You can get what you want in a relative sense.

BTW, as I asked before, how does the compiler make its decisions as to which version of your code will execute faster when it does its optimizations, since it's not really executing your code to make a "relative" decision, as you say? How on earth does it do it, since any cycle counting on assembly code is being shot down as unrealistic? Why should I not be able to look at my code and make conclusions based on information about cycle times, just as the compiler does (yes, I know about pairing, penalties, etc.)? Or are we as programmers simply not able to work down to this level anymore? I guess so. Steven Schulze Concord, CA

Steven, why don't you share those 7 lines of code with us? We can trade messages forever, but your code won't get any faster this way! If you are shy, or those seven lines are very dear, try the book "Zen of Code Optimization" by Michael Abrash.

Consider this an FYI which I offer in case any of this is new to you. If not, please ignore it; I'm not trying to heighten your exasperation. You can get a listing of the machine instructions generated by the compiler with the /FA switch.
There used to be a time when this listing didn't take optimizations into account; I don't know if that is still true. If so, you should be able to look at the machine instructions with a debugger. Intel's processor manuals list the number of clock cycles an instruction takes. MS reprinted that info with the copy of MASM I bought several years ago. As it is now fashionable to exclude printed docs under the guise of saving forests, I'm not sure that the assembler includes the info any more. The trouble with the info is that it is not all simple. The number of clock cycles an instruction takes depends on the processor, the addressing mode and perhaps even the value of the arguments. For example, if I am reading the table correctly, the integer multiply instruction (IMUL) takes from 13 to 42 clocks on a 486 using double-word operands. Regards, Will

Also, I'm sure later versions of MASM can generate the processor timing information on a listing, so I guess that by assembling the compiler-generated assembler output with MASM, you could get the timing information on a listing. It's a bit long-winded, so it's not something you'd want to do very often. Dave ---- Address is altered to discourage junk mail. Please post responses to the newsgroup thread; there's no need for follow-up email copies.

IMHO it is not true. If you get your code to run 100 cycles faster, it will not matter in a system where all the other threads may take several million cycles to execute. It is just a drop in the ocean. I think you'd better develop your code in a way that no other applications are permitted to run (in case you do it under Win95). This will save you some valuable time. Yan

Actually, that was a generalization. I have about 40 - 50 such small functions, varying from about 4 to 100 lines of code. BTW, I've been able to write a class now that uses the RDTSC to measure elapsed cycles. While it's not perfect, it's pretty good.
I can get repeatable results (which is what I need) down to less than 0.1% (after doing multiple runs and taking the minimum result). Not bad. Steven Schulze Concord, CA

No, it's not. If the LOOP takes 700 cycles for one version of the code, and 600 cycles for a different version (doing the same thing), then that's a 17% improvement for that specific loop. Now, if my program spends an awful lot of time in that loop (as my code does, processing a large file of audio data), then it's a SIGNIFICANT improvement. I'm well aware that trying to optimize the WHOLE program is fruitless, but given what my program does, it can benefit a lot from optimizing the small sections of code that the program probably spends 95% or more of its time in (while processing). Boaz Tamir.

Ian MacDonald wrote:
>
> In article <#U7t9OGt...@uppssnewspub05.moswest.msn.net>,
>
> I believe you will still have some accuracy problems using this (RDTSC)
> instruction due to the context switches that can occur while you are
> timing code.
>

May '95!? Holy smokes. It seems like just yesterday. I guess it just goes to show that you should never throw away any of your old magazines. Thanks for remembering. -- Ian
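[Editorial aside, not part of the thread.] The measurement strategy that emerged here (time many runs and trust the minimum, since at least one run was probably not pre-empted) translates naturally to modern APIs. Below is a hedged Python sketch using time.perf_counter, which on Windows is backed by QueryPerformanceCounter; the Reset/Start/Stop/Show interface mirrors the poster's description, while the class name and everything else are illustrative:

```python
import time

class LoopTimer:
    """Accumulate many timed runs and report min/max/average,
    following the Reset/Start/Stop/Show scheme described in the thread."""

    def reset(self):
        self.times = []

    def start(self):
        self._t0 = time.perf_counter()

    def stop(self):
        self.times.append(time.perf_counter() - self._t0)

    def show(self):
        # The minimum is the most trustworthy number: at least one run
        # was probably not interrupted by the scheduler.
        print("fastest %.9fs  slowest %.9fs  average %.9fs" % (
            min(self.times), max(self.times),
            sum(self.times) / len(self.times)))

timer = LoopTimer()
timer.reset()
for _ in range(1000):
    timer.start()
    sum(range(100))  # stand-in for the code under test
    timer.stop()
timer.show()
```

For comparing two versions of a hot loop, comparing their minimum times over many runs is far more stable than comparing single measurements.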
https://groups.google.com/g/microsoft.public.win32.programmer.kernel/c/Qh3k8bxath8
CC-MAIN-2022-27
en
refinedweb
Embedded Models

Django MongoDB Engine supports MongoDB’s subobjects, which can be used to embed an object into another. Using ListField and DictField it’s already possible to embed objects (dicts) of arbitrary shape. However, EmbeddedModelField (described beneath) is a much more comfortable tool for many use cases, ensuring the data you store actually matches the structure and types you want it to be in.

The Basics

Let’s consider this example:

from djangotoolbox.fields import EmbeddedModelField

class Customer(models.Model):
    name = models.CharField(...)
    address = EmbeddedModelField('Address')
    ...

class Address(models.Model):
    ...
    city = models.CharField(...)

The API feels very natural and is similar to that of Django’s relation fields.

>>> Customer(name='Bob', address=Address(city='New York', ...), ...).save()
>>> bob = Customer.objects.get(...)
>>> bob.address
<Address: Address object>
>>> bob.address.city
'New York'

Represented in BSON, Bob’s structure looks like this:

{
    "_id": ObjectId(...),
    "name": "Bob",
    "address": {
        ...
        "city": "New York"
    },
    ...
}

While such “flat” embedding is useful if you want to bundle multiple related fields into one common namespace – for instance, in the example above we bundled all information about a customer’s address into the address namespace – there’s a much more common use case for embedded objects: one-to-many relations.

Lists of Subobjects (One-to-Many Relations)

Often, lists of subobjects are superior to relations (in terms of simplicity and performance) for modeling one-to-many relationships between models. Consider this elegant way to implement the Post ⇔ Comments relationship:

from djangotoolbox.fields import ListField, EmbeddedModelField

class Post(models.Model):
    ...
    comments = ListField(EmbeddedModelField('Comment'))

class Comment(models.Model):
    text = models.TextField()

Embedded objects are represented as subobjects in MongoDB:

>>> comments = [Comment(text='foo'), Comment(text='bar')]
>>> Post(comments=comments, ...).save()
>>> Post.objects.get(...).comments
[<Comment: Comment object>, <Comment: Comment object>]

{
    "_id": ObjectId(...),
    ...
    "comments" : [
        {"text": "foo"},
        {"text": "bar"}
    ]
}

Generic Embedding

Similar to Django’s generic relations, it’s possible to embed objects of any type (sometimes referred to as “polymorphic” relationships). This works by adding the model’s name and module to each subobject, accompanying the actual data with type information:

{
    "_id" : ObjectId(...),
    "stuff" : [
        {"foo" : 42, "_module" : "demoapp.models", "_model" : "FooModel"},
        {"bar" : "spam", "_module" : "demoapp.models", "_model" : "BarModel"}
    ]
}

As you can see, generic embedded models add a lot of overhead that bloats up your data records. If you want to use them anyway, here’s how you’d do it:

class Container(models.Model):
    stuff = ListField(EmbeddedModelField())

class FooModel(models.Model):
    foo = models.IntegerField()

class BarModel(models.Model):
    bar = models.CharField(max_length=255)

Container.objects.create(
    stuff=[FooModel(foo=42), BarModel(bar='spam')]
)
http://django-mongodb-engine.readthedocs.io/en/latest/topics/embedded-models.html
CC-MAIN-2018-13
en
refinedweb
Core code to create objects with XStream:

XStream sm = new XStream();
sm.autodetectAnnotations(true);  // automatic annotation detection
sm.toXML(model, file);           // write an object to a file

To read the object back from XML, note that you must first create the POJO and register its alias.

XML handling in AS3 is enhanced many times over AS2: retrieving a node is very convenient, much like working with JSON. A power of XML is that a linear string of text can carry complex, nested, hierarchical data.

Some operations on XML in Ruby via MSXML:

require 'xml_parse.rb'
docsvnxml = MSXML::XmlDocument.new()
docsvnxml.load_string("<Root/>")
docnodexml = docsvnxml.root
strQuerys = "//svnPathsXML[@taskUri='" + taskUri + "']"
taskNode = ...

December 7 - An introduction to operating on XML in Flex, starting with basic terms. Element: the piece of XML between a start tag and its end tag. Nodes: XML elements and text nodes, collectively.

November 30 - The flash.utils package offers package-level functions for timing code execution, retrieving information about classes and objects, and converting escape characters. For example, clearInterval(id:uint):void cancels a specified setInterval() call.

November 22 - VisualRules is divided into development, deployment/integration, and operating platforms. For compatibility, configured rule packages are currently compiled into generated code and JSP pages.

November 11 - Previously I used Digester to convert XML into a JavaBean; here the XML file is in a fairly standard format, and many of the returned objects are wrapped into a collection:

<result>
  <resultInfoList>
    <resultInfo>
      <refN...

October 15 - Removing a known XML node from its parent in AS3:

function deleteNode(node : XML) : void {
    delete node.parent().children()[ node.childIndex() ];
}

Or removing all matching nodes:

delete xml..*.(id == "test")

September 28 - Converting XML data into a JavaBean with Castor (CastorXmlToBeanTest.java, package com.wj.castor).

September 21 - Converting a JavaBean into XML with Castor (CastorBeanToXmlTest.java, package com.wj.castor).

September 21 - Appending a child node in ActionScript 3:

var jid = "skyoo2007";
var xml:XML = <root><info></info></root>;
xml.info.appendChild("<blog>http://" + jid + ".javaeye.com</blog>");
trace(xml);
// Output: <root> <i...

September 11 - RESTful Web Service (3): using Ajax to create the client side. The previous article used Eclipse to create the web service. Following REST, every resource has a unique URI; in the REST web service explorer you can see each one.

From blog.flexexamples.com: an article on using Flex 4's new features for advanced text rendering.

July 21 - Entities: the entity tag contains the XML for an IEntity object. An IEntity is essentially a named container for a collection of components, so the entity tag supports a single child tag, appropriately named component, which can appear any number of times.

July 11 - Tomcat 5 startup sequence, step 1: start from the command line via org.apache.catalina.startup.Bootstrap, which sets up the classloaders: commonLoader (common) -> System Loader; sharedLoader (shared) -> commonLoader -> System Loader; catalina...

July 10 - The XML data to be parsed:

<?xml version="1.0" encoding="UTF-8"?>
<persons>
  <person>
    <name>Li</name>
    <age>30</age>
  </person>
  <person>
    <name>Li Xia...

June 29 - ExpandoObject: instances can add and remove members at run time.

April 30 - To load a set of data, ExtJS introduces the concept of Ext.data.Reader. A Reader has a single purpose: parsing data. ExtJS supports different data formats, each needing a different data parser, and the Reader assumes this role.

April 19 - Expanding all nodes of a Flex tree:

// Expand all
private function ExpandAll() : void {
    // TreeMenu.dataProvider is the data source bound to treeMenu (an XMLList)
    for each (var item:XML in treeMenu.dataProvider)
        treeMenu.expandChildrenOf(item, true);
}

April 19 - Traversing an XML object to locate a node:

public static function Traversal(xml:XML, target:String):void {
    var list:XMLList = xml.children();
    for each (var xmlChildren:XML in list) {
        if (xmlChildren[...] == target) {
            // operate on the located node
            ...
            break;
        }
        ...

April 1 - With the class packages loaded as above, the main task is now to load server.xml, with its configuration and analysis; the most notable part is the initialization of each container (Server, Service, Engine, Host, Context).

March 19 - Based on the article above, a simple analysis of server.xml parsing in org.apache.catalina.startup.Catalina#load. Digester uses SAX to parse server.xml; this step initializes and creates the objects.

March 19 - Simple Object Access Protocol (SOAP) is a W3C Note describing a lightweight protocol for exchanging information in a decentralized or distributed environment. SOAP is an XML-based protocol consisting of three parts: the SOAP packag...
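The AS3 deleteNode and traversal snippets above have a close analogue in Python's standard library. As an illustrative translation of the same idea (not a port of the original code), here is remove-a-node-by-attribute with xml.etree.ElementTree:

```python
# Python analogue of the AS3 "deleteNode" and node-search snippets,
# using the standard library's ElementTree. Illustrative only.
import xml.etree.ElementTree as ET

xml_text = "<root><info id='keep'/><info id='test'/></root>"
root = ET.fromstring(xml_text)

# Locate a child node by attribute value, like xml..*.(id == "test") in E4X.
target = root.find(".//info[@id='test']")

# ElementTree removes through the parent, just as E4X deletes via node.parent().
root.remove(target)

remaining = [el.get("id") for el in root.findall("info")]
# remaining == ["keep"]
```

Note that, unlike E4X's `delete` operator, ElementTree has no notion of a node's parent, so you must hold a reference to the parent element yourself.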
http://www.quweiji.com/tag/xml-object/
using NUnit.Framework;

namespace DotNetNuke.Tests.Core.Mail
{
    [TestFixture]
    public class MailTests
    {
        /// <summary>
        /// This represents the positive path for our tests.
        /// The test checks for good SMTP server name and port syntax, not
        /// the send result, as we will not actually be sending emails
        /// because the server doesn't exist.
        /// </summary>
        [TestCase("GoodSmtpName")]
        public void Valid_SMTP_Server_Name_Should_Pass(string smtpServerName)
        {
            Assert.Fail();
        }

        /// This represents the negative path for our tests.
        /// The test checks for bad SMTP server name and port syntax and
        /// checks for the text returned from the called method.
        [TestCase("BadSmtpName: and port")]
        public void Invalid_SMTP_Server_Name_Should_Fail(string smtpServerName)
        {
            Assert.Fail();
        }
    }
}

Notice that we set our tests to fail initially, until we write the proper tests that make them pass. Now we proceed to add the body of the positive test by calling the "SendMail" method with a valid server name. Because we don't need to physically send the email, we use an SMTP server name that doesn't resolve to an actual mail server. The body of the positive method will be something like the following (all parameters are valid, including the SMTP server name):

[TestCase("GoodSmtpName")]
var result = DotNetNuke.Services.Mail.Mail.SendMail(...);
Assert.AreEqual(string.Empty, result);

But when we run the test, we see that it fails due to many exceptions unrelated to calling the send mail method. Here is what we see in this case:

System.NullReferenceException : Object reference not set to an instance of an object.
at DotNetNuke.Common.Globals.get_Status() in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Common\Globals.cs:line 589
at DotNetNuke.Services.Log.EventLog.LogController.AddLog(LogInfo logInfo) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Services\Log\EventLog\LogController.cs:line 181
at DotNetNuke.Services.Log.EventLog.ExceptionLogController.AddLog(Exception objException, LogInfo log, ExceptionLogType logType) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Services\Log\EventLog\ExceptionLogController.cs:line 136
at DotNetNuke.Services.Log.EventLog.ExceptionLogController.AddLog(Exception objException, ExceptionLogType logType) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Services\Log\EventLog\ExceptionLogController.cs:line 97
at DotNetNuke.Services.Exceptions.Exceptions.LogException(Exception exc) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Services\Exceptions\Exceptions.cs:line 473
at DotNetNuke.Common.Utilities.DataCache.GetCachedDataFromDictionary(CacheItemArgs cacheItemArgs, CacheItemExpiredCallback cacheItemExpired) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Common\Utilities\DataCache.cs:line 599
at DotNetNuke.Common.Utilities.DataCache.GetCachedData[TObject](CacheItemArgs cacheItemArgs, CacheItemExpiredCallback cacheItemExpired, Boolean storeInDictionary) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Common\Utilities\DataCache.cs:line 625
at DotNetNuke.Common.Utilities.CBO.GetCachedObject[TObject](CacheItemArgs cacheItemArgs, CacheItemExpiredCallback cacheItemExpired, Boolean saveInDictionary) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Common\Utilities\CBO.cs:line 925
at DotNetNuke.Entities.Controllers.HostController.GetSettings() in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Entities\Controllers\HostController.cs:line 186
at DotNetNuke.Entities.Controllers.HostController.GetString(String key, String defaultValue) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Entities\Controllers\HostController.cs:line 240
at DotNetNuke.Entities.Controllers.HostController.GetString(String key) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Entities\Controllers\HostController.cs:line 226
at DotNetNuke.Entities.Host.Host.GetSmtpSetting(String settingName) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Entities\Host\Host.cs:line 1323
at DotNetNuke.Entities.Host.Host.get_SMTPAuthentication() in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Entities\Host\Host.cs:line 1278
at DotNetNuke.Services.Mail.Mail.SendMail(String mailFrom, String mailSender, String mailTo, String cc, String bcc, String replyTo, MailPriority priority, String subject, MailFormat bodyFormat, Encoding bodyEncoding, String body, List`1 attachments, String smtpServer, String smtpAuthentication, String smtpUsername, String smtpPassword, Boolean smtpEnableSSL) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Library\Services\Mail\Mail.cs:line 538
at DotNetNuke.Tests.Core.Mail.MailTests.Valid_SMTP_Server_Name_Should_Pass(String smtpServerName) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Tests\DotNetNuke.Tests.Core\Mail\MailTests.cs:line 43

So, what is the issue? As I said above, we don't expect an actual installed site to be running the unit tests, so we don't have fully initialized variables and objects from the site's database and files. This requires mocking. Mocking means creating a fake class instance to serve our purpose: we call stubbed methods instead of the actual methods in the code, and we control the results these methods return. The full details are beyond the scope of this article; search online for more.

For this to work, we need to add mocks for a few items in our system. First, we need to mock the data provider in order to override database logging.
This is provided by DNN; you just need to add the following lines at the top of the class, which will be executed each time a test runs:

private Mock<DataProvider> _mockDataProvider;

[SetUp]
public void SetUp()
{
    ComponentFactory.Container = new SimpleContainer();
    _mockDataProvider = MockComponentProvider.CreateDataProvider();

    // Standard DataProvider path for logging
    _mockDataProvider.Setup(d => d.GetProviderPath()).Returns("");
}

Now when we run the positive test, we see it still fails, but for a different reason: the result returned from the call is not what we expected. The error is as follows:

Expected string length 0 but was 77. Strings differ at index 0.
Expected: <string.Empty>
But was: " Failure sending mail.\r\nThe remote name could not be resolved..."
-----------^
at NUnit.Framework.Assert.That(Object actual, IResolveConstraint expression, String message, Object[] args)
at DotNetNuke.Tests.Core.Mail.MailTests.Valid_SMTP_Server_Name_Should_Pass(String smtpServerName) in C:\Projects\DnnSoftware\Dnn.Platform\DNN Platform\Tests\DotNetNuke.Tests.Core\Mail\MailTests.cs:line 47

As we explained before, actually sending the email through the SMTP server is out of scope for this test and fails. Therefore, we need to change the check at the end of the test to take this into consideration and verify the expected return value. To do so, change the positive test to look like the following:

replyTo: "",
priority: DotNetNuke.Services.Mail.MailPriority.Normal,
subject: "Subject",
...
var expected = @"The remote name could not be resolved: 'GoodSmtpName'";
Assert.AreEqual(expected, result);

Now when we run the positive test, it passes without any error. To add more scenarios to the positive test, we can add the following test cases before the test definition, which will give us a total of six cases to validate.
[TestCase("GoodSmtpName:1")]
[TestCase("GoodSmtpName:12")]
[TestCase("GoodSmtpName:123")]
[TestCase("GoodSmtpName:1234")]
[TestCase("GoodSmtpName:65535")]

This concludes our positive tests. Now let's turn to the negative tests. First, to verify that no exceptions are thrown, we assert this in the method. We do this by changing the test as follows:

[TestCase("BadSmtpName: and port")]
Assert.DoesNotThrow(() => { DotNetNuke.Services.Mail.Mail.SendMail(...); });

This will pass when we run the test. But it is insufficient, as errors other than server-name validation can be returned. Therefore, we need another assertion to make sure the cause of the error is the malformed server-name string. As such, we must check the value returned from the send mail call. Upon inspecting the code executed when an error occurs, we see that it takes this code path in the Mail class:

else
{
    retValue = Localize.GetString("SMTPConfigurationProblem");
}

This tells us we need to check the value of the string returned from the localization provider. If we look for this value in the resource files, we find the following:

<data name="SMTPConfigurationProblem.Text" xml:
  <value>There is a problem with the configuration of your SMTP Server. Mail was not sent.</value>
</data>

Now, we can test directly against this by changing the negative test to the following:

string result = null;
result = DotNetNuke.Services.Mail.Mail.SendMail(...);
var expected = "There is a problem with the configuration of your SMTP Server. Mail was not sent.";
...

But this still fails at the last assertion, because the result will be NULL. To overcome this, we need to mock the localization provider. This can be done by adding the mocking code inside the negative test itself (as it is not shared with the positive test) and stubbing the expected return value as we need it to be. We must do this to guard against the resource-file string changing at some point, or the tests running under a non-"en-US" locale, either of which would cause the test to fail.
The following code shows how this is done in our negative test:

const string SmtpConfigurationProblem_Key = "SMTPConfigurationProblem";
const string SmtpConfigurationProblem_Value = "Invalid SMTP Server name and port. Mail not sent!";

var localizationMock = MockComponentProvider.CreateLocalizationProvider();

// mock the expected return error string(s) from the SendMail method
localizationMock.Setup(l => l.GetString(SmtpConfigurationProblem_Key,
        It.IsAny<string>(), It.IsAny<string>(), It.IsAny<PortalSettings>(), It.IsAny<bool>()))
    .Returns(SmtpConfigurationProblem_Value);

Assert.AreEqual(SmtpConfigurationProblem_Value, result);

And, voila, the test passes without errors. Note that in mocking the localized string, we ignore the thread culture/locale of the environment the test runs under by providing our own string for any culture (the second parameter to "GetString", which is passed as It.IsAny<string>()).

Adding a Few More Test Cases

Now we can add more negative test cases by replacing this line:

[TestCase("BadSmtpName: and port")]

with the following various cases:

[TestCase("GoodSmtpName:")]        // invalid syntax; must not have colon in name
[TestCase(":5000")]                // invalid syntax; missing server name
[TestCase("GoodSmtpName::1000")]   // invalid syntax; double colons
[TestCase("GoodSmtpName:posrtYY")] // invalid port; contains text
[TestCase("GoodSmtpName:123456")]  // invalid port; greater than 65535
[TestCase("GoodSmtpName:-1234")]   // invalid port; negative not allowed

Now running the full set of tests will pass, as appearing in this image.
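As an aside, the syntax rules implied by the positive and negative test cases (an optional ":port" suffix, digits only, 1 through 65535, a non-empty name, a single colon) can be sketched as a small standalone validator. This is my own illustrative sketch in Python, not DNN's actual validation code:

```python
# Illustrative validator for "server" or "server:port" strings, mirroring
# the positive and negative test cases listed above. Not DNN's real code.
def smtp_server_syntax_ok(value):
    if ":" not in value:
        return bool(value)                  # bare host name must be non-empty
    host, _sep, port = value.partition(":")
    if not host or ":" in port:             # missing name / double colon
        return False
    if not port.isdigit():                  # empty, textual, or negative port
        return False
    return 1 <= int(port) <= 65535          # valid TCP port range

ok = [smtp_server_syntax_ok(s) for s in
      ["GoodSmtpName", "GoodSmtpName:1", "GoodSmtpName:65535"]]
bad = [smtp_server_syntax_ok(s) for s in
       ["GoodSmtpName:", ":5000", "GoodSmtpName::1000",
        "GoodSmtpName:posrtYY", "GoodSmtpName:123456", "GoodSmtpName:-1234"]]
# ok == [True, True, True]; bad == [False] * 6
```

Writing the rules down like this is a useful cross-check that the test cases really cover every branch of the intended syntax.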
Below is the full source code for the final version of the "MailTests.cs" file:

using System;
using System.Collections.Generic;
using System.Net.Mail;
using System.Text;
using DotNetNuke.ComponentModel;
using DotNetNuke.Data;
using DotNetNuke.Entities.Portals;
using DotNetNuke.Services.Mail;
using DotNetNuke.Tests.Utilities.Mocks;
using Moq;

private Mock<DataProvider> _mockDataProvider;
[TestCase("GoodSmtpName:")]      // invalid syntax; must not have colon in name
...
[TestCase("GoodSmtpName:-1234")] // invalid port; negative not allowed
}

So, you have written these tests. What next? After completing these tests and making sure they all pass, you need to commit them along with the source code you have fixed, and submit a pull request to be processed by the DNN development team. In order to comply with the contribution submission process, have a look at this contributing guide.

I have just touched on the basics of adding unit tests in DNN. The existing unit tests have a lot more examples for you to learn from, so don't hesitate to have a look, and send me any questions you have by posting them as comments to this series of articles.
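For readers unfamiliar with mocking, the pattern used throughout these tests (replace a dependency with a stub whose return values you control, so the unit under test never touches real infrastructure) looks like this in Python's standard unittest.mock. The `Mailer` and `DataProvider` classes here are hypothetical stand-ins, not DNN types:

```python
# Sketch of the mocking pattern: the real provider would need a database,
# so the test substitutes a Mock whose return value is fixed in advance.
# Mailer and DataProvider are invented for illustration only.
from unittest import mock

class DataProvider:
    def get_provider_path(self):
        raise RuntimeError("would need a real database")

class Mailer:
    def __init__(self, provider):
        self.provider = provider

    def send(self):
        # the unit under test consults the provider before "sending"
        path = self.provider.get_provider_path()
        return "sent via %s" % path

# Like _mockDataProvider.Setup(d => d.GetProviderPath()).Returns("")
stub = mock.Mock(spec=DataProvider)
stub.get_provider_path.return_value = ""

result = Mailer(stub).send()   # never touches a real database
```

The `spec=DataProvider` argument makes the mock reject attribute names the real class does not have, which catches typos in the stubbed method names.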
https://www.dnnsoftware.com/community-blog/cid/155468
The known techniques for linking parent-child records are described below.

Parent-Child database relationship

Not all the techniques described hereafter require this configuration, but I think it is important to define a relationship from the TB1 table to the TB2 records. In Database Configuration, define the following relationship in object TB1:

- Relationship: TB2
- Child Object: TB2
- Where Clause: TB1ID=:TB1ID

Using Application Designer

In the Application Designer you have to use the TB2 relationship defined before to link the child table. The last important step is to initialize the TB2.TB1ID field on child records. To achieve this, add a Default Value control with the following configuration:

- Attribute: TB1ID
- From Data Source ID: results_showlist
- From Attribute: TB1ID

Using APPFIELDDEFAULTS

Another possibility is to default the key values of your child table using the APPFIELDDEFAULTS table. The following INSERT statement will put a default value in the TB2.TB1ID field to link the child records to the parent one.

INSERT INTO APPFIELDDEFAULTS
  (APP, DEFAULTVALUE, OBJECTNAME, ATTRIBUTENAME, APPFIELDDEFAULTSID)
VALUES
  ('[APPNAME]', ':OWNEROBJECT.TB1ID', 'TB2', 'TB1ID', APPFIELDDEFAULTSSEQ.NEXTVAL);

...) {
    // retrieves the TB1ID value from the parent Mbo
    String tb1id = ownerMbo.getString("TB1ID");
    // sets the TB1ID value in the child Mbo
    setValue("TB1ID", tb1id, NOACCESSCHECK|NOVALIDATION_AND_NOACTION);
}

To correctly manage deletes of child records you should also override the delete method of the Tb1Mbo class.

public void delete(long accessModifier) throws MXException, RemoteException {
    super.delete(accessModifier);
    ((Mbo) this).getMboSet("TB2").deleteAll();
}

This method has only one limitation (as far as I know): if the primary key of the parent table is updated before saving the record, it will create zombie child records, because changes to the TB1.TB1ID field are not propagated to the child records.
This is also a problem when duplicating objects. To solve this problem you can implement the action() method of the TB1 field class, or set the primary key of the parent table as described in the Using Database Configuration method.

Using Scripting

Adapted from John's comment (thank you).

Create a table and include attributes for: {OWNERTABLE, OWNERID}

Create a script with an Object Launch Point. The launch point needs to be set to fire on Initiate only.

from psdi.mbo import MboConstants
mbo.setValue('TB1ID', mbo.getOwner().getString("TB1ID"), MboConstants.NOACCESSCHECK)

Using Database Configuration

The last technique was suggested by Scott Dickerson. I haven't tested it, but it should work. All you have to do is make sure your parent and child records have the same attribute names, and that a unique primary index is defined on the child table whose column order matches the unique primary index on the parent record. Make sure that the field names in the child table match the field names of the key columns in the parent table. For instance, your child table MYASSETCHILD's field names must exactly match the field names of the parent table, in this case ASSETNUM, SITEID. The MYASSETCHILD table will also need its own unique key field, let's say MYASSETCHILDID. So the unique key of the MYASSETCHILD table is ASSETNUM, SITEID, MYASSETCHILDID, right? Now set the primarykeycolseq field in the maxattribute table in this order:

- MYASSETCHILD:ASSETNUM:1
- MYASSETCHILD:SITEID:2
- MYASSETCHILD:MYASSETCHILDID:3

As long as the order of the primary keys in your child table matches the order of the primary keys in the parent table, the TPAE framework will automatically set these child attributes whenever new child MBOs are added underneath the parent.

I think you overlooked the option of using automation scripts applied with an Object Launch Point on the Add action to set the FK and other fields in the child records.
I have updated the table, but I haven't tried it, so I don't have precise instructions about how to do it.

I created a technical documentation table that can be used by multiple applications but is primarily designed for the Asset and Locations tables.

Create a table and include attributes for: {OWNERTABLE, OWNERID}

Create a script with an Object Launch Point. The launch point needs to be set to fire on Initiate only. The script:

from psdi.mbo import MboConstants
mbo.setValue('ownertable', mbo.getOwner().getName(), MboConstants.NOACCESSCHECK)
mbo.setValue('ownerid', mbo.getOwner().getUniqueIDValue(), MboConstants.NOACCESSCHECK)

Create a relationship between the parent and child using the following as a template (in this case, the Asset table is the parent):

ownerid = :assetuid and ownertable = 'ASSET'

You can delete this post after assimilating these steps into your instructions above. Please reference my name though :) Thanks, John C. Wilson

Hi Bruno, I used your basic method with Database Configuration and Application Designer. I created two tables, tb1 and tb2 (the child table), and used the same default value as in your instructions. Then I created a new application, filled in the child table fields, and saved, but I cannot see any data in my child table. When I go to the database and run select * from tb2, I can see the field values I entered before saving, but the key I use, tb1id, seems to be null. However, when I exit the application, reopen it, fill in the child table, and save a new record, I do see the new values, and select * from tb2 then shows tb1id is not null. What should I do, or what did I forget? Regards.

I'm sorry, but I don't understand your point.

Hi, sorry for my English :(. What I mean is that only the first saving process fails. I added some images about my relationship here. Regards.

Hello Bruno, I really appreciate your tips.
I would like to mention that the only way I could get the linking to function correctly in the presentation (using the App Designer) was to include an index in the child object that replicated the columns identified in the relationship (maybe just my problem).

Hi Bruno, thanks for the tips. In the article you mention: "To solve this problem you can implement the action() method of the TB1 field class". I was wondering, could you be a bit more specific about this solution? I tried to implement it, but when you are in the action() method and need to discover the child objects to propagate the field change to, the parent field is already updated and the relationship returns an empty mbo set (i.e., it's too late, the children have already become zombies). Do you have some working sample code which does not have this problem?

Sorry, I don't have any sample code. I'm not sure it is the best approach, but you can try to retrieve the previous value of the attribute as described here. Then you can retrieve the child records using a where clause.

Hi, thanks for the reply. Getting the previous value of the parent key is not the problem. The problem is how to use it to get the old related MBOs. The process looks like this:

1) Add a new parent ("new row").
2) Add a new child ("new row") - the parent fields are copied to the child, but they are empty because they are not yet filled in on the parent; the relation is based on an empty key.
3) Change the parent key field - this breaks the relationship.

Now the action() method gets called. However, Maximo has already updated the key field in the parent, so the existing relationship based on the empty key is no longer valid. Setting the where clause does not work either, because that causes the children to be queried from the database, and the child elements have not been persisted yet. I examined the entire API of the involved objects, but cannot find a way to recover the non-persistent MBOs which were in the previous related set.
And without being able to do that, the children are lost...

When I create a work order and fill in the supervisor name, I want to pull person.department into workorder.department where workorder.supervisor = person.personid. How would I achieve this through an automation script? I've been struggling with a crossover domain, but someone said an automation script is the way to do it.

Hi, I think you can do this by setting a relationship to person. For example, DB relationship: supervisordetails - personid = :supervisor; then modify the XML: supervisordetails.department.

Is there any way to use a relationship in a where clause of another relationship in the same table?

Yes, but you have to use the SQL syntax.

My experience with the Application Designer defaultValue method in Maximo 7.5.0.5:

- Attribute: TB2ID (must be ChildTable.AttributeName)
- From Data Source ID: results_showlist (this is not a fixed value; you need to provide the correct data source ID - in my case, I had to use MAINRECORD)
- From Attribute: TB1ID

Also, APPFIELDDEFAULTS didn't work for me in Maximo 7.5.0.5.

What Data Source value should be used? I was designing a new application and I set Data Source ID: results_showlist, and it didn't work. Any idea what Data Source I should be using?

Rana - use MAINRECORD for the Data Source ID, as mentioned by Sourabh.

Does MBO copy also copy child object data? For example, I need to copy the workorder table into a custom table. While copying the records I am getting the error "relationship does not exist between child table and custom table".

I'm using Control Desk 7.6. I had no success with the technique of matching the parent and child primary keys as described in the "Using Database Configuration" section. However, you can link a child and parent using default values for the key attributes on the child object. In my case I wanted to create a custom object that would be a child record of tickets.
I defined a TICKETUID attribute on the child object type, and defined its default value to be ":&OWNER&.TICKETUID". This evaluates to the TICKETUID value of the ticket that owns the child record. I prefer doing it this way over using the APPFIELDDEFAULTS table, because doing the work in Database Configuration makes it independent of which application creates the child record.

To answer Priyanshu's question, mbo copy will not automatically copy a child object unless support for this is coded into the mbo's class. In Maximo/ICD 7.6 you can implement this using automation scripts. Create a script having no launch point. Define the script's name to be .DUPLICATE. So, for my case of creating custom child records for tickets, I need to create scripts named SR.DUPLICATE, INCIDENT.DUPLICATE, etc. The script will run after the mbo class's copy logic has done all of its processing. The script will have a "mbo" variable pointing at the source of the copy, and a "dupmbo" variable pointing at the result of the copy. Your script will need to create the desired child objects under dupmbo and copy the desired attribute values to it.

In my 2nd paragraph it should have said to set the script name to "object-name.DUPLICATE".
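The .DUPLICATE mechanism described above (a post-copy hook that receives the source `mbo` and the copy `dupmbo`) can be sketched generically. Everything below is a simplified illustration in plain Python, not Maximo's actual framework or API:

```python
# Sketch of a post-duplicate hook: after the framework's own copy logic
# runs, a script named "<object>.DUPLICATE" is invoked with the source
# (mbo) and the copy (dupmbo) so it can clone child records.
# Simplified stand-in for the Maximo mechanism described above.
def duplicate(record, hooks):
    copy = dict(record, children=[])           # framework copy: children not cloned
    hook = hooks.get(record["object"] + ".DUPLICATE")
    if hook:
        hook(record, copy)                     # the script sees mbo and dupmbo
    return copy

def copy_children(mbo, dupmbo):
    # the hook's job: create child objects under the copy
    dupmbo["children"] = [dict(child) for child in mbo["children"]]

hooks = {"SR.DUPLICATE": copy_children}
src = {"object": "SR", "ticketid": "1001", "children": [{"doc": "manual.pdf"}]}
dup = duplicate(src, hooks)
# dup carries its own copy of the child records, independent of src
```

The key design point mirrors the comment above: the framework does not clone children by itself; the per-object hook is the single place where that behavior is added.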
http://maximodev.blogspot.com/2013/05/link-parent-child-records-in-maximo.html
CC-MAIN-2018-47
en
refinedweb
Device & model: ARDUINO UNO + Ethernet W5100
What dashboard are you using? The Web.

I am trying to connect my Arduino to the Cayenne Cloud using the web dashboard. I have a connection to the internet: I have already run a WebChecker sketch on the Arduino and a ping succeeded. I am also able to run a local web server and light an LED from a web page. What I cannot do is connect my Arduino to Cayenne. I am using the "Manual Sketch", where I have put the MAC and the IP properly, I reckon. There is a WAITING message lighting up on the screen the whole time. I have checked the Serial Monitor and it is blank all the time, no info on it. Has someone had the same problem? Thank you very much

#include <SPI.h>
#include <Ethernet.h>
//#define CAYENNE_DEBUG       // Uncomment to show debug messages
#define CAYENNE_PRINT Serial  // Comment this out to disable prints and save space
#include <CayenneEthernet.h>  // Comment this out if you uncomment a different Ethernet device below.
//#include <CayenneEthernetW5500.h> // Uncomment this and comment out CayenneEthernet.h to use an Ethernet 2 shield or other Ethernet W5500 shield.
                                    // You will need the Ethernet2 library installed. See the ArduinoEthernetW5500 example sketch for more info.
//#include <CayenneEthernetW5200.h> // Uncomment this and comment out CayenneEthernet.h to use an Ethernet W5200 shield.
                                    // You will need the EthernetW5200 library installed. See the ArduinoEthernetW5200 example sketch for more info.

// Cayenne authentication token. This should be obtained from the Cayenne Dashboard.
char token[] = "200fc1hyjs";
// Mac address should be different for each device in your LAN
byte arduino_mac[] = {0x00, 0xAA, 0xBB, 0xCC, 0xDE, 0x02};
IPAddress arduino_ip(10, 123, 24, 240);
IPAddress dns_ip(10, 123, 24, 252);
IPAddress gateway_ip(10, 123, 24, 252);
IPAddress subnet_mask(255, 255, 255, 0);

void setup()
{
  Serial.begin(9600);
  Cayenne.begin(token, arduino_ip, dns_ip, gateway_ip, subnet_mask, arduino_mac);
}

void loop()
{
  Cayenne.run();
}
https://community.mydevices.com/t/can-t-conect-my-arduino-uno-ethernet-w5100-to-cayenne-waiting-for-response/1415
A couple of weeks ago I wrote about using a combination of tools and features of the Python language to help assess Django apps and write them a bit more defensively. One suggestion was to use a tool like flake8 to report not only on formatting issues but, more importantly, on potential bugs identified through static analysis (including the kind of bugs that are discovered by a compiler or, in an interpreted language, at runtime). Reader Karl Goetz wrote back (shared with permission):

I just ran flake8 on a project and was given a list of 1845 issues. Depressing!

We discussed briefly and validated my hunch that most of these were style issues. That's not to say they should be ignored, but a list of 1000 issues or more isn't necessarily something to lose sleep over. It's all about, one, understanding which problems are important and, two, having a strategy in hand to deal with these issues. How you prioritize issues in an existing/legacy software project is one of the most significant stumbling blocks for developers and product teams working in Django apps (and pretty much any other type of project), so we have plenty of grist. Linter reporting is not where I'd normally start, but rather than switch gears let's start here anyhow.

For our purposes we're going to assume flake8 as the tool of choice, although that's not strictly required. flake8 includes linting, complexity analysis, and static analysis, making it an excellent default choice. Every individual error will be reported back to you with an error code telling you exactly what it is (e.g. E225 or F403, missing whitespace around an operator and starred imports, respectively). These errors have different discovery sources (e.g. pycodestyle or pyflakes) and different implications. Missing whitespace around an operator has no impact on how the code runs. It does, however, make the code more difficult to read, and thus is likely to slow down development or even lead to programmer errors due to misreading.
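To make the two example codes concrete, here is a tiny snippet flake8 would fire on, followed by the clean equivalent (assuming flake8's default configuration):

```python
# Lines flake8 would flag:
from math import *   # F403: 'from module import *' used
x=1+2                # E225: missing whitespace around operator

# The clean equivalent; runtime behaviour is identical, which is the point:
from math import sqrt
y = 1 + 2
root = sqrt(16.0)
```

Running flake8 over the first two statements reports the codes shown in the comments; the rewritten lines pass cleanly.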
Starred imports similarly have no direct impact on how the code runs (e.g. from module import *); however, they obscure the available namespace in a given module. This can result in shadowed imported names in a module's namespace, and it also makes it more challenging for other tools to identify names in use. Both issues represent friction and increased potential for developer error, but the starred import could be hiding a code issue. On the other hand, F823 - local variable referenced before assignment - represents a runtime error waiting to happen.

I find it helpful to think of the different issues you might run into, specifically those reported back by linting and static analysis, in three categories:

The most obvious and uncontroversial "definite" bugs are those that will result in runtime errors. This includes things like return or yield statements outside of functions, a break statement outside of a while or for loop, and a naked except block that is not the last exception handler. These should result in errors when a module is imported. Other errors, like undefined names, will never be raised until that one line of code is run. Extensive test coverage will catch errors like this, but we don't need to run the test suite to find these issues. If an undefined name needs to be imported, the solution is as simple as adding the necessary import. Often the name is presumed to have been defined in the code, as an empty list, for example. The solution here is to add in the initial definition, provided you can identify the type and expected initial value.

Other "definite" bugs are those that don't cause runtime errors but might be expected to create bugs involving different values. For example, F601 - dictionary key name repeated with different values - won't cause your program to crash, but is likely to result in unexpected values being assigned. While definite bugs are a higher priority, this category tends to be much larger and to require more extensive fixes.
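As a concrete instance of the "undefined name" case above, suppose the original author forgot the initial definition. The one-line fix, assuming the name was meant to start as an empty list, looks like this:

```python
def collect_evens(numbers):
    # Without this initial definition, static analysis reports `evens` as an
    # undefined name, and the append below raises NameError at runtime.
    evens = []
    for n in numbers:
        if n % 2 == 0:
            evens.append(n)
    return evens
```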
These are issues that are not going to directly cause errors in your app but pose significant risk by hiding potential bugs. We already mentioned starred imports, but the most common that comes to mind is the plain, "naked" exception. flake8 will report these as E722, do not use bare except, specify exception instead. Exceptions get used for one of two purposes - expected control flow, and rescue from errors that would unnecessarily crash the program.

By control flow I mean using exceptions idiomatically where in other languages (such as Go) you'd use explicit checks on values. Instead of checking the list length and returning the 5th element of a list if its length is at least 5 and another value if it's not, the idiomatic solution is to try returning the fifth indexed element and, in the event of an IndexError, return the alternate value. The rescue case, by contrast, might be a function that sits at the top of a call stack involving various backing services (database) and HTTP APIs, where the goal is to ensure that no matter what happens, no error in this call stack is ever propagated back up to the user in the form of a 500 error. We may want to catch at least a base exception here, but nonetheless there is a rationale for catching everything.

Now the reason for being specific, especially in the first instance, is two-fold. The explicitness of having the named exception shows intent, so that other developers understand why there is a try block and also how the flow control works. In both cases, a naked exception just swallows everything, so, for example, if there's an actual bug throwing an error then it's possible that even a test won't catch this, because all exceptions were implicitly handled.
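The list example above, written both ways; the explicit length check mirrors what you'd write in a language like Go, while the try/except version is the idiomatic Python spelling:

```python
def fifth_with_check(items, default=None):
    # Explicit value check, Go-style
    if len(items) >= 5:
        return items[4]
    return default


def fifth_idiomatic(items, default=None):
    # Idiomatic Python: just try it, and handle the IndexError
    try:
        return items[4]
    except IndexError:
        return default
```

Both return the fifth element when it exists and the alternate value when it doesn't; the difference is purely one of idiom.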
In a case like the latter, where there may be too many possible exceptions to handle, from socket issues to third party API responses, the solution is first to add an exception log before continuing, so that you have access to the full error information, and separately to handle known/expected exceptions first. That might look something like this:

try:
    do_something()
except ThirdPartyAPIIsDown as e:
    logging.error(e)
except BaseException:
    logging.exception("Unhandled failure")

That third party API is known to be flakey and we don't need the full stack trace every time it's unavailable. Now we don't have our own code swallowing issues and hiding some potential bugs.

Additional issues that may hide bugs include excessive complexity. This is a big topic and it straddles the third category, but ultimately overly complex code is hard to reason through, and the combination of numerous logical paths and levels of these paths means very complex functions and methods are often where hidden bugs are to be found. The solution here is to refactor (in the "pure" sense of the word). This is sometimes easier said than done, and the urgency of refactoring complex code depends a lot on particularities of the module and its use.

Pretty much everything else falls into this category. Style and formatting issues are at the least distractions and at worst can obscure what the code actually does. These should be fixed but need not be top priority. The good news, or better news, is that in many instances these can be fixed automatically! Whether with a hard and fast formatter like black[0], a configurable one like autopep8[1], or a built-in tool like PyCharm's[2], there are ways to address these without editing each extraneous space and each wayward tab individually. The further benefit of automated tools like these is that they can be added to your development process, and even your automated build process, so that no one on the team has to think about them anymore.
Why not use an autoformatter? If your project has its own format guidelines or conventions you want to maintain, an existing autoformatter may either not work or require too much knob turning to be useful.

In our own work we've made use of some tooling that we recently published[3] to help break down the issues. The flake8-csv formatter exports the results from flake8 in CSV format, with an option for including some pre-configured categorization. This can be loaded into a spreadsheet (or DataFrame!) letting you examine not only which modules have trouble spots but what the breakdown of issues looks like. Coupled with test coverage data and Git churn information, you can start to prioritize even in a large, busy codebase, based on the most serious issues against the most frequently edited modules.

Indentedly yours,
Ben

[0] Mentioned before, black is a Python3-only no-holds-barred formatter in the mold of the Go fmt command
[1] If you'd like more formatting options, autopep8:
[2] PyCharm is a Python IDE and a commercial product; it can also be configured to format with an external tool of your choice
[3] flake8-csv: run flake8 with the --format=csv_categories flag to get the error code categorization.

Learn from more articles like this how to make the most out of your existing Django site.
https://wellfire.co/this-old-pony/starting-to-prioritize-and-triage-issues-in-cleaning-up-django-apps--this-old-pony-50/
Mantid's plugin architecture has been engineered so that it is easy for a user to write their own algorithm. This page is a primer for the user about to write their first algorithm and assumes no great knowledge of C++. It covers the basics, with links to more advanced options where appropriate. Note that if you are looking to add a plugin fit function rather than an algorithm, see Writing a Fit Function. There is a separate description for the case where you are looking to add a custom MD conversion plugin. Alternatively, you can implement your algorithm in Python. See Python Vs C++ Algorithms for a comparison of Mantid's two programming languages. All algorithms in Mantid inherit from a base Algorithm class, which provides the support and services required for running a specific algorithm and greatly simplifies the process of writing a new one. The first step is to create a new directory, with any name of your choice, under your MantidInstall directory (on Windows, probably located at C:\MantidInstall). Alternatively, you can just do everything in the UserAlgorithms directory. The UserAlgorithms directory contains a simple Python script called createAlg.py. This can be used to create a new 'empty' algorithm - to create one called 'MyAlg' you should type python createAlg.py myAlg category, where category is an optional argument to set the algorithm's category.
To do the same thing 'by hand', create files called MyAlg.h and MyAlg.cpp and paste in the following boilerplate C++ code (changing each occurrence of 'MyAlg' to your chosen algorithm name):

Header file (MyAlg.h):

#ifndef MYALG_H_
#define MYALG_H_

#include "MantidAPI/Algorithm.h"

class MyAlg : public Mantid::API::Algorithm {
public:
  /// (Empty) Constructor
  MyAlg() : Mantid::API::Algorithm() {}
  /// Virtual destructor
  virtual ~MyAlg() {}
  /// Algorithm's name
  virtual const std::string name() const { return "MyAlg"; }
  /// Algorithm's version
  virtual const int version() const { return (1); }
  /// Algorithm's category for identification
  virtual const std::string category() const { return "UserDefined"; }

private:
  /// Initialisation code
  void init();
  /// Execution code
  void exec();
};

#endif /*MYALG_H_*/

Source file (MyAlg.cpp):

#include "MyAlg.h"

// Register the algorithm into the AlgorithmFactory
DECLARE_ALGORITHM(MyAlg);

void MyAlg::init() {
}

void MyAlg::exec() {
}

At this point you will already have something that will compile and run. To do so (on Windows), copy the files build.bat & SConstruct from UserAlgorithms into the directory containing your code and execute build.bat. If you then start MantidPlot your algorithm will appear in the list of available algorithms and could be run. But, of course, it won't do anything of interest until you have written some algorithm code…

You will see that the algorithm skeletons set up in the last section contain two methods/functions/subroutines called init and exec. It will be no surprise to discover that these will, respectively, contain the code to initialise and execute the algorithm, which goes in the .cpp file between the curly brackets of each method. Note that these are private methods (i.e.
cannot be called directly); an algorithm is run by calling the base class's initialize() and execute() methods, which provide additional services such as the validation of properties, fetching workspaces from the AnalysisDataService, handling errors and filling the workspace histories.

The initialization (init) method is executed by the FrameworkManager when an algorithm is requested and must contain the declaration of the properties required by the algorithm. Atypically, it can also contain other initialization code such as the calculation of constants used by the algorithm, so long as this does not rely on the values of any of the properties. Calls to the declareProperty method are used to add a property to this algorithm. See the properties page for more information on the types of properties supported and the example algorithms in UserAlgorithms (especially PropertyAlgorithm and WorkspaceAlgorithm) for further guidance on how to use them. For the simple types (integer, double or string), the basic syntax is:

declareProperty("UniquePropertyName", value);

An optional validator or directional argument (input, output or both) can also be appended. The syntax for other property types (WorkspaceProperty & ArrayProperty) is more complex - see the properties page or the example algorithms in UserAlgorithms for further details.

Before the data can be processed, the first task is likely to be to fetch the values of the input properties. This uses the getProperty method as follows:

TYPE myProperty = getProperty("PropertyName");

where TYPE is the type of the property (int, double, std::string, std::vector, …). Note that the value of a WorkspaceProperty is a shared pointer to the workspace, which is referred to as Mantid::API::Workspace_sptr or Mantid::API::Workspace_const_sptr. The latter should be used for input workspaces that will not need to be changed in the course of the algorithm.
If a handle is required on the property itself, rather than just its value, then the same method is used as follows:

Mantid::Kernel::Property* myProperty = getProperty("PropertyName");

This is useful, for example, for checking whether or not an optional property has been set (using Property's isDefault() method).

Usually, the result of an algorithm will be stored in another new workspace, and the algorithm will need to create that new workspace through a call to the WorkspaceFactory. For the (common) example where the output workspace should be of the same type and size as the input one, the code would read as follows:

Mantid::API::Workspace_sptr outputWorkspace = Mantid::API::WorkspaceFactory::Instance().create(inputWorkspace);

where inputWorkspace is a shared pointer to the input workspace. It is also important to, at some point, set the output workspace property to point at this workspace. This is achieved through a call to the setProperty method as follows:

setProperty("OutputWorkspacePropertyName", outputWorkspace);

where outputWorkspace is a shared pointer to the created output workspace.

The bulk of most algorithms will involve the manipulation of the data contained in workspaces, and information on how to interact with these is given here. The more advanced user may also want to refer to the full workspace documentation. Those familiar with C++ should make use of private methods and data members to break up the execution code into more manageable and readable sections. The advanced user is referred to the full documentation page for the Algorithm base class to explore the full range of methods available for use within an algorithm. A few aspects are highlighted below.

Algorithms may wish to make use of the functionality of other algorithms as part of their execution. For example, if a units change is required the ConvertUnits algorithm could be used.
Mantid therefore has the concept of a child algorithm, and this is accessed through a call to the createChildAlgorithm method as follows:

Mantid::API::Algorithm_sptr childAlg = createChildAlgorithm("AlgorithmName");

This call will also initialise the algorithm, so the algorithm's properties can then be set and it can be executed:

childAlg->setPropertyValue("number", 0);
childAlg->setProperty<Workspace_sptr>("Workspace", workspacePointer);
childAlg->execute();

The g_log object enables access to the logging facilities of Mantid, and is an invaluable tool in understanding the running of your algorithms.

Any algorithm can be run asynchronously (e.g. by MantidPlot) without modification. However, some features are only enabled if code is added within the exec() method. Algorithm::interruption_point() should be called at appropriate intervals so that the algorithm's execution can be interrupted. Algorithm::progress(double p) reports the progress of the algorithm. p must be between 0 (start) and 1 (finish).

It is fine to throw exceptions in your algorithms in the event of an unrecoverable failure. These will be caught in the base Algorithm class, which will report the failure of the algorithm.

Validators allow you to give feedback to the user if the input of a property is incorrect (for example, typing non-numeric characters in a number field). For more advanced validation, override the Algorithm::validateInputs() method. This method returns a map where the keys are the names of the properties that are in error and the values are strings describing each problem. It allows you to provide validation that depends on several property values at once (something that cannot be done with IValidator). Its default implementation returns an empty map, signifying no errors. It will be called in dialogs after parsing all inputs and setting the properties, but before executing. It is also called again in the execute() call, which will throw if this returns something. In the MantidPlot GUI, this will set a "star" * label next to each property that is reporting an error.
This makes it easier for users to find where they went wrong. If your validateInputs() method validates an input workspace property, bear in mind that the user could provide a WorkspaceGroup (or an unexpected type of workspace) - when retrieving the property, check that casting it to its intended type succeeded before attempting to use it.
http://developer.mantidproject.org/WritingAnAlgorithm.html
Stock and bond returns

Please show all work and complete in Excel.

Problem Set #1

Calculating Returns: 1. a) Assume you bought 1000 shares of stock at an initial price of $25 per share. The stock paid a dividend of $0.50 per share during the following year, and the share price at the end of the year, when you sold it, was $35. Compute your total dollar return (income) on this investment. b) In the previous problem, what is the capital gains yield? What is the dividend yield? What is the total rate of return on the investment?

Geometric Mean Return: 2. Compute the geometric mean of the following annual returns: Year 1, 5%; Year 2, -10%; Year 3, 12%; Year 4, 17%; Year 5, 3%.

Margins and Margin Returns: 3. You bought 1,000 shares of SNCR at $12.49 per share with 50% margin. Your broker charges 5% annual interest on borrowed funds. At the end of one year, if you sold the stock at: a) $14.95/share b) $13.91/share, i) compute your return on investment for scenarios a) and b) above, and ii) compute the Margin/Equity % in parts a) and b) above.

Short Sales: 4. You sold short 1000 shares of LEH stock at $35/share. At the end of 3 months, you closed out your positions at the following prices: a) $25/share b) $35/share c) $48/share. Calculate your annual return. Assume 4% interest receipt on the 50% funds deposited as margin.

Expected Returns and Standard Deviations: 5. a) Use the following information on states of the economy and stock returns to calculate the expected return for Dingaling Telephone (State of Economy / Probability of State of Economy / Security Return if State Occurs): Weak Recession, 0.20, -10%; Normal, 0.40, 20%; Boom, 0.20, 30%; Strong Recession, 0.20, -15%; total probability 1.00. b) Using the information in the previous question, calculate the standard deviation of return.

Bid-Ask Spread: 6. The bid price of Microsoft stock is $31.29 and its ask price is $31.35. Compute its bid-ask % spread.

Nominal Rates: 7.
Given: Real interest rate = 3%, Inflation rate = 4.5%. Compute: a) the approximate nominal interest rate, b) the exact nominal interest rate.

T-Bill Pricing: 8. Assume a $10,000 Treasury bill is quoted to pay 5 percent interest over a six-month period: a) How much interest income would the investor receive? b) What is the price of the Treasury bill? c) What is the effective yield?

Bond Pricing: 9. a) Given a 15-year bond that originally sold for $1,000 with an 8 percent coupon rate, what would be the price of the bond if interest rates in the marketplace on similar bonds are now 10 percent? Interest is paid semiannually. Five years have elapsed since the bond issue; therefore, assume a 10-year time period. b) What would be the price if interest rates go down to 6 percent? (Once again, do a semi-annual analysis for 10 years.)

Yield to Maturity: 10. What is the yield to maturity for a 9 percent coupon-rate bond priced at $1,040.37? Assume there are five years left to maturity. It is a $1,000 par value bond. Use the trial-and-error approach with annual discounting.

Constant Growth Models: 11. ABC Corporation is currently paying $2.50 in dividends. Its earnings are expected to grow at a constant rate of 12% for the foreseeable future. If investors require a 15% return, compute the intrinsic value of ABC stock.

12. XYZ Company is expecting its return on equity to equal 25% and expects to pay 40% of profits as dividends. Its current dividends are $2.00, and the investors require an 18% return. a) What is XYZ's expected growth rate? b) What is XYZ's intrinsic value? c) If its market price is $73.50, would you invest in XYZ stock? Explain.

Two-stage Growth Model: 13. Stock ABC is considered to be a growth stock with a non-constant growth rate of 18% for the next five years, followed by 15% sustainable annual growth thereafter. ABC's current dividends per share are $2.10, and its required rate of return is 12%. Calculate its intrinsic value.

Multi-stage Growth Model: 14.
The following information is available on the ABC stock: D0 = $1.50; g1 = 15% for years 1-2; g2 = 12% for years 3-4; g3 = G = 10% for years 5-n. If investors require a 14% return, compute the price of ABC stock. © BrainMass Inc. brainmass.com, October 24, 2018

Solution Summary

Answers to 14 questions on stock and bond returns.

Finance: Bond Valuation, Dividend Discount Model, Return on Preferred Stock, Valuation, Risk

A 1: (Bond valuation) A $1,000 face value bond has a remaining maturity of 10 years and a required return of 9%. The bond's coupon rate is 7.4%. What is the fair value of this bond?

A 2: %?

A 3: (Required return for a preferred stock) James River $3.38 preferred is selling for $45.25. The preferred dividend is non-growing. What is the required return on James River preferred stock?

A 4: (Stock valuation) Suppose Toyota has non-maturing (perpetual) preferred stock outstanding that pays a $1.00 quarterly dividend and has a required return of 12% APR (3% per quarter). What is the stock worth?

B. 1. If the yield to maturity for all three bonds is 8%, what is the fair price of each bond? 2. Suppose that the yield to maturity for all of these bonds changed instantaneously to 7%. What is the fair price of each bond now? 3. Suppose that the yield to maturity for all of these bonds changed instantaneously again, this time to 9%. Now what is the fair price of each bond? 4. Based on the fair prices at the various yields to maturity, is interest-rate risk the same, higher, or lower for longer- versus shorter-maturity bonds?

B 18: (Default risk) You buy a very risky bond that promises a 9.5% coupon and return of the $1,000 principal in 10 years. You pay only $500 for the bond. 1. You receive the coupon payments for three years and then the bond defaults. After liquidating the firm, the bondholders receive a distribution of $150 per bond at the end of 3.5 years. What is the realized return on your investment? 2.
The firm does far better than expected and bondholders receive all of the promised interest and principal payments. What is the realized return on your investment?

B, one%. 1. What value would James estimate for this firm? 2. What value would Bret assign to the Medtrans stock?

Problem: (Beta and required return) The riskless return is currently 6%, and Chicago Gear has estimated the contingent returns given here. 1. Calculate the expected returns on the stock market and on Chicago Gear stock. 2. What is Chicago Gear's beta? 3.
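For anyone wanting to sanity-check answers outside Excel, problems 1 and 2 of the problem set reduce to a few lines of arithmetic. This sketch only restates the figures given in the problems; it is not the posted BrainMass solution:

```python
# Problem 1: 1000 shares bought at $25, $0.50/share dividend, sold at $35.
shares = 1000
buy, sell, dividend = 25.0, 35.0, 0.50

income = shares * dividend                   # dividend income
capital_gain = shares * (sell - buy)         # price appreciation
total_dollar_return = income + capital_gain  # total dollar return

capital_gains_yield = (sell - buy) / buy
dividend_yield = dividend / buy
total_rate_of_return = capital_gains_yield + dividend_yield

# Problem 2: geometric mean of the five annual returns.
returns = [0.05, -0.10, 0.12, 0.17, 0.03]
growth = 1.0
for r in returns:
    growth *= 1.0 + r
geometric_mean = growth ** (1.0 / len(returns)) - 1.0
```

This gives a $10,500 total dollar return and a 42% total rate of return for problem 1, and a geometric mean just under 5% per year for problem 2.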
https://brainmass.com/business/bond-valuation/stock-and-bond-returns-186639
Hi Thorsten,

> one question with regards to this topic:
> what would be the advantage of namespaces in Picolisp over
> naming conventions like in Emacs Lisp?

Right. Not much.

> 'gnus-function-name' for all functions in gnus library
> 'dired-function-name' for all functions in dired library etc

Yes. Such conventions make things transparent. The drawback might just be
the readability of the longish symbol names.

I suggested something like this in my reply to Henrik (on Sep 5th, using
the 'dot' as a delimiter):

> A call like
>
> (foo> '+Pckg <arg>)
>
> is in no regard more encapsulating the namespace than
>
> (foo.Pckg <arg>)

Cheers,
- Alex
--
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
https://www.mail-archive.com/picolisp@software-lab.de/msg02741.html
tempfile – Create temporary filesystem resources

Many programs need to create files to write intermediate data. Creating files with unique names securely, so they cannot be guessed by someone wanting to break the application, is challenging. The tempfile module provides several functions for creating filesystem resources securely. TemporaryFile() opens and returns an un-named file, NamedTemporaryFile() opens and returns a named file, and mkdtemp() creates a temporary directory and returns its name.

TemporaryFile

If your application needs a temporary file to store data, but does not need to share that file with other programs, the best option for creating the file is the TemporaryFile() function. It creates a file, and on platforms where it is possible, unlinks it immediately. This makes it impossible for another program to find or open the file, since there is no reference to it in the filesystem table. The file created by TemporaryFile() is removed automatically when it is closed.

import os
import tempfile

print 'Building a file name yourself:'
filename = '/tmp/guess_my_name.%s.txt' % os.getpid()
temp = open(filename, 'w+b')
try:
    print 'temp:', temp
    print 'temp.name:', temp.name
finally:
    temp.close()
    # Clean up the temporary file yourself
    os.remove(filename)

print
print 'TemporaryFile:'
temp = tempfile.TemporaryFile()
try:
    print 'temp:', temp
    print 'temp.name:', temp.name
finally:
    # Automatically cleans up the file
    temp.close()

This example illustrates the difference in creating a temporary file using a common pattern for making up a name, versus using the TemporaryFile() function. Notice that the file returned by TemporaryFile() has no name.
$ python tempfile_TemporaryFile.py

Building a file name yourself:
temp: <open file '/tmp/guess_my_name.14891.txt', mode 'w+b' at 0x100458270>
temp.name: /tmp/guess_my_name.14891.txt

TemporaryFile:
temp: <open file '<fdopen>', mode 'w+b' at 0x100458780>
temp.name: <fdopen>

By default, the file handle is created with mode 'w+b' so it behaves consistently on all platforms and your program can write to it and read from it.

import os
import tempfile

temp = tempfile.TemporaryFile()
try:
    temp.write('Some data')
    temp.seek(0)
    print temp.read()
finally:
    temp.close()

After writing, you have to rewind the file handle using seek() in order to read the data back from it.

$ python tempfile_TemporaryFile_binary.py

Some data

If you want the file to work in text mode, set mode to 'w+t' when you create it:

import tempfile

f = tempfile.TemporaryFile(mode='w+t')
try:
    f.writelines(['first\n', 'second\n'])
    f.seek(0)
    for line in f:
        print line.rstrip()
finally:
    f.close()

The file handle treats the data as text:

$ python tempfile_TemporaryFile_text.py

first
second

NamedTemporaryFile

There are situations, however, where having a named temporary file is important. If your application spans multiple processes, or even hosts, naming the file is the simplest way to pass it between parts of the application. The NamedTemporaryFile() function creates a file with a name, accessed from the name attribute.

import os
import tempfile

temp = tempfile.NamedTemporaryFile()
try:
    print 'temp:', temp
    print 'temp.name:', temp.name
finally:
    # Automatically cleans up the file
    temp.close()

print 'Exists after close:', os.path.exists(temp.name)

Even though the file is named, it is still removed after the handle is closed.
$ python tempfile_NamedTemporaryFile.py

temp: <open file '<fdopen>', mode 'w+b' at 0x100458270>
temp.name: /var/folders/5q/8gk0wq888xlggz008k8dr7180000hg/T/tmpIIkknb
Exists after close: False

mkdtemp

If you need several temporary files, it may be more convenient to create a single temporary directory and then open all of the files in that directory. To create a temporary directory, use mkdtemp().

import os
import tempfile

directory_name = tempfile.mkdtemp()
print directory_name

# Clean up the directory yourself
os.removedirs(directory_name)

Since the directory is not "opened" per se, you have to remove it yourself when you are done with it.

$ python tempfile_mkdtemp.py

/var/folders/5q/8gk0wq888xlggz008k8dr7180000hg/T/tmpE4plSY

Predicting Names

For debugging purposes, it is useful to be able to include some indication of the origin of the temporary files. While obviously less secure than strictly anonymous temporary files, including a predictable portion in the name lets you find the file to examine it while your program is using it. All of the functions described so far take three arguments to allow you to control the filenames to some degree. Names are generated using the formula:

dir + prefix + random + suffix

where all of the values except random can be passed as arguments to TemporaryFile(), NamedTemporaryFile(), and mkdtemp(). For example:

import tempfile

temp = tempfile.NamedTemporaryFile(suffix='_suffix',
                                   prefix='prefix_',
                                   dir='/tmp',
                                   )
try:
    print 'temp:', temp
    print 'temp.name:', temp.name
finally:
    temp.close()

The prefix and suffix arguments are combined with a random string of characters to build the file name, and the dir argument is taken as-is and used as the location of the new file.
$ python tempfile_NamedTemporaryFile_args.py
temp: <open file '<fdopen>', mode 'w+b' at 0x100458270>
temp.name: /tmp/prefix_SMkGcX_suffix

Temporary File Location

If you don’t specify an explicit destination using the dir argument, the actual path used for the temporary files will vary based on your platform and settings. The tempfile module includes two functions for querying the settings being used at runtime:

import tempfile

print 'gettempdir():', tempfile.gettempdir()
print 'gettempprefix():', tempfile.gettempprefix()

gettempdir() returns the default directory that will hold all of the temporary files and gettempprefix() returns the string prefix for new file and directory names.

$ python tempfile_settings.py
gettempdir(): /var/folders/5q/8gk0wq888xlggz008k8dr7180000hg/T
gettempprefix(): tmp

The value returned by gettempdir() is set based on a straightforward algorithm of looking through a list of locations for the first place the current process can create a file. From the library documentation:

Python searches a standard list of directories and sets tempdir to the first one which the calling user can create files in. The list is:

- The directory named by the TMPDIR environment variable.
- The directory named by the TEMP environment variable.
- The directory named by the TMP environment variable.
- A platform-specific location: on Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order; on all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order.
- As a last resort, the current working directory.

If your program needs to use a global location for all temporary files that you need to set explicitly but do not want to set through one of these environment variables, you can set tempfile.tempdir directly.

import tempfile

tempfile.tempdir = '/I/changed/this/path'
print 'gettempdir():', tempfile.gettempdir()

$ python tempfile_tempdir.py
gettempdir(): /I/changed/this/path

See also

- tempfile - Standard library documentation for this module.
- File Access - More modules for working with files.
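One footnote to the mkdtemp() section above: the files inside the directory are yours to manage, and os.removedirs() only removes empty directories. shutil.rmtree() removes the directory and everything in it in one call. A minimal sketch using only the standard library:

import os
import shutil
import tempfile

directory_name = tempfile.mkdtemp()

# Create a few files inside the temporary directory.
paths = [os.path.join(directory_name, 'data_%d.txt' % i) for i in range(3)]
for i, path in enumerate(paths):
    with open(path, 'w') as f:
        f.write('contents %d' % i)

print(all(os.path.exists(p) for p in paths))   # True

# Remove the directory and all of its contents at once.
shutil.rmtree(directory_name)
print(os.path.exists(directory_name))          # False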
https://pymotw.com/2/tempfile/index.html
CC-MAIN-2018-47
en
refinedweb
In many cases, indices are captured from the user with the ArrayProperty<int> property type. However, this lacks a few key behaviours which a property collecting workspace index information should have. Firstly, there is no automatic validation of the indices with respect to the workspace itself. Therefore users could enter invalid input, e.g. negative numbers, or numbers outside of the range of spectra held by the workspace, unless bounded validators are used. Additionally, the ArrayProperty<int> does not lend itself to a distributed method for accessing workspace data; this would require manual conversion of global indices to local indices on each MPI rank. Finally, any conversion between “index types”, i.e. conversion from spectrum numbers to workspace indices (also known as spectrum indices), must be managed by the algorithm developer. This style of development is very error-prone, particularly in the MPI case, and could lead to inconsistencies across algorithms as each developer may have a different approach to addressing the aforementioned issues. The IndexProperty in Mantid provides a consistent interface for algorithms which need to access a subset of the workspace spectra for processing. The IndexProperty facilitates the retrieval of a set of workspace indices, called a SpectrumIndexSet, once provided with a set of workspace indices or spectrum numbers [1]. Therefore algorithm developers do not need to perform any conversions manually. In situations where data is distributed across several clusters (distributed processing) [2], the underlying IndexInfo object, which is used to obtain a SpectrumIndexSet, hides these MPI-specific details by automatically selecting the correct indices for a specific MPI rank. Therefore access to the workspace data remains unchanged for developers.
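To make the motivation concrete, here is a small stand-alone C++ sketch (illustrative only — the function and lookup map are invented for this example and are not Mantid API) of the kind of manual spectrum-number-to-workspace-index conversion that each algorithm developer would otherwise have to hand-roll:

```cpp
#include <cstddef>
#include <map>
#include <stdexcept>
#include <vector>

// Hypothetical helper, not Mantid code: convert user-supplied spectrum
// numbers into workspace indices via an explicit lookup table.
std::vector<std::size_t> toWorkspaceIndices(
    const std::vector<int> &spectrumNumbers,
    const std::map<int, std::size_t> &spectrumNumberToIndex) {
  std::vector<std::size_t> indices;
  indices.reserve(spectrumNumbers.size());
  for (int num : spectrumNumbers) {
    auto it = spectrumNumberToIndex.find(num);
    // Without this check an invalid spectrum number would silently
    // produce a bad index -- exactly the validation gap described above.
    if (it == spectrumNumberToIndex.end())
      throw std::out_of_range("invalid spectrum number");
    indices.push_back(it->second);
  }
  return indices;
}
```

Every algorithm repeating a snippet like this (and its MPI-aware variant) is the duplication that IndexProperty is designed to remove.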
Unlike other property types in Mantid, the IndexProperty is designed to be used in conjunction with other properties which define the workspace and the input type of the indices which represent the subset of the data. However, developers do not need to concern themselves with maintaining these properties on their own. There are a few special methods in Algorithm which handle this. Namely, Algorithm::declareWorkspaceInputProperties, Algorithm::setWorkspaceInputProperties and Algorithm::getWorkspaceAndIndices [3]. Property declaration is as shown below:

#include "MantidAPI/Algorithm.tcc"

// Declare property with default settings
// IndexType::WorkspaceIndex is default
declareWorkspaceInputProperties<MatrixWorkspace>(
    "InputWorkspace",
    "This is an input workspace with associated index handling");

// Declare all arguments
declareWorkspaceInputProperties<MatrixWorkspace,
                                IndexType::SpectrumNum | IndexType::WorkspaceIndex>(
    "InputWorkspace",
    "This is an input workspace with associated index handling"
    /* optional PropertyMode, LockMode, and validator forwarded to WorkspaceProperty */);

Internally, a WorkspaceProperty is created along with an IndexTypeProperty for managing the workspace and the type of user-defined input index list respectively. Their names are automatically generated based on the property name in the declaration. A toy example algorithm dialog in the GUI would have the following inputs defined:

After properties have been set, client code can retrieve the values of interest from within the algorithm as follows:

// Declare workspace and index set
MatrixWorkspace_sptr inputWs;
SpectrumIndexSet indexSet;
// Method returns a tuple of the workspace
// and index set simultaneously
std::tie(inputWs, indexSet) =
    getWorkspaceAndIndices<MatrixWorkspace>("InputWorkspace");

for (auto index : indexSet) {
  auto &spec = inputWs->getSpectrum(index);
  // do something with spectrum.
}

For setting the property values, there are 4 valid options:

// Set Property with workspace_sptr and string of indices
setWorkspaceInputProperties<MatrixWorkspace, std::string>(
    "InputWorkspace", ws, IndexType::WorkspaceIndex, "1:5")

// Set Property with workspace name and string of indices
setWorkspaceInputProperties<MatrixWorkspace, std::string>(
    "InputWorkspace", "ws", IndexType::WorkspaceIndex, "1:5")

// Set Property with workspace_sptr and vector of indices
setWorkspaceInputProperties<MatrixWorkspace, std::vector<int>>(
    "InputWorkspace", ws, IndexType::WorkspaceIndex,
    std::vector<int>{1, 2, 3, 4, 5})

// Set Property with workspace name and vector of indices
setWorkspaceInputProperties<MatrixWorkspace, std::vector<int>>(
    "InputWorkspace", "ws", IndexType::WorkspaceIndex,
    std::vector<int>{1, 2, 3, 4, 5})
http://developer.mantidproject.org/IndexProperty.html
CC-MAIN-2018-47
en
refinedweb
Received exception while creating connection for pool "PIPPODataSourceJP": ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","DATABASESYS","trigger inform","kglhin: temp")
osbpr1do.log00253:ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sql area","tmp

I am interested in all the strings like PIPPODataSourceJP, to extract from the logs a unique list of Datasources failing.

Let's first grep all the lines:

grep ORA-04031 * > /tmp/allora04031.txt

This doesn't work, it prints the whole line instead of just the pool name:

sed -n '/for pool /,/: ORA-04031/p' /tmp/allora04031.txt

All this sed and awk is bullshit. Let's do it in Python instead, in the Jython shell that ships with WLST:

/opt/oracle/fmw11_1_1_5/wlserver_10.3/common/bin/wlst.sh

import fileinput

start = 'for pool "'
end = '": ORA-04031'
for line in fileinput.input(['/tmp/allora04031.txt']):
    indstart = line.find(start, 0)
    if indstart >= 0:  # find() returns -1 on no match; 0 is a valid position
        indend = line.find(end, indstart)
        print line[indstart + len(start):indend]

Here is the doc

2 comments:

Or you could try this:

perl -nle 'print $1 if /\s+pool\s+"(.*?)":\s+ORA-04031/' /path/to/logfile

IMHO the FIRST quality of code is READABILITY. Perl, awk, sed, are all very powerful but totally unreadable unless you know them very well.
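For completeness, the same extraction can be done in Python with a regular expression, mirroring the perl one-liner from the comments (the sample lines below stand in for /tmp/allora04031.txt):

```python
import re

# Sample log lines standing in for the grepped file.
lines = [
    'Received exception while creating connection for pool '
    '"PIPPODataSourceJP": ORA-04031: unable to allocate 32 bytes',
    'some unrelated line',
]

# Same capture group as the perl version: the name between
# 'for pool "' and '": ORA-04031'.
pattern = re.compile(r'for pool "(.*?)": ORA-04031')

names = set()  # a set gives the unique list of failing datasources
for line in lines:
    m = pattern.search(line)
    if m:
        names.add(m.group(1))

print(sorted(names))  # ['PIPPODataSourceJP']
```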
http://www.javamonamour.org/2012/10/print-text-between-2-strings.html
CC-MAIN-2018-47
en
refinedweb
- NAME
- DESCRIPTION
- SYNOPSIS
- DETAILS
- API
  - new
  - findnodes ($path, $context)
  - findnodes_as_string ($path, $context)
  - findnodes_as_strings ($path, $context)
  - findvalue ($path, $context)
  - findvalues ($path, $context)
  - exists ($path, $context)
  - matches ($node, $path, $context)
  - find ($path, $context)
  - getNodeText ($path)
  - set_namespace ($prefix, $uri)
  - clear_namespaces ()
  - get_namespace ($prefix, $node)
  - set_strict_namespaces ($strict)
  - set_var ($var, $val)
  - get_var ($var)
  - $XML::XPathEngine::Namespaces
  - Node Object Model
- Example
- XPath extension
- SEE ALSO
- AUTHOR
- BUGS
- ACKNOWLEDGEMENTS

NAME

XML::XPathEngine - a re-usable XPath engine for DOM-like trees

DESCRIPTION

The find function takes an XPath expression (a string) and returns either a XML:.

getNodeText ($path)

Returns the text string for a particular node. Returns a string, or undef if the node doesn't exist.

set_namespace ($prefix, $uri)

Sets the namespace prefix mapping to the uri. Normally in XML::XPathEngine the prefixes in XPath node tests

SEE ALSO

AUTHOR

Michel Rodriguez, <mirod@cpan.org>

Most code comes directly from XML::XPath, by Matt Sergeant.
https://metacpan.org/pod/XML::XPathEngine
CC-MAIN-2018-47
en
refinedweb
Getting Started This guide is an introduction to Ecto, the database wrapper and query generator for Elixir. Ecto provides a standardised API and a set of abstractions for talking to all the different kinds of databases, so that Elixir developers can query whatever database they’re using by employing similar constructs. In this guide, we’re going to learn some basics about Ecto, such as creating, reading, updating and destroying records from a PostgreSQL database. If you want to see the code from this guide, you can view it at ecto/examples/friends on GitHub. This guide will require you to have set up PostgreSQL beforehand. Adding Ecto to an application To start off with, we’ll generate a new Elixir application by running this command: mix new friends --sup The --sup option ensures that this application has a supervision tree, which we’ll need for Ecto a little later on. To add Ecto to this application, there are a few steps that we need to take. The first step will be adding Ecto and a driver called Postgrex to our mix.exs file, which we’ll do by changing the deps definition in that file to this: defp deps do [ {:ecto_sql, "~> 3.0"}, {:postgrex, ">= 0.0.0"} ] end Ecto provides the common querying API, but we need the Postgrex driver installed too, as that is what Ecto uses to speak in terms a PostgreSQL database can understand. Ecto talks to its own Ecto.Adapters.Postgres module, which then in turn talks to the postgrex package to talk to PostgreSQL. To install these dependencies, we will run this command: mix deps.get The Postgrex application will receive queries from Ecto and execute them against our database. If we didn’t do this step, we wouldn’t be able to do any querying at all. That’s the first two steps taken now. We have installed Ecto and Postgrex as dependencies of our application. We now need to set up some configuration for Ecto so that we can perform actions on a database from within the application’s code.
We can set up this configuration by running this command: mix ecto.gen.repo -r Friends.Repo This command will generate the configuration required to connect to a database. The first bit of configuration is in config/config.exs: config :friends, Friends.Repo, database: "friends_repo", username: "user", password: "pass", hostname: "localhost" NOTE: Your PostgreSQL database may be set up to - not require a username and password. If the above configuration doesn’t work, try removing the username and password fields, or setting them both to “postgres”. - be running on a non-standard port. The default port is 5432. You can specify your specific port by adding it to the config: e.g. port: 15432. This configures how Ecto will connect to our database, called “friends”. Specifically, it configures a “repo”. More information about Ecto.Repo can be found in its documentation. The Friends.Repo module is defined in lib/friends/repo.ex by our mix ecto.gen.repo command: defmodule Friends.Repo do use Ecto.Repo, otp_app: :friends, adapter: Ecto.Adapters.Postgres end This module is what we’ll be using to query our database shortly. It uses the Ecto.Repo module, and the otp_app tells Ecto which Elixir application it can look for database configuration in. In this case, we’ve specified that it is the :friends application where Ecto can find that configuration and so Ecto will use the configuration that was set up in config/config.exs. Finally, we configure the database :adapter to Postgres. The final piece of configuration is to set up the Friends.Repo as a supervisor within the application’s supervision tree, which we can do in lib/friends/application.ex, inside the start/2 function: def start(_type, _args) do children = [ Friends.Repo, ] ... This piece of configuration will start the Ecto process which receives and executes our application’s queries. Without it, we wouldn’t be able to query the database at all!
There’s one final bit of configuration that we’ll need to add ourselves, since the generator does not add it. Underneath the configuration in config/config.exs, add this line: config :friends, ecto_repos: [Friends.Repo] This tells our application about the repo, which will allow us to run commands such as mix ecto.create very soon. We’ve now configured our application so that it’s able to make queries to our database. Let’s now create our database, add a table to it, and then perform some queries. Test Environment Setup The test environment setup is described here. Setting up the database To be able to query a database, it first needs to exist. We can create the database with this command: mix ecto.create If the database has been created successfully, then you will see this message: The database for Friends.Repo has been created. NOTE: If you get an error, you should try changing your configuration in config/config.exs, as it may be an authentication error. A database by itself isn’t very queryable, so we will need to create a table within that database. To do that, we’ll use what’s referred to as a migration. If you’ve come from Active Record (or similar), you will have seen these before. A migration is a single step in the process of constructing your database. Let’s create a migration now with this command: mix ecto.gen.migration create_people This command will generate a brand new migration file in priv/repo/migrations, which is empty by default: defmodule Friends.Repo.Migrations.CreatePeople do use Ecto.Migration def change do end end Let’s add some code to this migration to create a new table called “people”, with a few columns in it: defmodule Friends.Repo.Migrations.CreatePeople do use Ecto.Migration def change do create table(:people) do add :first_name, :string add :last_name, :string add :age, :integer end end end This new code will tell Ecto to create a new table called people, and add three new fields: first_name, last_name and age to that table.
The types of these fields are string and integer. (The different types that Ecto supports are covered in the Ecto.Schema documentation.) NOTE: The naming convention for tables in Ecto databases is to use a pluralized name. To run this migration and create the people table in our database, we will run this command: mix ecto.migrate If we found out that we made a mistake in this migration, we could run mix ecto.rollback to undo the changes in the migration. We could then fix the changes in the migration and run mix ecto.migrate again. If we ran mix ecto.rollback now, it would delete the table that we just created. We now have a table created in our database. The next step that we’ll need to do is to create the schema. Creating the schema The schema is an Elixir representation of data from our database. Schemas are commonly associated with a database table, however they can be associated with a database view as well. Let’s create the schema within our application at lib/friends/person.ex: defmodule Friends.Person do use Ecto.Schema schema "people" do field :first_name, :string field :last_name, :string field :age, :integer end end This defines the schema from the database that this schema maps to. In this case, we’re telling Ecto that the Friends.Person schema maps to the people table in the database, and the first_name, last_name and age fields in that table. The second argument passed to field tells Ecto how we want the information from the database to be represented in our schema. We’ve called this schema Person because the naming convention in Ecto for schemas is a singularized name. We can play around with this schema in an IEx session by starting one up with iex -S mix and then running this code in it: person = %Friends.Person{} This code will give us a new Friends.Person struct, which will have nil values for all the fields.
We can set values on these fields by generating a new struct: person = %Friends.Person{age: 28} Or with syntax like this: person = %{person | age: 28} We can retrieve values using this syntax: person.age # => 28 Let’s take a look at how we can insert data into the database. Inserting data We can insert a new record into our people table with this code: person = %Friends.Person{} Friends.Repo.insert(person) To insert the data into our database, we call insert on Friends.Repo, which is the module that uses Ecto to talk to our database. This function tells Ecto that we want to insert a new Friends.Person record into the database corresponding with Friends.Repo. The person struct here represents the data that we want to insert into the database. A successful insertion will return a tuple, like so: {:ok, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: nil, first_name: nil, id: 1, last_name: nil}} The :ok atom can be used for pattern matching purposes to ensure that the insertion succeeds. A situation where the insertion may not succeed is if you have a constraint on the database itself, for instance a unique constraint on one of the fields. You may wish to pattern match on the tuple in order to refer to the record inserted into the database: {:ok, person} = Friends.Repo.insert person Validating changes In Ecto, you may wish to validate changes before they go to the database. For instance, you may wish that a person has both a first name and a last name before a record can be entered into the database. For this, Ecto has changesets. Let’s add a changeset to our Friends.Person module inside lib/friends/person.ex now: def changeset(person, params \\ %{}) do person |> Ecto.Changeset.cast(params, [:first_name, :last_name, :age]) |> Ecto.Changeset.validate_required([:first_name, :last_name]) end This changeset takes a person and a set of params, which are to be the changes to apply to this person.
The changeset function first casts the first_name, last_name and age keys from the parameters passed in to the changeset. Casting tells the changeset what parameters are allowed to be passed through in this changeset, and anything not in the list will be ignored. On the next line, we call validate_required which says that, for this changeset, we expect first_name and last_name to have values specified. Let’s use this changeset to attempt to create a new record without a first_name and last_name: person = %Friends.Person{} changeset = Friends.Person.changeset(person, %{}) Friends.Repo.insert(changeset) On the first line here, we get a struct from the Friends.Person module. We know what that does, because we saw it not too long ago. On the second line we do something brand new: we define a changeset. This changeset says that on the specified person object, we’re looking to make some changes. In this case, we’re not looking to change anything at all. On the final line, rather than inserting the person, we insert the changeset. The changeset knows about the person, the changes and the validation rules that must be met before the data can be entered into the database. When this third line runs, we’ll see this: {:error, #Ecto.Changeset<action: :insert, changes: %{}, errors: [first_name: "can't be blank", last_name: "can't be blank"], data: #Friends.Person<>, valid?: false>} Just like the last time we did an insertion, this returns a tuple. This time however, the first element in the tuple is :error, which indicates something bad happened. The specifics of what happened are included in the changeset which is returned. We can access these by doing some pattern matching: {:error, changeset} = Friends.Repo.insert(changeset) Then we can get to the errors by doing changeset.errors: [first_name: "can't be blank", last_name: "can't be blank"] And we can ask the changeset itself if it is valid, even before doing an insertion: changeset.valid?
#=> false Since this changeset has errors, no new record was inserted into the people table. Let’s try now with some valid data. person = %Friends.Person{} changeset = Friends.Person.changeset(person, %{first_name: "Ryan", last_name: "Bigg"}) We start out here with a normal Friends.Person struct. We then create a changeset for that person which has a first_name and a last_name parameter specified. At this point, we can ask the changeset if it has errors: changeset.errors #=> [] And we can ask if it’s valid or not: changeset.valid? #=> true The changeset does not have errors, and is valid. Therefore if we try to insert this changeset it will work: Friends.Repo.insert(changeset) #=> {:ok, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: nil, first_name: "Ryan", id: 3, last_name: "Bigg"}} Due to Friends.Repo.insert returning a tuple, we can use a case to determine different code paths depending on what happens: case Friends.Repo.insert(changeset) do {:ok, person} -> # do something with person {:error, changeset} -> # do something with changeset end NOTE: changeset.valid? will not check constraints (such as uniqueness_constraint). For that, you will need to attempt to do an insertion and check for errors from the database. It’s for this reason it’s best practice to try inserting data and validate the returned tuple from Friends.Repo.insert to get the correct errors, as prior to insertion the changeset will only contain validation errors from the application itself. If the insertion of the changeset succeeds, then you can do whatever you wish with the person returned in that result. If it fails, then you have access to the changeset and its errors. In the failure case, you may wish to present these errors to the end user. 
The errors in the changeset are a keyword list that looks like this: [first_name: {"can't be blank", []}, last_name: {"can't be blank", []}] The first element of the tuple is the validation message, and the second element is a keyword list of options for the validation message. The validate_required/3 validations don’t return any options, but other methods such as validate_length/3 do. Imagine that we had a field called bio that we were validating, and that field has to be longer than 15 characters. This is what would be returned: [first_name: {"can't be blank", []}, last_name: {"can't be blank", []}, bio: {"should be at least %{count} characters", [count: 15]}] To display these error messages in a human friendly way, we can use Ecto.Changeset.traverse_errors/2: traverse_errors(changeset, fn {msg, opts} -> Enum.reduce(opts, msg, fn {key, value}, acc -> String.replace(acc, "%{#{key}}", to_string(value)) end) end) This will return the following for the errors shown above: %{ first_name: ["can't be blank"], last_name: ["can't be blank"], bio: ["should be at least 15 characters"], } One more final thing to mention here: you can trigger an exception to be thrown by using Friends.Repo.insert!/2. If a changeset is invalid, you will see an Ecto.InvalidChangesetError exception. Here’s a quick example of that: Friends.Repo.insert! Friends.Person.changeset(%Friends.Person{}, %{first_name: "Ryan"}) ** (Ecto.InvalidChangesetError) could not perform insert because changeset is invalid. * Changeset changes %{first_name: "Ryan"} * Changeset params %{"first_name" => "Ryan"} * Changeset errors [last_name: "can't be blank"] lib/ecto/repo/schema.ex:111: Ecto.Repo.Schema.insert!/4 This exception shows us the changes from the changeset, and how the changeset is invalid. This can be useful if you want to insert a bunch of data and then have an exception raised if that data is not inserted correctly at all. 
Now that we’ve covered inserting data into the database, let’s look at how we can pull that data back out. Our first queries Querying a database requires two steps in Ecto. First, we must construct the query and then we must execute that query against the database by passing the query to the repository. Before we do this, let’s re-create the database for our app and setup some test data. To re-create the database, we’ll run these commands: mix ecto.drop mix ecto.create mix ecto.migrate Then to create the test data, we’ll run this in an iex -S mix session: people = [ %Friends.Person{first_name: "Ryan", last_name: "Bigg", age: 28}, %Friends.Person{first_name: "John", last_name: "Smith", age: 27}, %Friends.Person{first_name: "Jane", last_name: "Smith", age: 26}, ] Enum.each(people, fn (person) -> Friends.Repo.insert(person) end) This code will create three new people in our database, Ryan, John and Jane. Note here that we could’ve used a changeset to validate the data going into the database, but the choice was made not to use one. We’ll be querying for these people in this section. Let’s jump in! Fetching a single record Let’s start off with fetching just one record from our people table: Friends.Person |> Ecto.Query.first That code will generate an Ecto.Query, which will be this: #Ecto.Query<from p in Friends.Person, order_by: [asc: p.id], limit: 1> The code between the angle brackets <...> here shows the Ecto query which has been constructed. We could construct this query ourselves with almost exactly the same syntax: require Ecto.Query Ecto.Query.from p in Friends.Person, order_by: [asc: p.id], limit: 1 We need to require Ecto.Query here to enable the macros from that module. Then it’s a matter of calling the from function from Ecto.Query and passing in the code from between the angle brackets. As we can see here, Ecto.Query.first saves us from having to specify the order and limit for the query. 
To execute the query that we’ve just constructed, we can call Friends.Repo.one: Friends.Person |> Ecto.Query.first |> Friends.Repo.one The one function retrieves just one record from our database and returns a new struct from the Friends.Person module: %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 28, first_name: "Ryan", id: 1, last_name: "Bigg"} Similar to first, there is also Friends.Person |> Ecto.Query.last |> Friends.Repo.one #=> %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 26, first_name: "Jane", id: 3, last_name: "Smith"} The Ecto.Repo.one function will only return a struct if there is one record in the result from the database. If there is more than one record returned, an Ecto.MultipleResultsError exception will be thrown. Some code that would cause that issue to happen is: Friends.Person |> Friends.Repo.one We’ve left out the Ecto.Query.first here, and so there is no limit or order clause applied to the executed query. We’ll see the executed query in the debug log: [timestamp] [debug] SELECT p0."id", p0."first_name", p0."last_name", p0."age" FROM "people" AS p0 [] OK query=1.8ms Then immediately after that, we will see the Ecto.MultipleResultsError exception: ** (Ecto.MultipleResultsError) expected at most one result but got 3 in query: from p in Friends.Person lib/ecto/repo/queryable.ex:67: Ecto.Repo.Queryable.one/4 This happens because Ecto doesn’t know what one record out of all the records returned that we want. Ecto will only return a result if we are explicit in our querying about which result we want. If there is no record which matches the query, one will return nil. 
Fetching all records To fetch all records from the schema, Ecto provides the all function: Friends.Person |> Friends.Repo.all This will return a Friends.Person struct representation of all the records that currently exist within our people table: [%Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 28, first_name: "Ryan", id: 1, last_name: "Bigg"}, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 27, first_name: "John", id: 2, last_name: "Smith"}, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 26, first_name: "Jane", id: 3, last_name: "Smith"}] Fetch a single record based on ID To fetch a record based on its ID, you use the get function: Friends.Person |> Friends.Repo.get(1) #=> %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 28, first_name: "Ryan", id: 1, last_name: "Bigg"} Fetch a single record based on a specific attribute If we want to get a record based on something other than the id attribute, we can use get_by: Friends.Person |> Friends.Repo.get_by(first_name: "Ryan") #=> %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 28, first_name: "Ryan", id: 1, last_name: "Bigg"} Filtering results If we want to get multiple records matching a specific attribute, we can use where: Friends.Person |> Ecto.Query.where(last_name: "Smith") |> Friends.Repo.all [%Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 27, first_name: "John", id: 2, last_name: "Smith"}, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 26, first_name: "Jane", id: 3, last_name: "Smith"}] If we leave off the Friends.Repo.all on the end of this, we will see the query Ecto generates: #Ecto.Query<from p in Friends.Person, where: p.last_name == "Smith"> We can also use this query syntax to fetch these same records: Ecto.Query.from(p in Friends.Person, where: p.last_name == "Smith") |> Friends.Repo.all One important thing to note with both query syntaxes is that they require variables to be pinned, using the pin 
operator ( ^). Otherwise, this happens: last_name = "Smith" Friends.Person |> Ecto.Query.where(last_name: last_name) |> Friends.Repo.all ** (Ecto.Query.CompileError) variable `last_name` is not a valid query expression. Variables need to be explicitly interpolated in queries with ^ expanding macro: Ecto.Query.where/2 iex:1: (file) (elixir) expanding macro: Kernel.|>/2 iex:1: (file) The same will happen in the longer query syntax too: Ecto.Query.from(p in Friends.Person, where: p.last_name == last_name) |> Friends.Repo.all ** (Ecto.Query.CompileError) variable `last_name` is not a valid query expression. Variables need to be explicitly interpolated in queries with ^ expanding macro: Ecto.Query.where/3 iex:1: (file) expanding macro: Ecto.Query.from/2 iex:1: (file) (elixir) expanding macro: Kernel.|>/2 iex:1: (file) To get around this, we use the pin operator ( ^): last_name = "Smith" Friends.Person |> Ecto.Query.where(last_name: ^last_name) |> Friends.Repo.all Or: last_name = "Smith" Ecto.Query.from(p in Friends.Person, where: p.last_name == ^last_name) |> Friends.Repo.all The pin operator instructs the query builder to use parameterised SQL queries protecting against SQL injection. Composing Ecto queries Ecto queries don’t have to be built in one spot. They can be built up by calling Ecto.Query functions on existing queries. For instance, if we want to find all people with the last name “Smith”, we can do: query = Friends.Person |> Ecto.Query.where(last_name: "Smith") If we want to scope this down further to only people with the first name of “Jane”, we can do this: query = query |> Ecto.Query.where(first_name: "Jane") Our query will now have two where clauses in it: #Ecto.Query<from p in Friends.Person, where: p.last_name == "Smith", where: p.first_name == "Jane"> This can be useful if you want to do something with the first query, and then build off that query later on. 
Updating records Updating records in Ecto requires us to first fetch a record from the database. We then create a changeset from that record and the changes we want to make to that record, and then call the Ecto.Repo.update function. Let’s fetch the first person from our database and change their age. First, we’ll fetch the person: person = Friends.Person |> Ecto.Query.first |> Friends.Repo.one Next, we’ll build a changeset. We need to build a changeset because if we just create a new Friends.Person struct with the new age, Ecto wouldn’t be able to know that the age has changed without inspecting the database. Let’s build that changeset: changeset = Friends.Person.changeset(person, %{age: 29}) This changeset will inform the database that we want to update the record to have the age set to 29. To tell the database about the change we want to make, we run this command: Friends.Repo.update(changeset) Just like Friends.Repo.insert, Friends.Repo.update will return a tuple: {:ok, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:loaded>, age: 29, first_name: "Ryan", id: 1, last_name: "Bigg"}} If the changeset fails for any reason, the result of Friends.Repo.update will be {:error, changeset}. We can see this in action by passing through a blank first_name in our changeset’s parameters: changeset = Friends.Person.changeset(person, %{first_name: ""}) Friends.Repo.update(changeset) #=> {:error, #Ecto.Changeset<action: :update, changes: %{first_name: ""}, errors: [first_name: "can't be blank"], data: #Friends.Person<>, valid?: false>} This means that you can also use a case statement to do different things depending on the outcome of the update function: case Friends.Repo.update(changeset) do {:ok, person} -> # do something with person {:error, changeset} -> # do something with changeset end Similar to insert!, there is also update! which will raise an exception if the changeset is invalid: changeset = Friends.Person.changeset(person, %{first_name: ""}) Friends.Repo.update! 
changeset ** (Ecto.InvalidChangesetError) could not perform update because changeset is invalid. * Changeset changes %{first_name: ""} * Changeset params %{"first_name" => ""} * Changeset errors [first_name: {"can't be blank", []}] lib/ecto/repo/schema.ex:132: Ecto.Repo.Schema.update!/4 Deleting records We’ve now covered creating ( insert), reading ( get, get_by, where) and updating records. The last thing that we’ll cover in this guide is how to delete a record using Ecto. Similar to updating, we must first fetch a record from the database and then call Friends.Repo.delete to delete that record: person = Friends.Repo.get(Friends.Person, 1) Friends.Repo.delete(person) #=> {:ok, %Friends.Person{__meta__: #Ecto.Schema.Metadata<:deleted>, age: 29, first_name: "Ryan", id: 2, last_name: "Bigg"}} Similar to insert and update, delete returns a tuple. If the deletion succeeds, then the first element in the tuple will be :ok, but if it fails then it will be an :error.
https://hexdocs.pm/ecto/getting-started.html
This is my first time making video games and I decided to use Unity mainly because it was free. I am learning about terrain but my main problem is how to make the character you're using move and how to make the camera follow him/her. Can someone help me out and explain it very simply, because like I said this is my first time? There is a prefab called "First Person Controller" inside Unity; search "First Person Controller" in the project panel, drag it into the scene view, and play the game. Thank you so much, that helped a lot Thanks, I am new and was having trouble with that too thank you, that really helped me too because I am new to Unity Thanks, used this and it worked great, added some up and down code as well. Answer by Wentzel · Nov 02, 2011 at 08:48 PM var MoveSpeed : float = 0.5; function Update () { if(Input.GetKey(KeyCode.UpArrow)){ transform.Translate(Vector3(MoveSpeed, 0, 0)); } } I couldn't test it as I'm using my phone. This is just a simple way of moving your object; if you change "Translate" into "Rotate" you can, well, rotate your object. I'm new myself :P I would recommend this site since you're new, and it is a great place to start. Answer by Penaloza12 · Jul 28, 2015 at 09:51 PM This should work for you. This is a C# script: using UnityEngine; using System.Collections; public class Movement : MonoBehaviour { public int movementspeed = 100; // Use this for initialization void Start () { } // Update is called once per frame void Update () { if (Input.GetKey (KeyCode.A)) { transform.Translate (Vector3.left * movementspeed * Time.deltaTime); } if(Input.GetKey (KeyCode.D)) { transform.Translate (Vector3.right * movementspeed * Time.deltaTime); } } } Answer by CreativeStorm · Nov 02, 2011 at 08:48 PM YouTube can become a good friend to you when learning Unity - there are lots of tutorials ;) a quick search brought up plenty; there are many more and maybe some better - watching a couple of them will point you in the right direction. 
You can find tutorials everywhere on the web; most are really simple, quick and nice to get familiar with Unity ;) Ask Google. It is very useful to make my character move. Yes, it is exactly the right concept.
http://answers.unity3d.com/questions/182391/how-to-make-your-character-move.html
Levenshtein Distance Introduction This chapter covers the Levenshtein distance and presents some Python implementations for this measure. There are lots of use cases for the Levenshtein distance. The Levenshtein distance and the underlying ideas are widely used in areas like computer science, computational linguistics, and even bioinformatics, molecular biology and DNA analysis. You can even measure the similarity of melodies or rhythms in music1. The Levenshtein distance has widely permeated our everyday life. Whenever you use a program or an application with some form of spell checking and error correction, the programmers most likely will have used "edit distance" or, as it is also called, "Levenshtein distance". You might already have encountered another possible use case for this concept: Imagine that you are using a Python dictionary, in which you use strings as keys. Let us look at the following example dictionary with city names of the United States, which are often misspelled: cities = {"Pittsburgh":"Pennsylvania", "Tucson":"Arizona", "Cincinnati":"Ohio", "Albuquerque":"New Mexico", "Culpeper":"Virginia", "Asheville":"North Carolina", "Worcester":"Massachusetts", "Manhattan":"New York", "Phoenix":"Arizona", "Niagara Falls":"New York"} So, trying to get the corresponding state names via the following dictionary accesses will raise exceptions: cities["Tuscon"] cities["Pittsburg"] cities["Cincinati"] cities["Albequerque"] If a human reader looks at these misspellings, he or she will have no problem in recognizing the city you have in mind. The Python dictionary, on the other hand, is pedantic and unforgiving. It only accepts a key if it is exactly identical. The question is to what degree two strings are similar. What we need is a string similarity metric or a measure for the "distance" of strings. A string metric is a metric that measures the distance between two text strings. 
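As an aside that is not part of the original chapter: before implementing a distance metric ourselves, note that Python's standard library already ships an approximate string matcher in difflib. The following sketch (the helper name lookup_city is made up for illustration) shows how the misspelled city names above could be resolved:

```python
import difflib

# A subset of the misspelling-prone dictionary from the text
cities = {"Pittsburgh": "Pennsylvania", "Tucson": "Arizona",
          "Cincinnati": "Ohio", "Albuquerque": "New Mexico"}

def lookup_city(name):
    """Return the state for name, tolerating small misspellings."""
    matches = difflib.get_close_matches(name, list(cities), n=1)
    if not matches:
        raise KeyError(name)
    return cities[matches[0]]

print(lookup_city("Tuscon"))       # Arizona
print(lookup_city("Albequerque"))  # New Mexico
```

difflib uses its own similarity ratio rather than the Levenshtein distance, but it addresses the same fuzzy-lookup problem.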
One of the best known string metrics is the so-called Levenshtein Distance, also known as Edit Distance. It counts the number of insertions, deletions and substitutions needed in order to transform one string into another one. The Minimum Edit Distance or Levenshtein Distance The minimum edit distance between two strings is the minimum number of editing operations needed to convert one string into another. The editing operations can consist of insertions, deletions and substitutions. The simplest set of edit operations can be defined as: - Insertion of a single symbol. This means that we add a character to a string s. Example: If we have the string s = "Manhatan", we can insert the character "t" to get the correct spelling: >>> s = s[:5] + "t" + s[5:] >>> print(s) Manhattan - Deletion of a single symbol Example: >>> s = s[:2] + s[3:] >>> s 'Manhattan' - Substitution of a single symbol In the following example, we have to change the letter "o" into the letter "a" to get the correct spelling: >>> s = s[:7] + "a" + s[8:] >>> s 'Manhattan' The minimum edit distance between the two strings "Mannhaton" and "Manhattan" corresponds to the value 3, as we need three basic editing operations to transform the first one into the second one: >>> s = s[:2] + s[3:] # deletion >>> s 'Manhaton' >>> s = s[:5] + "t" + s[5:] # insertion >>> s 'Manhatton' >>> s = s[:7] + "a" + s[8:] # substitution >>> s 'Manhattan' We can assign a weight or costs to each of these edit operations, e.g. setting each of them to 1. It is also possible to argue that substitutions should be more expensive than insertions or deletions, so sometimes the costs for substitutions are set to 2. 
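The three edit steps can be chained into one small script; the starting string "Mannhaton" is taken from the example above:

```python
s = "Mannhaton"
s = s[:2] + s[3:]        # deletion of the duplicated "n"
print(s)                 # Manhaton
s = s[:5] + "t" + s[5:]  # insertion of "t"
print(s)                 # Manhatton
s = s[:7] + "a" + s[8:]  # substitution of "o" by "a"
print(s)                 # Manhattan
```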
Mathematical Definition of the Levenshtein Distance The Levenshtein distance between two strings a and b is given by leva,b(len(a), len(b)) where leva,b(i, j) is equal to - max(i, j) if min(i, j)=0 - otherwise: min(leva,b(i-1, j) + 1, leva,b(i, j-1) + 1, leva,b(i-1, j-1) + 1ai≠bj) where 1ai≠bj is the indicator function equal to 0 when ai=bj and equal to 1 otherwise, and leva,b(i, j) is the distance between the first i characters of a and the first j characters of b. The Levenshtein distance has the following properties: - It is zero if and only if the strings are equal. - It is at least the difference of the sizes of the two strings. - It is at most the length of the longer string. - Triangle inequality: The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string. Recursive Levenshtein Function in Python The following Python function implements the Levenshtein distance in a recursive way:

def LD(s, t):
    if s == "":
        return len(t)
    if t == "":
        return len(s)
    if s[-1] == t[-1]:
        cost = 0
    else:
        cost = 1
    res = min([LD(s[:-1], t)+1,
               LD(s, t[:-1])+1,
               LD(s[:-1], t[:-1]) + cost])
    return res

print(LD("Python", "Peithen"))

The above Python code returned the following result: 3 This recursive implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings over and over again. We count the number of calls in the following version by using a decorator function. If you don't know them, you can learn about them in our chapter on Memoization and Decorators:

from collections import Counter

def call_counter(func):
    def helper(*args, **kwargs):
        helper.calls += 1
        key = str(args) + str(kwargs)
        helper.c[key] += 1
        return func(*args, **kwargs)
    helper.c = Counter()
    helper.calls = 0
    helper.__name__ = func.__name__
    return helper

@call_counter
def LD(s, t):
    if s == "":
        return len(t)
    if t == "":
        return len(s)
    if s[-1] == t[-1]:
        cost = 0
    else:
        cost = 1
    res = min([LD(s[:-1], t)+1,
               LD(s, t[:-1])+1,
               LD(s[:-1], t[:-1]) + cost])
    return res

print(LD("Python", "Peithen"))
print("LD was called " + str(LD.calls) + " times!")
print(LD.c.most_common())

This gets us the following: 3 LD was called 29737 times! 
[("('', 'P'){}", 5336), ("('P', ''){}", 4942), ("('', ''){}", 3653), ("('P', 'P'){}", 3653), ("('', 'Pe'){}", 2364), ("('P', 'Pe'){}", 1683), ("('Py', ''){}", 1666), ("('Py', 'P'){}", 1289), ("('', 'Pei'){}", 912), ("('Py', 'Pe'){}", 681), ("('P', 'Pei'){}", 681), ("('Pyt', ''){}", 462), ("('Pyt', 'P'){}", 377), ("('Py', 'Pei'){}", 321), ("('', 'Peit'){}", 292), ("('Pyt', 'Pe'){}", 231), ("('P', 'Peit'){}", 231), ("('Py', 'Peit'){}", 129), ("('Pyt', 'Pei'){}", 129), ("('Pyth', ''){}", 98), ("('Pyth', 'P'){}", 85), ("('', 'Peith'){}", 72), ("('Pyt', 'Peit'){}", 63), ("('Pyth', 'Pe'){}", 61), ("('P', 'Peith'){}", 61), ("('Py', 'Peith'){}", 41), ("('Pyth', 'Pei'){}", 41), ("('Pyth', 'Peit'){}", 25), ("('Pyt', 'Peith'){}", 25), ("('Pytho', ''){}", 14), ("('Pyth', 'Peith'){}", 13), ("('Pytho', 'P'){}", 13), ("('', 'Peithe'){}", 12), ("('Pytho', 'Pe'){}", 11), ("('P', 'Peithe'){}", 11), ("('Py', 'Peithe'){}", 9), ("('Pytho', 'Pei'){}", 9), ("('Pyt', 'Peithe'){}", 7), ("('Pytho', 'Peit'){}", 7), ("('Pyth', 'Peithe'){}", 5), ("('Pytho', 'Peith'){}", 5), ("('Pytho', 'Peithe'){}", 3), ("('Python', 'Pei'){}", 1), ("('Python', 'Peithe'){}", 1), ("('', 'Peithen'){}", 1), ("('P', 'Peithen'){}", 1), ("('Pytho', 'Peithen'){}", 1), ("('Py', 'Peithen'){}", 1), ("('Python', 'P'){}", 1), ("('Python', 'Peit'){}", 1), ("('Pyt', 'Peithen'){}", 1), ("('Pyth', 'Peithen'){}", 1), ("('Python', 'Peith'){}", 1), ("('Python', ''){}", 1), ("('Python', 'Pe'){}", 1), ("('Python', 'Peithen'){}", 1)] We can see that this recursive function is highly inefficient. The Levenshtein distance of the string s="" and t="P" was calculated 5336 times. 
In the following version we add some "memory" to our recursive Levenshtein function by adding a dictionary memo: def call_counter(func): def helper(*args, **kwargs): helper.calls += 1 return func(*args, **kwargs) helper.calls = 0 helper.__name__= func.__name__ return helper memo = {} @call_counter def levenshtein(s, t): if s == "": return len(t) if t == "": return len(s) cost = 0 if s[-1] == t[-1] else 1 i1 = (s[:-1], t) if not i1 in memo: memo[i1] = levenshtein(*i1) i2 = (s, t[:-1]) if not i2 in memo: memo[i2] = levenshtein(*i2) i3 = (s[:-1], t[:-1]) if not i3 in memo: memo[i3] = levenshtein(*i3) res = min([memo[i1]+1, memo[i2]+1, memo[i3]+cost]) return res print(levenshtein("Python", "Pethno")) print("The function was called " + str(levenshtein.calls) + " times!")The previous code returned the following output: 3 The function was called 49 times! The previous recursive version is now efficient, but it has a design flaw in it. We polluted the code with the statements to update our dictionary memo. Of course, the design is a lot better if we do not pollute our code by adding the logic for saving the values into our Levenshtein function. We can also "outsource" this code into a decorator. 
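An aside that is not in the original text: the standard library offers the same kind of memoization ready-made via functools.lru_cache, so the manual memo dictionary can be replaced by a single decorator line. A sketch using the recursive body from above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def levenshtein(s, t):
    if s == "":
        return len(t)
    if t == "":
        return len(s)
    cost = 0 if s[-1] == t[-1] else 1
    return min(levenshtein(s[:-1], t) + 1,          # deletion
               levenshtein(s, t[:-1]) + 1,          # insertion
               levenshtein(s[:-1], t[:-1]) + cost)  # substitution

print(levenshtein("Python", "Peithen"))  # 3
```

Note that lru_cache keys on the argument tuple, exactly like the memo dictionary, and levenshtein.cache_info() reports the hit/miss statistics.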
The following version uses a decorator "memoize" to save these values: def call_counter(func): def helper(*args, **kwargs): helper.calls += 1 return func(*args, **kwargs) helper.calls = 0 helper.__name__= func.__name__ return helper def memoize(func): mem = {} def memoizer(*args, **kwargs): key = str(args) + str(kwargs) if key not in mem: mem[key] = func(*args, **kwargs) return mem[key] return memoizer @call_counter @memoize def levenshtein(s, t): if s == "": return len(t) if t == "": return len(s) if s[-1] == t[-1]: cost = 0 else: cost = 1 res = min([levenshtein(s[:-1], t)+1, levenshtein(s, t[:-1])+1, levenshtein(s[:-1], t[:-1]) + cost]) return res print(levenshtein("Python", "Peithen")) print("The function was called " + str(levenshtein.calls) + " times!")The code above returned the following: 3 The function was called 127 times! The additional calls come from the fact that we have three unconditional calls as arguments of the function "min". Iterative Computation of the Levenshtein Distance To compute the Levenshtein distance in a non-recursive way, we use a matrix containing the Levenshtein distances between all prefixes of the first string and all prefixes of the second one. We can dynamically compute the values in this matrix. The last value computed will be the distance between the two full strings. This is an algorithmic example of bottom-up dynamic programming. The algorithm works like this: We set the cost for an insertion, a deletion and a substitution to 1. We want to calculate the distance between two strings s and t with len(s) == m and len(t) == n. A matrix D is used, which contains in the (i,j)-cell the Levenshtein distance between s[:i] and t[:j]. The values of the matrix will be calculated starting with the upper left corner and ending with the lower right corner. We start with filling in the base cases, i.e. the row and the column with the index 0. 
Calculation in this case means that we fill the row with index 0 with the lengths of the substrings of t and respectively fill the column with the index 0 with the lengths of the substrings of s. The values of all the other elements of the matrix only depend on the values of their left neighbour, the top neighbour and the top left one. The calculation of D(i,j) for both i and j greater than 0 works like this: D(i,j) means that we are calculating the Levenshtein distance of the prefixes s[0:i] and t[0:j]. If the last characters of these prefixes are equal, the edit distance corresponds to the distance of the prefixes shortened by one character on each side, which may be empty if s or t consists of only one character, which means that we will use the values from the 0th column or row. If the last characters of s[0:i] and t[0:j] are not equal, the edit distance D[i,j] will be set to 1 + min(D[i, j-1], D[i-1, j], D[i-1, j-1]). We illustrate this in the following diagram:

def iterative_levenshtein(s, t):
    rows = len(s)+1
    cols = len(t)+1
    dist = [[0 for x in range(cols)] for x in range(rows)]
    # source prefixes can be transformed into empty strings
    # by deletions:
    for row in range(1, rows):
        dist[row][0] = row
    # target prefixes can be created from an empty source string
    # by inserting the characters
    for col in range(1, cols):
        dist[0][col] = col
    for col in range(1, cols):
        for row in range(1, rows):
            if s[row-1] == t[col-1]:
                cost = 0
            else:
                cost = 1
            dist[row][col] = min(dist[row-1][col] + 1,      # deletion
                                 dist[row][col-1] + 1,      # insertion
                                 dist[row-1][col-1] + cost) # substitution
    for r in range(rows):
        print(dist[r])
    return dist[row][col]

print(iterative_levenshtein("flaw", "lawn"))

After having executed the Python code above we received the following: [0, 1, 2, 3, 4] [1, 1, 2, 3, 4] [2, 1, 2, 3, 4] [3, 2, 1, 2, 3] [4, 3, 2, 1, 2] 2 The following picture of the matrix of our previous calculation contains - coloured in yellow - the optimal path through the matrix. We start with a deletion ("f"), we keep the "l" (no costs added), after this we keep the "a" and "w". The last step is an insertion, raising the costs to 2, which is the final Levenshtein distance. For the sake of another example, let us use the Levenshtein distance for our initial example of this chapter. So, we will virtually "go back" to New York City and its thrilling borough Manhattan. We compare it with a misspelling "Manahaton", which is the combination of various common misspellings. 
print(iterative_levenshtein("Manhattan", "Manahaton")) The above code returned the following result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 0, 1, 2, 3, 4, 5, 6, 7, 8] [2, 1, 0, 1, 2, 3, 4, 5, 6, 7] [3, 2, 1, 0, 1, 2, 3, 4, 5, 6] [4, 3, 2, 1, 1, 1, 2, 3, 4, 5] [5, 4, 3, 2, 1, 2, 1, 2, 3, 4] [6, 5, 4, 3, 2, 2, 2, 1, 2, 3] [7, 6, 5, 4, 3, 3, 3, 2, 2, 3] [8, 7, 6, 5, 4, 4, 3, 3, 3, 3] [9, 8, 7, 6, 5, 5, 4, 4, 4, 3] 3 So far we have had fixed costs for insertions, deletions and substitutions, i.e. each of them was set to 1.

def iterative_levenshtein(s, t, costs=(1, 1, 1)):
    """
    costs: a tuple or a list with three integers (d, i, s)
    where d defines the costs for a deletion
    i defines the costs for an insertion and
    s defines the costs for a substitution
    """
    rows = len(s)+1
    cols = len(t)+1
    deletes, inserts, substitutes = costs
    dist = [[0 for x in range(cols)] for x in range(rows)]
    # source prefixes can be transformed into empty strings
    # by deletions:
    for row in range(1, rows):
        dist[row][0] = row * deletes
    # target prefixes can be created from an empty source string
    # by inserting the characters
    for col in range(1, cols):
        dist[0][col] = col * inserts
    for col in range(1, cols):
        for row in range(1, rows):
            if s[row-1] == t[col-1]:
                cost = 0
            else:
                cost = substitutes
            dist[row][col] = min(dist[row-1][col] + deletes,
                                 dist[row][col-1] + inserts,
                                 dist[row-1][col-1] + cost) # substitution
    for r in range(rows):
        print(dist[r])
    return dist[row][col]

# default:
print(iterative_levenshtein("abc", "xyz"))
# the costs for substitutions are twice as high as inserts and deletes:
print(iterative_levenshtein("abc", "xyz", costs=(1, 1, 2)))
# inserts and deletes are twice as high as substitutes
print(iterative_levenshtein("abc", "xyz", costs=(2, 2, 1)))

This gets us the following output: [0, 1, 2, 3] [1, 1, 2, 3] [2, 2, 2, 3] [3, 3, 3, 3] 3 [0, 1, 2, 3] [1, 2, 3, 4] [2, 3, 4, 5] [3, 4, 5, 6] 6 [0, 2, 4, 6] [2, 1, 3, 5] [4, 3, 2, 4] [6, 5, 4, 3] 3 The situation in the call to iterative_levenshtein with default 
costs, i.e. 1 for insertions, deletions and substitutions: The content of the matrix if the substitutions are twice as expensive as the insertions and deletions, i.e. the call iterative_levenshtein("abc", "xyz", costs=(1, 1, 2)): Now we call iterative_levenshtein("abc", "xyz", costs=(2, 2, 1)), which means that a substitution is half as expensive as an insertion or a deletion: It is also possible to have individual weights for each character. Instead of passing a tuple with three values to the function, we will use a dictionary with values for every character.

def iterative_levenshtein(s, t, **weight_dict):
    """
    weight_dict: keyword parameters setting the costs for characters,
    the default value for a character will be 1
    """
    rows = len(s)+1
    cols = len(t)+1
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    w = dict((x, (1, 1, 1)) for x in alphabet + alphabet.upper())
    if weight_dict:
        w.update(weight_dict)
    dist = [[0 for x in range(cols)] for x in range(rows)]
    # source prefixes can be transformed into empty strings
    # by deletions:
    for row in range(1, rows):
        dist[row][0] = dist[row-1][0] + w[s[row-1]][0]
    # target prefixes can be created from an empty source string
    # by inserting the characters
    for col in range(1, cols):
        dist[0][col] = dist[0][col-1] + w[t[col-1]][1]
    for col in range(1, cols):
        for row in range(1, rows):
            deletes = w[s[row-1]][0]
            inserts = w[t[col-1]][1]
            subs = max((w[s[row-1]][2], w[t[col-1]][2]))
            if s[row-1] == t[col-1]:
                subs = 0
            print("subs:", subs, "deletes:", deletes, "inserts:", inserts)
            print(s[row-1], t[col-1])
            dist[row][col] = min(dist[row-1][col] + deletes,
                                 dist[row][col-1] + inserts,
                                 dist[row-1][col-1] + subs) # substitution
    for r in range(rows):
        print(dist[r])
    return dist[row][col]

print(iterative_levenshtein("abx", "xya", x=(3, 2, 8), y=(4, 5, 4), a=(7, 6, 6)))

After having executed the Python code above we received the following result: subs: 8 deletes: 7 inserts: 2 a x subs: 8 deletes: 1 inserts: 2 b x subs: 0 deletes: 3 inserts: 2 x 
x subs: 6 deletes: 7 inserts: 5 a y subs: 4 deletes: 1 inserts: 5 b y subs: 8 deletes: 3 inserts: 5 x y subs: 0 deletes: 7 inserts: 6 a a subs: 6 deletes: 1 inserts: 6 b a subs: 8 deletes: 3 inserts: 6 x a [0, 2, 7, 13] [7, 8, 8, 7] [8, 9, 9, 8] [11, 8, 12, 11] 11 We demonstrate in the following diagram how the algorithm works with the weighted characters. The orange arrows show the path to the minimal edit distance 11: Footnotes 1 Measurement of Similarity in Music: A Quantitative Approach for Non-parametric Representations
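A closing aside that is not part of the original chapter: since each matrix row only depends on the row directly above it, the full matrix is not actually needed to obtain the distance itself. A sketch of a two-row variant:

```python
def levenshtein_two_rows(s, t):
    """Iterative Levenshtein distance keeping only two matrix rows."""
    prev = list(range(len(t) + 1))  # row for the empty prefix of s
    for i, sc in enumerate(s, start=1):
        curr = [i]  # cost of deleting the first i characters of s
        for j, tc in enumerate(t, start=1):
            cost = 0 if sc == tc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein_two_rows("Manhattan", "Manahaton"))  # 3
print(levenshtein_two_rows("flaw", "lawn"))            # 2
```

This trades the printable matrix for O(len(t)) memory, which matters for long strings.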
http://python-course.eu/levenshtein_distance.php
Did you ever have to send a professional looking HTML email from your ASP.NET MVC app? Say, to welcome a newly signed up user or to send a password reset link? I am sure at some point you had to! Well, at some point, we all have to send out emails from our app, be it for the sake of a notification or the "tell a friend" feature or anything... the sad part is, constructing HTML emails with personalized text, images and styles is at least a pain in the neck (or somewhere else for that matter)! To visualize, this is how it often looks in the code: StringBuilder mailBody = new StringBuilder(); mailBody.Append("<html><head><style type=\"text/css\">...</style></head>"); mailBody.Append("<body>"); mailBody.AppendFormat("Hi {0}<br/>", user.FirstName); ... ... XX lines of similar appending until it is done! ... mailBody.Append("</body></html>"); Wait a minute! We are already creating complex HTML views with all the styling, master paging and dynamic content from our MVC view files. This is exactly what we need in the HTML emails! So, to me, a desired solution will be like the following: /* Layout: ~Views/Notifier/_Layout.cshtml */ <html> <head> <style type="text/css"> ... </style> </head> <body> <div class="banner"></div> <div class="mailBody"> Hello @user.FirstName:<br/> @RenderBody() Thank you. -- The Team! </div> </body> </html> /* WelcomeMessage: ~Views/Notifier/WelcomeMessage.cshtml */ Welcome to the Team Site. Please click the following link to complete your account signup! @Html.ActivationLink(user.ActivationUrl) Isn't this cleaner? MvcMailer is available here. To install this, just open up your Package Manager Console and run "Install-Package MvcMailer" (without the quotes) and you are done! If you are new to NuGet, I would recommend you install it to utilize free libraries out there as a plugin to your app, just like MvcMailer! 
MvcMailer Once you install the MvcMailer NuGet package, it adds the following to your project: system.net mailSettings If you open Mailers/Notifier.cs, you will find an example of how to send emails using the MvcMailer library. You can follow this list to get up and running in 3 minutes: <smtp from="some-email@gmail.com"> <network enableSsl="true" host="smtp.gmail.com" port="587" userName="some-email@gmail.com" password="valid-password"/> </smtp> mailMessage.To.Add("some-email@gmail.com"); using Mvc.Mailer; using YourAppRootNamespace.Mailers; ... new Notifier().WelcomeMessage().Send(); ... You can also send asynchronous email by using the following: new Notifier().WelcomeMessage().SendAsync(); So, all in all, just 1 line of code on top of the 2 lines of standard .NET Mail configuration to send email utilizing the power of all MVC Views! Sounds good, huh? If you look in the WelcomeMessage() method inside Mailers/Notifier.cs, you will see this line: mailMessage.Body = PopulateBody(mailMessage: mailMessage, viewName: "WelcomeMessage") The above line populates mailMessage.Body with the string containing the compiled HTML from the view, as it would be in the case of a regular MVC view! So, you get the full power of your view engine to build the email body instead of the ugly looking string concats all over the place! As you can see, it uses the same view engine as the rest of your views do. So, it just reuses the components from the existing ASP.NET MVC framework and extends on top of it. It extends the ViewResult class of ASP.NET MVC with a new class called StringResult. StringResult does exactly what ViewResult does with one exception: instead of writing to the HTTP Response Output Stream, it simply writes to a StringBuilder! So, the HTML generation is exactly the same, but the rendering is different. 
Now that the view, along with its master pages and dynamic contents, is compiled into a string, it can be used as the email's body. This is how it works. Apart from this, it leverages the standard MailMessage class of the System.Net.Mail namespace. So, it doesn't really have any learning curve for someone who already knows about the ASP.NET emailing libraries. To simplify things, the Mvc.Mailer namespace has an extension class for MailMessage that adds two handy methods to it - namely, Send() and SendAsync(). These methods simply relay to an instance of the SmtpClient class. So that your code doesn't need to deal with all these transport layer details and can stay at a fairly high level similar to the Controller Actions, you simply fire the view and forget about how it is actually sent to the output stream!
https://www.codeproject.com/articles/145629/announcing-mvcmailer-send-emails-using-asp-net-mvc
I'll start with a quiz. What does this function do? def foo(lst): a = 0 for i in lst: a += i b = 1 for t in lst: b *= i return a, b If you think "computes the sum and product of the items in lst", don't feel too bad about yourself. The bug here is often tricky to spot. If you did see it, well done - but buried in mountains of real code, and when you don't know it's a quiz, discovering the bug is significantly more difficult. The bug here is due to using i instead of t in the body of the second for loop. But wait, how does this even work? Shouldn't i be invisible outside of the first loop? [1] Well, no. In fact, Python formally acknowledges that the names defined as for loop targets (a more formally rigorous name for "index variables") leak into the enclosing function scope. So this: for i in [1, 2, 3]: pass print(i) Is valid and prints 3, by design. In this writeup I want to explore why this is so, why it's unlikely to change, and also use it as a tracer bullet to dig into some interesting parts of the CPython compiler. And by the way, if you're not convinced this behavior can cause real problems, consider this snippet: def foo(): lst = [] for i in range(4): lst.append(lambda: i) print([f() for f in lst]) If you'd expect this to print [0, 1, 2, 3], no such luck. This code will, instead, emit [3, 3, 3, 3], because there's just a single i in the scope of foo, and this is what all the lambdas capture. The official word The Python reference documentation explicitly documents this behavior in the section on for loops: The for-loop makes assignments to the variables(s) in the target list. [...] Names in the target list are not deleted when the loop is finished, but if the sequence is empty, they will not have been assigned to at all by the loop. Note the last sentence - let's try it: for i in []: pass print(i) Indeed, a NameError is raised. Later on, we'll see that this is a natural outcome of the way the Python VM executes its bytecode. 
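As an aside (not part of the original article): the usual workaround for this capture surprise is to bind the current value of i as a default argument, since default values are evaluated when the lambda is defined rather than when it is called:

```python
def foo():
    lst = []
    for i in range(4):
        lst.append(lambda i=i: i)  # i=i freezes the current value
    return [f() for f in lst]

print(foo())  # [0, 1, 2, 3]
```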
Why this is so I actually asked Guido van Rossum about this behavior and he was gracious enough to reply with some historical background (thanks Guido!). The motivation is keeping Python's simple approach to names and scopes without resorting to hacks (such as deleting all the values defined in the loop after it's done - think about the complications with exceptions, etc.) or more complex scoping rules. In Python, the scoping rules are fairly simple and elegant: a block is either a module, a function body or a class body. Within a function body, names are visible from the point of their definition to the end of the block (including nested blocks such as nested functions). That's for local names, of course; global names (and other nonlocal names) have slightly different rules, but that's not pertinent to our discussion. The important point here is: the innermost possible scope is a function body. Not a for loop body. Not a with block body. Python does not have nested lexical scopes below the level of a function, unlike some other languages (C and its progeny, for example). So if you just go about implementing Python, this is likely the behavior you'll end up with. Here's another enlightening snippet: for i in range(4): d = i * 2 print(d) Would it surprise you to find out that d is visible and accessible after the for loop is finished? No, this is just the way Python works. So why would the index variable be treated any differently? By the way, the index variables of list comprehensions are also leaked to the enclosing scope. Or, to be precise, were leaked, before Python 3 came along. Python 3 fixed the leakage from list comprehensions, along with other breaking changes. Make no mistake, changing such behavior is a major breakage in backwards compatibility. This is why I think the current behavior stuck and won't be changed. Moreover, many folks still find this a useful feature of Python. 
Consider: for i, item in enumerate(somegenerator()): dostuffwith(i, item) print('The loop executed {0} times!'.format(i+1)) If you have no idea how many items somegenerator actually returned, this is a pretty succinct way to know. Otherwise you'd have to keep a separate counter. Here's another example: for i in somegenerator(): if isinteresing(i): break dostuffwith(i) Which is a useful pattern for finding things in a loop and using them afterwards [2]. There are other uses people came up with over the years that justify keeping this behavior in place. It's hard enough to instill breaking changes for features the core developers deem detrimental and harmful. When the feature is argued by many to be useful, and moreover is used in a huge bunch of code in the real world, the chances of removing it are zero. Under the hood Now the fun part. Let's see how the Python compiler and VM conspire to make this behavior possible. In this particular case, I think the most lucid way to present things is going backwards from the bytecode. I hope this may also serve as an interesting example on how to go about digging in Python's internals [3] in order to find stuff out (it's so much fun, seriously!) Let's take a part of the function presented at the start of this article and disassemble it: def foo(lst): a = 0 for i in lst: a += i return a The resulting bytecode is: 0 LOAD_CONST 1 (0) 3 STORE_FAST 1 (a) 6 SETUP_LOOP 24 (to 33) 9 LOAD_FAST 0 (lst) 12 GET_ITER 13 FOR_ITER 16 (to 32) 16 STORE_FAST 2 (i) 19 LOAD_FAST 1 (a) 22 LOAD_FAST 2 (i) 25 INPLACE_ADD 26 STORE_FAST 1 (a) 29 JUMP_ABSOLUTE 13 32 POP_BLOCK 33 LOAD_FAST 1 (a) 36 RETURN_VALUE As a reminder, LOAD_FAST and STORE_FAST are the opcodes Python uses to access names that are only used within a function. 
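The disassembly above can be reproduced with the standard dis module. Exact offsets and opcode sets vary between CPython versions, so this sketch only checks the property we care about - that both a and i are written with the STORE_FAST family of opcodes, i.e. treated as ordinary locals:

```python
import dis

def foo(lst):
    a = 0
    for i in lst:
        a += i
    return a

# Names written via STORE_FAST* are plain function locals
stored = {ins.argval for ins in dis.get_instructions(foo)
          if ins.opname.startswith('STORE_FAST')}
print(sorted(stored))  # 'a' and 'i' both appear
```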
Since the Python compiler knows statically (at compile-time) how many such names exist in each function, they can be accessed with static array offsets as opposed to a hash table, which makes access significantly faster (hence the _FAST suffix). But I digress. What's really important here is that a and i are treated identically. They are both fetched with LOAD_FAST and modified with STORE_FAST. There is absolutely no reason to assume that their visibility is in any way different [4]. So how did this come to be? Somehow, the compiler figured that i is just another local name within foo. This logic lives in the symbol table code, when the compiler walks over the AST to create a control-flow graph from which bytecode is later emitted; there are more details about this process in my article about symbol tables - so I'll just stick to the essentials here. The symtable code doesn't treat for statements very specially. In symtable_visit_stmt we have: case For_kind: VISIT(st, expr, s->v.For.target); VISIT(st, expr, s->v.For.iter); VISIT_SEQ(st, stmt, s->v.For.body); if (s->v.For.orelse) VISIT_SEQ(st, stmt, s->v.For.orelse); break; The loop target is visited as any other expression. Since this code visits the AST, it's worthwhile to dump it to see how the node for the for statement looks: For(target=Name(id='i', ctx=Store()), iter=Name(id='lst', ctx=Load()), body=[AugAssign(target=Name(id='a', ctx=Store()), op=Add(), value=Name(id='i', ctx=Load()))], orelse=[]) So i lives in a Name node. These are handled in the symbol table code by the following clause in symtable_visit_expr: case Name_kind: if (!symtable_add_def(st, e->v.Name.id, e->v.Name.ctx == Load ? USE : DEF_LOCAL)) VISIT_QUIT(st, 0); /* ... 
    */

Since the name i is clearly tagged with DEF_LOCAL (because of the *_FAST opcodes emitted to access it, but this is also easy to observe if the symbol table is dumped using the symtable module), the code above evidently calls symtable_add_def with DEF_LOCAL as the third argument. This is the right time to glance at the AST above and notice the ctx=Store part of the Name node of i. So it's the AST that already comes in carrying the information that i is stored to in the target part of the For node. Let's see how that comes to be.

The AST-building part of the compiler goes over the parse tree (which is a fairly low-level hierarchical representation of the source code - some background is available here) and, among other things, sets the expr_context attributes on some nodes, most notably Name nodes. Think about it this way: in the following statement:

    foo = bar + 1

both foo and bar are going to end up in Name nodes. But while bar is only being loaded from, foo is actually being stored into in this code. The expr_context attribute is used to distinguish between uses for later consumption by the symbol table code [5].

Back to our for loop targets, though. These are handled in the function that creates an AST for for statements - ast_for_for_stmt. Here are the relevant parts of this function:

    static stmt_ty
    ast_for_for_stmt(struct compiling *c, const node *n)
    {
        asdl_seq *_target, *seq = NULL, *suite_seq;
        expr_ty expression;
        expr_ty target, first;

        /* ... */

        node_target = CHILD(n, 1);
        _target = ast_for_exprlist(c, node_target, Store);
        if (!_target)
            return NULL;
        /* Check the # of children rather than the length of _target, since
           for x, in ... has 1 element in _target, but still requires a Tuple. */
        first = (expr_ty)asdl_seq_GET(_target, 0);
        if (NCH(node_target) == 1)
            target = first;
        else
            target = Tuple(_target, Store, first->lineno, first->col_offset,
                           c->c_arena);

        /* ...
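The symtable module mentioned above makes it easy to confirm from pure Python that both a and i end up as ordinary locals of foo:

```python
import symtable

src = """
def foo(lst):
    a = 0
    for i in lst:
        a += i
    return a
"""

# Build the module-level symbol table, then descend into foo's table.
mod = symtable.symtable(src, '<example>', 'exec')
foo_table = mod.get_children()[0]

for name in ('a', 'i'):
    print(name, foo_table.lookup(name).is_local())
```

Both names print True: the loop target is not special in any way as far as the symbol table is concerned.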
    */

        return For(target, expression, suite_seq, seq, LINENO(n),
                   n->n_col_offset, c->c_arena);
    }

The Store context is created in the call to ast_for_exprlist, which creates the node for the target (recall that the for loop target may be a sequence of names for tuple unpacking, not just a single name). This function is probably the most important part in the process of explaining why for loop targets are treated similarly to other names within the loop.

After this tagging happens in the AST, the code for handling such names in the symbol table and VM is no different from other names.

Wrapping up

This article discusses a particular behavior of Python that may be considered a "gotcha" by some. I hope the article does a decent job of explaining how this behavior flows naturally from the naming and scoping semantics of Python, why it can be useful and hence is unlikely to ever change, and how the internals of the Python compiler make it work under the hood. Thanks for reading!
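As a final check, the behavior this article explains is easy to observe directly: the loop target is just a name in the enclosing scope, so it remains bound after the loop finishes:

```python
# The loop target survives the loop, holding the last value it was bound to.
for i in range(3):
    pass
print(i)  # -> 2

# The same property enables the find-then-use pattern from the introduction.
items = [1, 4, 9, 16]
for item in items:
    if item > 5:
        break
print(item)  # -> 9
```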
http://eli.thegreenplace.net/2015/the-scope-of-index-variables-in-pythons-for-loops/
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

Name

getpriority, setpriority - get and set the nice value

Sections: Synopsis, Description, Return Value, Errors, Examples (Using getpriority(), Using setpriority()), Application Usage, Rationale, Future Directions, See Also

Description

If value + {NZERO} is less than the system's lowest supported nice value, setpriority() shall set the nice value to the lowest supported value; if value + {NZERO} is greater than the system's highest supported nice value, setpriority() shall set the nice value to the highest supported value.

The following sections are informative.

Examples

Using getpriority()

The following example returns the current scheduling priority for the process ID returned by the call to getpid().

    #include <sys/resource.h>
    ...

    int which = PRIO_PROCESS;
    id_t pid;
    int ret;

    pid = getpid();
    ret = getpriority(which, pid);
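For a quick experiment without writing C, the same interface is exposed on Unix through Python's os module as os.getpriority() and os.setpriority(); note that Python reports the nice value directly rather than value + {NZERO}:

```python
import os

# who == 0 means "the calling process" for PRIO_PROCESS.
before = os.getpriority(os.PRIO_PROCESS, 0)
print(before)

# An unprivileged process may only raise its nice value (lower its priority);
# per POSIX, out-of-range values are clamped to the supported range.
os.setpriority(os.PRIO_PROCESS, 0, before + 1)
print(os.getpriority(os.PRIO_PROCESS, 0))
```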
http://manpages.sgvulcan.com/setpriority.3p.php
On April 29, 2015, the Securities and Exchange Commission (SEC) proposed long-awaited pay-for-performance rules that would require most U.S. public companies to disclose the relationship between compensation "actually paid" to the company's named executive officers (NEOs) and the company's "financial performance." As discussed in this Osler Update, the proposed rules mandated by Section 953(a) of the Dodd Frank Wall Street Reform and Consumer Protection Act (Dodd Frank) establish a somewhat prescriptive, as opposed to principles-based, approach to disclosing the relationship between executive compensation and company performance. As a result of the SEC's use of the total shareholder return (TSR) metric for "financial performance" as defined for purposes of preparing the already required five-year stock price performance graph, the proposed rules may have the effect of increasing company and investor focus on stock price movements over short-term periods and potentially further reinforce the use of this metric in awarding performance-based incentive compensation. The SEC is requiring these disclosures to assist shareholders when they are deciding whether to approve NEO compensation through the "say on pay" advisory vote, when making decisions on a compensation plan in which NEOs participate and when voting on the election of directors.

Historical Pay-For-Performance Voluntary Disclosures

In response to demands from investors and investor advisory firms to better demonstrate the relationship of corporate performance to compensation actually realized by a company's CEO based on the compensation decisions of the board of directors, and in anticipation of the SEC adopting rules to show such impact as mandated by Dodd Frank, a number of U.S. and Canadian public companies had already provided supplemental voluntary disclosure regarding the CEO's realized or realizable pay over a period of three to five years.
However, there has been no agreed upon consistent method for calculating and disclosing such amounts or how to compare such amounts to company performance. The SEC's proposed rule does not address these practices.

Canadian Implications

The disclosures required by the proposed rules would not be applicable to Canadian companies that are foreign private issuers or to "emerging growth companies" as defined by the SEC. Despite inconsistencies in methodology for calculating and disclosing realized or realizable pay, Canadian companies which voluntarily provide such information on a supplemental basis are unlikely to change their approach in light of the SEC's proposed rule.

The Required Disclosure

The proposed rule, which would impose uniform definitions and methodologies for determining "financial performance" and compensation to be reported as "actually paid," would amend Item 402 of Regulation S-K by adding new Item 402(v). This would require most reporting companies to include a new "pay-versus-performance table" in any proxy and information statements for which executive compensation disclosure under Item 402 is required.
For each of the last five completed fiscal years (three years in the case of a smaller reporting company), the pay-versus-performance table would include columns presented in the following order:

- total compensation of the company's principal executive officer (generally the CEO) as reported in the Summary Compensation Table (the SCT)
- compensation "actually paid" to the principal executive officer, with footnote disclosure detailing each adjustment made to total compensation as reported in the SCT for purposes of calculating the amount "actually paid"
- average total compensation of the company's named executive officers other than the principal executive officer (the "Other NEOs"), based on total compensation amounts reported in the SCT
- average compensation "actually paid" to the company's Other NEOs, with footnote disclosure detailing each adjustment made to reported total compensation for purposes of calculating the amount "actually paid"
- the company's cumulative total shareholder return (TSR), which would constitute "financial performance" for purposes of the proposed rules, using the same definition of and methodology for calculating TSR set forth in Item 201(e) of Regulation S-K that is used for preparing the stock performance graph required to be included in the company's annual report
- cumulative TSR of the company's peer group (not required for smaller reporting companies), using either (i) the same peer group presented in the annual report stock performance graph or (ii) the peer group reported in the Compensation Discussion and Analysis (CD&A) for purposes of disclosing executive compensation benchmarking practices, including the identity of the issuers comprising the group if the selected peer group is not a published line of business or industry index

The SEC was aware that interest in pay-for-performance disclosure has historically been focused on the compensation of the CEO, but concluded that it was bound by Dodd Frank to address the Other NEOs.
It has proposed using the average Other NEO compensation in order to minimize the extent of additional disclosure and reduce variability due to changes in the identities of, and numbers of, Other NEOs from year to year. The proposed rules would also require, immediately following the foregoing table, narrative and/or graphical disclosure that would clearly describe in plain English, for each year covered, the relationship between (i) compensation “actually paid” to the principal executive officer and the Other NEOs and (ii) the cumulative TSR of the company, as well as a comparison of the cumulative TSR of the company to that of its selected peer group. The proposed rules would allow the company the flexibility to determine the most effective way of presenting these additional narrative disclosures. Disclosure could include, for example, a graph providing executive compensation actually paid and change in TSR on parallel axes, plotting compensation and TSR over the required time period. Alternatively, disclosure of the relationship could include showing the percentage change year over year in both executive compensation actually paid and TSR, together with a brief discussion of that relationship. Companies may choose to supplement the disclosure required by proposed Item 402(v) by providing pay-versus-performance disclosure based on another appropriate measure of financial performance, such as “realized pay” or “realizable pay,” if they believe it provides useful information about the relationship between compensation and the registrant’s performance as long as the supplemental disclosure is not misleading and not presented more prominently than the required disclosure. 
Calculation of "Actually Paid" Compensation

Compensation that is "actually paid" to a named executive officer, or "executive compensation actually paid," would be equal to the individual's total compensation as reported in the SCT, adjusted to account for amounts included that are specifically attributable to pension benefits and equity award valuations in order to more accurately capture compensation actually paid to the named executive officer in the applicable fiscal year. Specifically, reported total compensation as set forth in the SCT would be adjusted to:

(i) deduct the reported change, if any, in the actuarial present value of accumulated benefits under defined benefit and pension plans (column h of the SCT) based on changes in interest rates, executive age, and other actuarial inputs and assumptions (which may introduce significant volatility into this measure);

(ii) add back the actuarial present value of service costs for services rendered in the applicable fiscal year (not required for smaller reporting companies); and

(iii) replace the reported grant date fair values of option and restricted stock awards (columns e and f of the SCT) with vesting date fair values of option and restricted stock awards, if any, for which vesting conditions were satisfied during the applicable fiscal year, computed in accordance with the Financial Accounting Standards Board Accounting Standards Codification Topic 718, thereby treating equity awards as "actually paid" on the date of vesting.

The proposed rule provides that the term "executive compensation actually paid" should include all compensation actually paid, regardless of whether the compensation is awarded based on the registrant's financial performance. Accordingly, other compensatory amounts included in total compensation as reported in the SCT, such as incremental amounts triggered from a termination of employment or change of control payments during the financial year, perquisites, signing bonuses, etc.
would not be excluded in determining the amount "actually paid." As a result, inclusion of these non-performance based compensation amounts may affect the comparability of yearly amounts and add a degree of variability to the analysis, potentially necessitating further explanatory disclosure, as many of such items result from unusual events or changes in NEOs.

When and Where the Disclosure is Required

The additional disclosures described above would be required in any proxy and information statements for which executive compensation disclosure under Item 402 of Regulation S-K is required. Accordingly, the proposed disclosure would be required in proxy and information statements for meetings at which directors are to be elected or shareholder approval is sought for any bonus, profit sharing, pension or retirement plan, or other contract or arrangement in which a director, nominee or executive officer will participate, or the granting or extension to such persons of options, warrants or rights to purchase securities on a pro-rata basis. The proposed rules, however, would not require the pay-for-performance disclosures in a company's Form 10-K or registration statements filed pursuant to the U.S. Securities Act of 1933, as amended, and the disclosures would not be deemed to be incorporated by reference into its periodic reports or registration statements except to the extent specifically incorporated by reference.

The proposed rules do not require a specific location within the proxy or information statement for this new disclosure. While the disclosure item is related to the CD&A because it would show the historical relationship between executive pay and registrant financial performance, and may provide a useful point of comparison for the analysis provided in the CD&A, the SEC noted that including this disclosure in the CD&A might suggest that the company considered the disclosed pay-versus-performance relationship in its compensation decisions, which may not be the case.
Thus, the proposed rules provide flexibility for companies in determining where in the proxy or information statement to provide the disclosure required by proposed Item 402(v).

Transition Period for Compliance

Following the adoption of this proposal, companies subject to the rules would be afforded a transition period for compliance, with the initial proxy or information statement to include three years (instead of five) of disclosure, and an additional year of disclosure to be included in each of the next two proxy or information statements for which executive compensation disclosure is required. A newly reporting registrant would be required to provide the pay-versus-performance disclosure for only the most recently completed fiscal year in any proxy or information statement in its first year as a reporting company, and for the two most recently completed fiscal years in any proxy statement or information statement in its second year as a reporting company. This treatment is consistent with the phase-in period for new reporting companies in their SCT. For smaller reporting companies, the initial proxy or information statement would be required to include two years of disclosure and an additional year of disclosure would be required in the following year.

XBRL Requirement

To facilitate analysis over time and comparison across companies, the pay-versus-performance table, accompanying footnotes and narrative or graphical disclosures described above would be required to be tagged and electronically formatted using eXtensible Business Reporting Language (XBRL). Additionally, the interactive data files would need to be included as an exhibit to the proxy or information statement filed with the SEC. The XBRL file would not be required for smaller reporting companies.
Practice Note

Generally, gathering the numerical disclosures that would be required under the proposed rules will, for the most part, involve adapting and repackaging information that has already been gathered and used for other reporting purposes. However, companies should be aware that it may be challenging to provide narrative disclosure about the relationship between compensation "actually paid" and "financial performance," as well as the comparison of the company's TSR to that of its peer group. This may be especially important where, for example, the company's compensation practices focus on performance factors that are not readily translated into or comparable to short-term changes in stock price, or where certain compensation philosophies or strategies differ from those of other members of its peer group. Companies may want to begin preparing for implementation of the proposed rules, as there is a possibility that the rules could become effective for the 2016 proxy season. The comment period on the proposed rule ends 60 days after the proposing release is published in the Federal Register. For further details, see SEC Release No. 34-74835.
http://www.lexology.com/library/detail.aspx?g=a9040799-0d40-4a26-8939-9b0fa9c1ef6b
Rolf Viehmann asks why Explorer doesn't let you create a file whose name begins with a dot. If it really bugs you that you can't do it from Explorer, you are free to write your own shell extension to do "Rename this file, and if I'm going to shoot myself in the foot, then let me."

As someone who sometimes works on files that are to be used on UNIX, which uses leading dots to denote hidden files, it does annoy me that Explorer won't let me create them. Luckily, my text editor appears to have no problems creating them. In case anyone is wondering, the files in question are things like .htaccess. The set of people wanting files starting with a dot and the set of people keeping known extensions hidden have no intersection.

What I want to know is why "Hide extensions for known file types" exists at all (and more to the point, why such an option is the default).

@Jonathan Wilson Because file extensions are confusing to grandma. She just named the file "Letter to grandson", not "Letter to grandson.doc". Granted, this behavior drives many power users nuts, but as power users we know how to disable it for our accounts. The reverse is not true. Of course, spaces in filenames are the evilest things ever created.

But in that case, surely Explorer should let you create a file called ".foo.txt". It has a filename as well as an extension, and shows up as ".foo" to users with hidden file extensions. (At least, it does on WinXP) But it doesn't. Unless, in order to check whether a file had a "filename"(?) part, the Explorer devs just checked for the absence of a leading dot instead of actually checking for a filename part. But again that seems like an odd way of doing things.

Because if the system knows the mapping between .exe and Executable it can simply write "Executable" in the display, which is much easier to grasp for most users. Same with "JPEG Picture" instead of .jpg.
And, if it did not hide the extensions for these known types then lots of people would complain about why Windows has to display the same information twice… Why is it the default? Usability testing probably shows that most users prefer to see "Bitmap image" instead of .bmp… or perhaps they just applied common sense?

Taking a cue from the original question and pre-empting a "why would anyone call a file '.foo.txt'": calling a file ".NET project guidelines.txt" is an example of why.

Ultimately, the problem is that "computers are hard" and "users are dumb". You can only dumb down computers so much before you have to ask something of the user. I mean, you have to have a minimal level of understanding of how to operate a car: gas and brake pedals, steering wheel, etc. Similarly, you have to have a minimal understanding of how to use a computer. Unfortunately, many users have a hard enough time dealing with anything more complicated than pushing the power button, moving the mouse, and primary-clicking once.

@Ross Bemrose "Luckily, my text editor appears to have no problems creating them." I am not sure what your text editor is, but Notepad also allows you to create a file like ".NET project guidelines.txt". So it's not an OS-level restriction, and the file-level API should be able to handle a file like ..txt. However, the Explorer shell does not let you rename it; it bombs saying "You must type a Name". As someone said, the devs must have checked if (Left(filename,1)=='.') ShootUser(); Who did QA for this part, Ray? (I know you need not know this info.) It's just kind of ironic that Explorer bans files that start with a dot, yet Microsoft's premier development environment is ".Net" :-) And yes, my download folder has .Net, .Net/2.0, .Net/3.0, etc.

I don't know about XP, but on Vista I can name a file ".foo.txt" with no problems.

mvadu > Did you not actually read Raymond's post? "So it's not an OS-level restriction, and the file-level API should be able to handle a file like ..txt.
However, the Explorer shell does not let you rename it […]" This is the whole point of the post! Why repeat it as if you just figured it out by yourself?

@NUXI No, file extensions are not confusing to grandmas. After she has seen a few extensions, she likes how she can see the type of the file, even when she has switched off the annoying huge icons (she uses an old CRT, after all). Remind yourself: users are not necessarily stupid. Just maybe inexperienced with computers.

Maybe your grandmother doesn't have any problems with file extensions, but I'm pretty sure they scare the hell out of mine, and she's had a computer for nearly a decade.

@Karellen: In Vista they allow you to create '.foo.txt' or '.NET Programming Guidelines.doc'. Not that I use a period as the first character in my filenames anyway. What really bugs me is that sorting by type sorts by the name given to that type of file by whatever is set to open it, not the extension. This means that the top file is usually a .pdf 'Adobe Acrobat 7.0 Document', followed by .exe 'Application'. Also, if one program grabs lots of extensions and names them all the same thing, you can't sort them at all. (IrfanView, WinRAR)

"No, file extensions are not confusing to grandmas. After she has seen a few extensions, she likes how she can see the type of the file, even when she has switched off the annoying huge icons (she uses an old CRT, after all)." Your grandma loves huge icons. She has bad eyesight in her old age, and finds it hard to focus on small text. She still has trouble following the tiny mouse pointer, though. Maybe you should get to know your grandma better before speculating on what she likes.
(or better yet, we can drop the grandma analogy and speak with actual facts and evidence from usability studies)

In Vista Explorer Tile view, ".txt" not only has no name, it lists no type (understandable) or size (less so, but I'm sure there's an obvious reason that will embarrass me when it's pointed out), but it does have the text doc icon. Hmmm.

"Ultimately, the problem is that 'computers are hard' and 'users are dumb'. You can only dumb down computers so much before you have to ask something of the user." What is silly is the notion that there is always a tradeoff between the two. Leading dots for file names are a convention from an entirely different operating system. It is not superior or inferior to allow such filenames. It is merely different. At some point we have to ask something of the experienced power user and say, "Geesh, live without this, is it THAT big a deal? The Hidden attribute has been in DOS and Windows forever, so use that." So is it merely that you just like it that way? What makes it right over using the attribute? I can't see either method being right or wrong, just the convention that built up over time.

Isn't the obvious solution to not treat files with only one dot at the front as having just an extension? If Explorer removes the "extension" and realises there is nothing left, then it should display the entire filename, whether or not the "hide extensions" option is on. At least, I can't see any negatives to doing that. It both solves the problem and fits with every real-world occurrence of such files I have seen, where the string after the dot really isn't an extension and should not be treated as such.

I wanted to reply with a solution that is very simple, intuitive and will leave both grandma and power users happy, but Leo Davidson beat me to it. Of course, given the direction that Explorer is taking, there is no chance that something like this would ever be added to it.
Obligatory ending: Vista and Vista's Explorer suck.

Since I think it's crazy not to have the extension displayed in Explorer, I always turn that option off. But I have been using computers for much longer than even MS-DOS has been around. I've never had a problem with this kind of file, but I never have tried to create one in Explorer. I usually create the file first in an editor and then save it. I didn't even know until today that Explorer worked that way.

Speaking of renaming files: any time I have to batch-rename multiple files (mp3 albums, my documents, etc.) I keep thinking there must be a better way. Like a tool that shows all the file names and lets you edit them like a multiline text block – with cursor navigation, insert/overwrite modes, find/replace, undo/redo, etc. When I'm done editing, it would apply my changes (bonus points if the whole thing can be undone by pressing Ctrl+Z in Explorer). Is there something like this already?

I think some of the power user complaints come from the historically different ways OSes have treated these things. In Unices, historically the interpretation of the suffix was up to the individual app, and the OS and shell didn't care much. Plenty of files have no extension. In Windows, the extension is a marker of the file type, and its interpretation is built more deeply into the system, especially in 8+3 days. That feels like data being used as metadata, which always bothered me, but Unix magic numbers and their extensions (magic sequences on shell scripts and PostScript files, for example) have the same objection. Macs had metadata (the resource fork) from day 1. It seems to me that the power user dilemma comes from viewing the filesystem through a blend of the Unix and Windows viewpoints, while wishing we had a resource fork all along.

"Hide file extensions for known file types" absolutely should not exist. It hides security-critical information from the user!
It is even possible for a .exe to spoof the metadata that is displayed in tile view (I have done it). I know that the status bar, details pane and tooltip all display the file type, but what if the file is on the desktop (where two of these three don't exist and people rarely use the tooltip)? The only reason this "feature" exists is that back when Windows 95 was following the "copy Apple" design strategy that Microsoft loves so much, somebody thought "Macs don't have file extensions, therefore we should hide ours", and back in Win95's day security wasn't taken seriously (much to the detriment of everybody since).

So does the dot still have a special meaning in 'modern' Windows OSes and NTFS? And are extensions still used to determine what application to run on double-click, instead of using metadata in the filesystem?

Because you're exposing the user to a piece of metadata that the user doesn't and shouldn't care about. Furthermore, this whole extension thing is broken by design: it's presented as part of the file's name, which it clearly is not. For example, if you type a letter, "Letter to grandma", and you save it as a DOC file, the file will be shown with the name "Letter to grandma.doc", which is the incorrect name, because you named it "Letter to grandma" without the .doc part. Suppose you write a book and you name it MyCleverTitle; it gets published, and once you get the first copy you suddenly see "MyCleverTitle.IND" on the cover. You're going to be pissed off at the printer. Yet we accept this behaviour from our computer.

Another problem is that an extension is not descriptive enough: if you get a .doc file, your computer might ASSUME it's a Word file. But WordPerfect used to save files as .doc too.
There are better ways to do this; as an example, the way OS X does this: if I save a JPEG file and I query the metadata, it gives a lot more information than just 'JPG' would; the content type tree for a JPEG is: "public.jpeg", "public.image", "public.data", "public.item", "public.content". Furthermore, there is no direct connection between the filetype/extension and the application that opens the file. You can have one .DOC open in Word and the other in TextEdit if you'd like; this can be set per-file. (there is of course a default) I think Apple's solution is at least a step in the right direction, but they also use file extensions, and the metadata can get lost when copying over the intertubes or archiving in formats that do not store metadata, so it's far from perfect.

"It hides security-critical information from the user!" You could also wonder why the metadata describing the type of a file should be considered security-critical information.

It would be helpful if Explorer also stripped out the zero-width non-breaking spaces from pasted filenames. Those can sometimes be encountered on the web.

C:\Windows

You have to wonder (and I don't mean to aim this at Raymond or anyone in particular) if somebody at Microsoft was kicking themselves at the "hide known extensions" feature, when they discovered that nearly every trojan was using that feature to its advantage.

The convention of leading-dot-files-are-hidden comes from a completely different operating system. In Windows, if you want a hidden file, you set its hidden attribute. If you want UNIX, you know where to find it.

The name .NET is indeed unfortunate for more than just file names – just imagine all the speller hacks around it.

I consider myself a power user, but I hide extensions on my Windows XP. It's easier for renaming files – when you rename, Explorer selects the whole filename, and then you can just type a new name instead. But if extensions are shown, you have to re-type the extension as well.
And typing ".mp3" after every file – including switching to an English keyboard in order to do it, and back afterwards – gets old really fast.

Epilogue: Vista's Explorer actually does better here – it initially selects just the file name, and leaves the extension unselected. I consider that a welcome improvement.

Well, you can't please 100% of the people 100% of the time. However, it would appear you can piss off 100% of the people 100% of the time if you try hard enough.

To Aaargh!: If they did it the OSX way then there will be snarky comments like "Copying Apple, eh?" It's a no-win situation. Personally and secretly, I would like to see Alternate Data Streams used for things like "type of file", much like how IE tags a downloaded EXE with an ADS marking it with the domain from whence it came.

"If they did it the OSX way then there will be snarky comments like 'Copying Apple, eh?'" Actually, I'd like to have them develop some kind of universal standard together; if the way to store this information is different on every OS, we're still in a big mess. Too bad MS isn't too keen on open standards.

"What metadata? This is not a Mac. There is no metadata." NTFS supports metadata, I just don't know if it's used for anything other than the standard stuff like access times, permissions, etc.

[What is an unknown extension today may become a known extension tomorrow. I'm sure you considered this and am curious how you intend to address it. -Raymond]

Change the display when the new extension comes online. Personally, I'd just allow any old name and show extensions by default. In the edge case of .jpg with extensions hidden, I'd special-case the display name to show the extension. It sure makes dealing with .emacs files easier.

Guys, to be more compatible with the rest of the Unix and Windows world, Apple gave up using metadata for file type, and uses extensions to identify types in OS X. And in the Finder, they hide the file extensions for known types, like Windows does.
And no, file "extensions" don’t have any meaning in NTFS. It’s the UI shell or command prompts built into the operating system that give extensions their meaning.

Using Windows Explorer to create unix files that are meant to be hidden: edge case. If Explorer actually understood what you did, it would have to hide the file immediately. [Once we invent the time machine we can implement your rule change. -Raymond]

Isn’t it just a cosmetic/display issue, though? Right now Explorer shows completely blank names when faced with such files (if the "hide extensions" option is on). If it showed the names instead, would anything other than the user notice? I confess that I don’t know whether the "hide extensions" option affects what would get returned when a program enumerated a shell namespace. If it did, then I see your point. But if the only net effect was the labels printed on the screen, then I can’t think of anything which would be broken. I should’ve said that I wasn’t proposing a change in the rules behind any API for splitting filenames into base and extension. Obviously they cannot change. But if nobody is relying on Explorer to apply those rules — and none other — when displaying the names on the screen, then it should be free to change or augment the rules/APIs it uses now when working out what to display.

"[The rules for determining the extension from a file name were developed decades ago, and programs were written based on those old rules. You’re proposing changing the rules after the game has ended. -Raymond]" I wouldn’t change this rule; I would only change EXPLORER. No other program. Here is my proposal: whenever Explorer finds a file name starting with a period, rather than ignoring its extension (with the risk of displaying an empty name), it would display the whole name. So, for example, unzipping an archive containing a file named .txt would show a ".txt" display name in Explorer rather than a "" display name, which is better IMO.
Or maybe you judge that displaying empty names is a great feature of Explorer that many programs are relying on, but I doubt so.

A meta point about Chen’s douchebaggery: “[What is an unknown extension today may become a known extension tomorrow. I’m sure you considered this and am curious how you intend to address it. -Raymond]” Why not just say, “[What is an unknown extension today may become a known extension tomorrow. How would you address this? -Raymond]” Same content, none of the condescension. As a simple matter of being a decent human being, shouldn’t we avoid being condescending when possible? From the standpoint of being a blogger, aren’t you more likely to engender a meaningful discussion if you consistently take the high road, even if most commentators consistently don’t?

—— As a substantive matter, I’m confused about exactly what’s meant by, “[Files beginning with a period] are considered to have an extension but no name.” Considered that by what? Windows file system APIs? E.g., when I create “.test.doc” with Notepad, Explorer displays its type as a Word document. Which is to say — as far as I can understand things — that Explorer considers “.doc” to be the file’s extension, which, consequently, suggests to me that it’s treating “.test” as the file’s name.

Hey, it’s not like he denies this – social skills of a thermonuclear device is his tagline.

@Cooney: I have NEVER been condescended to by a thermonuclear device.

It existed since NT 3.1!

Calling Raymond a douchebag, in his ‘home’ no less, is quite the douchebag move. (I realize that by stating this I have joined the club.) In Raymond’s defense, it’s pretty hard to watch naive solutions hastily thrown out day by day, without regard for the competing design constraints. I don’t think it’s an accident that on his very terse blog-roll The Daily WTF comes first. The comments section is like his personal Hourly WTF. All he asks is that you spend a little time to consider all the angles.
@ifeanyie “In Raymond’s defense, it’s pretty hard to watch naive solutions hastily thrown out day by day, without regard for the competing design constraints.” If he was really that bothered by it, he could just disable comments. I’ve also NEVER seen him respond to a suggestion positively — granted, I read the comments only sporadically. I guess it’s possible that nobody has ever made a worthwhile observation/suggestion/whatever in the comments, but that strikes me as unlikely. The more probable explanation is that Chen gets off on mocking people he thinks are stupid. Look, the dude can run his blog however he wants. But, so long as he maintains a liberal comment policy, I’m equally entitled to criticize him.

Implementing "Hide known extensions" was definitely a mistake. I know many people who have spent a significant amount of time on support calls, dealing with people who created files named things like "foo.txt" on default XP installs, only to be very, very surprised by the results. It looks like a file called foo.txt, but whenever you pass the filename to any tool, it fails to find it! Takes quite a while to figure out, unless you’ve been bitten by it before.

“[…]” If this is true — and I believe it to be, as I am a strict adherent of the policy of believing online arguments are made in good faith — then it’s a severe example of your cluelessly thermonuclear personality. Plus, since you are so frequently snarky and rude in response to comments, ambiguity is likely to be interpreted as more of the same. Arguably this is my problem, but arguably not. If I were you, I’d bear two things in mind: 1) if someone proposes a solution that seems to have a hole in it, there are possible explanations falling somewhere between idiocy and failure to completely explicate a perfect design; and 2) not everybody shares the same policy preferences as you and/or Microsoft.
For example, Microsoft’s dogged insistence on maintaining backwards compatibility has been enormously beneficial to users and Microsoft. However, as is clear to readers of this blog, it also means that Windows is full of cruft, and third parties (and Microsoft, for that matter) have at least one less disincentive to write clean software that follows the rules. Thus, it’s not clear what the "right" answer always is w/r/t backwards compatibility; what’s "right" for Microsoft may be "wrong" for others. Where someone suggests a solution that is not consistent with your or Microsoft’s policy ideals, that does not necessarily suggest idiocy.

"[Good comments stand on their own. Would you prefer that I respond to good comments with "Good comment"? -Raymond]" Yeah, if it’s particularly good. Because of idiocy or ignorance, sometimes I can’t tell if a suggestion is as good as it sounds. (And I work under the assumption that you don’t have time to diss every bad idea that appears here.) Also, don’t you ever see a comment which is good, but to which you have something substantive to add?

[Allow me to be more precise. “Files beginning with a period and containing no other periods are considered to have an extension but no name.” I simplified the rule for expository purposes. -Raymond]

I am not sure about XP, but Explorer in Windows 2000 does follow the simple rule. It will not allow a file name that starts with a period, even if it contains other periods. However, Explorer in Vista allows file names that start with a period and contain other periods, while also stripping trailing periods. Thus, on Vista, to create a file named “.htaccess”, one only needs to type “.htaccess.”.

"However, Explorer in Vista allows file names that start with a period and contain other periods, while also stripping trailing periods. Thus, on Vista to create a file named ".htaccess", one only needs to type ".htaccess."."
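Raymond's precise rule — the extension begins at the last period, so a leading-dot name with no other periods is all extension and no base — can be sketched in a few lines of Python. This is an illustrative model of the rule as stated, not the actual shell code:

```python
def split_name(filename):
    """Model of the classic rule: the extension starts at the LAST period,
    so '.htaccess' is all extension and no base name, while '.test.doc'
    has base '.test' and extension '.doc'."""
    dot = filename.rfind(".")
    if dot == -1:
        return filename, ""        # no period at all: no extension
    return filename[:dot], filename[dot:]
```

Under this model `split_name(".htaccess")` yields an empty base, which is exactly why Explorer's "hide extensions" option ends up showing a blank label for such files.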
That is a good thing, because Subsystem for UNIX-based Applications is part of Vista Enterprise and Ultimate and lets you run Windows apps on Vista.

"Windows apps on Vista." Unix apps on Vista.

RC, how fluent do you consider yourself to be in English? Most of your blog entries are written exceptionally well, but comments like these seem to point to a fundamental misunderstanding about the English language.

He could just simply ask “What would you do about <X>?”. Why would it matter to you if they had thought of a resolution to every possible case that could come up? If they haven’t, they might think of a solution on the spot, or see that such a case could not be resolved by the way they suggested, or that the current method is the best way to handle the case.

"A meta point about Chen’s douchebaggery" "As a simple matter of being a decent human being, shouldn’t we avoid being condescending when possible?" So you can be a douchebag, but Raymond is not allowed? This is Raymond’s blog; you are a visitor. Why would the visitor have more right to be a douchebag than the host? If you feel it’s inappropriate for Raymond to be a "douchebag" (whatever that means) then how can you possibly believe it’s appropriate for YOU to be one?

Riiight… RC is from New Jersey, his English is just fine, and yes, he is that sarcastic in real life.

What boggles me is how you got here from a discussion about Explorer’s rules concerning file names matching .[a-zA-Z]+ — there are lots of valid reasons to create filenames beginning with a dot. If the problem is only for *known* extensions, then Explorer blocking *all* filenames that start with a ‘.’ seems ‘overzealous’ (and yes, very annoying).

@Ben: When faced with "FunInTheSun.jpg.exe", the user with "Hide extensions" turned on will see "FunInTheSun.jpg" with a type of "Application". If the user truly doesn’t comprehend file extensions, the ".jpg" bit will be meaningless and they’ll just think "oh, that’s an Application".
It will also have an application icon, not an image icon. This assumes of course that whatever program they’re using to see the file shows the file type and icon — email clients are a bit sporadic about this, which doesn’t help matters any. So the ".jpg" bit will only trick people who already know enough to recognise extensions and know what they mean — and most of those probably already turned off "hide file extensions". At the end of the day, though, if you tell people that to see the dancing bunnies they just need to double click on the attachment, then many people will do just that — without further thought and regardless of how many obstacles you try to put in their way. They want to see those dancing bunnies, dammit, and they won’t take no for an answer.

"The problem I have here is that hiding extensions for users that find them confusing, only comes full circle to bite the user when they stumble into a file that looks like (yes, this poor name) FunInTheSun.jpg.exe." When I create a file called FunInTheSun.jpg.exe on my desktop, it shows up as "FunInTheSun…" because the name is too long to show the rest. Single clicking shows the name, but something tells me most users will double click it to open it. Want to defeat the user viewing files in a list view? Name it "Fun In The Sun – A funny image showing cute panda bears playing at the San Francisco zoo with cute babes in the foreground.jpg.exe". At least the Details view shows the Type, but I’d guess most people are like me and ignore that column if they think they already know what type it is.

Until Windows has a way to make *all* newly created files non-executable by default (requiring a moral equivalent of chmod +x before they can be executed), it is in the best interest of grandmas all around the world to have extensions always displayed and to learn to distinguish potentially highly malicious extensions (.pif, .scr) from exploitable extensions (.doc, .ani) from safe ones (.txt, and that’s about it).
[Good comment. -Raymond] Funniest reply ever.

@Dean Harding: "So you can be a douchebag, but Raymond is not allowed? This is Raymond’s blog, you are a visitor. Why would the visitor have more right to be a douchebag than the host?" 1) Raymond is allowed to do whatever he wants with his blog. He lets people post comments, which enables me to write whatever I want. Raymond is further allowed to edit or remove such comments as he sees fit. I am not saying Raymond cannot be a douchebag. I am saying he should not be a douchebag, because his blog will be better if he’s not so nasty all the time. 2) If you think my calling him a douchebag makes me a hypocrite, fine. Whether or not it does has no effect on the validity of my points — though I understand that it might reasonably make you less likely to agree with me. I thought of it more as standing up to a bully. As a more general matter, I don’t think it’s unreasonable to hold bloggers and commenters on those blogs to different standards. In a vaguely analogous context, many letters to the editor of newspapers are printed whose content would be completely inappropriate if it were written by the newspaper’s editors or reporters. You, of course, are free to argue that people writing comments owe Chen a greater duty than he owes them. I happen to feel otherwise. At any rate, Raymond, I want to apologize for calling you a douchebag. I really feel like I am pointing out a serious shortcoming in your blog, and clearly my using that term has undermined my point, at least with some readers.

Any idea why you can’t create a folder named "con" or a file named "con.{extension here}"?

"[Allow me to be even more precise. "Files beginning with a period and containing no other periods are considered, by the rules regarding file name parsing, to have an extension but no name. It may be the case that not all programs adhere to these rules." -Raymond]" Can you add one more piece of precision: where are the "rules regarding file name parsing" codified?
The best I could find was an article called "Naming a File" in the PSDK documentation that states, "Use a period (.) to separate the base file name from the extension in a directory name or file name." This statement is ambiguous, since an affirmative command to use a period to delimit the file’s extension is not the same as a command to ONLY use it for that purpose. Indeed, later in the same block of the article it says, "Use a period (.) as a directory component in a path to represent the current directory." Maybe this is an expressio unius est exclusio alterius situation, but it doesn’t feel that way, particularly in light of the specific prohibitions on certain types of file names mentioned in the article. ("Do not use the following reserved device names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3,…. Also avoid these names followed by an extension…") And, no doubt reflecting ample amounts of both ignorance and idiocy: if this was really the rule, why wouldn’t Windows enforce it at the API level? Whatever the answer, this seems like a rule that’s more honored in the breach — I just checked, and even Word 2003 will let me name a file with a leading period. As such, I don’t think altering Explorer’s behavior really amounts to changing the rules after the game has ended. At least, it’s on a whole other level than adding new error codes.

>So does the dot still have a special meaning in ‘modern’ Windows OSes and NTFS?

Yes. On most OSs (Windows-derived or *NIX-derived) you can use C-API functions like basename(), dirname(), etc. to get the base filename and directory name for a given file. In general, you can do a strrchr(basename(foo), ‘.’) to get the extension of the file, if any.

>And are they still used to determine what application to run on double-click instead of using metadata in the filesystem?

What metadata? This is not a Mac. There is no metadata. Rename a .doc to .txt and watch Windows attempt to open it with Notepad.
*NIX-based systems _generally_ do a better job because they don’t rely solely upon the extension:

cp vim-7.0.000-2.el4.kb.src.rpm v7.txt
file v7.txt
v7.txt: RPM v3 src i386 vim-7.0.000-2.el4.kb

*NIX generally (though not always; it depends on whether the programmer had a clue or not) examines the first two bytes of the file and looks it up in /etc/magic to determine what kind of file it actually is. Of course, this can lead to hacks as well — mv backdoor.sh README.txt and then wait for someone who A) doesn’t notice that README.txt is executable, B) has enough privileges to make it worthwhile, and C) double clicks on README.txt instead of opening it in an editor.

> The convention of leading-dot-files-are-hidden comes from a completely different operating system. In Windows, if you want a hidden file, you set its hidden attribute. If you want UNIX, you know where to find it.

And you (and the other poster) miss the point entirely — some of us work in both worlds and need files with leading dots. We don’t expect them to be hidden, but the fact that Explorer can’t create them directly is a (minor) annoyance. Honestly, I can’t say I’ve ever run into it, because that’s just not how I create new files (particularly from a Unix point of view). And, yes, the default behavior of Explorer to hide "known" extension types is a security hole, since someone double clicking on README.txt.bat is unlikely to get what they want; and the default executable behavior and lack of user/admin privilege separation (prior to Vista) mean that it’s quite different from the situation above.

One could argue that the name of a file called ".dot" should be pronounced "full stop dot," whereas "·dot" (which is perfectly acceptable to Explorer) could be pronounced "dot dot." Too bad human beings aren’t too keen on open standards like Unicode.

Pre-emptive snarky URL:

Interestingly, I just tried to make "COM1.txt" in Explorer and I get the error "Cannot rename New Text Document: A file with the name you specified already exists.
Specify a different file name." and it leaves me with "New Text Document.txt" in the directory. If I try to create the directory "COM1", I get no error, but the directory remains "New Directory". If I try to save a file from Notepad called "COM1", it tells me that’s the reserved name of a device. If I try "COM1.txt", it tells me the file already exists and asks if I want to replace it. If I say yes, I get an error telling me that it cannot be created. Fascinating and weird!

You’re creating a false dilemma. Why does your response need to assume that the poster did or did not take into account <X>? There are perfectly courteous ways of getting at the same idea: 1) "How would you handle <X>?"; 2) "This might work if you can deal with <X>. How would you do that?"; or even 3) "I don’t think this will work because of <X>." All of those manage to avoid both condescension and rudeness and invite the poster to clarify his idea (or acknowledge that it doesn’t stand up to scrutiny). I simply don’t see any downside to approaching things without assuming that the poster has things right or wrong. If I’m missing something, though, I am always open to changing my opinion!

Raymundo Chennai (if that is your real name :O): "No, file extensions are not confusing to grandmas. After she has seen a few extensions, she likes how she can see the type of the file…" It’s a nice theory, but unfortunately it doesn’t work in the real world. In the real world, many users just don’t even *want* to learn: they expect to be spoonfed virtually everything, primarily because over the past 15 years or so they’ve come to believe that a computer is a domestic commodity like a microwave or a VCR.

I can’t imagine it would be *too* difficult to add a special-case check to the Explorer code to determine if the name begins with a "." and act accordingly, but of course I have no idea what havoc would ensue elsewhere in the OS (or in 3rd party apps) and may well be talking out of my nether regions.
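The COM1/con experiments above all follow from the rule that the classic DOS device names are reserved no matter what extension you tack on. A Python sketch of that check; the name list follows the documentation quoted earlier in the thread, though real Windows applies further quirks (trailing spaces, for instance) not modeled here:

```python
# Classic DOS device names, reserved with or without an extension.
RESERVED = {"CON", "PRN", "AUX", "NUL",
            *{f"COM{i}" for i in range(1, 10)},
            *{f"LPT{i}" for i in range(1, 10)}}

def is_reserved_name(filename):
    """True if the name collides with a DOS device name. The extension is
    ignored for this check, so 'con.h' and 'COM1.txt' are just as
    unusable as plain 'con'."""
    base = filename.split(".", 1)[0]   # everything before the first period
    return base.upper() in RESERVED
```

This is why shortening const.h to con.h (a story that comes up later in the thread) sends the compiler off to read from the console.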
:)

@Johan: thanks for the link; that’s what I was referring to in my above post (though I was looking at my installed documentation). Let me clarify what I said at 9:49: the problem I have with Raymond’s interpretation of the rules (if the document linked by Johan is what Raymond is referring to) is that it opens by saying that "all file systems follow the same general conventions: a base file name and an optional extension, separated by a period." As I see it, this means there are two options for a file name under the rule: 1) a file name made of a "base file name" (BFN) only, or 2) a file name made of the BFN plus an extension, separated by a period. So it actually seems CONTRARY to the rules to treat a file like ".htaccess" as "hav[ing] an extension but no name", since it doesn’t fall into one of the two allowable categories (as I understand them). However, treating it as having no extension places it in category 1. And there’s nothing truly magical about having a period in a BFN, right? Under all interpretations "this.is.a.text.file.txt" is perfectly valid. Even Explorer is on board for that one. I’m really not just arguing this for the sake of it or trying to be petulant. I really do read the rules this way. My reading may be wrong, but I really don’t think there is anything strange about it.

@ivo: Oscar’s File Renamer works like that.

"It will also have an application icon, not an image icon." Except that EXE files contain their own icons; they don’t all share the same icon. The *.jpg.exe filename isn’t even necessary if extensions are hidden; just embed the icon used for JPG files and "user x" won’t know the difference. PS: I love this blog and I believe it would be very boring if Raymond were forced to have no personality at all.

The problem I have here is that hiding extensions for users that find them confusing only comes full circle to bite the user when they stumble into a file that looks like (yes, this is a poor name) FunInTheSun.jpg.exe.
Maybe it arrived via email, USB drive, download, whatever; that doesn’t really matter. Now the user, who "can’t understand extensions in the first place", has been dumbed down by hidden extensions and thinks this is an image. Yeah, let’s double click on FunInTheSun.jpg, that’ll be great. Maybe they get a warning of some sort, maybe not. Perhaps they were tired of all the preventative warnings they see and disabled them, or maybe they just mindlessly click through them. Either way, here is a juicy point of vulnerability. Does this seem farfetched? What percentage of people fall victim to this? 1%? 0.5%? Less? With Windows’ user base, that’s a lot of people. A lot. I’m not trying to blame Windows or the user; perhaps this is a UAT issue. At what point do we say, yes, this *might* simplify things, but at what cost? The reality is that no matter how simple you make something, someone is going to have a problem with it. I love the local cable ads: "Power down, power up". I wonder how many service-related phone calls it took — where the resolution was "power down the cable modem, the router and the PC, now turn them back on, is it working now?" — before they made this commercial.

It looks like in WinXP, Explorer does not allow any filename which has a period at the end of the name. But here it goes a step further: it creates the file/folder with that name but deletes the period at the end. If for example you create “xyz.” it makes a file named “xyz”. I believe this is because Explorer thinks that the file has no extension and therefore relieves it of the period at the end too. But this is more graceful than saying “You must type a file name.” for files starting with a period.

Not to nitpick, but file(1) on UNIX doesn’t work like the poster above claims it does; it’s not as simplistic as just looking at the first few bytes of the file to determine its type. Refer to for the magic file format.
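The kind of pattern table file(1) consults can be sketched in a few lines. The signatures below are well-known magic numbers, but the table itself is a tiny, hand-picked illustration in the spirit of /etc/magic, not the real database:

```python
# (offset, expected bytes, type name) — a miniature /etc/magic.
MAGIC = [
    (0, b"\x89PNG\r\n\x1a\n", "PNG image"),
    (0, b"\xff\xd8\xff",      "JPEG image"),
    (0, b"PK\x03\x04",        "ZIP archive"),
    (0, b"MZ",                "DOS/Windows executable"),
]

def identify(data):
    """Return a type name by testing byte patterns at fixed offsets,
    independent of whatever the filename claims."""
    for offset, pattern, name in MAGIC:
        if data[offset:offset + len(pattern)] == pattern:
            return name
    return "unknown"
```

Note that `identify` would report an MZ executable as such even if the file were named README.txt — which is the point being made about not trusting the extension.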
Essentially, you can have N tests based on the data at numerous offsets O1..On in the file in order to determine the type of the file. How did you think it was able to determine the version of the RPM package file and the type of RPM package? Simple: by seeking to the offset in the file where the version number was stored and interpreting it as an int/long/whatever, then seeking to the offset where the package type was stored, and so on.

This is only loosely related to this topic, but I am curious, and I’m sure others are as well: why are certain file extensions (e.g. .pif, .shs) hidden even if the option to hide known extensions is turned off? [Good comment. -Raymond]

Are you saying that because you honestly believe my comment was good, to demonstrate the impracticality of the suggestion you inferred from Raymundo Chennai’s earlier comment, to openly mock said inference, or for some reason so indescribably snarky that if exposed would cause the universe to collapse in on itself?

Look in HKEY_CLASSES_ROOT. Any file extension that has a key called "NeverShowExt" will have its extension hidden all the time. As for WHY those extensions were chosen to be hidden all the time? I dunno; I guess those file types are just on double-secret probation or something. Hmm… that gives me an idea… Although being unable to create a .htaccess file is stupid.

Quoth the Adam: Any idea why you can’t create a folder named "con" or a file named "con.{extension here}"? I was bitten by this when I shortened the const.h file to con.h (big whoop, saving two keystrokes per C source file) and, when I tried to compile it, it (the compiler) ran forever because it was trying to read the header file from the console! WTF! Turns out all these devices (con:, lpt1:, nul:) don’t care about the extension (or at least the functions that were being called by the compiler didn’t). They just opened the device.
That turned out to be a neat trick to see if a drive existed in BAT files: if not exist x:\nul goto not_there — or something like that. Cheers, Pax.

What about NUL files?

I love you, Raymond. I’m sorry people are so mean in your blog comments. *hugs*

TimHollebeek wrote: "Implementing "Hide known extensions" was definitely a mistake." Not hiding file extensions has issues too. When renaming a file, you may accidentally remove the file extension.

About file names starting with a period: Windows NT has a POSIX subsystem and has to interoperate with *NIX systems. So it’s good that the system is able to create files starting with periods. I don’t really care whether Explorer allows or forbids that, since the OS allows it.

Ben wrote: "Now, the user, that "can’t understand extensions in the first place", has been dumbed-down by hidden extensions and thinks this is an image." Usually, file types are easily recognized by their icon… Well, actually, an executable can easily reuse the icon of a JPEG file… Well, but there’s the file type in "Tile view mode" (assuming he is using this mode)… Unfortunately, spoofing is easy there too. Anyway, I believe that showing file extensions wouldn’t change things much. The same user would probably execute a file named FunInTheSun.jpg.exe, because it has an image icon.

From PM by J: "When I create a file called FunInTheSun.jpg.exe on my desktop, it shows up as "FunInTheSun…" because the name is too long to show the rest." How is it different with file extensions shown?

From Adam: "Any idea why you can’t create a folder named "con" or a file named "con.{extension here}"?" I discovered this behaviour of Explorer when making sure our application met the "Designed for Windows 95" logo program. One requirement was that you supported the creation and reading of files with a leading ‘.’. I was therefore bemused by the fact that the Windows 95 Explorer didn’t meet the requirements of the Windows 95 Logo!
I was a lot younger then, and in hindsight, perhaps I misread the requirements…

Johan Noodlebrock and Raymundo Chennai, will you two give it a rest and STFU?

@Karellen: I did read the post. Ray was mentioning files like .htaccess, and I said even .something.txt is also not possible through the Explorer UI. Moreover, the documentation says, "Do not end a file or directory name with a trailing space or a period." However, you can start a name with a period — but the Explorer coders did not adhere to this rule.

File name extensions are confusing to Grandma. So is their lack. Has anyone else had to explain to a user that the file that they named "important.txt" is actually "important.txt.exe", but Windows is helpfully hiding the .exe, which is why nothing works for them? Hiding file name extensions was a happy idea that didn’t work in implementation and should have been ditched several service packs ago.

@AQ: Why? I’m still genuinely trying to understand where Raymond is getting his rule.

thras: How on earth would Grandma accidentally create a file called important.txt.exe? Remember, Notepad will automatically append ".txt" to the end of any filename you type, even if it looks like you’re trying to override the default extension. (This behavior can be overridden by selecting Save as Type > All Files.)

Raymond, you have obviously missed that the "hide known extensions" feature is the real problem. There is simply no other serious way to tell the file type than by the extension. Some people suggest taking the icon. Fine, wonderful — EXE files allow you to choose arbitrary icons. Looks like a text file, but once you double click it, it goes boom very badly. Other people, like you, suggest letting the system determine it. But how? Holding the mouse cursor over it for every file? Wastes time and goes boom on WMF and AVI. Always having the detailed view turned on? A wonderful waste of space, when in the default view 5 rows of files would easily fit in there.
Also wastes time, because Explorer parses various properties on certain file types. What if the system doesn’t know it, or the association is broken, or hasn’t been created yet? (And that’s not even to mention Windows Vista, which allows the attacker to spoof the file extension as well, using arbitrarily created desktop.ini files. This one is a lost case.)

@David Brooks: SFU has no problems with this, since it doesn’t run in the Win32 subsystem, so there are no reserved names. Also, even there you can deal with it by simply unpacking it to a different name and then renaming it using UNC paths (ren foo.c \\.\%cd%\aux.c).

"There is simply no other serious way to tell the file type other than by the extension." Check the first few bytes of the file against a list of known patterns (stored in the registry so as to be extensible) and set Explorer’s display accordingly. Fall back on the extension if the info isn’t available. This need only be done when a file is created, or as part of the save operation, and the result can be stored in the file system.

Raymond dealt with the "reserved file names" question a long time ago, by the way.

What were you thinking, Raymond? You simply cannot post anything at all on a topic like this without eliciting great gobs of snarkiness and nitpickery. You might as well have concluded with "If it really bugs you, you are free to write your complaints to oldnewthing#comments."

"Check the first few bytes of the file against a list of known patterns (stored in the registry so as to be extensible) and set Explorer’s display accordingly." Great, try that with a remote folder containing 10000 files. See if you still like the idea. Because this is a real problem, Nautilus makes a distinction between local and remote files when making thumbnails. But Nautilus can’t really know for sure if a directory is local or not, for instance when using FUSE (user-space filesystems).
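The sniff-then-fall-back proposal quoted above might look like this in Python. The signature and extension tables are illustrative stand-ins for the registry-stored, extensible lists the commenter envisions, not any real Explorer mechanism:

```python
import os

SIGNATURES = {          # leading-bytes pattern -> display type
    b"\xff\xd8\xff": "JPEG image",
    b"%PDF":         "PDF document",
    b"MZ":           "Windows executable",
}
EXTENSION_TYPES = {".txt": "Text document", ".jpg": "JPEG image"}

def display_type(path):
    """Prefer content sniffing; fall back on the extension if the file
    can't be read or no known pattern matches."""
    try:
        with open(path, "rb") as f:
            head = f.read(8)
        for sig, name in SIGNATURES.items():
            if head.startswith(sig):
                return name
    except OSError:
        pass
    ext = os.path.splitext(path)[1].lower()
    return EXTENSION_TYPES.get(ext, "File")
```

A FunInTheSun.jpg.exe that is really an MZ executable would then display as "Windows executable" no matter what its name claims — though sniffing every file in a remote folder has a real cost, as the 10000-file objection above points out.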
"But nautilus can’t really know for sure if a directory is local or not" It can easily know whether accessing files is slow or not, simply by counting the number of files opened after 200 milliseconds, for example. That’s actually what counts. A fast network is ready for thumbnails; a floppy disk isn’t. However, having a variable, inconsistent behavior isn’t acceptable. Just think about it: refreshing a directory will typically show more thumbnails (since the cache would speed up accesses)… This would look like a bug for thumbnails. This would be awful for file types… The type of a file mustn’t depend on speed (or networking) factors. For thumbnails, lazy initialization (computing & rendering thumbnails only for objects visible in the current window frame) would partially solve the problem.

@NUXI: Of course spaces in filenames are the evilest things ever created. No. Control characters in filenames are the most evil things ever created. Especially that pesky ^H that shows up on UNIX systems when the erase character isn’t set properly and you’re using a shell that doesn’t handle line editing on its own, etc. You can look at the file in the ls output all you want, but it still *looks* right, because it’s backspacing over the typo you made when you created the file, just like you backspaced when you "fixed" the typo. That’s why ls has the -b option. Now that’s evil.

Since there is a side-thread that’s rehashing the filesystem’s theft of a whole class of reasonable names (con.*, aux.*…) – I once extracted a zip file of sources created on a UNIX system, and spent a while puzzled over what had happened to the auxiliary functions, all in the reasonably named "aux.c". I used a UNIX system to extract and rename it. Does SFU handle those cases properly? (As in: I’m too lazy to find out.)

You forgot the preemptive snarky comment: Yes… the confusing and useless .htaccess file.
People who raise a query about reserved file names (con, lpt, aux, etc.), don’t you think a search engine is a great way of finding out? Check out the first link returned by this: Hope that helps!

The NT kernel was designed by mostly sane people, and as such it mostly avoids trying to do anything impossible (this might seem like an obvious design rule for fundamental system code, but it has escaped people before; ask the poor chaps at Apple). So the NT kernel doesn’t care about filenames; they’re just arbitrary strings of non-zero 16-bit codes. 0xFFFF is a perfectly good filename so far as NT is concerned.

But Win32 comes from Raymond Chen’s world, where attempting the impossible and failing is sometimes considered a good idea (since it might make a customer happy, at least temporarily). So it adds a bunch of confusing name constraints in the hope of avoiding compatibility problems both real and imagined. Some of this stuff is bolted into the Win32 APIs (where it’s hard to avoid using it) and some of it is in the shell, or in standard dialogs, or just haphazardly bolted in wherever it solved a problem for someone.

Thus what actually happens (despite the best intentions of Raymond and many others) is that consistency goes out the window, and users find out quickly enough that Windows "doesn’t like" some filenames. They either learn to be very conservative about naming files to avoid these mysterious problems, or they spend a lot of time talking to the helpdesk. This "learned avoidance" helps to mask further bugs to some extent. In a very real sense, 8.3 filenames never went away on Windows.

[Of course Unix users have their own "learned avoidance": you won’t see a Unix hacker casually put spaces into a filename, because he’ll be worried that someone else forgot to quote that parameter in a shell script and it’ll break nastily.]

Hiding file extensions is rightly the default.
Anybody who wants to change it should be obliged to field all calls from people who have deleted the file extension and now can’t open it with a double click.

@Stephen Jones: Great, and you can field the calls from people who have run an EXE disguised as a JPG file with a .jpg.exe extension.

----"Great, and you can field the calls from people who have run an EXE disguised as a JPG file with a .jpg.exe extension."----

Anybody knowledgeable enough to distinguish between .jpg and .exe would have changed the default settings anyway. And if the default showed them both, somebody unknowledgeable would still have clicked it.

The real problem about showing extensions for grandma is not really the .doc that magically appears at the end. It’s the fact that when you try to rename the file, the whole filename is selected by default. Then grandma starts typing a new name (without keeping the original extension, because she doesn’t know what it is), and sees a prompt saying: "If you change a file name extension, the file may become UNUSABLE." Grandma will be like "wtf? NO" and will never try to rename a file again.

Hmmm… I’m beginning to see a pattern. MS finds a problem in their system ("Can not remotely configure file server!" in one of the past posts, "File with dot at the beginning can disappear!" now) and instead of fixing it properly, they just add a hack ("C$", "Explorer will not allow that.") that is inconsistent and/or dangerous and/or fixes only part of the problem, but hey, it kinda works…

Oh wise and venerable Bob, please share with us the perfect solution to those problems that all the engineers at Microsoft were too dumb to see for themselves! Or maybe the answer is SO obvious that it’s beneath you to explain?

[Once we invent the time machine we can implement your rule change. -Raymond]

No need for a time machine when M$ has Windows Update and can force patches down its customers’ throats. Here is the behaviour I would expect. If "Hide extensions for known…."
is checked: Only accept no name from files with an unknown extension (like .htaccess). If "Hide…" is not checked: All no-name files should be accepted.

[If "Hide extensions for known…." is checked: Only accept no name from files with an unknown extension (like .htaccess)]

Ah, we appear to have come full circle. Blog Nirvana?
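The .jpg.exe trick discussed above comes straight from how "hide extensions for known file types" works: only the final, registered extension is stripped from the displayed name. A minimal Python sketch of that display rule (the `KNOWN` set and `displayed_name` helper are illustrative, not the shell's real logic):

```python
import os

# A tiny stand-in for the registry's list of "known" file types.
KNOWN = {".exe", ".jpg", ".txt", ".doc"}

def displayed_name(filename: str) -> str:
    """Mimic 'Hide extensions for known file types': strip the final
    extension only if it is registered as known."""
    stem, ext = os.path.splitext(filename)
    return stem if ext.lower() in KNOWN else filename
```

So "photo.jpg.exe" is displayed as "photo.jpg" (looks like an image, runs as a program), while ".htaccess" survives untouched because a leading-dot name has no extension at all.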
This is a problem I encountered more than once, and today I finally found a "solution". I place "solution" in quotes because it's more of a workaround for something that seems to be a problem with the Python Windows installer. I ran into the problem on a Windows 7 box running ActivePython 2.6, but according to this Python issue, others have encountered it with Windows XP and Python 3.x as well.

The problem manifests itself as follows. Prepare a simple script containing:

import sys
print sys.argv

Execute it from the command line:

C:\>python z.py 1 2 3
['z.py', '1', '2', '3']

This looks right. But now execute it without prepending python:

C:\>z.py 1 2 3
['C:\\z.py']

What gives? Doesn't the installer configure .py files to be run by the Python executable correctly, passing arguments along as one would expect?

I found a couple of non-solutions first. The most popular was to set up the association with the assoc and ftype commands, as follows:

C:\>ftype Python26.File="C:\Python26\python.exe" "%1" %*
C:\>assoc .py=Python26.File

But to no avail; the problem persisted. What eventually solved it was fixing the relevant registry keys for Python. I set the HKEY_CLASSES_ROOT\Applications\python26.exe\shell\open\command key to:

"C:\Python26\python26.exe" "%1" %*

Previously, %* was missing. Similarly, I set HKEY_CLASSES_ROOT\py_auto_file\shell\open\command to the same value. You can also set it accordingly for python26w.exe (the version of Python on Windows that runs without a console). This worked, and now I get:

C:\>z.py 1 2 3
['C:\\z.py', '1', '2', '3']

What causes the problem? In all likelihood the ActivePython 2.6 installer, which doesn't set up the registry keys correctly. Now, this may not always happen, and some discussions point to it being dependent on other factors. For instance, I had another version of Python already installed on the machine when I ran the installer - this may have confused it.
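The essence of the fix above is that the registered command must contain both "%1" (substituted with the script path) and %* (substituted with the remaining arguments). A small hedged helper to check a command string for that shape; `forwards_arguments` is illustrative, and reading the real value would need the Windows-only winreg module:

```python
def forwards_arguments(command: str) -> bool:
    """True if a shell\\open\\command value passes both the script
    path ("%1") and the rest of the command line (%*) to Python."""
    return '"%1"' in command and "%*" in command

# The broken value the installer left behind, and the repaired one:
broken = '"C:\\Python26\\python.exe" "%1"'      # sys.argv gets no args
fixed  = '"C:\\Python26\\python.exe" "%1" %*'   # args are forwarded
```

On an affected machine you would feed this the value read from HKEY_CLASSES_ROOT\Applications\python26.exe\shell\open\command (via winreg.QueryValue) before deciding whether to repair it.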
At the beginning of this month, I started working on a new Perl project, whose goal was to implement the Logo programming language (also known as "turtle graphics"). I had never actually programmed in Logo myself, but thought it would be fun to combine all of the low-level Logo drawing commands (in a Perl/Tk GUI) with the power of Perl to do the rest. (At some point in the future, I plan to implement more of the Logo programming language itself, including loop constructs, conditionals, subroutines, etc.)

One of the biggest early hurdles was getting past the well-known fact that you can't do Tk within a child thread. But I wanted to create a Tk window automatically whenever the first Logo object was constructed, and I didn't want the user to have to manage the event loop themselves. A coworker suggested forking off a separate instance of the Perl interpreter as a server, and having it respond to requests over a socket; once I had that working, the rest was a lot more fun!

One of the "cool" aspects of using a client-server model for the project is that I'm no longer limited to a single "turtle" on the screen; the module supports multiple simultaneous clients. Additionally, since there is support for querying any particular client's parameters (including screen position, angle heading, etc.), there is some interesting potential for games or projects which could result (Light cycles, for example).

The following code represents the Language::Logo module, which I'm planning to submit (as my first module) to CPAN. On January 16th, I gave a presentation to Boston Perl Mongers, and was given some good feedback (including the appropriate namespace under which to register: Language::Logo). But I'd also like to hear comments from my fellow Perl monks, whose opinions I hold in high regard. So please give me your feedback!
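The architecture described above (a separate server process owning the Tk event loop, with clients speaking a one-line command protocol over a socket) can be sketched language-neutrally. The sketch below uses Python, and a thread rather than a forked Perl interpreter, purely to keep the example small and portable; the "ok:" reply format is invented for illustration and is not the module's actual protocol:

```python
# Minimal client/server sketch of the Language::Logo design: the server
# owns the (here imaginary) event loop and answers one command per line;
# each connected client is an independent "turtle".
import socket
import threading

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn, conn.makefile("rw") as f:
        for line in f:                        # one command per line
            f.write(f"ok:{line.strip()}\n")   # acknowledge each command,
            f.flush()                         # as the Logo server does

listener = socket.socket()
listener.bind(("127.0.0.1", 0))               # pick any free port
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# A client connects and sends a turtle command, then reads the reply.
client = socket.create_connection(listener.getsockname())
with client, client.makefile("rw") as f:
    f.write("forward 10\n")
    f.flush()
    reply = f.readline().strip()
```

The real module follows the same shape: `command()` writes "=forward 10" down the socket and blocks on the server's one-line reply, which is what serializes drawing from many clients through a single Tk main loop.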
Here is the current code for Language::Logo: # # Language::Logo.pm # # An implementation of the Logo programming language which allows # multiple clients to connect simultaneously. # # Written January 2007, by John C. Norton # Presented at Boston Perlmongers on January 16th, 2007 # # Package header package Logo; our $VERSION = '1.000'; # Current version # Strict use strict; use warnings; # Libraries use IO::Select; use IO::Socket; use Sys::Hostname; ################# ### Variables ### ################# use constant PI => (4 * atan2(1, 1)); # User-defined my $date = '070129'; # Date last modified my $iam = "Language::Logo"; # Module identifier my $d_title = "$iam version $VERSION"; my $max_connect = 16; # Maximum client connections my $retry_timeout = 10; # Client connection timeout after N seconds # Defaults my $d_port = "8220"; # Default socket port my $d_update = 10; # Default gui update rate my $d_bg = "black"; # Default canvas background color my $d_width = 512; # Default canvas width my $d_height = 512; # Default canvas height my $d_color = 'white'; # Default pen/turtle color my $d_psize = '1'; # Default pen size (thickness) my $d_txdim = '6'; # Default turtle x-dimension my $d_tydim = '9'; # Default turtle y-dimension my @switches = qw( name debug title bg width height update host port ); my %switches = map { $_ => 1 } @switches; # Client-specific top-level variables (with initial values) my $pvars = { 'debug' => 0, 'step' => 0, 'wrap' => 0, }; # Command aliases my $palias = { 'fd' => 'forward', 'bk' => 'backward', 'rt' => 'right', 'lt' => 'left', 'sh' => 'seth', 'pu' => 'penup', 'pd' => 'pendown', 'ps' => 'pensize', 'co' => 'color', 'cs' => 'clear', 'hm' => 'home', 'sx' => 'setx', 'sy' => 'sety', 'xy' => 'setxy', 'ht' => 'hideturtle', 'st' => 'showturtle', 'w' => 'width', 'h' => 'height', 'ud' => 'update', 'bg' => 'background', }; my $pmethods = { 'forward' => 'move_turtle', 'backward' => 'move_turtle', 'right' => 'turn_turtle', 'left' => 'turn_turtle',
'seth' => 'turn_turtle', 'penup' => 'change_pen_state', 'pendown' => 'change_pen_state', 'pensize' => 'change_pen_size', 'color' => 'change_color', 'clear' => 'modify_canvas', 'width' => 'modify_canvas', 'height' => 'modify_canvas', 'background' => 'modify_canvas', 'home' => 'reset_turtle', 'setx' => 'move_turtle', 'sety' => 'move_turtle', 'setxy' => 'move_turtle', 'hideturtle' => 'show_turtle', 'showturtle' => 'show_turtle', 'update' => 'change_update', }; ################### ### Subroutines ### ################### #=================== #=== Client code === #=================== sub new { my ($class, @args) = @_; (ref $class) and $class = ref $class; # Create blessed reference my $self = { }; bless $self, $class; # Parse optional arguments while (@args) { my $arg = shift @args; if ($arg =~ /^sig(.+)$/) { # Trap specified signals my $sig = uc $1; $SIG{$sig} = shift @args; } elsif (defined($switches{$arg}) and @args > 0) { # Assign all valid parameters $self->{$arg} = shift @args; } } # Startup a new server locally if 'host' was not defined. if (!defined($self->{'host'})) { $self->fork_server(); } # Connect to the server $self->connect_to_server(); # Return the object return $self; } sub disconnect { my ($self, $msg) = @_; if ($msg || 0) { print "$msg"; <STDIN>; } my $sock = $self->{'socket'}; if ($sock || 0) { close($sock); } } sub connect_to_server { my ($self) = @_; # Return if socket is already connected my $sock = $self->{'socket'} || 0; my $name = $self->{'name'} || ""; $sock and return $sock; # If hostname is ':', use local host my $host = $self->{'host'} || ':'; ($host eq ':') and $host = hostname(); my $port = $self->{'port'} || $d_port; my %params = ( 'PeerAddr' => $host, 'PeerPort' => $port, 'Proto' => 'tcp', 'ReuseAddr' => 1, ); # Keep retrying until $retry_timeout is exceeded my $start = time; while (1) { ($sock = new IO::Socket::INET(%params)) and last; # Success! 
if (time - $start > $retry_timeout) { die "$iam: Failed client socket connection\n"; } select(undef, undef, undef, 0.1); } # Save socket $self->{'socket'} = $sock; print $sock ":$name\n"; chomp(my $ans = <$sock>); if ($ans !~ /^(\d+):(.+)$/) { die "$iam: expected 'id:name', got '$ans'\n"; } my ($id, $newname) = ($1, $2); $self->{'id'} = $id; $self->{'name'} = $newname; $self->{'host'} = $host; return $sock; } sub host { my ($self) = @_; return $self->{'host'}; } sub interact { my ($self) = @_; print "Type '?' for help\n"; while (1) { print "$iam> "; my $cmd = <STDIN>; defined($cmd) or return; chomp $cmd; $cmd =~ s/^\s*(.*)\s*$/$1/; ($cmd eq 'quit' or $cmd eq 'bye' or $cmd eq 'exit') and return; if ($cmd ne "") { if ($cmd =~ s/^\?//) { my $ans = $self->ask($cmd); print "Response: $ans\n"; } else { my $ans = $self->command($cmd); $ans and print "Command ERROR: $ans\n"; } } } } sub command { my ($self, $cmdstr) = @_; my $sock = $self->connect_to_server(); $sock or return 0; my @commands = split(';', $cmdstr); foreach my $cmd (@commands) { print $sock "=$cmd\n"; my $answer = <$sock>; $answer or die "$iam: server socket went away\n"; chomp $answer; $answer and return $answer; } return ""; } sub cmd { my $self = shift; return $self->command(@_); } sub query { my ($self, $cmd) = @_; my $sock = $self->connect_to_server(); $sock or return 0; print $sock "?$cmd\n"; chomp(my $answer = <$sock>); return $answer; } sub ask { my $self = shift; return $self->query(@_); } #=================== #=== Server code === #=================== sub fork_server { my ($self) = @_; my $b_dbg = $self->{'debug'} || 0; my $title = $self->{'title'} || $d_title; my $w = $self->{'width'} || $d_width; my $h = $self->{'height'} || $d_height; my $bg = $self->{'bg'} || $d_bg; my $update = $self->{'update'} || $d_update; my $host = $self->{'host'} || hostname(); my $port = $self->{'port'} || $d_port; my $fork = fork(); defined($fork) or die "$iam: failed to fork server\n"; $fork and return; Logo->server_init($b_dbg, $title, $w, $h, $bg, $update, $host, $port); } sub server_init { my ($class, $b_dbg, $title, $w, $h, $bg, $update, $host, $port) = @_; # Create a blessed object my $self = { 'nticks' => 0, # Tracks number of GUI updates 'debug' => $b_dbg, # Debug flag 'count' => 0, # Current number of connections 'total' => 0, # Total number of connections 'clients' => { }, # The client hash 'names' => { }, # The clients by name }; bless $self, $class; # Open a socket connection at the desired port my %params = ( 'LocalHost' => $host, 'LocalPort' => $port, 'Proto' => 'tcp', 'Listen' => $max_connect, 'ReuseAddr' => 1, ); # Create socket object my $sock = new IO::Socket::INET(%params); if (!$sock) { # Port is already in use -- client will connect to it instead $b_dbg and print "[Port $port already in use]\n"; exit; } $self->{'socket'} = $sock; # Create select set for reading $self->{'select'} = new IO::Select($sock); # Create the GUI require Tk; $b_dbg and print "[Logo server v$VERSION on '$host']\n"; my $mw = Tk::MainWindow->new(-title => $title); $self->{'mw'} = $mw; # Allow easy dismissal of the GUI $mw->bind("<Escape>" => sub { $self->server_exit }); # Create a new canvas $self->clear_screen($w, $h, $bg); # Manage the GUI $self->{'repid'} = $self->set_update($update); Tk::MainLoop(); } sub server_exit { my ($self) = @_; my $mw = $self->{'mw'}; my $sel = $self->{'select'}; my $sock = $self->{'socket'}; my $pclients = $self->{'clients'}; my $pnames = $self->{'names'}; close $sock; foreach my $name (keys %$pnames) { my $pclient = $pnames->{$name}; my $fh = $pclient->{'fh'}; $self->server_remove_client($pclients, $sel, $fh); } # Shouldn't ever get here, since when the last client exited, # the server should have already gone away. But just in case ... # $mw->destroy(); exit; } sub set_update { my ($self, $update) = @_; ($update < 1) and $update = 1; ($update > 1000) and $update = 1000; $self->{'update'} = $update; my $mw = $self->{'mw'}; my $id = $mw->repeat($update => sub { $self->server_loop() }); return $id; } sub server_loop { my ($self) = @_; # Increment tick count ++$self->{'nticks'}; # Get data from the object my $sel = $self->{'select'}; my $sock = $self->{'socket'}; my $pclients = $self->{'clients'}; # Handle each pending socket my @readable = $sel->can_read(0); foreach my $rh (@readable) { if ($rh == $sock) { # The main socket means a new incoming connection. $self->server_add_client($rh, $pclients); } else { # Service the socket my $text = <$rh>; if (defined($text)) { # Process packet (either command or ask) chomp $text; my $pc = $pclients->{$rh}; $self->server_socket_input($pc, $text); } else { # Socket was closed -- remove the client $self->server_remove_client($pclients, $sel, $rh); } } } } sub server_add_client { my ($self, $rh, $pclients) = @_; # Accept the client connect and add the new socket my $sel = $self->{'select'}; my $ns = $rh->accept(); $sel->add($ns); my $b_dbg = $self->{'debug'} || 0; my $peer = getpeername($ns); my ($port, $iaddr) = unpack_sockaddr_in($peer); my $remote = inet_ntoa($iaddr); # Get the client handshake, and send back its unique ID chomp(my $text = <$ns>); ($text =~ /^:(.*)$/) or die "Bad header, expected ':[name]', got '$text'"; my $name = $1 || ""; my $id = $self->{'total'} + 1; $name ||= "CLIENT$id"; print $ns "$id:$name\n"; my $pc = $pclients->{$ns} = { 'id' => $id, 'fh' => $ns, 'name' => $name, 'remote' => $remote, }; # Create the 'turtle' object $self->create_turtle($pc); # Increment the number of connections and the total connection count ++$self->{'count'}; ++$self->{'total'}; # Add the client's name $b_dbg and print "[Added socket $id => '$name']\n"; $self->{'names'}->{$name} = $pclients->{$ns}; } sub server_remove_client { my ($self, $pclients, $sel, $fh) = @_; my $b_dbg = $self->{'debug'} || 0; my $pc = $pclients->{$fh}; my $name = $pc->{'name'}; my $id = $pc->{'id'}; $sel->remove($fh); close($fh); delete $pclients->{$fh}; # Remove the client's name my $pnames = $self->{'names'}; delete $pnames->{$name}; # Remove the client's turtle my $cv = $self->{'canvas'}; my $ptids = $pc->{'turtle'}->{'tids'}; ($ptids || 0) and map { $cv->delete($_) } @$ptids; # Decrement the global client count --$self->{'count'}; $b_dbg and print "[Closed socket $id '$name']\n"; # Exit the server if this is the last connection if (0 == $self->{'count'} and $self->{'total'} > 0) { $b_dbg and print "[Final client closed -- exiting]\n"; $self->{'mw'}->destroy(); exit; } } sub server_socket_input { my ($self, $pc, $cmd) = @_; my $mode = $pc->{'mode'}; ($cmd =~ s/^=\s*//) and return $self->server_command($pc, $cmd); ($cmd =~ s/^\?\s*//) and return $self->server_query($pc, $cmd); } sub server_command { my ($self, $pc, $cmdstr) = @_; my $id = $pc->{'id'}; $pc->{'lastcmd'} = $cmdstr; $pc->{'debug'} and print "Command<$id>: '$cmdstr'\n"; my @args = split(/\s+/, $cmdstr); my $cmd = shift @args; # Resolve any command alias while (defined($palias->{$cmd})) { my $newcmd = $palias->{$cmd}; $cmd = $newcmd; } unshift @args, $cmd; # Execute one command if single-stepping is on if ($pc->{'step'}) { my $go = $self->single_step_prompt($pc, $cmd, [ @args ]); $go or return $self->server_reply($pc); } # Variables if (defined($pvars->{$cmd})) { return $self->set_variable($pc, @args); } # Find command in dispatch table my $method = $pmethods->{$cmd}; defined($method) and return $self->$method($pc, @args); # Return acknowledgment $self->server_reply($pc, "Unknown command '$cmd'"); } sub single_step_prompt { my ($self, $pc, $cmd, $pargs) = @_; my $cmdstr = join(" ", @$pargs); print "Step> [$cmdstr] Execute {y|n|c}? [y]"; chomp(my $ans = <STDIN>); ($ans =~ /^[cC]/) and $pc->{'step'} = 0; return ($ans =~ /^[nN]/)?
0: 1; } sub server_reply { my ($self, $pc, $reply) = @_; my $fh = $pc->{'fh'}; $reply ||= ""; print $fh "$reply\n"; } sub server_query { my ($self, $pc, $cmd) = @_; my $id = $pc->{'id'}; $pc->{'debug'} and print "Request<$id>: '$cmd'\n"; my @args = split(/\s+/, $cmd); $cmd = shift @args; my $cv = $self->{'canvas'}; # Quit if the query contains no text ($cmd || 0) or return $self->server_reply($pc, "Blank query"); # Return the specified global parameter, if defined if (defined($self->{$cmd})) { return $self->server_reply($pc, $self->{$cmd}); } # Return the specified per-client parameter, if defined if (defined($pc->{$cmd})) { return $self->server_reply($pc, $pc->{$cmd}); } # Return the specified per-client turtle parameter, if defined my $turtle = $pc->{'turtle'}; if (defined($turtle->{$cmd})) { return $self->server_reply($pc, $turtle->{$cmd}); } # Return an error message my $ans = "Unknown query '$cmd'"; $self->server_reply($pc, $ans); } sub create_turtle { my ($self, $pc, $from) = @_; my $turtle = { 'step' => 0, # Single step flag 'pen' => 0, # Pen state: 0 = 'up', 1 = 'down' 'color' => $d_color, # Pen color (also turtle color) 'size' => $d_psize, # Pen size (thickness) 'xdim' => $d_txdim, # Turtle x-dimension 'ydim' => $d_tydim, # Turtle y-dimension 'dist' => 0, # Last distance traveled (used as default) 'show' => 1, # Turtle starts out visible }; # Use old turtle as a reference if ($from || 0) { map { $turtle->{$_} = $from->{$_} } (keys %$from); } $self->home_turtle($pc, $turtle); $self->draw_turtle($pc, $turtle); } sub home_turtle { my ($self, $pc, $turtle) = @_; my $cv = $self->{'canvas'}; my $width = $cv->cget(-width); my $height = $cv->cget(-height); my $x = int($width / 2); my $y = int($height / 2); $turtle->{'x'} = $x; $turtle->{'y'} = $y; $turtle->{'angle'} = 0; } sub reset_turtle { my ($self, $pc, $cmd) = @_; my $turtle = $pc->{'turtle'}; $self->home_turtle($pc, $turtle); $self->draw_turtle($pc, $turtle); $self->server_reply($pc); } sub draw_turtle { my ($self, $pc, $turtle) = @_; # Erase old turtle if one exists my $cv = $self->{'canvas'}; my $ptids = $pc->{'turtle'}->{'tids'}; if ($ptids || 0) { map { $cv->delete($_) } @$ptids; $pc->{'turtle'}->{'tids'} = 0; } # Create turtle parameters my $cvbg = $cv->cget(-bg); my $x = $turtle->{'x'}; my $y = $turtle->{'y'}; my $xdim = $turtle->{'xdim'}; my $ydim = $turtle->{'ydim'}; my $color = $turtle->{'color'}; my $angle = $turtle->{'angle'}; my $show = $turtle->{'show'}; if ($turtle->{'show'}) { # Assign points, rotate them, and plot the turtle my $ppts = [ $x, $y, $x-$xdim, $y, $x, $y-2*$ydim, $x+$xdim, $y ]; $ppts = $self->rotate($x, $y, $angle, $ppts); my @args = (-fill => $cvbg, -outline => $color); my $tid = $cv->createPolygon(@$ppts, @args); $turtle->{'tids'} = [ $tid ]; $pc->{'turtle'} = $turtle; # If the pen is down, draw a circle around the current point $ppts = [ ]; if ($turtle->{'pen'}) { $ppts = [ $x-3, $y-3, $x+3, $y+3 ]; $tid = $cv->createOval(@$ppts, -outline => $color); push @{$turtle->{'tids'}}, $tid; } } # Save the turtle to this client's data $pc->{'turtle'} = $turtle; } sub change_update { my ($self, $pc, $cmd, $update) = @_; my $repid = $self->{'repid'}; ($repid || 0) and $repid->cancel(); $self->{'repid'} = $self->set_update($update); $self->server_reply($pc); } sub set_variable { my ($self, $pc, $param, $val) = @_; $pc->{$param} = $val || 0; $pc->{'debug'} and print "Variable '$param' set to '$val'\n"; $self->server_reply($pc); } sub modify_canvas { my ($self, $pc, $cmd, $val) = @_; my $cv = $self->{'canvas'}; ($cmd eq 'clear') and $self->clear_screen(); ($cmd eq 'width') and eval {$cv->configure('-wi', $val || $d_width)}; ($cmd eq 'height') and eval {$cv->configure('-he', $val || $d_height)}; ($cmd eq 'background') and eval {$cv->configure('-bg', $val || $d_bg)}; my $pnames = $self->{'names'}; foreach my $name (keys %$pnames) { my $pclient = $pnames->{$name}; my $turtle = $pclient->{'turtle'}; # Note: aliases are resolved before dispatch, so compare against # the full command names here, not the 'w'/'h'/'bg' shorthands. if ($cmd eq 'width' or $cmd eq 'height') { # Have to recreate the turtle $self->create_turtle($pclient); } elsif ($cmd eq 'background') { # Have to redraw the turtle $self->draw_turtle($pclient, $turtle); } } $self->server_reply($pc); } sub clear_screen { my ($self, $width, $height, $bg) = @_; # Clear any old canvas my $oldcv = $self->{'canvas'}; if ($oldcv || 0) { $width ||= $oldcv->cget(-width); $height ||= $oldcv->cget(-height); $bg ||= $oldcv->cget(-bg); $oldcv->packForget(); } # Create a new canvas $width ||= $d_width; $height ||= $d_height; $bg ||= $d_bg; my $mw = $self->{'mw'}; my @opts = (-bg => $bg, -width => $width, -height => $height); my $cv = $mw->Canvas(@opts); $cv->pack(-expand => 1, -fill => 'both'); $self->{'canvas'} = $cv; # For each client, draw its turtle my $pclients = $self->{'clients'} || { }; foreach my $pc (values %$pclients) { my $turtle = $pc->{'turtle'}; $self->create_turtle($pc, $turtle); } } sub rotate { my ($self, $x, $y, $angle, $ppoints) = @_; for (my $i = 0; $i < @$ppoints; $i += 2) { $ppoints->[$i] -= $x; $ppoints->[$i+1] -= $y; } my $ppolar = $self->rect_to_polar($ppoints); for (my $i = 1; $i <= @$ppolar; $i += 2) { $ppolar->[$i] = ($ppolar->[$i] + $angle) % 360; } $ppoints = $self->polar_to_rect($ppolar); for (my $i = 0; $i < @$ppoints; $i += 2) { $ppoints->[$i] += $x; $ppoints->[$i+1] += $y; } return $ppoints; } sub calculate_endpoint { my ($self, $x, $y, $angle, $dist) = @_; my $prect = $self->polar_to_rect([ $dist, $angle ]); my ($x1, $y1) = @$prect; $x1 += $x; $y1 += $y; return ($x1, $y1); } sub rect_to_polar { my ($self, $ppoints) = @_; my $ppolar = [ ]; while (@$ppoints > 1) { my $x = shift @$ppoints; my $y = shift @$ppoints; my $r = sqrt($x ** 2 + $y ** 2); my $t = $self->rad_to_deg(atan2($y, $x)); push @$ppolar, $r, $t; } return $ppolar; } sub polar_to_rect { my ($self, $ppoints) = @_; my $prect = [ ]; while (@$ppoints > 1) { my $r = shift @$ppoints; my $t = $self->deg_to_rad(shift @$ppoints); my $x = $r * cos($t); my $y = $r * sin($t); push @$prect, $x, $y; } return $prect; } sub
deg_to_rad { my ($self, $degrees) = @_; my $radians = $degrees * PI / 180; ($radians < 0) and $radians += 2 * PI; return $radians; } sub rad_to_deg { my ($self, $radians) = @_; my $degrees = $radians * 180 / PI; ($degrees < 0) and $degrees += 360; return $degrees; } sub show_turtle { my ($self, $pc, $cmd) = @_; # Aliases ('st'/'ht') are resolved before dispatch, so compare # against the full command name here. my $b_show = ($cmd eq 'showturtle')? 1: 0; my $turtle = $pc->{'turtle'}; $turtle->{'show'} = $b_show; $self->draw_turtle($pc, $turtle); $self->server_reply($pc); } sub change_color { my ($self, $pc, $cmd, $color) = @_; defined($color) or return $self->syntax_error($pc); # Allow a random color if (($color || "") eq 'random') { $color = sprintf "#%02x%02x%02x", rand 256, rand 256, rand 256; } my $turtle = $pc->{'turtle'}; $turtle->{'color'} = $color; $self->draw_turtle($pc, $turtle); $self->server_reply($pc); } sub change_pen_state { my ($self, $pc, $cmd) = @_; my $state = ($cmd eq 'pendown')? 1: 0; my $turtle = $pc->{'turtle'}; $turtle->{'pen'} = $state; $self->draw_turtle($pc, $turtle); $self->server_reply($pc); } sub change_pen_size { my ($self, $pc, $cmd, $size, @args) = @_; my $turtle = $pc->{'turtle'}; # Allow a random pen size if (($size || "") eq "random") { my $min = $args[0]; my $max = $args[1]; defined($min) or return $self->syntax_error($pc); defined($max) or return $self->syntax_error($pc); $size = $min + rand($max - $min); } $size ||= $d_psize; $turtle->{'size'} = $size; $self->server_reply($pc); } sub syntax_error { my ($self, $pc) = @_; my $cmd = $pc->{'lastcmd'}; $self->server_reply($pc, "syntax error in '$cmd'"); } sub turn_turtle { my ($self, $pc, $cmd, $newang, $arg0, $arg1) = @_; my $turtle = $pc->{'turtle'}; my $angle = $turtle->{'angle'}; # Allow a random angle of turn if (($newang || "") eq 'random') { defined($arg0) or return $self->syntax_error($pc); defined($arg1) or return $self->syntax_error($pc); $newang = $arg0 + rand($arg1 - $arg0); } # Make angles default to right angles defined($newang) or $newang = 90; # Assign the angle ($cmd
eq 'left') and $angle = $angle - $newang; ($cmd eq 'right') and $angle = $angle + $newang; ($cmd eq 'seth') and $angle = $newang; # Normalize the angle while ($angle < 0) { $angle += 360 } while ($angle > 360) { $angle -= 360 } $turtle->{'angle'} = $angle; $self->draw_turtle($pc, $turtle); $self->server_reply($pc); } sub move_turtle { my ($self, $pc, $cmd, $dist, $arg0, $arg1) = @_; my $wrap = $pc->{'wrap'} || 0; my $turtle = $pc->{'turtle'}; my $angle = $turtle->{'angle'}; # Allow a random distance if (($dist || "") eq 'random') { defined($arg0) or return $self->syntax_error($pc); defined($arg1) or return $self->syntax_error($pc); $dist = $arg0 + rand($arg1 - $arg0); } $dist ||= $turtle->{'dist'}; (0 == $dist) and $self->syntax_error($pc); $turtle->{'dist'} = $dist; ($cmd eq 'forward') and $angle = ($angle + 270) % 360; ($cmd eq 'backward') and $angle = ($angle + 90) % 360; my ($x0, $y0) = ($turtle->{'x'}, $turtle->{'y'}); my ($x1, $y1); if ($cmd eq 'setx' or $cmd eq 'sety' or $cmd eq 'setxy') { if ($cmd eq 'setxy') { defined($dist) or return $self->syntax_error($pc); defined($arg0) or return $self->syntax_error($pc); ($x1, $y1) = ($dist, $arg0); } else { defined($dist) or return $self->syntax_error($pc); ($x1, $y1) = ($x0, $y0); ($x1, $y1) = ($cmd eq 'setx')? 
($dist, $y0): ($x0, $dist); } } else { ($x1, $y1) = $self->calculate_endpoint($x0, $y0, $angle, $dist); } my @args = ($pc, $x0, $y0, $x1, $y1); return $self->move_turtle_reflect(@args) if (2 == $wrap); return $self->move_turtle_torus(@args) if (1 == $wrap); return $self->move_turtle_normal(@args); # Assume wrap == 0 } sub move_turtle_normal { my ($self, $pc, $x0, $y0, $x1, $y1) = @_; my $turtle = $pc->{'turtle'}; my $pen = $turtle->{'pen'}; my $size = $turtle->{'size'}; my $color = $turtle->{'color'}; $self->line($pen, $x0, $y0, $x1, $y1, $color, $size); $self->move($pc, $x1, $y1); return $self->server_reply($pc); } sub move_turtle_torus { my ($self, $pc, $x0, $y0, $x1, $y1) = @_; my $turtle = $pc->{'turtle'}; my $pen = $turtle->{'pen'}; my $size = $turtle->{'size'}; my $color = $turtle->{'color'}; # Calculate (dx, dy), which don't change for torus behavior my ($dx, $dy) = ($x1 - $x0, $y1 - $y0); while (!$self->contained($x1, $y1)) { my $height = $self->{'height'}; my $width = $self->{'width'}; if (abs($dx) < 0.0000001) { # Vertical line my $yb = ($y1 < $y0)? 0: $height; $self->line($pen, $x0, $y0, $x0, $yb, $color, $size); ($y0, $y1) = $yb? (0, $y1-$height): ($height, $y1+$height); $self->move($pc, $x0, $y0); } elsif (abs($dy) < 0.0000001) { # Horizontal line my $xb = ($x1 < $x0)? 0: $width; $self->line($pen, $x0, $y0, $xb, $y0, $color, $size); ($x0, $x1) = $xb? (0, $x1-$width): ($width, $x1+$width); $self->move($pc, $x0, $y0); } else { # Diagonal line my $m = $dy / $dx; my $b = $y1 - ($m * $x1); my $xb = ($y1 > $y0)? (($height - $b) / $m): (-$b / $m); my $yb = ($x1 > $x0)? (($m * $width) + $b): $b; my ($xn, $yn) = ($xb, $yb); my $crossx = ($xb > 0 and $xb < $width)? 1: 0; my $crossy = ($yb > 0 and $yb < $height)? 1: 0; if ($crossx and !$crossy) { # Line intercepts x-axis $yb = ($y1 > $y0)? $height: 0; $yn = $height - $yb; $y1 = ($y1 > $y0)? $y1 - $height: $y1 + $height; } elsif ($crossy and !$crossx) { # Line intercepts y-axis $xb = ($x1 > $x0)?
$width: 0; $xn = $width - $xb; $x1 = ($x1 > $x0)? $x1 - $width: $x1 + $width; } else { # Line intercepts both axes $xb = ($x1 > $x0)? $width: 0; $yb = ($y1 > $y0)? $height: 0; ($xn, $yn) = ($width - $xb, $height - $yb); $x1 = ($x1 > $x0)? $x1 - $width: $x1 + $width; $y1 = ($y1 > $y0)? $y1 - $height: $y1 + $height; } $self->line($pen, $x0, $y0, $xb, $yb, $color, $size); ($x0, $y0) = ($xn, $yn); $self->move($pc, $x0, $y0); } } # Back within canvas return $self->move_turtle_normal($pc, $x0, $y0, $x1, $y1); } sub move_turtle_reflect { my ($self, $pc, $x0, $y0, $x1, $y1) = @_; my $turtle = $pc->{'turtle'}; my $angle = $turtle->{'angle'}; my $pen = $turtle->{'pen'}; my $size = $turtle->{'size'}; my $color = $turtle->{'color'}; while (!$self->contained($x1, $y1)) { # Calculate (dx, dy), which change for reflection behavior my ($dx, $dy) = ($x1 - $x0, $y1 - $y0); my $height = $self->{'height'}; my $width = $self->{'width'}; if (abs($dx) < 0.0000001) { # Vertical line my $yb = ($y1 < $y0)? 0: $height; $self->line($pen, $x0, $y0, $x0, $yb, $color, $size); $y0 = $yb; $y1 = ($y1 < $y0)? (- $y1): (2 * $height) - $y1; $self->move($pc, $x0, $y0); $angle = $self->adjust_angle($pc, 180 - $angle); } elsif (abs($dy) < 0.0000001) { # Horizontal line my $xb = ($x1 < $x0)? 0: $width; $self->line($pen, $x0, $y0, $xb, $y0, $color, $size); $x0 = $xb; $x1 = ($x1 < $x0)? (- $x1): (2 * $width) - $x1; $self->move($pc, $x0, $y0); $angle = $self->adjust_angle($pc, 360 - $angle); } else { # Diagonal line my $m = $dy / $dx; my $b = $y1 - ($m * $x1); my $xb = ($y1 > $y0)? (($height - $b) / $m): (-$b / $m); my $yb = ($x1 > $x0)? (($m * $width) + $b): $b; my $crossx = ($xb > 0 and $xb < $width)? 1: 0; my $crossy = ($yb > 0 and $yb < $height)? 1: 0; if ($crossx and !$crossy) { # Line intercepts x-axis $yb = ($y1 > $y0)? $height: 0; $y1 = ($y1 > $y0)? (2 * $height - $y1): (- $y1); } elsif ($crossy and !$crossx) { # Line intercepts y-axis $xb = ($x1 > $x0)? $width: 0; $x1 = ($x1 > $x0)? 
(2 * $width - $x1): (- $x1); } else { # Line intercepts both axes $xb = ($x1 > $x0)? $width: 0; $yb = ($y1 > $y0)? $height: 0; $x1 = ($x1 > $x0)? (2 * $width - $x1): (- $x1); $y1 = ($y1 > $y0)? (2 * $height - $y1): (- $y1); } $self->line($pen, $x0, $y0, $xb, $yb, $color, $size); ($x0, $y0) = ($xb, $yb); $self->move($pc, $x0, $y0); $angle = $self->adjust_angle($pc, 180 - $angle); } } # Back within canvas return $self->move_turtle_normal($pc, $x0, $y0, $x1, $y1); } sub adjust_angle { my ($self, $pc, $newang) = @_; my $turtle = $pc->{'turtle'}; while ($newang >= 360) { $newang -= 360; } while ($newang < 0) { $newang += 360; } $turtle->{'angle'} = $newang; $self->draw_turtle($pc, $turtle); return $newang; } sub line { my ($self, $pen, $x0, $y0, $x1, $y1, $color, $size) = @_; # Pen is up; no need to draw return unless $pen; # Get canvas and draw line my $cv = $self->{'canvas'}; my @points = ($x0, $y0, $x1, $y1, -fill => $color, -width => $size); $cv->createLine(@points); } sub move { my ($self, $pc, $x, $y) = @_; # Set new turtle coordinates and redraw turtle my $turtle = $pc->{'turtle'}; $turtle->{'x'} = $x; $turtle->{'y'} = $y; $self->draw_turtle($pc, $turtle); } sub contained { my ($self, $x1, $y1) = @_; my $cv = $self->{'canvas'}; my $width = $cv->cget(-width); my $height = $cv->cget(-height); $self->{'width'} = $width; $self->{'height'} = $height; return ($x1 < 0 or $x1 > $width or $y1 < 0 or $y1 > $height)? 0: 1; } 1; __END__ =head1 NAME Language::Logo - An implementation of the Logo programming language =head1 SYNOPSIS use Language::Logo; my $lo = new Logo(update => 20); $lo->command("setxy 250 256"); $lo->command("color yellow"); $lo->command("pendown"); # Draw a circle for (my $i = 0; $i < 360; $i += 10) { $lo->command("forward 10; right 10"); } $lo->disconnect("Finished..."); =head1 DESCRIPTION This module provides an implementation of the Logo programming language, with all of the necessary drawing primitives in a Tk Canvas.
The Canvas object is also referred to as the "screen". The first construction of a Language::Logo object causes a server to be created in a separate process; this server then creates a Tk GUI with a Tk::Canvas for use by the client's "turtle", and responds to all requests from the client's commands. In this way, multiple clients may be constructed simultaneously -- each one with its own "turtle". In this first release, not all of the Logo language is implemented. Rather, the primary commands available are those which directly affect the turtle, and are related to drawing on the screen. The intent is to use Logo in conjunction with Perl as a sort of "hybrid" language; Perl is used as the higher-level language layer through which all loop constructs, conditionals, and data manipulation are done. This allows for a substantial level of programming power. =head2 Methods =over 4 =item I<PACKAGE>->new([I<param> => I<value>, [I<param> => I<value>, ...]]) Returns a newly created C<Language::Logo> object. No arguments are required, but the following are allowed (each of which must be accompanied by a value): =item name I<client name> the name of the current client. (The default is a uniquely generated name; this parameter is not currently used, but may be used in the future to force synchronization between clients in a multiple-client scenario). =item debug I<0 or 1> a zero value turns debugging off (the default); a nonzero value turns debugging on. =item title I<main window title> the title of the Tk window (the default is the name and current version number of the module). =item bg I<background color> the starting background color of the screen (the default is black). =item width I<screen width> the starting width of the screen (the default is 512 pixels). =item height I<screen height> the starting height of the screen (the default is 512 pixels).
=item update I<update interval> the starting update value for controlling the number of milliseconds to delay before reentering Tk's idle loop. The fastest is therefore a value of 1 (which updates up to 1000 times per second). =item host I<server address> the host computer where the server is running (the default is to use the server on the local machine). If the host is on a remote machine, it is assumed that the remote machine has already constructed at least one Language::Logo object which is currently running its own local server. =item port I<server port> the port at which to connect to the server (the default is port 8220). =back =item I<$OBJ>->disconnect([I<message>]) =over 4 Disconnects from the server. If a message is supplied, the user is prompted with the message, and the program waits until a newline is typed before disconnecting. This is especially useful if the client is the only one (or last one) connected; in which case the server will also exit upon disconnect. =item I<$OBJ>->interact() Enters interactive mode, whereby the user can issue Logo commands one-at-a-time. Queries may also be used to retrieve various information about the state of the current client's object. =item I<$OBJ>->query(I<parameter>) =item I<$OBJ>->ask(I<parameter>) Queries the object's state to get the current value for a given parameter. =item I<$OBJ>->command(I<command string>) =item I<$OBJ>->cmd(I<command string>) Sends a Logo command to the server. Multiple commands may be sent at the same time by inserting a semi-colon ';' between them. The following commands are available: =over 4 =item "background" or "bg" (1 argument) Sets the background color of the screen. Colors must be valid Tk colors, specified either by name ("blue") or hex triplet ("#0000ff"). For example, "background orange". =item "backward" or "bk" (1 argument) Moves the turtle backwards the specified number of pixels. If the pen is down, a line is drawn with the current color and pensize.
For example, "backward 100". [Contrast "forward"] =item "clear" or "cs" (no arguments) Clears the screen entirely. =item "color" or "co" (1 argument) Changes the current turtle color to the specified color. Both the turtle and any items drawn by the turtle (when the pen is down) will appear in this color. For example, "color white". =item "forward" or "fd" (1 argument) Moves the turtle forwards the specified number of pixels. If the pen is down, a line is drawn with the current color and pensize. For example, "forward 100". [Contrast "backward"] =item "height" or "h " (1 argument) Changes the current screen height to the specified number of pixels. Note that, as this change applies to the Tk Canvas, it affects all clients which are connected to the server. For example, "height 768". [Contrast "width"] =item "hideturtle" or "ht" (no arguments) Makes the turtle invisible. Note that this is unrelated to the current state of the pen; lines will still be drawn or not, depending on whether the pen is up or down. [Contrast "showturtle"] =item "home" or "hm" (no arguments) Puts the turtle in its original location, at the center of the screen, with a heading of due North (0 degrees). =item "left" or "lt" (1 argument) Rotates the turtle to the left by the specified angle, given in degrees. Thus, an angle of 90 degrees will make an exact left turn; an angle of 180 degrees will make the turtle face the opposite direction. For example, "left 45". [Contrast "right"] =item "penup" or "pu" (no arguments) Changes the state of the turtle's "pen" so that subsequent movements of the turtle will no longer result in lines being drawn on the screen. As a visual cue, the turtle will appear -without- the circle drawn around the current point. [Contrast "pendown"] =item "right" or "rt" (1 argument) Rotates the turtle to the right by the specified angle, given in degrees.
Thus, an angle of 90 degrees will make an exact right turn; an angle of 180 degrees will make the turtle face the opposite direction. For example, "right 135". [Contrast "left"] =item "seth" or "sh" (1 argument) Changes the turtle's heading to the specified angle. The angle given is an absolute angle, in degrees, representing the clockwise spin relative to due North. Thus, a value of 0 is due North, 90 is due East, 180 is due South, and 270 is due West. For example, "seth 225". =item "setx" or "sx" (1 argument) Changes the turtle's x-coordinate to the specified pixel location on the screen, without changing the value of the current y-coordinate. The value given is an absolute location, not one related to the previous position. If the pen is down, a line will be drawn from the old location to the new one. For example, "setx 128". [Contrast "sety", "setxy" ] =item "setxy" or "xy" (2 arguments) Changes the turtle's x and y coordinates to the specified pixel locations on the screen. The first argument is the new x-coordinate, the second the new y-coordinate. The position of the new point represents an absolute location, not one related to the previous position. If the pen is down, a line will be drawn from the old location to the new one. For example, "setxy 10 40". [Contrast "setx", "sety" ] =item "sety" or "sy" (1 argument) Changes the turtle's y-coordinate to the specified pixel location on the screen, without changing the value of the current x-coordinate. The value given is an absolute location, not one related to the previous position. If the pen is down, a line will be drawn from the old location to the new one. For example, "sety 256". [Contrast "setx", "setxy" ] =item "showturtle" or "st" (no arguments) Makes the turtle visible. Note that this is unrelated to the current state of the pen; lines will still be drawn or not, depending on whether the pen is up or down.
[Contrast "hideturtle"] =item "pendown" or "pd" (no arguments) Changes the state of the turtle's "pen" so that subsequent movements of the turtle will draw the corresponding lines on the screen. As a visual cue, the turtle will appear with a circle drawn around the current point. [Contrast "penup"] =item "pensize" or "ps" (1 argument) Changes the width of the turtle's "pen" to the given number of pixels, so that subsequent drawing will be done with the new line width. =item "width" or "w " (1 argument) Changes the current screen width to the specified number of pixels. Note that, as this change applies to the Tk Canvas, it affects all clients which are connected to the server. For example, "width 1024". [Contrast "height"] =item "wrap" (1 argument) Changes the screen "wrap" type on a per-client basis, to the specified argument, which must be a value of 0, 1 or 2. See L<WRAP TYPES> below for more detailed information. =item "update" or "ud" (1 argument) Changes the current update value which controls the number of milliseconds to delay before reentering Tk's idle loop. A value of 1000 is the slowest; it will cause a delay of 1 second between updates. A value of 1 is the fastest; it will make the Tk window update up to 1000 times each second. =back =back =head1 RANDOM VALUES Some of the commands can take as an argument the word "random", possibly followed by more arguments which modify the random behavior. For example, the command "color random" chooses a new random color for the pen, whereas "seth random 80 100" sets the turtle heading to a random angle between 80 and 100 degrees. The number of arguments following "random" depends on the context: angles ........ 2 arguments (minimum angle, maximum angle) distances ..... 2 arguments (minimum distance, maximum distance) other ......... no arguments =head1 WRAP TYPES The parameter 'wrap' defines the behavior that occurs when the turtle's destination point is outside of the display window.
The allowable values for wrap are: 0: Normal "no-wrap" behavior 1: Toroidal "round-world" wrap 2: Reflective wrap Consider the following diagram: +---------------o---------------+ | /| (xb0,yb0) | | <C> / | | / | | | (x2,y2) o | | [wrap = 1] | | | | | [wrap = 2] | | | (x3,y3) o @ (x0,y0) | | \ | / | | <D> \ / <A> | | \|/ | +---------------o---------------+ / (xb1,yb1) <B> / / (x1,y1) @ [wrap = 0] Point (x0,y0) represents the current location of the turtle, and (x1,y1) the destination point. Since the destination is outside of the display window, the behavior of both the turtle and the drawn line will be governed by the value of the 'wrap' parameter. Since line segment <A> is in the visible window, it will be drawn in all cases. Since line segment <B> is outside of the visible window, it will not be visible in all cases. When (wrap == 0), only line segment <A> will be visible, and the turtle, which ends up at point (x1,y1), will NOT. When (wrap == 1), the window behaves like a torus, so that line segment <B> "wraps" back into the window at the point (xb0,yb0). Thus line segments <A> and <C> are visible, and the turtle ends up at point (x2,y2). When (wrap == 2), the borders of the window are reflective, and line segment <B> "reflects" at the point (xb1,yb1). Thus line segments <A> and <D> are visible, and the turtle ends up at the point (x3,y3). =head1 EXAMPLES The following programs show some of the various ways to use the Language::Logo object. 
################################# ### Randomly-colored designs ### ################################# use Language::Logo; my $lo = new Logo(title => "Logo Demonstration"); $lo->command("update 2; color random; pendown; hideturtle"); for (my $i = 1; $i < 999; $i++) { my $distance = $i / 4; $lo->command("forward $distance; right 36.5"); $lo->command("color random") if not ($i % 50); } $lo->disconnect("Type [RETURN] to finish..."); ################################################ ### Randomly placed "rings" of random widths ### ################################################ use Language::Logo; my $lo = new Logo(title => "Random rings", update => 5); $lo->cmd("wrap 1"); # Toroidal wrap while (1) { $lo->cmd("pu; sx random 1 512; sy random 1 512; pd; co random"); $lo->cmd("pensize random 1 32"); my $dist = 5 + (rand(50)); for (my $i = 0; $i <= 360; $i += $dist) { $lo->cmd("fd $dist; rt $dist"); } } ################################### ### Fullscreen "Frenetic Lines" ### ################################### use Language::Logo; my $lo = new Logo(width => 1024, height => 768, update => 3); # Change "1" to "2" for reflection instead of torus $lo->cmd("wrap 1"); $lo->cmd("setx random 0 800"); # Choose random x-coordinate $lo->cmd("sety random 0 800"); # Choose random y-coordinate $lo->cmd("rt random 0 360"); # Make a random turn $lo->cmd("pd"); # Pen down $lo->cmd("ht"); # Hide turtle my $size = 1; # Starting pen size while (1) { if (++$size > 48) { $size = 1; # Reset the size $lo->cmd("cs"); # Clear the screen } $lo->cmd("ps $size"); # Set the pensize $lo->cmd("color random"); # Random color $lo->cmd("fd 9999"); # Move the turtle $lo->cmd("rt random 29 31"); # Turn a random angle } =head1 BUGS The following items are not yet implemented, but are intended to be addressed in a future version of the library: =over 4 =item * There is no provision for compiling "pure" Logo language code.
Such capabilities as loops, conditionals, and subroutines must be handled in the calling Perl program. =item * There is no way to change the processing speed on a per-client basis. =item * There is no way to synchronize multiple clients to wait for one another. =item * There are still some commands which do not support a "random" parameter ("setxy" and "background", for example). =item * There is currently no analog command to "setxy" ("offxy" ?) which changes the position of the turtle's location I<relative> to the current point, as opposed to setting it absolutely. =item * It would be nice to be able to draw things other than lines; for example ovals, polygons, etc. It would also be nice to be able to fill these with a given "fill" color. =item * There needs to be a way to save the current screen to a file (eg. as PostScript). =back =head1 AUTHOR John C. Norton jcnorton@charter.net This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. =head1 VERSION Version 1.000 (January 2007) =head1 REQUIREMENTS The Tk module is required. =head1 SEE ALSO perl(1) =cut [download] Its documentation is in POD format; all of the basic commands are there, as well as some examples of usage.
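The torus and reflection behaviors described in the WRAP TYPES section of the POD boil down to simple coordinate arithmetic. The sketch below is not the module's actual drawing code (which also has to split the drawn line at each border crossing); the sub names `wrap_torus` and `reflect_once` are made up for illustration, assuming a 512x512 canvas and integer pixel coordinates:

```perl
#!/usr/bin/perl
# Sketch only: the endpoint arithmetic behind wrap types 1 and 2.
# Neither sub exists in Language::Logo; the names are hypothetical.
use strict;
use warnings;

# wrap == 1 (torus): an out-of-bounds coordinate re-enters from the
# opposite edge.  Perl's % takes the sign of the right operand, so
# negative coordinates wrap correctly as well.
sub wrap_torus {
    my ($x, $y, $w, $h) = @_;
    return ($x % $w, $y % $h);
}

# wrap == 2 (reflect): a coordinate past a border bounces back,
# mirroring the module's "2 * $width - $x1" / "- $x1" adjustments
# (a single bounce; the module loops until the point is contained).
sub reflect_once {
    my ($c, $limit) = @_;
    return -$c             if $c < 0;
    return 2 * $limit - $c if $c > $limit;
    return $c;
}

my ($x, $y) = wrap_torus(600, -40, 512, 512);
print "torus:   ($x, $y)\n";    # (88, 472)
printf "reflect: (%d, %d)\n",
    reflect_once(600, 512), reflect_once(-40, 512);    # (424, 40)
```

A destination of (600, -40) thus ends up at (88, 472) on the torus and at (424, 40) with reflective borders, matching the diagram above.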
And here are some scripts I've written to demonstrate the module: #!/usr/bin/perl -w # # "logo.pl" -- a simple Logo interpreter use strict; use warnings; use Language::Logo; my $lo = new Logo()->interact(); [download] #!/usr/bin/perl -w # # "spiral.pl" (a number of other programs call this) # # Draws one or more spirals using the Logo library # # 070108 by liverpole # # Strict use strict; use warnings; # Libraries use lib "."; use Language::Logo; use Data::Dumper; use File::Basename; use Getopt::Long; # Globals my $iam = basename $0; my $b_toroid = 0; my $b_reflect = 0; my $b_debug = 0; my $bg = ""; my $host = ""; my $b_hide = 0; my $color = ""; my $update = ""; my $width = ""; my $maxsize = 512; my $minsize = 8; my $minang = -1; my $incr = 2; my $startsize = 0; my $nticks = 0; my $tickcnt = 0; my $pos = 0; my $syntax = " syntax: $iam [switches] <angle> Uses Logo to create a spiral with the specified update and angle. The update specifies the gui's update delay in milliseconds, between 1 (very fast) and 1000 (very slow). Switches: -d ............ starts the Logo server in debug mode -t ............ 'toroidal' -- sets wrap to 1 -r ............ 'reflect' -- sets wrap to 2 -h <host> ..... connects to server on given host (':' = localhost) -x ............ hides the turtle -a <min> ...... use a random angle from <min> to the given <angle> -b <color> .... specifies the Canvas background color -c <color> .... specifies the pen color -u <update> ... specifies the update speed -w <width> .... specifies the pen width -s <size> ..... specify the starting size -i <incr> ..... increment the size each time by <incr> -n <nticks> ... change the color after <nticks> 'ticks' -p <x,y> ......
make the starting position (X, Y) "; # Command-line my $result = GetOptions( "d" => \$b_debug, "x" => \$b_hide, "t" => \$b_toroid, "r" => \$b_reflect, "b=s" => \$bg, "c=s" => \$color, "h=s" => \$host, "u=s" => \$update, "w=s" => \$width, "a=s" => \$minang, "s=s" => \$startsize, "i=s" => \$incr, "n=s" => \$nticks, "p=s" => \$pos, ); $result or die $syntax; (my $angle = shift) or die $syntax; # Main program # Create Logo object my @opts = (debug => $b_debug); $bg and push @opts, bg => $bg; # Initial canvas color $host and push @opts, host => $host; # Connect to an existing server my $lo = new Logo(@opts); # Issue logo commands $update and $lo->cmd("update $update"); if ($startsize) { my $half = $startsize / 2; my $quarter = $half / 2; $lo->cmd("rt 90; bk $quarter; lt 90; bk $half"); } $width and $lo->cmd("pensize $width"); $color and $lo->cmd("color $color"); $b_hide and $lo->cmd("ht"); $b_toroid and $lo->cmd("wrap 1"); $b_reflect and $lo->cmd("wrap 2"); if ($pos) { ($pos =~ /(\d+),(\d+)/) or die "$iam: invalid position '$pos'\n"; my ($x, $y) = ($1, $2); $lo->cmd("xy $x $y"); } $lo->cmd("pendown"); # Create spiral spiral($startsize, $angle); # Disconnect from main client $host or $lo->disconnect("Type [RETURN] to disconnect ... 
"); # Subroutines sub spiral { my ($size, $angle) = @_; while (($incr <= 0 and $size >= $minsize) || ($incr >= 0 and $size <= $maxsize)) { $lo->cmd("forward $size"); # Move & draw turtle if ($minang >= 0) { $lo->cmd("rt random $minang $angle"); # Make a random turn } else { $lo->cmd("rt $angle"); # Turn <angle> degrees } # Change pen color randomly if ($nticks and ++$tickcnt >= $nticks) { $lo->cmd("color random"); $tickcnt = 0; } $size += $incr; } } [download] #!/usr/bin/perl -w # # "boxes.pl" -- randomly colored boxes use strict; use warnings; system("spiral.pl -t -x -u 1 -n 10 -s 100 -w 8 -p 200,300 -i -.01 -a 85 95") [download] #!/usr/bin/perl -w # # "double.pl" -- an example of 2 simultaneous clients use strict; use warnings; # Main program if (fork) { system("flowers.pl 35.6"); } else { system("flowers.pl -h : 324.4"); } [download] #!/usr/bin/perl -w # # "flowers.pl" -- uses the "spiral.pl" program to # create colorful growing patterns. # # Uses Language::Logo (in conjunction with the spiral program) to create # different-colored growing 'flowers'. # # 070113 by liverpole # # Strict use strict; use warnings; # Libraries use File::Basename; use Getopt::Long; # Globals my $iam = basename $0; my $host = ""; my $b_toroid = 0; my $b_reflect = 0; my $syntax = " syntax: $iam [switches] <angle> Uses Logo (in conjunction with the spiral program) to create different-colored growing 'flowers'. Try, for example, one of: $iam 33 $iam 35.9 $iam 36 $iam 38 $iam 90.1 $iam 91 $iam 100 $iam 134 $iam 300 Switches: -t ............ 'toroidal' -- sets wrap to 1 -r ............ 'reflection' -- sets wrap to 2 -h <host> .....
connects to server on given host "; # Command-line my $result = GetOptions( "h=s" => \$host, "t" => \$b_toroid, "r" => \$b_reflect, ); $result or die $syntax; (my $angle = shift) or die $syntax; my $args = "-x -c gold -u 1 -n 90 -s 50 -i .25"; $b_toroid and $args .= " -t"; $b_reflect and $args .= " -r"; $host and $args .= " -h $host"; # Main program system("spiral.pl $args $angle"); [download] #!/usr/bin/perl -w # # "lines.pl" -- draws random colored lines using Logo, either with # a wrap of 1 (torus) or a wrap of 2 (reflection). # # 070116 by liverpole # # Strict use strict; use warnings; # Libraries use lib "."; use Language::Logo; use File::Basename; use Data::Dumper; use Getopt::Long; # User-defined my $nticks = 100; # Change color after nticks my $tickcnt = 0; # Number of ticks so far my $color = 'yellow'; # Starting pen color my $size = 2; # Pen size my $maxsize = 16; # Maximum pen size my $mindis = 5; # Minimum distance to advance my $maxdis = 50; # Maximum distance to advance my $width = 500; # Screen width my $height = 500; # Screen height my $dist = 9999; # Distance to move forward # Globals my $iam = basename $0; my $b_reflect = 0; my $syntax = " syntax: $iam [switches] <min angle> <max angle> Tests round-world scenarios with lines which continuously change angles. Each time the color changes, a new random angle between <min angle> and <max angle> is chosen, and the line is rotated by that amount. Switches: -r ............ reflect mode (wrap 2) instead of torus mode (wrap 1) Try, for example, one of: $iam 29 31 $iam 85 95 $iam 170 190 "; # Command-line my $result = GetOptions( "r" => \$b_reflect, ); $result or die $syntax; (my $minang = shift) or die $syntax; (my $maxang = shift) or die $syntax; # Main program # Create Logo object my $lo = new Logo(width => $width, height => $height); my $wrap = $b_reflect?
2: 1; $lo->command("wrap $wrap"); $lo->command("update 5"); $lo->command("pu"); # Pen-up $lo->command("setx random 0 800"); # Choose random x-coordinate $lo->command("sety random 0 800"); # Choose random y-coordinate $lo->command("rt random 0 360"); # Make a random turn $lo->command("pendown; color $color"); # Start drawing $lo->command("ht"); # Hide turtle while (1) { if (++$size > $maxsize) { $size = 1; $lo->command("cs"); } $lo->command("ps $size"); $lo->command("color random"); $lo->command("fd $dist"); $lo->command("rt random $minang $maxang"); } [download] I guess a good challenge to someone would be to write a script to completely search an area, in the least time, with the least movement. I just saw a news report where UPS drivers are now being given delivery routes where they only make right turns, since left turns usually mean a time-consuming wait at a light or intersection. Maybe you could use this module to map out those sorts of problems, given a fixed set of coordinates to visit, a set of paths (streets), and legal street directions? "One of the biggest early hurdles was getting past the well-known fact that you can't do Tk within a child thread" I think the Tk canvas is an excellent widget for this, but Gnome2::Canvas does allow you to access Gtk widgets from within child threads, with its "thread-safety" mechanism. I would suggest that, but Gnome2::Canvas development has been frozen, and has an iffy future. There is a promising replacement though, called Papyrus, which you may want to look at. It doesn't have a Perl port yet, but it can't be far off. All in all though, the Tk canvas is still the best thing going out there for simple drawing. "One of my first thoughts though, is to make a feedback channel" That's an excellent idea. Actually the module already does have the server respond to the client after each command.
It did occur to me previously that it might be useful to send back a better response than just "" (blank) for success, but I never implemented it. Let me try modifying the code to send back the client's (x, y) position (and possibly one or two other parameters); the client side can then parse the values from the response, and make them available to the calling program. Update: I've made some fairly large changes to the module, such that the query command (and the alias ask) are no longer necessary. Instead, every time command (or cmd) is called, the user is passed back a hash containing all "interesting" parameters and their values. I've got to first make sure that my tests still run, but after that I'll be posting the module to CPAN soon (hopefully this evening). Update 2: The code is now available at CPAN. Here's a more usable interact using Term::ReadLine. OFC, I named it interact2 just to avoid overriding your code ;-) sub interact2 { my $self = shift; eval { require Term::ReadLine }; die "This method requires Term::ReadLine.\nError was: $@\n" if $@; # used for completion my @commands = map { $_->[0] } values %{$palias}; my $term = Term::ReadLine->new('Logo'); $term->Attribs->{attempted_completion_function} = sub { my $word = shift; $term->Attribs->{completion_word} = \@commands; return $term->completion_matches( $word, $term->Attribs->{list_completion_function} ); }; $term->Attribs->{attempted_completion_over} = sub {}; while () { my $cmd = $term->readline( 'logo> ' ); last if !defined $cmd; for ($cmd) { s/^\s+//; s/\s+$//; } next if $cmd !~ /\S/; if ( $cmd eq '?'
|| $cmd eq 'help' ) { $self->interactive_help(); } elsif ( $cmd =~ /^(?i:q(?:uit)?|bye|exit)$/ ) { last; } else { $self->interactive_command($cmd); # adding only valid commands would be nice # - possible if "interactive_command" returns some status $term->addhistory($cmd); } } return 0; # Exit interactive mode } [download] Obviously, it requires Term::ReadLine to basically work, and Term::ReadLine::Gnu to be able to use the custom completion method. # Connect to the server $self->connect_to_server(); # Return the object return $self; [download] Hey, I love LOGO! Really, who doesn't? Many of us started programming in Logo, so it will always hold a special place for us. However, I don't think this module does enough to be that Logo we miss. I mean, you need to supply a full set of programming options so that kids can use it and learn. Learning that I could use repeat to make a circle beat doing it by hand, and then later learning to use 'to circle :radius' -- things just got better and better. I think you've made an awesome start, and the client/server idea is genius, but I think before you go too far with it you should consider what changes to the parser need to be made in order to accomplish a fuller implementation of LOGO. /me can remember himself with a notebook full of procedures/functions to try next time he got access to the logo computer! ;) I was trying to add repeat when I noticed you were splitting up the commands before sending them to the server; I was wondering if it wouldn't make more sense to have all commands just sent to the server raw and have the client be just a very thin client. Off I go to see how to split on ; only outside of ....hmmm Don't worry -- I know it needs to implement more programming options. But I wanted to create something workable first, get feedback on that, and then go from there. There's a lot of fun you can have driving it around the way it is, even if you have to let Perl do the steering.
And you can even write code that looks a lot like Logo. For instance, using the Spiral example at Wikipedia as a starting point, you could implement a reasonably similar Perl subroutine that provides the same drawn output: use strict; use warnings; use Language::Logo; my $l = new Logo(bg => 'white'); $l->cmd("co black; pd"); spiral(10); # # From # (Example_8) "A spiral drawn using recursion" # # to spiral :size # if :size > 30 [stop] ; a condition stop # fd :size rt 15 ; many lines of action # spiral :size *1.02 ; the tailend recursive call # end sub spiral { my $size = shift; if ($size > 30) { return } # a condition stop $l->cmd("fd $size; rt 15"); # many lines of action spiral($size * 1.02); # the tailend recursive call } [download] So now that I have the basics working, I will concentrate on what needs to be done to provide more of the Logo language. Any constructive feedback is, of course, very welcome! I had an itch and had to scratch it so I implemented brackets, a repeat command, and a rudimentary ability to define user functions. In order to do most of this I moved the command parsing code to the server itself which means user defined functions are available to any connected clients! ;) Drawing a circle now: pendown; repeat 35 [left 10; forward 10;]; [download] Making a circle function: to circle [ repeat 35 [ left 10; forward 10;]]; circle; [download] I'm not sure what the best way is to get you the changes so I've attached the entire modified version.
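The "split on ; only outside of brackets" problem mentioned in the replies can be solved with a small depth-counting scanner. This sketch is not taken from the attached patch; `split_commands` is a hypothetical name, shown only to illustrate the idea:

```perl
#!/usr/bin/perl
# Sketch only: split a command string on ';' at bracket depth zero,
# so that "repeat 35 [left 10; forward 10;]" survives as one command.
# Not the attached patch's actual code; the sub name is made up.
use strict;
use warnings;

sub split_commands {
    my ($line) = @_;
    my @cmds;
    my ($buf, $depth) = ('', 0);
    for my $ch (split //, $line) {
        if    ($ch eq '[') { $depth++; $buf .= $ch }
        elsif ($ch eq ']') { $depth--; $buf .= $ch }
        elsif ($ch eq ';' and $depth == 0) {
            # Top-level separator: finish the current command
            push @cmds, $buf;
            $buf = '';
        }
        else { $buf .= $ch }
    }
    push @cmds, $buf;
    # Trim whitespace and drop empty pieces
    @cmds = grep { /\S/ } map { s/^\s+|\s+$//gr } @cmds;
    return @cmds;
}

print join("|", split_commands(
    'pendown; repeat 35 [left 10; forward 10;]; hideturtle'
)), "\n";   # pendown|repeat 35 [left 10; forward 10;]|hideturtle
```

The semicolons inside the repeat block are preserved, so the whole bracketed body can be handed to the server (or to a repeat handler) intact.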