Okay, I'm using this code to turn an LED on and off, using values provided by a photocell and a 1 µF capacitor.
Code:

import RPi.GPIO as GPIO
import time
import os

GPIO.setmode(GPIO.BCM)  # pin-numbering mode must be set before setup; BCM assumed
GPIO.setup(22, GPIO.OUT)

def RCtime(RCpin):
    reading = 0
    GPIO.setup(RCpin, GPIO.OUT)
    GPIO.output(RCpin, GPIO.LOW)
    time.sleep(.5)
    GPIO.setup(RCpin, GPIO.IN)
    while (GPIO.input(RCpin) == GPIO.LOW):
        reading += 1
    return reading

while True:
    if RCtime(18) > 500:
        print (RCtime(18))
        GPIO.output(22, True)
    if RCtime(18) < 500:
        print ('less than')
        GPIO.output(22, False)

GPIO.cleanup()
It works great as is, but I have two problems.
1.) I need a way to stop it on command. I want it to read values until I tell it to stop; for now it just reads values until I close the terminal.
2.) Right now it makes an instantaneous decision on whether or not to take action on the LED. I would like it to run for an indefinite amount of time and have it record a value every n seconds; if the values over a period of time n are greater than a set value, it needs to turn on the LED, and if they are lower, I need it to turn it off.
I have basic knowledge of other languages, but I have no experience with using time as a variable, and I'm new to Python as a whole. I'm not asking you to write it for me; I just need help in the right direction. Mainly the syntax is what I'm weakest at.
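A sketch of the two missing pieces: catching Ctrl-C to stop on command, and averaging readings over a time window before switching the LED. None of this is the exact code above; the function names, window, and threshold are placeholders. On the Pi, RCtime would be passed in as read_fn and the GPIO call wrapped as set_led:

```python
import time

def average_over_window(read_fn, seconds, interval=0.5):
    # Sample read_fn() every `interval` seconds for `seconds` seconds
    # and return the mean of the readings collected.
    readings = []
    end = time.time() + seconds
    while time.time() < end:
        readings.append(read_fn())
        time.sleep(interval)
    return sum(readings) / len(readings)

def run(read_fn, set_led, window=5, threshold=500):
    # Keep deciding on the LED from a windowed average until Ctrl-C,
    # which raises KeyboardInterrupt and drops us out of the loop.
    try:
        while True:
            avg = average_over_window(read_fn, window)
            set_led(avg > threshold)   # True -> LED on, False -> LED off
    except KeyboardInterrupt:
        pass  # cleanup, e.g. GPIO.cleanup(), would go here
```

On the Pi this might be called as run(lambda: RCtime(18), lambda on: GPIO.output(22, on)).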
On Fri, 2002-02-15 at 08:42, Othmar Pasteka wrote:
> you have a small typo in the arm specific section of rules.patch.
>
> you wrote
>
> ifeq ($(DEB_HOST_ARCH),arm)
> debian_patches += arm-const-double arm-tune arm-gnus-source
> endif
>
> should be more like
>
> arm-gnu-source

because the patch is named arm-gnu-source.dpatch. Can you make a binary-only upload with that change? I don't think it's worth doing a new source version and forcing all other arches to rebuild for this, considering that we will probably want to do a new upload anyway when GCC 3.0.4 is released in a few days' time. The arm-gnu(s)-source patch will be obsoleted by the 3.0.4 .orig.tar.gz anyway.

p.
SET_MEMPOLICY(2) Linux Programmer's Manual SET_MEMPOLICY(2)
set_mempolicy - set default NUMA memory policy for a process and its children
#include <numaif.h>

int set_mempolicy(int mode, unsigned long *nodemask,
                  unsigned long maxnode);

Link with -lnuma.

MPOL_F_STATIC_NODES (since Linux 2.6.26)
    A non-empty nodemask specifies physical node IDs. Linux will not remap the nodemask when the process moves to a different cpuset context, nor when the set of nodes allowed by the process's current cpuset context changes.

MPOL_F_RELATIVE_NODES (since Linux 2.6.26)
    A non-empty nodemask specifies node IDs that are relative to the set of node IDs allowed by the process's current cpuset context.

MPOL_DEFAULT
    This mode requests that any non-default process memory policy be removed, so that the memory policy "falls back" to the system default policy. The system default policy is "local allocation".

EFAULT
    Part or all of the memory range specified by nodemask and maxnode points outside your accessible address space.

EINVAL
    mode is invalid. Or, mode is MPOL_DEFAULT and nodemask is non-empty.
The set_mempolicy() system call was added to the Linux kernel in version 2.6.7.
This system call is Linux-specific.
get_mempolicy(2), getcpu(2), mbind(2), mmap(2), numa(3), cpuset(7), numa(7), numactl(8)
This page is part of release 3.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
GUI Builder Panel
Pyjamas Equivalent of Glade goes recursive…
well, i decided that just having a dynamic GUI-instantiator based on a text file wasn't enough, but that what was needed was both a widget (single dynamic instance instantiator) and a panel (subclass of that but then with an "add" method that can take the name of the widget-group to be created).
so, the UI text file contains a widget-group with a panel type in it (e.g. a Grid) called "builderpanel", and it also contains a widget-group with a… something - a row, for example, named "builderrow" in the UI text file. also in the widget-group with the panel is a button, named "add".
then, the button has the name of a callback in the UI text file, called "onAddClicked", and this function has to occur in the app, and is linked to the button. surprise-surprise, the action associated with this button is to create an instance of builderrow and to add it to the panel:
def onAddClicked(self, sender):
    grid = self.app.bp.getPanel()
    row = grid.getRowCount() + 1
    grid.resize(row, 1)
    self.app.bp.add("builderrow", row, 0)
the weird thing is that at no time, in any code written, do you actually get to see what "the panel" is, because its layout is specified in the UI text file. also, nor do you see, in "python code", any of the widgets that are added as a row: there could be 50 widgets in that "row" being added, but you don't manually create them.
i think that's just wicked, and although i was originally deeply unimpressed with the idea of writing widget layouts using an application editor, i'm now completely converted to the dark side and will pretty much be doing nothing _but_ writing pyjamas applications with this method from now on.
Syndicated 2010-08-23 20:57:46 from blog/lkcl
Foreword
Docker's introduction of the standardized image format has fueled an explosion of interest in the use of containers in the enterprise. Containers simplify the distribution of software and allow greater sharing of resources on a computer system. But as you pack more applications onto a system, the risk of an individual application having a vulnerability leading to a breakout increases.
Containers, as opposed to virtual machines, currently share the same host kernel. This kernel is a single point of failure. A flaw in the host kernel could allow a process within a container to break out and take over the system. Docker security is about limiting and controlling the attack surface on the kernel. Docker security takes advantage of security measures provided by the host operating system. It relies on Defense in Depth, using multiple security measures to control what the processes within the container are able to do. As Docker/containers evolve, security measures will continue to be added.
Administrators of container systems have a lot of responsibility to continue to use the common sense security measures that they have learned on Linux and UNIX systems over the years. They should not just rely on whether the "containers actually contain."
- Only run container images from trusted parties.
- Container applications should drop privileges or run without privileges whenever possible.
- Make sure the kernel is always updated with the latest security fixes; the security kernel is critical.
- Make sure you have support teams watching for security flaws in the kernel.
- Use a good quality supported host system for running the containers, with regular security updates.
- Do not disable security features of the host operating system.
- Examine your container images for security flaws and make sure the provider fixes them in a timely manner.
Security and Limiting Containers
To use Docker safely, you need to be aware of the potential security issues and the major tools and techniques for securing container-based systems. This report considers security mainly.
This report begins by exploring some of the issues surrounding the security of container-based systems that you should be thinking about when using containers.
Warning
Disclaimer!
The guidance and advice in this report is based on my opinion. I am not a security researcher, nor am I responsible for any major public-facing system. That being said, I am confident that any system that follows the guidance in this report will be in a better security situation than the majority of systems out there. The advice in this report does not form a complete solution and should be used only to inform the development of your own security procedures and policy.
Things to Worry About
So what sorts of security issues should you be thinking about in a container-based environment? The following list is not comprehensive, but should give you food for thought:
- Kernel exploits
Unlike in a VM, the kernel is shared among all containers and the host, magnifying the importance of any vulnerabilities present in the kernel. Should a container cause a kernel panic, it will take down the whole host. In VMs, the situation is much better: an attacker would have to route an attack through both the VM kernel and the hypervisor before being able to touch the host kernel.
- Denial-of-service attacks
All containers share kernel resources. If one container can monopolize access to certain resources—including memory and more esoteric resources such as user IDs (UIDs)—it can starve out other containers on the host, resulting in a denial-of-service (DoS), whereby legitimate users are unable to access part or all of the system.
- Container breakouts
An attacker who gains access to a container should not be able to gain access to other containers or the host. By default, users are not namespaced, so any process that breaks out of the container will have the same privileges on the host as it did in the container; if you were root in the container, you will be root on the host. This also means that you need to worry about potential privilege escalation attacks—whereby a user gains elevated privileges such as those of the root user, often through a bug in application code that needs to run with extra privileges. Given that container technology is still in its infancy, you should organize your security around the assumption that container breakouts are unlikely, but possible.
- Poisoned images
How do you know that the images you are using are safe, haven’t been tampered with, and come from where they claim to come from? If an attacker can trick you into running his image, both the host and your data are at risk. Similarly, you want to be sure that the images you are running are up-to-date and do not contain versions of software with known vulnerabilities.
- Compromising secrets
When a container accesses a database or service, it will likely require a secret, such as an API key or username and password. An attacker who can get access to this secret will also have access to the service. This problem becomes more acute in a microservice architecture in which containers are constantly stopping and starting, as compared to an architecture with small numbers of long-lived VMs. This report doesn’t cover how to address this, but see the Deployment chapter of Using Docker (O’Reilly, 2015) for how to handle secrets in Docker.
The simple fact is that both Docker and the underlying Linux kernel features it relies on are still young and nowhere near as battle-hardened as the equivalent VM technology. For the time being at least, do not consider containers to offer the same level of security guarantees as VMs.
Defense in Depth
So what can you do? Assume vulnerability and build defense in depth. Consider the analogy of a castle, which has multiple layers of defense, often tailored to thwart various kinds of attacks. Typically, a castle has a moat, or exploits local geography, to control access routes to the castle. The walls are thick stone, designed to repel fire and cannon blasts. There are battlements for defenders and multiple levels of keeps inside the castle walls. Should an attacker get past one set of defenses, there will be another to face.
The defenses for your system should also consist of multiple layers. For example, your containers will most likely run in VMs so that if a container breakout occurs, another level of defense can prevent the attacker from getting to the host or other containers. Monitoring systems should be in place to alert admins in the case of unusual behavior. Firewalls should restrict network access to containers, limiting the external attack surface.
Least Privilege
Another important principle to adhere to is least privilege: each process and container should run with the minimum set of access rights and resources it needs to perform its function. The main benefit of this approach is that if one container is compromised, the attacker should still be severely limited in being able to perform actions that provide access to or exploit further data or resources.
In regards to least privilege, you can take many steps to reduce the capabilities of containers:
Ensure that processes in containers do not run as root, so that exploiting a vulnerability present in a process does not give the attacker root access.
Run filesystems as read-only so that attackers cannot overwrite data or save malicious scripts to file.
Cut down on the kernel calls that a container can make to reduce the potential attack surface.
Limit the resources that a container can use to avoid DoS attacks whereby a compromised container or application consumes enough resources (such as memory or CPU) to bring the host to a halt.
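Several of these restrictions can be expressed directly as flags on the Docker command line. This is only a sketch: the image name and the specific limits are placeholders, though the flags themselves exist in Docker 1.8-era clients:

```shell
docker run -d \
  --user 1000:1000 \   # run as a non-root UID/GID inside the container
  --read-only \        # mount the container's root filesystem read-only
  --cap-drop all \     # drop all Linux capabilities the app does not need
  --memory 128m \      # cap memory to blunt DoS via resource exhaustion
  myorg/app
```

Capabilities that the application genuinely needs can be added back individually with --cap-add.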
Warning
Docker Privileges = Root Privileges
This report focuses on the security of running containers, but it is important to point out that you also have to be careful about who you give access to the Docker daemon. Any user who can start and run Docker containers effectively has root access to the host. For example, consider that you can run the following:
$ docker run -v /:/homeroot -it debian bash ...
And you can now access any file or binary on the host machine.
If you run remote API access to your Docker daemon, be careful about how you secure it and who you give access to. If possible, restrict access to the local network.
Segregate Containers by Host
If you have a multi-tenancy setup, running containers for multiple users (whether these are internal users in your organization or external customers), ensure that each user is placed on a separate Docker host, as shown in Figure 1-1. This is less efficient than sharing hosts between users and will result in a higher number of VMs and/or machines than reusing hosts, but is important for security. The main reason is to prevent container breakouts resulting in a user gaining access to another user’s containers or data. If a container breakout occurs, the attacker will still be on a separate VM or machine and unable to easily access containers belonging to other users.
Similarly, if you have containers that process or store sensitive information, keep them on a host separate from containers handling less-sensitive information and, in particular, away from containers running applications directly exposed to end users. For example, containers processing credit-card details should be kept separate from containers running the Node.js frontend.
Segregation and use of VMs can also provide added protection against DoS attacks; users won’t be able to monopolize all the memory on the host and starve out other users if they are contained within their own VM.
In the short to medium term, the vast majority of container deployments will involve VMs. Although this isn’t an ideal situation, it does mean you can combine the efficiency of containers with the security of VMs.
Applying Updates
The ability to quickly apply updates to a running system is critical to maintaining security, especially when vulnerabilities are disclosed in common utilities and frameworks.
The process of updating a containerized system roughly involves the following stages:
Identify images that require updating. This includes both base images and any dependent images. This can be done with the Docker client.
Get or create an updated version of each base image. Push this version to your registry or download site.
For each dependent image, run docker build with the --no-cache argument. Again, push these images.
On each Docker host, run docker pull to ensure that it has up-to-date images.
Restart the containers on each Docker host.
Once you’ve ascertained that everything is functioning correctly, remove the old images from the hosts. If you can, also remove them from your registry.
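On a single host, steps 3 through 5 might be sketched roughly as follows. The image and container names are hypothetical placeholders, not from the text:

```shell
# "myorg/app" stands in for each dependent image.
docker build --no-cache -t myorg/app .   # step 3: rebuild, ignoring cached layers
docker push myorg/app                    # step 3: publish the rebuilt image
docker pull myorg/app                    # step 4: on each host, fetch the update
docker stop app && docker rm app         # step 5: stop and remove the old container
docker run -d --name app myorg/app       # step 5: restart from the new image
```

In practice the stop/run pair would be replaced by whatever rolling-update mechanism the deployment uses.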
Some of these steps sound easier than they are. Identifying images that need updating may require some grunt work and shell fu. Restarting the containers assumes that you have in place some sort of support for rolling updates or are willing to tolerate downtime. At the time of writing, functionality to completely remove images from a registry and reclaim the disk space is still being worked on.
If you use Docker Hub to build your images, note that you can set up repository links, which will kick off a build of your image when any linked image changes. By setting a link to the base image, your image will automatically get rebuilt if the base image changes.
When you need to patch a vulnerability found in a third-party image, including the official images, you are dependent on that party providing a timely update. In the past, providers have been criticized for being slow to respond. In such a situation, you can either wait or prepare your own image. Assuming that you have access to the Dockerfile and source for the image, rolling your image may be a simple and effective temporary solution.
This approach should be contrasted with the typical VM approach of using configuration management (CM) software such as Puppet, Chef, or Ansible. In the CM approach, VMs aren’t re-created but are updated and patched as needed, either through SSH commands or an agent installed in the VM. This approach works, but means that separate VMs are often in different states and that significant complexity exists in tracking and updating the VMs. This is necessary to avoid the overhead of re-creating VMs and maintaining a master, or golden, image for the service. The CM approach can be taken with containers as well, but adds significant complexity for no benefit—the simpler golden image approach works well with containers because of the speed at which containers can be started and the ease of building and maintaining images.
Note
Label Your Images
Identifying images and what they contain can be made a lot easier by liberal use of labels when building images. This feature appeared in 1.6 and allows the image creator to associate arbitrary key/value pairs with an image. This can be done in the Dockerfile:
FROM debian
LABEL version 1.0
LABEL description "Test image for labels"
You can take things further and add data such as the Git hash that the code in the image was compiled from, but this requires using some form of templating tool to automatically update the value.
Labels can also be added to a container at runtime:
$ docker run -d --name label-test -l group=a \
    debian sleep 100
1d8d8b622ec86068dfa5cf251cbaca7540b7eaa6766...
$ docker inspect -f '{{json .Config.Labels}}' label-test
{"group":"a"}
This can be useful when you want to handle certain events at runtime, such as dynamically allocating containers to load-balancer groups.
At times, you will need to update the Docker daemon to gain access to new features, security patches, or bug fixes. This will force you to either migrate all containers to a new host or temporarily halt them while the update is applied. It is recommended that you subscribe to either the docker-user or docker-dev Google groups to receive notifications of important updates.
Avoid Unsupported Drivers
Despite its youth, Docker has already gone through several stages of development, and some features have been deprecated or are unmaintained. Relying on such features is a security risk, because they will not be receiving the same attention and updates as other parts of Docker. The same goes for drivers and extensions depended on by Docker.
Storage drivers are another major area of development and change. At the time of writing, Docker is moving away from AUFS as the preferred storage driver. The AUFS driver is being taken out of the kernel and no longer developed. Users of AUFS are encouraged to move to Overlay or one of the other drivers in the near future.
Image Provenance
To safely use images, you need to have guarantees about their provenance: where they came from and who created them. You need to be sure that you are getting exactly the same image that the original developer tested and that no one has tampered with it, either during storage or transit. If you can’t verify this, the image may have become corrupted or, much worse, replaced with something malicious. Given the previously discussed security issues with Docker, this is a major concern; you should assume that a malicious image has full access to the host.
Provenance is far from a new problem in computing. The primary tool in establishing the provenance of software or data is the secure hash. A secure hash is something like a fingerprint for data—it is a (comparatively) small string that is unique to the given data. Any changes to the data will result in the hash changing. Several algorithms are available for calculating secure hashes, with varying degrees of complexity and guarantees of the uniqueness of the hash. The most common algorithms are SHA (which has several variants) and MD5 (which has fundamental problems and should be avoided). If you have a secure hash for some data and the data itself, you can recalculate the hash for the data and compare it. If the hashes match, you can be certain the data has not been corrupted or tampered with. However, one issue remains—why should you trust the hash? What’s to stop an attacker from modifying both the data and the hash? The best answer to this is cryptographic signing and public/private key pairs.
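The fingerprint behaviour described above is easy to demonstrate with the sha256sum tool found on most Linux systems; the file name and contents here are arbitrary:

```shell
# Record a digest for some data
echo "hello container" > data.txt
sha256sum data.txt > data.sha256

# Verification passes while the data is unchanged
sha256sum -c data.sha256          # prints: data.txt: OK

# Any modification changes the hash, so verification fails
echo "tampered" >> data.txt
sha256sum -c data.sha256 || echo "data was modified"
```

This demonstrates integrity only; as the text notes, protecting the hash itself from tampering requires cryptographic signing.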
Through cryptographic signing, you can verify the identity of the publisher of an artifact. If a publisher signs an artifact with their private key, any recipient of that artifact can verify it came from the publisher by checking the signature using the publisher’s public key. Assuming the client has already obtained a copy of the publisher’s key, and that publisher’s key has not been compromised, you can be sure the content came from the publisher and has not been tampered with.
Docker Digests
Secure hashes are known as digests in Docker parlance. A digest is a SHA256 hash of a filesystem layer or manifest, where a manifest is a metadata file describing the constituent parts of a Docker image. As the manifest contains a list of all the image’s layers identified by digest, if you can verify that the manifest hasn’t been tampered with, you can safely download and trust all the layers, even over untrustworthy channels (e.g., HTTP).
Docker Content Trust
Docker introduced content trust in 1.8. This is Docker’s mechanism for allowing publishers to sign their content, completing the trusted distribution mechanism. When a user pulls an image from a repository, she receives a certificate that includes the publisher’s public key, allowing her to verify that the image came from the publisher.
When content trust is enabled, the Docker engine will only operate on images that have been signed and will refuse to run any images whose signatures or digests do not match.
You can see content trust in action by enabling it and trying to pull signed and unsigned images:
$ export DOCKER_CONTENT_TRUST=1
$ docker pull debian
...
$ docker pull amouat/identidock:unsigned
No trust data for unsigned
In Docker 1.8, content trust must be enabled by setting the environment variable DOCKER_CONTENT_TRUST=1. In later versions of Docker, this will become the default.
The official, signed, Debian image was pulled successfully.
In contrast, Docker refused to pull the unsigned image amouat/identidock:unsigned.
So what about pushing signed images? It’s surprisingly easy:
$ docker push amouat/identidock:newest
The push refers to a repository [docker.io/amouat/identido...
...
843e2bded498: Image already exists
newest: digest: sha256:1a0c4d72c5d52094fd246ec03d...
Enter passphrase for new offline key with id 70878f1:
Repeat passphrase for new offline key with id 70878f1:
Enter passphrase for new tagging key with id docker.io/amo...
Repeat passphrase for new tagging key with id docker.io/am...
Finished initializing "docker.io/amouat/identidock"
Since this was the first push to the repository with content trust enabled, Docker has created a new root signing key and a tagging key. The tagging key will be discussed later. Note the importance of keeping the root key safe and secure. Life becomes very difficult if you lose this; all users of your repositories will be unable to pull new images or update existing images without manually removing the old certificate.
Now the image can be downloaded using content trust:
$ docker rmi amouat/identidock:newest
Untagged: amouat/identidock:newest
$ docker pull amouat/identidock:newest
Pull (1 of 1): amouat/identidock:newest@sha256:1a0c4d72c5d...
sha256:1a0c4d72c5d52094fd246ec03d6b6ac43836440796da1043b6e...
Digest: sha256:1a0c4d72c5d52094fd246ec03d6b6ac43836440796d...
Status: Image is up to date for amouat/identidock@sha256:1...
Tagging amouat/identidock@sha256:1a0c4d72c5d52094fd246ec03...
If you haven’t downloaded an image from a given repository before, Docker will first retrieve the certificate for the publisher of that repository. This is done over HTTPS and is low risk, but can be likened to connecting to a host via SSH for the first time; you have to trust that you are being given the correct credentials. Future pulls from that repository can be verified using the existing certificate.
Tip
Back Up Your Signing Keys!
Docker will encrypt all keys at rest and takes care to ensure private material is never written to disk. Due to the importance of the keys, it is recommended that they are backed up on two encrypted USB sticks kept in a secure location. To create a TAR file with the keys, run:
$ umask 077
$ tar -zcvf private_keys_backup.tar.gz ~/.docker/trust/private
$ umask 022
The umask commands ensure file permissions are set to read-only.
Note that as the root key is only needed when creating or revoking keys, it can—and should—be stored offline when not in use.
Back to the tagging key. A tagging key is generated for each repository owned by a publisher. The tagging key is signed by the root key, which allows it to be verified by any user with the publisher’s certificate. The tagging key can be shared within an organization and used to sign any images for that repository. After generating the tagging key, the root key can and should be taken offline and stored securely.
Should a tagging key become compromised, it is still possible to recover. By rotating the tagging key, the compromised key can be removed from the system. This process happens invisibly to the user and can be done proactively to protect against undetected key compromises.
Content trust also provides freshness guarantees to guard against replay attacks. A replay attack occurs when an artifact is replaced with a previously valid artifact. For example, an attacker may replace a binary with an older, known vulnerable version that was previously signed by the publisher. As the binary is correctly signed, the user can be tricked into running the vulnerable version of the binary. To avoid this, content trust makes use of timestamp keys associated with each repository. These keys are used to sign metadata associated with the repository. The metadata has a short expiration date that requires it to be frequently resigned by the timestamp key. By verifying that the metadata has not expired before downloading the image, the Docker client can be sure it is receiving an up-to-date (or fresh) image. The timestamp keys are managed by the Docker Hub and do not require any interaction from the publisher.
A repository can contain both signed and unsigned images. If you have content trust enabled and want to download an unsigned image, use the --disable-content-trust flag:
$ docker pull amouat/identidock:unsigned
No trust data for unsigned
$ docker pull --disable-content-trust \
    amouat/identidock:unsigned
unsigned: Pulling from amouat/identidock
...
7e7d073d42e9: Already exists
Digest: sha256:ea9143ea9952ca27bfd618ce718501d97180dbf1b58...
Status: Downloaded newer image for amouat/identidock:unsigned
If you want to learn more about content trust, see the offical Docker documentation, as well as The Update Framework, which is the underlying specification used by content trust.
While this is a reasonably complex infrastructure with multiple sets of keys, Docker has worked hard to ensure it is still simple for end users. With content trust, Docker has developed a user-friendly, modern security framework providing provenance, freshness, and integrity guarantees.
Content trust is currently enabled and working on the Docker Hub. To set up content trust for a local registry, you will also need to configure and deploy a Notary server.
If you are using unsigned images, it is still possible to verify images by pulling by digest, instead of by name and tag. For example:
$ docker pull debian@sha256:f43366bc755696485050ce14e1429c481b6f0ca04505c4a3093dfdb4fafb899e
This will pull the debian:jessie image as of the time of writing. Unlike the debian:jessie tag, it is guaranteed to always pull exactly the same image (or none at all). If the digest can be securely transferred and authenticated in some manner (e.g., sent via a PGP-signed e-mail from a trusted party), you can guarantee the authenticity of the image. Even with content trust enabled, it is still possible to pull by digest.
If you don’t trust either a private registry or the Docker Hub to distribute your images, you can always use the docker load and docker save commands to export and import images. The images can be distributed by an internal download site or simply by copying files. Of course, if you go down this route, you are likely to find yourself recreating many of the features of the Docker registry and content-trust components.
Reproducible and Trustworthy Dockerfiles
Ideally, Dockerfiles should produce exactly the same image each time. In practice, this is hard to achieve. The same Dockerfile is likely to produce different images over time. This is clearly a problematic situation, as again, it becomes hard to be sure what is in your images. It is possible to at least come close to entirely reproducible builds, by adhering to the following rules when writing Dockerfiles:
Always specify a tag in FROM instructions. FROM redis is bad, because it pulls the latest tag, which changes over time and can be expected to move with major version changes. FROM redis:3.0 is better, but can still be expected to change with minor updates and bug fixes (which may be exactly what you want). If you want to be sure you are pulling exactly the same image each time, the only choice is to use a digest as described previously; for example:

FROM redis@sha256:3479bbcab384fa343b52743b933661335448f816...
Using a digest will also protect against accidental corruption or tampering.
Provide version numbers when installing software from package managers. apt-get install cowsay is OK, as cowsay is unlikely to change, but apt-get install cowsay=3.03+dfsg1-6 is better. The same goes for other package installers such as pip—provide a version number if you can. The build will fail if an old package is removed, but at least this gives you warning. Also note that a problem still remains: packages are likely to pull in dependencies, and these dependencies are often specified in >= terms and can hence change over time. To completely lock down the version of things, have a look at tools like aptly, which allow you to take snapshots of repositories.
Verify any software or data downloaded from the Internet. This means using checksums or cryptographic signatures. Of all the steps listed here, this is the most important. If you don’t verify downloads, you are vulnerable to accidental corruption as well as attackers tampering with downloads. This is particularly important when software is transferred with HTTP, which offers no guarantees against man-in-the-middle attacks. The following section offers specific advice on how to do this.
Most Dockerfiles for the official images provide good examples of using tagged versions and verifying downloads. They also typically use a specific tag of a base image, but do not use version numbers when installing software from package managers.
Securely Downloading Software in Dockerfiles
In the majority of cases, vendors will make signed checksums available for verifying downloads. For example, the Dockerfile for the official Node.js image includes the following:
RUN gpg --keyserver pool.sks-keyservers.net \
    --recv-keys 7937DFD2AB06298B2293C3187D33FF9D0246406D \
                114F43EE0176B71C7BC219DD50A3051F888C628D
ENV NODE_VERSION 0.10.38
ENV NPM_VERSION 2.10.0
RUN curl -SLO "$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" \
 && curl -SLO "$NODE_VERSION/SHASUMS256.txt.asc" \
 && gpg --verify SHASUMS256.txt.asc \
 && grep " node-v$NODE_VERSION-linux-x64.tar.gz\$" SHASUMS256.txt.asc \
    | sha256sum -c -
Gets the GNU Privacy Guard (GPG) keys used to sign the Node.js download. Here, you do have to trust that these are the correct keys.
Downloads the Node.js tarball.
Downloads the checksum for the tarball.
Uses GPG to verify that the checksum was signed by whoever owns the previously obtained keys.
Checks that the checksum matches the tarball by using the sha256sum tool.
If either the GPG test or the checksum test fails, the build will abort.
In some cases, packages are available in third-party repositories, which means they can be installed securely by adding the given repository and its signing key. For example, the Dockerfile for the official Nginx image includes the following:
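The code block itself appears to have been lost here; a reconstruction of the two commands described below follows (the key ID and repository URL are recalled from the official image of that era and should be treated as illustrative rather than exact):

```dockerfile
# Add the Nginx signing key to apt's keystore...
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 \
      --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 \
# ...and register the Nginx package repository.
 && echo "deb wheezy nginx" \
      >> /etc/apt/sources.list
```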
The first command obtains the signing key for Nginx (which is added to the
keystore), and the second command adds the Nginx package repository to the list
of repositories to check for software. After this, Nginx can be simply and
securely installed with
apt-get install -y nginx (preferably with a
version number).
Assuming no signed package or checksum is available, creating your own is easy. For example, to create a checksum for a Redis release:
$ curl -s -o redis.tar.gz \
$ sha1sum -b redis.tar.gz
fe1d06599042bfe6a0e738542f302ce9533dde88 *redis.tar.gz
Here, we’re creating a 160-bit SHA-1 checksum. The -b flag tells the sha1sum utility that it is dealing with binary data, not text.
Once you’ve tested and verified the software, you can add something like the following to your Dockerfile:
RUN curl -sSL -o redis.tar.gz \
 \
 && echo "fe1d06599042bfe6a0e738542f302ce9533dde88 *redis.tar.gz" \
    | sha1sum -c -
This downloads the file as redis.tar.gz and asks
sha1sum to verify the
checksum. If the check fails, the command will fail and the build will abort.
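The pass/fail behavior of this gate can be tried locally without Docker or a network; the file name and contents below are stand-ins:

```shell
# Create a stand-in "download" and compute its SHA-1, as sha1sum -b would.
echo "pretend-tarball-contents" > redis.tar.gz
sum=$(sha1sum -b redis.tar.gz | awk '{print $1}')

# A matching checksum passes (exit code 0), so an &&-chained build step runs:
echo "$sum *redis.tar.gz" | sha1sum -c - && echo "verified"

# A wrong checksum makes sha1sum -c exit nonzero, aborting a RUN instruction:
echo "0000000000000000000000000000000000000000 *redis.tar.gz" \
  | sha1sum -c - 2>/dev/null || echo "verification failed"
```

Because each Dockerfile RUN instruction fails on a nonzero exit code, the second branch is exactly what aborts the build when a download has been tampered with.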
Changing all these details for each release is a lot of work if you release
often, so automating the process is worthwhile. In many of the official
image repositories, you can find
update.sh scripts for this purpose (for example,).
Security Tips
This section contains actionable tips on securing container deployments. Not all the advice is applicable to all deployments, but you should become familiar with the basic tools you can use.
Many of the tips describe various ways in which containers can be limited so that containers are unable to adversely affect other containers or the host. The main issue to bear in mind is that the host kernel’s resources—CPU, memory, network, UIDs, and so forth—are shared among containers. If a container monopolizes any of these, it will starve out other containers. Worse, if a container can exploit a bug in kernel code, it may be able to bring down the host or gain access to the host and other containers. This could be caused either accidentally, through some buggy programming, or maliciously, by an attacker seeking to disrupt or compromise the host.
Set a USER
Never run production applications as
root inside the container. That’s worth
saying again: never run production applications as root inside the container.
An attacker who breaks the application will have full access to the
container, including its data and programs. Worse, an attacker who manages to break out of
the container will have
root access on the host. You wouldn’t run an
application as root in a VM or on bare metal, so don’t do it in a container.
To avoid running as
root, your Dockerfiles should always create a non-privileged
user and switch to it with a
USER statement or from an entrypoint script. For
example:
RUN groupadd -r user_grp && useradd -r -g user_grp user
USER user
This creates a group called
user_grp and a new user called
user who belongs
to that group. The
USER statement will take effect for all following
instructions and when a container is started from the image. You may need to
delay the
USER instruction until later in the Dockerfile if you need to first
perform actions that need root privileges such as installing software.
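A minimal sketch of that ordering (the package name is illustrative): root is needed for the install, so the USER switch comes last.

```dockerfile
FROM debian:wheezy
# Installing software requires root, so do it before dropping privileges.
RUN apt-get update && apt-get install -y --no-install-recommends nginx
# Create the unprivileged user and switch to it as the final step.
RUN groupadd -r user_grp && useradd -r -g user_grp user
USER user
```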
Many of the official images create an unprivileged user in the same way, but do
not contain a
USER instruction. Instead, they switch users in an entrypoint
script, using the gosu utility. For example, the entry-point script for the
official Redis image looks like this:
#!/bin/bash
set -e
if [ "$1" = 'redis-server' ]; then
    chown -R redis .
    exec gosu redis "$@"
fi
exec "$@"
This script includes the line
chown -R redis ., which sets the ownership of
all files under the image's data directory to the
redis user. If the Dockerfile
had declared a
USER, this line wouldn’t work. The next line,
exec gosu redis "$@", executes the given
redis command as the redis user. The use of
exec means
the current shell is replaced with
redis, which becomes PID 1 and has any
signals forwarded appropriately.
Tip
Use gosu, not sudo
The traditional tool for executing commands as another user is sudo. While sudo
is a powerful and venerable tool, it has some side effects that make it less
than ideal for use in entry-point scripts. For example, you can see what happens
if you run
sudo ps aux inside an Ubuntu container:
$ docker run --rm ubuntu:trusty sudo ps aux
USER  PID ... COMMAND
root    1     sudo ps aux
root    5     ps aux
You have two processes, one for sudo and one for the command you ran.
By contrast, say you install gosu into an Ubuntu image:
$ docker run --rm amouat/ubuntu-with-gosu gosu root ps aux
USER  PID ... COMMAND
root    1     ps aux
You have only one process running—gosu has executed the command and gotten out of the way completely. Importantly, the command is running as PID 1, meaning that it will correctly receive any signals sent to the container, unlike the sudo example.
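The process-replacing behavior of exec can be seen with plain shells, no Docker required: because the second shell replaces the first in place, both report the same PID. This is the same property that lets a gosu'd process run as PID 1 and receive signals directly.

```shell
# A shell that execs its child is replaced in place: the child keeps the PID.
# The escaped \$\$ delays expansion so the inner shell prints its own PID,
# which turns out to be the same as the outer shell's.
sh -c 'echo "before exec: $$"; exec sh -c "echo after exec: \$\$"'
```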
User Namespaces
As of Docker 1.10, you can enable user namespacing by starting the kernel with
the
--userns-remap flag. This will map UIDs (including root) inside the
container to high-numbered UIDs on the host. There is a single, system-wide
mapping, meaning that root inside a container is the same UID across containers.
This is a great step forward for security, but as of the time of writing it has
some issues that limit its usability:
It can’t be used in conjunction with a read-only container filesystem.
Sharing of network, IPC and PID namespaces with the host or other containers is restricted in many cases.
The remapped root user inside a container has some extra restrictions, such as not being able to call
mknod.
Using Docker volumes becomes more complex as the changed UIDs affect access privileges.
Finally, if you have an application that insists on running as
root (and you
can’t fix it or use user namespaces), consider using tools such as sudo, SELinux
(see SELinux), and fakeroot to constrain the process.
Limit Container Networking
A container should open only the ports it needs to use in production, and these
ports should be accessible only to the other containers that need them. This is
a little trickier than it sounds, as by default, containers can talk to
each other whether or not ports have been explicitly published or exposed. You
can see this by having a quick play with the
netcat tool:
$ docker run --name nc-test -d \
    amouat/network-utils nc -l 5001
$ docker run \
    -e IP=$(docker inspect -f {{.NetworkSettings.IPAddress}} nc-test) \
    amouat/network-utils sh -c 'echo -n "hello" | nc -v $IP 5001'
Connection to 172.17.0.3 5001 port [tcp/*] succeeded!
$ docker logs nc-test
hello
Tells the netcat utility to listen to port 5001 and echo any input.
Sends "hello" to the first container using netcat.
The second container is able to connect to
nc-test despite there being no
ports published or exposed. You can change this by running the Docker daemon with
the
--icc=false flag. This turns off inter-container communication,
which can prevent compromised containers from being able to attack other
containers. Any explicitly linked containers will still be able to communicate.
Docker controls inter-container communication by setting IPtables rules (which
requires that the
--iptables flag is set on the daemon, as it should be by
default).
The following example demonstrates the effect of setting
--icc=false on
the daemon:
$ cat /etc/default/docker | grep DOCKER_OPTS=
DOCKER_OPTS="--iptables=true --icc=false"
$ docker run --name nc-test -d \
    amouat/network-utils nc -l 5001
$ docker run \
    -e IP=$(docker inspect -f {{.NetworkSettings.IPAddress}} nc-test) \
    amouat/network-utils sh -c 'echo -n "hello" | nc -w 2 -v $IP 5001'
nc: connect to 172.17.0.3 port 5001 (tcp) timed out
$ docker run --link nc-test:nc-test \
    amouat/network-utils sh -c 'echo -n "hello" | nc -w 2 -v nc-test 5001'
Connection to 172.17.0.3 5001 port [tcp/*] succeeded!
$ docker logs nc-test
hello
On Ubuntu, the Docker daemon is configured by setting DOCKER_OPTS in /etc/default/docker.
The -w 2 flag tells Netcat to time out after two seconds.
The first connection fails, as inter-container communication is off and
no link is present. The second command succeeds, due to the added link. If you
want to understand how this works under the hood, try running
sudo iptables -L
-n on the host with and without linked containers.
When publishing ports to the host, Docker publishes to all interfaces (0.0.0.0) by default. You can instead specify the interface you want to bind to explicitly:
$ docker run -p 87.245.78.43:8080:8080 -d myimage
This reduces the attack surface by only allowing traffic from the given interface.
Remove setuid/setgid Binaries
Chances are that your application doesn’t need any
setuid or
setgid
binaries. If you can disable or remove such binaries, you stop
any chance of them being used for privilege escalation attacks.
To get a list of such binaries in an image, try running
find / -perm +6000
-type f -exec ls -ld {} \;—for example:
$ docker run debian find / -perm +6000 -type f -exec \
    ls -ld {} \; 2> /dev/null
-rwsr-xr-x 1 root root   10248 Apr 15 00:02 /usr/lib/pt_chown
-rwxr-sr-x 1 root shadow 62272 Nov 20  2014 /usr/bin/chage
-rwsr-xr-x 1 root root   75376 Nov 20  2014 /usr/bin/gpasswd
-rwsr-xr-x 1 root root   53616 Nov 20  2014 /usr/bin/chfn
-rwsr-xr-x 1 root root   54192 Nov 20  2014 /usr/bin/passwd
-rwsr-xr-x 1 root root   44464 Nov 20  2014 /usr/bin/chsh
-rwsr-xr-x 1 root root   39912 Nov 20  2014 /usr/bin/newgrp
-rwxr-sr-x 1 root tty    27232 Mar 29 22:34 /usr/bin/wall
-rwxr-sr-x 1 root shadow 22744 Nov 20  2014 /usr/bin/expiry
-rwxr-sr-x 1 root shadow 35408 Aug  9  2014 /sbin/unix_chkpwd
-rwsr-xr-x 1 root root   40000 Mar 29 22:34 /bin/mount
-rwsr-xr-x 1 root root   40168 Nov 20  2014 /bin/su
-rwsr-xr-x 1 root root   70576 Oct 28  2014 /bin/ping
-rwsr-xr-x 1 root root   27416 Mar 29 22:34 /bin/umount
-rwsr-xr-x 1 root root   61392 Oct 28  2014 /bin/ping6
You can then "defang" the binaries with
chmod a-s to remove the suid bit. For
example, you can create a defanged Debian image with the following Dockerfile:
FROM debian:wheezy
RUN find / -perm +6000 -type f -exec chmod a-s {} \; || true
Build and run it:
$ docker build -t defanged-debian .
...
Successfully built 526744cf1bc1
$ docker run --rm defanged-debian \
    find / -perm +6000 -type f -exec ls -ld {} \; 2> /dev/null \
    | wc -l
0
It’s more likely that your Dockerfile will rely on a
setuid/
setgid binary
than your application. Therefore, you can always perform this step near the end, after
any such calls and before changing the user (removing
setuid binaries is pointless
if the application runs with root privileges).
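The defanging step can be reproduced on a scratch file rather than a whole image; note that modern GNU find spells the permission test -perm /6000 rather than the older -perm +6000 used above:

```shell
# Set the setuid bit on a scratch file, confirm find sees it, then strip it.
touch demo-bin
chmod 4755 demo-bin                          # rwsr-xr-x: setuid bit set
find . -name demo-bin -perm /6000            # matches: prints ./demo-bin
chmod a-s demo-bin                           # strip setuid/setgid bits
find . -name demo-bin -perm /6000 | wc -l    # no matches remain: prints 0
```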
Limit Memory
Limiting memory protects against both DoS attacks and applications with memory leaks that slowly consume all the memory on the host (such applications can be restarted automatically to maintain a level of service).
The
-m and
--memory-swap flags to
docker run limit the amount of memory
and swap memory a container can use. Somewhat confusingly, the
--memory-swap
argument sets the total amount of memory (memory plus swap memory rather
than just swap memory). By default, no limits are applied. If the
-m flag is
used but not
--memory-swap, then
--memory-swap is set to double the argument
to
-m. This is best explained with an example. Here, you’ll use the
amouat/stress image, which includes the Unix
stress utility that is used to test
what happens when resources are hogged by a process. In this case, you will tell
it to grab a certain amount of memory:
$ docker run -m 128m --memory-swap 128m amouat/stress \
    stress --vm 1 --vm-bytes 127m -t 5s
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [1] successful run completed in 5s
$ docker run -m 128m --memory-swap 128m amouat/stress \
    stress --vm 1 --vm-bytes 130m -t 5s
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [1] ...
$ docker run -m 128m amouat/stress \
    stress --vm 1 --vm-bytes 255m -t 5s
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: info: [1] successful run completed in 5s
These arguments tell the stress utility to run one process that will grab 127 MB of memory and time out after 5 seconds.
This time you try to grab 130 MB, which fails because you are allowed only 128 MB.
This time you try to grab 255 MB, and because --memory-swap has defaulted to 256 MB, the command succeeds.
Limit CPU
If an attacker can get one container, or one group of containers, to start using all the CPU on the host, the attacker will be able to starve out any other containers on the host, resulting in a DoS attack.
In Docker, CPU share is determined by a relative weighting with a default value of 1024, meaning that by default all containers will receive an equal share of CPU.
The way it works is best explained with an example. Here, you’ll start four
containers with the
amouat/stress image you saw earlier, except this time they
will all attempt to grab as much CPU as they like, rather than memory.
$ docker run -d --name load1 -c 2048 amouat/stress
912a37982de1d8d3c4d38ed495b3c24a7910f9613a55a42667d6d28e1d...
$ docker run -d --name load2 amouat/stress
df69312a0c959041948857fca27b56539566fb5c7cda33139326f16485...
$ docker run -d --name load3 -c 512 amouat/stress
c2675318fefafa3e9bfc891fa303a16e72caf221ec23a4c222c2b889ea...
$ docker run -d --name load4 -c 512 amouat/stress
5c6e199423b59ae481d41268c867c705f25a5375d627ab7b59c5fbfbcf...
$ docker stats $(docker inspect -f {{.Name}} $(docker ps -q))
CONTAINER   CPU % ...
/load1      392.13%
/load2      200.56%
/load3       97.75%
/load4       99.36%
In this example, the container
load1 has a weighting of 2048,
load2 has the
default weighting of 1024, and the other two containers have weightings of 512.
On my machine with eight cores and hence a total of 800% CPU to allocate, this
results in
load1 getting approximately half the CPU,
load2 getting a quarter,
and
load3 and
load4 getting an eighth each. If only one container is
running, it will be able to grab as many resources as it wants.
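The arithmetic behind those percentages can be checked directly; the weights below are the ones used in the example, on a notional 8-core (800% CPU) host:

```shell
# Each container's share is weight / sum-of-weights of the 800% total.
total=$((2048 + 1024 + 512 + 512))
for w in 2048 1024 512 512; do
  echo "weight $w -> $(( w * 800 / total ))% CPU"
done
```

This yields 400%, 200%, 100%, and 100%, which lines up with the measured 392%, 200%, 98%, and 99% once scheduler noise is accounted for.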
The relative weighting means that it shouldn’t be possible for any container to starve the others with the default settings. However, you may have "groups" of containers that dominate CPU over other containers, in which case, you can assign containers in that group a lower default value to ensure fairness. If you do assign CPU shares, make sure that you bear the default value in mind so that any containers that run without an explicit setting still receive a fair share without dominating other containers.
Limit Restarts
If a container is constantly dying and restarting, it will waste a large
amount of system time and resources, possibly to the extent of causing a DoS.
This can be easily prevented by using the
on-failure restart policy instead
of the
always policy, for example:
$ docker run -d --restart=on-failure:10 my-flaky-image
...
This causes Docker to restart the container up to a maximum of 10
times. The current restart count can be found as
.RestartCount in
docker inspect:
$ docker inspect -f "{{ .RestartCount }}" $(docker ps -lq)
0
Docker employs an exponential back-off when restarting containers. (It will wait for 100 ms, then 200 ms, then 400 ms, and so forth on subsequent restarts.) By itself, this should be effective in preventing DoS attacks that try to exploit the restart functionality.
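The doubling back-off can be tabulated with a few lines of shell; by the tenth failed restart the wait between attempts has already grown to tens of seconds:

```shell
# Docker waits 100 ms before the first restart and doubles the wait each time.
delay=100
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  echo "restart $attempt: wait ${delay} ms"
  delay=$((delay * 2))
done
```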
Limit Filesystems
Stopping attackers from being able to write to files prevents several attacks and generally makes life harder for hackers. They can’t write a script and trick your application into running it, or overwrite sensitive data or configuration files.
Starting with Docker 1.5, you can pass the
--read-only flag to
docker run,
which makes the container’s filesystem entirely read-only:
$ docker run --read-only debian touch x
touch: cannot touch 'x': Read-only file system
You can do something similar with volumes by adding
:ro to the end of the
volume argument:
$ docker run -v $(pwd):/pwd:ro debian touch /pwd/x
touch: cannot touch '/pwd/x': Read-only file system
The majority of applications need to write out files and won’t operate in a completely read-only environment. In such cases, you can find the folders and files that the application needs write access to and use volumes to mount only those files that are writable.
Adopting such an approach has huge benefits for auditing. If you can be sure your container’s filesystem is exactly the same as the image it was created from, you can perform a single offline audit on the image rather than auditing each separate container.
Limit Capabilities
The Linux kernel defines sets of privileges—called capabilities—that can be assigned to processes to provide them with greater access to the system. The capabilities cover a wide range of functions, from changing the system time to opening network sockets. Previously, a process either had full root privileges or was just a user, with no in-between. This was particularly troubling with applications such as ping, which required root privileges only for opening a raw network socket. This meant that a small bug in the ping utility could allow attackers to gain full root privileges on the system. With the advent of capabilities, it is possible to create a version of ping that has only the privileges it needs for creating a raw network socket rather than full root privileges, meaning would-be attackers gain much less from exploiting utilities like ping.
By default, Docker containers run with a subset of capabilities, so, for example, a
container will not normally be able to use devices such as the GPU and
sound card or insert kernel modules. To give extended privileges to a container,
start it with the
--privileged argument to
docker run.
In terms of security, what you really want to do is limit the capabilities of
containers as much as you can. You can control the capabilities available to a
container by using the
--cap-add and
--cap-drop arguments. For example, if
you want to change the system time (don’t try this unless you want to break
things!):
$ docker run debian \
    date -s "10 FEB 1981 10:00:00"
Tue Feb 10 10:00:00 UTC 1981
date: cannot set date: Operation not permitted
$ docker run --cap-add SYS_TIME debian \
    date -s "10 FEB 1981 10:00:00"
Tue Feb 10 10:00:00 UTC 1981
$ date
Tue Feb 10 10:00:03 GMT 1981
In this example, you can’t modify the date until you add the
SYS_TIME privilege
to the container. As the system time is a non-namespaced kernel feature, setting
the time inside a container sets it for the host and all other containers as
well.
A more restrictive approach is to drop all privileges and add back just the ones you need:
$ docker run --cap-drop all debian chown 100 /tmp
chown: changing ownership of '/tmp': Operation not permitted
$ docker run --cap-drop all --cap-add CHOWN debian \
    chown 100 /tmp
This represents a major increase in security; an attacker who breaks into a kernel will still be hugely restricted in which kernel calls she is able to make. However, some problems exist:
How do you know which privileges you can drop safely? Trial and error seems to be the simplest approach, but what if you accidentally drop a privilege that your application needs only rarely? Identifying required privileges is easier if you have a full test suite you can run against the container and are following a microservices approach that has less code and moving parts in each container to consider.
The capabilities are not as neatly grouped and fine-grained as you may wish. In particular, the SYS_ADMIN capability has a lot of functionality; kernel developers seem to have used it as a default when they couldn’t find (or perhaps couldn’t be bothered to look for) a better alternative. In effect, it threatens to re-create the simple binary split of admin user versus normal user that capabilities were designed to remediate.
Apply Resource Limits (ulimits)
The Linux kernel defines resource limits that can be applied to processes, such
as limiting the number of child processes that can be forked and the number of
open file descriptors allowed. These can also be applied to Docker containers,
either by passing the
--ulimit flag to
docker run or setting container-wide
defaults by passing
--default-ulimit when starting the Docker daemon. The
argument takes two values, a soft limit and a hard limit separated by a colon,
the effects of which are dependent on the given limit. If only one value is
provided, it is used for both the soft and hard limit.
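The soft/hard split can be poked at with the shell's ulimit builtin, which manipulates the same rlimits that --ulimit sets on a container. A process may lower its own soft limit freely, but the hard limit caps how far it can ever be raised again:

```shell
# Lower the soft open-files limit inside a subshell and inspect both limits.
bash -c '
  ulimit -S -n 64
  echo "soft limit: $(ulimit -S -n)"
  echo "hard limit: $(ulimit -H -n)"
'
```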
The full set of possible values and meanings are described in full in
man
setrlimit. (Note that the
as limit can’t be used with containers, however.) Of
particular interest are the following values:
- cpu
Limits the amount of CPU time to the given number of seconds. Takes a soft limit (after which the container is sent a SIGXCPU signal) followed by a SIGKILL when the hard limit is reached. For example, again using the stress utility from Limit Memory and Limit CPU to maximize CPU usage:
$ time docker run --ulimit cpu=12:14 amouat/stress \
    stress --cpu 1
stress: FAIL: [1] (416) <-- worker 5 got signal 24
stress: WARN: [1] (418) now reaping child worker processes
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 12s
stress: info: [1] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd

real    0m12.765s
user    0m0.247s
sys     0m0.014s
The
ulimit argument killed the container after it used 12 seconds of CPU.
This is potentially useful for limiting the amount of CPU that can be used by containers kicked off by another process—for example, running computations on behalf of users. Limiting CPU in such a way may be an effective mitigation against DoS attacks in such circumstances.
- nofile
The maximum number of file descriptors that can be concurrently open in the container. Again, this can be used to defend against DoS attacks and ensure that an attacker isn't able to open too many handles on the container or its volumes. (Note that you need to set nofile to one more than the maximum number you want.) For example:
$ docker run --ulimit nofile=5 debian cat /etc/hostname
b874469fe42b
$ docker run --ulimit nofile=4 debian cat /etc/hostname
Timestamp: 2015-05-29 17:02:46.956279781 +0000 UTC
Code: System error
Message: Failed to open /dev/null - open /mnt/sda1/var/...
Here, the OS requires several file descriptors to be open, although
cat requires only a single file descriptor. It’s hard to be sure of how
many file descriptors your application will need, but setting it to a number
with plenty of room for growth offers some protection against DoS
attacks, compared to the default of 1048576.
- nproc
The maximum number of processes that can be created by the user of the container. On the face of it, this sounds useful, because it can prevent fork bombs and other types of attack. Unfortunately, the nproc limits are not set per container but rather for the user of the container across all processes. This means, for example:
$ docker run --user 500 --ulimit nproc=2 -d debian sleep 100
92b162b1bb91af8413104792607b47507071c52a2e3128f0c6c7659bfb...
$ docker run --user 500 --ulimit nproc=2 -d debian sleep 100
158f98af66c8eb53702e985c8c6e95bf9925401c3901c082a11889182b...
$ docker run --user 500 --ulimit nproc=2 -d debian sleep 100
6444e3b5f97803c02b62eae601fbb1dd5f1349031e0251613b9ff80871555664
FATA[0000] Error response from daemon: Cannot start contai...
[8] System error: resource temporarily unavailable
$ docker run --user 500 -d debian sleep 100
f740ab7e0516f931f09b634c64e95b97d64dae5c883b0a349358c59958...
The third container couldn’t be started, because two processes
already belong to UID 500. By dropping the
--ulimit argument, you can continue to add
processes as the user. Although this is a major drawback,
nproc limits may still
be useful in situations where you use the same user across a limited number of
containers.
Also note that you can’t set
nproc limits for the
root user.
Run a Hardened Kernel
Beyond simply keeping your host operating system up-to-date and patched, you may want to consider running a hardened kernel, using patches such as those provided by grsecurity and PaX. PaX provides extra protection against attackers manipulating program execution by modifying memory (such as buffer overflow attacks). It does this by marking program code in memory as nonwritable and data as nonexecutable. In addition, memory is randomly arranged to mitigate against attacks that attempt to reroute code to existing procedures (such as system calls in common libraries). grsecurity is designed to work alongside PaX, and it adds patches related to role-based access control (RBAC), auditing, and other miscellaneous features.
To enable PaX and/or grsec, you will probably need to patch and compile the kernel yourself. This isn’t as daunting as it sounds, and plenty of resources are available online to help.
These security enhancements may cause some applications to break. PaX, in particular, will conflict with any programs that generate code at runtime. A small overhead also is associated with the extra security checks and measures. Finally, if you use a precompiled kernel, you will need to ensure that it is recent enough to support the version of Docker you want to run.
Linux Security Modules
The Linux kernel defines the Linux Security Module (LSM) interface, which is implemented by various modules that want to enforce a particular security policy. At the time of writing, several implementations exist, including AppArmor, SELinux, Smack, and TOMOYO Linux. These security modules can be used to provide another level of security checks on the access rights of processes and users, beyond that provided by the standard file-level access control.
The modules normally used with Docker are SELinux (typically with Red Hat-based distributions) and AppArmor (typically with Ubuntu and Debian distributions). We’ll take a look at both of these modules now.
SELinux
The SELinux, or Security Enhanced Linux, module was developed by the United States National Security Agency (NSA) as an implementation of what they call mandatory access control (MAC), as contrasted to the standard Unix model of discretionary access control (DAC). In somewhat plainer language, there are two major differences between the access control enforced by SELinux and the standard Linux access controls:
SELinux controls are enforced based on types, which are essentially labels applied to processes and objects (files, sockets, and so forth). If the SELinux policy forbids a process of type A to access an object of type B, that access will be disallowed, regardless of the file permissions on the object or the access privileges of the user. SELinux tests occur after the normal file permission checks.
It is possible to apply multiple levels of security, similar to the governmental model of confidential, secret, and top-secret access. Processes that belong to a lower level cannot read files written by processes of a higher level, regardless of where in the filesystem the file resides or what the permissions on the file are. So a top-secret process could write a file to /tmp with chmod 777 privileges, but a confidential process would still be unable to access the file. This is known as multilevel security (MLS) in SELinux, which also has the closely related concept of multicategory security (MCS). MCS allows categories to be applied to processes and objects and denies access to a resource if it does not belong to the correct category. Unlike MLS, categories do not overlap and are not hierarchical. MCS can be used to restrict access to resources to subsets of a type (for example, by using a unique category, a resource can be restricted to use by only a single process).
SELinux comes installed by default on Red Hat distributions and should be simple
to install on most other distributions. You can check whether SELinux is running by
executing
sestatus. If that command exists, it will tell you whether SELinux is
enabled or disabled and whether it is in permissive or enforcing mode. When in
permissive mode, SELinux will log access-control infringements but will not
enforce them.
The default SELinux policy for Docker is designed to protect the host from
containers, as well as containers from other containers. Containers are assigned
the default process type
svirt_lxc_net_t, and files accessible to a container
are assigned
svirt_sandbox_file_t. The policy enforces that containers are able to read and execute files only from
/usr on the host and cannot write to
any file on the host. It also assigns a unique MCS category number to each
container, intended to prevent containers from being able to access files or
resources written by other containers in the event of a breakout.
Note
Enabling SELinux
If you’re running a Red Hat-based distribution, SELinux should be installed
already. You can check whether it’s enabled and is enforcing rules by
running
sestatus on the command line. To enable SELinux and set it to
enforcing mode, edit /etc/selinux/config so that it contains the line
SELINUX=enforcing.
You will also need to ensure that SELinux support is enabled on the Docker
daemon. The daemon should be running with the flag
--selinux-enabled. If not,
it should be added to the file /etc/sysconfig/docker.
You must be using the devicemapper storage driver to use SELinux. At the time of writing, getting SELinux to work with Overlay and BTRFS is an ongoing effort, but they are not currently compatible.
For installation of other distributions, refer to the relevant documentation. Note that SELinux needs to label all files in your filesystem, which takes some time. Do not install SELinux on a whim!
Enabling SELinux has an immediate and drastic effect on using containers with volumes. If you have SELinux installed, you will no longer be able to read or write to volumes by default:
$ sestatus | grep mode
Current mode:              enforcing
$ mkdir data
$ echo "hello" > data/file
$ docker run -v $(pwd)/data:/data debian cat /data/file
cat: /data/file: Permission denied
You can see the reason by inspecting the folder’s security context:
$ ls --scontext data
unconfined_u:object_r:user_home_t:s0 file
The label for the data doesn’t match the label for containers. The fix is to
apply the container label to the data by using the
chcon tool, effectively
notifying the system that you expect these files to be consumed by containers:
$ chcon -Rt svirt_sandbox_file_t data
$ docker run -v $(pwd)/data:/data debian cat /data/file
hello
$ docker run -v $(pwd)/data:/data debian \
    sh -c 'echo "bye" >> /data/file'
$ cat data/file
hello
bye
$ ls --scontext data
unconfined_u:object_r:svirt_sandbox_file_t:s0 file
Note that if you run
chcon only on the file and not the parent folder, you
will be able to read the file but not write to it.
From version 1.7 and on, Docker automatically relabels volumes for use with
containers if the
:Z or
:z suffix is provided when mounting the volume.
The
:z labels the volume as usable by all containers (this is required
for data containers that share volumes with multiple containers), and the
:Z
labels the volume as usable by only that container. For example:
$ mkdir new_data $ echo "hello" > new_data/file $ docker run -v $(pwd)/new_data:/new_data debian \ cat /new_data/file cat: /new_data/file: Permission denied $ docker run -v $(pwd)/new_data:/new_data:Z debian \ cat /new_data/file hello
You can also use the
--security-opt flag to change the label for a container
or to disable the labeling for a container:
$ touch newfile $ docker run -v $(pwd)/newfile:/file \ --security-opt label:disable \ debian sh -c 'echo "hello" > /file' $ cat newfile hello
An interesting use of SELinux labels is to apply a specific label to a container in order to enforce a particular security policy. For example, you could create a policy for an Nginx container that allows it to communicate on only ports 80 and 443..
A lot of tools and articles are available for helping to develop SELinux
policies. In particular, be aware of
audit2allow, which can turn log messages
from running in permissive mode into policies that allow you to run in
enforcing mode without breaking applications.
The future for SELinux looks promising; as more flags and default implementations are added to Docker, running SELinux secured deployments should become simpler. The MCS functionality should allow for the creation of secret or top-secret containers for processing sensitive data with a simple flag. Unfortunately, the current user experience with SELinux is not great; newcomers to SELinux tend to watch everything break with "Permission Denied" and have no idea what’s wrong or how to fix it. Developers refuse to run with SELinux enabled, leading back to the problem of having different environments between development and production—the very problem Docker was meant to solve. If you want or need the extra protection that SELinux provides, you will have to grin and bear the current situation until things improve.
AppArmor
The advantage and disadvantage of AppArmor is that it is much simpler than SELinux. It should just work and stay out of your way, but cannot provide the same granularity of protection as SELinux. AppArmor works by applying profiles to processes, restricting which privileges they have at the level of Linux capabilities and file access.
If you’re using an Ubuntu host, chances are that it is running right now. You can
check this by running
sudo apparmor_status. Docker will automatically apply
an AppArmor profile to each launched container. The default profile provides a
level of protection against rogue containers attempting to access various system
resources, and it can normally be found at /etc/apparmor.d/docker. At the time of
writing, the default profile cannot be changed, as the Docker daemon will
overwrite it when it reboots.
If AppArmor interferes with the running of a container, it can be turned off for that container by passing
--security-opt="apparmor:unconfined" to
docker run. You can pass a different profile for a container by passing
--security-opt="apparmor:PROFILE" to
docker run, where the
PROFILE is the name of a security profile previously loaded by AppArmor.
Auditing
Running regular audits or reviews on your containers and images is a good way to ensure that your system is kept clean and up-to-date and to double-check that no security breaches have occurred. An audit in a container-based system should check that all running containers are using up-to-date images and that those images are using up-to-date and secure software. Any divergence in a container from the image it was created from should be identified and checked. In addition, audits should cover other areas nonspecific to container-based systems, such as checking access logs, file permissions, and data integrity. If audits can be largely automated, they can run regularly to detect any issues as quickly as possible.
Rather than having to log into each container and examine each individually, you
can instead audit the image used to build the container and use
docker diff
to check for any drift from the image. This works even better if you use a
read-only filesystem (see Limit Filesystems) and can be sure that nothing
has changed in the container.
At a minimum, you should check that the versions of software used are up-to-date
with the latest security patches. This should be checked on each image and any
files identified as having changed by
docker diff. If you are using volumes,
you will also need to audit each of those directories.
The amount of work involved in auditing can be seriously reduced by running minimal images that contain only the files and libraries essential to the application.
The host system also needs to be audited as you would a regular host machine or VM. Making sure that the kernel is correctly patched becomes even more critical in a container-based system where the kernel is shared among containers.
Several tools are already available for auditing container-based systems, and you can expect to see more in the coming months. Notably, Docker released the Docker Bench for Security tool, which checks for compliance with many of the suggestions from the Docker Benchmark document from the Center for Internet Security (CIS). Also, the open source Lynis auditing tool contains several checks related to running Docker.
Incident Response
Should something bad occur, you can take advantage of several Docker features to respond quickly to the situation and investigate the cause of the problem. In particular,
docker commit can be used to take a snapshot of the compromised system, and
docker diff and
docker logs can reveal changes made by the attacker.
A major question that needs to be answered when dealing with a compromised container is "Could a container breakout have occurred?" Could the attacker have gained access to the host machine? If you believe that this is possible or likely, the host machine will need to be wiped and all containers re-created from images (without some form of attack mitigation in place). If you are sure the attack was isolated to the container, you can simply stop that container and replace it. (Never put the compromised container back into service, even if it holds data or changes not in the base image; you simply can’t trust the container anymore.)
Effective mitigation against the attack may be to limit the container in some way, such as dropping capabilities or running with a read-only filesystem.
Once the immediate situation has been dealt with and some form of attack mitigation put in place, the compromised image that you committed can be analyzed to determine the exact causes and extent of the attack.
For information on how to develop an effective security policy covering incident response, read CERT’s Steps for Recovering from a UNIX or NT System Compromise and the advice given on the ServerFault website.
Conclusion
As you’ve seen in this report, there are many aspects to consider when securing a system. The primary advice is to follow the principles of defense-in-depth and least privilege. This ensures that even if an attacker manages to compromise a component of the system, that attacker won’t gain full access to the system and will have to penetrate further defenses before being able to cause significant harm or access sensitive data.
Groups of containers belonging to different users or operating on sensitive data should run in VMs separate from containers belonging to other users or running publicly accessible interfaces. The ports exposed by containers should be locked down, particularly when exposed to the outside world, but also internally to limit the access of any compromised containers. The resources and functionality available to containers should be limited to only that required by their purpose, by setting limits on their memory usage, filesystem access, and kernel capabilities. Further security can be provided at the kernel level by running hardened kernels and using security modules such as AppArmor or SELinux.
In addition, attacks can be detected early through the use of monitoring and auditing. Auditing, in particular, is interesting in a container-based system, as containers can be easily compared to the images they were created from in order to detect suspicious changes. In turn, images can be vetted offline to make sure they are running up-to-date and secure versions of software. Compromised containers with no state can be replaced quickly with newer versions.
Containers are a positive force in terms of security because of the extra level of isolation and control they provide. A system using containers properly will be more secure than the equivalent system without containers.
1I strongly recommend Dan Walsh’s series of posts at opensource.com.
2It is possible to turn on user namespacing, which will map the root user in a container to a high-numbered user on the host. We will discuss this feature and its drawbacks later.
3An interesting argument exists about whether containers will ever be as secure as VMs. VM proponents argue that the lack of a hypervisor and the need to share kernel resources mean that containers will always be less secure. Container proponents argue that VMs are more vulnerable because of their greater attack surface, pointing to the large amounts of complicated and privileged code in VMs required for emulating esoteric hardware (as an example, see the recent VENOM vulnerability that exploited code in floppy drive emulation).
4The concept of least privilege was first articulated as "Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job," by Jerome Saltzer in "Protection and the Control of Information Sharing in Multics." Recently, Diogo Mónica and Nathan McCauley from Docker have been championing the idea of "least-privilege microservices" based on Saltzer’s principle., including at a recent DockerCon talk.
5A work-around is to
docker save all the required images and load them into a fresh registry.
6This is similar to modern ideas of immutable infrastructure, whereby infrastructure—including bare metal, VMs, and containers—is never modified and is instead replaced when a change is required.
7A full discussion of public-key cryptography is fascinating but out of scope here. For more information see Applied Cryptography by Bruce Schneier.
8A similar construct is used in protocols such as Bittorrent and Bitcoin and is known as a hash list.
9In the context of this report, anyone who pushes an image is a publisher; it is not restricted to large companies or organizations.
10I’m using Ubuntu instead of Debian here, as the Ubuntu image includes sudo by default.
11We’re using the OpenBSD version here.
12
setuid and
setgid binaries run with the privileges of the owner rather than the user. These are normally used to allow users to temporarily run with escalated privileges required to execute a given task, such as setting a password.
13These are
CHOWN,
DAC_OVERRIDE,
FSETID,
FOWNER,
MKNOD,
NET_RAW,
SETGID,
SETUID,
SETFCAP,
SETPCAP,
NET_BIND_SERVICE,
SYS_CHROOT,
KILL, and
AUDIT_WRITE. Dropped capabilities notably include (but are not limited to)
SYS_TIME,
NET_ADMIN,
SYS_MODULE,
SYS_NICE, and
SYS_ADMIN. For full information on capabilities, see
man capabilities.
14If you run this example, you’ll have a broken system until you set the time correctly. Try running
sudo ntpdate or
sudo ntpdate-debian to change back to the correct time.
15A file descriptor is a pointer into a table recording information on the open files on the system. An entry is created whenever a file is accessed, recording the mode (read, write, etc.) the file is accessed with and pointers to the underlying files. | https://www.oreilly.com/ideas/docker-security?intcmp=il-webops-free-article-lgen_five_security_concerns_when_using_docker | CC-MAIN-2018-05 | refinedweb | 11,876 | 51.38 |
I am trying to update values of node <tem:intA> and <tem:intB> in the following xml using below groovy script code.
I used below groovy script code to update the tag values. But I am getting an error "Java.lang.NullPointerException: Cannot invoke method getNodeValue() on null object error at line 36".
Am I using the wrong xpath? I am not sure whats going wrong. Can someone please help me?
Hi,
Ok, so if I may suggest the use of XMLSlurper, which is built into Groovy so requires no imports and is quote neat. Then if we put your XML snippet (similar typed in by me) into a file /work/SoapUI Forum/soapy.xml:
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:
<soap:Header/>
<soap:Body>
<tem:Add xmlns:
<tem:intA>5</tem:intA>
<tem:intB>7</tem:intB>
</tem:Add>
</soap:Body>
</soap:Envelope>
Then, the following Groovy script allows you to parse the XML from file, update the values of intA and intB and save the resulting XML back to the same file:
import groovy.xml.XmlUtil
def xmlFromFile = new File('/work/SoapUI Forum/soapy.xml')
def envelopeNode = new XmlSlurper().parseText(xmlFromFile.getText())
envelopeNode.Body.Add.intA=5
envelopeNode.Body.Add.intB=7
//Just to print it out
XmlUtil xmlUtil = new XmlUtil()
log.info xmlUtil.serialize(envelopeNode)
//To update the file with updated XML
xmlUtil.serialize(envelopeNode, new FileWriter(xmlFromFile))
Is this what you needed?
Cheers,
Rup
Thanks Rup!!
This is what I was looking for. But how to parameterize both xpath and and node value. Lets say, I am taking this xpath (envelopeNode.Body.Add.intA) as a input parameter from excel sheet and then modifying the value to '20'.
def sIntA=envelopeNode.Body.Add.intA // Lets say, this value is taken from excel sheet (I have a code to get the value from excel sheet)
My aim is to get the Node's xpath from excel sheet, get the Node's value from excel sheet, update the xml with this node and save updated xml in a different folder.
Please suggest.
Hi,
Thats ok.
I think I now get what you are after i.e. a way to pass the actual (XPath) query in as a String paramter, but before having a go at that, can I ask if you actually just want a way to parameterise and populate SOAP requests from your spreadsheet of values e.g. data-driven testing? If this is the case I can potentially explain an easier way to do this in SoapUI using your spreadsheet.
Cheers,
Rup
Yes Rup. I am actually looking for data driven testing. I am trying to use external xml stored on local disc. I will update the node values using xpath. After updating the xml, I will save it in different folder and at the end I will upload both request xml and response xml to the test case in ALM. It helps me keeping track of request and response used for specific test case.
Please let me know if you have any easier way to do this.
Really appreciate your help. Thanks!
Cheers,
Vikram
Hi Vikram,
Ok, that sounds interesting. I don't know much about ALM, but hopefully we can sort something out.
So a common pattern of data-driven testing in SoapUI can be:
1. Read line of CSV data or potentially a soap request (using Groovy) in your case. You can do this in a Groovy TestStep. Say you have read the values in you can set them as properties on the context holder e.g.
context["intA"]=2
context["intB"]=3
2. Setup a SOAP Request TestStep and use the properties from the context holder to parameterise the SOAP request. You can esily insert the values using whats called property expansions (see) e.g.
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:
<soap:Header/>
<soap:Body>
<tem:Add xmlns:
<tem:intA>${intA}</tem:intA>
<tem:intB>${intB}</tem:intB>
</tem:Add>
</soap:Body>
</soap:Envelope>
2a) You can also use the data values from the context to parameterise Assertions e.g. XPath Assertion (expected value) or Content or Script Assertions.
3. Use another Groovy TestStep (there is also a Conditional Goto TestStep) to loop back to 1 if there are more rows of test data.
Of course if you want to dump the SOAP request/response into a file, the thats easy enough to add with Groovy.
Does this sound at all like what you wanted? It just sounded like it might be easier to work with data values e.g. CSV rather than having to manipulate XML with XPaths..
If this is not really what you need, we can get back to the plan A and I'll have a go at adapting that Groovy script to use XPaths instead.
Cheers,
Rup
Hi Rupert,
I am also facing a similar problem but the difference is that I am not doing SOAPUI testing but I need to update the SOAPUI response tag values from an excel before forwarding that response to another system ,since my main application is developed in groovy ,I was thinking if this can be done in groovy only.
So my excel header are actually XPATHS and the corresponding rows below contain the value.
any help would be great.
Hi,
Ok, I am not sure I get exactly what you mean, please can you explain more? So you say you arent testing with SoapUI, but want to update tags in the response - are you calling some kind of service that returns XML and wanting to update the response based on values taken from an Excel spreadsheet?
Thanks,
Rup
Hi Rupert,
Yeah ,so I have an inhouse built(java-groovy) application which generates a SOAP response and I have to feed that response to a proprietory software .But before sending the response to the software I need to update few tag values as per the Excel received from third party.
For now I have manually updated the third party Excel column headers with XPATHs of the XML so that I can update the XMLs as per the values in the corresponding row.
My main problem is reading the column header and passing its value in place of
Envelope.Body.CamCommand.CamAction.tCANOTIFICATIONS.pCASECIDUNDL
in the below code
String fileContents = new File(FilePath).text def xmlfromFile = new File(FilePath) def Envelope = new XmlSlurper().parseText(xmlfromFile.getText()) Envelope.Body.CamCommand.CamAction.tCANOTIFICATIONS.pCASECIDUNDL = "NewData123" XmlUtil xmlUtil = new XmlUtil() println xmlUtil.serialize(Envelope) xmlUtil.serialize(Envelope, new FileWriter(xmlfromFile))
Thanks,
Amitav
The below link is having an example, Hope this will help | https://community.smartbear.com/t5/SoapUI-Open-Source/How-to-update-tag-value-in-external-xml-using-groovy-script/td-p/99353 | CC-MAIN-2019-43 | refinedweb | 1,106 | 64 |
Nevertheless that had to end. So I thought I'll start checking up on the changes introduced in C# 4.0 today. The most common aspect talked about was the introduction of the dynamic keyword.
The dynamic keyword allows you to reference objects that may or may not exist during runtime. In other words, your objects are late bound and static type checking which we were so used to and swore by is off the table when you use the dynamic keyword. When using the dynamic keyword, you won't see warnings or errors during compile time which suddenly makes you go "oh oh" the first time. In short, you can do absolutely ANY operation at runtime with dynamic objects. Your first thought when you hear this is that this is probably not such a great idea to use when you build complex applications. Javascript probably comes to mind. But as you think a lot over this, it may not be all that bad if you are a little careful.
As you already know, languages that are statically typed (like C# or Java) have variable bound to the object and the datatype. The compiler needs to know the datatype during compile time itself. This contrasts with a dynamic programming language where you do not know the datatype during runtime and a variable is only mapped to the object and not to any particular datatype.
I'm trying some of the basics over here:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace HelloDynamicWorld
{
class Program
{
static void Main(string[] args)
{
Program HelloWorld = new Program();
HelloWorld.HelloWorldDynamic();
HelloWorld.HelloWorldDynamicObjects();
}
public void HelloWorldDynamic()
{
dynamic a = 1; //run time sees this as an int. Compiler doesn't know what this is
dynamic b = "dyanmic"; // run time sees this as a string. Compiler doesn't know what this is
dynamic c = 10.123; // run time sees this as a double. Compiler doesn't know what this is
Console.WriteLine(a + " Hello" + b + " World " + c); // The whole thing is a string.
dynamic d = a + b + c; // compiler doesn't now what this is. Run time sees the whole thing
// as a string "1dynamic10.123"
}
public void HelloWorldDynamicObjects()
{
Person p = new Person();
Location l = new Location();
p.FirstName = "George";
p.LastName = "Alexander";
dynamic a = p; // a is of type Person
//Console.WriteLine(typeof(dynamic).ToString()); // compiler will throw an error. dynamic not allowed.
Console.WriteLine ("\n\r First Name " + p.FirstName);
Console.WriteLine("\n\r Last Name " + p.LastName);
a = l; // a becomes of type Location
a.Country = "Jarvandoland"; // a.Country won't come up in intellisense
a.State = "Koliyork";
a.City = "Killingworth";
Console.WriteLine ("\n\r Country " + a.Country);
Console.WriteLine ("\n\r State " + a.State);
Console.WriteLine("\n\r City " + a.City);
//dynamic c = a + p; // compiler won't see this as an error.
//Runtime throws an error as type is resolved to Location and Person
}
}
class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
}
class Location
{
public string Country { get; set; }
public string State { get; set; }
public string City { get; set; }
}
}
So where would you most likely use this feature?
hmmm. it make sense to use them when you need to use COM interops or when returning objects from objects that are instantiated and have their definitions in different languages (Iron Python for instance). I'm guessing they might come in handy when you have operations involving XML that need a lot of concatenation. Or how about when exploiting the dynamic keyword when working with Entity or LINQ objects? Don't know and that's something I would have to play around with and see.
Cutting for now,
Until next time, happy programming! | https://it.toolbox.com/blogs/georgealexander/c-40-using-the-dynamic-keyword-042910 | CC-MAIN-2018-26 | refinedweb | 617 | 67.76 |
- If anyone has been watching my trials and tribulations building and running the latest CVS snapshot under S/Linux on a SUN SPARCstation IPX. The latest position is:- If I compile with optimization ...
- With the 6.4 beta just over a week away, here are the open items I see for 6.4. We actually have fewer than usual, so that is a good thing. Many of the latter ones are fairly rare bugs. ...
- Let me ask about type coersion. When you have an int2 column called x, how do you place the conversion functions when it is being compared to an in4 constant? x = int2(500) int4(x) = 500 The first is ...
- Is this a bug that the index doesn't work on floats or is it a datatype mismatch thing? ----- template1= create table foo (x float4, y float4); CREATE template1= create index foox on foo(x); CREATE ...
- I did a cvsup and a fresh recompile this morning. But I still get ERROR: fmgr_info: function 683: cache lookup failed from initdb. Guess I have to dig into it more deeply. Is anyone else running ...
- backend/libpq/pgcomm.c no longer compiles on my system. The cvs log sez Massimo Dal Zotto <dz@cs.unitn.it flock is *VERY* far from portable. I am aware of three or four different, mutually ...
- vacuum still doesn't work for me. I did a regresion test and found the sanity_check test failed. The result shows that the backend died while doing vacuum. I destroyed and re-created the regression ...
- I am working on a patch to: remove oidname, oidint2, and oidint4 allow the bootstrap code to create multi-key indexes change procname index to procname, nargs, argtypes remove many sequential scans ...
- Well, initdb doesn't give the same errors, but it's still failing. Now it's just saying: initdb: could not create template database initdb: cleaning up by wiping out ...
- Hello! I've written the SPI procedure which allows to delete large objects referenced by currently deleted or updated tuples. I tested it with PostgreSQL v6.3.2 and seem it works. If PostgreSQL ...
- While testing my 6.4 patch to allow functions/expressions to be specified in the ORDER/GROUP BY clause (and not in the target list) I came across a nasty little bug. A segmentation fault gets thrown ...
- I found that current pg_dump command produces wrong output if a table name includes upper letters (See below). in bin/pg_dump.c: sprintf(q, "CREATE TABLE \"%s\" (", fmtId(tblinfo[i].relname)); Here ...
- Hi. A month or two ago I put an int8 data type into Postgres. Aside from being useful on its own, it could also form the basis for other types needing extended range, such as money, decimal, and ...
- Bruce, I'll send the patch itself directly to you. It's a bigger one and I don't want to waste bandwidth on the list. Would you please apply that one and forget the two others I posted recently? The ...
- Hi, got a lot of things working up to now. Most things on the relation level are fixed now, including qualified instead rules. Update rules can now correctly refer to *new* and *current*. Must check ...
- Attached is a patch that uses autoconf to determine whether there is a working 64-bit-int type available. In playing around with it on my machine, I found that gcc provides perfectly fine support for ...
- On Friday I asked. Since then I have downloaded the most current sources and I still get this problem. Am I the only one seeing this? -- D'Arcy J.M. Cain <darcy@{druid|vex}.net | Democracy is three ...
- I've just run CVS UPDATE again, in another attempt to get initdb to run. Anyhow, I noticed that there was a message saying that there were conflicts. Any ideas? -- Peter T Mount peter@retep.org.uk or ...
- I have worked with Vadim privately on the remaining OR clause index issues, and I am done. Please start testing, everyone. The only open item is whether MergeJoin will try to use a multi-index OR ...
- Here is a comment in path/indxpath.c that says they don't want to use multi-key indexes with OR clauses. Of course, we now support multi-key indexes, and this code was disabled anyway because it was ...
- This is the first I have heard of this. The file commands/copy.c does use a file descriptor cache, but that is really just used for allowing more file opens that the OS permits. Actual opens and ...
- Hmm. I'm on Linux-libc5, i686, gcc 2.7.2.1, and select_views is still core dumping: QUERY: SELECT * FROM toyemp WHERE name = 'sharon'; pqReadData() -- backend closed the channel unexpectedly. These ...
- Hi, I really wonder if anybody ever used functions returning tuples or sets of tuples. If so, be careful! Let's have a look at the following test: create table emp (name text, salary money); create ...
- Hi, just updated development version from cvs and got strange problem : select * from WORK_FLATS where DISTRICT_ID in (4,101); select * from WORK_FLATS where DISTRICT_ID in (101,4); Does anyone ...
- When experimenting with threaded perl I noticed I had to lock access to the database a one user at the time in order get reliable (no SEGV). Unfortunately it is 3 layer between my server and the wire, ...
- As far as I am concerned, we are ready to go. -- Bruce Momjian | 830 Blythe Avenue maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026 + If your life is a hard drive, | (610) 353-9879(w) + ...
- Hi, who's the parser guru? I need help! I have a table t1(a int4, b int4) When I update t1 set b = 2 where a = 1; I get a targetlist with 1 entry and resno=2. But when I update t1 set b = t2.b where ...
- I did a CVSup a few minutes ago, and tried to do a clean install. configure seemed to do the right thing, but initdb was not happy. Looks like (perhaps) the recent changes to initdb to allow ...
- hello tom, did you try informix SE on linux ? could you install it on red hat 4.2 ? i'll be installing red hat 5.0 on my machine and would like to know if informix SE works with red hat. oracle too ...
- Currently, large objects are stored internally as xinv### and xinx###. I would like to rename this for 6.4 to be _lobject_### to prevent namespace collisions, and make them clearer for ...
- I just tried: select oid from pg_type; and it worked. -- Bruce Momjian | 830 Blythe Avenue maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026 + If your life is a hard drive, | (610) ...
- Today I am wondering whether it wouldn't be a good idea to just go ahead and move those struct declarations into libpq-int.h. Then a pointer to PGconn would be truly an opaque type for applications; ...
- Hi I found some bug on next example: ---------------------------- /* step 1 -------------- */ test= select pubid,bn into table l1 from l; SELECT test= select pubid,bn into table n1 from n; SELECT /* ...
- Hi, the following patch fixes a bug in the oracle compatibility functions btrim() ltrim() and rtrim(). The error was that the character after the set was included in the tests (ptr2 pointed to the ...
- I am planning on running pgindent on Monday, in preparation for the next release. I assume no one will be sitting on any big patches. If you are, let me know and I will wait. Marc, I assume we are on ...
- I have added this to the developer's FAQ. Any comments or corrections? -- Bruce Momjian | 830 Blythe Avenue maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026 + If your life is a hard drive, ...
- I tried to initdb as Bruce applied that huge patch but: Running: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1 ERROR: fmgr_info: function 683: cache lookup failed ERROR: fmgr_info: ...
- Hi, here's the second patch for the rule system. It fixes the backend crashes that occur when using views/rules in the same session where they are created. The bug was that the new actions parsetrees ...
- I need to make some more updates to the libpq documentation, and I'm confused (again) about whether to work on libpq.sgml or libpq.3. The last I heard libpq.3 was the primary doco, but I find that ...
- Did I forget to send it out? Or was it lost? It seems it never made it into cvs. I don't like the idea of submitting the next one and it won't fit in. :-) Michael -- Dr. Michael Meskes ...
- What is a partial index? I have never known. -- Bruce Momjian | 830 Blythe Avenue maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026 + If your life is a hard drive, | (610) 353-9879(w) + ...
- Let's say we have a table with two entries: name nr ---------- foo 1 bar 2 and a C program doing the following: ... i=1; exec sql declare C cursor for select name from table where nr=:i; i=2; exec ...
- It appears to me that there is a bug in gram.y. From looking at the code (while working on ecpg) it seems that it doesn't accept a createdb with the option: with location = ... but without the option ...
- Hi, as proposed here comes the first patch for the query rewrite system. The tar archive contains mainly a little test suite that would fit into our regression tests. You'll find the patch itself ...
- As I mentioned before, I did some trials on my LinuxPPC box last week end. First I compiled with -O0. To my surprise, things got worse than - O2. Seems tas() for PPC (storage/buffer/s_lock.c) never ...
- It currently looks like all developers are on a platform where USE_LOCALE is defined. If it is not defined (e.g. on AIX) I get all sorts of Function argument assignment between types "unsigned char*" ...
- Hi, I managed to recreate a simple example that's crashing postgres. I am running on a DEC Alpha with Digital Unix Version 4.0d and Postgres 3.2. I tried this example several times, on several ...
- Is there a reason why anyone renamed struct index to struct RelationGetRelidindex in ecpg's type.h? I'm surprised it doesn't break the ecpg compilation procedure since the usage in preproc.y hasn't ...
- Hi hackers. I grabbed the latest CVS from this morning and did a clean build and initdb. Things look a little broken as I get a SIGSEGV when trying to create a table. Any idea what went wrong? Keith. ...
- Just tried to shutdown database (PostgreSQL) by killing postmaster and keeping established connections (backends) alive. This works fine - I still can access database using these connections (through ...
Group Navigation
Group Overview
Archives
- August 2013 (1,430)
- July 2013 (1,772)
- June 2013 (2,428)
- May 2013 (1,555)
- April 2013 (1,283)
- March 2013 (1,441)
- February 2013 (1,366)
- January 2013 (2,520)
- December 2012 (1,701)
- November 2012 (1,618)
- October 2012 (1,580)
- September 2012 (1,195)
- August 2012 (1,180)
- July 2012 (1,125)
- June 2012 (1,850)
- May 2012 (1,584)
- April 2012 (1,459)
- March 2012 (1,921)
- February 2012 (1,517)
- January 2012 (1,777)
- December 2011 (1,347)
- November 2011 (1,764)
- October 2011 (1,687)
- September 2011 (1,623)
- August 2011 (1,433)
- July 2011 (1,658)
- June 2011 (2,404)
- May 2011 (1,664)
- April 2011 (1,834)
- March 2011 (1,956)
- February 2011 (2,476)
- January 2011 (2,965)
- December 2010 (2,587)
- November 2010 (2,109)
- October 2010 (2,113)
- September 2010 (2,173)
- August 2010 (2,061)
- July 2010 (1,612)
- June 2010 (1,560)
- May 2010 (1,961)
- April 2010 (1,575)
- March 2010 (1,314)
- February 2010 (2,439)
- January 2010 (3,118)
- December 2009 (2,514)
- November 2009 (493)
- October 2009 (1,903)
- September 2009 (1,990)
- August 2009 (2,170)
- July 2009 (2,025)
- June 2009 (1,565)
- May 2009 (1,518)
- April 2009 (1,431)
- March 2009 (1,424)
- February 2009 (1,399)
- January 2009 (2,641)
- December 2008 (2,011)
- November 2008 (1,918)
- October 2008 (1,695)
- September 2008 (1,887)
- August 2008 (1,374)
- July 2008 (1,495)
- June 2008 (1,149)
- May 2008 (1,112)
- April 2008 (1,908)
- March 2008 (1,288)
- February 2008 (1,309)
- January 2008 (1,220)
- December 2007 (989)
- November 2007 (1,369)
- October 2007 (1,553)
- September 2007 (1,225)
- August 2007 (1,221)
- July 2007 (1,007)
- June 2007 (1,127)
- May 2007 (1,189)
- April 2007 (1,266)
- March 2007 (1,866)
- February 2007 (1,916)
- January 2007 (1,645)
- December 2006 (1,423)
- November 2006 (1,021)
- October 2006 (1,565)
- September 2006 (2,375)
- August 2006 (2,039)
- July 2006 (1,626)
- June 2006 (1,624)
- May 2006 (1,366)
- April 2006 (1,150)
- March 2006 (1,327)
- February 2006 (1,259)
- January 2006 (1,030)
- December 2005 (1,252)
- November 2005 (1,568)
- October 2005 (1,417)
- September 2005 (1,464)
- August 2005 (1,191)
- July 2005 (1,150)
- June 2005 (1,576)
- May 2005 (1,592)
- April 2005 (1,004)
- March 2005 (1,069)
- February 2005 (969)
- January 2005 (1,100)
- December 2004 (949)
- November 2004 (1,210)
- October 2004 (1,059)
- September 2004 (951)
- August 2004 (1,601)
- July 2004 (1,397)
- June 2004 (1,037)
- May 2004 (1,446)
- April 2004 (1,167)
- March 2004 (1,330)
- February 2004 (1,003)
- January 2004 (861)
- December 2003 (827)
- November 2003 (1,618)
- October 2003 (1,602)
- September 2003 (1,683)
- August 2003 (1,343)
- July 2003 (1,045)
- June 2003 (1,302)
- May 2003 (802)
- April 2003 (1,012)
- March 2003 (1,242)
- February 2003 (1,323)
- January 2003 (1,315)
- December 2002 (1,176)
- November 2002 (1,220)
- October 2002 (1,449)
- September 2002 (1,816)
- August 2002 (2,295)
- July 2002 (1,290)
- June 2002 (1,024)
- May 2002 (1,157)
- April 2002 (1,532)
- March 2002 (1,207)
- February 2002 (1,225)
- January 2002 (1,397)
- December 2001 (963)
- November 2001 (1,301)
- October 2001 (1,155)
- September 2001 (895)
- August 2001 (1,204)
- July 2001 (938)
- June 2001 (1,131)
- May 2001 (1,458)
- April 2001 (1,168)
- March 2001 (1,672)
- February 2001 (1,154)
- January 2001 (1,450)
- December 2000 (1,222)
- November 2000 (1,390)
- October 2000 (1,296)
- September 2000 (633)
- August 2000 (888)
- July 2000 (1,428)
- June 2000 (1,283)
- May 2000 (1,648)
- April 2000 (184)
- March 2000 (291)
- February 2000 (1,464)
- January 2000 (1,639)
- December 1999 (1,056)
- November 1999 (916)
- October 1999 (949)
- September 1999 (946)
- August 1999 (695)
- July 1999 (1,070)
- June 1999 (1,242)
- May 1999 (1,136)
- April 1999 (218)
- March 1999 (1,002)
- February 1999 (692)
- January 1999 (758)
- December 1998 (591)
- November 1998 (600)
- October 1998 (1,208)
- September 1998 (678)
- August 1998 (856)
- July 1998 (482)
- June 1998 (496)
- May 1998 (618)
- April 1998 (702)
- March 1998 (1,118)
- February 1998 (1,307)
- January 1998 (855)
- December 1997 (346)
- November 1997 (374)
- October 1997 (575)
- September 1997 (549)
- August 1997 (404)
- July 1997 (391)
- June 1997 (595)
- May 1997 (478)
- April 1997 (854)
- March 1997 (526)
- February 1997 (297)
- January 1997 (927)
- December 1996 (2)
- November 1996 (1)
- October 1996 (1)
- October 1995 (2)
- July 1995 (1) | https://grokbase.com/g/postgresql/pgsql-hackers/1998/08 | CC-MAIN-2022-05 | refinedweb | 2,587 | 72.26 |
Opened 3 years ago
Closed 2 years ago
#4491 closed bug (fixed)
dataToQa uses only unqualified names when converting values to their TH representation.
Description
The dataToQa function in Language.Haskell.TH.Quote always use unqualified names when converting a value to its TH representation, even if the names are not in scope in the TH splice where they are called.
I have attached a patch that makes things a bit better, but there are still a number of outstanding issues.
The problem is that the Data type class allows one to find the name of the module in which a data constructor is declared, but it does not allow one to find the name of the package in which the constructor is declared. On the other hand, TH lets you either create a qualified name that is resolved using the namespace in effect at the point of a TH splice, or create a fully resolved name if you know the package in which it is declared. The patch I have attached changes dataToQa to create all names as qualified names, but the resulting TH only compiles without error if the data constructors that are used are imported in such a way that they can be qualified with the name of the module in which they are declared. Examples that assume the new definition of dataToQa follow.
Assume the file A.hs exists with the contents:
{-# LANGUAGE DeriveDataTypeable #-} module A where import Data.Generics data Foo = Foo Int deriving (Show, Data, Typeable)
Now this program will run and print Foo 1:
{-# LANGUAGE TemplateHaskell #-} import Language.Haskell.TH import A main :: IO () main = print $(dataToExpQ (const Nothing) (Foo 1))
So will this program:
{-# LANGUAGE TemplateHaskell #-} import Language.Haskell.TH import qualified A main :: IO () main = print $(dataToExpQ (const Nothing) (A.Foo 1))
But this program will not compile:
{-# LANGUAGE TemplateHaskell #-} import Language.Haskell.TH import A as B main :: IO () main = print $(dataToExpQ (const Nothing) (Foo 1))
Instead it gives the following error:
Not in scope: data constructor `A.Foo' In the first argument of `print', namely `$(dataToExpQ (const Nothing) (Foo 1))' In the expression: print ($(dataToExpQ (const Nothing) (Foo 1))) In the definition of `main': main = print ($(dataToExpQ (const Nothing) (Foo 1)))
This is expected, since dataToQa creates TH qualified names but uses the resolved module name, A, as the qualifier.
The current (unpatched) version of dataToQa creates all TH names using mkName, i.e., it creates all names unqualified. I think always using qualified names instead is preferable. It may break existing code, but it gives the programmer better control (that is, some control!) over namespace pollution.
There are two better solutions.
The first would be to change the Data type class so that it exposes the package name as well as the module name in which data constructors are declared. This seems GHC-specific though and a bit of a hack.
I think the ideal solution is to add a smart TH name constructor that allows one to create a resolved name by specifying the resolved module name without requiring that the package name be specified. Of course if different packages define the same data constructor and they both happen to be in scope where the splice occurs, then there will be a static error, but I suspect this would only happen in a module that uses PackageImports, and even then it would be rare. This might require adding a member to the Q monad that allows "resolving" names or something, which I'm sure is non-trivial. I would be willing to work on a patch if given a few hints on how to approach the problem and assuming there's a willingness to adopt such a patch.
Attachments (1)
Change History (8)
Changed 3 years ago by gmainland
comment:1 Changed 3 years ago by simonpj
Interesting. What is a "resolved module name"? And how would you get hold of the resolved module name for the constructor without changing Data?
I'd certainly entertain a patch. As you say, changing Data would be a fairly big deal, so if you can figure out a way to solve the problem without doing so, that'd be much easier. I'd totally forgotten about dataToQa and friends, and had to go back to your paper to figure out what they do. (Even then I'm shaky. Could you add some more explicit Haddock comments?) So I doubt they are used a lot, and changes there would be easier.
Simon
comment:2 Changed 3 years ago by gmainland
Sorry for the muddled terminology. When I wrote "resolved module name" I meant the moduleName of the Module component of the NameSort of a Name (whew...). An "unresolved module name" would be the ModuleName component of a qualified RdrName. Perhaps "renamed module name" and "qualifier" are better terms? What is the correct terminology?
Given an x that is an instance of Data, (showConstr . toConstr) x is the (unqualified) name of the data constructor used to construct x and (tyconModule . dataTypeName . dataTypeOf) x is the name of the module in which the corresponding type constructor is defined. Both are (newtype'd) strings. I would like to use these to get Template Haskell to create an original name, not a qualified name. The module defining the constructor may not be in scope, or it could be in scope but aliased due to an "as" import, so creating a qualified name is not the right thing to do.
Two things that might get us where I want to be:
- Add a constructor to Language.Haskell.TH.Syntax.NameFlavour, NameO ModName, that represents an original name.
- In GHC, have a way to convert a ModuleName to a Module, i.e., find the package that defines the ModuleName.
Then when converting TH, GHC could apply (2) to (1) and use the Orig data constructor to create a RdrName.
Wired-in names would still have to be handled separately (the attached patch does this more-or-less correctly, I think).
comment:3 Changed 3 years ago by igloo
- Milestone set to 7.2.1
comment:4 Changed 3 years ago by simonpj
Hang on. The NameFlavour type already has a NameG constructor that lets you construct an original name. You supply the module name and package and away you go. It may not be exported properly, but isn't that what you need?
comment:5 Changed 3 years ago by gmainland
Yes, that along with (2) from above is what I need---I meant (1) and (2) to be mutually exclusive, sorry. The difficulty is getting the package name from just a module name. I have a small patch to HEAD that adds a qModPackage member of type ModName -> m PkgName to the Quasi monad and an implementation in TcSplice.lhs. I think that solves my problem, but I haven't fully tested the patch.
comment:6 Changed 3 years ago by nicolas.frisby
Has anyone proposed something along the lines of
Data.Data.toConstrTH :: a -> Language.Haskell.TH.Name
? I've simulated this with SYB3 (and corresponding copy-tweak-and-paste of dataToQa) in a project and it worked pretty nicely. Coupling Data.Data and TH like so does seem a bit heavy handed…
comment:7 Changed 2 years ago by gmainland
- Resolution set to fixed
- Status changed from new to closed
patch to Language.Haskell.TH.Quote | https://ghc.haskell.org/trac/ghc/ticket/4491 | CC-MAIN-2014-15 | refinedweb | 1,232 | 62.38 |
Protect your Flask applications using CrowdSec
At CrowdSec we want our users to protect themselves regardless of the tech stack they use. The simplest way to do that is to implement threat remediation at the network level, with a firewall bouncer. CrowdSec bouncers can also be set up at the upper levels of an applicative stack: web server, CDN, and in the case we are looking at here, the business logic of the main application itself.
In this post, we’re going to learn how web applications developed using Python can be protected by CrowdSec at the application level.
Remedying directly in your application can be helpful for various reasons:
- It allows you to provide a business-logic answer to potential security threats
- It gives you a lot of flexibility about what and how to do when a security issue arises
We are going to deploy a Python bouncer which will integrate with a flask application. This application will then be able to apply captcha and ban remediations to the IPs suggested by CrowdSec. A reference flask app protected by CrowdSec is available here.
Before we begin, here are the prerequisites:
Prerequisites
In the following steps, we would be creating a CrowdSec client and a Flask middleware. This middleware would be registered with your flask app. For every incoming request, the middleware will take any action(ban, captcha) if CrowdSec has a decision against the IP.
Creating CrowdSec Client in flask app:
We will first create a client which polls CrowdSec to keep track of the latest (IP, remediation) pairs. This client is provided by the ‘pycrowdsec’ library.
pip install pycrowdsec # install pycrowdsec
Then in your application code before you create the flask app object, instantiate the client via
sudo cscli bouncers add flaskBouncer
Creating the ban view:
We will create a view where all the IPs which are suggested to be banned by CrowdSec will be redirected to. They won’t be able to access your web app.
from flask import abort @app.route("/ban") def ban_page(): return abort(403)
Creating the Captcha view:
IPs that are suggested to get captcha by CrowdSec will need to be:
- Redirected to captcha view if they haven’t solved captcha very recently
- Solve captcha correctly
- Redirected back to the original view they were trying to access.
We will be using Google’s reCaptcha to provide and verify the captcha. So this would be a lot simpler.
First, create an HTML template to render the captcha. Let’s name it “captcha_page.html”.
<html> <head> <title>reCAPTCHA</title> <script src="" async defer></script> </head> <body> <form action="" method="POST"> <div class="g-recaptcha" data-</div> <br/> <input type="submit" value="Submit"> </form> </body> </html>
from flask import request, render_template, session, redirect, url_for, abort def validate_captcha_resp(g_recaptcha_response): “”” Helper function which returns True if solved captcha is correct “”” resp = requests.post( url="", data={ "secret": "GOOGLE_RECAPTCHA_PRIVATE_KEY" "response": g_recaptcha_response, }, ).json() return resp["success"] Valid_captcha_keys = {} @app.route("/captcha", methods=["GET", "POST"]) def captcha_page(): if request.method == "GET": return render_template( "captcha_page.html", public_key="GOOGLE_RECAPTCHA_SITE_KEY" ) elif request.method == "POST": captcha_resp = request.form.get("g-recaptcha-response") if not captcha_resp: return redirect(url_for("captcha_page")) is_valid = validate_captcha_resp(captcha_resp) if not is_valid: return redirect(url_for("captcha_page")) session["captcha_resp"] = captcha_resp valid_captcha_keys[captcha_resp] = None return redirect(url_for("index")) # Replace “GOOGLE_RECAPTCHA_PRIVATE_KEY” and “GOOGLE_RECAPTCHA_SITE_KEY” with your own keys.
Registering CrowdSec middleware:
Finally, we create a middleware to combine the work of the previous steps. This middleware, again, is provided by the `pycrowdsec` library.
from pycrowdsec.flask import get_crowdsec_middleware actions = { "ban": lambda: redirect(url_for("ban_page")), "captcha": lambda: redirect(url_for("captcha_page")) if session.get("captcha_resp") not in valid_captcha_keys else None, } # app here is the flask app object. app.before_request( get_crowdsec_middleware(actions, crowdsec_client.cache, exclude_views=["captcha_page", "ban_page"]) )
Now test it!
Testing ban:
Let’s ban some IPs that you have access to.
sudo cscli decisions add –ip <YOUR_IP>
Try accessing the flask app from this IP, you should be redirected to 403 view.
Testing Captcha:
Let’s first unban our IP and then add a decision to captcha the IP
sudo cscli decisions delete –ip <YOUR_IP> sudo cscl decisions add –ip <YOUR_IP> –type captcha
Try accessing the flask app from this IP, you should be redirected to captcha view.
After solving the captcha you’ll be redirected to the original view.
Conclusion
In summary, we added CrowdSec’s protection to our flask app. This was done by integrating a middleware that did the work of checking if the IP is malevolent and then taking appropriate action against it.
If you have any ideas, feedback, or suggestions, feel free to contact us using our community channels (Gitter and Discourse) | https://crowdsec.net/blog/protect-your-flask-applications-using-crowdsec/ | CC-MAIN-2022-05 | refinedweb | 767 | 53.92 |
Often you may wish to convert one or more columns in a pandas DataFrame to strings. Fortunately this is easy to do using the built-in pandas astype(str) function.
This tutorial shows several examples of how to use this function.
Example 1: Convert a Single DataFrame Column to String
Suppose we have the following pandas DataFrame:
import pandas as pd #create DataFrame df = pd.DataFrame({'player': ['A', 'B', 'C', 'D', 'E'], 'points': [25, 20, 14, 16, 27], 'assists': [5, 7, 7, 8, 11]}) #view DataFrame df player points assists 0 A 25 5 1 B 20 7 2 C 14 7 3 D 16 8 4 E 27 11
We can identify the data type of each column by using dtypes:
df.dtypes player object points int64 assists int64 dtype: object
We can see that the column “player” is a string while the other two columns “points” and “assists” are integers.
We can convert the column “points” to a string by simply using astype(str) as follows:
df['points'] = df['points'].astype(str)
We can verify that this column is now a string by once again using dtypes:
df.dtypes player object points object assists int64 dtype: object
Example 2: Convert Multiple DataFrame Columns to Strings
We can convert both columns “points” and “assists” to strings by using the following syntax:
df[['points', 'assists']] = df[['points', 'assists']].astype(str)
And once again we can verify that they’re strings by using dtypes:
df.dtypes player object points object assists object dtype: object
Example 3: Convert an Entire DataFrame to Strings
Lastly, we can convert every column in a DataFrame to strings by using the following syntax:
#convert every column to strings df = df.astype(str) #check data type of each column df.dtypes player object points object assists object dtype: object
You can find the complete documentation for the astype() function here. | https://www.statology.org/pandas-to-string/ | CC-MAIN-2022-21 | refinedweb | 313 | 57.1 |
No module named Shell
I have a vanilla SimpleCV 1.2 installation (from the superpack) on Win7. The only mod I've made to the default configuration is that I've installed ipython 0.12 in order to get Web notebooks.
When I run the code that is in the startup script
python -m SimpleCV.__init__
it complains that there is no module named Shell in build\bdist.win32\SimpleCV\Shell\Shell.py.
If, on the other hand, I try running the simple sample application from the first tutorial (in a Web notebook, BTW)
import SimpleCV camera = SimpleCV.Camera() image = camera.getImage() image.show()
it works just fine.
Can anyone help with the missing Shell mystery?
Mystery solved, sorta. I installed the current version of SimpleCV from github, and now there is no complaint about a shell. (And camera.live() works, too!) | http://help.simplecv.org/question/429/no-module-named-shell/ | CC-MAIN-2019-22 | refinedweb | 142 | 69.99 |
From: John Britton (johnb_at_[hidden])
Date: 2000-10-05 16:40:34
> If one takes libs/graph/examples/connected_components.cpp, and
> encloses the main() function in a namespace, the VC++ compiler fails...
This problem is driving me nuts, and I've discovered it to be more insidious
than simply having, or not having, a namespace. If one simply edits a
pristine libs/graph/examples/connected_components.cpp and changes
int num = connected_components(G, &c[0], get(vertex_color, G),
dfs_visitor<>() );
to
int num = connected_components(G, &c[0] );
the same VC++ error "C1001: INTERNAL COMPILER ERROR" is reported.
When examining the code, I was wondering why the first version of
connected_components() was used, instead of the second one above, which if I
understand things correctly, should be equivalent. Was it to avoid this VC++
bug?
Not having fun today, JohnB
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/10/5522.php | CC-MAIN-2021-31 | refinedweb | 159 | 50.63 |
This project is designed to help you get started with Amazon AWS RoboMaker and the Qualcomm® Robotics RB3 Development Kit. The robotics platform is based on the Qualcomm® Snapdragon™ 845 mobile platform from Qualcomm Technologies, so you may see the kit referenced as Qualcomm® SDA845 in some sections. The project will walk you through the following steps:
- Build and simulate a robot application in AWS cloud
- Deploy the robot application to the Qualcomm Robotics RB3 development kit through Greengrass
- Run the deployed robot application on the Qualcomm Robotics RB3 development kit
Materials Required / Parts List / Tools
Build/Assembly Instructions
Here are a few things to keep in mind and test before you start. Make sure that Wi-Fi on the target (the Qualcomm Robotics RB3 Development Kit) can access the AWS website. After deployment, the ROS master and the robot application run inside the Docker container, while the sensor node, movebase packages, and Kobuki packages run outside the Docker container (directly on the target). Launch the movebase/Kobuki packages only after the ROS master is running successfully inside the Docker container.
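Since the movebase/Kobuki packages must only be launched after the ROS master inside the Docker container is up, a quick reachability check can save some head-scratching. The sketch below assumes the default ROS master port (11311); run it on the target before launching anything outside the container:

```python
import socket

def ros_master_reachable(host="localhost", port=11311, timeout=2.0):
    """Return True if something is listening on the ROS master port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if ros_master_reachable():
        print("ROS master is up -- safe to launch movebase/Kobuki on the target")
    else:
        print("ROS master not reachable yet -- wait for the Docker container")
```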
Build and simulate a robot application in AWS cloud
Hello World
The “HelloWorld” example is designed to help you understand some basic concepts of ROS and the AWS cloud, such as S3 buckets and the deployment process. You don’t need to change any code at this stage. To restart the Hello World simulation application, you only need to repeat steps 1 and 2 below.
- Create an AWS account.
- Run the example Hello World simulation job.
- Create a development environment and a Cloud9 workspace. This will create a Virtual Private Cloud (VPC) and a workspace in that VPC.
- Run the HelloWorld app in your workspace.
- Deploy the robot app to the target.
More details for this step can be found in section 2 of the instructions.
Other Examples
After the HelloWorld example, we recommend that you try the other examples in RoboMaker to deepen your understanding and improve your skills.
Please keep in mind that each example has a corresponding simulation job. Run a new simulation job, and choose from one of the examples as seen below. We recommend “Robot Monitoring” at this stage as it is based on movebase (navigation stack).
- Download the source code of example you choose.
- Start from the “Modify and Build Applications” section, because you have already created an environment while running the HelloWorld example.
- Download the source code that corresponds to the simulation job you choose.
- Run this new simulation.
Create your own
Once you are familiar with the AWS RoboMaker examples, it’s time to create your own application workspace and simulation job, and deploy your own robot application.
- You can utilize some code samples from existing examples, for example the movebase demo.
Build and bundle your application.
Useful Tips:
The last section, Create a Simulation Job, can be done from the RoboMaker console.
For the rest of the steps, please follow the guide in your Cloud9 command prompt.
After the simulation job is created successfully, you will see it in the Running state. In case of failure, you can check the log to troubleshoot.
You can create a reference workspace based on movebase by the following steps.
In the AWS simulation environment, a workspace includes a robot application and a simulation application. For the simulation application, you can reuse the Robot Monitoring example: copy the whole folder of that simulation application into your new simulation app and remove the aws_robomaker_simulation_common package.
Save the Python script below as a reference robot application that sends navigation goals to movebase. For the folder tree, please refer to an existing AWS example.
After deployment, the robot application runs on the SDA845 target inside the Docker container, and movebase runs outside the Docker container on the SDA845. Please refer to the last section of this guide to launch movebase and Kobuki (the real robot).
The Python script below is a reference for your robot application.
Deploy your own application.
Details for deploying your application are described below.
#!/usr/bin/env python

import rospy
import actionlib
from actionlib_msgs.msg import *
from geometry_msgs.msg import Pose, Point, Quaternion, Twist
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal


class MoveBaseTest():
    def __init__(self):
        rospy.init_node('nav_test', anonymous=False)
        rospy.on_shutdown(self.shutdown)

        #p1 = Point(-1.04219532013, 5.23599052429, 0.0)
        p1 = Point(-1.04219532013, 2.23599052429, 0.0)
        q1 = Quaternion(0.0, 0.0, -0.573064998815, 0.819509918874)
        p2 = Point(1.64250051975, 1.58413732052, 0.0)
        q2 = Quaternion(0.0, 0.0, -0.0192202632229, 0.999815273679)
        p3 = Point(5.10259008408, 0.883781552315, 0.0)
        q3 = Quaternion(0.0, 0.0, -0.455630867938, 0.890168811059)
        p4 = Point(6.15312242508, -6.41992664337, 0.0)
        q4 = Quaternion(0.0, 0.0, 0.999290790059, -0.037655237394)
        p5 = Point(1.73421287537, -5.13594055176, 0.0)
        q5 = Quaternion(0.0, 0.0, 0.718415199022, 0.695614549743)
        p6 = Point(-3.83528089523, -5.31936645508, 0.0)
        q6 = Quaternion(0.0, 0.0, 0.701646950739, 0.712524776073)

        quaternions = list()
        quaternions.append(q1)
        quaternions.append(q2)
        quaternions.append(q3)
        #quaternions.append(q4)
        #quaternions.append(q5)
        #quaternions.append(q6)

        points = list()
        points.append(p1)
        points.append(p2)
        points.append(p3)
        #points.append(p4)
        #points.append(p5)
        #points.append(p6)

        goals = list()
        goals.append(Pose(points[0], quaternions[0]))
        goals.append(Pose(points[1], quaternions[1]))
        goals.append(Pose(points[2], quaternions[2]))
        #goals.append(Pose(points[3], quaternions[3]))
        #goals.append(Pose(points[4], quaternions[4]))
        #goals.append(Pose(points[5], quaternions[5]))

        rospy.loginfo("*** started navi test")

        # Publisher to manually control the robot (e.g. to stop it)
        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist, queue_size=5)

        self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)
        self.move_base.wait_for_server()
        rospy.loginfo("Connected to move base server")
        rospy.loginfo("Starting navigation test")

        # Initialize a counter to track goals
        i = 0
        while not rospy.is_shutdown():
            # Initialize the waypoint goal
            goal = MoveBaseGoal()
            goal.target_pose.header.frame_id = 'map'
            goal.target_pose.header.stamp = rospy.Time.now()
            goal.target_pose.pose = goals[i % len(goals)]
            # Move toward the goal
            self.move(goal)
            i += 1

    def move(self, goal):
        # Send the goal pose to the MoveBaseAction server
        self.move_base.send_goal(goal)
        # Allow 1 minute to get there
        finished_within_time = self.move_base.wait_for_result(rospy.Duration(60))
        # If we don't get there in time, abort the goal
        if not finished_within_time:
            self.move_base.cancel_goal()
            rospy.loginfo("Timed out achieving goal")
        else:
            if self.move_base.get_result():
                rospy.loginfo("Goal done: %s", goal)

    def shutdown(self):
        rospy.loginfo("Stopping the robot...")
        # Cancel any active goals
        self.move_base.cancel_goal()
        rospy.sleep(2)
        # Stop the robot
        self.cmd_vel_pub.publish(Twist())
        rospy.sleep(1)


if __name__ == '__main__':
    try:
        MoveBaseTest()
    except rospy.ROSInterruptException:
        rospy.loginfo("Navigation test finished.")
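The quaternions hard-coded in the script above encode pure yaw (heading) rotations about the z axis. If you want to define your own waypoints without copying magic numbers, the (z, w) components can be computed from a yaw angle with plain math — a small helper sketch, no ROS required:

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a heading angle (radians, about the z axis) to (x, y, z, w)."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def quaternion_to_yaw(z, w):
    """Recover the heading angle from the z/w components of a yaw-only quaternion."""
    return 2.0 * math.atan2(z, w)

if __name__ == "__main__":
    # The heading encoded by q1 in the script above: roughly -70 degrees
    print(math.degrees(quaternion_to_yaw(-0.573064998815, 0.819509918874)))
```

The tuple returned by `yaw_to_quaternion` maps directly onto the four arguments of the `Quaternion(x, y, z, w)` constructor used in the script.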
Deploy the robot application to the RB3 development kit through AWS IoT Greengrass
Refer to the AWS Greengrass official guide for the latest getting started instructions.
Open the IAM page above and select “Policies” ---> “Create policy”
Choose “Greengrass”
Enter the policy info in the “JSON” tab: copy the JSON code below and modify the S3 bucket info to match your bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "robomaker:UpdateRobotDeployment"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": ["arn:aws:s3:::my-robot-application-source-bucket/*"]
        }
    ]
}
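If you edit the bucket ARN by hand, it is easy to break the JSON and have the console reject it. A quick sanity check before pasting — the helper below is just an illustration that parses the document and lists the allowed actions:

```python
import json

def check_iam_policy(text):
    """Parse an IAM policy document; return the list of allowed actions."""
    policy = json.loads(text)  # raises ValueError if the JSON is malformed
    actions = []
    for stmt in policy.get("Statement", []):
        act = stmt.get("Action", [])
        actions.extend([act] if isinstance(act, str) else list(act))
    return actions

POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["robomaker:UpdateRobotDeployment"],
     "Resource": "*"},
    {"Effect": "Allow",
     "Action": ["s3:List*", "s3:Get*"],
     "Resource": ["arn:aws:s3:::my-robot-application-source-bucket/*"]}
  ]
}
"""

print(check_iam_policy(POLICY))  # -> ['robomaker:UpdateRobotDeployment', 's3:List*', 's3:Get*']
```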
Enter your own policy name and then select “Create policy”
Open the IAM page below and select “Roles” ---> “Create role”
Choose “Greengrass”
Select the policies below and then select “Next”
AWSGreengrassResourceAccessRolePolicy
SZ_IOE_POLICY
The “Add tags” page is optional; skip it by selecting “Next”
Enter your IAM role name and create the role.
Edit the trust relationship, and copy in the JSON settings shown below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "greengrass.amazonaws.com",
                    "lambda.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Create AWS IoT Greengrass Group
Open the page below and select “Create Group”
Select “Use easy creation”
Specify a Group name and then click “Next”
Specify a Greengrass Core name and then click “Next”
Select “Create Group and Core”
Download your security resources as shown in the picture below, and select “Finish”
*** This is your only chance to download the security resources.
*** Downloaded security keys will be used in the next step.
Attach the IAM role to the Greengrass Group
Congratulations! You have successfully created the IAM policy and role, and created a Greengrass group for RoboMaker. Next, let’s look at how you can run the Greengrass Core on the RB3 development kit.
Run GG-Core in RB3 development kit Docker
Prerequisites and launching the Docker service
Follow the steps below to connect the development kit to the internet.
Use the steps below to enable WLAN and DHCP.
$ insmod /usr/lib/modules/4.9.103/extra/wlan.ko
$ ifconfig wlan0 up
$ wpa_supplicant -iwlan0 -Dnl80211 -c /data/misc/wifi/wpa_supplicant.conf -O /data/misc/wifi/sockets &
$ /usr/sbin/dhcpcd wlan0 -t 0 -o domain_name_servers --noipv4ll -h -b &
$ wpa_cli -iwlan0 -p /data/misc/wifi/sockets
$ add_network
$ set_network 0 ssid "Your SSID"
$ set_network 0 psk "SSID Password"
$ enable_network 0
Ping a website to make sure the WLAN network is up.
Run chronyd, and make sure the system time is correct.
Resolve the host name “sda845” to “127.0.0.1” by adding the line below to /etc/hosts:
127.0.0.1    sda845
Create a work directory on the target
$ mkdir -p /greengrass/certs
$ mkdir -p /greengrass/config
Push the files listed below to the /greengrass directory
arm32v7-ubuntu-18.04-aws-iot-greengrass.tar
your-security-file.tar.gz
Copy the content from the page below and save it as /greengrass/certs/root.ca.pem
Decompress the security resources file
$ tar xzvf your-security-file.tar.gz -C /greengrass
Launch the Docker service
$ systemctl start docker
Pro Tip: You can check that Docker is running with the command “ps -ef | grep docker”
Load the Docker image
$ docker load -i arm32v7-ubuntu-18.04-aws-iot-greengrass.tar
Pro Tip: Run the command “docker images” to see the Docker images already installed on your system
Environment setup is now done; proceed to run the Greengrass Group core on the target:
$ docker run --rm -it --name aws-iot-greengrass --entrypoint /greengrass-entrypoint.sh -v /greengrass/certs:/greengrass/certs -v /greengrass/config:/greengrass/config -v /greengrass/log:/greengrass/ggc/var/log -p 8883:8883 armv7l-ubuntu18.04/test-aws-iot-greengrass:1.8.0
Pro Tip: Press “CTRL+P+Q” keys to detach docker, it’s running in the background now!
Check docker status
$ docker ps
Check Greengrass Group core log
A sample log seen below indicates that your Greengrass Group core successfully connected.
$ tail -F /greengrass/log/system/runtime.log
[2019-04-18T04:23:20.122Z][INFO]-Started Deployment Agent and listening for updates
[2019-04-18T04:23:20.122Z][INFO]-Started Deployment Agent and listening for updates
[2019-04-18T04:23:20.122Z][INFO]-MQTT connection connected. Start subscribing: clientId: SZ_IOE_GROUP_Core
[2019-04-18T04:23:20.122Z][INFO]-Deployment agent connected to cloud
[2019-04-18T04:23:20.123Z][INFO]-Start subscribing 2 topics, clientId: SZ_IOE_GROUP_Core
[2019-04-18T04:23:20.123Z][INFO]-Trying to subscribe to topic $aws/things/SZ_IOE_GROUP_Core-gda/shadow/update/delta
[2019-04-18T04:23:20.806Z][INFO]-Subscribed to : $aws/things/SZ_IOE_GROUP_Core-gda/shadow/update/delta
[2019-04-18T04:23:20.806Z][INFO]-Trying to subscribe to topic $aws/things/SZ_IOE_GROUP_Core-gda/shadow/get/accepted
[2019-04-18T04:23:21.307Z][INFO]-Subscribed to : $aws/things/SZ_IOE_GROUP_Core-gda/shadow/get/accepted
[2019-04-18T04:23:21.789Z][INFO]-All topics subscribed, clientId: SZ_IOE_GROUP_Core
Kill the container (stop the Greengrass core):
$ docker kill <ggc container-id> ## get container id by docker ps
Pro Tip: If you do not kill the container now, you will encounter a Greengrass Group core crash issue in the next step.
Create robot application
Follow the steps above to create your own application. While creating the application, be sure to select the correct AWS region (us-east-1, us-west-2, etc.).
Configure your robot app
On the “Development” – “Robot applications” page, select your application and click the “Actions” button. Enter your robot app’s S3 address in the “ARM64 source file” field.
You can get this info from page
Inside “Development” – “Robot applicants”, click your app name, and then select “create new version”
“Fleet management” – “Robots” – “Create robot”
“Fleet management” – “Fleets” – “Create fleet”
Click your fleet name inside “Fleets” page, then click “Register new” button and register your robot.
Inside “Fleet management” – “Deployment” – “Create deployment”, configure your robot app info, and then click “Create”.
Deploy lambda (robot app) to target
Log into the Greengrass console and navigate to the Group hub.
Here you can see:
A lambda function is added to the robot application that was created.
Group status is “In progress”
Select “Action” -- “reset deployment” to reset the status, because we need some additional configuration.
On the “Settings” page, set “Lambda function containerization” to “No container”. This is an important step before you can deploy the Lambda; otherwise the Greengrass Group core will crash.
Run Greengrass Group core on target
$ docker run --rm -it --name aws-iot-greengrass --entrypoint /greengrass-entrypoint.sh -v /greengrass/certs:/greengrass/certs -v /greengrass/config:/greengrass/config -v /greengrass/log:/greengrass/ggc/var/log --network host armv7l-ubuntu18.04/test-aws-iot-greengrass:1.8.0
Deploy
Congratulations! You have successfully deployed the robot application to RB3 development kit through AWS IoT Greengrass.
Usage Instructions
Run the deployed robot application on RB3 development kit
The robot application’s ROS node will run along with the ROS master inside the Docker container once the deployment is finished. You need to run the Kobuki ROS package or other ROS packages (for example, movebase) after the ROS master is running. Before you run these packages, you need to set up the devices. Here is a script to help you with easy setup.
#! /bin/sh
# hack the kobuki_node minimal.launch first: remap odom to wheel_odom
# hack the /etc/ros_8009.bash: set the ROS_IP, ROS_HOSTNAME and
# ROS_MASTER_URI with the IP address directly; 'localhost' does not work
source /etc/ros_845.bash
roslaunch /opt/ros/indigo/share/kobuki_node/launch/minimal.launch &
sleep 5
roslaunch /data/pathplan/launch/movebase_845.launch
Setup the ROS env:
Copy the script to the RB3 kit.
adb push launch_movebase.sh /home
adb shell
Edit the ROS environment to change the IP address
vi /opt/ros/indigo/share/ros_env.bash
Set IP address as seen below
export ROS_MASTER_URI=
export ROS_IP=192.168.1.102
export ROS_HOSTNAME=192.168.1.102
Switch to home directory
cd /home
Launch!
$ ./launch_movebase.sh
Congratulations! You are now up and running with RoboMaker on the RB3 development kit. The “Hello World” example is designed to make the Kobuki base rotate in place. The reference application is designed to make the Kobuki base move. We cannot wait to see how you use these powerful platforms; you can share your projects with us here.
On Mon, Aug 26, 2013 at 6:36 AM, Hartmut Goebel <h.goebel at crazy-compilers.com> wrote: > Hi, > > I'm one of the developers of, a tool for creating > stand-alone executables. > > We need to reliable detect if a package is a namespace package (nspkg). > For each namespace, we need to add an empty fake-module into our > executable to keep the import mechanism working. This has to work in all > versions of Python starting with 2.4. > > nspkgs set up via a nspkg.pth-file are detected by being in sys.modules, > but imp.find_module() files. > > For nspkgs using __init__.py-files (which use > pkg_resources.declare_namespace() or pkgutil.extend_path()) I have no > clue how to detect them. > > I tried to query meta-information using pkgresources, but I did not find > a solution. > > Any help? Setuptools package metadata includes a namespace_packages.txt file with this information: This won't help you with PEP 420 namespace packages (3.3+), unless someone declares them, and likewise it won't help if somebody uses the dynamic APIs without any declaration. But at least it'll give you the declared ones. | https://mail.python.org/pipermail/distutils-sig/2013-August/022486.html | CC-MAIN-2014-15 | refinedweb | 188 | 70.6 |
On 10 August 2017 at 01:51, Dave Warren <da...@hireahit.com> wrote: > On 2017-08-09 16:53, Seth David Schoen wrote: > > Notably, it doesn't apply to certificate authorities that only issue DV >> certificates, because nobody at the time found a consensus about how to >> validate control over these domain names. >> > > I don't completely understand this, since outside the Tor world it's > possible to acquire DV certificates using verification performed on > unencrypted (HTTP) channels. >
I can explain this. I don't agree with it, please don't argue with me, it was a CA/B-forum argument, I am not a member of CA/B-forum, please don't blame me, etc... Also: the argument is gonna be redundant real soon, so there's no point in kicking a dead whale along the beach.

Seth has not quite framed the issue properly. The CA/B-forum argument against issuing DV SSL certificates to 80-bit onions goes like this:

- SHA1 is bad, m'kay?
- And onion addresses are truncated SHA1
- So maybe someone could brute-force a collision, using bad SHA1, to generate their own "facebookcorewwwi" onion certificate?
- And the thing about DV certificates is that they can be validated via a simple HTTPS request loopback, m'kay? (eg: LetsEncrypt)
- So someone generates their own Facebook onion certificate, sets up an onion site, and requests and receives a DV certificate via some automated process
- And ZOMG this means that SSL will no longer be perceived as the snow-white, unimpeachable source of trust that it currently is
- Therefore: force onions to use the EV process so that the SSL issuer *IS REALLY SURE* that it is Facebook who is asking for the certificate, not some SHA1-hacker
- And please: nobody point out that the equivalent problem in the DNS namespace means that the entirety of SSL's trustworthiness is (in truth) slaved to the ability to revoke a DNS record when someone sets up a fake site.

That's it. All of it. Put sarcastically but accurately. There's no point in arguing about it, as geeks so often enjoy. It's over, we can move on, and - as Seth rightly points out - with Prop224 the root of this argument (the SHA1 dependence) simply vanishes, taking the entire rest of the house of cards with it.

> Wouldn't the same be possible for a .onion, simply requiring that the
> verification service act as a Tor client? This would be at least as good,
> given that Tor adds a bit of encryption.

Like I say: it's past, we should all move on and be grateful for having got here at all.
I know I am, and that I never want to have to deal with the above argument ever again.

-a
Mudflap
If your binary is instrumented with Mudflap, you can't run Memory Analysis on it because there will be a conflict (trying to overload the same functions), and it will cause the program to crash.
For QNX and Managed projects that have multithreaded applications, you'll need to use the -fmudflapth option for the compiler.
The use of Mudflap requires GCC with Mudflap support. This means that you'll need GCC 4.x built with Mudflap enabled, and you'll need to set appropriate configuration settings (see Configuring Mudflap to find errors). Once configured, the IDE adds options to the Makefile: -fmudflapth to LD_SEARCH_FLAGS and -fmudflapth to CFLAGS.
Many runtime errors in C and C++ are caused by pointer errors. The most common reason for this type of error is that you've incorrectly initialized or calculated a pointer value and attempted to use this invalid pointer to reference some data. Since all pointer errors might not be identified and dealt with at runtime, you might encounter a situation where you go over by one byte (off-by-one error), which might run over some stack space, or write into the memory space of another variable. You don't always detect these types of errors because in your testing, they don't typically cause anything to crash, or they don't overwrite anything significant. An off-by-one error might become an off-by-1000 error, and could result in a buffer overflow or a bad pointer dereference, which may crash your program, or provide a window of opportunity for code injection.
Mudflap adds another pass to GCC's compiler sequence to add instrumentation code to the resulting binary that encapsulates potentially dangerous pointer operations. In addition, Mudflap keeps a database of memory objects to evaluate any pointer operation against a known list of valid objects. At runtime, if any of these instrumented pointer operations is invalid or causes a failure, then a violation is emitted to the stderr output for the process. The violation specifies where the error occurred in the code, as well as what objects were involved.
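Outside the IDE, the same instrumentation can be invoked directly from the command line. The commands below are a sketch using the standard GCC Mudflap flags (they assume a GCC 4.x toolchain built with Mudflap support; the source file name is an example):

```shell
# Compile and link with Mudflap instrumentation (single-threaded variant):
gcc -fmudflap -g -o demo demo.c -lmudflap

# For multithreaded code, use the threaded variant instead:
gcc -fmudflapth -g -o demo demo.c -lmudflapth

# Runtime behavior is controlled through the MUDFLAP_OPTIONS environment
# variable, e.g. report leaks at exit and keep running after a violation:
MUDFLAP_OPTIONS="-print-leaks -viol-nop" ./demo
```

The "Leaked object" entries in the sample output below come from this leak-reporting option.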
You don't have to use Telnet or a serial terminal window to obtain output from Mudflap. Although it is available from the Command line, you can choose to monitor the stdout or use it directly from within the IDE.
The IDE also includes a build integration that lets you select Mudflap as one of the build variant build options.
The IDE includes a QNX launch tool that parses Mudflap errors from the target (such as a buffer overflow on the stack or heap, or a bad pointer dereference), and the errors are displayed in a similar manner to those of the Memory Analysis tool. For example, during the Mudflap launch, the IDE creates a Mudflap session, and then you can select an item to view the errors in the source code.
#include <stdlib.h>
#include <stdio.h>

void funcLeaks(void);
char funcError(void);

int main(int argc, char *argv[])
{
    char charR;

    funcLeaks();
    charR = funcError();

    return EXIT_SUCCESS;
}

void funcLeaks()
{
    float *ptrFloat = (float*)malloc(333 * sizeof(float));

    if (ptrFloat==NULL) {
        // memory could not be allocated
    } else {
        // do something with memory but don't
        // forget to free and NULL the pointer
    }
}

char funcError()
{
    char charA[10];
    int i;

    for(i=0; i<10; i++)
        charA[i] = 'A';

    return charA[11];
}
******* mudflap violation 1 (check/read): time=1255022555.391940
ptr=0x8047e72 size=12 pc=0xb8207c0b
location=`C:/worksp_IDE47/z_x/z_x.c:35:2 (funcError)' thread=1
libmudflapth.so.0(__mfu_check+0x599) [0xb8207b8d]
libmudflapth.so.0(__mf_check+0x3e) [0xb8207c06]
z_x_g(funcError+0x10c) [0x804922d]
z_x_g(main+0xe) [0x80490fa]
Nearby object 1: checked region begins 0B into and ends 2B after
mudflap object 0x80d5910: name=`C:/worksp_IDE47/z_x/z_x.c:29:7 (funcError) charA'
bounds=[0x8047e72,0x8047e7b] size=10 area=stack check=3r/1w liveness=4
alloc time=1255022555.391940 pc=0xb82073d7 thread=1
number of nearby objects: 1
Leaked object 1:
mudflap object 0x80d5290: name=`malloc region'
bounds=[0x80d5248,0x80d525b] size=20 area=heap check=0r/0w liveness=0
alloc time=1255022555.3879]
libc.so.3(dlopen+0x15f3) [0xb0343fe3]
Leaked object 2:
mudflap object 0x80d53c8: name=`malloc region'
bounds=[0x80d5380,0x80d5393] size=2042) [0x804902a]
Leaked object 3:
mudflap object 0x80d5498: name=`malloc region'
bounds=[0x80d5450,0x80d5463] size=20 area=heap check=0r/0w liveness=0
alloc time=1255022555.389961) [0x8049049]
Leaked object 4:
mudflap object 0x80d52f8: name=`malloc region'
bounds=[0x80dc038,0x80dc237] size=512(pthread_key_create+0xc9) [0xb0320549]
libc.so.3(dlopen+0x1610) [0xb0344000]
Leaked object 5:
mudflap object 0x80d58a8: name=`malloc region'
bounds=[0x80e1688,0x80e1bbb] size=1332 area=heap check=0r/0w liveness=0
alloc time=1255022555.391940 pc=0xb82073d7 thread=1
libmudflapth.so.0(__mf_register+0x3e) [0xb82073d2]
libmudflapth.so.0(__real_malloc+0xb9) [0xb8208b51]
z_x_g(funcLeaks+0xd) [0x8049117]
z_x_g(main+0x9) [0x80490f5]
number of leaked objects: 5
Process 81942 (z_x_g) exited status=0.
The IDE will populate the Mudflap Violations view with the contents of Mudflap log file (specified in the Launch Configuration). It provides you with additional information about the violation(s) that Mudflap detected, from which you can select an item to view the error in the source code.
The top level of the main view shows the errors, and if you expand a particular violation, you'll receive information about nearby objects, a backtrace, similar errors, as well as other useful detailed information.
For detailed information about the output generated by Mudflap, see the Mudflap Violations view.
After playing with LiveView and leveraging Phoenix PubSub to broadcast messages to all of a live view’s clients, I wanted to try incorporating Phoenix Presence to track the state of these clients. So this past weekend I built a chat app using Phoenix LiveView, PubSub and Presence. The LiveView clocks in at 90 lines of code, and I was able to get the Presence-backed features up and running in no time! Keep reading to see how it works.
The App
The chat app is fairly straightforward, and we won’t get into the details of setting up LiveView in our Phoenix app here. You can check out the source code along with this earlier post on getting LiveView up and running for more info.
Following Along
If you’d like to follow along with this tutorial, clone down the repo here and follow the README instructions to get up and running. The starting state of the tutorial branch includes the chat domain model, routes, controller and the initial state of the LiveView, described below. You can also check out the completed code here.
ChatLiveView's Initial State
We’ve mounted our live view at
/chats/:id by telling the
show action of the
ChatController to render the
ChatLiveView. We pass the given chat and the current user from the controller into our live view:
# lib/phat_web/controllers/chat_controller.ex
defmodule PhatWeb.ChatController do
  use PhatWeb, :controller

  alias Phat.Chats
  alias Phoenix.LiveView
  alias PhatWeb.ChatLiveView

  def show(conn, %{"id" => chat_id}) do
    chat = Chats.get_chat(chat_id)

    LiveView.Controller.live_render(
      conn,
      ChatLiveView,
      session: %{chat: chat, current_user: conn.assigns.current_user}
    )
  end
end
The
ChatLiveView.mount/2 function sets up the initial state of the LiveView socket with the given chat, an empty message changeset with which to populate the form for a new message, and the current user:
# lib/phat_web/live/chat_live_view.ex defmodule PhatWeb.ChatLiveView do use Phoenix.LiveView alias Phat.Chats def render(assigns) do PhatWeb.ChatView.render("show.html", assigns) end def mount(%{chat: chat, current_user: current_user}, socket) do {:ok, assign(socket, chat: chat, message: Chats.change_message(), current_user: current_user, )} end end
After mounting and setting the socket state, the live view will render the ChatView’s show.html template:
# lib/phat_web/templates/chat/show.html.leex
<h2><%= @chat.room_name %></h2>

<%= for message <- @chat.messages do %>
  <p>
    <%= message.user.first_name %>: <%= message.content %>
  </p>
<% end %>

<div class="form-group">
  <%= form_for @message, "#", [phx_submit: :message], fn _f -> %>
    <%= text_input :message, :content, placeholder: "write your message here..." %>
    <%= hidden_input :message, :user_id, value: @current_user.id %>
    <%= hidden_input :message, :chat_id, value: @chat.id %>
    <%= submit "submit" %>
  <% end %>
</div>
Our template is simple: it grabs the chat we assigned to our live view’s socket, displays the chat room name and iterates over the messages to show us the content and sender. It also contains a form for a new message, built on the empty message changeset we assigned to our socket. At this point, our rendered template looks something like this:
Pushing Messages to the LiveView Client
Now that our live view is up and running, let’s take a look at what happens when a given user submits the form for a new message.
We’ve attached the phx-submit binding to our form’s submission, and instructed it to emit an event of type "message":

# lib/phat_web/templates/chat/show.html.leex
<%= form_for @message, "#", [phx_submit: :message], fn _f -> %>
...
Now, we need to teach our live view how to handle this event by defining a matching handle_event/3 function:
# lib/phat_web/live/chat_live_view.ex
defmodule PhatWeb.ChatLiveView do
  ...

  def handle_event("message", %{"message" => message_params}, socket) do
    chat = Chats.create_message(message_params)
    {:noreply, assign(socket, chat: chat, message: Chats.change_message())}
  end
end
The live view responds to the "message" event by creating a new message and updating the socket’s state with the updated chat and a new empty message changeset for our form. *Note that although we specify the value of phx_submit as an atom, :message, our live view process receives the event as a string, "message".*
The live view then re-renders the relevant portions of our page, in this case the chat and messages display and the form for a new message.
Thanks to this code, we have messages getting pushed down the socket to the client who submitted the message form. But what about all of the other clients in our live view––the other users in the chatroom?
Broadcasting Messages with Phoenix PubSub
In order to broadcast the new message to all such users, we need to leverage Phoenix PubSub.
First, we need to ensure that each client starts subscribing to the chat room’s PubSub topic when they mount the live view:
# lib/phat_web/live/chat_live_view.ex
defp topic(chat_id), do: "chat:#{chat_id}"

def mount(%{chat: chat, current_user: current_user}, socket) do
  PhatWeb.Endpoint.subscribe(topic(chat.id))

  {:ok,
   assign(socket,
     chat: chat,
     message: Chats.change_message(),
     current_user: current_user
   )}
end
Then, we need to teach our live view to broadcast new messages to these subscribers when it handles the "message" event:
# lib/phat_web/live/chat_live_view.ex
def handle_event("message", %{"message" => message_params}, socket) do
  chat = Chats.create_message(message_params)
  PhatWeb.Endpoint.broadcast_from(self(), topic(chat.id), "message", %{chat: chat})
  {:noreply, assign(socket, chat: chat, message: Chats.change_message())}
end
The broadcast_from/4 function will broadcast a message of type "message", with the payload of our newly updated chat, to all subscribing clients, excluding the client who is sending the message.
Lastly, we need to teach our live view how to respond to this broadcast with a handle_info/2 function:
# lib/phat_web/live/chat_live_view.ex
def handle_info(%{event: "message", payload: state}, socket) do
  {:noreply, assign(socket, state)}
end
The live view handles the "message" message by updating the socket’s state with the %{chat: chat} payload, where the chat is our newly updated chat containing the new message from the user. And that is all it takes to ensure that all subscribing clients see any new messages submitted into the chat template’s new message form!
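If you want to convince yourself the broadcast is going out, one quick way is to subscribe from an ExUnit test and assert on the delivered struct. This is a sketch, not part of the original app; it assumes the test environment starts PhatWeb.Endpoint, and the :a_chat atom stands in for a real chat:

```elixir
# test/phat_web/chat_broadcast_test.exs (hypothetical)
defmodule PhatWeb.ChatBroadcastTest do
  use ExUnit.Case

  test "subscribers receive the \"message\" broadcast" do
    topic = "chat:1"
    PhatWeb.Endpoint.subscribe(topic)

    # Broadcast from a separate process, as another user's LiveView would.
    # broadcast_from/4 deliberately skips the sender, so the sending pid
    # must differ from the subscribed test process.
    {:ok, _pid} =
      Task.start(fn ->
        PhatWeb.Endpoint.broadcast_from(self(), topic, "message", %{chat: :a_chat})
      end)

    # Endpoint subscriptions deliver %Phoenix.Socket.Broadcast{} messages:
    assert_receive %Phoenix.Socket.Broadcast{event: "message", payload: %{chat: :a_chat}}
  end
end
```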
Tracking Users with Phoenix Presence
Now that our live view is smart enough to broadcast messages to all of the users in the given chat room, we’re ready to build some features that track and interact with those users. Let’s say we want to have our template render a list of users in the chat room, something like this:
We could create our own data structure for tracking user presence in a live view, store it in the live view’s socket, and hand-roll our own functions to update that data structure when a user joins, leaves or otherwise changes their state. However, the Phoenix Presence behaviour abstracts this work away from us. It provides presence tracking for processes and channels, leveraging Phoenix PubSub behind the scenes to broadcast updates. It also uses a CRDT (Conflict-free Replicated Data Type) model, which means it works on distributed applications.
Now that we understand a bit about what Presence is and why we want to use it, let’s get it set up in our application.
Setting Up Presence
In order to leverage Presence in our Phoenix app, we need to define our very own Presence module:
# lib/phat_web/presence.ex
defmodule PhatWeb.Presence do
  use Phoenix.Presence,
    otp_app: :phat,
    pubsub_server: Phat.PubSub
end
The PhatWeb.Presence module does three things:

- Uses the Presence behaviour
- Specifies that it shares a PubSub server with the rest of the application
- Specifies that it shares our app’s OTP app, which holds our application configuration
Now we can use the PhatWeb.Presence module throughout our app to track user presence in a given process.
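One detail worth flagging: per the Phoenix.Presence docs, the Presence module must also be started under your application's supervision tree (after the endpoint, so its PubSub server is already running), or calls like Presence.track/4 will fail at runtime. In a Phoenix 1.4-era app this looks roughly like the following; the exact child list depends on your project:

```elixir
# lib/phat/application.ex
def start(_type, _args) do
  children = [
    Phat.Repo,
    PhatWeb.Endpoint,
    # Presence must come after the endpoint/PubSub server it relies on:
    PhatWeb.Presence
  ]

  opts = [strategy: :one_for_one, name: Phat.Supervisor]
  Supervisor.start_link(children, opts)
end
```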
Tracking User Presence
Our Presence module will maintain lists of present users in a given chat room by storing these users under a topic of "chat:#{chat_id}".
So, when should we tell Presence to start tracking a given user? Well, at what point in time do we consider a user to be “present” in a chat room? When the user mounts the live view!
We’ll hook into our mount/2 function to add the new user to Presence’s list of users in a given chat room:
# lib/phat_web/live/chat_live_view.ex
def mount(%{chat: chat, current_user: current_user}, socket) do
  Presence.track(
    self(),
    topic(chat.id),
    current_user.id,
    %{
      first_name: current_user.first_name,
      email: current_user.email,
      user_id: current_user.id
    }
  )
  ...
end
Here, we use the Presence.track/4 function to track our live view process as a presence. We add the PID of the LiveView process to Presence's data store, along with a payload describing the new user, under a topic of "chat:#{chat.id}" and a key of the user's ID.
The Presence process’s state for the given topic will look something like this:
%{
  "1" => %{
    metas: [
      %{
        email: "[email protected]",
        first_name: "Sophie",
        phx_ref: "TNV4PzRfyhw=",
        user_id: 1
      }
    ]
  }
}
Broadcasting Presence To Existing Users
When we call Presence.track, Presence will broadcast a "presence_diff" event over its PubSub backend. We told our Presence module to use the same PubSub server as the rest of the application, the very same server that backs our PhatWeb.Endpoint.
Recall that our live view clients are subscribing to this PubSub server via the following call in the mount/2 function: PhatWeb.Endpoint.subscribe(topic(chat.id)). So, these subscribing LiveView processes will receive the "presence_diff" event, which looks something like this:
%{
  event: "presence_diff",
  payload: %{
    joins: %{
      "1" => %{
        metas: [
          %{
            email: "[email protected]",
            first_name: "Sophie",
            phx_ref: "TNV4PzRfyhw=",
            user_id: 1
          }
        ]
      }
    },
    leaves: %{}
  }
}
The event’s payload will describe the users that are joining the channel when Presence.track/4 is called. Although we will respond to the "presence_diff" event, we won't do anything with the event's payload for now. However, you could imagine using it to create custom user experiences such as welcoming the newly joined user or alerting existing users that a certain new member has joined the chat room.
In order to respond to the event, we'll define a handle_info/2 function in our live view that will match the "presence_diff" event:
# lib/phat_web/live/chat_live_view.ex
def handle_info(%{event: "presence_diff"}, socket = %{assigns: %{chat: chat}}) do
end
This function has two responsibilities:
- Get the list of present users for the given chat room topic from the Presence data store
- Update the LiveView socket’s state to reflect this list of users
def handle_info(%{event: "presence_diff", payload: _payload}, socket = %{assigns: %{chat: chat}}) do
  users =
    Presence.list(topic(chat.id))
    |> Enum.map(fn {_user_id, data} -> data[:metas] |> List.first() end)

  {:noreply, assign(socket, users: users)}
end
First, we use the Presence.list/1 function to get the collection of present users under the given topic. This will return the following data structure:
%{
  "1" => %{
    metas: [
      %{
        email: "[email protected]",
        first_name: "Sophie",
        phx_ref: "TNV4PzRfyhw=",
        user_id: 1
      }
    ]
  },
  "2" => %{
    metas: [
      %{
        email: "[email protected]",
        first_name: "Beini",
        phx_ref: "ZZ30QuoI/8s=",
        user_id: 2
      }
    ]
  }
  ...
}
The Presence behavior handles the diffs of join and leave events for us. So, as long as we call Presence.track/4, the Presence process will update its own state, such that when we next call Presence.list/1, we are retrieving the updated list of users.
Once we fetch this list, we iterate over it to collect a list of the individual :metas payloads that describe each user. The resulting list will look like this:
[
  %{
    email: "[email protected]",
    first_name: "Sophie",
    phx_ref: "TNV4PzRfyhw=",
    user_id: 1
  },
  %{
    email: "[email protected]",
    first_name: "Beini",
    phx_ref: "ZZ30QuoI/8s=",
    user_id: 2
  }
]
We enact this transformation so that we have a simple, easy-to-use data structure to interact with in the template when we want to list present user names.
Lastly, we update the LiveView socket’s state by adding a key of :users pointing to a value of our user list:
{:noreply, assign(socket, users: users)}
Now we can access the user list via the @users assignment in our template to list out the names of the users present in the chatroom:
# lib/phat_web/templates/chat/show.html.leex
<h3>Members</h3>

<%= for user <- @users do %>
  <p>
    <%= user.first_name %>
  </p>
<% end %>
Let’s recap. The code we’ve written so far supports the following flow:
When a user visits a chat room at /chats/:id and the LiveView is mounted…
- Add the user to the Presence data store’s list of users for the given chat room topic
Broadcast to subscribing clients, telling them to grab the latest list of present users from the Presence data store
- Update the live view socket’s state with this updated list
- Re-render the live view template to display this updated list of users
This allows users who are already in a chat room to see an updated list of users reflected anyone who joins the chatroom.
But what about the user who is joining? How can we ensure that when a new user visits the chat room, they see the list of users who are already present?
Fetching Presence for New Users
In order to display the existing chat room members to any new users who join, we need to fetch these users from Presence and assign them to the live view socket when the live view mounts.
Let’s update our mount/2 function to do exactly that:
# lib/phat_web/live/chat_live_view.ex
def mount(%{chat: chat, current_user: current_user}, socket) do
  ...
  users =
    Presence.list(topic(chat.id))
    |> Enum.map(fn {_user_id, data} -> data[:metas] |> List.first() end)

  {:ok,
   assign(socket,
     chat: chat,
     message: Chats.change_message(),
     current_user: current_user,
     users: users
   )}
end
Now our live view will be able to render the list of existing members for a new user loading the page.
Broadcasting User Leave Events
At this point, you might be wondering how we can update Presence state and broadcast changes when a user leaves the tracked process. This is actually functionality that we get for free thanks to the Presence behavior. Recall that we are tracking presence for a given LiveView process via the Presence.track/4 function, where the first argument we give to track/4 is the PID of the LiveView process.
When a user navigates away from the chat show page, their LiveView process terminates. This will cause Presence.untrack/3 to get called, thereby un-tracking the terminated PID. This in turn tells Presence to broadcast the "presence_diff" event, this time with a payload that describes the departed user, i.e. the user we were tracking under the terminated PID. Presence knows how to handle diffs from both join and leave events: it will update the list of users it is storing under the chat room topic appropriately.
The running LiveView processes that receive this "presence_diff" event will need to fetch this updated list of present users for the given topic, update socket state and re-render the page accordingly. This means we can re-use our original handle_info/2 function for the "presence_diff" event without making any changes:
# lib/phat_web/live/chat_live_view.ex
def handle_info(%{event: "presence_diff", payload: _payload}, socket = %{assigns: %{chat: chat}}) do
  users =
    Presence.list(topic(chat.id))
    |> Enum.map(fn {_user_id, data} -> data[:metas] |> List.first() end)

  {:noreply, assign(socket, users: users)}
end
So, we don’t have to write any additional code to handle the “leave” event at all!
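Notice that the same Presence.list/1-plus-Enum.map/2 pipeline now appears in both mount/2 and handle_info/2. One optional cleanup (a refactor of my own, not from the original code) is to extract it into a private helper:

```elixir
# lib/phat_web/live/chat_live_view.ex (hypothetical refactor)
defp fetch_users(chat_id) do
  chat_id
  |> topic()
  |> Presence.list()
  |> Enum.map(fn {_user_id, data} -> data[:metas] |> List.first() end)
end

def handle_info(%{event: "presence_diff", payload: _payload}, socket = %{assigns: %{chat: chat}}) do
  {:noreply, assign(socket, users: fetch_users(chat.id))}
end
```

The same helper can then replace the duplicated pipeline in mount/2.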
Using Presence to Track User State
So far, we’ve leveraged Presence to keep track of users as they join or leave the LiveView. We can also use Presence to track the state of a given user while they are present in the LiveView process. Let’s see how this works by building a feature that indicates that a given user is typing into the new chat message form by appending a "..." to their name on the list of present users rendered in the template:
First, we’ll update the :metas payload we use to describe the starting state of a given user with the data point typing: false:
# lib/phat_web/live/chat_live_view.ex
def mount(%{chat: chat, current_user: current_user}, socket) do
  Presence.track(
    self(),
    topic(chat.id),
    current_user.id,
    %{
      first_name: current_user.first_name,
      email: current_user.email,
      user_id: current_user.id,
      typing: false
    }
  )
  ...
end
Then, we’ll attach a new phx-change event to our form that will fire with a message type of "typing" when a user types into the form field:
# lib/phat_web/templates/chat/show.html.leex
<%= form_for @message, "#", [phx_change: :typing, phx_submit: :message], fn _f -> %>
  ...
<% end %>
Next up, we will teach our live view to handle this event with a new handle_event/3 function that matches the "typing" event type. To respond to this event, the live view should update the current user’s :metas map under the given chat room’s topic:
# lib/phat_web/live/chat_live_view.ex
def handle_event("typing", _value, socket = %{assigns: %{chat: chat, current_user: user}}) do
  topic = topic(chat.id)
  key = user.id
  payload = %{typing: true}

  metas =
    Presence.get_by_key(topic, key)[:metas]
    |> List.first()
    |> Map.merge(payload)

  Presence.update(self(), topic, key, metas)

  {:noreply, socket}
end
Here, we use the
Presence.get_by_key/2 function to fetch the
:metas for the current user, stored under the
topic of
"chat:#{chat.id}", under a key of the user’s ID.
Then we create a copy of the
:metas map for that user, setting the
:typing key to
true.
Lastly, we update the Presence process’s metadata for the topic and user to point to this new map. Calling
Presence.update/4 will once again broadcast a
"presence_diff" event for us. Our LiveView processes already know how to handle this event, so we don’t need to write any additional code to ensure that running LiveView processes fetch the latest list of users with the new metadata and re-render the page.
The last thing we need to do is update our template to append
"..." to name of any users on the list who have
typing set to
true:
# lib/phat_web/templates/chat/show.html.leex
<h3>Members</h3>
<%= for user <- @users do %>
  <p>
    <%= user.first_name %><%= if user.typing, do: "..." %>
  </p>
<% end %>
Now we’re ready to teach our LiveView how to behave when a user stops typing, ensuring that the template will re-render without the
"..." attached to the user’s name.
We’ll add a
phx-blur event to the message content form field:
# lib/phat_web/templates/chat/show.html.leex
<%= text_input :message, :content,
      value: @message.changes[:content],
      phx_blur: "stop_typing",
      placeholder: "write your message here..." %>
This will send an event of type
"stop_typing" to the LiveView process when the user blurs away from this form field.
We’ll teach our LiveView to respond to this message with a
handle_event/3 that updates the Presence metadata with
typing: false for the current user.
# lib/phat_web/live/chat_live_view.ex
def handle_event(
      "stop_typing",
      value,
      socket = %{assigns: %{chat: chat, current_user: user, message: message}}
    ) do
  message = Chats.change_message(message, %{content: value})
  topic = topic(chat.id)
  key = user.id
  payload = %{typing: false}

  metas =
    Presence.get_by_key(topic, key)[:metas]
    |> List.first()
    |> Map.merge(payload)

  Presence.update(self(), topic, key, metas)
  {:noreply, assign(socket, message: message)}
end
Note: Here we can see some obvious repetition of code we wrote to handle the
"typing" event. This code has been refactored to move Presence interactions into our
PhatWeb.Presence module which you can check out here and here. For the purposes of easy reading in this post, I let this code remain explicit.
Here, we update the message changeset to reflect the content the user typed into the form field. Then, we fetch the user’s metadata from Presence and update it to set
typing: false. Lastly, we update the live view’s socket to reflect the content the user typed into the message form field. This is a necessary step so that the template will display this content when it re-renders as a consequence of the
"presence_diff" event.
Since we called
Presence.update/4, the presence process will broadcast the
"presence_diff" event and the LiveView processes will respond by fetching the updated list of users with the new metadata and re-rendering the template. This re-render will have the effect of removing the
"..." from the given user’s name since the call to
user.typing in the template will now evaluate to
false.
Conclusion
Let’s take a step back and recap what we’ve built:
- With “plain” LiveView, we gave our chat the ability to push real-time updates to the user who initiated the change. In other words, users who submit new messages via the chat form see those new messages appear in the chat log on the page.
- With the addition of PubSub, we were able to broadcast these new chat messages to all of the LiveView clients subscribed to a chat room topic, i.e. all of the members of a given chat room.
- By leveraging Presence, we were able to track and display the list of users “present” in a given chat room, along with the state of a given user (i.e. whether or not they are currently typing).
You can see the final (slightly refactored!) code here.
The flexibility of Phoenix PubSub made it easy to subscribe all of our running LiveView processes to the same topic on the pub sub server. In addition, the Presence module’s ability to share a pub sub server with the rest of our application allowed each Presence process to broadcast presence events to LiveView processes. Overall, LiveView, PubSub and Presence played together really nicely, and enabled us to build a robust set of features with very little hand-rolled code. | https://elixirschool.com/blog/live-view-with-presence/ | CC-MAIN-2019-51 | refinedweb | 3,556 | 63.29 |
I know that I can include the xpm file format into code:

Code:
#include "image.xpm"

and it will be included within the binary, adding on to the image size.
However, my issue is that xpm files aren't very optimized for size, so if I want to add anything large, like a banner, it adds a very large size to my file.
I was wondering what other image formats can be included like a header file and can be added to the binary, preferably a format that can be compressed a good deal
Thank you! | http://cboard.cprogramming.com/cplusplus-programming/113028-compilable-image-formats-printable-thread.html | CC-MAIN-2014-52 | refinedweb | 109 | 61.7 |
Localization is a process by which you allow people of different cultures,
languages, and nationalities to access your Web site. Although still a difficult
process, all things considered, it is gradually becoming easier. Both
the Java platform and the .NET platform have some nice features to aid
localization. For instance, all strings, dates, and numbers are internally
locale aware and when printed or validated will honor the localization setting.
The main topic of this article is localization of text. Text localization is
typically the easier of the many efforts involved in the localization process.
You simply use textual keys, and the system can load these keys from resource
files using utility classes. If it is that simple, then, what is the need for
this writeup? Writing a line of text in multiple languages is fairly trivial.
But doing that for hundreds of pages in a language-dependent way requires a
process, standards, and architecture. This is similar to scaling a dog house to
a multi-story building.
What follows is a discussion of a recommended process and the necessary tools for
localizing text under .NET.
It is known that for localizing text, one would use the resource managers
available under .NET. These resource managers use the main assembly and
language based satellite assemblies to retrieve the string resources. A main
assembly is basically your main executable file, if you are writing standalone
executables. When you are writing a Web application or a Web service, this main
assembly will be a DLL that is accessible by IIS. A satellite assembly is a DLL
that contains only resource strings. You typically have one satellite assembly
DLL for each language. Because these satellite assembly DLLs are separated from
the main assembly (an .exe or DLL), it is easier to drop in multi-language
support as you progress in your development process.
Based on documented literature, it is not hard to build text localization using
a single resource file for the entire project. When the project has multiple
modules and multiple people working on it, a single resource file will present
the following difficulties:
The solution is to allow multiple resource files: one for each module, or even
one for each page. One would think that the resource files that get
automatically generated by the IDE could be used for this purpose. But these
autogenerated files are hidden, and there is no easy API to retrieve resources
from multiple resource files. It is not hard to unhide these hidden per-page
resource files. Even if you were able to put your resources in these resource
files, these resource files may change as you change your GUI. This will make
it difficult to ship these resource files to translators as they change often,
not necessarily because of text strings, but because of other factors. I don't think this
dependency is good; it may be better to just leave them hidden.
Whichever mechanism we adopt for multiple resource files has to be simple
enough for the developer and the language translator to follow.
There is a beta tool from Microsoft called "Enterprise Localization Toolkit,"
based on SQL server, that will supposedly simplify this process. If you are
considering using this tool, then it is well and good. But if for whatever reason,
you want to roll out a less encompassing solution read on.
The following points are important to consider when you are designing a localization
process:
You have just created a new Web page and about to enter a text string, and the
localization chief looks over your shoulder and says, "Ha! My friend, you can't
hard code the static text like that. You need to look up an equivalent key so
that we can localize that text string." Now you have to invent a new key, or worse
yet, look for an existing key, if it is already available. Here are these issues,
itemized:
To accommodate the above needs, let us start with a directory structure for our
resource-related files under a fictitious project called "MyWebProject":
MyWebProject
\resources\keys
\resources\files
The keys subdirectory will have files to identify your keys for localized
content. The files subdirectory will hold the actual resource files.
(Taking "Common" as an example)
MyWebProject
\resources\keys
\module1Keys.cs
\CommonKeys.cs
\one .cs file for each module
\resources\files
\CommonResources.resx
\module1Resources.resx
\one .resx file for each module
CommonKeys.cs is a C# file containing project-level common definitions for
the whole project, whereas module1keys.cs contains keys for your specific
module. On the other hand, CommonResources.resx is an XML-based resource file
that acts as a dictionary for the keys that are identified in the CommonKeys.cs
key file. Easy enough so far.
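For illustration, the entries inside CommonResources.resx would look roughly like this (a trimmed fragment; a real .resx file also carries schema headers, and the value strings here are placeholders):

```xml
<!-- CommonResources.resx (fragment) -->
<data name="Common.FILE"><value>File</value></data>
<data name="Common.NEW"><value>New</value></data>
<data name="Common.SAVE"><value>Save</value></data>
```

A translated satellite file such as CommonResources.de.resx would carry the same keys with German values.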
Let us take a look at the contents of the CommonKeys.cs file to fully
understand the key definitions. Notice that the keys themselves are strings; it is important to define constants for these strings so that we don't make
mistakes misspelling these keys. The provided structure for the CommonKeys.cs
file will allow the IDE to prompt us for the available keys. What about root? This
reserved key will define a naming context for our keys so that they are less
likely to be duplicated. By convention, it can also point to the name of the
module. By doing this, we can deduce the resource filename for a given resource
key, without explicitly specifying the resource file from which the key
originates. This property could be useful when it is time to retrieve the keys.
namespace SKLocalizationSample.resources.keys
{
public class CommonKeys
{
public static string root = "Common";
public static string FILE = root + ".FILE";
public static string NEW = root + ".NEW";
public static string SAVE = root + ".SAVE";
public CommonKeys(){}
}
}
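To make the retrieval side concrete, a lookup helper can deduce the resource file from the key's root prefix. The sketch below is only an illustration — the ResourceHelper class and the base-name string are assumptions invented for this example, not part of the article's sample project:

```csharp
using System.Globalization;
using System.Reflection;
using System.Resources;

public class ResourceHelper
{
    // Hypothetical helper: given a key such as "Common.SAVE", the "Common"
    // prefix (the root) tells us the dictionary lives in CommonResources.resx,
    // so callers never have to name the resource file explicitly.
    public static string GetString(string key, CultureInfo culture)
    {
        string root = key.Substring(0, key.IndexOf('.'));  // e.g. "Common"
        ResourceManager rm = new ResourceManager(
            "MyWebProject.resources.files." + root + "Resources",  // base name is an assumption
            Assembly.GetExecutingAssembly());
        return rm.GetString(key, culture);
    }
}
```

A call like ResourceHelper.GetString(CommonKeys.SAVE, CultureInfo.CurrentUICulture) would then let the resource manager pick the right satellite assembly for the current culture automatically.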
As new developers come on board and are given responsibility for existing
modules or new modules, they can find out about resources for their respective
modules by looking up the following directory:
\project\resources\keys
This will tell them the available modules for which keys are defined, which
will tell them either to create a new file or use an existing file. When
they define keys in these key files, they are also responsible for going over
to the \resources\files\ directory and updating the corresponding .resx files
with proper English values for their keys.
#include <deal.II/lac/block_sparsity_pattern.h>
This is the base class for block versions of the sparsity pattern and dynamic sparsity pattern classes. It has little functionality of its own, only administrating an array of sparsity pattern objects and delegating work to them. It has mostly the same interface as the SparsityPattern and DynamicSparsityPattern classes, and simply forwards calls to its member functions to the respective member functions of the member sparsity patterns.
The largest difference between the SparsityPattern and DynamicSparsityPattern classes and this class is that, in most cases, the individual blocks have different properties and you will want to work on the blocks making up the matrix rather than on the whole matrix. You can access the different blocks using the
block(row,col) function.
Attention: this object is not automatically notified if the size of one of its sub-objects is changed. After you initialize the sizes of the sub-objects, you will therefore have to call the
collect_sizes() function of this class! Note that, of course, all sub-matrices in a (block-)row have to have the same number of rows, and that all sub-matrices in a (block-)column have to have the same number of columns.
You will in general not want to use this class, but one of the derived classes.
Definition at line 1905 of file affine_constraints.h.
Declare type for container size.
Definition at line 85 of file block_sparsity_pattern.h.
Initialize the matrix empty, that is with no memory allocated. This is useful if you want such objects as member variables in other classes. You can make the structure usable by calling the reinit() function.
Definition at line 25 of file block_sparsity_pattern.cc.
Initialize the matrix with the given number of block rows and columns. The blocks themselves are still empty, and you have to call collect_sizes() after you assign them sizes.
Definition at line 33 of file block_sparsity_pattern.cc.
Copy constructor. This constructor is only allowed to be called if the sparsity pattern to be copied is empty, i.e. there are no block allocated at present. This is for the same reason as for the SparsityPattern, see there for the details.
Definition at line 45 of file block_sparsity_pattern.cc.
Destructor.
Definition at line 62 of file block_sparsity_pattern.cc.
Resize the matrix, by setting the number of block rows and columns. This deletes all blocks and replaces them with uninitialized ones, i.e. ones for which also the sizes are not yet set. You have to do that by calling the reinit() functions of the blocks themselves. Do not forget to call collect_sizes() after that on this object.
The reason that you have to set sizes of the blocks yourself is that the sizes may be varying, the maximum number of elements per row may be varying, etc. It is simpler not to reproduce the interface of the SparsityPattern class here but rather let the user call whatever function she desires.
Definition at line 77 of file block_sparsity_pattern.cc.
Copy operator. For this the same holds as for the copy constructor: it is declared, defined and fine to be called, but the latter only for empty objects.
Definition at line 111 of file block_sparsity_pattern.cc.
This function collects the sizes of the sub-objects and stores them in internal arrays, in order to be able to relay global indices into the matrix to indices into the subobjects. You must call this function each time after you have changed the size of the sub-objects.
Definition at line 129 of file block_sparsity_pattern.cc.
Access the block with the given coordinates.
Definition at line 767 of file block_sparsity_pattern.h.
Access the block with the given coordinates. Version for constant objects.
Definition at line 779 of file block_sparsity_pattern.h.
Grant access to the object describing the distribution of row indices to the individual blocks.
Definition at line 792 of file block_sparsity_pattern.h.
Grant access to the object describing the distribution of column indices to the individual blocks.
Definition at line 801 of file block_sparsity_pattern.h.
This function compresses the sparsity structures that this object represents. It simply calls
compress for all sub-objects.
Definition at line 168 of file block_sparsity_pattern.cc.
Return the number of blocks in a column.
Definition at line 962 of file block_sparsity_pattern.h.
Return the number of blocks in a row.
Definition at line 953 of file block_sparsity_pattern.h.
Return whether the object is empty. It is empty if no memory is allocated, which is the same as that both dimensions are zero. This function is just the concatenation of the respective call to all sub- matrices.
Definition at line 179 of file block_sparsity_pattern.cc.
Return the maximum number of entries per row. It returns the maximal number of entries per row accumulated over all blocks in a row, and the maximum over all rows.
Definition at line 192 of file block_sparsity_pattern.cc.
Add a nonzero entry to the matrix. This function may only be called for non-compressed sparsity patterns.
If the entry already exists, nothing bad happens.
This function simply finds out to which block
(i,j) belongs and then relays to that block.
Definition at line 810 of file block_sparsity_pattern.h.
Add several nonzero entries to the specified matrix row. This function may only be called for non-compressed sparsity patterns.
If some of the entries already exist, nothing bad happens.
This function simply finds out to which blocks
(row,col) for
col in the iterator range belong and then relays to those blocks.
Definition at line 829 of file block_sparsity_pattern.h.
Return number of rows of this matrix, which equals the dimension of the image space. It is the sum of rows of the (block-)rows of sub-matrices.
Definition at line 211 of file block_sparsity_pattern.cc.
Return number of columns of this matrix, which equals the dimension of the range space. It is the sum of columns of the (block-)columns of sub- matrices.
Definition at line 225 of file block_sparsity_pattern.cc.
Check if a value at a certain position may be non-zero.
Definition at line 917 of file block_sparsity_pattern.h.
Number of entries in a specific row, added up over all the blocks that form this row.
Definition at line 935 of file block_sparsity_pattern.h..
In the present context, it is the sum of the values as returned by the sub-objects.
Definition at line 239 of file block_sparsity_pattern.cc.
Print the sparsity of the matrix. The output consists of one line per row of the format
[i,j1,j2,j3,...]. i is the row number and jn are the allocated columns in this row.
Definition at line 252 of file block_sparsity_pattern.cc.
Print the sparsity of the matrix in a format that
gnuplot understands and which can be used to plot the sparsity pattern in a graphical way. This is the same functionality implemented for usual sparsity patterns, see SparsityPattern.
Definition at line 306 of file block_sparsity_pattern.cc.
Typedef for the type used to describe sparse matrices that consist of multiple blocks.
Definition at line 385 of file block_sparsity_pattern.h.
Define a value which is used to indicate that a certain value in the
colnums array is unused, i.e. does not represent a certain column number index.
This value is only an alias to the respective value of the SparsityPattern class.
Definition at line 95 of file block_sparsity_pattern.h.
Number of block rows.
Definition at line 342 of file block_sparsity_pattern.h.
Number of block columns.
Definition at line 347 of file block_sparsity_pattern.h.
Array of sparsity patterns.
Definition at line 355 of file block_sparsity_pattern.h.
Object storing and managing the transformation of row indices to indices of the sub-objects.
Definition at line 361 of file block_sparsity_pattern.h.
Object storing and managing the transformation of column indices to indices of the sub-objects.
Definition at line 367 of file block_sparsity_pattern.h.
Temporary vector for counting the elements written into the individual blocks when doing a collective add or set.
Definition at line 374 of file block_sparsity_pattern.h.
Temporary vector for column indices on each block when writing local to global data on each sparse matrix.
Definition at line 380 of file block_sparsity_pattern.h. | https://www.dealii.org/developer/doxygen/deal.II/classBlockSparsityPatternBase.html | CC-MAIN-2019-47 | refinedweb | 1,359 | 50.94 |
compiles in g++ and .NET but not on VC6 !!!
Discussion in 'C++' started by Julian, Dec
Hello,
I have the Bachelor's of Science, Microsoft Certified Professional, Certified Netware Administrator (4.1.1), and an Amateur Radio License.
Christian
I'm certified by my psychiatrist.
No certs ("now with Retsin!"). Just work experience. But I'm not in IT, where they seem to be more desired by employers. Engineering seems to be mostly about what you've done recently. Sorta like "we're building a new space shuttle. . . have you ever done that?" :-)
I currently have no certifications, hopefully will before the end of the school year, working on my Microsoft Software Development Certification *MSDC* class and next year I'll work on my Computer Development and Management Certification.
Actually, lets see, where is some places online that I can get Microsoft Certifications online, for teens?
B.Sc. in Pharmacy & Pharmaceutical Sciences, Honors in Computer Systems Technology.
By next fall I plan to have my Certification for Asthma Educator, and Diabetes Educator.
I have a BTEC Computer Engineering qualification. It's a bit like the A+ course. Other than that I'm currently in the second year of a three year Honours degree in Computer Science with Business Information Engineering at Hull University. If I get my degree then I'll automatically become a full member of the British Computer Society and have partial CEng accreditation.
Actually, lets see, where is some places online that I can get Microsoft Certifications online, for teens?
There really aren't any such places, especially not specifically "for teens", as you put it. Basically, what you'll want to do is just get up the $170 US (Or whatever it costs now), and go to a testing location. Just be sure that you're sure that you can pass it, because if not, you just blew $170.
heh heh.....
Sorta like 'Alphabet Soup' in here, isn't it, and IT-wise ole Catweazle is 'illetterate'! :)
Graduated from the School of the back shed, with advanced capabilities in system construction, hardware knowledge and troubleshooting techniques, and that was enough to get me a 'Gig' as an expert contributor with the most successful PC magazine in Australia.
Funny thing is, nowadays I often find myself fielding questions from some of those 'lettered' folk :D
Funny thing is, nowadays I often find myself fielding questions from some of those 'lettered' folk :D
Don't you love that?
When I worked in Medical Records at a hospital, this guy in IT was always like, "OH YEAH... I'm an MCSE.... let me look at that!" when a computer was down.
We had a Windows 98 machine that wasn't getting on the LAN, and he was fuming at how he couldn't figure it out. Now, mind you, I have a degree in Networking, and just couldn't find an IT gig (should have had his job, really), but I walked over and said, "What does winipcfg say your IP address is?" Thinking that would be something you'd immediately do after confirming your physical connections were good.
The resulting blank stare gave me pause. From then on, when he was around, and a system was down, I always thought, "OH YEAH... he's an MCSE.... let me look at that!" :D
heh heh.......
But let's not cast too much aspersion on the fine qualifications people have obtained for themselves, eh? Congratulations to everybody for doing so.
The simple thing is, having a Certified qualification isn't the be-all and end-all of it. There are some very necessary qualities that a Certification DOESN'T give you!
It does not mean that you are necessarily a good communicator. When people report a problem, their report is more often an expression of frustration than a helpful description of symptoms. To be really suited for providing assistance, you need to be able to effectively communicate with people from all walks of life, and in a wide range of 'emotional states', and also have the capacity to 'hone in' on what they're trying to say rather than what they are actually saying.
Having a Certification doesn't ensure that you are a person who has good skills with lateral thinking. Quite often, PC problems will manifest elsewhere from where they originate, and treating causes is always better than treating symptoms. Quite often, people will ask for assistance and advice based upon what they've been 'told' rather than what's best for their needs. If you aren't prepared and capable for giving advice directed at needs rather than specific requests, then you're not giving the best advice and assistance possible.
Gaining a Certification will help people to GAIN a job/position. But it's other qualities which will ensure they'll KEEP it ;)
But most of all, if the 'expert' you're faced with is basically sitting back on the laurels of their 'MCSE' qualification, and proclaiming loudly that having one makes them 'better' than others, then they're most likely not! It's proven results and outcomes of putting it all into practice that should be crowed about ;)
I think certifications, as well as a degree to some point, will only give you book knowledge. You actually need to go out and DO to get the experience, and that's what jobs are looking for more than anything. Now myself, I'm sorry to say I hold no certifications :( I'm still in school going for my degree ... a B.S. in Computer Science with a minor in Business Computer Information Systems, which I'll hopefully be done with sometime soon. Personal matters have forced me to quit fulltime and only be taking 2 courses a semester.
Ya, gee, what a shame. When you go for an interview and they say "what certifications do you have" you'll have to say 'none'.
On the other hand you can say, "For several years I ran a premier programmer help forum with thousands of members; I wrote the scripts, maintained the machines, advertised on various sites, managed fee collection, managed a number of moderators, ..."
And we'll just have to keep quiet about the fact that ole Catweazle is unmanageable and incorrigible!
:)
Must be great to be csgl, and for the certifications, thats what the money is paying for is the class right. DUH!
I have the A+ certs, also a computer tech cert from high school and currently working on getting network+ and linux certs. I hate my high school, they only offer one computer programming class and 2 web page design classes. most of the classes they have are drama and retarted shakesperain classes.
for teens? im 16 and im doing A level computing at college here in the UK - wiki it its advanced stuff we do binary arithmatic, and program in c++ and assembler
C++ is terrible. I'm sure it's used for many things, but I guess I just suck at programming, I should've said I'm terrible at programming.
c++ aint hard if you start out with console apps e.g my 1st app (it sux so bad tho)
//main.cpp
//Copyright James Bennet 2006 - Engine source code available on request
//A small RPG I am making for me and my friend to play as well as to learn c++
#include <cstdlib>
#include <iostream.h>
#include <time.h>
#include <stdio.h>
using namespace std;
////////////////////////////////////////////////////////////////////////////////
class person //Nicked the idea for this from fallouts s.p.e.c.i.a.l system
{
public:
    int lvl;
    int gold;
    int p;
    int e;
    int c;
    int i;
};
////////////////////////////////////////////////////////////////////////////////
int main(void)
{
    person player1;
    int action;

    srand((unsigned)time(0)); //Makes the stats random, based on system time
    player1.p = 1 + rand() % (10);
    player1.e = 1 + rand() % (10); //Generates the random stats
    player1.c = 1 + rand() % (10);
    player1.i = 1 + rand() % (10);

    cout << endl << "Welcome to my RPG - James Bennet 2006" << endl << endl
         << "Your Stats Are: " << endl << endl
         << "Perception: " << player1.p << endl
         << "Endurance: " << player1.e << endl
         << "Charisma: " << player1.c << endl
         << "Intelligence: " << player1.i << endl;

    system("PAUSE"); //The push any key prompt
    return EXIT_SUCCESS;
}
////////////////////////////////////////////////////////////////////////////////
IT hardware and Software technologies change so fast. Certs or no certs the key is you can't stop. Once you're on the roundabout if you don't keep learning and reading off your own back, the game's up!
I've been trying to get my employers to cough up for MCAD so long, when I finally landed a job where they promised it would be part of the package it's gone out of date (still available but MCPD is it's replacement)
I feel any form of higher/further education directly to do with your career or not is a good thing because the vital skills you learn are self motivation, social skill and the ability to research and assimilate new information. Once you know these you can turn your brain to most things. When an employer interviews you he/she is as scared as your are, if they hire the wrong person it could cause a real business headache and huge costs. They're not looking for a long list of certs, as long as there's something. What they really need is to be put at ease by you showing them in the interview you can think smart and take care of the job in hand.
So you need to get something but don't panic too much about what the course title is or what's the flavour of the month cert, just get one. All you're trying to do is show you can learn and pass things (you're a smart person) The rest is down to your self confidence and how well you can sell yourself.
For me the only real certification is the little slip of paper with your salary on it, if you're taking one of those home you're not far off the target.
------
Oh yeah I have:
BTEC Nat Dip Business and Finance
BTEC HND Business and Finance
Cert of Higher Education Applied Economics (not full Bsc cos I didn't finish it)
MCSE 2000 (now out of date)
Intro to ADO.NET (some little Microsoft course can't remember full details)
Developing Microsoft ASP.NET web applications (1st step to MCAD didn't get to exam as changed jobs)
Notice my sub-concious aversion to examination? I can learn as much as the next man, but I don't like having to prove it under duress if you know what I mean!
certifications --- hummmm. I have a HS graduation certificate, a marriage license, associates of arts degree, several medals from viet nam, a military retirement certificate, a social security card, and I will be getting an old-age social security retirement certificate in a couple years. I think that's enough certifications for one lifetime.:mrgreen: :eek: :rolleyes:
yeah i have birth certifcate, 9 GCSE certificates, a national insurance card, passport and NUS card
I have completed several programming(java) courses at nearby colleges and the infamous iDtech camps, i have more experience than knowledge, and i gained most of it from manuals and emails from my older brother, who is a programmer with his own business.
lol_hacker101
That's nothing one day I wanted certs so I went out and got my BAS, CDCT, BMSIS, NATO, DNVM, JavAX, JAZZ, PMP, GLZK, bis, SIM, CDC, and my NicNak cert. I didn't want to do any work so I just bought the "Special Certs kit" from Ebay.
I paid the guy $900 for the kit. It's been a year, last time I checked the guy was banned from Ebay, the order does not exist on records... and I still don't have my Special Certs - ... | https://www.daniweb.com/community-center/threads/3471/what-certifications-do-you-have/2 | CC-MAIN-2018-05 | refinedweb | 1,954 | 61.36 |
Description.
Specifications
Product Reviews
Questions & Answers
Usage Statistics
Discussions tagged with this product
Redirecting Folder Redirection GPO
All,
I'm in the middle of an interforest migration from Domain A to Domain B, and Windows Server 2003 to Windows Server 2012R2.
robocopy /move don't move folders on first run
Good day!
I have a question about robocopy:
I wrote a script lately for move files and folders into another specific folder.
More
Robocopy command for duplicating many folders with the same name.
Every year I need to make new budget folders that are named the current year. There hundreds of these folders currently named
Managing local storage on a unique pc
+1 for Robocopy. Just set up a scheduled task for a sync at whatever intervals is appropriate.
You could also turn on local copies
Windows 10, coping very large files
Robocopy all the way. It's much better than xcopy and can be configured tu run in several different ways that may help you:
Robocopy converts dashes to hyphens
I'm working on a powershell script to remove access of former employees from every folder in a file share. I use robocopy's list
Robocopy command to copy modified files with security permisions
Hello All,
I need to migrate the data from an old file server to a new file server. I used RichCopy but many files did not copy
Move Files Older Than 3 Years
I'm working with Robocopy with the following syntaxText
robocopy E:\Users E:\UsersOld /E /COPYALL /move /MINAGE:1095
Does the
Robocopy-check for any changed or new files/folders and update destination
I have just robocopied 1.5TB from source to destination with the below command
/E /MIR /W:0 /R:3 /COPY:DAT /DCOPY:T /log:c:\Temp\
CopyProfile - Copy customized default user profile to multiple computers
Hey guys,
I'm starting as a freelance and want to preconfigure a user profile I can set up on multiple different Windows 10
Does Robocopy and FrontPage website folders play well together?
Windows 7 Pro
I have a FrontPage website with a lot of sub-folders beginning with the underscore ("_"); like "_borders". The
Copy Files and permissions with Robocopy
I have configured many a batch file using RoboCopy. If you open a CMD prompt and type Robocopy /? you can see all the options with
Fastest way to virtualize?
Hi everyone,
I have a 5 tb external disk connected via PCI-E to my vm host(1.67 TB used). It is then passed through to a file
File Compare and Copy
I guess I'm not sure what your goal is but it soundslike you just need the /MIR argument andMicrosoft Robocopy
Pro Tip: How to Copy Files from One Server to Another
Because I use a lot of Robocopy switches and folder exclusions (e.g. recycle bin, dfsrprivate, etc.), and because I'm lazy and
Syncing Folders between Servers
I have a new mail server - I want to sync folders from the old server to the new server so nothing is missed and the migration is
Robocopy /XD refuses to work
Trying to configure robocopy to copy all but a directory (and its subdirecotries) in a single job, thus:Batchfile
/SD:\\
Robocopy question
When you have a Source and Destination already set up and the files are newer on the Source...
Robocopy will copy only those newer
robo copy delete files from backup after x days of deletion from source
i want to create a backup of files using robo copy.I want files in backup must be deleted after x days of deletion from source.But
robo copy delete files from backup after x days of deletion from source
i want to create a backup of files using robo copy.I want files in backup must be deleted after x days of deletion from source.But
What's the easiest way to back up a single folder throughout the day?
+1 for Robocopy - you can make it continuous if you want, or set it up to run through task scheduler depending on how often you
DFS-R with Robocopy - need proper switches
Hey geniuses of Spiceworks. I need to move our data share from one server location to another while keeping all the security
Help me improving this script memory management!
Hi folks!
I'm having one hell of a bad time trying to improve the memory management of this script I'm creating. The fact is that
Replicating Windows Backups Offsite
I am using Windows Server Backup and Windows Backup and Restore to make backups of servers and workstations to external hard
Backup Thunderbird Profiles via GPO
Hello everyone, and happy Monday!
I'm running into an issue trying to make automated backups of my user's Thunderbird profile
Want to talk me out of this...
I'm currently using 2 instances of Server 2008 R2Hyper-V to host 4 VMs on one server and their replicas on another, identical
Robocopy Knocking Off Workstation
I'm trying to migrate some file shares off a Windows 2008 DC onto a Windows 7 workstation. I've ran the following Robocopy script
Robocopy Cmd line problem - SOLVED
Hi everyone,
Sorry not sure if this is the right area.
I am new to this forum and new to Robocopy. I have taught myself a crash
Robocopy /XD from text file
I need to create a robocopy script that mirrors a directory excluding a list of directories in a text file. How does one go about
-
Powershell and RoboCopy - Looking for somehelp
OK All,
Here's the layout. I have about 2.7 TB worth of data that I want to move (Potentially)Powershell
$Cred=Get-Credential
Robocopy or other for copying data to new app server?
Howdy-
I have searched around and found plenty of examples for Robocopy to copy files, but none have worked great so far.
I have
Robocopy Script Help
Hi
Can someone tell me how to script this with Robocopy please:
I need to move about 150 servers from an old
Robocopy or What?
i use it with "scheduled task" as well as triggers, and even have some Powershell scripts call some Robocopy scripts.
[WINDOWS][HOW-TO] C:\>delete *.* except for [list]
I also posted this over at /r/software. In case anyone else has similar needs:
/u/nerdshark had this:Powershell
$excludedItems =
Moving files into folders using Powershell and csv
Hey guys hoping i can get some help with this. I spent most of the day on it and nothing seems to work :(
I have a csv. In it
Moves files /folders older then X days in Windows
I have a folder name C:\TEST which have several hundreds sub folders in it.
I want to MOVE files and folders older then 365 days
Migrating fileserver VM from 2003 to 2012 R2
We're running a 2003 file server on VMWare 5.5, and will be moving to 2012 R2. This VM is a DFS (namespace only) target. It has
Help with Robocopy
Hey good people,
We have a client who currently use a windows 2003 SBS as their main server, they looking to move to a new server
Robocopy command to copy all shares within a server
If I want to move everything on one server to another server without specifying a specific share, is that possible.
For Example, I
Unable to copy file
I am getting an error saying that I cannot copy the file because the network location is unavailable. I am able to navigate and
Disabling robocopy service
Trying to clean up an old W2003 File Server that was formerly used to backup user folders from workstations on the domain. All
Can Robocopy Help Me Backup my File Server?
I am trying to copy all of the files that are shared from our file server to a USB external harddrive however I keep getting
Robocopy /XF from text file
Need a script to delete a folder's contents each night (via GPO)
Doesn't matter, haha - I just want to be sure I fully understand what exactly that script says before I put it
A Mirrored folder.
I'd check out robocopy and see if you can write a script that runs as a scheduled task to mirror the folders.
Moving files
Program help
Just use Robocopy.
Making Gui for Robocopy script
So I know this has been asked in several different ways and I'm sure this is linked to VBscript/.HTA. But I already have a .bat
Use robocopy for switching file servers
Hi all. I have a massive file copy I need to do as I'm moving data from fileserver1 to fileserver2. There's so much data, and it
Projects
-
Server Room, Racks and Build AgentConvert basement area into a server room. Arrange for installation of air conditioning and liaise fo...
Hardware Refresh - Software Support CenterReplace mixed Software Support Center machines with W10 machines with i5, 8GB RAM, and 320GB SSD, wi...
VM MigrationMigrate VM from Vmware to Hyper-V, ensuring VM was working and all data was transferred and accessible
-
Migrate from SBS 2003 to 2012 R2Upgrade server infrastructure for CPA firm. Moving from SBS 2003 to a minimum of 2012 R2.
-
Disaster recoveryConsolidate all data to a NAS, Setup a second NAS at a remote site for disaster recovery.
EU Backup SolutionsSafe Harbour is dead. At the time of writing, nothing concrete is in place to protect our data. Wha...
IT System CleanupWe was approached by a clieant to attend site to work with them to help clean up their machines and ...
Backup ScriptThis is to back up log files and .xtr files onto our main server6
-
-
Windows Server 2003 to 2012 R2 migrationMigrating from Windows Server 2003 SP2 to Server 2012 R2. Resolving existing DNS and server manageme...1
-
-
-
-
Migrate 2003 Domain and File ServersMigrate existing File Server and AD Domain away from 2003 Servers into 2012 R2 Servers
Windows Server 2012 - DC, DHCPThis new server will replace an existing DC, DHCP, file and print server at the location. Additiona...
-
File Server Migration and UpgradeFile and print server migration from Windows server 2003 to windows server 2012 r2
-
Creative File TransferProject to accomplish two goals; migrate 1.8TB of data from a dying Windows Server and configure Tim...
Can you keep it alive for a bit longer?Dumbleton Hall Hotel was being pressured by their IT support company to purchase a pair of new serve...1
-
New File ServerBuild a new File Server and move all the files from the current server to the new one. It is critica...
New Backup SystemAfter having no backup for over 6 months, we decided it was time to put some investment into ensurin...
Replace Branch Servers with VMsreplace 2 aging physical hardware with server 2003 - with esxi 5.5 hosting 2 VMs, opportunity to upg...
File system virtualizationMigrate all file systems (user data) to Acopia virtualization using robocopy as primary method of da...
- | https://community.spiceworks.com/products/17457-microsoft-robocopy/review/680612/flag | CC-MAIN-2019-18 | refinedweb | 1,834 | 66.57 |
EDIT: Now that the problem is solved, I realize that it had more to do with properly reading/writing byte-strings, rather than HTML. Hopefully, that will make it easier for someone else to find this answer.
I have an HTML file that's poorly formatted. I want to use a Python lib to just make it tidy.
It seems like it should be as simple as the following:
import sys
from lxml import etree, html
#read the unformatted HTML
with open('C:/Users/mhurley/Portable_Python/notebooks/View_Custom_Report.html', 'r', encoding='utf-8') as file:
#write the pretty XML to a file
file_text = ''.join(file.readlines())
#format the HTML
document_root = html.fromstring(file_text)
document = etree.tostring(document_root, pretty_print=True)
#write the nice, pretty, formatted HTML
with open('C:/Users/mhurley/Portable_Python/notebooks/Pretty.html', 'w') as file:
#write the pretty XML to a file
file.write(document)
file_lines
str(document)
You are missing that you get bytes from tostring method from etree and need to take that into account when writing (a bytestring) to a file. Use the
b switch in the
open function like this and forget about the
str() conversion:
with open('Pretty.html', 'wb') as file: #write the pretty XML to a file file.write(document) | https://codedump.io/share/RnOuafRNFiUb/1/what-is-the-proper-method-for-reading-and-writing-htmlxml-byte-string-with-python-and-lxml-and-etree | CC-MAIN-2017-34 | refinedweb | 209 | 66.23 |
{-# LANGUAGE Trustworthy #-} {-# LANGUAGE CPP, NoImplicitPrelude, MagicHash #-} ----------------------------------------------------------------------------- -- | -- Module : Data.List -- Copyright : (c) The University of Glasgow 2001 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : libraries@haskell.org -- Stability : stable -- Portability : portable -- -- Operations on lists. -- ----------------------------------------------------------------------------- module Data.List ( -- * Basic functions (++) , head , last , tail , init ,.Char ( isSpace ) findIndices p ls = loop 0# ls where loop _ [] = [] loop n (x:xs) | p x = I# n : loop (n +# 1#) xs | otherwise = loop (n +# 1#) xs . -- Both lists must be finite. isSuffixOf :: (Eq a) => [a] -> [a] -> Bool isSuffixOf x y = reverse x `isPrefixOf` reverse] #ifdef USE_REPORT_PRELUDE nub = nubBy (==) #else -- stolen from HBC nub l = nub' l [] -- ' where nub' [] _ = [] -- ' nub' (x:xs) ls -- ' | x `elem` ls = nub' xs ls -- ' | otherwise = x : nub' xs (x:ls) -- ' #endif -- | nubBy eq l = nubBy' l [] where nubBy' [] _ = [] nubBy' (y:ys) xs | elem_by eq y xs = nubBy' ys xs | otherwise = y : nubBy' ys (y:xs) -- Not exported: -- Note that we keep the call to `eq` with arguments in the -- same order as in the reference -- | The 'intersect' function takes the list intersection of two lists. -- For example, -- -- > [1,2,3,4] `intersect` [2,4,6,8] == [2,4] -- -- If the first list contains duplicates, so will the result. -- -- > [1,2,2,3,4] `intersect` [6,4,4,2] == [2,2,4] -- -- It is a special case of 'intersectBy', which allows the programmer to -- supply their own equality test. mapAccumL _ s [] = (s, []) mapAccumL f s (x:xs) = (s'',y:ys) where (s', y ) = f s x (s'',ys) = mapAccumL f s' xs -- | "List.genericIndex: negative argument." 
genericIndex _ _ = error _|_ = [] : _|_@ inits :: [a] -> [[a]] xs = xs : case xs of [] -> [] _ : xs' -> tails xs' -- | cmp rge r) qpart cmp x (y:ys) rlt rge r = case cmp x y of GT -> qpart cmp x ys (y:rlt) rge r _ -> qpart cmp x ys rlt (y:rge) r -- rqsort is as qsort but anti-stable, i.e. reverses equal elements cmp rgt r) rqpart cmp x (y:ys) rle rgt r = case cmp y x of GT -> rqpart cmp x ys rle (y:rgt) r _ -> rqpart cmp x ys (y:rle) rgt r -} #endif /* USE_REPORT_PRELUDE */ -- |] -- -- | 'foldl1' is a variant of 'foldl' that has no starting value argument, -- and thus must be applied to non-empty lists. foldl1 :: (a -> a -> a) -> [a] -> a foldl1 f (x:xs) = foldl f x xs foldl1 _ [] = errorEmptyList "foldl1" -- | A strict version of 'foldl1' foldl1' :: (a -> a -> a) -> [a] -> a foldl1' f (x:xs) = foldl' f x xs foldl1' _ [] = errorEmptyList "foldl1'" -- ----------------------------------------------------------------------------- -- List sum and product {-# -- ----------------------------------------------------------------------------- -- Functions on strings -- | 'lines' breaks a string up into a list of strings at newline -- characters. The resulting strings do not contain newlines. #endif | http://hackage.haskell.org/package/base-4.7.0.0/docs/src/Data-List.html | CC-MAIN-2015-11 | refinedweb | 451 | 60.99 |
SharePoint 2010 is great for BI; you have a ton more options. One of these options is the native Chart Web part, which allows you to create and map data from a number of different sources. For example, if you navigate to your SharePoint site, click Site Actions, Edit Page, Insert tab, Web Part, and then click Business Data you’ll see the native Chart Web Part for your use.
You can select, click Add and then walk through a series of steps to help configure the web part to render data from various sources. The figure below illustrates the steps you walk through and the data options for the web part. If this is the first time you’ve heard of the Chart Web Part, you can go here to get more information.
The Chart Web Part uses the System.Web.DataVisualization library, and this DLL is pretty comprehensive and provides a rich data binding and rendering experience. However, knowing this begs the question of how you can use the native ASP.NET Chart library to get some custom data (such as BDC models, WCF service endpoints, REST data, etc.) rendered in your chart web part. Thus, a more code-centric approach to leveraging the core library, but using a web part as a wrapper in the 2010 environment.
To get started, I drew some background information/sample code from the following posts, which were more centric to SharePoint 2007. These posts were a great start, but I needed something that leveraged the 2010 tooling and infrastructure and also used the ASP.NET Chart control.
So, I put together a simple Web part that extended on these posts to help get you started in SharePoint 2010.
The first hurdle you’ll need to get over is that while the Chart control library is native to .NET 4.0, SharePoint 2010 still uses .NET 3.5 SP1. So, you’ll need to make sure you have the 3.5 version of the Chart control. You can get it here. Once you have the library installed on your server, you next need to create a project and add it as a reference. The project you’ll create is a SharePoint Empty Project (open VS 2010 and click File, New, Project, Empty SharePoint Project, provide a name/location, and click OK).
You’ll then add a web part to the new project by right-clicking the project and selecting Add, New Item, SharePoint 2010, and Web Part. Provide a name for the new web part and click Add. See figure below.
So, you now have a web part project set up…what’s next? Right-click the project and add the System.Web.DataVisualization.dll to your project. Make sure it’s the 3.5 version because if it isn’t, your project will not build. Your project structure should now look something like the following figure. (You’ll note that I renamed the feature to ChartWebPart.)
Next, double-click the main web part class file (e.g. TheChart.cs), and then ensure you amend the code in your web part as per the bolded code below..DataVisualization.Charting; using System.Drawing;
namespace MtSPWebPartChart.TheChart { [ToolboxItemAttribute(false)] public class TheChart : WebPart { protected override void CreateChildControls() { Chart chrtSalesData = new Chart(); chrtSalesData.ImageStorageMode = ImageStorageMode.UseImageLocation;
chrtSalesData.Legends.Add("Legend"); chrtSalesData.Width = 500; chrtSalesData.Height = 300; chrtSalesData.RenderType = RenderType.ImageTag; string imagePath = "~/_layouts/ChartImages/"; chrtSalesData.ImageLocation = imagePath + "ChartPic_#SEQ(200,30)"; chrtSalesData.Palette = ChartColorPalette.Berry;
Title chartTitle = new Title("Hockey Unit Inventory", Docking.Top, new Font("Calibri", 12, FontStyle.Bold), Color.FromArgb(26, 59, 105)); chrtSalesData.Titles.Add(chartTitle); chrtSalesData.ChartAreas.Add("Inventory");
chrtSalesData.Series.Add("Skates"); chrtSalesData.Series.Add("Gloves"); chrtSalesData.Series.Add("Helmets");
chrtSalesData.Series["Skates"].Points.AddY(5); chrtSalesData.Series["Skates"].Points.AddY(10); chrtSalesData.Series["Skates"].Points.AddY(15); chrtSalesData.Series["Skates"].Points.AddY(10); chrtSalesData.Series["Skates"].Points.AddY(12); chrtSalesData.Series["Skates"].Points.AddY(20);
chrtSalesData.Series["Gloves"].Points.AddY(2); chrtSalesData.Series["Gloves"].Points.AddY(6); chrtSalesData.Series["Gloves"].Points.AddY(10); chrtSalesData.Series["Gloves"].Points.AddY(18); chrtSalesData.Series["Gloves"].Points.AddY(20); chrtSalesData.Series["Gloves"].Points.AddY(14);
chrtSalesData.Series["Helmets"].Points.AddY(20); chrtSalesData.Series["Helmets"].Points.AddY(15); chrtSalesData.Series["Helmets"].Points.AddY(12); chrtSalesData.Series["Helmets"].Points.AddY(13); chrtSalesData.Series["Helmets"].Points.AddY(25); chrtSalesData.Series["Helmets"].Points.AddY(18);
chrtSalesData.BorderSkin.SkinStyle = BorderSkinStyle.Emboss; chrtSalesData.BorderColor = Color.FromArgb(26, 59, 105); chrtSalesData.BorderlineDashStyle = ChartDashStyle.Solid; chrtSalesData.BorderWidth = 1; this.Controls.Add(chrtSalesData); } } }
You can now hit F6 to ensure the project builds, and if it builds successfully you can either hit F5 to debug or simply right-click the project and select Deploy to deploy the web part project into SharePoint.
Now before you jump to your SharePoint site, make sure you add one line (bolded below) to your SharePoint web.config.
… </httpHandlers>
…
At this point, you should be able to navigate to your SharePoint site, click Site Actions, Edit Page, navigate to your shiny, new web part and then add the aforementioned web part with chart included in it.
My next step with this is to implement data service calls (where the hard-coded data is now) and then data-bind to WCF endpoints in Windows Azure. While I work on this part, you can grab the code from here.
Hope this helps those of you who are trying to do the same thing. I noticed quite a few screams for help and not too many concerted blogs on how to do this.
Lastly, a shout-out to Marc Charmois for his earlier posts. They helped immensely, as did the numerous posts I needed to chase down on POST errors when leveraging the Chart control.
Happy coding!
Steve
---------------------------------------------------------------------------------------
Adding a note from the community on the web.config edits. Thanks to Kalyan Krishna for sending this along to me from his implementation of the above chart in SharePoint. His comments on his web.config edits were as follows:
1. The web.config entry for <httpHandlers> is:
>
1. The web.config entry for <handlers> is:
" />
3. The web.config entry for <appSettings> is as follows (make sure that the folder mentioned is present and the web site's account can write to it):
<add key="ChartImageHandler" value="storage=file;timeout=20;dir=c:\Temp\;" />
Cheers, | http://blogs.msdn.com/b/steve_fox/archive/2011/04/09/asp-net-chart-controls-in-sharepoint-2010-rendering-a-sales-chart-with-custom-data.aspx | CC-MAIN-2015-32 | refinedweb | 1,054 | 57.57 |
Wildfly is the new name for the community edition of the JBoss Application Server. The current development version of Wildfly (8.0) will be adding support for Java EE 7. Java EE 7 brings a lot of goodies for Java(EE) developers. One of the features of Java EE 7 is the JSR 356 Java API for WebSockets, which specifies a Java API that developers can use to integrate WebSockets into their applications — both on the server side as well as on the Java client side. In case you are new to WebSockets or JSR 356, please refer to my earlier blog post on this subject. In this blog post, we will install Wildfly on OpenShift using the DIY cartridge and look at the sample WebSocket application bundled with the quickstart.
OpenShift already has best in class Java support with Tomcat 6, Tomcat 7, JBoss AS7, and JBoss EAP 6 bundled with it. You can also run Jetty or GlassFish on it using the DIY cartridges. In addition, OpenShift provides support for Jenkins continuous integration server..
Installing Wildfly on OpenShift in Under Two Minutes
It takes less than two minutes to spin up a Wildfly instance on OpenShift. All you need to do is run the following commands at a terminal prompt. The OpenShift Wildfly quickstart is available on github at. To write this quickstart I have taken help from this blog post.
$ rhc app create wildfly diy $ cd wildfly $ rhc app stop --app wildfly $ git rm diy/index.html $ git rm diy/testrubyserver.rb $ git remote add upstream -m master $ git pull -s recursive -X theirs upstream master $ git push
The commands shown above first creates an OpenShift DIY application, stops the default ruby HTTP server, deletes template files, adds a git remote repository where the Wildfly quickstart exists, then pulls all the source code to your local machine, and finally pushes changes to your OpenShift gear where the application is running.
After the git push successfully finishes, the Wildly application server will be accessible at-{domain-name}.rhcloud.com.
The sample WebSocket application bundled with the quickstart is accessible at-{domain-name}.rhcloud.com/websocket-reverse-echo-example. The application simply reverses the message and echos back to the user.
Under the Hood
Now that we have Wildfly up and running on OpenShift, let’s look at what we have done in the commands mentioned in the previous section. When we pushed the source code, OpenShift invoked certain action hooks. The action hooks give developers a chance to hook into the application life cycle. We have written two action hooks — start and stop under .openshift/action_hooks folder. In the start hook, we wrote a few bash commands to download the current version of Wildfly, make changes to the Wildfly standalone.xml configuration file , and then start the Wildfly server. The start hook is shown below. You can view the full start hook here.
cd $OPENSHIFT_DATA_DIR if [ -d $OPENSHIFT_DATA_DIR/wildfly-8.0.0.Alpha3 ] then cd $OPENSHIFT_DATA_DIR/wildfly-8.0.0.Alpha3 nohup bin/standalone.sh -b $OPENSHIFT_DIY_IP -bmanagement=$OPENSHIFT_DIY_IP > $OPENSHIFT_DIY_DIR/logs/server.log 2>&1 & else wget unzip wildfly-8.0.0.Alpha3.zip // configure standalone.xml // start Wildfly application server fi
Similarly, we have overwritten the stop action hook. The stop action hook stops the Wildfly application server.
jps | grep jboss-modules.jar | cut -d ' ' -f 1 | xargs kill exit 0
Testing WebSocket Support
To test the Wildfly WebSocket support, I have written a very simple WebSocket application. The application is available on github. The application just has one Java class — ReverseEchoWebSocketServerEndpoint. It is a WebSocket server endpoint which reverses the message and then sends the message back to the client.
@ServerEndpoint("/echo") public class ReverseEchoWebSocketServerEndpoint { private final Logger logger = Logger.getLogger(this.getClass().getName()); @OnOpen public void onConnectionOpen(Session session) { logger.info("Connection opened ... " + session.getId()); } @OnMessage public String onMessage(String message) { if (StringUtils.isBlank(message)) { return "Please send message"; } return StringUtils.reverse(message); } @OnClose public void onConnectionClose(Session session) { logger.info("Connection close .... " + session.getId()); } }
The class shown above exposes a WebSocket endpoint at the /echo URL. The URL is relative to the root of the web socket container and must begin with a leading “/”. If the application is available at-{domain-name}.rhcloud.com/websocket-reverse-echo-example then the WebSocket URL will be ws://wildfly-{domain-name}.rhcloud.com:8000/websocket-reverse-echo-example/echo or wss://wildfly-{domain-name}.rhcloud.com:8443/websocket-reverse-echo-example/echo(secure connection). Please note that in OpenShift, WebSockets are available over ports 8000 and 8443. Please refer to this blog post by Marek Jelen to learn more about OpenShift WebSocket support.
The client side of the application is HTML5 and we use the WebSocket JavaScript API to connect with the backend server. On page load, we create an instance of WebSocket object and listen to various WebSocket events as shown below.
var wsUrl; if (window.location.protocol == 'http:') { wsUrl = 'ws://' + window.location.host + ':8000/websocket-reverse-echo-example/echo'; } else { wsUrl = 'wss://' + window.location.host + ':8443/websocket-reverse-echo-example/echo'; } console.log('WebSockets Url : ' + wsUrl); var ws = new WebSocket(wsUrl); ws.onopen = function(event){ console.log('WebSocket connection started'); }; ws.onclose = function(event){ console.log("Remote host closed or refused WebSocket connection"); console.log(event); }; ws.onmessage = function(event){ console.log(event.data); $("textarea#outputMessage").val(event.data); };
When the user presses the send button, the application writes a message over the WebSocket connection. The message is sent to the server which reverses it and sends it back to the client. The client then writes to the output text area on the right side.
$("button#messageSubmit").on('click',function(){ var message = $('textarea#inputMessage').val(); console.log('Input message .. '+message); ws.send(message); });
Conclusion
In this blog post, we learned how to install the Wildfly application server on OpenShift and deploy our WebSocket based web applications on it. If you are a Java EE developer and want to try out Java EE 7 features, OpenShift is a great. | https://blog.openshift.com/deploy-websocket-web-applications-with-jboss-wildfly/ | CC-MAIN-2017-22 | refinedweb | 1,003 | 50.73 |
#include "Experimental.h"
#include "MacroMagic.h"
Go to the source code of this file.
This file contains definitions of all images, cursors, colours, fonts and grids used by Audacity.
This will be split up into separate include files to reduce the amount of recompilation on a change.
Meantime, do NOT DELETE any of these declarations, even if they're unused, as they're all offset by prior declarations.
To add an image, you give its size and name like so:
If you do this and run the program the image will be black to start with, but you can go into ThemePrefs and load it (load components) from there. Audacity will look for a file called "Pause.png". | http://doxy.audacityteam.org/_all_theme_resources_8h.html | CC-MAIN-2017-51 | refinedweb | 118 | 72.05 |
Click "Launch and Activation Permissions", click Edit Default, click OK, and then close the DCOMCNFG window.

Step 2: Install the SSL certificate without using IIS 7

The following solution describes how to resolve the error. Consider granting access rights to the resource to the ASP.NET request identity.

I have tried those suggestions, but there is no positive result.
Remember that being able to access a file from Explorer is different from having the privileges to access it from your source code. — Sai Kumar K

This solves the problem on Windows Server 2003, and now my web application generates the Excel files with a large amount of data. For Windows Server 2008, someone has mentioned making …
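One way to see that difference concretely is to log which Windows account the code is actually running as at the moment the exception is thrown. The sketch below is only an illustration of the idea, not code from this thread; the file path is a made-up placeholder.

```csharp
using System;
using System.IO;
using System.Security.Principal;

class AccessCheck
{
    static void Main()
    {
        // Hypothetical path -- substitute the file your application fails on.
        string path = @"C:\Reports\sales.xlsx";

        // In Explorer you browse as your own interactive account; this is the
        // account the *code* runs as (e.g. NETWORK SERVICE under IIS 6).
        Console.WriteLine("Running as: " + WindowsIdentity.GetCurrent().Name);

        try
        {
            using (FileStream fs = File.Open(path, FileMode.Open, FileAccess.ReadWrite))
            {
                Console.WriteLine("Opened " + path + " for read/write.");
            }
        }
        catch (UnauthorizedAccessException ex)
        {
            // This is the managed wrapper for HRESULT 0x80070005 (E_ACCESSDENIED).
            Console.WriteLine("Access denied: " + ex.Message);
        }
    }
}
```

If the printed account is not the one you granted NTFS permissions to, that mismatch is usually the whole problem.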
For more information, see Connecting Between Different Operating Systems.
If the SSL certificate is not available in the bindings list, then proceed with the instructions below to set the appropriate permissions.

Use a separate limited account for the site if you want, or enable anonymous access for the site in IIS.
Exception Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) ASP.NET is not authorized to access the requested resource.

ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating.
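Which identity a request runs under is controlled by the `<identity>` element in web.config. The fragment below is a generic sketch of that switch, not configuration taken from this thread:

```xml
<!-- In web.config, under <system.web>. With impersonate="false" (the default),
     requests run as the base process identity (ASPNET / Network Service).
     With impersonate="true", they run as the authenticated caller instead,
     so NTFS permissions must be granted to that account. -->
<system.web>
  <identity impersonate="false" />
</system.web>
```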
I have already fixed the problem regarding login into server 2008.
The namespace you are connecting to is encrypted, and the user is attempting to connect with an unencrypted connection Give the user access with the WMI Control (make sure they have C# Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) Thursday, May 28, 2015 2:48 AM Reply | Quote Microsoft is conducting an online survey to understand your opinion of the Msdn Web site. Powershell Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) So, I have to grant user policy for each web app (full read), now get-spweb works fine.
How should I respond to absurd observations from customers during software product demos? You can use WMI Administrator tool to debug the issue, can download from I hope this helps you... Possible Issues The user does not have remote access to the computer through DCOM. Should we eliminate local variables if we can? Wmi Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied))
The following table lists the three categories of errors along with issues that might cause the errors and possible solutions. more hot questions question feed lang-cs about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life / Arts Culture / Recreation How did Adebisi make his hat hanging on his head? navigate here Konuşma metni Etkileşimli konuşma metni yüklenemedi.
Are the guns on a fighter jet fixed or can they be aimed? Complete Certificate Request Access Is Denied Exception From Hresult 0x80070005 E_accessdenied Terms of Service Layout: fixed | fluid CodeProject, 503-250 Ferrand Drive Toronto Ontario, M3C 3G8 Canada +1 416-849-8900 x 100 Contact Us1-888-484-2983 UK: +44 203 450 5486 Deutschland: +49 69 3807 Typically, DCOM errors occur when connecting to a remote computer with a different operating system version.
Highlight the ASP.NET account, and check the boxes for the desired access. Manoj Kumar 32.785 görüntüleme 53:09 How to Fix Microsoft Visual C++ 2015 Redistributable Setup Failed error 0x80240017 - Süre: 6:15. What is the "crystal ball" in the meteorological station? Clickonce Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) Bu videoyu Daha Sonra İzle oynatma listesine eklemek için oturum açın Ekle Oynatma listeleri yükleniyor...
English (U.S.) Login Remember me Lost password Live Chat by LivePerson News SEARCH Knowledgebase: Comodo Certification Authority Access Denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) Cause: This error occurs Linux questions C# questions ASP.NET questions fabric questions C++ questions discussionsforums All Message Boards... Did MS change something? the account i use is in Farm Admin group, but not the account used to install sharepoint.
Please refer this link for more info. Yükleniyor... Right-click My Computer-> Properties Under COM Security, click "Edit Limits" for both sections. Circular Array Rotation Did Joseph Smith “translate the Book of Mormon”? | http://juicecoms.com/access-is/c-access-is-denied-exception-from-hresult-0x80070005-e-accessdenied.html | CC-MAIN-2017-43 | refinedweb | 995 | 56.05 |
Published by Abbigail Tetlow, modified over 2 years ago
Simple & easy to use inventory management for hotels (Front Office, Restaurant, Bar, Store, Banquet Hall) with integrated accounting (no data re-posting in Accounts). All reports and data outputs required for tax return (E-Filing). No need to take help from accountants. Help screens and booklets are available with the software, and accounting training during installation by our experienced staff. Hotel Management with Integrated Accounting
E-Star : Hotel Management Hotel Management with Integrated Accounting. Reservation, Advance entry, Check-in, Room Billing, Checkouts. Inventory Management for Main Store & Kitchen and Other stores. Restaurant & Bar billing, Banquets hall booking and billing. E-Filing, Luxury Tax, Service Tax, VAT, Cess and other Tax calculations and Reports. Barcoding, Barcode sticker printing using laser printers. Stock Transfer between store, Opening stock entry, Damaged stock write-off. Accounting Reports :- Daybook, Cashbook, Bankbook, Profit & Loss Account, Trial Balance Groupwise & Ledgerwise, Balance Sheet, Receivables & Payables. Self explanatory edit screens. Help screens explaining each screens operations. Frontoffice, Restaurant, Bar daily transaction Reports.
Individual users can be assigned rights as per their role, to get access to the editing screens and reports. Users can be prevented from modifying / cancelling bills, and from viewing reports and edit screens like Purchase and Profit & Loss Account. Financial Year, Company & Location Selection :- Company, Location and Financial Year are selected during login to maintain transactions of different locations and financial years.
Room Check-IN Room Check-in screen.
Tariff Change Changing the Room Tariff & PAX and Shifting the Rooms
Room Advance Advance Payments
Reservation Room Reservation
VAT Rate Setting Available VAT rates have already been added. New VAT rates can be added if required. The Cess percentage can be changed as required.
Unit Setting All required commonly used units are already available with the product. New units, if required, can be added using this screen.
Check-In Check-In :- Check-in to the Rooms. Rooms are selected automatically.
Old Gold Purchase Purchase of old ornaments / Gold. Old Items stocks are maintained separately.
Customer List Clients /Customers can be added / modified using this screen. Opening balance of each client can be edited in the grid directly. Customer should be added before billing credit purchase / sales/ payment or receipts.
BANK ACCOUNTS Bank accounts are created using this screen. Opening balances can be entered directly in the grid after saving the bank account details.
Debit Note Debit Note (Purchase Return) :- Items returned to the purchaser /supplier through debit notes. Stock will be updated automatically. Bill no is auto generated. Selected item can be deleted using Right Click option from List/Grid. Use Search option for earlier records.
Issue to Gold Smith Items issued to the goldsmith for aciding and re-production are entered here. Re-produced items will be added to the stock automatically, and items will be shown in stock as Manufactured Qty.
Receipt from Goldsmith Items issued to the goldsmith are returned as new ones and received through this screen. The stock will be added up automatically.
Credit Note (Sales Return) Items returned by the customer are entered using this screen. Stock will be added up automatically.
Receipt after Aciding Items Received after aciding are entered here will be added to the stock.
Receipt Voucher / Income Entry. This screen is the same as the one shown on the previous slide. The form is simplified so that users without much accounting background can make entries.
Payment Voucher / Expense Entry Payments :- Payments given to clients / customers and expenses like office expense, travel and other expenses are entered through the payment voucher. If the payment mode is Cheque (Bank Ledger), then the amount will be deducted from the selected bank account, else from the Cash Account.
Journal Voucher Journal :- Accounts transactions not involving CASH / BANK are normally entered through Journal voucher. But we can use journal voucher for all transactions. Net Debit amount should be equal to Net Credit amount.
Contra Voucher Transfer between CASH & BANK are entered through Contra Voucher. Or CASH / BANK transaction screen. Cash deposited / withdrawn from bank, Clearing Cheques, Transferring money to Salary Account & Petty Cash account are done through Contra Voucher.
Contra Voucher : Simplified Transfer between CASH & BANK are entered through Contra Voucher. Or CASH / BANK transaction screen. Cash deposited / withdrawn from bank, Clearing Cheques, Transferring money to Salary Account & Petty Cash account are done through Contra Voucher.
Cheque Payments Cheque Payment & Cheque printing screen to pay the money using cheque facility.
Purchase Reports Purchases done during the given period are shown in purchase report. Detailed reports are also available as per user requirements.
Sales Reports Period wise Sales summary and details are available in the Sales report section. Item wise sales, VAT wise sales, Customer wise sales etc are also available.
Day Book All Transactions excluding Cash & Bank are shown in Daybook. CASH transactions are shown in CASH Book and Bank transactions are shown in BANK BOOK.
CASH BOOK CASH BOOK :- Accounts transactions involving only CASH are shown in CASH BOOK. Opening Balance, transactions during given period, Closing balance are shown in CASH BOOK.
BANK BOOK All bank transactions are shown in the Bank Book.
Profit & Loss Account Profit & Loss A/C for the period is shown in P & L A/C.
Receivables & Payables List List of Receivable amounts with customer name and and Payable amount with customer name are shown in this screen.
Trial Balance : Ledger Transaction Details / Customer Transaction Details for the given period, with printout. Period-wise transactions for the given period of the selected Ledger / Customer are shown on the right side. Right-click to print the report of that particular Ledger / Customer.
Ledger wise transaction Print
E-Filing : Purchase & Sales VAT Return VAT given during purchase and VAT collected from the customers are shown in Form-52 for E-Filing. KVAT files required to be uploaded during E-Filing are created using this screen.
For any query regarding Software, Accounting, facilities with E-Count, Barcoding, Customisation of Software. Many more options are available with the software which are not mentioned in the presentation. Call : +91-8089611161, 9995579750, 0484-6413231 E-Mail : eeetechglobal@yahoo.com, ps_elias@yahoo.in Other Software like Lab management, School Management, Hotel Management, Inventory Management, Chits (Financial) Management in Single & ERP versions are also available.
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/1581970/ | CC-MAIN-2017-13 | refinedweb | 1,134 | 57.77 |
Authors: Chelsey Ong, Lu Lechuan
Reviewers: Gilbert Emerson, Ong Shu Peng
VueJs (also known as Vue) is an open-source JavaScript framework for building user interfaces. It is designed to improve code quality and maintainability.
This is a simple example to show how easy it is to integrate VueJs into your web project:
The main HTML file:
<body>
  <div id="root">
    <h2>{{ message }}</h2>
  </div>
  <script src=""></script>
  <script src="the_path_to_the_javascript_file.js"></script>
</body>

This is inside the JavaScript file:

new Vue({
  el: '#root',
  data: {
    message: "Hello World"
  }
});
Step-by-step explanation of the code:
Step 1: Import VueJs
<script src=""></script> <script src="the_path_to_the_javacript_file.js"></script>
Step 2: Create an instance of Vue (Vue is an object) in the JavaScript file, and bind the instance to one of the components in our HTML file (e.g. create a component with id root and bind it with the instance of Vue).

In this case, only the root component can be accessed by VueJs while the rest are unaffected. This is how we progressively plug VueJs into our projects.
new Vue ({ el: '#root', });
<div id="root"></div>
Step 3: Specify our data (message: "Hello World") in the Vue instance.
data: { message: "Hello World" }
Step 4: Pass the message to the HTML file using double curly brackets.
<div id="root"> <h2>{\{message}\{</h2> </div>
Step 5: Open the browser and we will see "Hello World" being displayed:
Hello World
Mutating of data in the DOM
In Vue, the state of the data can be directly modified.
Let's say there is a variable called message in your app. To modify message, you can do the following:

this.message = 'Hello Space';

When message is changed, the view will be re-rendered to show the new message. So you can say, the DOM is "reacting" to the changes in your data.
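Under the hood, Vue 2 achieves this by replacing each property in data with getter/setters via Object.defineProperty. Here is a stripped-down, framework-free sketch of the idea (not Vue's actual implementation, which also tracks which components read which properties):

```javascript
// Simplified sketch of Vue 2-style reactivity: wrap each property in
// `data` with a getter/setter so assignments can trigger a re-render.
function makeReactive(data, onChange) {
  Object.keys(data).forEach(function (key) {
    let value = data[key];
    Object.defineProperty(data, key, {
      get: function () { return value; },
      set: function (newValue) {
        value = newValue;
        onChange(key, newValue); // a real framework would re-render the view here
      }
    });
  });
  return data;
}

const vm = makeReactive({ message: 'Hello World' }, function (key, val) {
  console.log('re-rendering: ' + key + ' is now "' + val + '"');
});

vm.message = 'Hello Space'; // triggers the setter, so the "view" reacts
```

This is why plain assignments like this.message = 'Hello Space' are enough to update the page: the assignment itself runs framework code.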
2-way binding
v-model is a Vue directive used to bind the DOM input field to its data variable.
This allows the DOM variables and data to be "in sync", regardless of which one is being updated first. In other words, if you change the input value, the bound data will change, and vice versa.
<input type="checkbox", v-model=isChecked"> <label for="checked">Select</label>
When the checkbox is selected, isChecked is set to true. If the program sets isChecked to false, then the checkbox will be unselected.
This reduces any extra step required to manually update the data.
2-way binding is useful for updating input form bindings such as checkboxes or drop-downs, where new data is entered by users and then updated in the view.
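To see what v-model is doing for you, here is a framework-free sketch of a two-way binding. A plain object stands in for the DOM input element so the snippet runs anywhere; the property and element names are made up for illustration:

```javascript
// Toy two-way binding: keep a model property and an "input" in sync.
function twoWayBind(model, key, input) {
  let value = model[key];
  // data -> view: writing to the model updates the input's value
  Object.defineProperty(model, key, {
    get: function () { return value; },
    set: function (newValue) {
      value = newValue;
      input.value = newValue;
    }
  });
  // view -> data: the input's change handler writes back to the model
  input.onchange = function () { value = input.value; };
  input.value = value;
}

const data = { isChecked: false };
const checkbox = { value: null, onchange: null }; // stand-in for <input>

twoWayBind(data, 'isChecked', checkbox);

data.isChecked = true;       // program sets data -> "view" updates
console.log(checkbox.value); // true

checkbox.value = false;      // user "clicks" the checkbox...
checkbox.onchange();         // ...and the change handler syncs the data
console.log(data.isChecked); // false
```

v-model wires up exactly this pair of listeners for you on real DOM elements.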
Passing data from outer to inner components
When you have components that are nested within each other, data is passed from the outer component to the inner component via props, where props are just custom data shared between the components.
This follows the 1-way data flow encouraged by Vue, which ensures that data can only be changed by the component itself and also allows bugs to be easily traced in the code.
Vue.component('todo-list', {
  props: ['item'],
  data: function () {
    return { totalCount: 0 };
  },
  template: `
    <div class='todo-list'>
      <p>Total: {{ totalCount }}</p>
      <p>{{ item.name }}: {{ item.count }}</p>
    </div>`
});

<todo-list v-bind:item="item"></todo-list>
to-do list contains item, i.e. to-do list is the outer component and item is the inner component.
Note that props is passed from the outer component to the inner component, while data is kept private within a component.
Emitting events
However, what if the user decides to update item.count? The data for item.count has to be passed from item to todo-list so that totalCount can be updated inside todo-list.
How do we do that if we have to follow the 1-way data flow rule?
In situations where the inner component has to pass data back to the outer component, the inner component has to emit custom events and the outer component will update after listening to these events.
You can think of emitting events like putting out a flyer about an event. If someone is interested in this event, he or she can gather more information through reading the flyer.
Vue.component('item', {
  props: ['name', 'count'],
  template: `
    <button v-on:click="$emit('increased-count')">Increment item count</button>`
});

/* Inside the todo-list template */
<item v-on:increased-count="updateCount"></item>
Computed properties
This is useful when you want to compose new data based on the data that has changed. Instead of calling methods to do that whenever data has changed, computed properties will do it for you automatically.
computed: {
  totalCount() {
    let result = 0;
    this.items.forEach((item) => result += item.count);
    return result;
  }
}
Unlike the use of methods, this updating of totalCount will only be triggered when the number of items in the list or any item's count has changed.
Since computed properties are cached and will not be processed every time the page refreshes, this can greatly improve the efficiency of your application.
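The caching behaviour can be sketched without Vue: keep the last result and a dirty flag, and recompute only when a dependency has been invalidated. In Vue, the dependency tracking that flips this flag is automatic; here it is done by hand:

```javascript
// Toy cached computed property: recompute only when marked dirty.
function computed(fn) {
  let cache;
  let dirty = true;
  return {
    get: function () {
      if (dirty) {        // recompute only when inputs changed
        cache = fn();
        dirty = false;
      }
      return cache;       // otherwise serve the cached result
    },
    invalidate: function () { dirty = true; } // Vue does this for you
  };
}

const items = [{ name: 'milk', count: 2 }, { name: 'eggs', count: 10 }];
const totalCount = computed(function () {
  let result = 0;
  items.forEach(function (item) { result += item.count; });
  return result;
});

console.log(totalCount.get()); // 12 (computed once)
console.log(totalCount.get()); // 12 (served from cache)

items[0].count = 4;
totalCount.invalidate();       // in Vue, the dependency tracker does this
console.log(totalCount.get()); // 14 (recomputed)
```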
Watched properties
Watched properties are used to call other functions when a particular piece of data has been updated.
For example, when a new item is added, we want to send a notification to our friend to alert him or her about the change. A watched property on items can be added so that a notification can be sent whenever items has changed.
watch: {
  items: {
    handler: function() {
      let result = 0;
      this.items.forEach((item) => result += item.count);
      this.totalCount = result;
      // notify friend about the change
    },
    deep: true  // also fire when an item's count changes, not just on add/remove
  }
}
This may look quite similar to computed properties. To decide which is more suitable for your feature, here is a brief comparison: a computed property derives a new value from existing data and caches the result, so use it when you need to compose new data; a watched property runs arbitrary code (side effects such as sending a notification) whenever the data it watches changes, so use it when you need to react to a change rather than compute a value.
Approachable:
VueJs is very easy to learn. Compared to other framework such as Angular and ReactJs, VueJs is simple in term of API and design. Learning enough to build non-trivial applications typically takes less than a day. An example is provided below:
How is iteration like in ReactJs:
The JavaScript file in ReactJs
var Iteration = React.createClass({
  getInitialState() {
    return {
      array: ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
    }
  },
  render() {
    return (
      <div>
        {this.state.array.map(function(date) {
          return <span>{date}</span>;
        })}
      </div>
    );
  }
});
ReactDOM.render(<Iteration />, document.getElementById('array'));
The HTML file in ReactJs
<div id="array"></div>
How is iteration like in VueJs:
The JavaScript file in VueJs
var Iteration = new Vue({ el: '#array', data: { array: ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"] } });
The HTML file in VueJs
<div id="array"> <span v-{date}</span> </div>
Progressive:
VueJs is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or existing projects. This means that if you have a large application, you can plug VueJs into just a part of your application without disturbing the other components. A quote from Evan You - the founder of VueJs is as follow:
Vue.js is a more flexible, less opinionated solution (than Angular). That allows you to structure your app the way you want it to be, instead of being forced to do everything the Angular way (Angular requires a certain way to structure an application, making it hard to introduce Angular into an already built project). It’s only an interface layer so you can use it as a light feature in pages instead of a full blown SPA (single-page application).
Versatile:
VueJs is perfectly capable of powering sophisticated single-page applications when used in combination with modern tooling and supporting libraries.
Clean:
VueJs syntax is simple and this can make the HTML pages very clean. This would allow user interfaces built by VueJs to be more maintainable and testable.
Relatively small size community:
VueJs is a relatively new JavaScript framework compared to Angular and React. The size of the community for VueJs is therefore relatively small. Although a small community means you can differentiate yourself from other JavaScript developers, it also means there are fewer resources such as tutorials and troubleshooting guides.
Language barriers:
A majority of VueJs users are Chinese, as VueJs was developed by a Chinese American. He is supportive of the Chinese community and hence a lot of the existing plugins are written in Chinese. There might be some language barriers for an English-speaking developer seeking VueJs resources.
Detailed Comparison of VueJs with other JavaScript frameworks can be found from:
Links to VueJs tutorials and practices: | https://se-education.org/learningresources/contents/javascript/Javascript-framework-VueJs.html | CC-MAIN-2019-26 | refinedweb | 1,405 | 52.29 |
import "golang.org/x/mobile/event/touch"
Package touch defines an event for touch input.
See the golang.org/x/mobile/app package for details on the event model.
type Event struct { // X and Y are the touch location, in pixels. X, Y float32 // Sequence is the sequence number. The same number is shared by all events // in a sequence. A sequence begins with a single TypeBegin, is followed by // zero or more TypeMoves, and ends with a single TypeEnd. A Sequence // distinguishes concurrent sequences but its value is subsequently reused. Sequence Sequence // Type is the touch type. Type Type }
Event is a touch event.
Sequence identifies a sequence of touch events.
Type describes the type of a touch event.
const ( // TypeBegin is a user first touching the device. // // On Android, this is a AMOTION_EVENT_ACTION_DOWN. // On iOS, this is a call to touchesBegan. TypeBegin Type = iota // TypeMove is a user dragging across the device. // // A TypeMove is delivered between a TypeBegin and TypeEnd. // // On Android, this is a AMOTION_EVENT_ACTION_MOVE. // On iOS, this is a call to touchesMoved. TypeMove // TypeEnd is a user no longer touching the device. // // On Android, this is a AMOTION_EVENT_ACTION_UP. // On iOS, this is a call to touchesEnded. TypeEnd )
Package touch imports 1 package and is imported by 20 packages. Updated 2017-10-17.
File Management in .NET
- Working with Files
- Working with Drives and Directories
In two recent articles we looked at file access in the .NET framework—how to write data to and read data from both binary and text files. There’s one more aspect of file programming that a developer needs to know about, and that’s the file management tasks:
- Creating and deleting folders
- Moving, copying, and deleting files
- Getting information about drives, folders, and files
I’m glad to say that the .NET Framework’s approach to file management is well designed and easy to use.
Working with Files
There are two .NET classes that you’ll use for copying, moving, deleting, and getting information about files: the File class and the FileInfo class. Both classes are part of the System.IO namespace. They differ as follows:
- To use the FileInfo class, you must create an instance of the class that’s associated with the file of interest. You then call the appropriate methods to perform operations on that file.
- The File class is an abstract class and cannot be instantiated. You call the File class’s methods and pass as a parameter the name of the file you want to manipulate.
Let’s look at an example of how these two classes differ in usage. Suppose you want to determine the date and time that a file was created. Here’s how you would accomplish this task with FileInfo:
Dim d As DateTime
Dim f As New FileInfo("c:\mydatafile.dat")
d = f.CreationTime
In contrast, here’s how you would use the File class for the same task:
Dim d As DateTime
d = File.GetCreationTime("c:\mydatafile.dat")
Table 1 describes the members of the File and FileInfo classes that perform frequently needed file-management tasks.
Table 1 Members of the File and FileInfo classes.
File management actions provide many opportunities for exceptions to be thrown. Here are just a few examples:
- Trying to copy a nonexistent file
- Passing a filename/path that’s too long
- Trying to delete a file when you don’t have the required permissions
I advise that you place all file management code in Try..Catch blocks to ensure that unhandled exceptions cannot occur. I won’t go into the details of the possible exceptions here, but you can find the details in the .NET documentation.
Let’s compare some examples of using the File and FileInfo classes. To move the file c:\new_data\report.doc to the folder c:\old_data under the same filename, here’s the code using the FileInfo class:
Dim f As New FileInfo("c:\new_data\report.doc") f.Move("c:\old_data\report.doc")
To perform the same task using the File class, you would write this:
File.Move("c:\new_data\report.doc", " c:\old_data\report.doc")
To delete the file c:\documents\sales.xls, you can use this code:
File.Delete("c:\documents\sales.xls")
Or you could use the following code:
Dim f As New FileInfo("c:\documents\sales.xls") f. | http://www.informit.com/articles/article.aspx?p=482928 | CC-MAIN-2016-40 | refinedweb | 506 | 64.91 |
In my previous post, I mentioned I had some difficulties with deploying my backend. In this post, I'll be talking about what those difficulties were, and how you can deploy your Apollo Server with TypeScript using path aliases without going through all the hassle I experienced. I hear you asking why did I choose Vercel? I'm a simple man; I see good UI, I deploy... You might be also wondering what's up with that cover image? Don't worry, I don't know how my mind works either. Let's start with explaining what path aliases are and explaining the problem, then we'll continue with the setup.
Path Alias
A path alias is a representation of a certain path that you don't want to hardcode everytime you import something. So, instead of this:
import { normalizeString } from "../../../../../../../../tools";
You can do this:
import { normalizeString } from "tools";
Aliases are very handy for keeping your project sane. The problem with my setup, though: you have to specify your aliases for both TypeScript and webpack.
The Problem
At first, I tried both Vercel and Heroku. Both were unable to run TypeScript directly. Since I like its UI, I decided on Vercel for going forward. When I tried to deploy the project again by compiling it to JavaScript first, the output file didn't work. The reason for that is I used path aliases in the project, but TypeScript doesn't convert them into real paths when compiling. For that, I used webpack with ts-loader to compile the project into JavaScript. I also configured my path aliases in webpack config too. Now the build file was working on localhost. Once again, I tried to deploy it to Vercel, but again, it didn't work. Turns out, you shouldn't be containing your app.listen() function inside another function. And I was doing that, because I was using TypeORM at that time. And TypeORM requires you to wrap your app.listen() function inside its initialization function so that you can establish your database connection before your API starts running. So I switched to Mongoose and it was a better choice to be honest since I was using a NoSQL database anyway. And I tried to deploy the project, again. Well.. It didn't work, again. I figured that I had to specify my API route in vercel.json, so I tried again. This time, it worked! Everything was flawless after that. Now I deploy the project with npm run deploy without any problems. However, enough stories. Now we'll talk about how you can do that too.
1. Configure TypeScript
Here is how my tsconfig.json looks like:
{ "compilerOptions": { "target": "es5", "lib": ["dom", "dom.iterable", "esnext"], "allowJs": true, "module": "commonjs", "moduleResolution": "node", "outDir": "dist", "removeComments": true, "strict": true, "strictPropertyInitialization": false, "esModuleInterop": true, "resolveJsonModule": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "baseUrl": "./", "paths": { "config": ["config"], "interfaces": ["interfaces"], "services": ["services"], "entities": ["entities"], "resolvers": ["resolvers"] } }, "include": ["**/*.ts"], "exclude": ["node_modules"] }
As you can see I have 5 path aliases named config, interfaces, services, entities and resolvers. They all located at the root of the project, so the baseUrl is "./". Don't forget to specify that.
2. Install and Configure Webpack
First let's install webpack and other dependencies we need:
npm i --save-dev webpack npm i --save-dev webpack-cli npm i --save-dev webpack-node-externals npm i --save-dev ts-loader
Now we need to create a config file named webpack.config.js. Create that in your root folder. You can copypasta and edit mine:
const nodeExternals = require("webpack-node-externals"); const path = require("path"); module.exports = { entry: "./src/app.ts", target: "node", externals: [nodeExternals()], mode: "production", module: { rules: [ { test: /\.tsx?$/, use: "ts-loader", exclude: /node_modules/ } ] }, resolve: { alias: { config: path.resolve(__dirname, "config"), interfaces: path.resolve(__dirname, "interfaces"), services: path.resolve(__dirname, "services"), entities: path.resolve(__dirname, "entities"), resolvers: path.resolve(__dirname, "resolvers") }, modules: ["src"], extensions: [".ts", ".js"] }, output: { filename: "app.js", path: path.resolve(__dirname, "dist") } };
There are some important fields here. entry is of course the starting point of your app. In alias, you have to specify all the path aliases you also configured in your tsconfig.json. In output, filename is the file name of the output file webpack builds for us. And the path is the location where you want webpack to put it. In my case, it's the "dist" folder.
3. Compile Your Project with Webpack
Open the command line in your root folder and run:
npx webpack
If you configured your webpack.config.js same as mine, your output file should be located at the dist folder. This is what we will be deploying to Vercel.
4. Install Vercel CLI and Login
To install:
npm i -g vercel
And to login:
vercel login
It'll send you an email; don't forget to check your junk folder.
If you use Windows and you are getting an security error in the command line, launch command line again as administrator and type:
Set-ExecutionPolicy RemoteSigned
Press A and enter. Then run the login command again.
5. Configure Your Vercel Deployment
Create a vercel.json file at the root folder of your project. And again, just copypasta mine and edit if you need to:
{ "version": 2, "builds": [{ "src": "dist/app.js", "use": "@now/node" }], "routes": [{ "src": "/", "dest": "dist/app.js" }] }
This tells Vercel to run your API on the root directory with node runtime. Here is the important part; the path you specified in vercel.json must match with the path you specified in Apollo's applyMiddleware() function. This is what I'm talking about:
server.applyMiddleware({ app, path: "/" });
This is a simplified version of my usage of applyMiddleware() function. If I wanted to run my API in the "/api" directory, the vercel.json would look like this:
{ "version": 2, "builds": [{ "src": "dist/app.js", "use": "@now/node" }], "routes": [{ "src": "/api", "dest": "dist/app.js" }] }
And my applyMiddleware() function would look like this:
server.applyMiddleware({ app, path: "/api" });
With that, we're done with the setup.
6. Deploy Your App to Vercel
This is the hardest part. I'm kidding, just run this on command line:
vercel --prod
In your first deployment, it'll ask you some properties to create your project on Vercel. After your deployment is done, it'll show you the link and it'll automatically copy that link to your clipboard. You can also add these lines in the scripts field of your package.json file for easing future deployments:
"build": "npx webpack", "deploy": "npm run build && vercel --prod"
Conclusion
I wanted to post this because the first few steps are the same for every platform. However, I think Vercel is more intended to be used with serverless functions. And as far as I know it doesn't support web sockets server-side, so be aware of that. Considering those, you might want to rethink your architecture according to your needs. Although in my case, my project, which I talked about in this post, was a small-scale personal one. You might want to go with Heroku, AWS or Netlify, but in my opinion this is also a good option for hobbyists.
I hope this was useful, you can also follow me on Twitter for future content:
Top comments (4)
You've inspired me to migrate my cra project
Text to GIF animation — React Pet Project Devlog
Kostia Palchyk ・ Jun 17 ・ 6 min read
to the next.js/vercel platform! So far so good: the deployment experience is very smooth! Thanks :)
I'm yet to try adding authentication and db connections ("api routes" promise to solve that), will see how that goes.
Another promising platform I'd like to try some time is: nx.dev/react .
It is really flawless with GitHub Continuous Integration. I'll look into Nx too, thank you!
👍 Looking forward your next dev journey update!
Btw, I have a project on Heroku, but I'm not sure I can reproduce the deployment process again: truly, the UX could be smoother.
Though my worst experience was with Google Cloud Run. I failed with it miserably :)
Next one will be wild on frontend...
I don't like Heroku's Git based CLI either. I use it when I need to use web sockets. | https://dev.to/ozanbolel/deploying-apollo-server-with-typescript-path-aliases-to-vercel-4k5l | CC-MAIN-2022-40 | refinedweb | 1,380 | 67.15 |
N26 has been using Jenkins for as long as I’ve been part of this company. Over the last few years, I have been responsible for the quality of our delivery, and this involved getting my hands dirty with Jenkins. I learnt a lot, sometimes the hard way, and thought I’d share what I know before I leave the company and loose access to the code.
Please note that I am by no mean a Jenkins expert. I’m a frontend developer at heart, so this is all pretty alien to me. I just wanted to share what I learnt but this might not be optimal in any way.
- Go scripted
- Jenkins fails fast
- Conditional parallelisation
- Mark stages as skipped
- Built-in retry
- Manual stage retry
- Confirmation stage
- Handle aborted builds
- Build artefacts
Go scripted
The declarative syntax is nice for simple things, but it is eventually quite limited in what can be done. For more complex things, consider using the scripted pipeline, which can be authored with Groovy.
I would personally recommend this structure:
// All your globals and helpers
node {
try {
// Your actual pipeline code
} catch (error) {
// Global error handler for your pipeline
}
}
For more information between the scripted and the declarative syntaxes, refer to the Jenkins documentation on the pipeline syntax.
Jenkins fails fast
By default, Jenkins tends to resort to fast failing strategies. Parallel branches will all fail if one of them does, and sub-jobs will propagate their failure to their parent. These are good defaults in my opinion, but they can also be a problem when doing more complex things.
When parallelising tasks with the
parallel function, you can opt-out to this fast-failing behaviour with the
failFast key. I’m not super comfortable with the idea of having an arbitrarily named key on the argument of
parallel but heh, it is what it is.
Map<String, Object> branches = [:]
// Opt-out to fail-fast behaviour
branches.failFast = false
branches.foo = { /* … */ }
branches.bar = { /* … */ }
parallel branches
For programmatically scheduled jobs, you can also opt-out the failures being propagated up the execution tree with the
propagate option:
final build = steps.build(
job: 'path/to/job',
parameters: [],
propagate: false
)
The nice thing about this is that you can then use
build.status to read whether the job was successful or not. We use that when scheduling sub-jobs to run our end-to-end tests, and reacting to tests having failed within terminating the parent job.
Conditional parallelisation
For performance reasons, we have a case where we want to run two tasks in parallel (
foo and
bar for sake of simplicty), but whether or not one of these tasks (
bar) should run at all depends on environment factors. It took a bit of fidling to figure out how to skip the parallelisation when there is only one branch:
def branches = [:]
// Define the branch that should always run
branches.foo = { /* … */ }
if (shouldRunBar) {
branches.bar = { /* … */ }
parallel branches
} else {
// Otherwise skip parallelisation and manually execute the first branch
branches.foo()
}
Mark stages as skipped
I don’t know how universal this is, but if you would like to mark a stage as actually skipped (and not just guard your code with a if statement), you can use the following monstrosity. This will effectively change the layout in BlueOcean to illustrate the skip.
org.jenkinsci.plugins.pipeline.modeldefinition.Utils.markStageSkippedForConditional("${STAGE_NAME}")
For instance:
stage('Second') {
if (env == 'live') {
skipStage()
} else {
// Test code
}
}
Built-in retry
It can happen that some specific tasks are flaky. Maybe it’s a test that sometimes fail, or a fragile install, or whatnot. Jenkins has a built-in way to retry a block for a certain amount of times.
retry (3) {
sh "npm ci"
}
Manual stage retry
Our testing setup is pretty complex. We run a lot of Cypress tests, and they interact with the staging backend, so they can be flaky. We cannot afford to restart the entire build from scratch every time a request fails during the tests, so we have built a lot of resilience within our test setup.
On top of automatic retrying of failing steps (both from Cypress behaviour and from a more advanced home made strategy), we also have a way to manually retry a stage if it failed. The idea is that it does not immediately fail the build—it waits for input (“Proceed” or “Abort”) until the stage either passes or is manually aborted.
stage('Tests') {
waitUntil {
try {
// Run tests
return true
} catch (error) {
// This will offer a boolean option to retry the stage. Since
// it is within a `waitUntil` block, proceeding will restart
// the body of the function. Aborting results in an abort
// error, which causes the `waitUntil` block to exit with an
// error.
input 'Retry stage?'
return false
}
}
}
Confirmation stage
When you are not quite ready for continuous deployment, having a stage to confirm whether the build should deploy to production can be handy.
stage('Confirmation') {
timeout(time: 60, unit: 'MINUTES') {
input "Release to production?"
}
}
We use the
input command to await for input (a boolean value labeled “Proceed” or “Abort” by default). If confirmed, the pipeline will move on to the next instruction. If declined, the
input function will throw an interruption error.
We also wrap the
input command in a
timeout block to avoid having builds queued endlessly all waiting for confirmation. If no interaction was performed within an hour, the input will be considered rejected.
To avoid missing this stage, it can be interesting to make it send a notification of some sort (Slack, Discord, email…).
Handle aborted builds
To know whether a build is aborted, you could wrap your entire pipeline in a try/catch block, and then use the following mess in the catch.
node {
try {
// The whole thing
} catch (error) {
if ("${error}".startsWith('org.jenkinsci.plugins.workflow.steps.FlowInterruptedException')) {
// Build was aborted
} else {
// Build failed
}
}
}
Build artefacts
It can be interesting for a build to archive some of its assets (known as “artefacts” in the Jenkins jargon). For instance, if you run Cypress tests as part of your pipeline, you might want to archive the failing screenshots so they can be browsed from the build page on Jenkins.
try {
sh "cypress run"
} catch (error) {
archiveArtifacts(
artifacts: "cypress/screenshots/**/*.png",
fingerprint: true,
allowEmptyArchive: true
)
}
Artefacts can also be retrieved programmatically across builds. We use that feature to know which tests to retry in subsequent runs. Our test job archives a JSON file listing failing specs, and the main job collects that file to run only these specs the 2nd time.
final build = steps.build(job: 'path/to/job', propagate: false)
// Copy in the root directory the artefacts archived by the sub-job,
// referred to by its name and job number
if (build.status = 'FAILURE') {
copyArtifacts(
projectName: 'path/to/job',
selector: specific("${build.number}")
)
}
That’s about it. If you think I’ve made a gross error in this article, please let me know on Twitter. And if I’ve helped you, I would also love to know! 💖 | https://www.scien.cx/2020/11/17/everything-i-know-about-jenkins/ | CC-MAIN-2021-31 | refinedweb | 1,167 | 61.26 |
Hi!
- FRS event 13568 recommendations
- Missing attribute editor in ADUC Find
- DCs and load balancers
- Cross-file RDC behavior
- DFS out of site referrals due to WAN appliances
- Testing AD Schema extension
- VMware is expensive
Question.
So which is it?
Answer.
Question
I was wondering if it is intentional that the “attribute editor” tab is not visible when you use “Find” on an object in AD Users and Computers?
Answer.
Question
Are there any issues with putting DC’s behind load-balancers?
Answer.
Question
The documentation on DFSR’s cross-file RDC is pretty unclear – do I need two Enterprise Edition servers or just one? Also, can you provide a bit more detail on what cross-file RDC does?
Answer?
Question
I’m seeing DFS namespace clients going out of site for referrals. I’ve been through this article “What can cause clients to be referred to unexpected targets.” Is there anything else I’m missing?
Answer. 🙂
A double-sided network capture will show this very clearly – packets that leave one computer will arrive at your DFS root server with a completely different IP address. Reconfigure the WAN appliance not to do this or contact their vendor about other options.
Question?
Answer.).
Question
I <blah blah blah> Windows <blah blah blah> running on VMWare.
Answer:
-§ions=5045
-
-
Welcome Scooter and thanks for all the hard work Craig!
– Ned “I’ll let you try my Clip-Tang style!” Pyle
Regarding the Journal Wrap. If you set the "Enable Journal Wrap Automatic Restore" key to 1, and restart the DC. Isn't the key reverted back to 0? If so how can you get a loop?
What happend back in 2000 SP3 (and newer) that didn't "support" this way to recover from JWs?
What is the difference setting the "Burflags" key to D2 (non-authoritative) compared to setting the "Enable Journal Wrap Automatic Restore" key to 1?
Hi,
It’s not reverted back – you might be thinking of the burflag values, which do revert after each use.
The stuff I mentioned is what happened. 🙂 When the value was set, you got into these sorts of perpetual not quite fixed modes. It caused a huge support headache with us, so we banished the setting from our troubleshooting steps and documentation – for example,
these two complete lists of FRS registry values don’t mention it. 🙂
support.microsoft.com/…/221111
technet.microsoft.com/…/cc786122(WS.10).aspx
Setting a D2 does some of the the same basic operations, but with human control. And once done, it never happens again. I don’t recall if they do exactly the same things, it would require a code review. But since it’s not a supported registry value, that
will have to wait for another day. 🙂
<blah blah blah>
Warren here again. This is a quick reference guide on how to calculate the minimum staging area needed
Pingback from DFS – Logs de eventos
Pingback from DFS – Logs de eventos
Pingback from DFS – Logs de eventos 4202, 4204, 4206, 4208 e 4212.
Friday Mail Sack: Scooter Edition – Ask the Directory Services Team – Site Home – TechNet Blogs | https://blogs.technet.microsoft.com/askds/2010/08/20/friday-mail-sack-scooter-edition/ | CC-MAIN-2019-26 | refinedweb | 518 | 63.9 |
In addition to Scrapy spiders, you can also run custom, standalone python scripts on Scrapy Cloud. They need to be declared in the s
cripts section of your project
setup.py file.
⚠ Note that the project deployed still needs to be a Scrapy project. This is a limitation that will be removed in the future.
Here is a
setup.py example for a project that ships a
hello.py script:
from setuptools import setup, find_packages setup( name = 'myproject', version = '1.0', packages = find_packages(), scripts = ['bin/hello.py'], entry_points = {'scrapy': ['settings = myproject.settings']}, )
After you deploy your project, you will see the
py:hello.py script on the Scrapinghub dashboard, in the Run pop-up dialog and in the Add periodic job pop-up dialog.
It’s also possible to schedule a script via the Scrapinghub API:
curl -u API_KEY: -X POST -d "project=123" -d "spider=py:hello.py" -d "cmd_args=-a --loglevel=10 x y"
And with the python-scrapinghub library:
from scrapinghub import Connection conn = Connection('API_KEY') project = conn[123] project.schedule('py:hello.py', cmd_args='-a --loglevel=10 x y') | https://support.scrapinghub.com/support/solutions/articles/22000200394-running-custom-python-scripts | CC-MAIN-2019-35 | refinedweb | 183 | 68.67 |
cajunflavoredbob
Senior Member
About Me
- About cajunflavoredbob
- Service provider
- U.S.A. - T-Mobile
- City
- Your Basement
- Home country
- South Africa
- Signature
- ¯-.¸¸.·´¯-.¸¸.·´¯-.¸¸.·´¯ New users: Please click HERE or HERE before posting ¯-.¸¸.·´¯-.¸¸.·´¯-.¸¸.·´¯
PHP Code:
public class XDA {
public static void main(String[] args) {
System.out.println("XDA Member");
if (You beg for thanks) {
"Go Jump Off a Bridge!";
}
}
}
Whoso findeth a wife findeth a good thing, and obtaineth favour of the Lord. –Proverbs 18:22 KJV
Most Thanked
Thanks
Post Summary
394
READ THE FIRST TWO POSTS ENTIRELY BEFORE ATTEMPTING ANYTHING. MOST ISSUES HAVE ALREADY BEEN SOLVED. This is the original Google Now version without sport team adding or stocks. This will not be updated any further. Use a Jellybean ROM if you want...
190
Welcome to the Midnight Gapps Project Click the image for a larger preview. Welcome to the wonderful world of inverted applications. If this is your first time here, I’d i...
102
This post is dedicated to those tireless members of our society who constantly ruin things for the rest of us with their senseless questions. All these people have ended up on this list for their lack of visual comprehension in this thread. If you... | https://forum.xda-developers.com/member.php?s=517024b0030c0ace4910dd44c19dad93&u=2543015 | CC-MAIN-2020-16 | refinedweb | 200 | 76.32 |
- How
- Help - ASP.NET web app & Javascript
- Want to learn how to launch lobbied applications in C#
- Unixtime problem
- exporting a C++ object in a DLL
- how can i place datagrid in asp.net2.0 & where i can find in toolbox
- what is the base class in asp.net2.0
- Dynamically Assigning SqlParameter Names
- New to ASP.NET Having trust issues
- No clock on the zune
- Validating an XML file with an external schema
- Lost incoming emails
- Dennis : What IP address ranges.....
- Having my controls work in firefox
- Help with TypeLoadException (marshaling unmanaged code)
- WCF w/ Custom DataSets
- Remove method
- Publish service - Best practice
- reading datas from XML using C#
- Windows media Player For Pocket Pc
- How to call a button_click(aspx.cs) event from javascript function
- Visual C# Drawing is Beneath Everything
- receiving sms in asp.net
- Script to deploy websites and create databases ASP.NET C#
- .net
- Numeric up and down - minusing from label
- access database connection in vb .net
- How To Display a Windows Form(Vb form) into HTML Page?
- COM object with CLSID {EFAC2D80-175B-11D2-9927-006097C27C31} is either not valid or n
- C# Form Opening
- Use Reflection to get the method and parameter information from a vc6 COM
- How can I make Console::ReadLine accepts int? In other words.
- Dynamically Loading an XSLT Transform Inside an HTML DIV?
- Access to SOAP Objects
- Sky Anytime application error.
- C# WebBrowser returning HTML Button
- Service project
- Retrieve a drive serial number in VC++ dot net 2.0
- Selecting a row in a webtable
- application's configuration files must contain 'trust' section
- STA Thread Appartment
- wpf questions
- editing LOG file online using VB on visual web developer 2005
- LOC tool?
- how to extract from _inside_ xml tag
- vb6 xmlhttp
- JavaScript class does not work on NetScape
- .Net web services with session mangement
- Oracle bind variables causing runtime error
- Encode a String for Use in XML
- Library differences when adding Web Reference
- Using C++/CLI wrapper to call .NET assembly from native code?
- IE 7 for XP
- textbox text format in web app
- Pulling integers out of a tableAdapter into a variable etc.
- select time (hours,minutes,second) in updown arrow
- editor and webbased lists
- Compare two dropdowns in ASP.NET
- Passing more than one variable from utility function
- Updating a Access database
- vb.NET review and load previously saved data
- Urgent Requirement For .NET Programmers In Bangalore
- SQL database communication errors
- Web Application on Vista
- Viewstate of password
- What is repeatable inheritance in dotnet?
- new to coding
- Crystal Reports Problem: Grey Image Background When Printing
- Listing Files in a Directory for ASP.NET(Directory ListBox)
- Problem with deserialization
- Specified argument was out of the range of valid values
- GZipStream, compressed date differs in size
- Calling bind function in gridview page event handler makes GridView disappear
- Web Service timeout
- callNTpowerinformation
- gridview in C#.net
- Mailing System
- Bizarre behaviour in VS2005 when debugging
- calendar with year scroll
- Create XML file using C#
- Jpeg images to video conversion
- Problems displaying Crystal Report in viewer
- Problem Running Dos Command in VB.NET 2005
- Can Vb.Net do this?
- Gridview and javascript
- Opening a form within another form C++ .net
- How two systems communicate (Software Interface)
- convert excel to xml file
- How to plot a running graph from Live XML Stream Data
- How to set the "address" field of Internet Explorer through c#
- How to set Flagvalue for Gridview
- switching outlook profile
- Text box size
- develope an Interactive Exam
- Custom provider in partial trust mode
- how to work Submit Button on Content Page in ASP.NET 2.0
- Why doesn't DataAdapter.Fill() issue 'Add' update events??
- visual studio.net 2005 license question?
- Client Socket won't disconnect?!
- QueryInterface for interface Excel._Application failed
- How do you Catch an HttpRequestValidationException Exception
- Webservice WCF call from .NET 2.0
- merge two document files in asp.net
- dropdownlist box autosearch function
- Xtra Report manually Insert pagebreak
- VB.NET: How to populate combo box with database item
- C# data question
- Reading Values From XML File Into String Variables Using C#
- Copy file to remote server webservice
- Printing on MultiLines
- How to move records using firstbutton,previousbutton,nextbutton,lastbutton in vb.net
- Crystal reports, VB.net question: How to output?
- The Output should be in xml format using c# code
- Generic Dictionary of Generic Dictionaries
- Login status Boolean
- gridview
- URGENT ASP.net VB.net Developers SSE/TL required
- dot not app not quitting
- Run-time error when trying to execute .exe using Process.Start c# .NET
- SQL Database Connectivity Errors
- javascript + asp.net
- The Property OwnerKey Meaning
- Binary tree in managed C++
- Handling of xml data within Oracle 9/10 and sql Server 2005
- Web site Publish
- Transform .dat file
- Gettin the problem in datagrid
- XML Validation against XSD
- Record The Screen Video (Glad if Someone Help Me)
- COM Exception Happening In Code For Patching
- print dialoge box
- Problem while Sending emails
- struct double pointer in an unmanaged dll
- .Net pdf viewer
- Can you prefill DataGrid footer template fields
- Reference a C++ exe in C# dll
- Using VS2005 on a project that earlier used VS2003
- Problems in web services
- Xtra Reports
- Read XHTML into XML
- Want to send LINK in the email using C#.net
- deleting jpeg files
- TCPClient Losing Data
- an error in declaring the session variables in ASP 2.0
- Open Source .NET Tools..
- FireFox vs Internet Explorer
- problem to display data in database screen
- Export datagrid to PDF File
- Problem in deleting a file in App_Themes folder
- DatagridviewCheckboxColumn in Windows Application 2005
- Resizing the scrollbar
- Problem in deleting a file in App_Themes folder
- How to print an Asp.net webpage?
- Documentation for Visual Studio 2005 Web Project issues HELP
- ASP.NET 2.0 Gridview with column containing bot hyperlinked and non hyperlikned cells
- Client application: Can't close a Socket!
- File Upload error
- Create the controls inside my control
- outlook express won't open
- c# - XML file - load xml, modify xml, save xml - HOW!!
- VB.NET PrintDialog
- XML Schema Issues
- Does anyone know any commercial C++/CLI GUI library with source code?
- C# Interop - passing structure from UNmanaged to managed code
- C# Asynchronous Protocol Handler for IE -- not working from embedded browser
- Updating machine.config. on multiple pcs
- xspx fired xsm/xsl website
- IsNumeric() in c#.net?
- VB.NET - Placing text in a textbox on a child form from another child form
- Date Validation
- login buttons do not work when xp themes are enabled
- Need help with COM COMPONENT
- Datagridview
- Run-time error when trying to execute .exe using Process.Start
- datagrid AllowPaging wierdness
- Calling J2EE web service from .NET
- Embedded XML datatsource for dependent ddl's
- Parsing an IFRAME
- IsCallback with XMLHttpRequest object
- changing view mode of an url in address bar
- How to open seperate window using ajax.net
- C# - Resizing multi-dimensional arrays
- Document Merge
- How to iterate and display an Int Array list in a textBox? this has to be easy...
- Getting Error: The multi-part identifier "Table.Field" could not be bound.
- Auto standby and auto hibernate
- Port Scanner
- WSHttpBinding in WCF
- Editable mult-column table
- How to read the results of a "Count" query
- how to remove an item from System.Collections.Generic.List in a loop
- how to save more than one row added in a datagridview while editing
- using x509Certificates for a Web Service
- Random numbers
- Closing notebook monitor switches desktop to connected 2nd monitor
- Marshaling a structure conatining a fixed length array of another structure in C#
- 'add service reference' missing - help
- VB 6 COM with .NET
- round off a decimal to a whole number in a label
- SmtpFailedRecipientException + asp.net 2.0
- Problem in JavaScript Help me :(
- Compile error in vs2005 but not in vs2003
- Debug webservice
- Merging Settings in VS2005
- crystalreport
- How to get email body from POP
- how to create tree with xml code by c#.net
- about crystal report
- using a vb dll from asp dot net application
- Data Binding
- Facing problem with ASP.net Report
- Client Side TabStrip Control in ASP.net
- I Have Some Problem Related Transaction
- manually insert Page break in Devexpress xtrareport
- Regular Expresssion Validation for Alpha_numeric
- How to get OnPageIndexChanging in C# .NET to work with MySql
- How to transfer large data through webservice
- Exclusive file access
- Calculating Run time of a program in vb.net
- How to read external XML with namespaces from .NET
- Need Help to get Selected Range from DataGrid using C# WebForm
- ASP.net and excel files
- ActiveX control '8856f961-340a-11d0-a96b-00c04fd705a2' cannot be instantiated
- controling dynamicly created function objects?
- I cant debug ASP.NET Project on IIS v6.0 and Windows 2003 Server
- Can not find the resource
- shift the desktop window wrt to docked window form in c#
- I want use send to like window explorer
- Disable print preview while printing excel from C#
- How can I resize the column of datagrid in asp.net2.0 at runtime
- How to detect .NET has started ?
- Precision in vb.net
- VSTO in VS2005
- Is there something better than MediaPlayer? [Win C#]
- letter combinations
- Profiles & Membership
- c# client to SOAP::Lite
- Calling AS/400 Stored Procedure from a CS Program
- Custom business objects + memory management
- C#, IE and C++ : getting C# Explorer Bar and C++ BHO to talk
- determin which button initiated postback in pageload event in codebehind page vb.net
- DVD RW Drive
- Problem in adding Multiple rows in a datagridview using dataAdapter
- Client Certificates: The request failed with HTTP status 403: Forbidden.
- What do Data Access Layer and Data Object Layer mean and related?
- I need help with C# and Visual Studio C#!
- ClassFactory error
- How can I document a root namespace in VB.NET ?
- Crystal Report
- a question on xslt with php5
- .NET Remoting
- Help! Some unknown error during Compiling
- Publishing different versions of Webservice
- Crystal Report Compatiblity
- Maximum request lenght exceeded
- image and scroll
- code to select columns stsrting with same letter
- making an SQL server then using VB 2005 to connect to it?
- Application_Start not firing
- datepicker in datagrid in asp.net/c#
- XQuery equivalent to NOT IN (subquery)
- Functions between forms
- zohreh Question
- Two digit after decimal
- C# error: cannot access disposed object
- Cant create xmlElement using xmlWriter in C#.net
- Click Once Application and Web Page Relation .Net 2
- Database HELP PLEASE
- C#.net WebForm ADO DataGrid & DataSet Help
- how do I addhandler to menustrip item
- how to create login page.
- ASP.NET 2.0 Gridview select row in code problem
- OLE DB Help in C# .NET
- Serialized XML does not validate against the XSD
- Bush
- Opening Popup Window on Mouse Rollover
- label custom
- How to update in Database....
- Backend
- collection and Delegate
- Can Call Server Side function From Java Script?
- Make Collection
- Text Box problem
- How to automatically run a webpage daily at 9-00am without my interference.
- WEb services- C# and ANSI C interoperability
- Timing In C#
- How to clear the datagrid
- again the packet sniffing problem
- Error encountered while working with FAXCOM.dll
- datagrid
- changing text colors in listview column headers
- How to print last page of a text file using VB.Net
- TextBox, TAB and OnLeave event
- How to Activate A Control Added to Desktop
- Using Jmeter in .net
- what is the api of open with dialog box of windos xp and how to call in .net
- how to upload images and save it to the database
- Error while trying to run project
- hi
- VScrollBar in C# - weird problem
- DataGrid
- Error Message
- .NET , SQL Server interview questions websites....
- WebException not caught while running on Vista but is caught on XP
- The static library benefits
- Static library contain dynamic library
- ASP .NET (VB) - sql image to image control
- How do I edit an aspx file so it will look better
- Determining system default web browser from Windows Forms app
- Access the HTTP Response from a Web Browser control
- need help with XML parser
- Cannot escalate to MSDTC when using CommitableTransaction class on
- Debugging forward classes in DLLs
- Can LINQ be used with VS2005 & 2.0?
- url rewriting
- C++ .Net 2005 problem II
- C++ .Net 2005 problem
- Is this impossible via XSL?
- HTTP 400 Bad Request caused by To and Action in Soap Envelope head
- Object initialization
- C# Initial Page Page Load is very slow - Cache
- XmlResolver implementation for OASIS XML Catalogs?
- WSE 3.0 - UserNameTokenManagers and Customer Principals
- Is it possible to access a row in a datatable in constant time?
- CDRW/DVD Combi Drive
- Response.Redirect: How to get back to previous page
- Grid filter
- What is the syntax for abstract class in c#.net?
- Mail receive component group by e-mail id
- Computer will not start
- Handling unhandled exception in .net service
- Visual Studio 2003 HTTP/1.1 500 Internal Server Error
- Wrapper Component in C++/CLI to use Legacy C++ code/functionality in C#
- .Net Framework 3.0 Question & Answer
- Checkbox question
- xmlns attribute produces XHTML validation error.
- Large XML file and some kind of indexing?
- VB.Net "Syntax Error" in Execute statement.
- Trying to update .CDX File
- cross tab query
- how can I implement to scrollbar in each columns of gridview
- rss feeds update
- XML, DTD, .C#Net
- Visual Studio Setup problem
- Handling callbacks from unmanaged code
- C++ dll in VB.Net
- .NET impersonation
- GAC Not So Simple
- How to get properties of a control
- clear postdata in dotnet
- IsPostback
- Managed lib into unmanaged project
- How to get properties of an object
- access through a remote computer
- Updated Triggers
- .EXE file in VB.Net
- Keyboard usage
- About MCP Certification
- create user using personalization
- underlying connection has closed
- creating a user with personalization
- How can I load an icon from an exe?
- Blank Page
- How do i remove selected row permanently from sql database through datagrid using vb?
- Managing ADs in Asp.net
- Drop Down List
- Deployment of assembly which has Com Interoperability.
- Regarding Crystal Reports in Asp.net 2.0
- XPath abbreviated form
- Handling Dynamic Image buttons
- convert PDF to XML and Store XML data to SQL Server Database
- How to make a sorting programme
- Clock C# Windows Application
- Error Updating SQL DB
- VC Self-contained static libraries
- Writing control properties to xml
- Problem with radio buttons in .NET forms application
- Compiling Unmanaged C++ Code
- VB6 to VB.NET, comments please
- how collecting 1 + TW000001 To Get TW000002
- Cannot load personal free/busy data
- help me debug
- Missing icons
- Writing/reading control properties to a file
- AppSettings
- opening a file for reading
- difference between .NET frameworks
- reflection and private methods
- Haskellizing XSLT
- [C#.NET] Screenshot a Window
- Unmanaged to managed return value
- How can I setup both .Net 2005 and MS SQL server 2005
- declare event
- Asynchronous file transfer from server to client and vice versa
- Typecasting between ref class and interface class
- Updating a database
- Windows Form UI Questions
- Master detail design
- Speaker Echo?
- MP3's - I can do this in VB 6.0
- Multi-byte characters?
- checkboxlist nested inside gridview
- GZipStream compressed file is larger than source file
- AppDomains and Singletons
- Typecasting between ref class and interface class
- vista internet connex problems
- Vista internet connex problems - no solutions?
- The non-generic type error
- C# Serialization of UI in an application
- months between two dates
- reports in asp.net2.0
- ASP.Net 2.0 - Resource Files - Culture Specific And Mode specific
- Performance of pure native C++ class in a managed C++/CLI DLL comp
- Menus and MasterPages
- Why do we get "This system cannot find the file specified".
- Server Error in '/profile1' Application
- UNC share
- NTLDR
- Regarding Session Expire
- is there anyway to deompile dll to C#
- New to this Forums
- Convert double to string in fraction format(.75 to 3/4) in asp.net
- Compiler Could not be created
- wireless
- To create a subdomain
- Want to display user who are online.
- difference between ado.net and ado
- hai
- computer beeps, mouse freezes
- How can I drag and drop a record to a different position in the same Grid in asp.net
- .Net framework
- DataGridView---Editing Data
- Dynamic xpath in c# code
- How to get an unmanaged pointer within VC++ .Net?
- WMI + root\MicrosoftIISV2 + "Access Denied" problem
- Configuration in .net application
- Importing Selected Text From Other applications
- WebClient generates exception: header must be modified using ...
- Windows XP Media Center
- using icomparable
- Dcom
- Export Datareader to Excel Sheet
- Money 99.Error message.Money has not received updated Internet inf
- VB.NET App: Evaluate a variable to use as a Control
- defining a catch statement with #define
- How to use the relative positioning for labels and textboxes?
- Splitting a String Issues
- Impersonation of local account on remote machine
- Call webservice from windows C# application
- RichTextBox displays jagged formatted text, why?
- how to unbind a textbox
- FTP server appears to stop responding on Windows Server 2003 SP2
- statistical functions.
- VB and VB.Net interoperability
- VB.NET: Deployment Project; "Could not find file"
- UCanCode Releases Upgraded Visio 2007- Like Flow/Diagramming Source Code Kit!
- Refresh textbox in asp.net
- Data grid is not displaying
- updated value from a master page
- how do I solve this error for sending mail in vb.net?
- transform data from MS Access to MS SQL 2000
- How to access Hidden variable in Javascript declared in Content Page in .net 2.0
- MS Team
- DataReader Issues
- Problem using ObjectDataSource
- How to define CrystalReport object in ASP.NET
- ComboBox dropdown flickering in windows application C#.net 2005
- string constants / conversion from const char * to String
- Why do we get System.Runtime.InteropServices.COMException
- Regarding GENERICS...
- Conversion PDF to MsSQL table
- How can i use Apache instead of IIS in VB.NET
- Download a bulk of files in a web site in asp.net
- XML data into MDB file
- System.Net.Sockets.SocketException+webservice+Sour abh Das
- Good practice for returning "status" from functions
- How to end Cookie's Session?
- XML data into MDB file
- performance counters
- setting DateTime value to registry
- Change value in a datagrid to another or blank
- smart client
- Activator.CreateInstance locks in STAThread
- Displaying icon in StatusBarPanel
- serial communication
- In ASP.NeT Form, I want to show/hide 3 panels with click of 3 buttons
- Using Bitmap's for captchas
- VB2005 WebBrowser Control
- byte* to a String
- Any Livelink web service client sample in java/axis?
- Authentication
- Sending SMS
- Sending Email Function in OE 6 Malfunctioning
- Button
- MCSD Study Center
- Downloads not opening
- Send data from one folder to another folder
- How do I span the header text in a GridView (c#)
- reading excel file in c#.net
- Writing output to a 'log' file from within xsl:result-document
- asp.net membership feature not running in localhost
- How to make a treeview control (.net) transparent
- console applications
- Autontication Problem in HttpWeg request class
- how to get the the value in column 4 of the row i clicked for edit in datagrid (asp)
- Using java script make popup menu
- Printing Text in C#.NET with PageSelection Option and Preview Option
- Change ApartmentState of BackgroundWorker
- How to determine encoding of XML file ?
- Scroll inside panel is not working
- Active user counting
- How to load the file in richtextbox control using vb.net
- WebBrowser control's flat scrollbars
- XSLT and XML namespace issue
- Taskbar
- XSLT Compare two documents and output differences
- Sync AD using .NET Framework (ONLY updated accounts)
- cvc-complex-type validation errors
- How to extract data from an url into a text file?
- Opening WEBSITE DEVELOPED IN .NET VER 1 in .NET ver 2
- XML HTTP POST Messaging
- ekjpxkwyeu
- htcxhakgay
- xuiowebkep
- Run Windows service at regular intervals (once in a day)
- dot net in sydney
- Trying to Retrieve a List of Active Serial/Com Ports in C
- zhtom
- text to xml conversion
- help 'e debug
- PDF file
- ASP.Net 2.0, IIS6 Windows Authentication problem
- Checking the type of a up casted class | https://bytes.com/sitemap/f-312-p-47.html | CC-MAIN-2020-45 | refinedweb | 3,306 | 54.83 |
UNLINK(2) BSD System Calls Manual UNLINK(2)
NAME
unlink, unlinkat -- remove directory entry
SYNOPSIS
#include <<unistd.h>>
int
unlink(const char *path);
#include <<fcntl.h>>
#include <<unist
delayed until all references to it have been closed.
The unlinkat() function is equivalent to either the unlink() or rmdir(2)
function depending on the value of flag (see below), except that where
path specifies a relative path, the directory entry to be removed is
determined relative to the directory associated with file descriptor fd
instead of the current working directory.
If unlinkat() is passed the special value AT_FDCWD (defined in <fcntl.h>)
in the fd parameter, the current working directory is used and the behav-
ior is identical to a call to unlink() or rmdir(2), depending on whether
or not the AT_REMOVEDIR bit is set in flag.
The flag argument is the bitwise OR of zero or more of the following val-
ues:
AT_REMOVEDIR Remove the directory entry specified by path as a
directory, not a normal file.
RETURN VALUES
Upon successful completion, the value 0 is returned; otherwise the
value -1 is returned and the global variable errno is set to indicate the
error.
ERRORS
The unlink() and unlinkat() functions will fail and the effective user
ID of the process is not the superuser, or the file
system containing the file does not permit the use of
unlink() on a directory.
[EPERM] The directory containing the file is marked sticky,
and neither the containing directory nor the file to
be removed are owned by the effective user ID.
[EPERM] The named file or the directory containing it.
Additionally, unlinkat() will fail if:
[ENOTDIR] The AT_REMOVEDIR flag bit is set and path does not
name a directory.
[ENOTEMPTY] The AT_REMOVEDIR flag bit is set and the named direc-
tory contains files other than '.' and '..' in it.
[EINVAL] The value of the flag argument was neither zero nor
AT_REMOVEDIR.
rm(1), chflags(2), close(2), link(2), rmdir(2), symlink(7)
STANDARDS
The unlink() and unlinkat() functions conform to IEEE Std 1003.1-2008
(``POSIX.1'').
HISTORY
The unlink() system call first appeared in Version 1 AT&T UNIX. The
unlinkat() function appeared in OpenBSD 5.0.
BSD January 19, 2015 BSD | http://modman.unixdev.net/?sektion=2&page=unlinkat&manpath=OpenBSD-5.7 | CC-MAIN-2017-34 | refinedweb | 371 | 51.58 |
This weekend I was doing some research about capturing ‘screen shots’ of a Flex application, or even specific UI components, and passing the ‘image’ back to the server for processing.
While there are a few ways to do this, I am going to discuss the ‘easiest’. ‘Send ‘source path’ of my application.
import com.adobe.images.JPGEncoder;
Next, I create a method that converts a UI component into BitmapData.
private function getBitmapData(target: UIComponent)::
private function sendImage(target: UIComponent): void { var bitmapData: BitmapData = getBitmapData(target); ‘90‘ that is passed into the constructor of JPGEncoder is the ‘quality’ of the resulting JPG.
Next we need to create a<mx:remoteObject> to talk to ColdFusion.
<mx:RemoteObject <mx:method </mx:RemoteObject>
The CFC itself is very simple, it contains 1 method whose only argument is the binary data that is our image.
<cfcomponent displayname="Image Service" name="Image Service" output="false"> <cffunction access="remote" name="saveImage" output="false" returntype="any"> <cfargument name="data" required="true" type="binary"/> <cffile action="write" file="c:TempflexArea </mx:Panel> <mx:Text <mx:Button
In my online example I have other code that will open a new browser window and show you the image you just captured. You can view the source by right-clicking the application and selecting ‘View Source’.
Using this technique, you can do a screen shot of your entire application, or just individual UI components.
Comments on: "Doing 'screenshots' in Flex and sending them to ColdFusion" (6)
great minds think alike!
See my examples from a week and a half ago here too:
Your explanation is much more in depth than mine, well done!
fgs
test mail
Terrible de longie la wa…
las mentes terrible e lohgie
Just to clarify, this can only take a screenshot of a Flex application and not the entire browser or other mixed DHTML content?
Btw, the “online example” link is broken. | https://doughughes.net/2007/09/17/doing-screenshots-in-flex-and-sending-them-to-coldfusion/ | CC-MAIN-2018-13 | refinedweb | 314 | 50.77 |
16 February 2009 10:56 [Source: ICIS news]
SINGAPORE (ICIS news)--Affiliate companies of Saudi International Petrochemical Co (Sipchem) have obtained Saudi riyal (SR) 1.35bn ($360m) in funding to construct an acetyls complex at Al Jubail, according to a statement Sipchem released on 15 February.
“Construction of the complex is ongoing and it is 94% mechanically complete,” a company official said on Monday.
“We expect to have the acetic acid plant on stream at the end of the second quarter, with commercial production available in the third quarter.”
The funding, from a public investment fund, followed SR1.12bn for the Sipchem affiliates from the Saudi Industrial Development Fund and SR1.43bn from an unnamed corporate fund.
The fully integrated acetyls complex includes a 420,000 tonne/year acetic acid plant, a 330,000 tonne/year vinyl acetate monomer (VAM) plant and a 250,000 tonne/year ethylene vinyl acetate (EVA)/low density polyethylene (LDPE) swing line.
The utilities unit at the complex has been operational since end of last year, the official said.
The official denied earlier reports of a delay in the start-up due to a labour shortage.
The feedstocks – mainly methanol, carbon monoxide and hydrogen – will be provided internally by Sipchem affiliates International Methanol Co (IMC) and United Industrial Gases Co (UIGC), ensuring an uninterrupted supply of raw materials.
The bulk of acetic acid output from the plant will be targeted for the export market, the official said.
“We do not have the exact breakdown but approximately 220,000 tonnes/year of acetic acid will be exported globally, whereas 200,000 tonnes/year will be consumed by our VAM plant,” he said, adding that part of its VAM output would also be exported.
The EVA/LDPE plant is targeted to come on stream in 2012.
Major acetic acid producers in Asia include Celanese, BP, Daicel Chemical Industries, ?xml:namespace>
($1 = SR 3.75) | http://www.icis.com/Articles/2009/02/16/9192821/sipchem-affiliates-get-funding-for-al-jubail-acetyls.html | CC-MAIN-2013-48 | refinedweb | 317 | 52.6 |
INSTALLINSTALL
npm i node-beanstalk # or yarn add node-beanstalk
USAGEUSAGE
node-beanstalk fully supports
beanstalk protocol v1.12
ClientClient
node-beanstalk is built with use of promises.
Each client gives you full access to functionality of beanstalk queue manager, without strict separation to emitter and worker.
import { Client, BeanstalkJobState } from 'node-beanstalk'; const c = new Client(); // connect to beasntalkd server await c.connect(); // use our own tube await c.use('my-own-tube'); // put our very important job const putJob = await c.put({ foo: "My awsome payload", bar: ["baz", "qux"] }, 40); if (putJob.state !== BeanstalkJobState.ready) { // as a result of put command job can done in `buried` state, // or `delayed` in case delay or client's default delay been specified throw new Error('job is not in ready state'); } // watch our tube to be able to reserve from it await c.watch('my-own-tube') // acquire new job (ideally the one we've just put) const job = await c.reserveWithTimeout(10); /* ...do some important job */ c.delete(job.id); c.disconnect();
As beanstalk is pretty fast but still synchronous on a single connection - all consecutive calls will wait for the end of previous one. So below code will be executed consecutively, despite the fact of being asyncronous.
import { Client, BeanstalkJobState } from 'node-beanstalk'; const c = new Client(); await c.connect(); c.reserve(); c.reserve(); c.reserve(); c.reserve(); c.reserve();
Above code will reserve 5 jobs one by one, in asyncronous way (each next promise will be resolved
one by one).
To see all the Client methods and properties see Client API docs
DisconnectDisconnect
To disconnect the client from remote - call
client.disconnect(), it will wait for all the pending
requests to be performed and then disconnect the client from server. All requests queued after
disconnection will be rejected.
To disconnect client immediately - call
client.disconnect(true), it will perform disconnect right
after currently running request.
Payload serializationPayload serialization
As in most cases our job payloads are complex objets - they somehow must be serialized to Buffer. In general, serialized payload can be any bytes sequence, but by default, payload is serialized via JSON and casted to buffer, but you can specify your own serializer by passing corresponding parameter to client constructor options. Required serializer interface can be found in API docs.
PoolingPooling
For the cases of being used within webservers when waiting for all previous requests is not an
option -
node-beasntalk Pool exists.
Why?Why?
- Connecting new client requires a handshake, which takes some time (around 10-20ms), so creating new client on each incoming request would substantially slow down our application.
- As already being said - each connection can handle only one request at a time. So in case you application use a single client - all your simultaneous requests will be pipelined into serial execution queue, one after another, that is really no good (despite of
node-beanstalkqueue being very fast and low-cost).
Client pool allows you to have a pool af reusable clients you can check out, use, and return back to the pool.
import { Pool } from 'node-beanstalk'; const p = new Pool({ capacity: 5 }); // acquire our very own client const client = await p.connect(); try { // do some work await client.statsTube('my-own-tube') } finally { // return client back to the pool client.releaseClient() }
You must always release client back to the pool, otherwise, at some point, your pool will be empty forever, and your subsequent requests will wait forever.
DisconnectDisconnect
To disconnect all clients in the pool you have to call
pool.disconnect().
This will wait for all pending client reserves and returns to be done. After disconnect executed all returned clients will be disconnected and not returned to the idle queue. All reserves queued after disconnection will be rejected.
Force disconnect
pool.disconnect(true) will not wait for pending reserve and start disconnection
immediately (it will still be waiting clients return to the pool) by calling force disconnect on
each client.
TESTTEST
node-beanstalk is built to be as much tests-covered as it is possible, but not to go nuts with LOC
coverage. It is important to have comprehensive unit-testing to make sure that everything is working
fine, and it is my goal for this package.
It is pretty hard to make real tests for the sockets witch is used in this package, so
Connection
class is still at 80% covered with tests, maybe I'll finish it later. | https://www.npmjs.com/package/node-beanstalk | CC-MAIN-2022-33 | refinedweb | 737 | 54.22 |
In SQL Server 2012, you can now debug the Script component by setting breakpoints and running the package in SQL Server Data Tools (replaces BIDS) .
When the package execution enters the Script component, the VSTA IDE reopens and displays your code in read-only mode. After execution reaches your breakpoint, you can examine variable values and step through the remaining code.
On a side note, we upgraded the scripting engine to VSTA 3.0, which provides a Visual Studio 2010 shell and support for .NET 4.
Here are a few things to keep in mind when debugging the Script component.
- You can’t debug a Script component when you run the Script component as part of a child package that is run from an Execute Package task. Breakpoints that you set in the Script component in the child package are disregarded in these circumstances. You can debug the child package normally by running it separately.
- When you debug a package that contains multiple Script components, the debugger debugs one Script component. The system can debug another Script component if the debugger completes, as in the case of a Foreach Loop or For Loop container.
As with previous versions of SSIS, you can also monitor the execution of the Script component by using these methods:
- Interrupt execution and display a modal message by using the MessageBox.Show method in the System.Windows.Forms namespace.
- Raise events for informational messages, warnings, and errors. For more information, see the Developer’s Guide topic, Raising Events in the Script Component.
- Log events or user-defined messages to enabled logging providers. For more information, see the Books Online topic, Logging in the Script Component.
This is a very welcome feature. Thank you!
Question: does this debugging enhancement also include the "edit and continue" feature?
Greatly awaited time-saving feature..!
Because the VSTA IDEdisplays your code in read-only mode, you can't edit the code. You can step through the code and you can click Continue.
So the next release of SSIS project development is not Visual Studio 2010 based?
Hi Arthur – sorry, that should read "VSTA 3.0 … which provides a Visual Studio 2010 shell". I'll fix that in the text.
SQL Server 2012 ships will support for Visual Studio 2010 SP1 … we're working on a plan for supporting the next version of VS as well.
Thank you.
Also forgot to ask: will we get the possibility of integration with code testing e.g. Plex or the built-it VS testing facilities (it is e.g. when we can execute tests right from the task component)?
As far as I know , currently, this is not possible even with say NUnit, correct?
Nothing like that for SSIS packages in SQL 2012.
For Scripts – you'd have to try it out. I think the main limitation with VSTA is that you have a single C#/VB.NET project. It looks like certain VS plugins (like Resharper) that I use for my regular C# development also work in the VSTA IDE, so I can take advantage of them when writing scripts.
If I had a lot of script logic, I'd consider putting in a shared DLL so I could unit test it outside of SSIS (as well as turning it into a custom task/transform).
I actually was planning on experimenting with that, not sure what test framework to go with, the choice is overwhelming whereas it seems the majority of developers being very keen to using NUnit. What would be your word of advise?
For NUnit I can create the library project in VS then reference the DLL in the Script Task then hopefully test.
Interesting, I have the Resharper but is does get fired up in my case (in BIDS 2008). Would it in VS 2008?
You know, I use Snippet Compiler for Script Task coding and/or prototyping and then, yes it often times becomes a DLL, but that I if re-usability is involved. But some Scripts I develop are like real applications by themselves. In some places VS is not available, only BIDS, this is a reality.
Finally!
Yeah is there a way to unit test the Script component methods?
Hello Dinesh,
There are Visual Studio Test Edition-based unit tests for perfoming automated unit testing on data flow components. The "Delimited File Reader Source Sample" on CodePlex (sqlsrvintegrationsrv.codeplex.com/…/17646 ) shows how to use these tests, according to the sample description.
However, based on the MSDN documentation, I think you'd need to create a Unit Test Project to test the methods in your script code. (msdn.microsoft.com/…/hh598957.aspx ).
The "Creating Automated Tests" MSDN topic (msdn.microsoft.com/…/dd380755.aspx ) provides more information about Visual Studio automated tests.
Script task debugger does not work for me (clean OS install, SQL2012 SSDT + SP1). I created a simple task with 2 lines of custom code. It was working for the first run, but after the second stopped working.
Script component binary code not foudn error.
I compiled the code and the copilation is successfull. But I am alwyas gettinging binary code not found error,
This is what I was looking from 2005
I have been using this feature from last two and a half year and didn't find any single issue
We have a team that recently started using Visual Studio 2012, converting SSIS packages from 2008, and seem to be unable to set breakpoints in Script Tasks. The error we're receiving says "Cannot start debugging. Pre-debugging negotiations with Host failed." Cannot seem to find any information about this error, or what's causing it. There is also a brief flash of a dialog box that says "Visual Studio has encountered an unexpected error" but that disappears quickly.
Any suggestions, ideas, etc.?!
Thanks,
Larry
@Igschmidt, I found this page trying to find a solution for this problem. I was able to resolve it by ensuing that the script parameters were correct. I had changed the name of one of the parameters, but I forgot to change the name in the list of parameters that are passed to the script.
Hi
I am using SQL Server 2012.I am facing a problem when trying to debug the script. When I execute the package (F5) the break point that i placed in the script never breaks. What happens is VSTA opens and it stays blank (the file where break point is present does not open) with a status message at the bottom saying build succeeded and nothing happens. It looks like VSTA is not entering debug mode (i could say this because i am able to build the code in VSTA which is opened when i run the package). My problem is very similar to the one mentioned in link below
social.msdn.microsoft.com/…/ssis-2012-script-task-debugging-not-working-vsta-popup-but-no-script-displayed-in-ide
I am really loosing lot of time on this. Please help!
Hi larry,
Can you help me to come out from this error
cannot start debugging. Pre-debugging negotiations with Host failed | https://blogs.msdn.microsoft.com/mattm/2012/01/13/script-component-debugging-in-ssis-2012/ | CC-MAIN-2016-30 | refinedweb | 1,186 | 64.51 |
When I'm working on a .html.erb file in rails and I type:
if <tab>
I get a php snippet as follows:
<?php if (condition): ?>
<?php endif ?>
Any ideas why? I'm new to Sublime - awesome so far just trying to get the hang of it.
Also, on a related note, what is the snippet for generating an open and close erb tag
<%= %>
Either with or without the "=" sign?
And is there an easy to view all the snippets along with their tab completion keys other than going into the filesystem and opening up each .snippet file?
Same problem here. In a file named example.html.erb using "HTML (Rails)" when I type "if " it returns a PHP snippet when it should be Ruby.I'd normally just remove the PHP snippets but I have to maintain a legacy PHP app too.
Any help appreciated
Not sure about the first one, but I created a snippet that works well for <%= %> and <% %> tags.
<snippet>
<content><![CDATA[<%${1:=} $2 %>]]></content>
<tabTrigger><</tabTrigger>
</snippet>
If I don't need the tag to output, I can just backspace the '=' after the first tab and then tab again to the middle; otherwise, if I do need the output, I just double-tab and I'm all set.
Hope this helps.
I'm getting PHP snippets showing in a html.erb file as well.
Thanks thetristan for the snippet. It was on my list of things to fix in my ST2 setup before I buy a license.
Using dev build 2207 and the context ST2 is using is "HTML (Rails)" so I don't know why PHP snippets are showing.
I'm assuming I could just delete my php snippets but I would prefer to not have to do that.
the problem is the scope[1] of the PHP snippets. For them it is 'text.html - source'. An .html.erb file has the scope 'text.html.ruby', this is included in the PHP snippets' scope.
I would consider this a bug, since I find it not useful to litter all text.html.* scopes with PHP snippets and suggest to limit their scope to text.html.php.
[1] to find out the scope use ScopeHunter (available via Package Control)
This is definitely the issue.
Adding a new Ruby/Rails 'else' snippet isn't very helpful, because then ST2 just prompts for which snippet to use. It seems like moving/removing the PHP folder (or just the offending snippets), then creating custom Ruby/Rails snippets is the only workaround until this bug is fixed.
Edit:Another possible workaround might be to search out all instances of "text.html" in the ~/Library/Application Support/Sublime Text 2/Packages/PHP folder and replace them with "text.php".
Has there been any movement on this issue? will it be fixed in a future release?
I also have the same problem.
It's definitevly a bug, more if the workaround is to edit the app contents. | https://forum.sublimetext.com/t/when-in-rails-why-does-if-tab-generate-a-php-snippet/2174 | CC-MAIN-2017-43 | refinedweb | 495 | 75.1 |
Hi there,
I have a smiluation and want to build structures in-game. And I want the surrounding trees to be removed when placing a building nearby. Problem is that my current solution actually does remove the trees within a certain radius. But the game freezes for a few seconds, since there are so many trees.
The code looks similar to this:
TreeInstance[] trees = Terrain.activeTerrain.terrainData.treeInstances;
ArrayList newTrees = new ArrayList();
Vector3 terrainDataSize = Terrain.activeTerrain.terrainData.size;
Vector3 activeTerrainPosition = Terrain.activeTerrain.GetPosition();
float distance;
foreach (TreeInstance tree in trees )
{
distance = Vector3.Distance(Vector3.Scale(tree.position, terrainDataSize)
+ activeTerrainPosition, currentObject.transform.position);
if (distance > 20) {
newTrees.Add(tree);
}
}
Terrain.activeTerrain.terrainData.treeInstances = (TreeInstance[])newTrees.ToArray(typeof(TreeInstance));
do you have any suggestions how I might be able to do this faster?
edit: just realised, that when I restart the game after removing trees, they are still gone?! How can I prevent Unity from doing this?
Answer by Stormizin
·
Jun 28, 2013 at 06:31 PM
This really depends on the user's computer.
It's too many objects getting destroyed at same time.
Getting lots of memory free.
Did you tested with the scene already builded?
Works better. At least on my machine. But is there no better-perfor$$anonymous$$g way to remove the trees? There has to be one..
All that i know is, that you can put the trees in memory and hold.
Since you call the destroy method maybe the performance will improve.
Also you can try Builtin arrays, they are very fast.
$$anonymous$$G:
public class example : $$anonymous$$onoBehaviour {
private Vector3[] positions;
void Awake() {
positions = new Vector3[100];
int i = 0;
while (i < 100) {
positions[i] = Vector3.zero;
i++;
}
}
}
Problem is, that I don't know the size the array will have. :/
Your code need to know how many trees will be destroyed.
Use a var that will hold this count then execute the builtin array.
Yea, but that's the point. $$anonymous$$y code won't know before he iterates over all the TreeInst.
Replacing terrain trees with prefabs
0
Answers
Tree Billboards
1
Answer
Terrain Tree Placement
0
Answers
To what extent can the tree system be used instead of the details system
0
Answers
How to improve the Performance of Removing Trees during runtime?
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/482862/remove-trees-during-runtime.html | CC-MAIN-2022-33 | refinedweb | 387 | 60.11 |
Chatlog 2012-10-04
From RDFa Working Group Wiki
See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version.
13:51:56 <RRSAgent> RRSAgent has joined #rdfa 13:51:56 <RRSAgent> logging to 13:51:58 <trackbot> RRSAgent, make logs world 13:51:58 <Zakim> Zakim has joined #rdfa 13:52:00 <trackbot> Zakim, this will be 7332 13:52:00 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 8 minutes 13:52:01 <trackbot> Meeting: RDFa Working Group Teleconference 13:52:01 <trackbot> Date: 04 October 2012 13:57:52 <Steven> Steven has joined #rdfa 13:59:34 <Zakim> SW_RDFa()10:00AM has now started 13:59:43 <Zakim> +??P10 13:59:47 <manu1> zakim, I am ??P10 13:59:47 <Zakim> +manu1; got it 14:00:12 <Zakim> +ivan 14:00:28 <niklasl> niklasl has joined #rdfa 14:00:31 <danbri> danbri has joined #rdfa 14:02:04 <Steven> Steven has joined #rdfa 14:02:32 <Steven> zakim, who is on the call? 14:02:45 <Zakim> On the phone I see manu1, ivan 14:03:24 <Zakim> +??P38 14:03:26 <gkellogg> zakim, I am ??P38 14:03:28 <niklasl> zakim, I am ??P38 14:03:45 <Zakim> +??P41 14:03:53 <Steven> zakim, who is on the phone? 14:04:00 <Zakim> +gkellogg; got it 14:04:05 <Zakim> +Steven 14:04:08 <Zakim> sorry, niklasl, I do not see a party named '??P38' 14:04:25 <Zakim> On the phone I see manu1, ivan, gkellogg, ??P41, Steven 14:04:34 <niklasl> zakim, I am ??P41 14:04:59 <Zakim> +niklasl; got it 14:05:25 <manu1> Agenda: 14:05:57 <manu1> scribenick: niklasl 14:06:20 <niklasl> manu: we need to add the rdf:HTML topic to the agenda 14:06:25 <manu1> Topic: ISSUE-126: Can xmlns: be reported as a warning? 14:06:31 <manu1> 14:07:22 <niklasl> manu: Mike Smith has informed us that the validator w3c uses cannot even detect use of xmlns declarations 14:07:31 <Steven> Don't design the spec around bugs in software 14:08:23 <niklasl> … so the question is if conformance validators can report use of xmlns in HTML5 as an error 14:09:03 <niklasl> … what kinds of attribute use are illegal and "dropped" in html5? 
14:09:25 <niklasl> manu: I don't see a big issue in doing that 14:10:01 <niklasl> ivan: I don't mind that the validator raise that. My question is whether RDFa processors should raise an error in general for this (in html5)? 14:10:36 <gkellogg> q+ 14:10:41 <manu1> ack gkellogg 14:10:46 <niklasl> manu: we have a warning about that, it should be clearly noted in the spec.. But a processor should be able to use it if it can 14:11:19 <niklasl> gregg: isn't the difference between warning an error in practice just different types of warnings? 14:11:29 <niklasl> s/warnings/messages/ 14:11:58 <niklasl> steven: this isn't about processors, just about conformance checkers 14:12:14 <niklasl> manu: so can we have errors that doesn't stop processors? 14:13:01 <niklasl> gregg: in general, I consider errors to mean that if the processors doesn't stop, it indicates that something strange may result 14:13:53 <niklasl> ivan: in this case, the logical case is to issue a warning in a processor, but use the value (according to core) 14:15:06 <niklasl> gregg: a processor using a conforming html5 processor cannot see the erroneous xmlns usage at all, so it cannot report anything 14:16:20 <niklasl> steven: I think it would be a bad idea to issue an error in a conformance checker for something that's not an error 14:16:45 <niklasl> manu: I think the requirement is to say something stronger than a warning 14:16:52 <ShaneM> ShaneM has joined #rdfa 14:16:53 <niklasl> ivan: validators MAY issue an error 14:18:26 <Zakim> + +1.612.217.aaaa 14:18:34 <ShaneM> zakim, I am aaaa 14:18:34 <Zakim> +ShaneM; got it 14:22:00 <gkellogg> zakim, who's making noise? 14:22:10 <Zakim> gkellogg, listening for 10 seconds I heard sound from the following: manu1 (9%), ivan (4%), Steven (65%) 14:23:12 <niklasl> steven: can we say this in a way to make it clear that conformance checker may report it as an error, but it's not *actually* an error... 
14:23:19 <niklasl> ivan: in html5, it is an error 14:23:48 <niklasl> … xmlns is not an unknown thing in html5, it's a special, not allowed thing 14:26:55 <manu1> PROPOSAL: When an RDFa validator is processing an HTML5 document, it MAY report the use of xmlns: as an error. When an RDFa processor is processing an HTML5 document it MAY report the use of xmlns: as a warning. 14:27:02 <manu1> HTML5 spec: If the XML API doesn't support attributes in no namespace that are named "xmlns", attributes whose names start with "xmlns:", or attributes in the XMLNS namespace, then the tool may drop such attributes. 14:30:31 <manu1> In the HTML syntax, namespace prefixes and namespace declarations do not have the same effect as in XML. For instance, the colon has no special meaning in HTML element names. 14:34:28 <gkellogg> q+ 14:34:41 <manu1> Topic: ISSUE-139: XHTML5 processing specifically excludes the use of xml:base 14:34:47 <manu1> 14:34:49 <manu1> ack gkellogg 14:35:10 <niklasl> gregg: we have conformance tests for this in the test suite 14:36:03 <niklasl> … HTML IDL interfaces use this 14:36:21 <Steven> +1 14:36:25 <ShaneM> +1 14:36:33 <ivan> +1 14:36:39 <manu1> PROPOSAL: XHTML5+RDFa 1.1 MUST honor the use of xml:base to set the base URL of the document. 14:36:42 <manu1> +1 14:36:43 <gkellogg> +1 14:36:44 <niklasl> niklasl: +1 14:36:58 <Steven> +1 14:37:04 <ivan> +1 14:37:08 <manu1> RESOLVED: XHTML5+RDFa 1.1 MUST honor the use of xml:base to set the base URL of the document. 14:37:25 <Zakim> -Steven 14:37:26 <manu1> Topic: ISSUE-135: RDFa Lite and non-RDFa @rel values 14:37:33 <manu1> ISSUE-135 - 14:38:14 <niklasl> .. 14:38:26 <niklasl> …, 14:40:46 <niklasl> 14:41:01 <niklasl> "If @property and @rel/@rev are on the same elements, the non-CURIE and non-URI @rel/@rev values are ignored. If, after this, the value of @rel/@rev becomes empty, then the then the processor must act as if the attribute is not present." 
14:42:20 <manu1> PROPOSAL: If @property and @rel/@rev are on the same elements, the non-CURIE and non-URI @rel/@rev values are ignored. If, after this, the value of @rel/@rev becomes empty, then the then the processor must act as if the attribute is not present. 14:42:32 <ivan> +1 14:42:33 <manu1> +0.5 14:42:34 <niklasl> niklasl: +1 14:42:38 <gkellogg> +0.5 14:43:07 <manu1> RESOLVED: If @property and @rel/@rev are on the same elements, the non-CURIE and non-URI @rel/@rev values are ignored. If, after this, the value of @rel/@rev becomes empty, then the then the processor must act as if the attribute is not present. 14:44:37 <manu1> niklasl: When the tokens in @rel only contain non-CURIE or non-URI values (there are no terms in HTML5+RDFa), @property overrides @rel. 14:45:57 <ShaneM> the URI for the 'term' production in RDFa Core is 14:46:11 <manu1> niklasl: When @rel and @property was used together, they used CURIEs, so we're okay there. This is to handle the general case of when @vocab comes in conflict with @rel/@property. There is no way to make everybody happy, this is the closest we could get. 14:46:55 <niklasl> gregg: the RDF 1.1 working group has added the datatype rdf:HTML. 14:46:57 <manu1> Topic: Addition of rdf:HTML datatype to RDFa 14:47:07 <ivan> q+ 14:47:14 <niklasl> … it's very much like rdf:XMLLiteral, without the exclusive XML canonicalization 14:47:48 <manu1> ack ivan 14:47:48 <niklasl> … we should support this. If we don't, we'd diverge from the RDF 1.1 concepts, for a feature very much intended for (good for) RDFa 14:48:33 <gkellogg> 14:48:34 <niklasl> ivan: two things of interest in new RDF concepts. On the XMLLiteral side, there is now much clearer language on that. 14:49:11 <niklasl> … and indeed, the rdf:HTML literal type. 14:49:28 <niklasl> … the literal is required to be valid HTML, which is much more liberal 14:50:05 <niklasl> … but we have a process issue. 
It's possible that RDFa in HTML5 would become a rec *before* RDF 1.1 14:50:22 <niklasl> … so we may not be able to have a formal reference in the spec 14:50:49 <niklasl> .. But we should add an informal section encouraging RDFa processors to implement handling of rdf:HTML literals 14:51:15 <niklasl> .. I (and Gregg?) have already implemented this 14:51:42 <niklasl> gregg: I've implemented this. There are no public test cases yet. 14:52:53 <manu1> PROPOSAL: Support the the rdf:HTML datatype in HTML+RDFa 1.1 (non-normatively for the purposes of ensuring that HTML+RDFa 1.1 is not blocked from REC by RDF Concepts). 14:52:59 <gkellogg> +1 14:52:59 <ivan> +1 14:53:00 <manu1> +1 14:53:00 <niklasl> niklasl: +1 14:53:03 <ShaneM> +1 14:53:13 <manu1> RESOLVED: Support the the rdf:HTML datatype in HTML+RDFa 1.1 (non-normatively for the purposes of ensuring that HTML+RDFa 1.1 is not blocked from REC by RDF Concepts). 14:55:42 <manu1> Topic: HTML+RDFa 1.1 spec 14:56:01 <Zakim> -ivan 14:56:46 <manu1> Manu: We're in good shape, as far as the spec is concerned, we'll get verification from Mike Smith, I'll update the spec and push out a new working draft (with the approval of the group) # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000124 | http://www.w3.org/2010/02/rdfa/wiki/Chatlog_2012-10-04 | CC-MAIN-2014-42 | refinedweb | 1,753 | 66.88 |
I have written a small wrapper for using fmod in C#. I can load a stream fine, but as soon as I try to play the file I get a “native Exception” and the program exits with no debug information. I am using an iPaq 3870 with an Arm processor, so I am quite sure I am using the proper DLL. Any help would be greatly aprecisated.
Brian[/code]
- brian asked 12 years ago
- You must login to post comments
Same here! I am getting the same error.
I’m using an iPAQ 2210 running Windows Mobile 2003, could somebody just confirm that I’m using the correct dll (fmodapi373ce/api/wce4/armv4/fmodce.dll)?
Could somebody maybe post some known working code that will simply play an mp3 using .NET p/Invoke calls?
Cheers,
Steve
Oh, and this the code i’m trying to get to work…
[code:282mo01v]
using System;
using System.Drawing;
using System.Collections;
using System.Windows.Forms;
using System.Data;
using System.Runtime.InteropServices;
namespace SoundTest
{
public class SoundTestForm : System.Windows.Forms.Form
{
[DllImport( "fmodce.dll", EntryPoint="FSOUND_Init" )]
public static extern bool FSOUND_Init( int mixrate, int maxsoftwarechannels, uint flags );
[DllImport( "fmodce.dll", EntryPoint="FSOUND_Stream_Open" )] public static extern IntPtr FSOUND_StreamOpen( string name_or_data, int mode, int offset, int length ); [DllImport( "fmodce.dll", EntryPoint="FSOUND_Stream_Play" )] public static extern int FSOUND_StreamPlay( int channel, IntPtr stream ); public SoundTestForm() { IntPtr a; bool inited = FMODMethods.FSOUND_Init(44100, 32, 0); a = FMODMethods.FSOUND_StreamOpen("music.mp3", 0x00002000, 0, 0); FMODMethods.FSOUND_StreamPlay(-1, a); } protected override void Dispose( bool disposing ) { base.Dispose( disposing ); } static void Main() { Application.Run(new SoundTestForm()); } }
}
[/code:282mo01v]
I get “A native exception has occurred in SoundTest.exe” on the screen of the PocketPC.
According to the .NET debugger,
inited = true
a = 0
just after the exception occures.
The MP3 music.mp3 is in the same directory as the executable on the PocketPC.
Any help is greatly appreciated!
Cheers,
Steve
Hi Brett,
Thanks for the suggestion, but that’s not it.
pragma unmanaged is a C++ compiler directive, and the C# compiler doesn’t recognise it.
-Steve | http://www.fmod.org/questions/question/forum-10520/ | CC-MAIN-2016-44 | refinedweb | 346 | 61.83 |
Stephen McConnell wrote:
>
>
> Stefano Mazzocchi wrote:
>
> Stefanno:
>
> Have read with interest the Blocks 1.1 description. First of all -
> thanks to everyone who contributed to this. I have a number of notes
> in-line, some of which I am sure will reflect my ignorance concerning
> the Cocoon world/terminolgy. Throughts are strongly related to
> background with Avalon, experience with Merlin and Fortress and usage of
> the excalibur/meta package.
Stephen, thanks much for this. I think your experience in
component-oriented paradigm will be very valuable for us here. See my
>
>
> >
> > +---------------------------+
> > | Part 2: technical details |
> > +---------------------------+
> >
> > Ok. Now that we have described where we want to go, let's describe how.
> >
> > Cocoon Blocks
> > -------------
> >
> > A Cocoon block is a zipped archive, just like JARs and WARs.
> >
> > The suggested extension of a cocoon block is ".cob" (for COcoon Block).
>
>
>
> Another suggestion ... BAR - Block ARchive.
>
> The reason for suggestiong this is that the concept of a JAR/WAR style
> deployment unit is something I've been looking at within the Merlin
> framework. It seems to me that the notion of a block is something
> usable at level across many different applications and based on the
> requirements and descriptions here - I dion't see any immediate Cocoon
> specifics except for the inclusion of the sitemap and default sitemap
> semantics (more notes on that later).
Well, a COB is more than a BAR. A COB is a cocoon-specific BAR. We need
this. We need a way for the block to *mount* itself onto a specific URI
space. Otherwise blocks are just a way to deploy java components and
their libraries.
>
> >
> > The suggested MIME type is "application/x-cocoon-block".
>
> And following the BAR notion .. "application/x-block"?
I'm not against using a more general MIME type, as long as there is a
way to *cast* it to COB before deployment. Something like having a
signature inside the block metadata or something like that.
>
> >
> > A Cocoon Block (COB from now on) includes a directory called
> >
> > /BLOCK-INF
> >
> > which contains all the block metadata and the resources that must not be
> > directly referentiable from other blocks (for example, jars, classes or
> > file resources made available thru the classloader). The directories
> >
> > /BLOCK-INF/classes
> > /BLOCK-INF/jar
> >
> > are used for classes and jar files. [This follows the WAR paradigm]
>
>
>
> For consitency with the Servlet spec (Web Applications/SRV.9.5 Directory
> Structure) - I suggest /BLOCK-INF/jar be changed to /BLOCK-INF/lib
Yes, yes, I overlooked that. Totally agree it should be /BLOCK-INF/lib
>
> >
> > The main COB descriptor file is found at
> >
> > /BLOCK-INF/block.xml
> >
> > This file contains markup with a cob-specific namespace and will
> > include the following information:
> >
> > 1) block implementation metadata:
> > - unique URI identifier [this identifier will also be used as an
> > address on where to locate the block and how to download it from the
> > web!] (example:)
> > - version (1.5.34)
> > - short name (My Block)
> > - description
> > - author
> > - URI of license ()
> > - URI of the distribution location
> > ()
> > - ???
> >
> > 2) role(s):
> > the URI(s) of the behavioral role(s) this block implements
> > and exposes [optional]
>
>
>
> When you are using the work "role", is it safe to assume that this is a
> URI that resolves to a description of the set of computational
> "service"(s) that a block is capable of providing?
yes, it's the 'behavioral contract'.
> If this is correct - then I would suggest renaming this to
> "service(s)".
'service' is kinda abused as a term, expecially now with the web
services stuff floating around.
> My rationale here is that a "role" (to me) is more correctly aligned
> with the consumer of a service - Block B1 is depedendent on service X
> for role of "authorization". If I understand correctly, the notion
> you desribing is collection of an interface + version range supported
> by a block that would enable it to be supplied to Block B1 in order to
> fulfill the service dependecies that B1 has with respect to its
> "authorization" concerns.
Your assumptions are correct and I agree with your rationale that 'role'
is kinda misplaced as an identifier for a behavioral contract.
Still, I don't like the term 'service'. Anybody else has a good
suggestion for this?
>
>
> Keep in mind that I'm biased relative to the Merlin/Phoenix coventions
> here of using the work "service" to describe the functionality
> exported by a component.
No problem.
> I'm extending that notion on the assumption that a block exposed a set
> (or sub-set) of the services provided by the components it is aggregating.
Yes, this plus all the cocoon-specific services. In fact, you could
think at the sitemap as a description of the behavior of a hidden
'cocoon component' that exhibits pipeline-handling services.
> Just as a side note, you may want to think about seperating "block"
> URIs from "service" URIs.
Read again: it's already there. One URI describes the block another
describes the 'behavior' that it's implementing.
So, we could have implements
> This is something I've been working on recently in
> Merlin - and the seperation of component provider for service has
> proved valuable.
Oh, it's even more than valuable: it's vital. Otherwise, how can be
implement block polymorphism?
> It ensures that the concepts of a service is not tied to a
> particular implementation unit (block or component). Seperation of
> component implemetation meta data from the service meta data is
> already in place under the excalibur/meta package for the same reasons.
>
> >
> > 3) dependencies:
> > the URI(s) of the behavioral roles this block expects,
> > along with the prefixes used by the block as shortcuts in protocol
> > resolving (see below for the meaning of this) [optional]
>
>
>
> I'm guessing that your referring to the inclusion of a roles file - is
> that correct?
Nop. I'm talking about the dependencies of a particular block on the
services provided by other blocks.
So
http//apache.org/cocoon/blocks/instances/Forrest/1.0 requires.[2-x]
which means that Forrest requires a service to transform FO into PDF and
will use a contract defined on version 1.2 of that behavioral contract
and not changed until 2.0 (excluded)
> How does this compare to something like the dependencies
> declaration used in the excalibur/meta package?
>
>
> >
> > 4) inheritance:
> > the URI of the block extended. [optional]
>
>
>
> It seems to me that there two distinct inheritance concerns: (a) block
> inheritance and (b) component inheritance (assuming that a block
> aggregates components). In the case of block inheritance this would
> handles the cases of resources and the ability to redefine resources
> in derived blocks.
My proposal didn't include a way for a block to access a resource
included in another block directly, but only passing thru a sitemap and
invoquing pipelines.
Here, when a sitemap *extends* another one, it's means of falling back:
the two sitemaps are appended and if no matching happens in the first
one, it falls back on the extended one.
> In the case of component inheritance, this should be
> handled at the component type level and should not be linked to block
> inheritance.
I thought that classloading precedence would solve this issue almost
automatically. In fact, again, if the asked component is not present in
the block classloader, it will fall back to the classloader of the
extended block.
>
>
> >
> > 5) sitemap:
> > the location inside the block file space of the sitemap
> > [optional, if not found defaults to '/sitemap.xmap']
>
>
>
> This one - I'm not sure about - does it make sence for this to be part
> of a generic block specification
No, it makes sense to have this in a cocoon block specification but I'd
be interested in seeing how our effort can combine with others.
> , or is it part of a block that provides
> functionality derived from a stitemap?
A cocoon block is a block with some cocoon specific service. In this
sense a COB extends a BAR and for this reason must have cocoon-specific
semantics.
> Perhaps this is point where a COB extends a BAR ?
Yep. And no small point: without this, blocks are almost useless as a
webapp deployment tool to us. In avalon, you are packaging services
provided by components, in cocoon we want to package services but they
are not only provided by java components but also provided by cocoon
services (pipelines).
In this sense, if a block exposes a sitemap is a cocoon block, if it
doesn't (because sitemap exposure is optional) it is a regular avalon block.
> >
> > 6) configurations:
> > the configurations required for this block to function [optional]
>
>
>
> Some clarification needed here - I'm assuming that a block is a
> collection of a components.
No. A block is a collection of deployable services, some of which are
implemented as avalon components.
> Each component would have its own meta info
> (explicit or derived). At the level of block I can imagine information
> that is describing profiles of component usage, and instructions
> concerning assembly of profiles that will result in the establishment of
> a computation system (I'm talking about internal assembly of a block
> here - not block assembly). This internal "assembly" level information
> can be considered as the block configuration but should not be confused
> with component configuration data.
I lost you here.
Let me give you an example of what I mean with configuration.
Let me suppose that I deploy a block that provides me with
authentication services (don't think about java components only, think
about also a pipeline that handles the login pages, the error pages, the
authentication flow, the user-managing pages and flow and the components
to connect to the various data storages)
This block will then need configurations to work such as:
- system to use [file|RDBMS|LDAP]
- location of the database
- username/password for connection (not needed for 'file')
- ...
and so on.
>
> On the subject of component configuration, there are three different
> levels of component configuration that are handled within the Merlin
> container. The first type is static configuration defaults (established
> by a developer and bundled with the class), the second type is
> configuration data associated with a named deployment profile (i.e.
> component X deployed using profile P1 is different to component X
> deployed using profile P2). The third category of configuration data is
> data defined by an administrator that typically suppliments a profile,
> which in turn suppliments default configuration data.
It is *not* the deployer concern to know who uses the configurations
inside the block. So I shouldn't have to configure single components,
but the block as a whole and then the block knows what part uses what
configuration. Otherwise IoC is broken.
I don't want users to have to know the internals of the block in order
to be able to configure it.
it should be as simple as possible and as transparent as possible to the
user. Just fill-up the form with the value you want and that's it.
>
>
> >
> >
> > Also, the /BLOCK-INF/ directory contains the 'roles' file for Avalon
> > components:
> >
> > /BLOCK-INF/roles.xml
>
>
>
> I've been thinking about how to handle roles versus the more formal meta
> data approach used in Merlin. One of the first things that is needed at
> the component level is the declaration of mechanism used to bring
> external data into and meta-data model. Markus has already started
> working on content in this subject and I'll be shifting some of the
> meta-data content out of Merlin to the excalibur/meta package in the
> near future as part of supporting this work. In effect there should not
> be a need to include a /BLOCK-INF/roles.xml at the spec level - instead
> one should be declaring a meta management strategy at the component
> level, and possible a default strategy at a block level. This would
> enable the deployment of ECM style components without change, together
> with non-ECM components. Specification of the inclusion of a roles file
> would be part a ECM meta strategy spec.
Ok.
>
> >
> >
> > Possible use-case scenario
> > --------------------------
> >
> > Suppose you have your naked cocoon running in your favorite servlet
> > container, and you want to deploy myblock.cob. Here is a possible
> > sequence of actions on an hypotetical web interface on top of Cocoon
> > (a-la Tomcat Manager)
> >
> > 1) upload the myblock.cob to Cocoon
> >
> > 2) Cocoon scans /BLOCK-INF/, reads block.xml and finds out the
> > behaviors this block depends on as well as the block that it extends.
> >
> > 3) the block manager connects to the uber "Cocoon Block Librarian"
> > web service (hosted probably on cocoon.apache.org) and asks for the
> > list of blocks that exhibit that required behavior.
> >
> > 4) the librarian returns a list of those blocks, so the users chooses,
> > or the manager allows the user to deploy its own block that implements
> > the required behavior or to reuse those already deployed blocks that
> > implement the required behaviors.
> >
> > 5) Cocoon checks that all dependencies are met, then unpacks and
> > installs the blocks
> >
> > 6) For each block that exposes a sitemap, the deployment manager asks
> > the deploying user where he/she wants to *mount* that block in the
> > managed URI space or if he/she wants to keep them internal only (thus
> > only available to the other blocks, but not mounted on the public URI
> > space)
>
>
>
> The above comment is probably the point where a COB comes into focusus
> as a specification that extends a more generic BAR specification (i.e.
> COcoon Block could be viewed as an extension of a generic component
> Block ARchive).
Yep
>
> >
> >
> > 7) for each block that requires installation-time configurations, the
> > block manager will present the user information on how to configure
> > the block.
> >
> > 8) If no collisions in the URI spaces are found, the blocks are made
> > available for servicing.
> >
> >
> > Resource dereferencing
> > ----------------------
> >
> > Security concerns aside, the above scenario shows one major issue:
> > blocks are managed, deployed and mounted by the container. There is (and
> > there should not be) a way for a block to directly access another block
> > because this would ruin IoC. xdc
>
>
>
> If you follow the seperation of "block" from "service" you can avoid
> this issue. In effect, "service" is what is exposed by the assembly
> system - block never needs to be exposed. However, this does not
> address the complete picture. The block concept includes resources as
> well as services. To complete the picture, the block would need to
> declare accessible resources (something not addressed in the
> excalibur/meta or Merlin system).
You got me wrong here: the separation of block and service was already
proposed, but for inheritance, you have to expose *directly* the block
you want to extend. You can't extend a behavior with a block.
Also note that cocoon blocks will not expose resources but only
pipelines and components. Resources are those generated by the pipelines.
>
> The idea of seperating "block" and "service" has significant
> implications - firstly, the structural unit of deployment are seperate -
> which means that a service interface, realted meta and resources can be
> loaded indepedently of a block. You need to be able to do this as soon
> as you get into classloader hierachies across which service defintions
> appear higher in the classloader that the implemetations (i.e. the
> service defintions are shared whereas the block implementation is
> protected).
I lost you here again, probably you are more familiar than me on
implementation details.
>
> >
> >.
>
>
>
> Given sufficient meta-info (type-level) plus meta-data (profile-level)
> it is possible to do validation on components prior to the assembly
> of blocks/components into a running system.
Yeah, for components it's doable. But my concern are is talking about
validating an entire URI space with its internal flow etc etc. Not easy.
> The validation phase does
> things like ensuring that meta-data in consitent with implemetation,
> references to resoruces actually refer to existing resources, etc.
Hmmm, suppose you have a matchers like this
<map:match
<map:call
...
</map:match>
how are you going to validate it?
> This
> type of validation does not need any supplimentary langauge because its
> simply ensuring the consistency of a logical system before system
> deployment. Validation could be applied at block creation time, and
> during multi-block assembly.
Components were never my concern for validation purposes.
Cocoon-specific URI-oriented services are!
>
> >
> >
> > o) VERSIONING AS PART OF THE BEHAVIOR URI
> >
> > The behavior URI *MUST* terminate with a /x.y that indicates the
> > major.minor version of the behavior that a block implements.
>
>
>
> Can you explain the *must*
I'll explain it like this: Java failed to add versioning to interfaces
and class definitions (assuming that it was classloading's concern to do
that). I don't want to make the same mistake here. A contract is
immutable *only* if marked with a timestamp or a unique version number.
Unlike W3C, I prefer version numbers as unique discriminators for URIs.
> - the conventions used in the excalibur/meta
> package assume a default value of 1.0 if no version information is
> supplied.
This is lame. It's like saying that you'll default to
'org.apache.avalon' if you forgot to indicate the package of a
component. I think one should know what he/she is doing: if the URI that
defines a contract is
it's not
> My experience is that this is good for the developer but bad
> for the user. User's typically prefer the most recent stable release
> as a default value.
Ah, you are mixing concerns here! One thing is to talk about contract
dependencies, another thing entirely is to talk about user preferences
of deployment of block *instance* versions!
I agree with you that users want the most stable release, but that's why
the block descriptor metadata in my proposal includes:
- unique URI for this block ()
- URI for the latest release ()
but these are URI for block implementations *not* for the block behaviors.
> I've also some reservation about the "/" delimited as the appropriate
> means for version delimiting - because it would break what is already
> running in Merlin :-)
Nothing is carved in stone here so feel free to propose alternative
syntaxes as long as the intended results remain the same.
>
>
> >
> >
> > On dependencies, each block must be able to specify the 'ranges' of
> > versioning that it is known to work with. For example
> >
> >
> >
> > But I haven't really thought about the patterns that could be used for
> > this.
> >
> > Please, help on this.
>
>
>
> Some useful documetation concerning "component" level meta info for the
> type level is available on the excalibur/meta package. This meta info
> *only* deals with the component type level (equivalent to information
> supplimenting the component implementation classes and service interface
> classes).
>
>
> Meta information concerning the description of "profiles" (the
> configuration data, context directives, etc.) is defined under the
> Merlin 2 API. The Profile Javadoc is a good starting point.
>
>
Ok, I'll take a look at those later (I'm offline right now)
>
>
> >
> > 3) Which avalon container should we use since the one we currently
> > use (ECM) is not powerful enough? is there already a container which
> > is powerful enough to handle our needs as described here? if not, what
> > do we do? we implement our own or work with the avalon people to fix
> > theirs to meet our needs?
>
>
>
> I would *very* much like to see this as a joint Cocoon/Avalon iniative.
Me too. I don't want to reinvent the wheel if it's possible to avoid that!
>
> On the Avalon front there are two containers the play into the
> requirements stated above - Merlin and Fortress. However, neither of
> these containers completely address the requirements. But lets look a
> little deeper and figure out where Avalon is today relative to the
> target and what potential is offered by a combination of Merlin,
> Fortress and aother Avalon related iniatives.
ok
>
>
> Defintion of a block as a structural package
> ---------------------------------------------
>
> I would like to see an Excalibur package dealing with a BAR (Block
> ARchive) that serves as the basic structure for a COB. This should
> include tools and utilities for BAR creation, structural validation,
> signing, etc. There is existing content in the Phoenix app-server
> related to the SAR file format which is close to the notion of a block
> in terms of structure but is too cause grained for the block concept.
Why so? (BTW, should we copy Peter in this discussion?)
> In
> addition, the work in Merlin dealing with container defintion seems to
> me to be very close to component/service management side of a block, but
> lacks the formal management of resources (i.e. a Merlin container only
> expose services - not resources).
Again, cocoon blocks should not expose resources so this is not a problem.
>
>
> Component meta info and meta data
> ---------------------------------
>
> As mented above - the component type level meta info in excalibur/meta
> combined with profile level meta data in excalibur/assembly (model
> package) is a working starting point for the component level deployment
> concepts. There is some more seperation work to be done on the Merlin
> side - after which much of the Merlin meta data model will move over to
> the excalibur/meta package. This will provide a light-weight meta model
> that is container independent. The model does not currently support
> inhertance - this would require some minor additions to the existing
> stucture and some significant additions to the verification functions.
Ok.
>
>
> Assembly solutions
> ------------------
>
> Merlin includes a assembly engine that automates the process of wiring
> together components based on depedencies and services. This is working
> well today but could do with some refactoring.
Is it working well or not? the above doesn't really parse.
> Notions of default
> configurations combined with packaged deployment profiles are proving to
> be excellent solutions to simplification of the over service management
> problem.
>
> Lifecycle and Lifestyle management
> ----------------------------------
>
> Both Merlin and Fortress support the classic Avalon lifecycle stages
> (configuration, contextualization, composition/servicing, etc.) together
> with a common model for the introduction of lifecycle extensions.
> Respective implemetations differ in the Merlin allows extension
> implementations to be component that my have their own depedencies
> whereas Fortress does not have support for compoennt assembly. Lifestyle
> management is equivalent in that both provide support for singleton,
> thread, pool and transient policies. Again, implemetation approaches
> differ - Fortress is very much derived from the ECM model and respects
> lifestyle marker interfaces whereas Merlin requires lifestyle policy to
> be declared within the meta-info of a component type. Looking forward,
> the Merlin strategy will be to declare the lifestyle processing
> strategy, allowing for defintion of a plug-in handler for lifestyle
> resolution - allowing a mix of pure meta-based components together with
> ECM style and Avalon 4.1.2 marker interface recognition.
So, do you think it would be possible to migrate Cocoon from ECM to
Merlin/Fortress without breaking existing functionality?
Also, are they capable of run-time changes to the block dependencies?
> Mixed lookup semantics
> ----------------------
>
> Fortress provides complete support for the extended semantics implied
> within a lookup argument. The Merlin 2 implementation does not support
> this. The main issue (from my own point of view) is that the Avalon
> framrework Composable and Serviceable interface seamantics are
> insuffiently specificed and the real requirement here is to resolve this
> at the framework level first, then apply these solutions within
> respective containers. In the meantime, the strategy for Merlin will be
> to plug-in an ECM style manager when required at a component or
> container level (with an implementation based on existing Fortress
> code). This will enable zero modification of existing ECM style
> components.
Ok, cool. We don't want to deal with framework changes until they
solidify so this is a very big requirement for us.
>
> >
> > 4) how do we implement the block manager? should it be a comman line
> > interface or a web interface, or both?
>
>
>
> I don't think I agree with the question ;-)
> Management of blocks should be indepedent of the means through
> information is presented. Assume that you have a container that is
> capable of managing a set of components, resources and subsidary
> containers ... you could imagine a management interface to the
> container, and that management interface could be accessible via the
> web, command line, JMX etc.
Yeah, of course. for 'block manager' I was indicating the
'user-interactive layer' that allows us to deploy, configure and manage
blocks. I was not talking about the block manager in term of container
implementation. Terminology conflict here.
>
> > what about security?
>
>
>
> Work I've done in this area is perhaps excessive relative to what you
> have in mind. I have a micro PKI which handles the generation of keys
> and certificates which are used for both admin and runtime
> authorization. The main difference between the work I'm doing and what
> your describing here is that I'm dealing with distributed containers and
> I need to propergate identify with every invocation and single
> invocation may result in service invocation across multiple container
> deployed in defferent sites, each with different security policies.
No, no, no, nothing that fancy here :) we just would like to be sure
that the block really comes
from apache.org and is therefore trustfull to deploy. That's plain
enough for me.
>
> >
> > 5) the 'uber library of cocoon blocks'. Where do we host it? how to
> > we manage it? How do we provide the block discovery web service? which
> > technology do we use: SOAP or REST?
>
>
>
> My experience here is somewhat experimental at this stage. I'm not
> using a web protocol - instead I'm passing meta model structures over
> the wire (i.e. remote invocations but the scenario is a little diffenent
> because I'm more concerned with service access where service can be
> relocated locally or accessed remotely).
>
> >
> > 6) should we "digitally sign" our blocks?
>
>
>
> Yes.
>
> > if so, how?
> >
>
> Has anyone though about a Cocoon Certification Authority ?
Ok, I think we have to involve our crypto gurus here, I'll ask the
mod_ssl people.
For now, let's skip this step since it's just another step in the
validation phase and doesn't impact the block design.
--
Stefano Mazzocchi <stefano@apache.org>
--------------------------------------------------------------------
---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200211.mbox/%3C3DC5B066.4060708@apache.org%3E | CC-MAIN-2015-22 | refinedweb | 4,340 | 53.61 |
Blog Gratia

Add the file with some default information you want for it
to your project and change its build action to Embedded Resource. Then, you find
the name of the resource (it can be tricky if you have a few folders) by opening
up ildasm and double clicking the MANIFEST node. Using that resource name, you
would do something like this:
using System;
using System.IO;
using System.Reflection;
using System.Windows.Forms; // needed for the Application class used below
using System.Xml;

string path = Path.Combine(
    Environment.GetFolderPath(
        Environment.SpecialFolder.ApplicationData),
    Application.CompanyName);
path = Path.Combine(path, Application.ProductName);
path = Path.Combine(path, subFolder); // subFolder: an app-specific folder name defined elsewhere
path = Path.Combine(path, "fileName.xml");

if(!File.Exists(path)){
    Assembly thisAssembly = Assembly.GetExecutingAssembly();
    Stream rgbxml = thisAssembly.GetManifestResourceStream(
        "YourNamespace.fileName.xml");
    XmlDocument doc = new XmlDocument();
    doc.Load(rgbxml);
    doc.PreserveWhitespace = true;
    doc.Save(path);
}
A couple of things to note about this: it's for a WinForms application, so I
can use the Application class to get things in AssemblyInfo.cs (like
ProductName, CompanyName). Also, you could probably do this anywhere but I chose
to put a lot of my default configuration under the user's ApplicationData
folder, where most users (can't say for sure about the guest account since
that's been disabled here for a long time) have authority to
write.
and are the new entries in the xml file available as an embedded resource. Do you not have to recompile the exe with the embedded resource?
In this case, I believe that you have to recompile to get a new xml file in there. You may be able to get around it using satellite assemblies if you are worried about having to recompile.
But this is a case that I have a default xml file that has the same content regardless of language, and it remains the same always. This is a good candidate for those cases.
ah so like a standard template config file that applies to all new users .....this then gets saved per user and can be edited
Thanks Chris
Last year I put together an article that had a bunch of resource files related stuff in it, that people reading this entry might benefit from (some source code there too) at:
It seems worth it to note that "doc.PreserveWhitespace = true;" should be placed before "doc.Load(rgbxml);"
The apparent result, otherwise, is that the embedded file will get loaded, ignoring the whitespace, and then any whitespace left (which will likely be none) will be preserved.
You have to check if stream is null. Check out my version of embedded file reading weblogs.asp.net/.../reading-embedded-files-at-runtime.aspx | http://weblogs.asp.net/cfrazier/archive/2005/07/18/419812.aspx | crawl-002 | refinedweb | 433 | 57.37 |
This Tech Tip reprinted with permission by java.sun.com
According to Wikipedia, a splash screen is a computer term for an image that appears while a program or operating system is loading. It provides the user with a visual indicator that the program is initializing. Prior to Java SE 6, (code name Mustang) you could only offer the behavior of a splash screen by creating a window at the start of your main method and placing an image in it. Although this worked, it required the Java runtime to be fully initialized before the window appeared. This initialization included AWT and typically Swing, so that it delayed the initial graphical display. With Mustang, a new command-line option makes this functionality much easier. It also displays the image more quickly to the user, that is, even before the Java runtime has started. Final inclusion of the feature is subject to JCP approval.
If you run a program from the command line, you can generate a splash screen through the -splash command line switch. This functionality is most useful when you're using a script, batch file, or desktop shortcut to run the program. The command line switch is followed by an image name:
java -splash:Hello.png HelloWorld
Yes, that is a colon between -splash and the image name. This immediately displays the image before the runtime environment is fully initialized. The displayed image is centered on the screen. Splash screen images can be have GIF, PNG, or JPEG formats. As is the case for the regular Image class, splash screen images support animation, transparency, and translucency (translucency support is limited to Microsoft Windows 2000 or XP). The splash screen disappears when the first window is created by the application.
Typically most users won't want to put -splash on their command line entry. So perhaps a more effective way of displaying a splash screen is to create a manifest file for an application, and then combine the application with the manifest and image in a JAR file. When a user launches the application from the JAR file, the splash screen appears. In this case, the user doesn't have to specify a command line option.
The manifest file option is named SplashScreen-Image. The option is followed by the image filename. The full path of the filename needs to be specified if the file is not at the top level of the JAR file.
Here's a simple example that demonstrates these new splash screen features. First, create the following program:
import javax.swing.*;
import java.awt.*;
public class HelloSplash {
public static void main(String args[]) {
Runnable runner = new Runnable() {
public void run() {
try {
Thread.sleep(1500);
} catch (InterruptedException e) {
}
JFrame frame = new JFrame("Splash Me");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JLabel label = new JLabel(
"Hello, Splash", JLabel.CENTER);
frame.add(label, BorderLayout.CENTER);
frame.setSize(300, 95);
frame.setVisible(true);
}
};
EventQueue.invokeLater(runner);
}
}
Next, compile the program:
javac HelloSplash.java
Then try out the command-line -splash. For simplicity, use a splash screen image that's in the same directory as the program (this is not an absolute requirement):
java -splash:MyImage.png HelloSplash
You'll see MyImage centered on the screen immediately, followed by the application screen once the Java runtime environment initializes.
Now let's try the JAR file approach. First create the manifest.mf file for the manifest. The contents of the file should look like this:
Then package the JAR file:
Then run the JAR without specifying the -splash command line option:
As before, you should see the splash screen followed by the application screen.
If your JAR file has a splash screen image specified in its manifest, and a user specifies a splash image from the command line, the command line image is given precedence and shown instead.
Although the command-line -splash and manifest SplashScreen-Image options are sufficient for most needs, there is more to splash screens in Mustang. The java.awt package offers a SplashScreen class for more advanced functionality beyond simply showing a splash screen image.
Provided an image was created by either the -splash command line option or the SplashScreen-Image option in the manifest, the getSplashScreen() method of the SplashScreen class returns the generated screen. If no image was created, getSplashScreen() returns null.
Using other SplashScreen methods, you can discover various things related to a splash screen:
You can change the image shown after the splash screen is loaded, but before the application starts. You have two ways to do this. The setImageURL() method allows you to provide a URL for a new image to display. The second approach, which is likely more typical, is to call the getGraphics() method to get the graphics context (java.awt.Graphics) of the window. You then update the image with any of the normal graphical and Java 2D APIs. That's because this is an instance of Graphics2D, not simply java.awt.Graphics. After you draw to the graphics context, you call the update() method of SplashScreen to draw the updated image.
Here's an example of the latter behavior, which cycles through a bunch of colors on the splash screen. Imagine this displaying a progress bar or some other state data indicating the progress of application initialization.
import javax.swing.*;
import java.awt.*;
import java.awt.geom.*;
import java.util.*;
public class ExtendedSplash {
public static void main(String args[]) {
Runnable runner = new Runnable() {
public void run() {
Random random = new Random();
SplashScreen splash = SplashScreen.getSplashScreen();
Graphics2D g = (Graphics2D)splash.getGraphics();
Dimension dim = splash.getSize();
Color colors[] = {Color.RED, Color.ORANGE,
Color.YELLOW, Color.GREEN, Color.BLUE,
Color.MAGENTA};
for (int i=0; i<100; i++) {
g.setColor(colors[i % colors.length]);
g.fillRect(50, 50, dim.width-100, dim.height-100);
splash.update();
try {
Thread.sleep(250);
} catch (InterruptedException ignored) {
}
}
JFrame frame = new JFrame("Splash Me2");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JLabel label =
new JLabel("Hello, Splash", JLabel.CENTER);
frame.add(label, BorderLayout.CENTER);
frame.setSize(300, 95);
frame.setVisible(true);
}
};
EventQueue.invokeLater(runner);
}
}
Notice how the drawing is done over the splash screen image.
After the color cycling is complete, the example shows the frame. This is typical of a startup process: after the initialization completes, show the frame, which hides the splash screen.
The final SplashScreen option to mention uses the close() method. You can call this method if you want to explicitly close the window and release the associated resources. It isn't necessary to explicitly call this method because it is called automatically when the first window is made visible.
For additional information on using splash screens, see the technical article New Splash-Screen Functionality in Mustang. Also see the javadoc for the SplashScreen class.
You can share your information about this topic using the form below!
Please do not post your questions with this form! Thanks. | http://www.java-tips.org/java-se-tips/javax.swing/splash-screens-and-mustang-5.html | CC-MAIN-2014-15 | refinedweb | 1,140 | 57.16 |
OK, I've done that. Please let me have your teacher's contact details so I can email it directly to him, thus saving you the bother of copy/pasting the solution. You will, of course. fail the course, and maybe get chucked out, but that would be quite reasonable given your attitude.
Alternatively...
DaniWeb Member Rules (which you agreed to when you signed up) include:
"Do provide evidence of having done some work yourself if posting questions from school or work assignments"
//Hope This Helps #include<iostream> using namespace std; int main() { int size=0,x=0; cout<<"Enter The Size of Message "; cin>>size; char message[size]; cout<<"Enter Message "; cin>>message; for(int i=0;i<size;i++) { for(int j=0;j<size-1;j++) { if(i!=j && message[i]<message[j]) { char temp=message[i]; message[i]=message[j]; message[j]=temp; } } } for(int i=0;i<size;i++) { cout<<message[i]<<endl; } system("pause"); }
James is right, Mabdullah. The rules on this forum are pretty clear and he's following them and you're violating them. He's been here quite a while, so one would assume he knows how things work here, especially when he linked the rule he was referring to.
And your program has a very serious error in it as well. ...
More Recommended Articles | https://www.daniweb.com/programming/threads/504177/please-help-me | CC-MAIN-2016-50 | refinedweb | 224 | 61.97 |
Addressing many LEDs with a single Arduino
A fun little side project of mine is Arduino C/MRI, a library that lets you easily connect your Arduino projects up to the JMRI layout control software, by pretending to be a piece of C/MRI hardware. Hence the name.
A common problem when using Arduino C/MRI is dealing with lots of inputs and outputs. As an example, lets wire up a simple non-CTC crossing loop here in New Zealand. It is about as simple as you can get:
Each end consists of:
- A turnout. We'll need 1 digital output to drive that.
- A route indication signal on each leg of the turnout. We'll need an LED for red, and one for green (technically it'd be blue here in NZ). That's 3 pairs of outputs = 6 more.
- A push button to control the turnout. That's 1 digital input.
That's 8 pins right there, doubled for the other end of the loop, makes 16 pins. That's nearly an entire Arduino dedicated to just one piece of track! Naturally we'll be having more than just a single crossing loop on our railway, yet we have no more Arduino pins left. What are we to do?
Expanding outputs
The answer comes in the form of a 74 series logic chip, the 74HC595. This is a serial-in, parallel-out device. We send it the state of each pin using 3 data pins, and it updates each of its 8 pins. So already using 3 pins we're able to drive 8 output pins. But the best part? They can be daisy chained. That means with 3 data pins, we can control an unlimited number of 74HC595 devices! Suddenly our job just got a whole lot easier.
The schematic below demonstrates how one might do this:
Notice how the
Q7' pin is daisy-chained to the next device, while the
ST_CP and
SH_CP pins are shared. Now using 3 data pins we're addressing 16 outputs. Fantastic. What does the code to deal with this look like?
#include <CMRI.h> #define LATCH 8 #define CLOCK 12 #define DATA 11 CMRI cmri; // defaults to a SMINI with address 0. SMINI = 24 inputs, 48 outputs void setup() { Serial.begin(9600); // make sure this matches your speed set in JMRI pinMode(LATCH, OUTPUT); pinMode(CLOCK, OUTPUT); pinMode(DATA, OUTPUT); } void loop() { // 1: main processing node of cmri library cmri.process(); // 2: update output. Reads bit 0 of T packet and sets the LED to this digitalWrite(LATCH, LOW); shiftOut(DATA, CLOCK, MSBFIRST, cmri.get_byte(0)); digitalWrite(LATCH, HIGH); }
You can see we're using a new method here,
cmri.get_byte(n). Rather than inspecting a single bit, this returns an entire byte, which we then shift out to the 74HC595 using the
shiftOut method. Toggling the
LATCH pin is how we tell the 74HC595 that we're busy sending it data; it only updates the output pins once we take the
LATCH pin high.
More inputs
That was pretty easy, but what if we have a massive CTC panel and want dozens and dozens of inputs? Or we have gone a little crazy with occupancy detectors? Can we do something similar? Luckily we can, using the CD4021 "8-Stage Static Shift Register". It's just the opposite of what we've seen above.
The schematic is a little messier because of all the pulldown resistors, but you get the idea: 3 data lines to the Arduino.
The code is a little more complex, but only slightly: (note: untested code)
#include <CMRI.h> #include
// pins for a 168/368 based Arduino #define SS 10 #define MOSI 11 #define MISO 12 /* not used */ #define CLOCK 13 CMRI cmri; // defaults to a SMINI with address 0. SMINI = 24 inputs, 48 outputs void setup() { Serial.begin(9600); // make sure this matches your speed set in JMRI SPI.begin(); } void loop() { // 1: main processing node of cmri library cmri.process(); // 2: toggle the SS pin digitalWrite(SS, HIGH); delay(1); // wait while data CD4021 loads in data digitalWrite(SS, LOW); // 3: update input status in CMRI, will get sent to PC next time we're asked cmri.set_byte(0, SPI.transfer(0x00 /* dummy output value */)); }
The connections from the above schematic are:
- dataPin -> MISO (12)
- latchPin -> SS (10)
- clockPin -> CLOCK (13)
We're using a new method again here, the
cmri.set_byte(n, b) which sets the given byte to the value read in from the CD4021.
Putting it together
Using a combination of the 74HC595 and CD4021, you should be able to easily address dozens of inputs and outputs from a single Arduino, while using only half a dozen pins. This leaves other pins free for more interesting tasks. Suddenly wiring up your entire goods yard is not only possible, but quite easy. | http://www.utrainia.com/45-addressing-many-leds-with-a-single-arduino | CC-MAIN-2019-51 | refinedweb | 810 | 73.37 |
Re: Modeling question...
Date: Thu, 30 Oct 2008 07:03:35 -0700 (PDT)
Message-ID: <8d9f2fb3-8981-4e78-aef9-50b5b6e04ee6_at_a17g2000prm.googlegroups.com>
On Oct 30, 7:41 pm, JOG <j..._at_cs.nott.ac.uk> wrote:
> On Oct 30, 1:51 am, David BL <davi..._at_iinet.net.au> wrote:
>
>
>
>
>
> > On Oct 29, 8:39 pm, JOG <j..._at_cs.nott.ac.uk> wrote:
>
> > > On Oct 29, 2:37 am, David BL <davi..._at_iinet.net.au> wrote:
> > > > On Oct 29, 9:13 am, JOG <j..._at_cs.nott.ac.uk> wrote:
>
> > > > > The RM handles facts as naturally as stating them in predicate logic.
> > > > > And why would one ever model things other than facts in predicate
> > > > > logic?
>
> > > > Exactly!
>
> > > Then may I suggest that your argument is not with the RM, but with the
> > > use of predicate logic to model equations, engines, etc. And yet this
> > > to me seems trivially true - if I was modelling a human in an art
> > > class I'd use clay, not predicate logic.
>
> > I don't think it's quite so trivial. For example, consider tri-
> > surface as a value-type. A simple type decomposition as a set of
> > triangles where each triangle is independently defined by 3 vertices
> > doesn't express the constraint that the triangles tend to meet each
> > other. It seems appropriate to introduce abstract identifiers for
> > the vertices in order that they may be shared.
> > This is evidently a relational solution. However unlike typical uses of the RM there
> > doesn't appear to be some external UoD to which the tuples,
> > interpreted as propositions can be related.
>
> I use Oracle Spatial to do exactly this sort of thing day in day out
> in a geospatial domain, and no abstract identifers are introduced. The
> coordinates of any vertex are used. That is what identifies them -
> that is what is used (note that these coordinates can happily be
> relative). Constraints to maintain adjacency use the spatial operators
> offered by SDO_RELATE. It is very good.
>
> I karate chop your example to pieces! Haiii-ya.
Please forgive my ignorance - I'm not familiar with Oracle Spatial. Are you suggesting that for a tri-surface all that is needed is a single relation for the triangles, and when for example you want to change what is conceptually a shared vertex (and so which is understood to impact multiple triangles), it is assumed that all vertex values that appear in the relation with that same value (ie coords) are indeed logically shared and therefore are all automatically updated by the DBMS at the same time? If so it is not clear to me how and when the DBMS knows that such an elaborate update policy is required. I presume it is inferred from the integrity constraints. Is that right? Does the DBMS provide such a facility in a generic way?
This reminds me of the idea that one can change the key of a tuple in a relation and have the DBMS automatically update all foreign key references across the entire database.
Anyway, I think there are data entry applications where the concept of "shared values" needs to be under user control. For example in the data entry of a CAD drawing of a car the user may or may not want all the wheels to share the same geometry. The problem with simple copy and paste (and no logical sharing) is that any future edits to the wheel geometry need to be repeated on every copy. The obvious solution seems to be to reference a single shared geometry for a wheel - hence the need for an abstract identifier. Are you suggesting that an alternative is to instead use an integrity constraint! If so how can you specify which geometries are logically tied and which are not (ie even though they just happen to be equivalent in value at that moment in time)? Doesn't that require abstract identifiers of some sort anyway? I can't imagine that values that happen to be the same are always assumed to be shared, because then it would be impossible for a user to copy and paste a value in order to create a copy that will subsequently diverge.
> > Rather it seems that a particular tri-surface /value/ has introduced a local and private
> > namespace in order to privately apply the RM. Note as well that this
> > is not like an RVA (where we think of only a single relation as a
> > value) because a tri-surface value is associated with /two/ relations
> > - one for the vertices and another for the triangles.
>
> > I have wondered whether abstract identifiers are needed precisely when
> > it is useful to express the concept of "common sub-expressions" within
> > nested value-types. Note that scene graphs are typically thought of
> > as DAGs not trees for precisely this reason.
>
> > I think there is an interesting interplay between 1) degrees of
> > freedom (or entropy or storage space if you like) in the encoding of a
> > value, 2) abstract identifiers, 2) integrity constraints and 4) update
> > anomalies. The existing normalisation theory in the literature seems
> > relevant but doesn't seem to me to account for recursive type
> > definitions and abstract identifiers.
>
> I am yet to be convinced of the need for abstract identifers (or
> invention of recursive types) from the examples offered so far.. the
> wff is the most interesting, but I am currently questioning the sense
> or utility of decomposing an equation in such a manner /at the logical
> level/ (as opposed to the physical).
Received on Thu Oct 30 2008 - 15:03:35 CET
Original text of this message | http://www.orafaq.com/usenet/comp.databases.theory/2008/10/30/0202.htm | CC-MAIN-2017-51 | refinedweb | 927 | 60.95 |
Quote:1) The input values it is actually using. Not the values in the "a, b, and c" files, or the values in the DB, but the actual values the app is using to create the file "d".- I tried finding out this, but failed as the file conatians values as i mentioned above
2) How it combines those values to produce a file. - the command used is :sprintf (buff, "cat %s >> %s",
file_a, file_d);
3) The output values it actually generates.
4) The output values you expect "d" to contain.
3 & 4 - some are correct and fileds are coming -ve randomly
#include <stdio.h>
int main()
{
printf("Hello World\n");
char buff[100];
char *file_a = "hello";
char *file_d = "world";
sprintf (buff, "cat %s >> %s", file_a, file_d);
printf("%s\n", buff);
return 0;
}
Hello World
cat hello >> world
cat
var
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://www.codeproject.com/Questions/4067702/Conversion-issue-with-files | CC-MAIN-2021-04 | refinedweb | 159 | 74.83 |
All opinions expressed here constitute my (Jeremy D. Miller's) personal opinion, and do not necessarily represent the opinion of any other organization or person, including (but not limited to) my fellow employees, my employer, its clients or their agents.
Just out of whimsy, here's my list of classes or interfaces that seem to show up in every project I work);
}
What's yours? Or is this just a sign of being in a rut?
[Advertisement]
Jeremy;
Will be great is you can post the code of some of those classes
Thx
Pingback from Database Management » Blog Archive » Classes that show up in every project
One that comes to mind for me is some sort of an ILogger (we've got a bunch of legacy apps with their own 'logging' system) so we've ended up putting in a facade that takes whatever format that particular app's Logging facility looks like and maps it to log4net.
I'd also like to put in a request to talk about how you're using Linq/NHibernate and maybe functional-style programming with your single repository.
Log (sometimes replaced by Log4Net)
Utilities (everything miscallaneous)
BusinessBase - business base class. I've inherited a couple of CSLA.NET based projects, which has a similar, if not overcomplicated business base classes.
DataBase - my tried and true database access base class - sometimes replaced by something shiny like Subsonic or NHibernate or CodeSmith generated magic
Here is my typical solution, broken down by projects
PresentationLayer
BusinessLayer
CommonLayer
DataLayer
Yeah, it might be a sign of a rut.
I usually have a static IoC class that wraps Castle Windsor, a helper class named Db for bypassing nhibernate and using ADO.NET directly for ex. Db.Transaction(delegate(IDbCommand cmd) {})
If it's a WebService app I usually have a EntityTranslator service which translates domain objects to DTO (and back),.
Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #133
I always end up writing a clock interface. Something like (in Java):
public interface Clock {
Instant now();
Date today();
}
Then having a SystemClock that delegates to the static time APIs, and a StoppedClock for fixing the time in unit tests.
Pingback from Dew Drop - July 10, 2008 | Alvin Ashcraft's Morning Dew
I always have some kind of abstraction over logging (whether I'm using log4net, System.Diagnostics or whatever) to do stuff like log performance in code blocks and log an exception easily.
And - not really a class - but I always end up with a Build project, which contains any MSBuild tasks needed to support the build. And it also contains post-build steps for full builds, like running unit tests and FxCop etcetera.
I think your approach is the start of your own software factory. You're not already identifying a lot of abstract steps you take before you start developing. The next step is to become more concrete and combine this stuff in predefined templates, etc.
Similar to your IRepository solution we added a Save() Extension method to IQueryable and the implementation wraps the datacontext / session.
We then grab IQueryable<T> from the IoC container.
Still early, but time will tell.
My IRepository looks very similar but instead of
T[] Query<T>(..)
T FindBy<T, U>(...) and
T FindBy<T>(...)
I just have a single
IQueryable<T> GetAll<T>()
In my unit tests I have mock repository builder methods that return mock repositories that spew out lists of test data whenever GetAll() is called. Any Linq extension methods that I chain after GetAll() then simply work on that object graph rather than being translated into SQL.
It's nice to wrap up specific queries in extension methods, then you can write stuff like:
var orders = orderRepository.GetAll().ThatMatch(criteria);
or maybe
var orders = orderRepository.GetAll().ThatHaveNotBeenBilled();
Like you I'm still experimenting with this pattern.
a generic name value class.
public class NameValue<TName, TValue>
{
public TName Name { get; set;}
public TValue Value { get; set; }
public override string ToString()
{
return Name.ToString();
}
}
Chris Brandsma,
Why not just use KeyValuePair<TKey, TValue>?
msdn.microsoft.com/.../8e2wb99w.aspx
Good idea to list those. For me, it comes down to about this:
DataProvider
Logging
Settings (or Config or whatever)
APPNAMEContext (like WebshopContext, or CommunityContext)
Utils
Events
MessagingEngine
Thinks like that.
+1 for the linq/nhibernate.
I'm a bit of a newbie with nhibernate, and realise it's a different mindset to the traditional DB way I've been working previously. I've understood nhibernate to be a persistence layer to enable you to work in terms of a domain model, rather than in terms of your database model. In other words, to get at data, you navigate rather than query. And so it follows that having to perform a query is an indication that there's something missing in your domain model.
Assuming I've got the general idea, why do are you querying the database? What kind of data are you getting back?
Cheers
Matt
Source code for Jeremy's IRepository & Repository classes are available here:
storyteller.tigris.org/.../IRepository.cs
PROVISO: The source code is available, but it's a naive implementation at the moment. Note the total absence of adequate try/catch blocks.
Try/catch blocks suck.
Maybe we could start a new "Exception Ignorance" grassroots movement.
Programming links 07.11.2008
1) ICommand
2) IRule<T> with the bool FulfiledBy(T candidate) method
3) A Guard class for parameter validation.
4) On.UIThread<T>(Func<T> operation)
5) ApplicationShell
6) EventBroker of some sort with the Subscribe and Publish methods
7) EventHelper for raising events(hate repeating that eventName != null crap)
Quick overview on few interesting posts in the previous days: The very useful CR_Documentor 2.0 has been
Pingback from Useful Links #9 | GrantPalin.com
Path handling library,
considering that a path is a string is such a poor practice with all the path richness (file/folder, absolute/relative, operation...)
Pingback from » Notable posts
Stuff we have that at some poinit will get harvested into a framework:
* DependencyResolver. This is our wrapper for any IoC (used to be called IoC). Has methods like Initialize (which takes in something to use to resolve, like an IWindsorContainer) and Get<T> to get a dependency
* IBuilder Fluent fixture for building entities. Like ObjectMother but a little more flexible
* ISpecification<T> A generic for creating specification objects to filter out items from entitiy collections. Have a few blog posts on this that I need to get out
* IValidationStrategy<T> A strategy pattern implementation on creating various validations for entities. An example like ReadOnlyValidation would be used in the UI to set controls to non-editable. Keeps validation out of the domain and into a service object.
285 days ago I blogged about my dislike for extension methods. Extension methods aren't very discoverable
Pingback from » Announcing the .NET Extension Library
You've been kicked (a good thing) - Trackback from DotNetKicks.com
"Keeps validation out of the domain and into a service object."
Do you mean that you aim to move all validation out of the domain, or just that you try to keep GUI focussed validation out?
Pingback from Score keeping hockey and DDD « Justin Rudd’s Drivel
Pingback from Hockey News Aggregator » Score keeping hockey and DDD ?? Justin Rudd???s Drivel
Pingback from Hockey » DU Beats Michigan 3-2 To Win Snoopy Tourney
Pingback from Link-Listing – July 08 « Cav’s Weblog
Lipitor side effects walking walking. Does grapefriut interfere with lipitor. Lipitor versus pravachol. Lipitor the drug. Lipitor grapefruit. Lipitor.
Hydrocod | http://codebetter.com/blogs/jeremy.miller/archive/2008/07/09/classes-that-show-up-in-every-project.aspx | crawl-002 | refinedweb | 1,257 | 55.13 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On Jan 3, 2007, at 2:29 PM, Martin v. Löwis wrote:
Guido van Rossum schrieb:
Maybe this should be done in a more systematic fashion? E.g. by giving all "internal" header files a "py_" prefix?
Yet another alternative would be to move all such header files into a py/ directory, so you would refer to them as
#include "py/object.h"
Any preferences?
I think I prefer this, although I'd choose "python/object.h" just for explicitness. But if you go with a header prefix, then the shorter "py_" is fine.
FWIW, I tried to do a quick grep around some of our code and I found that the only "internal" header we include is structmember.h. Why is that not part of Python.h again?
- -Barry | https://mail.python.org/archives/list/python-dev@python.org/message/NSGZY43YGZZIGZLSVWPQHNLGP4YZRGGJ/ | CC-MAIN-2021-39 | refinedweb | 136 | 75.5 |
But for the Microsoft C/C++ compiler, it will require some minor configuration before it can be used inside the IDE.
Here the two steps required to get run the Microsoft C/C++ compiler working inside the Zeus IDE.
Step 1: Create a Desktop Shortcut to Zeus Batch File
Use the mouse right click button on the desktop an then use the New, Shortcut menu to create a shortcut with the following details:
Target: "C:\Program Files (x86)\Zeus\ze.cmd"
Run: Minimized
Icon: Hit the Change Icon button, browse to "C:\Program Files (x86)\Zeus\zeus.exe" and select the Zeus icon.
Step 2: Test Everything is Working
By double clicking on the new shortcut above, you will have a new Zeus session that is correctly configured to run the Microsoft C/C++ compiler.
To test that configuration, create the following test.cpp file:
Now using the Compiler, Compile menu should result in this compiler output:
Code: Select all
#include <iostream> using namespace std; int main(int argc, char *argv[]) { std::cout << "Hello, World!" << endl << endl; return 1; }
This will have also resulted in the creation of a working test.exe executable file:This will have also resulted in the creation of a working test.exe executable file:Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86
test.cpp
Microsoft (R) Incremental Linker Version 9.00.30729.01
/out:test.exe
test.obj
user32.lib
Cheers JussiCheers JussiC:\temp>test.exe
Hello, World!
C:\temp>
** Extra Debug Step if Required **
The batch file from above will only work if the vsvars32.bat file was located in the default Microsoft installer locations which may not always be the case.
Luckily the location of this batch file can be easily found from inside Zeus using the Tools, DOS Command Line menu and entering the following command:
Running this command should result in the following output:
Code: Select all
dir "$MsVcVarsPath"
You will notice the results of the search locate the vsvars32.bat file in the the C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools folder.You will notice the results of the search locate the vsvars32.bat file in the the C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools folder.Directory of C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools
29/07/2009 14:03 <DIR> .
29/07/2009 14:03 <DIR> ..
29/07/2009 14:02 <DIR> 1033
28/04/2005 18:04 11,197 AtlTraceTool8.chm
08/11/2007 08:19 75,272 AtlTraceTool8.exe
29/07/2009 14:00 <DIR> Deployment
08/11/2007 08:19 45,568 errlook.exe
02/08/2002 15:50 7,427 errlook.hlp
07/11/2007 12:01 31,744 guidgen.exe
08/11/2007 08:19 51,704 gutils.dll
08/11/2007 08:19 27,640 makehm.exe
19/10/2004 14:34 115,559 spyxx.chm
08/11/2007 08:19 631,800 spyxx.exe
08/11/2007 08:19 161,280 spyxxhk.dll
29/07/2009 13:59 <DIR> Templates
08/11/2007 08:19 21,504 uuidgen.exe
30/08/2007 15:31 1,748 vcvars.txt
29/07/2009 13:59 <DIR> VDT
29/07/2009 14:06 2,257 vsvars32.bat
13 File(s) 1,184,700 bytes
6 Dir(s) 50,424,692,736 bytes free
This ties in with the code found in the batch file as shown below:
If you find the vsvars32.bat file in a different, non-default folder location then just edit the ze.cmd batch to suit
Code: Select all
REM Microsoft Visual Studio 2008 :VS2008 call "C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat" goto VS2005 goto Zeus
| http://www.zeusedit.com/phpBB3/viewtopic.php?p=4936 | CC-MAIN-2018-13 | refinedweb | 627 | 68.26 |
Let's consider what "fixes" we may do to a photo.
- JPEG compression level - Introduces artifacts, colors may change a little, banding, file size.
- De-noise - Reduces noise, generally reduces filesize a little, removes some detail/micro-contrast.
- Exposure/color correction - Changes average brightness of image channel (red/green/blue). Color correction changes the ratio of one channel to another over samples of the image
- Scale - Changing the size of the image means we have to look at areas of the image proportional to the width and height.
- HDR - HDR I think is misunderstood. The term means capturing more stops of light than normally permitted my a digital camera. The act of displaying that many levels of light on a screen which only shows eight stops of light per channel is tone mapping. Tone mapping can be done in many ways, and can be a type of local contrast manipulation - i.e. making a dark area a little brighter to show details there better. Unlike simply lowering the contrast of the entire image which will effectively do the same thing, local contrast manipulation means a solid dark or bright area, may not be the same brightness throughout. e.g. a dark area surrounded by a bright area - the dark area may be brighter overall, but not uniformly so, in order to keep contrast high at the border to the brighter area of the image. More on playing with this in a future post ;)
- Cropping - this can be quite difficult or at least intensive to check.
So, looking at these, we can rule out
- file-size and checksums
- image size and aspect ratio checks
- pixel by pixel comparisons - noise and jpeg artifacts will render this unusable
One check that can be made is relative average comparisons between clumps of pixels and the overall average. If the clump of pixels is done relative to the change in width and height, then even re-sized images can be found.
Assuming a picture is divided by a grid into sections. The average brightness of each section is then compared to the overall average (add red, green, blue values of each pixel). If the section is brighter than overall average, call it '1', if darker it's '0'. If it's within a threshold, a third value can be used.
I've compared some pictures and added them below.
The first and third are the same - just color and exposure corrections, some curves used. The picture in the center is from a slightly different position and the position of my baby's head is different. The transparent yellow/green overlays highlight the sections where the local:global exposure ratio is different.
Writing out the flags for the exposure checks,
Picture 1:
00000x010000x11101000001111100000000010000000x0x00000000111001110010x11100011111x000001111x110111011
Picture 2:
00000x0100001111010000x1111100001000010000000x0x0000000011100111001011110x011111x000001111x110111011
Picture 1-color corrected:
00000001000001111100000011110000010000000000010000000011000011110001110111110111xx000001x1x110111011
I'm using a 5% threshold value - i.e. if the difference between local and global exposure of a tile is under 5%, that "bit" is marked "x" and not compared against the alternate image string.
From the image, the colored sections show the tiles which have been found to be different. As we can see the color corrected image only had a slight difference. This can be considered within a threshold and added to a collection of similar images for later review.
Here's what the python code for the image comparison looks like.
Requirements:
Python, Numpy, PIL (or pillow).
___________________________________________________
from PIL import Image
import numpy
#------------------------------ DEFINED CONSTANTS ----------------------------
grid=10 # 10x10 dot sample
debugger=0
unclear=1 # permit use of 'x' in hash string if variance threshold is met
localcontrast=0
unclearvariance=0.05
#------------------------------ DEFINED CONSTANTS ----------------------------
def debug(string):
global debugger
if (debugger==1):
print(string)
def imgsum(filename):
def fuzzsamxy(px,py,prx,pry): # at x,y, get average in radius pr
cntx = px-prx
psum = 0
ptot = 0
x1=max(px-prx,0)
x2=min(px+prx+1,ax) #quirk: operation.array[a:b] apparently operates on indexes a to b-1
y1=max(py-pry,0)
y2=min(py+pry+1,ay)
pavg=int(3*numpy.average(im1arr[x1:x2,y1:y2]))
return pavg
global grid
#read image
debug("Opening "+filename)
img_a = Image.open(filename)
im1arr = numpy.asarray(img_a)
ax,ay,colors=im1arr.shape
debug("Size of image"+str([ax, ay]))
debug("Determining average brightness")
avg = int(3*numpy.average(im1arr))
debug("Grid comparison")
cx=0
signature=""
radius=(ax//(grid*4))
debug("radius of :"+str(radius))
while cx < grid:
cy = 0
while cy < grid:
chkpntx=int((float(cx)+0.5)*ax)//grid
chkpnty=int((float(cy)+0.5)*ay)//grid
radx=ax//(grid*2)
rady=ay//(grid*2)
if (localcontrast==1):
avg = fuzzsamxy(chkpntx,chkpnty,radx,rady) #get sample about chkpntx-chkpnty#get sample about chkpntx-chkpnty
sampavg = fuzzsamxy(chkpntx,chkpnty,min(1,radx//8),min(1,rady//8))
if float(abs(sampavg-avg))/avg < unclearvariance and unclear==1:
signature=signature+"x"
else:
if sampavg > avg:
signature=signature+"1"
else:
signature=signature+"0"
cy = cy + 1
cx = cx + 1
return(signature)
def stringdiffs(str1, str2):
if len(str1) != len(str2):
return -1
a=0
cum=0
while a<len(str1):
if str1[a] != str2[a]:
if (str1[a] != "x" and str2[a] != "x"):
cum = cum + 1;
a = a + 1
return [cum,(float(cum)/len(str1))]
########################## MAIN HERE ################################
# replace with your own pictures
print("identifying image 1")
id1=imgsum('1.jpg')
print("identifying image 3")
id3=imgsum('1b.jpg')
print("identifying image 4")
id4=imgsum('2.jpg')
# output the ID strings - these "hash" values are simple strings
print (id1)
print (id3)
print (id4)
# string diffs(str1, str2) will compute the total number of differences and the fraction of the total checks)
print("ID differential similar stuff :"+str(stringdiffs(id1,id3)))
print("ID differential little different :"+str(stringdiffs(id1,id4)))
___________________________________________________________
And here's the output:
>pythonw -u "imageiden2b.py"
identifying image 1
identifying image 3
identifying image 4
00000x010000x11101000001111100000000010000000x0x00000000111001110010x11100011111x000001111x110111011
00000x0100001111010000x1111100001000010000000x0x0000000011100111001011110x011111x000001111x110111011
00000001000001111100000011110000010000000000010000000011000011110001110111110111xx000001x1x110111011
ID differential similar stuff :[1, 0.01]
ID differential little different :[18, 0.18]
>Exit code: 0 Time: 1.896
Averaging 0.63 seconds per image isn't bad considering the image is fully loaded, and all pixels are used in averages. Numpy is extremely efficient at speeding this up. Accessing individual pixels, instead of Numpy built in array averages is several times slower.
From this point, it's pretty simple to create a main function that will output filename, image-hash-string if passed an image, so it's trivial to use this to get a list of hash strings.. | http://beomagi.blogspot.com/2013/04/ | CC-MAIN-2017-39 | refinedweb | 1,081 | 54.42 |
... learn easily in a day:
Download and install
java
In this section, you
the day number within a current year.
e.g. The first day of the year has value 1...
Find the Day of the Week
This example finds the specified date of an year and
a day
How to create LineDraw In Java
How to create LineDraw In Java
Introduction
This is a simple java program . In this section, you
will learn how to create Line Drawing. This program implements a line
Depth-first Polymorphism - tutorial
Depth-first Polymorphism
2001-02-15 The Java Specialists' Newsletter [Issue 009] - Depth-first Polymorphism
Author:
Dr. Heinz M. Kabutz
If you...
Getting Previous, Current and Next Day Date
Getting Previous, Current and Next Day Date
In this section, you will learn how to get previous,
current and next date in java. The java util package provides
learn
learn how to input value in java
Learn Java online
, NetBeans), about Java and creating his/her first Hello World
program in Java... can learn at his/her pace.
Learning Java online is not difficult because every...
Runtime Environment (JRE) is required to run Java program and Java-based
websites
Conditions In Java Script
Conditions In Java Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
About JavaScript
Java Problem Statement
You are required to write a program... given, then your program must create the expression: 8 - 5 * 2 = -2 Here..., thus expression 2-2+2 evaluates to 2 and not -2
Input Specification
First line Script With Links and Images
Java Script With Links and Images
In this article you learn the basics of JavaScript and
create your first JavaScript program.
JavaScript Images
Create a Desktop Pane Container in Java
Create a Desktop Pane Container in Java
In this section, you will learn how to create a desktop
pane container in Java. The desktop pane container is a container, which has
Looping In Java Script
Looping In Java Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
What is JavaScript loop?
The JavaScript loops used to execute the same block or code a specified number
OOP Tutorial [first draft]
Java: OOP Tutorial [first draft]
Table of contents
Introduction....
Using the constructor
Here is the first program rewritten to use the above class....
These notes are about programming and Java language features necessary
for OOP, and
Hello world (First java program)
Hello world (First java program)
.... Hello world program is the first step of java programming
language... to develop the robust application. Java application program is
platform independent
Navigation with Combo box and Java Script
Navigation with Combo box and Java
Script
In this article you learn the basics of JavaScript and
create your first JavaScript program.
What is JavaScript
Java Create Directory - Java Tutorial
Java Create Directory - Java Tutorial
In the section of Java Tutorial you will learn how to
create directory using java program. This program also explains
Java How to Program
Java program on your computer and also how to write and run your first java... the program with the Java executable.
Here is the first Java Hello World code... with the
tradition Java Hello World app. But first take a look at the various
requirements
Java program? - Java Beginners
Java program? In order for an object to escape planet's.... The escape velocity varies from planet to planet.Create a Java program which calculates the escape velocity for the planet. Your program should first prompt
Create Layout Components in a Grid in Java
Create Layout Components in a Grid in Java
In this section, you will learn how to create layout components
with the help of grid in Java Swing. The grid layout provides
First Hibernate Application
First Hibernate Application
In this tutorial you will learn about how to create an application of Hibernate 4.
Here I am giving an example which... : At first I have created a table named person in
MySQL.
CREATE TABLE `person
to learn java
to learn java I am b.com graduate. Can l able to learn java platform without knowing any basics software language.
Learn Java from the following link:
Java Tutorials
Here you will get several java tutorials EE or Java
should first learn Java and then JEE.
Tutorials to Learn Java
Java Index...What to learn - Java EE or Java?
As a beginner if you are looking... Java correctly. So, let's
first understand about different distribution
Learn java
Learn java Hi,
I am absolute beginner in Java programming Language. Can anyone tell me how I can learn:
a) Basics of Java
b) Advance Java
c) Java frameworks
and anything which is important.
Thanks
java program
java program Write a program to create an applet and display
The message "welcome to java a JRadioButton Component in Java
Create a JRadioButton Component in Java
In this section, you will learn how to create a radio
button in java swing. Radio Button is like check box. Differences between check
Sum of first n numbers
Sum of first n numbers i want a simple java program which will show the sum of first
n numbers....
import java.util.*;
public class SumOfNumbers
{
public static void main(String[]args){
Scanner input=new
What is JavaScript? - Definition
the basics of JavaScript and
create your first JavaScript program.
What... JavaScript Program
In the first lesson we will create very simple
JavaScript program...;head>
<title>First Java Script</title>
<script
Create a Frame in Java
Create a Frame in Java
Introduction
This program shows you how to create a frame in java AWT package. The frame in java works like the main window where
java program
java program write a program to create text area and display the various mouse handling events
First Program - Do Nothing
Prev: none | Next: Dialog Box Output
Java NotesFirst Program - Do Nothing
Here is just about the smallest legal program you can write.
It starts up, does...
// Description: This is the smallest program. It does NOTHING.
// File: doNothing
Java Program MY NAME
Java Program MY NAME Write a class that displays your first name vertically down the screen where each letter uses up to 5 rows by 5 columns...() { }, Then, method main should create an object of your class, then call the methods
java program
java program write a program to create server and client such that server receives data from client using BuuferedReader and sends reply to client using PrintStream and xml problem. plz see this 1 first - XML
java and xml problem. plz see this 1 first hi, i need to write a java program that generates an xml file as follows:
xxx...
]]>
s
i have witten a program in java
Class Average Program
;
This is a simple program of Java class. In this
tutorial we will learn how to use java program for displaying average value. The
java instances....
Description this program
Here in program we are going to use Class. First
Learn Features of Spring 3.0
. The Spring 3.0 Framework is released with the support of
Java 5. So, you can use all the latest features of Java 5 with Spring 3
framework.
The first... and released to simplify the development of
Enterprise Java applications
How to Learn Java
,
how to learn Java? Learning Java is not
that difficult just the right start is needed. The best way to learn Java today
without spending money in through... Java training program in the market is provided by
Roseindia. This online
Java get Next Day
Java get Next Day
In this section, you will study how to get the next day in java...()
provide the string of days of week. To get the current day, we have used
First Window
Java NotesExample - First Window
This is about the simplest GUI... program must look like this.
JFrame is the Java class for a "window..., and will appear in the
top left corner of the screen, so you may not see it at first
program
program Create a class called Employee with member variables employeeId, employeeDept and employeeSalary.
Create a method called setEmployeeDetails with three parameters to set the employee details.
Create another method
java program
java program Problem 1
Write a javaScript program that would input Employee Name, rate per hour, No. of hours worked and will compute the daily wage... I received already the answer last day thanks for the help...Plz answer
How to Java Program
for the respective operating system.
Java Program for Beginner
Our first...
How to Java Program
If you are beginner in
java , want to learn and make career in the Java
Create a Scroll Pane Container in Java
Create a Scroll Pane Container in Java
In this section, you will learn how to create a scroll
pane container in Java Swing. When you simply create a Text Area
Create File in Java
;}
}
First, this program checks, the specified file "myfile.txt"
is exist...
Create a File
... in string, rows, columns and lines etc.
In this section, we will see
how to create
java program
java program Create a washing machine class with methods as switchOn, acceptClothes, acceptDetergent, switchOff. acceptClothes accepts the noofClothes as argument & returns the noofClothes
Java Applet - Creating First Applet Example
Java Applet - Creating First Applet Example
... the applet. An applet is a program written in java
programming language... an applet program. Java source of
applet is then compiled into java class file and we
String Number Operations in JavaScript
;
In this article you learn the basics of JavaScript and
create your first JavaScript program.
What is String Number... program'.toUpperCase());
document.write(' first JavaScript program
How to learn Java?
to
learn Java language than there are two ways you can do that. One is the
classroom... training.
A young developer who is seeking a career in Java can learn the language...?
Downloading JDK (Java)
Writing Hello World Java program
java program
java program . Create Product having following attributes: Product ID, Name, Category ID and UnitPrice. Create ElectricalProduct having the following additional attributes: VoltageRange and Wattage. Add a behavior to change
Create a ToolBar in Java
Create a ToolBar in Java
In this section, you will learn how to create toolbar in java... been arranged
horizontal in this program but, if you
want to make it vertically
program help - Java Beginners
program help In the following code, I want to modify class figure...,
Abstract Class
In java programming language, abstract classes are those... are not instantiated directly. First extend the base class and then instantiate
Create a JSpinner Component in Java
Create a JSpinner Component in Java
In this section, you will learn how to create... are used to increase or
decrease the numeric value.
This program provides | http://www.roseindia.net/tutorialhelp/comment/81645 | CC-MAIN-2014-10 | refinedweb | 1,789 | 63.59 |
What is Breadth First Search?
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph) and explores the neighbor nodes first, before moving to the next level neighbors. Compare BFS with the equivalent, but more memory-efficient iterative deepening depth-first search and contrast with depth-first search.. Breadth First Traversal of the following graph is 2, 0, 3, 1.
Following is the Program to implement Breadth first search C++ with explanation.
PROGRAM:
#include <iostream.h> #include <conio.h> #define MAX_NODE 50 struct node{ int vertex; node *next; }; node *adj[MAX_NODE]; //For storing Adjacency list of nodes.int totNodes; //No. of Nodes in Graph.////////////Queue Operation\\\int queue[MAX_NODE],f=-1,r=-1; void q_insert(int item){ r = r+1; queue[r]=item; if(f==-1) f=0; } int q_delete(){ int delitem=queue[f]; if(f==r) f=r=-1; else f=f+1; return(delitem); } int is_q_empty(){ if(f==-1) return(1); elsereturn(0); } ////////////Queue Operation\\\void createGraph(){ node *newl,*last; int neighbours,neighbour_value; cout<<"nn---Graph Creation---nn"; cout<<"Enter total nodes in graph : "; cin>>totNodes; for(int i=1;i<=totNodes;i++){ last=NULL; cout<<"nEnter no. of nodes in the adjacency list of node "<<i<<"n"; cout<<"--> That is Total Neighbours of "<<i<<" : "; cin>>neighbours; for(int j=1;j<=neighbours;j++){ cout<<"Enter neighbour #"<<j<<" : "; cin>>neighbour_value; newl=new node; newl->vertex=neighbour_value; newl->next=NULL; if(adj[i]==NULL) adj[i]=last=newl; else{ last->next = newl; last = newl; } } } } void BFS_traversal(){ node *tmp; int N,v,start_node,status[MAX_NODE];//status arr for maintaing status.constint ready=1,wait=2,processed=3; //status of node. cout<<"Enter starting node : "; cin>>start_node; //step 1 : Initialize all nodes to ready state.for(int i=1;i<=totNodes;i++) status[i]=ready; //step 2 : put the start node in queue and change status. q_insert(start_node); //Put starting node into queue. status[start_node]=wait; //change it status to wait state.//step 3 : Repeat until queue is empty.while(is_q_empty()!=1){ //step 4 : Remove the front node N of queue.//process N and change the status of N to//be processed state. N = q_delete(); //remove front node of queue. status[N]=processed; //status of N to processed. cout<<" "<<N; //displaying processed node.//step 5 : Add to rear of queue all the neighbours of N,//that are in ready state and change their status to//wait state. 
tmp = adj[N]; //for status updation.while(tmp!=NULL){ v = tmp->vertex; if(status[v]==ready){//check status of N's neighbour. q_insert(v); //insert N's neighbour who are in ready state. status[v]=wait; //and make their status to wait state. } tmp=tmp->next; } } } void main(){ clrscr(); cout<<"*****Breadth First Search Traversal*****n"; createGraph(); cout<<"n===BFS traversal is as under===n"; BFS_traversal(); getch(); } | http://proprogramming.org/breadth-first-search-program-in-c/ | CC-MAIN-2017-09 | refinedweb | 481 | 59.5 |
Here’s all the code you need to receive an SMS message and to send a response using Python, Flask, and Twilio:
from flask import Flask, request from twilio import twiml app = Flask(__name__) @app.route('/sms', methods=['POST']) def sms(): number = request.form['From'] message_body = request.form['Body'] resp = twiml.Response() resp.message('Hello {}, you said: {}'.format(number, message_body)) return str(resp) if __name__ == '__main__': app.run()
If you’d like to know how that works, check out this short video:
Can you walk me through this step by step?
When someone texts your Twilio number, Twilio makes an HTTP request to your app. Details about that SMS are passed via the request parameters. Twilio expects an HTTP response from your web app in the form of TwiML, which is a set of simple XML tags used to tell Twilio what to do next.
First make sure you set your local environment up and have a directory where the code will live.
Open your terminal and install Flask, the popular micro web framework, which we’ll use to receive Twilio’s request:
pip install flask
Install the Twilio Python library to generate the response TwiML:
pip install twilio
Create a file called
app.py, and import the Flask and request objects from the Flask library. Also import the Twilio Python library and initialize a new Flask app:
from flask import Flask, request from twilio import twiml app = Flask(__name__)
We need a route to handle a post request on the message endpoint. Use the
@app.route decorator to tell our app to call the sms function whenever a
POST request is sent to the ‘/sms’ URL on our app:
@app.route('/sms', methods=['POST']) def sms():
Details about the inbound SMS are passed in the form encoded body of the request. Two useful parameters are the phone number the SMS was sent
From and the
Body of the message:
def sms(): number = request.form['From'] message_body = request.form['Body']
Next we’ll use the Twilio library to create a TwiML <Response> that tells Twilio to reply with a <Message>. This message will echo the phone number and body of the original SMS:
resp = twiml.Response() resp.message('Hello {}, you said: {}'.format(number, message_body)) return str(resp)
And don’t forget to tell the app to run:
if __name__ == '__main__': app.run()
In your terminal, start the server which will listen on port 5000:
python app.py
But how does Twilio see our app?
Our app needs a publicly accessible URL. To avoid having to deploy every time we make a change, we’ll use a nifty tool called ngrok to open a tunnel to our local machine.
Ngrok generates a custom forwarding URL that we will use to tell Twilio where to find our application. Download ngrok and run it in your terminal on port 5000
./ngrok http 5000
Now we just need to point a phone number at our app.
Open the phone number configuration screen in your Twilio console. Scroll down to the “a message comes in” field. You should see something like this:
Punch in the URL for our message route that was generated by ngrok. It should look something like.
Click save, then text your number to get a response!
Next steps
To recap, when the text hits Twilio, Twilio makes a request to our app, and our app responds with TwiML that tells Twilio to send a reply message.
If you’d like to learn more about how to use Twilio and Python together, check out:
Feel free to drop me a line if you have any question or just want to show off what you built:
- Twitter: @Sagnewshreds
- Github: Sagnew
- Twitch (streaming live code): Sagnewshreds | https://www.twilio.com/blog/how-to-receive-and-respond-to-a-text-message-with-python-flask-and-twilio-html | CC-MAIN-2019-51 | refinedweb | 622 | 72.36 |
background(r, g, b, a=1.0) background(h, s, b, a=1.0) background(c, m, y, k, a=1.0) background(k, a=1.0) background(color) background(None) # transparent backdrop background(*colors, angle, steps=[0,1]) # axial gradient background(*colors, steps=[0,1], center=[0,0]) # radial gradient
Sets the canvas background color using the same syntax as the fill(), stroke(), and color() commands. You can set the background to transparent by supplying
None as its sole parameter (making it easier to import into Photoshop or Illustrator).
You can set the background to a gradient by passing more than one color value. For example
background('#600', 'white') will draw a radial gradient ranging from dark red to white. Specifying an
angle will render a linear gradient at that orientation. The optional
steps parameter should be the same length as the number of colors in the gradient. Each entry defines the center of its corresponding color in the gradient in relative values ranging from 0–1.
background(.2)
fill(1)
rect(10,10, 50,50)
clear() # erase the canvas clear(all) # erase the canvas and reset drawing state clear(*grobs) # remove specific objects from the canvas
Erases any prior drawing to the canvas and can optionally reset the graphics state (transform, colors, and compositing) when called with the
all argument. Calling clear() with one or more references to previously-drawn objects will remove just those objects without otherwise affecting the drawing state.
r = rect(0,0, 100,10)  # add a rectangle
t = poly(50,50, 25)    # add a square
c = arc(125,125, 50)   # add a circle
clear(r, c)            # remove the rectangle & circle
...  # draw to the canvas
export("spool.pdf", cmyk=False)
with export("movie.mov", fps=30, bitrate=1.0):
    ...  # draw movie frames
with export("anim.gif", fps=30, loop=0):
    ...  # draw gif frames
The export() command allows you to generate images and animations from within your scripts. Calling export() as part of a
with statement will allow you to render single images, multi-page sequences, or animations.
When running scripts inside the PlotDevice application you can use the app's export commands instead, but for standalone scripts that import the
plotdevice module, the export() command is the main avenue for generating graphics files.
You can call export() at any time to write the current set of graphics objects on the canvas to a single bitmap or vector image file. The only required argument is a file path (whose extension will determine the format). The optional
cmyk argument can be set to
True to use ‘process’ colors when exporting to a PDF, EPS, or TIFF file.
The export() command returns a context manager that takes care of canvas-setup and file-generation for both single images and animations. By enclosing your drawing code in a
with block, you can ensure that the correct sequence of clear() and export() calls is made automatically.
For instance these two methods of generating a PNG are functionally equivalent:
clear(all)
...  # (do some drawing)
export('output.png')

# let the context manager handle clearing and saving the canvas automatically
with export('output.png'):
    ...  # (do some drawing)
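To see why the two spellings are equivalent, it helps to picture what a clear-on-entry/save-on-exit context manager does. The class below is a stripped-down, purely hypothetical model of that pairing — an illustration of the pattern, not PlotDevice's actual implementation:

```python
# A simplified, hypothetical model of export()'s context manager --
# it only records the order of operations, it draws nothing.
class FakeExport:
    def __init__(self, path):
        self.path = path
        self.events = []

    def __enter__(self):
        self.events.append('clear')  # wipe the canvas on entry
        return self

    def __exit__(self, *exc):
        self.events.append('save:%s' % self.path)  # write the file on exit
        return False

with FakeExport('output.png') as job:
    job.events.append('draw')  # stands in for your drawing commands

print(job.events)  # → ['clear', 'draw', 'save:output.png']
```

The drawing code in the `with` block is always bracketed by a clear and a save, which is exactly what the manual clear()/export() sequence spells out by hand.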
If you specify a filename ending in
mov (or
gif if you also pass a
loop or
fps argument), the export() command will begin a multi-frame animation and return an object to help you coordinate things. You can ‘capture’ this object and give it a name (for instance,
movie) using Python’s ‘with … as …’ syntax.
Each time you call the
movie’s add() method, a new frame with the contents of the canvas will be added to the end of the animation. Once you’ve added the
movie’s final frame, you must call the finish() method to wait for the video encoder’s background thread to complete its work.
As with the single-image version of the export() call, you can use the
with statement in your code to tidy up some of the frame-drawing boilerplate. All three examples below are equivalent. Note the use of the
movie object’s frame property (which is itself a context manager) in the final example:
# export a 100-frame movie
movie = export('anim.mov', fps=50, bitrate=1.8)
for i in xrange(100):
    clear(all)   # erase the previous frame from the canvas
    ...          # (do some drawing)
    movie.add()  # add the canvas to the movie
movie.finish()   # wait for i/o to complete
# export a movie (with the context manager finishing the file when done)
with export('anim.mov', fps=50, bitrate=1.8) as movie:
    for i in xrange(100):
        clear(all)   # erase the previous frame from the canvas
        ...          # (do some drawing)
        movie.add()  # add the canvas to the movie
# export a movie (with the context manager finishing the file when done)
# let the movie.frame context manager call clear() and add() for us
with export('anim.mov', fps=50, bitrate=1.8) as movie:
    for i in xrange(100):
        with movie.frame:
            ...  # draw the next frame
If you’re generating a series of static images, export() will automatically give them consecutive names derived from the filename you pass as an argument. If the filename is a simple
"name.ext" string, the sequence number will be appended with 4 characters of padding (
"name-0001.ext",
"name-0002.ext", etc.).
If the filename contains a number between curly braces (e.g.,
"name-{4}.ext"), that substring will be replaced with the sequence number and zero padded to the specified number of digits:
# export a sequence of images to output-0001.png, output-0002.png, ...
#   output-0099.png, output-0100.png
with export('output.png') as img:
    for i in xrange(100):
        with img.frame:
            ...  # draw the next image in the sequence

# export a sequence of images to 01-img.png, 02-img.png, ...
#   99-img.png, 100-img.png
with export('{2}-img.png') as img:
    for i in xrange(100):
        with img.frame:
            ...  # draw the next image in the sequence
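The naming rule above can be sketched in plain Python. The `frame_name` helper below is hypothetical — it illustrates the padding behavior described in the text, and is not part of PlotDevice's API:

```python
import re

def frame_name(template, n):
    """Sketch of the sequence-naming rule: a {width} placeholder is
    zero-padded to that width; otherwise a 4-digit number is appended
    before the extension. Illustration only, not PlotDevice's source."""
    m = re.search(r'\{(\d+)\}', template)
    if m:
        # "name-{4}.ext": replace the braces with a zero-padded number
        return template.replace(m.group(0), str(n).zfill(int(m.group(1))))
    # plain "name.ext": append the number with 4 digits of padding
    stem, _, ext = template.rpartition('.')
    return '%s-%04d.%s' % (stem, n, ext)

print(frame_name('output.png', 2))    # → output-0002.png
print(frame_name('{2}-img.png', 99))  # → 99-img.png
```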
Creating PDF documents works the same way, letting you either clear(), add(), and finish() the export manually or take advantage of the
with statement to hide the repetitive bits. Note that PDF exports use the
page attribute rather than
frame:
# export a five-page pdf document pdf = export('multipage.pdf') for i in xrange(5): clear(all) # erase the previous page's graphics from the canvas ... # (do some drawing) pdf.add() # add the canvas to the pdf as a new page pdf.finish() # write the pdf document to disk # export a pdf document more succinctly with export('multipage.pdf') as pdf: for i in xrange(5): with pdf.page: ... # draw the next page
geometry(units)
Sets the units for angles supplied to drawing commands. By default this is
DEGREES, but you can also use either
RADIANS or
PERCENT. The rotate() command will use whatever geometry-unit you select, as will the Point object’s methods and the
angle argument used when creating Gradients.
# turn the canvas 1/4 of the way around using radians geometry(RADIANS) rotate(pi/2) # do the opposite using degrees geometry(DEGREES) rotate(-90)
plot(grob)
The plot() command will draw any PlotDevice primitive to the canvas. It’s useful in combination with the
plot argument that can be passed to poly(), arc(), text(), image(), and friends.
When you call a drawing-related function with
plot=, the object is created but not added to the canvas. You can save a reference to the object in a variable then pass it to plot() later to be drawn as-is or with additional styling in its keyword args.
False
The plot() command can also be used as part of a
with statement to control whether drawing commands affect the canvas by default. For instance, you could suspend drawing inside an indented code block with:
with plot(False): p = poly(10,10, 30) # won't be drawn
This allows you to omit the
plot=False argument in the individual commands, which can be convenient when creating a bunch of ‘template’ objects ahead of time.
# create a shape (but don't draw it immediately) r = rect(20,20,40,40, plot=False) ... # draw the saved shape (but override the canvas's fill color) plot(r, fill='red')
# the plot keyword arg prevents this from being drawn o = oval(0,0,100,100, plot=False) # the plot() command disables drawing for the entire block with plot(False): o = oval(0,0,100,100) # not drawn s = rect(100,100,10,10) # same here
size(width, height, unit=px)
Sets the size of the canvas using your preferred measurement scale. If called with just
width &
height values, the canvas will default to using PostScript points for measurement (a.k.a.
px).
If this command is used, it should generally be called at the very beginning of the script. An important exception to this is the use of the export() command as part of a
with statement. In these cases it’s perfectly valid to set the size as the first line of the export-block.
If the optional
unit arg is included, it should be one of:
px,
pica,
inch,
cm, or
mm. Setting the canvas’s ‘default unit’ in this manner causes all subsequent drawing commands to be interpreted at that scale. The unit types can also be used to multiply values from other systems to the canvas’s units. For example
8.5*inch will always be the width of a letter page (regardless of the canvas’s unit).
The dynamic variables
WIDTH and
HEIGHT can be used to obtain the canvas size.
size(20, 20, cm) # the canvas is 20 x 20 cm print 2*inch >>> 5.079992238900746
speed(fps)
Sets the frame-rate for previewing animations in the application. The
fps argument specifies the maximum speed of the animation in frames per second. Note that this is only a maximum and complex animations will likely update less frequently. Calling speed() only makes sense if your script has been written as an animation.
In an animation, your drawing code does not live in the top-level of the script, but instead is factored into a trio of commands that you define yourself: setup(), draw(), and stop(). Your setup() command is called once at the beginning of a run, and stop() is called once the run is halted. The draw() command gets called repeatedly (with the canvas being cleared between calls).
The global variable called
FRAME will be incremented before every draw() and can be used in your code to track the passage of ‘time’.
speed(30) def setup(): # initialize variables, etc. pass def draw(): # draw the next frame in the animation pass def stop(): # gather up accumulated data, print summary info, etc. pass
pi, tau
trig quantities equal to a half- and full-circle respectively
CENTER, CORNER
transformation origins used by transform()
DEGREES, RADIANS, PERCENT
units understood by geometry()
MITER, ROUND, BEVEL
path line-join styles set by pen(join=…)
BUTT, ROUND, SQUARE
path end-cap styles set by pen(cap=…)
FRAME, PAGENUM
the current iteration count in an animation or multi-page export (respectively). The first iteration will be
1 and the counter increments from there.
MOUSEX, MOUSEY, mousedown
mouse events
KEY_UP, KEY_DOWN, KEY_LEFT, KEY_RIGHT, KEY_BACKSPACE, KEY_TAB, KEY_ESC
keyboard events
halt()
Immediately ends an animation run.
Animations will typically run until you halt them manually using thekeyboard shortcut. The halt() command lets you ‘bail out’ of an animation from code. This can be quite useful during debugging, but we discourage using it in scripts that you share with others.
def draw(): if FRAME==100: halt()
outputmode(mode)
Changes the way colors are displayed. While the colormode() command specifies the input of colors, outpumode() specifies the output. By default, the output mode for colors is
RGB, but you can also set it to
CMYK so that exported PDF’s are ready for print. Output in
RGB will look brighter on screen, but for valid PDF documents the colors need to be in
CMYK.
the current output mode (
RGB or
CMYK)
Choosing the output color-mode now happens as part of the application’s dialog box. If you’re calling the export() command directly, you can include a
cmyk=True argument to override the canvas’s default mode. Otherwise exports will default to
RGB.
outputmode(CMYK)
libname = ximport("libname")
The ximport() command is of historical interest to NodeBox users and is no longer necessary in PlotDevice. It allowed for importing ‘Libraries’ that had been installed in the
~/Library/Application Support/PlotDevice directory.
When called with a string whose value matches the name of an installed Library, the Library will be loaded and returned as a module. In addition, a reference to the current ‘graphics context’ is handed to the module, allowing it to change the graphics state, draw to the canvas, etc.
You can assign the module to a variable of any name you choose. Most of the time it makes sense to use the same name as the Library, but you’re free to pick something shorter if it’s too much of a mouthful.
colors = ximport("colors") background(colors.papayawhip()) fill(colors.chocolate()) rect(10, 10, 50, 50) | https://plotdevice.io/ref/Canvas | CC-MAIN-2017-39 | refinedweb | 2,190 | 63.19 |
Hello,
I have been working on this code but I can't seem to make it work. I want to implement function index() that takes as input the name of a text file (as a string) and a list of words. For every word in the list, the function will print the lines in the text file where the word occurs and print the corresponding line numbers (where the numbering starts at 1).
For Example:
index('raven.txt', ['raven', 'mortal', 'dying', 'ghost', 'ghastly', 'evil','demon'])
ghost 9, dying 9, demon 122, evil 99, 106, ghastly 82, mortal 30, raven 44, 53, 55, 64, 78, 97, 104, 111, 118, 120,
So far my code looks like this:
def index(filename, words): infile = open(filename) content = infile.readlines() infile.close() count = {} for word in words: if word in count: count[word] += 1 else: count[word] = 1 for word in count: print('{:12}{},'.format(word, count[word]))
This is counting the words in the text file but I want to count the number of lines in which this words occur. HELP!!! | https://www.daniweb.com/programming/software-development/threads/321178/word-count-in-a-text-file | CC-MAIN-2017-09 | refinedweb | 178 | 73.1 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Nesting Components10:28 with Andrew Chalkley and Ken Howard
We already have a main component. In this video we'll create child components to display our photos.
- 0:00
Components are the course to developing an angular application.
- 0:04
In the previous video, we learned how to add functionality to parts of our
- 0:08
component by binding a function to an event, but
- 0:12
we need to do more with our application if we want to show off photos.
- 0:17
We'll be adding two new components, one for the list of posts, and one for
- 0:21
the post itself, let's dive in.
- 0:24
To start, I'm going to create a new directory inside the app directory.
- 0:29
It will contain all of my photo entry components.
- 0:34
So I'm going to call it Entries.
- 0:39
Angular style guide suggests keeping like components in a parent folder.
- 0:43
For this application it means that all of our components that's specific to entries
- 0:49
will be located in the entries directory.
- 0:51
You can find the link to the style guide in the teacher's notes.
- 0:54
I'll now add the entry Directory and the entry list directory.
- 1:06
In the entry list directory I'll create three component files.
- 1:15
Entry list.components .cs.
- 1:29
Entry-list.component.html and
- 1:38
entry-list.component.css.
- 1:46
Be sure to create all of these files
- 1:50
even if you don't intend on using the stylesheet for this component.
- 1:54
You'll be doing yourself a favor when the time comes when you do want to use it.
- 1:59
In the entry list component.ts file, let's create the base component.
- 2:04
First, you need to import the component decorator from the angular core.
- 2:22
Then we'll add the component class.
- 2:31
Make sure it's exported so we can import it later,
- 2:35
then add the component decorator.
- 2:47
When giving your component a selector, be sure to use all lowercase characters.
- 2:58
And the words are separated with a hyphen.
- 3:05
This is known as kebab case.
- 3:07
It's consistent with html specification.
- 3:10
Angular style guide has more information on naming a selector.
- 3:13
This selector is what we'll be using in our app component's template.
- 3:18
Let's add the templateUrl.
- 3:29
Then the styleUrls.
- 3:34
Set to an array containing the new recreated stylesheet.
- 3:44
Next we need to tell Angular about our new component.
- 3:50
We do this by opening up the app.module.ts and imported the component.
- 4:15
Then, adding it to the module's declarations property
- 4:28
Throughout this course, we'll be adding more components.
- 4:31
I don't really want to continue adding lines to the app.module.ts file for
- 4:36
every new component we create.
- 4:38
Instead I'm going to create what in @angular is called a barrel.
- 4:42
A barrel is a single file that re-exports all components and services for
- 4:47
a feature, it doesn't serve any other purpose than that.
- 4:50
So in the entry's directory, I'll create a new file called index.ts.
- 4:59
In this file I want a line to export everything from the entry list component.
- 5:17
Then at the app modules ts file, I'll change the previous
- 5:23
import statement to be the entry's directory.
- 5:37
Now whenever I add a new component to the entry's directory,
- 5:42
I can access it on this one import.
- 5:45
You can find out more information about barrels in the teacher's notes.
- 5:49
Now, that our entry list component is being referenced in the NG module,
- 5:53
we can start using it throughout our application.
- 5:56
Let's go back to our app.component.html file and mess things up.
- 6:01
We can start by replacing the H2 that we worked so hard on before with a new tag.
- 6:07
We want to add our entry list here.
- 6:10
I'll add the app-entry-list custom element to the page.
- 6:14
This is where Angular's compiler is going to check that it has a reference to our
- 6:18
component.
- 6:19
Let's look in the browser to see what's going on.
- 6:22
An empty pitch.
- 6:23
Just what I'd expect since we haven't added
- 6:26
any content to our entry list template.
- 6:28
Let's quickly demonstrate what happens when you try and
- 6:30
reference a selector Angular doesn't know about.
- 6:33
I'll change up app-entry-list with foo.
- 6:37
And we'll see what happens.
- 6:41
The app failed to load and the console is red.
- 6:45
There's one line that says foo is not a known element.
- 6:49
It tells me how I can go about fixing the issue.
- 6:51
This is super helpful information when something really goes wrong.
- 6:55
But in our case, everything is fine.
- 6:57
I'll change the foo tag back to app-entry-list.
- 7:02
I'll double check, and the browser's happy again, cool, great.
- 7:06
Now, if we take what we learned from the entry-list component,
- 7:09
we'll need to do the same for the entry component.
- 7:12
So in the entry directory,
- 7:16
let's create the three files,
- 7:21
entry.component.ts.
- 7:26
Then entry.component.html.
- 7:32
Then entry.component.css.
- 7:40
To create this component, I'm going to just copy the contents of the entry list.
- 7:49
And paste it into the EntryComponent.ts file.
- 7:54
Then I need to change the name of the component to
- 7:57
EntryComponent And adjust the selector
- 8:06
the template URL, and these styles.
- 8:11
Save the file.
- 8:15
Then to re-export the component in the barrel,
- 8:18
I'll re-open the index.ts file in the entry's folder.
- 8:22
And then add the export line.
- 8:27
Finally I'll reference the new component in the app.module.ts file.
- 8:40
Because we're using a barrel, and I did not need to add a new line.
- 8:43
All I needed to do was include entry component
- 8:46
in the import list from the entry's directory.
- 8:52
Then Add a reference to the declarations property so
- 8:55
Angularjs can use the new component.
- 8:57
Something to note here.
- 8:59
I placed entry component above entry list component in the declarations array.
- 9:05
Angulars compiler reads through each of the components templates looking for
- 9:09
the element.
- 9:10
It doesn't recognize.
- 9:11
Have the compiler reach the entrylist component first, and
- 9:16
entry component hadn't been processed by the compiler,
- 9:19
we'd see an error in the console and the app wouldn't work.
- 9:22
Always put your child components first.
- 9:25
Now let's make sure that we can see both components in the browser.
- 9:29
In the entrylist.component.html I'll add a friendly message.
- 9:33
Hello from entry-list.component.html.
- 9:42
I'll also add the app-entry element, so
- 9:46
I can verify that the entry component is being loaded.
- 9:53
Now if I jump on over to the entry.component.HTML.
- 9:58
Lets add a friendly message in there, hello from entry.component.HTML.
- 10:06
Save the file and we get to the browser and I see.
- 10:13
Both messages are there.
- 10:15
It's working.
- 10:16
Great.
- 10:17
We're really moving along now.
- 10:19
Keep up the great work.
- 10:21
In the next video we'll build up the basic HTML and CSS for entry.
- 10:26
I hope you're as excited as I am. | https://teamtreehouse.com/library/nesting-components | CC-MAIN-2019-43 | refinedweb | 1,429 | 76.32 |
0
Creotex Engine ~ Part 1
Engine Programming Games Creotex
Hello ladies and gentlemen,
Welcome to my journal!
So part 1 of writing an engine. I hope you guys are all so excited as me!
Before I step into DirectX graphics I first want to create a window.
For people who haven't learned DirectX yet, for popping up a window you don't need any DirectX libs.
DirectX will just need the handle and instance of the window and will turn everything to a 3D space.
By the way, didn't mentioned it before, I'm using Unicode!
Example:
#ifdef _UNICODE #define tstring wstring #else #define tstring string #endif
So.. Writing a game engine. How should I start? How should I write? How will my structure look like?
Don't worry already got the idea!
The plan is that my WinMain function creates an Application, which handles the core and game.
This is how it looks like:
int WINAPI _tWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PTSTR pCmdLine, int nCmdShow) { // Terminate if heap is corrupted HeapSetInformation(NULL, HeapEnableTerminationOnCorruption, NULL, 0); // Avoid warnings of unused parameters UNREFERENCED_PARAMETER(hPrevInstance); UNREFERENCED_PARAMETER(pCmdLine); // Allocate the application Application* pApplication = new Application(); // Initialize and run the application if(pApplication->Initialize(hInstance)) { pApplication->Run(); } // Shutdown and deallocate the application SafeDeleteCore(pApplication); return TRUE; }
So I'm Initializing the Application, running if succeeded, and shutting down.
And for those who wonder what "SafeDeleteCore" is:
template<class T> inline void SafeDeleteCore(T &pObject) { if(pObject != nullptr){ pObject->Shutdown(); delete pObject; pObject = nullptr; } }
My application has 4 data members:
core: Window*, System*, Graphics*
game: Game*
I think you all know what to do with those data members when you have 3 methods... So not going to show such simple code....
Ok maybe a bit.. Because all of you are so lovely
This is the Run method:
void Application::Run() { // GameStart m_pGame->GameStart(); MSG msg = {0}; while(msg.message != WM_QUIT) { if(PeekMessage(&msg,NULL,NULL,NULL,PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } else { // GameUpdate m_pGame->GameUpdate(); // GameRender m_pGame->GameRender(); Sleep(1); } } // GameEnd m_pGame->GameEnd(); }
So we got the WinMain, Application.... That was it I think?
Yes everything is done, we have written our engine. Now we can create games like Starcraft!!
Or is it?
No there is more!!!! *screaming* *crying* *beating up the kid next door*
We still need that window on our screen! or maybe if you stare long enough it will magically appear.. You can give it a shot.
So for the window class. A window class needs to store some information about the window so you can easily change something on your screen.
class Window { public: ///////////////////////////////////////////////////// // Constructor(s) & Destructor ///////////////////////////////////////////////////// Window(); virtual ~Window(); ///////////////////////////////////////////////////// // Public Methods ///////////////////////////////////////////////////// bool Initialize(HINSTANCE hInstance); void Shutdown(); void ShowWindow(); LRESULT CALLBACK HandleEvents(HWND, UINT, WPARAM, LPARAM); private: HWND m_hMainWnd; HINSTANCE m_hInstance; tstring m_sWindowTitle; int m_iWindowWidth; int m_iWindowHeight; bool m_bFullscreen; }; static LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
The Window.cpp is the largest piece of code I have at the moment. Here it comes!
bool Window::Initialize(HINSTANCE hInstance) { // Store the instance of the application m_hInstance = hInstance; // Setup the window class with default settings WNDCLASSEX wndclass; wndclass.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC | CS_DBLCLKS; wndclass.lpfnWndProc = WndProc; wndclass.cbClsExtra = 0; wndclass.cbWndExtra = 0; wndclass.hInstance = m_hInstance; wndclass.hIcon = LoadIcon(NULL, IDI_WINLOGO); wndclass.hIconSm = wndclass.hIcon; wndclass.hCursor = LoadCursor(NULL, IDC_ARROW); wndclass.hbrBackground = (HBRUSH)CreateSolidBrush(RGB(255,255,255)); wndclass.lpszMenuName = m_sWindowTitle.c_str(); wndclass.lpszClassName = m_sWindowTitle.c_str(); wndclass.cbSize = sizeof(WNDCLASSEX); // Register the window class if(!RegisterClassEx(&wndclass)) { return false; } // Create new screen settings if fullscreen int posX = 0, posY = = (unsigned long)m_iWindowWidth; dmScreenSettings.dmPelsHeight = (unsigned long)m_iWindowHeight; dmScreenSettings.dmBitsPerPel = 32; dmScreenSettings.dmFields = DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT; ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN); } else { posX = (GetSystemMetrics(SM_CXSCREEN) - m_iWindowWidth) /2; posY = (GetSystemMetrics(SM_CYSCREEN) - m_iWindowHeight) /2; } // Create the window with the screen settings and get the handle to it m_hMainWnd = CreateWindowEx(WS_EX_APPWINDOW, m_sWindowTitle.c_str(), m_sWindowTitle.c_str(), WS_CAPTION | WS_POPUPWINDOW | WS_CLIPSIBLINGS | WS_CLIPCHILDREN | WS_POPUP | WS_MINIMIZEBOX, posX, posY, m_iWindowWidth, m_iWindowHeight, NULL, NULL, m_hInstance, NULL ); if(!m_hMainWnd) { return false; } // Set window as main focus ~ sets higher priority to this thread SetForegroundWindow(m_hMainWnd); SetFocus(m_hMainWnd); return true; } void Window::Shutdown() { // Show the mouse cursor ~ just in case the mouse was hidden in run-time ShowCursor(true); // Fix the display settings if leaving full screen mode if(m_bFullscreen) { ChangeDisplaySettings(NULL, 0); } // Remove the window 
DestroyWindow(m_hMainWnd); m_hMainWnd = NULL; // Remove the application's instance UnregisterClass(m_sWindowTitle.c_str(), m_hInstance); m_hInstance = NULL; } void Window::ShowWindow() { ::ShowWindow(m_hMainWnd, SW_SHOW); ::UpdateWindow(m_hMainWnd); } LRESULT CALLBACK Window::HandleEvents(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) { return DefWindowProc(hWnd, msg, wParam, lParam); } //////////////////////////////////////////////////////////////////////////////// // Window Procedure //////////////////////////////////////////////////////////////////////////////// static LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch(msg) { case WM_DESTROY: PostQuitMessage(0); return 0; case WM_CLOSE: PostQuitMessage(0); return 0; } return ((Window*)GetWindow(hWnd, 0))->HandleEvents(hWnd, msg, wParam, lParam); }
So now we should have a nice window appeared on our screen. If not, initialize you data members!
I initialized my data members on a default size 640*480 and title: Default Window
While I was re-programming my framework I found something cool but strange at the same time..
return ((Window*)GetWindow(hWnd, 0))->HandleEvents(hWnd, msg, wParam, lParam);
This should crash normally but it doesn't.
I never allocated that window, or received a pointer of an existing pointer and still. The HandleEvents() method can be called.
I'm sure I'm opening many eyes right now... Yes have a clear look at the code!
I thought if was the SetFocus(); method at the start but they make no difference at all. Also you can leave out the "static" keyword. Still doesn't crash.
So this is creating new possibilities I think? You don't need a singleton anymore to call the HandleEvents. Or a static global pointer of your own class. Everything can be done now through this method.
So thank you for your time ladies and gentlemen!! Now I'm going to write the system and graphics class for the next entry. So be sure to have a look!
You mentioned this in your post:
The reason this doesn't crash is because in HandleEvents(), you're only calling the default window procedure and not modifying the data within the class object. If you were to modify the data members of the Window class inside HandleEvents(), you'd probably corrupt memory. This is because you're casting the HWND returned from GetWindow() to a pointer to your custom window class, but HWND != Window*. You could probably try breaking it, by setting the window title inside the HandleEvents() method.
Hope this is useful, and I'm looking forward to your future posts!
About the function you are right indeed,I've tried to modify a data member of the class inside the HandleEvents method and my window didn't want to launch. Thanks for the answer!
~EngineProgrammer
All of my previous engines supported Unicode only to eventually prove to be a pain in the ass. Finally I realized that UTF-8 is the way to go.
The basic reason for using Unicode is to be sure that you can open files of any path, display strings of any language, etc.
As a beginner, I knew that Unicode was the “safe” way to go. Finally I learned that this was the only reason I was supporting Unicode. It was based off a lack of understanding.
UTF-8 not only supports all Unicode characters, but can easily be extended to support more than the standard supports.
Not only can it fully satisfy all of your current needs, be used to open any file, or to print any character in any language, it is also future-proof.
Furthermore, when you move onto systems such as iOS, UTF-8 is actually the required format for opening files, which means you will have to waste time converting your Unicode strings to UTF-8.
And then there is the matter of sending data over networks, which will require you to convert to UTF-8 anyway in order to keep packet sizes smaller.
I go into more details on this subject here.
Ultimately there is nothing to gain from supporting raw Unicode, plus the fact that L"" strings are not the same sizes across compilers/platforms. For the sake of consistency, forward-compatibility, and efficiency it is best to use UTF-8 strings only.
L. Spiro
Note: GameDev.net moderates comments. | https://www.gamedev.net/blog/1543/entry-2255043-creotex-engine-part-1/ | CC-MAIN-2017-17 | refinedweb | 1,386 | 57.37 |
Create a Route That Redirects
Let’s first create a route that will check if the user is logged in before routing.
Add the following to
src/components/AuthenticatedRoute.js.
import React from "react"; import { Route, Redirect } from "react-router-dom"; export default ({ component: C, props: cProps, ...rest }) => <Route {...rest} render={props => cProps.isAuthenticated ? <C {...props} {...cProps} /> : <Redirect to={`/login?redirect=${props.location.pathname}${props.location .search}`} />} />;
This component is similar to the
AppliedRoute component that we created in the Add the session to the state chapter. The main difference being that we look at the props that are passed in to check if a user is authenticated. If the user is authenticated, then we simply render the passed in component. And if the user is not authenticated, then we use the
Redirect React Rotuer v4 component to redirect the user to the login page. We also pass in the current path to the login page (
redirect in the querystring). We will use this later to redirect us back after the user logs in.
We’ll do something similar to ensure that the user is not authenticated.
Add the following to
src/components/UnauthenticatedRoute.js.
import React from "react"; import { Route, Redirect } from "react-router-dom"; export default ({ component: C, props: cProps, ...rest }) => <Route {...rest} render={props => !cProps.isAuthenticated ? <C {...props} {...cProps} /> : <Redirect to="/" />} />;
Here we are checking to ensure that the user is not authenticated before we render the component that is passed in. And in the case where the user is authenticated, we use the
Redirect component to simply send the user to the homepage.
Next, let’s use these components in our app.
If you liked this post, please subscribe to our newsletter, give us a star on GitHub, and check out our sponsors.
For help and discussionComments on this chapter | https://branchv21--serverless-stack.netlify.app/chapters/create-a-route-that-redirects.html | CC-MAIN-2022-33 | refinedweb | 304 | 56.55 |
[hackers] [scc] Emit newlines in onlycpp mode || Roberto E. Vargas Caballero
This message
: [
Message body
] [ More options (
top
,
bottom
) ]
Related messages
: [
Next message
] [
Previous message
]
Contemporary messages sorted
: [
by date
] [
by thread
] [
by subject
] [
by author
] [
by messages with attachments
]
From
: <
git_AT_suckless.org
>
Date
: Mon, 5 Oct 2015 17:43:26 +0200 (CEST)
commit 92c212afb9da93c192deddd2f44aaed198af192c
Author: Roberto E. Vargas Caballero <k0ga_AT_shike2.com>
AuthorDate: Mon Oct 5 17:40:02 2015 +0200
Commit: Roberto E. Vargas Caballero <k0ga_AT_shike2.com>
CommitDate: Mon Oct 5 17:40:02 2015 +0200
Emit newlines in onlycpp mode
In this mode we were printing a sequence of tokens, but
it was not very useful, because we are losing all the
information about lines. With this patch the situation
is not far better, but at least no everything is in
only one line.
diff --git a/cc1/cc1.h b/cc1/cc1.h
index 27a0d9a..2c6f37b 100644
--- a/cc1/cc1.h
+++ b/cc1/cc1.h
_AT_@ -411,7 +411,7 @@ extern unsigned short yylen;
extern int cppoff, disexpand;
extern unsigned cppctx;
extern Input *input;
-extern int lexmode, namespace;
+extern int lexmode, namespace, onlycpp;
extern unsigned curctx;
extern Symbol *curfun, *zero, *one;
diff --git a/cc1/lex.c b/cc1/lex.c
index 0fd977a..36a0c60 100644
--- a/cc1/lex.c
+++ b/cc1/lex.c
_AT_@ -226,6 +226,8 @@ repeat:
goto repeat;
}
+ if (onlycpp)
+ putchar('\n');
input->begin = input->p;
return 1;
}
diff --git a/cc1/main.c b/cc1/main.c
index c20f485..93b7af3 100644
--- a/cc1/main.c
+++ b/cc1/main.c
_AT_@ -13,7 +13,7 @@ int warnings;
jmp_buf recover;
static char *output;
-static int onlycpp;
+int onlycpp;
static void
clean(void)
Received on
Mon Oct 05 2015 - 17:43:26 CEST
This message
: [
Message body
]
Next message
:
git_AT_suckless.org: "[hackers] [st] Small style change. || Christoph Lohmann"
Previous message
:
git_AT_suckless.org: "[hackers] [scc] Add basic test for defined() in #if || Roberto E. Vargas Caballero"
Contemporary messages sorted
: [
by date
] [
by thread
] [
by subject
] [
by author
] [
by messages with attachments
]
This archive was generated by
hypermail 2.3.0
: Mon Oct 05 2015 - 17:48:10 CEST | http://lists.suckless.org/hackers/1510/8177.html | CC-MAIN-2022-05 | refinedweb | 351 | 58.89 |
Hi
I'm trying to write some functions.
This first function is suppose to help shuffle two strings which are the same length into the correct word.
So the output would be something like:
After shuffling the first string som and the second string trs the word is storms.
Here is my code now:
I don't know if I'm starting this correctly, hopefully I am though. Problem with this is that I don't know how to rearrange the a[] and b[] to reorder it in order from [0] onwards.
def reorder(a, b): """ returns a "shuffled" version of a + b: something like a[0] + b[0] + a[1] + b[1] + a[2] + b[2] + so on... """ reorder_str = a + b for i in range(len(reorder_str)): reorder_str = a[i] + b[i] return reorder_str #Testing function: first_str = "som" second_str = "trs" final = reorder(a, b) print "After shuffling the first string " +str(first_str) + " and the second string" + " " + str(second_str) + " the word is" + " " + str(reorder) + "."
The second function I'm trying to create is to find even numbers within a list and return a new list that only gives the elements divisible by 2.
The original list itself should not be changed though.
So for example:
lista = [1,2,5,6,7]
evenlist = even_list(lista)
print lista
print evenlist
[1,2,5,6,7]
[2, 6]
My code right now is:
(The problem I have here is that I don't know how to output the new list with only even numbers)
def even_list(a): """ original list will remain unchanged and the new list will only return the integers that are even """ lista = [] for i in range(len(lista)): if i % 2 == 0: lista = lista % 2 return lista #Testing the function lista = [1,2,5,6,7] evenlist = even_list(lista) print lista print evenlist
Thanks for any help/hints/explanation/suggestions.
Edited by saikeraku: n/a | https://www.daniweb.com/programming/software-development/threads/235291/problem-with-some-functions | CC-MAIN-2017-26 | refinedweb | 315 | 63.22 |
On Mon, 7 Jan 2002 06:40, Erik Hatcher wrote:
> ----- Original Message -----
> From: "Peter Donald" <peter@apache.org>
>
> > Most likely I would -1 it but it would depend upon how it was implemented
> > essentially. I think it is really poor design of tasks that would require
> > this and would almost certainly reject any task that used it ;)
>
> I would implement it using the interface that I proposed, and that Jose
> refined. Simple as that, and probably would only involve a few lines of
> code (at least it should :).
ok will actually have a proepr look at it tonight ;)
> Would you -1 that implementation? I just want to know before I code it and
> get shot down! :)
Theres plenty of things - will have a look and tell you if I don't like.
However the main reason I would -1 is because it is only a workaround for a
limitation of ant and I don't want to be supporting more ugly hack
workarounds into the future ;)
> How does implementing this open the flood gates to bad things?
Increasing the "surface area" of a project always comes at a cost. If the
cost can not be justified by added features etc or the cost is not offset
somehow then it is probably a bad idea to add specific feature.
We already have oodles more "join points" (ie places where we are flexible)
than we actually need if it had designed it properly from the start. Where we
have numerous patterns for things like
addX
createX
addConfiguredX
setX
we could have probably gotten away with just
addX
setX
or even just
setX
if we didn't want to make to much of a distinction between
elements/attributes.
Think of it this way. Give ant 50 points for every minor feature and 100
points for every major feature. Then subtract 5 points for every "access
point" (ie public/protected methods/attributes) and subtract 100 points for
every structure/pattern required and 10 points for every public class.
The higher the result the better ant is from both a user and developers point
of view. However I think ant would actually score rather low as it has a
whole bunch of tasks written in an "interesting" manner. Some have public
attributes (!!!), many include numerous protected/public methods that don't
need to be public/protected, we have many replicated/redundent
patterns/classes that come from different stages of ants evolution etc.
> > XDoclet use-case is the only use-case I have in mind now.
That makes me less inclined to support it if anything ;)
> Keep in mind that I'm of the opinion that Ant probably should be using
> XDoclet in the future to allow a lot of a tasks configuration to be
> specified in meta-data allowing documentation to be generated as well as
> any other artifacts needed (configurator Java classes perhaps?).
Sounds kool Could you give us an example of what something like that would
look like? and have you played with any of it yet?
--
Cheers,
Pete
*------------------------------------------------------*
| "Nearly all men can stand adversity, but if you want |
| to test a man's character, give him power." |
| -Abraham Lincoln |
*------------------------------------------------------*
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/ant-dev/200201.mbox/%3C200201062025.g06KPQr00461@mail004.syd.optusnet.com.au%3E | CC-MAIN-2017-22 | refinedweb | 550 | 59.84 |
Up to [cvs.NetBSD.org] / src / lib / libc / gen
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.14.10.2 / (download) - annotate - [select for diffs], Thu Apr 17 16:25:37 2008 UTC (9 years, 3 months ago) by apb
Branch: christos-time_t
Changes since 1.14.10.1: +93 -0 lines
Diff to previous 1.14.10.1 (colored) to branchpoint 1.14 (colored)
Refer to the CAVEATS section of ctype(3) for more information.
Revision 1.14.10.1, Thu Apr 17 16:25:36 2008 UTC (9 years, 3 months ago) by apb
Branch: christos-time_t
Changes since 1.14: +0 -93 lines
FILE REMOVED
file isdigit.3 was added on branch christos-time_t on 2008-04-17 16:25:37 +0000
Revision 1.14 / (download) - annotate - [select for diffs], Thu Apr 17 16:25:36 2008 UTC (9 years,: christos-time_t
Changes since 1.13: +8 -3 lines
Diff to previous 1.13 (colored)
Refer to the CAVEATS section of ctype(3) for more information.
Revision 1.13 / (download) - annotate - [select for diffs], Thu Jan 18 08:35:07 2007 UTC (10 years, 6.12: +3 -2 lines
Diff to previous 1.12 (colored)
Added a reference to ctype.3. What's the value of having that page at all if it isn't referenced by others? Suggested by Slava Semushin via private mail.
Revision 1.12 / (download) - annotate - [select for diffs], Thu Oct 5 22:34:52 2006 UTC (10 years, 9.11: +4 -4 lines
Diff to previous 1.11 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:52 2003 UTC (13 years, 11:37 2003 UTC (14 years, 3 months ago) by wiz
Branch: MAIN
Changes since 1.9: +2 -2 lines
Diff to previous 1.9 (colored)
Use .In header.h instead of .Fd #include \*[Lt]header.h\*[Gt] Much easier to read and write, and supported by groff for ages. Okayed by ross.
Revision 1.5.12.4 / (download) - annotate - [select for diffs], Thu Aug 1 03:28:10 2002 UTC (14 years, 11 months ago) by nathanw
Branch: nathanw_sa
CVS Tags: nathanw_sa_end
Changes since 1.5.12.3: +9 -2 lines
Diff to previous 1.5.12.3 (colored) to branchpoint 1.5 (colored) next main 1.6 (colored)
Catch up to -current.
Revision 1.9 / (download) - annotate - [select for diffs], Wed Jul 10 23:31:32 2002 UTC (15 years ago) by wiz
Branch: MAIN
CVS Tags: nathanw_sa_before_merge, nathanw_sa_base, fvdl_fs64_base
Changes since 1.8: +1 -2 lines
Diff to previous 1.8 (colored)
Remove Xrefs to ourselves in SEE ALSO.
Revision 1.8 / (download) - annotate - [select for diffs], Wed Jul 10 14:37:13 2002 UTC (15 years ago) by yamt
Branch: MAIN
Changes since 1.7: +9 -1 lines
Diff to previous 1.7 (colored)
import CAVEATS sections from OpenBSD. with little tweak by me.
Revision 1.5.12.3 / (download) - annotate - [select for diffs], Fri Mar 22 20:42:10 2002 UTC (15 years, 4 months ago) by nathanw
Branch: nathanw_sa
Changes since 1.5.12.2: +1 -1 lines
Diff to previous 1.5.12.2 (colored) to branchpoint 1.5 (colored)
Catch up to -current.
Revision 1.5.12.2 / (download) - annotate - [select for diffs], Fri Mar 8 21:35:11 2002 UTC (15 years, 4 months ago) by nathanw
Branch: nathanw_sa
Changes since 1.5.12.1: +2 -2 lines
Diff to previous 1.5.12.1 (colored) to branchpoint 1.5 (colored)
Catch up to -current.
Revision 1.7 / (download) - annotate - [select for diffs], Thu Feb 7 07:00:14 2002 UTC (15 years,)
Generate <>& symbolically.
Revision 1.5.12.1 / (download) - annotate - [select for diffs], Mon Oct 8 20:19:09 2001 UTC (15 years, 9 months ago) by nathanw
Branch: nathanw_sa
Changes since 1.5: +2 -2 lines
Diff to previous 1.5 (colored)
Catch up to -current.
Revision 1.6 / (download) - annotate - [select for diffs], Sun Sep 16 02:57:04 2001 UTC (15 years, 10 months ago) by wiz
Branch: MAIN
Changes since 1.5: +2 -2 lines
Diff to previous 1.5 (colored)
Standardize section headers, sort sections, sort SEE ALSO, punctuation and misc. fixes.
Revision 1.5 / (download) - annotate - [select for diffs], Thu Feb 5 18:47:13 1998 UTC (19 years,.4: +3 -1 lines
Diff to previous 1.4 (colored)
add LIBRARY section to man page
Revision 1.4 / (download) - annotate - [select for diffs], Mon Feb 27 04:34:47 1995 UTC (22 years,: +3 -2 lines
Diff to previous 1.3 (colored)
merge with Lite, keeping local changes. Fix up Id format, etc.
Revision 1.3 / (download) - annotate - [select for diffs], Fri Oct 15 00:58:57 1993: +4 -3 lines
Diff to previous 1.2 (colored)
Make sure all items in SEE ALSO list are comma separated. Add cross references to isblank().
Revision 1.2 / (download) - annotate - [select for diffs], Fri Jul 30 08:37:14 1993 UTC (23 years, 11 months ago) by mycroft
Branch: MAIN
Changes since 1.1: +2 -1 lines
Diff to previous 1.1 (colored)
Add RCS identifiers.
Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Sun Mar 21 09:45:37 1993 UTC (24 years, 4,. | http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/isdigit.3 | CC-MAIN-2017-30 | refinedweb | 877 | 77.13 |
See also: IRC log
RS Looking to get ARIA support in HTML/XHTML/SVG.
support in Dojo, Firefox, etc., but still a couple of issues.
some issues: namespaces in HTML, IE defects on attr selectors, triggering off ARIA states & properties. Would prefer something that doesn't use a colon.
can we come up with a more general solution to the problem?
(time out to find a relevant party)
DanC: Between 450-500 people have joined HTML WG since created earlier this year.
Level of chaos on list and wiki is high, but ARIA issues documented on wiki
Freaked out about implementation issues surfacing because it seems premature to be implementing
<DanC_lap> ast edited 2007-10-10 13:05:56 by GregoryRosmaita
StevenP: XHTML2 WG is chartered to produce an XML application, and standard extension in XML is to use namespaces. Role attr can be used in multiple ways, including accessibility.
It's possible to apply, for example, checkbox role to a div.
<DanC_lap> ("unable" is strong; there's argument against)
My understanding is that HTML cannot adopt this mechanism.
ChrisL: We use XLink for linking. We don't care how it's done so long as it's well-formed. Just put it in the DOM and don't worry about rendering it.
ARIA mechanism is fine because we can just use it. Having a role for mini map to navigate a larger map is useful. We don't expect this role to be defined, but specialized implementers would want that. But collisions could occur without namespaces.
<ChrisL> Colin can be used in internet explorer. MS Office already generates "html" that uses multiple namespaces - v: for vml, mso: for office extensions
DougS: I agree with everybody. Namespaces is elegant solution. Colon causes problems in IE. Would be ideal to have the same syntax across languages. Would be difficult for authors to use aria:* in one place and aria-* in others.
Don't have a good solution.
<ChrisL> ... they use /: in css to get around the colon being special in css
SP: I don't believe there's a syntactic solution. But did occur to me that maybe the solution is to use XBL.
XBL allows you to use shadow trees. Then we wouldn't have to care that they're different across languages. Like adding a style sheet after the fact.
<DanC_lap> in my opening remars, pls include: DanC: I gather some implementors are likely to make decision soon, which concerns me because it's not clear that the ARIA schedule and requirements are clear
<anne> So FWIW, I agree XBL would be nice, but that's not implemented at all
<DanC_lap> Squatting on link relationship names, x-tokens, registries, and URI-based extensibility
<DanC_lap> Squatting on link relationship names, x-tokens, registries, and URI-based extensibili
DanC: x-* tokens are common conventions. My belief is that you can start with a standard URI, and can work on a standard process. If it doesn't reach the level of standardization, then you'll still have the convention.
<Zakim> ChrisL, you wanted to comment on the "IE ioncompatibility" argument
We used to fight this with rel, and then rel="nofollow", and we celebrated that. Confusing.
ChrisL: I find it hard to believe IE doesn't do colon namespaces. Can use
<anne> The hack you described for Selectors doesn't work at least for attribute values ChrisL
\: for colons
<ChrisL> thx anne
hsivonen: IE does something with the colon. What it does is different from the XML DOM.
In Opera, WebKit and Gecko, backslash has no meaning. So multiple ways to handle the colon.
With HTML5, design goal is that parsing should be as close as possible to legacy so that it works consistently. Since legacy browsers do differently, spec follows non-IE browsers.
Since there is no interop with colon, best way is to avoid colon.
<ChrisL> the legacy is not interoperable again
You can't change the legacy. If you introduce something yet different for HTML5, then you have a fourth option.
<DanC_lap> (Al, we're now into HTML versioning issues, which is a notoriously tricky issue in public-html [perhaps you already know that])
<oedipus> content should be browser-agnostic -- implementation decisions should not be made based on limitations of non-complying UAs
Opera tried namespaces in HTML and it didn't work out because of legacy content.
<ChrisL> correct labelling of html5 would allow all browesers, including ie, to correctly process colons and have a namespace plus local name, same as in xhtml
DougS: You didn't describe what problems there were.
anne: We don't want another doctype switch.
<ChrisL> right, it can't apply to *all* legacy content. better to correctly label html5
hsivonen: goal is that the DOM you get behaves the same in HTML5 and legacy browsers for parts of the language other than errors.
There's no value to script writers to create two different mechanisms.
<ChrisL> but you *are* introducing a discrepancy
<anne> we're simply introducing new attributes...
If we have a script written for HTML5, same script should work in XHTML5.
<ChrisL> agreed, but there are two ways of having a script work the same in html5 and xhtml5
<ChrisL> ... and one of those ways breaks *everything else*
DanC: HTML WG decision-making procedure is just starting, so if you want an answer from us soon, you're stressing things.
<DanC_lap> ... especially a decision that's intertwingled with versioning
DougS: Things are being implemented presently.
DanC: As chair, no decisions have been made.
<DanC_lap> to clarify: the HTML WG as a whole has not made any design decisions.
DougS: anne, you said Opera doesn't want a switch, but IE clearly does. Saying there can't be one is not yet off the table.
anne: I'm saying that's our opinion.
<ChrisL> I agree with Dan that decisions have not been made, per process; I'm also hearing lots of claims that decisions can't be changed or that opinions will not be changed
hsivonen: I believe Mozilla position is the same.
<ChrisL> Anne and henris say 'no more switching'
<DanC_lap> yes, the HTML WG process has a fairly high level of chaos, true.
<ChrisL> Dave Hyatt came firmly in favour of a switching mechanism
DougS: I believe it's still open.
DougS: XBL could work if it were implemented. Mozilla doesn't support XBL2. Need something that works today.
<DanC_lap> "Dave Hyatt came firmly in favour of a switching mechanism" was what DougS said; let's please be more clear about attribution
<ChrisL> IE certainly needs a switching mechanism so they can not treat the old legacy (that they mostly madxe, from ms tools) differently
Rich: XBL is an enormous expectation of the browsers. Not that it's a bad idea.
<DanC_lap> (re things that are in e-mail already, pointers are welcome, but not everybody has read everything, and discussion can shed light on stuff that's already been said.)
Applications are being built today. People need access to this information. We have working examples, e.g., Dojo. But we want to make it easier. Need something that works in today's browsers, preferably easier in HTML5. Then want to look at SVG next.
<ChrisL> quoting from the html vision document "Also, as soon as there is a need for any extensibility, the XML serialization (with use of XML namespaces) gains an immediate practical advantage."
<ChrisL>.
<oedipus> +1 to the "vision thing"
ARIA works on HTML 4 now. No need to be designing or hacking this for HTML5. There's still time to do this for HTML 4 now, and HTML5 later. We have apps using this now. Why do this now where angels fear... even looking in?
<ChrisL> pointer to dojo doing this?
<DanC_lap> yes, please, pointer to dojo doing this?
<MichaelC> anne, explains HTML 4 implementation of ARIA
StevenP: Understand scripting issue is important. But Dojo supports it now, and toolkits are popular.
Can abstract this away.
<ChrisL>
Raman: No matter how this is done, it's one global replace to repair.
<ChrisL> quoting "Proper namespace support is one of the main reasons I fell for dojo. To avoid name collisions is far more important than writing the shortest statements."
hsivonen: Reason to rush this is that if ARIA support doesn't get into FF3/Opera9.5, then another generation is lost.
<anne> MichaelC, that's not really a realistic implementation imo, but ok
As for abstraction, if browsers do different things for scripting, you can abstract it in the library, but that's a fix for a problem, and there's no value to scripters to introduce these discrepancies to satisfy aesthetics or politics.
Rich: As a developer supporting ARIA, would rather set the attribute without having to use the colon. Cuts down on the code going down to the client if you can fix this problem.
Would be nice to consider having a dash equivalent to a colon.
DougS: Dash equal to colon would be very unrealistic.
<DanC_lap> FYI, the versioning stuff now has a home in the budding HTML WG tracker system
<Rich> scribe: Rich
<ChrisL> there are lots of hyphens in attributes. for example formatting properties
<ChrisL> font-style etc etc
Matt: My concern is that we can
fix this in scripting so that scripters can use this. Most
scripters have no clue how to do things accessibly
... scripters will come back to the browser developers asking why we need to do this additional work
... would like to take the gloves off and see how we can address this
<ChrisL>
Raman: Firefox has an implementation already
<Zakim> DanC_lap, you wanted to ask hsivonen for estimated decision schedule for firefox 3
<ChrisL>
DanC: henri, do you know about the Firefox schedule?
<ChrisL> see
DanC: lag between dev and release?
AaronL: Can't give a firm answer. If we can decide before Christmas, then we have a good chance.
Rich: We do have aria-* already implemented.
AaronL: You can still use - in HTML, and : in SVG
<ChrisL> see for example on aria in dojo
Raman: for the record, colons do work in FF3.
and FF2
<ChrisL> dojo 1.0 shipped last week and uses xml namespaces. works in ff2 and ff3. hyphen might work in ff3 perhaps
DougS: What about Opera?
ChrisL: I think the answer is that XHTML is supported. If HTML5 isn't going to improve on the situation, then if you need accessibility, you should be using XHTML.
AaronL: Everyone that I know doing ARIA is doing text/html.
<ChrisL> aaron - yes, because shipping text/html for ie and app/xhtml for everyone else is a pain
DougS: Expected content authors for ARIA are presumably not newbies. Someone who's making a custom control is not a novice. They're capable of different syntaxes.
AaronL: There's a range from newbie all the way to custom widget. Raising the learning curve isn't helping.
<ChrisL> and we need to enable all points on that experience curve to use it. something that only works for light use and doesn't work for heavy lifting is not desirable
Al: We have people with different attitudes toward a standard format. We started out doing ARIA with the extensions available to us, but they were fairly general and open, and suitable for extension vocabularies in a small domain: chemistry, etc.
<ChrisL> al: work started using well proven extensibility mechanisms grounded in uri space so domain experts can develop their own vocabularies. but accessibility needs to be everywhere
Accessibility wants to be a part of the core. We may not have built it perfectly, but what's in ARIA 1 is a collection of terms that should interop with different host languages. It's a different beast.
<DanC_lap> (Al makes an argument parallel to my position on TAG/standardizedFieldValues; to echo it back: we already did the URI-based experiment; it's succeeding; time to make centralized short names.)
<ChrisL> in text/html, dom understands setattrributeNS and getattributeNS but the parser will not do that for you; you have to do it post parsing in script. that works in ff2
hsivonen: What works in FF2: if parsed from text/html, then setAttributeNS works from script, but the parser doesn't do that for you.
<ChrisL> no value for authors that they have to implement namespaces in xml themselves in script. parser should do this
<ChrisL> so the hyphen methods wont work in ff3 and will work in ff2.
Concerned about documents being written in various doctypes, with scripts as modules.
<DanC_lap> (re "DTD for HTML", the HTML 5 spec treats DTDs and such as implementations rather than as part of the spec, and I don't expect the editors to change their mind on that.)
<oedipus> +1 compound documents are a VERY compelling reason to have 1 implementation solution
<ChrisL> I don't expect user agents to fetch external DTDs, ever
Raman: Wasn't advocating enable.js approach. In practice, role and state values come from script, not markup. Dynamic attributes need to change from script. And too complicated to do in markup. HTML5 will have better widget story. HTML 4 content will come through script.
hsivonen: XHTML attribs in SVG: doesn't make it easier for authors than having the aria: namespace in SVG. Only complicates things more.
ChrisL: Agree. But having to add hyphens is an extra burden.
<shepazu> DS: I agree with Henri on this point
hsivonen: Whole point of -*
scheme is that no namespace processing is done, and DOM doesn't
apply meaning to aria-*. So the XML stack doesn't know about
it.
... Hasn't been explained what practical problem is being solved.
... If SVG takes attributes starting with aria-, there's no collision.
ChrisL: That's incorrect.
<DanC_lap> (TESTCASE note... would be nice to have a test for aria- in SVG and collect data on what implementations do, recorded in EARL)
In the SVG spec, you may not add attributes in the same namespace and expect SVG to handle it.
We would have to add aria to our namespace.
DougS: And if ARIA makes a change, we would have to add it to our spec.
<oedipus> DanC: would you like me to take your EARL suggestion to the ERT (evaluation and repair tools WG that wrote EARL) or is this something you are planning to do yourself?
<DanC_lap> (hmm... I'm not sure the cost of changing the SVG spec is really higher than the cost to authors of namespace declarations. I'd love to get some economics student to study it or something.)
AaronL: Having to do things through the DOM really isn't acceptable. Yes, possible, but not everyone is as sophisticated.
<anne> I don't see what the problem is with adding accessibility support to SVG, HTML, and XHTML without namespaces
<ChrisL> we would be willing ti add all of aria to the svg spec, if thats what it takes, but then we need to rev svg each time aria changes, and other groups will ask to have their stuff added too
When you have to teach ARIA, to people who usually don't want to do it, you want to make it as easy as possible.
<anne> Also, you have to define the interaction of namespaced attributes and languages anyway. Such as SVG does for XLink and SVG
<anne> Otherwise SVG would not have to talk about XLink at all for instance...
<ChrisL> ... if its aria:whatever then there is no spec change needed and it already works
Only the people in this room really care about this. Most people are just trying to solve real problems.
StevenP: We're also talking about MathML, SMIL, etc. By having a standard extensibility mechanism, someone authoring can just use it and create a UA that does stuff with it.
<Zakim> hsivonen, you wanted to say spec org problem not a technical problem.
It's a fallacy to think that the specifiers of the language have to produce a schema.
<oedipus> examples of xml-based specialized domain markup that may need ARIA: CellML: -- MAGE (MicroArray and Gene Expression ML):
Even if you wanted to make it part of the spec deliverable, you could add that include file.
DougS: I still want to keep underscore on the table. Dash is overloaded. Having underscore gives us the option to do some namespace-like thing in the future.
<ChrisL> I'm reading and see that the role attribute is **not in a namespace anyway** so I wonder what in fact we are discussing
<Zakim> DanC_lap, you wanted to say I'm not at all suprised to re-visit the level of consensus around the XML namespaces mechanism
DanC: I'm not surprised about namespace argument. It was contentious throughout the process. Angry bloggers, etc.
ChrisL: If all else fails, read the spec. None of the attributes are namespaced. What are we discussing here? The attribute values.
Rich: This is the states document.
<oedipus>
ChrisL: It would be easy to add state to RELAX-NG schema for SVG.
<DanC_lap> just about 2.1.1.2 in /TR/aria-state is on the projector
<DanC_lap> just above
Can add a line of NVDL to point to the right schema.
It's already valid.
DougS: What if you change the colon to a dash?
ChrisL: Then you're screwed.
Al: It's clear that if you're using namespaces, can use namespace-based dispatching for validation. But can you write RELAX-NG so it imports by a match pattern of aria-*?
<DanC_lap> (indeed, we need to keep in mind the world-wide cost to authors when considering the cost of spec updates. and there are interactions between them, involving the value to the world of a trusted entity like W3C)
<anne> I would also like to say that making design decisions based on limitations of schema's seems silly at best
hsivonen: You can't use a wildcard in the local name. However, you can make a separate include that enumerates aria-* attributes at the time the include was authored, and update the include.
<Zakim> hsivonen, you wanted to say ease of authoring should override ease of using NVDL
hsivonen: You can still write RELAX-NG for aria-*. Can also write SAX code to deal with this. Validation technology shouldn't affect authoring.
ChrisL: We already had someone altering the SVG spec. That's why we have these rules.
Al: Next steps?
<DanC_lap> (please excuse me; I'm expected elsewhere now.)
DougS: role attribute module in SVG would still need colon. Needs to be resolved.
Al: HTML WG will consider on Saturday.
<ChrisL> the problem is "The document MUST conform to the constraints expressed in Appendix A - DTD Implementation, combined with the constraints expressed in its host language implementation."
DougS: I propose that in SVG spec we would normatively reference XHTML module to rely on it for semantics.
ChrisL: That could be a more focused discussion. We'll report back.
<oedipus> +1 to DougS' normative reference of XHTML Role module for semantics in SVG
Rich: That leaves ARIA modules.
Al: Agree that another spec is published controlling aria* attributes, and normatively referenced by conforming specs?
Raman: Only risk is that we have two namespacing mechanisms 5 years from now.
<ChrisL> I suggest that Steve, Doug myself and any other interested parties discuss the xhtml-role conformance requirements
DougS: SVG can talk about this later today.
StevenP: Can discuss this in Hypertext CG. | http://www.w3.org/2007/11/06-aria-minutes.html | CC-MAIN-2014-42 | refinedweb | 3,248 | 73.17 |
:
At first use Visual Studio 2010 to create a SharePoint web part project. As a result, VS2010 will open a ascx control for you on the designer.>
By calling getWebProperties method from any web part, you can get the current web’s title, id and creation date.
ctx.load(this.web,'Title','Id','Created');
Remember, here the properties names are properties of SPWeb. You need to pass Title instead of title. The properties name uses CAML casing. You can get the full lists of ECMAScript namespaces, object, properties following the link on MSDN. The document is not final yet and may be changed. You can also look into the sp.debug.js file in the folder “Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS”, to get an idea of objects, properties and methods of ECMAScript Client OM.:
To get the full list of namespaces and Classes, you can download the SharePoint 2010 SDK or you can follow the link on MS. | http://www.codeproject.com/Articles/60348/SharePoint-2010-Client-Object-Model-for-JavaScript?fid=1561723&df=90&mpp=10&sort=Position&spc=None&select=3923509&tid=4381329 | CC-MAIN-2016-36 | refinedweb | 165 | 66.13 |
This is the mail archive of the cygwin-apps mailing list for the Cygwin project.
On Oct 10 10:24, Corinna Vinschen wrote: > > > > >? Index: include/cygwin/in.h =================================================================== RCS file: /cvs/src/src/winsup/cygwin/include/cygwin/in.h,v retrieving revision 1.18 retrieving revision 1.19 diff -u -p -r1.18 -r1.19 --- include/cygwin/in.h 6 Jul 2012 13:52:18 -0000 1.18 +++ include/cygwin/in.h 10 Oct 2012 08:36:33 -0000 1.19 @@ -112,11 +112,15 @@ enum IPPORT_USERRESERVED = 5000 }; +/* Avoid collision with Mingw64 headers. */ +#ifndef s_addr /* Internet address. */ struct in_addr { in_addr_t s_addr; }; +#define s_addr s_addr +#endif /* Request struct for IPv4 multicast socket ops */ Other than that, was there any other roadblock on the way to the Mingw64 headers? Thanks, Corinna -- Corinna Vinschen Please, send mails regarding Cygwin to Cygwin Project Co-Leader cygwin AT cygwin DOT com Red Hat | http://cygwin.com/ml/cygwin-apps/2012-10/msg00118.html | CC-MAIN-2017-04 | refinedweb | 148 | 53.27 |
Using the Validating Parser
By now, you have done a lot of experimenting with the nonvalidating parser. It's time to have a look at the validating parser to find out what happens when you use it to parse the sample presentation.
You need to understand two things about the validating parser at the outset:
- A schema or document type definition (DTD) is required.
- Because the schema or DTD is present, the ignorableWhitespace method is invoked whenever possible.
Configuring the Factory
The first step is to modify the Echo program so that it uses the validating parser instead of the nonvalidating parser.
Note: The code in this section is contained in Echo10.java.
To use the validating parser, make the following highlighted changes:

    public static void main(String argv[]) {
        if (argv.length != 1) {
            ...
        }
        // Use the validating parser
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true);
        try {
            ...
Here, you configure the factory so that it will produce a validating parser when newSAXParser is invoked. To configure it to return a namespace-aware parser, you can also use setNamespaceAware(true). Sun's implementation supports any combination of configuration options. (If a combination is not supported by a particular implementation, it is required to generate a factory configuration error.)
setNamespaceAware(true). Sun's implementation supports any combination of configuration options. (If a combination is not supported by a particular implementation, it is required to generate a factory configuration error.)
Validating with XML Schema. You can also examine the sample programs that are part of the JAXP download.'ll use the phrase "XML Schema definition" to avoid the appearance of redundancy.
To be notified of validation errors in an XML document, the parser factory must be configured to create a validating parser, as shown in the preceding section. In addition, the following must be true:
- The appropriate properties must be set on the SAX parser.
- The appropriate error handler must be set.
- The document must be associated with a schema.
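The error-handler requirement deserves emphasis: the SAX default implementation of error() does nothing, so validation failures are silently ignored. Below is a minimal sketch of a handler that surfaces them; the class name ValidationReporter is our own invention, not part of the tutorial's Echo program.

```java
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

// Hypothetical handler: overrides the no-op defaults so that
// validity problems are reported instead of swallowed.
class ValidationReporter extends DefaultHandler {
    @Override
    public void warning(SAXParseException e) {
        System.out.println("Warning: " + e.getMessage());
    }

    @Override
    public void error(SAXParseException e) throws SAXException {
        throw e; // validity violations (wrong element, bad content) land here
    }

    @Override
    public void fatalError(SAXParseException e) throws SAXException {
        throw e; // well-formedness errors: always abort
    }
}
```

Passing an instance of such a class as the handler argument to saxParser.parse() satisfies the second requirement in the list above.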
Setting the SAX Parser Properties
It's helpful to start by defining the constants you'll use when setting the properties:

    static final String JAXP_SCHEMA_LANGUAGE =
        "http://java.sun.com/xml/jaxp/properties/schemaLanguage";
    static final String W3C_XML_SCHEMA =
        "http://www.w3.org/2001/XMLSchema";
Next, you configure the parser factory to generate a parser that is namespace-aware as well as validating:

    ...
    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setNamespaceAware(true);
    factory.setValidating(true);
You'll learn more about namespaces in Validating with XML Schema. For now, understand that schema validation is a namespace-oriented process. Because JAXP-compliant parsers are not namespace-aware by default, it is necessary to set the property for schema validation to work.
The last step is to configure the parser to tell it which schema language to use. Here, you use the constants you defined earlier to specify the W3C's XML Schema language:

    saxParser.setProperty(JAXP_SCHEMA_LANGUAGE, W3C_XML_SCHEMA);
In the process, however, there is an extra error to handle. You'll take a look at that error next.
Setting Up the Appropriate Error Handling
In addition to the error handling you've already learned about, there is one error that can occur when you are configuring the parser for schema-based validation. If the parser is not 1.2-compliant and therefore does not support XML Schema, it can throw a SAXNotRecognizedException.
To handle that case, you wrap the setProperty() statement in a try/catch block, as shown in the code highlighted here:

    ...
    SAXParser saxParser = factory.newSAXParser();
    try {
        saxParser.setProperty(JAXP_SCHEMA_LANGUAGE, W3C_XML_SCHEMA);
    }
    catch (SAXNotRecognizedException x) {
        // Happens if the parser does not support JAXP 1.2
        ...
    }
    ...
Associating a Document with a Schema
Now that the program is ready to validate the data using an XML Schema definition, it is only necessary to ensure that the XML document is associated with one. There are two ways to do that:
- By including a schema declaration in the XML document
- By specifying the schema to use in the application
Note: When the application specifies the schema to use, it overrides any schema declaration in the document.
To specify the schema definition in the document, you create XML such as this:

    <documentRoot
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation='YourSchemaDefinition.xsd'>
    ...
The first attribute defines the XML namespace (xmlns) prefix, xsi, which stands for XML Schema instance. The second line specifies the schema to use for elements in the document that do not have a namespace prefix--that is, for the elements you typically define in any simple, uncomplicated XML document.
Note: You'll learn about namespaces in Validating with XML Schema. For now, think of these attributes as the "magic incantation" you use to validate a simple XML file that doesn't use them. After you've learned more about namespaces, you'll see how to use XML Schema to validate complex documents that use them. Those ideas are discussed in Validating with Multiple Namespaces.
You can also specify the schema file in the application:

    static final String JAXP_SCHEMA_SOURCE =
        "http://java.sun.com/xml/jaxp/properties/schemaSource";
    ...
    SAXParser saxParser = spf.newSAXParser();
    ...
    saxParser.setProperty(JAXP_SCHEMA_SOURCE, new File(schemaSource));
Now that you know how to use an XML Schema definition, we'll turn to the kinds of errors you can see when the application is validating its incoming data. To do that, you'll use a document type definition (DTD) as you experiment with validation.
Experimenting with Validation Errors
To see what happens when the XML document does not specify a DTD, remove the DOCTYPE statement from the XML file and run the Echo program on it.
Note: The output shown here is contained in Echo10-01.txt. (The browsable version is Echo10-01.html.)
The result you see looks like this:

    <?xml version='1.0' encoding='UTF-8'?>
    ** Parsing error, line 9, uri .../slideSample01.xml
       Document root element "slideshow", must match DOCTYPE root "null"
Note: This message was generated by the JAXP 1.2 libraries. If you are using a different parser, the error message is likely to be somewhat different.
This message says that the root element of the document must match the element specified in the DOCTYPE declaration. That declaration specifies the document's DTD. Because you don't yet have one, its value is null. In other words, the message is saying that you are trying to validate the document, but no DTD has been declared, because no DOCTYPE declaration is present.
So now you know that a DTD is a requirement for a valid document. That makes sense. What happens when you run the parser on your current version of the slide presentation, with the DTD specified?
Note: The output shown here is produced using slideSample07.xml, as described in Referencing Binary Entities. The output is contained in Echo10-07.txt. (The browsable version is Echo10-07.html.)
This time, the parser gives a different error message:

    ** Parsing error, line 29, uri file:...
       The content of element type "slide" must match "(image?,title,item*)".
This message says that the element found at line 29 (<item>) does not match the definition of the <slide> element in the DTD. The error occurs because the definition says that the slide element requires a title. That element is not optional, and the copyright slide does not have one. To fix the problem, add a question mark to make title an optional element:

    <!ELEMENT slide (image?, title?, item*)>
Now what happens when you run the program?
Note: You could also remove the copyright slide, producing the same result shown next, as reflected in Echo10-06.txt. (The browsable version is Echo10-06.html.)
The answer is that everything runs fine until the parser runs into the <em> tag contained in the overview slide. Because that tag is not defined in the DTD, the attempt to validate the document fails. The output looks like this:

    ...
    ELEMENT: <title>
       CHARS: Overview
    END_ELM: </title>
    ELEMENT: <item>
       CHARS: Why
    ** Parsing error, line 28, uri: ...
       Element "em" must be declared.
    org.xml.sax.SAXParseException: ...
    ...
The error message identifies the part of the DTD that caused validation to fail. In this case it is the line that defines an item element as (#PCDATA | item).
As an exercise, make a copy of the file and remove all occurrences of <em> from it. Can the file be validated now? (In the next section, you'll learn how to define parameter entities so that we can use XHTML in the elements we are defining as part of the slide presentation.)
Error Handling in the Validating Parser
It is important to recognize that the only reason an exception is thrown when the file fails validation is as a result of the error-handling code you entered in the early stages of this tutorial. That code is reproduced here:

    public void error(SAXParseException e)
        throws SAXParseException
    {
        throw e;
    }
If that exception is not thrown, the validation errors are simply ignored. Try commenting out the line that throws the exception. What happens when you run the parser now?
In general, a SAX parsing error is a validation error, although you have seen that it can also be generated if the file specifies a version of XML that the parser is not prepared to handle. Remember that your application will not generate a validation exception unless you supply an error handler such as the one here. | http://docs.oracle.com/javaee/1.4/tutorial/doc/JAXPSAX9.html | CC-MAIN-2014-41 | refinedweb | 1,469 | 55.74 |
SQL Server 2005 was the first version of SQL Server to offer CLR integration – giving developers the ability to write their own functions and stored procs as .NET assemblies. In addition to opening a whole new world of possibilities to SQL developers, when used wisely these functions/procs will in some cases outperform standard T-SQL, especially when it comes to math and string manipulation.
So, let's get down to business. First thing to do is create a class library to hold your functions. You can do this in C# or VB.NET, I won't judge. But I will use C#.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public partial class UserDefinedFunctions
    {
        [Microsoft.SqlServer.Server.SqlFunction]
        public static double clrDistCalc(double long1, double lat1,
                                         double long2, double lat2)
        {
            double EarthCircMeters = 40074199.82569710;
            double DeltaX;
            double DeltaY;
            double DeltaXMeters;
            double DeltaYMeters;
            double MetersPerDegreeLong;
            double CenterY;

            // Calculate distance
            DeltaX = Math.Abs(long2 - long1);
            DeltaY = Math.Abs(lat2 - lat1);
            CenterY = (lat2 + lat1) / 2;
            MetersPerDegreeLong = (Math.Cos(CenterY * (3.14159265 / 180)) * EarthCircMeters) / 360;
            DeltaXMeters = DeltaX * MetersPerDegreeLong;
            DeltaYMeters = DeltaY * 111113.519;

            return Math.Sqrt(Math.Pow(DeltaXMeters, 2) + Math.Pow(DeltaYMeters, 2)) / 1609.344;
        }
    }
I’ve never tried to optimize this (it is pretty fast as is), but I am sure it is easily doable. The important thing to notice here is the additional namespace directives (past System). These allow you to use SQL Server types, Identify your function as a SQL Server function, and loads of other fun stuff. I don’t actually use any of the SQL types in this function, but they do work well in most cases. Its’ easy to see the SqlFunction attribute applied to the method.
Once this is done, compile that sucker! The trickiest part here is really deploying it to your server, but even that is simple, just a lot of steps. First, you’ll need to put the file in a location where the server can see it. On the server would be a nice easy place. Now we need to create an assembly in SQL Server for it. This is simple too:
    CREATE ASSEMBLY DistanceCalculations
    FROM 'C:\DistanceCalculationLibrary.dll'
Ok, we’re getting close. Once this assembly is created, you need to enable the CLR (it is disabled by default).
    exec sp_configure 'clr enabled', 1
    reconfigure
And finally, create our SQL Server udf referencing the assembly.
    CREATE FUNCTION [dbo].[clrDistCalc](@long1 [float], @lat1 [float], @long2 [float], @lat2 [float])
    RETURNS [float]
    WITH EXECUTE AS CALLER
    AS EXTERNAL NAME [DistanceCalculations].[UserDefinedFunctions].[clrDistCalc]
You can test it by writing some queries against the same table used for Denis’ example (or George’s). Here’s an example using Denis’ data:
    SELECT h.*
    FROM zipcodes g
    INNER JOIN zipcodes h
        ON g.zipcode <> h.zipcode
        AND g.zipcode = '10028'
        AND h.zipcode <> '10028'
    WHERE dbo.clrDistCalc(g.Longitude, g.Latitude, h.Longitude, h.Latitude) <= 20 -- the function returns miles
If I know those guys like I think I do, the indexes and everything will be sufficient!
That’s really all there is to it. These CLR functions are an EXTREMELY powerful tool for any developer’s kit. Math calculations are just scratching the surface of the great uses for this capability. It is important to use them judiciously though, as with most powerful tools you can seriously injure yourself if not careful! I’m not sure I’ve determined where the line is yet, but I would not even attempt using the CLR unless I was having performance issues with frequently used calculations (or they were frequently used enough that I could see the problem coming) or if I needed access to the filesystem or other resources that are awkward to access from straight T-SQL. As developers we’re always going to need to make choices like this, and no amount of blog posts will make them for us. So like all things, just use it wisely and you’ll be fine!
In the future we’ll put all three to the test, and do a fourth post detailing the results. At least I hope we will, the idea of doing just that is the whole reason I blogged about this again!
*** If you have a SQL Server related question try our Microsoft SQL Server Programming forum or our Microsoft SQL Server Admin forum
Nice post Alex!
Ha, I had to reference your article on deployment over @ pretty heavily (in my old posting, I never discussed deployment, just linked to your tutorial). Since the idea was to have everything in one place, figured I should include those bits as well. So nice work to you also 🙂
I did some pretty unofficial testing with a SQL CLR function similar to this and saw a drastic performance difference when using the SqlTypes instead of the native data types. Using the native data type (as you have in your example) performed much better.
Just thought I’d throw that out here as an FYI.
Thanks for the post Alex. | http://blogs.lessthandot.com/index.php/datamgmt/dbprogramming/sql-server-distance-calculation-option-3/ | CC-MAIN-2017-22 | refinedweb | 832 | 66.84 |
Welcome to another fun filled tutorial. This time I will attempt to explain Volumetric Fog using the glFogCoordf Extension. In order to run this demo, your video card must support the "GL_EXT_fog_coord" extension. If you are not sure if your card supports this extension, you have two options... 1) download the VC++ source code, and see if it runs. 2) download lesson 24, and scroll through the list of extensions supported by your video card.
This tutorial will introduce you to the NeHe IPicture code which is capable of loading BMP, EMF, GIF, ICO, JPG and WMF files from your computer or a web page. You will also learn how to use the "GL_EXT_fog_coord" extension to create some really cool looking Volumetric Fog (fog that can float in a confined space without affecting the rest of the scene).
If this tutorial does not work on your machine, the first thing you should do is check to make sure you have the latest video driver installed. If you have the latest driver and the demo still does not work... you might want to purchase a new video card. A low end GeForce 2 will work just fine, and should not cost all that much. If your card doesn't support the fog extension, who's to say what other extensions it will not support?
For those of you that can't run the demo, and feel excluded... keep the following in mind: Every single day I get at least 1 email requesting a new tutorial. Many of the tutorials requested are already online! People don't bother reading what is already online and end up skipping over the topic they are most interested in. Other tutorials are too complex and would require weeks worth of programming on my end. Finally, there are the tutorials that I could write, but usually avoid because I know they will not run on all cards. Now that cards such as the GeForce are cheap enough that anyone with an allowance could afford one, I can no longer justify not writing the tutorials. Truthfully, if your video card only supports basic extensions, you are missing out! And if I continue to skip over topics such as Extensions, the tutorials will lag behind!
With that said... let's attack some code!!!
The code starts off very similar to the old basecode, and almost identical to the new NeHeGL basecode. The only difference is the extra line of code to include the OLECTL header file. This header must be included if you want the IPICTURE code to function. If you exclude this line, you will get errors when trying to use IPicture, OleLoadPicturePath and IID_IPicture.
Just like the NeHeGL basecode, we use #pragma comment ( lib, ... ) to automatically include the required library files! Notice we no longer need to include the glaux library (I'm sure many of you are cheering right now).
The next three lines of code check to see if CDS_FULLSCREEN is defined. If it is not (which it isn't in most compilers), we give it a value of 4. I know many of you have emailed me to ask why you get errors when trying to compile code using CDS_FULLSCREEN in DEV C++. Include these three lines and you will not get the error!
#include <windows.h> // Header File For Windows
#include <gl\gl.h> // Header File For The OpenGL32 Library
#include <gl\glu.h> // Header File For The GLu32 Library
#include <olectl.h> // Header File For The OLE Controls Library (Used In BuildTexture)
#include <math.h> // Header File For The Math Library (Used In BuildTexture)
#include "NeHeGL.h" // Header File For NeHeGL
#pragma comment( lib, "opengl32.lib" ) // Search For OpenGL32.lib While Linking
#pragma comment( lib, "glu32.lib" ) // Search For GLu32.lib While Linking
#ifndef CDS_FULLSCREEN // CDS_FULLSCREEN Is Not Defined By Some
#define CDS_FULLSCREEN 4 // Compilers. By Defining It This Way,
#endif // We Can Avoid Errors
GL_Window* g_window; // Window Structure
Keys* g_keys; // Keyboard
In the following code, we set the color of our fog. In this case we want it to be a dark orange color. A little red (0.6f) mixed with even less green (0.3f) will give us the color we desire.
The floating point variable camz will be used later in the code to position our camera inside a long and dark hallway! We will move forwards and backwards through the hallway by translating on the Z-Axis before we draw the hallway.
// User Defined Variables
GLfloat fogColor[4] = {0.6f, 0.3f, 0.0f, 1.0f}; // Fog Colour
GLfloat camz; // Camera Z Depth
Just like CDS_FULLSCREEN has a predefined value of 4... the variables GL_FOG_COORDINATE_SOURCE_EXT and GL_FOG_COORDINATE_EXT also have predefined values. As mentioned in the comments, the values were taken from the GLEXT header file. A file that is freely available on the net. Huge thanks to Lev Povalahev for creating such a valuable header file! These values must be set if you want the code to compile! The end result is that we have two new enumerants available to us (GL_FOG_COORDINATE_SOURCE_EXT & GL_FOG_COORDINATE_EXT).
To use the function glFogCoordfExt we need to declare a function prototype typedef that match the extensions entry point. Sounds complex, but it is not all that bad. In English... we need to tell our program the number of parameters and the the type of each parameter accepted by the function glFogCoordfEXT. In this case... we are passing one parameter to this function and it is a floating point value (a coordinate).
Next we have to declare a global variable of the type of the function prototype typedef. In this case PFNGLFOGCOORDFEXTPROC. This is the first step to creating our new function (glFogCoordfEXT). It is global so that we can use the command anywhere in our code. The name we use should match the actual extension name exactly. The actual extension name is glFogCoordfEXT and the name we use is also glFogCoordfEXT.
Once we use wglGetProcAddress to assign the function variable the address of the OpenGL drivers extension function, we can call glFogCoordfEXT as if it was a normal function. More on this later!
The last line prepares things for our single texture.
So what we have so far...
We know that PFNGLFOGCOORDFEXTPROC takes one floating point value (GLfloat coord)
Because glFogCoordfEXT is type PFNGLFOGCOORDFEXTPROC it's safe to say glFogCoordfEXT takes one floating point value... Leaving us with glFogCoordfEXT(GLfloat coord).
Our function is defined, but will not do anything because glFogCoordfEXT is NULL at the moment (we still need to attach glFogCoordfEXT to the Address of the OpenGL driver's extension function).
Really hope that all makes sense... it's very simple when you already know how it works... but describing it is extremely difficult (at least for me it is). If anyone would like to rewrite this section of text using simple / non complicated wording, please send me an email! The only way I could explain it better is through images, and at the moment I am in a rush to get this tutorial online!
// Variables Necessary For FogCoordfEXT
#define GL_FOG_COORDINATE_SOURCE_EXT 0x8450 // Value Taken From GLEXT.H
#define GL_FOG_COORDINATE_EXT 0x8451 // Value Taken From GLEXT.H
typedef void (APIENTRY * PFNGLFOGCOORDFEXTPROC) (GLfloat coord); // Declare Function Prototype
PFNGLFOGCOORDFEXTPROC glFogCoordfEXT = NULL; // Our glFogCoordfEXT Function
GLuint texture[1]; // One Texture (For The Walls)
Now for the fun stuff... the actual code that turns an image into a texture using the magic of IPicture :)
This function requires a pathname (path to the actual image we want to load... either a filename or a Web URL) and a texture ID (for example ... texture[0]).
We need to create a device context for our temporary bitmap. We also need a place to store the bitmap data (hbmpTemp), a connection to the IPicture Interface, variables to store the path (file or URL). 2 variables to store the image width, and 2 variables to store the image height. lwidth and lheight store the actual image width and height. lwidthpixels and lheightpixels stores the width and height in pixels adjusted to fit the video cards maximum texture size. The maximum texture size will be stored in glMaxTexDim.
int BuildTexture(char *szPathName, GLuint &texid) // Load Image And Convert To A Texture
{
HDC hdcTemp; // The DC To Hold Our Bitmap
HBITMAP hbmpTemp; // Holds The Bitmap Temporarily
IPicture *pPicture; // IPicture Interface
OLECHAR wszPath[MAX_PATH+1]; // Full Path To Picture (WCHAR)
char szPath[MAX_PATH+1]; // Full Path To Picture
long lWidth; // Width In Logical Units
long lHeight; // Height In Logical Units
long lWidthPixels; // Width In Pixels
long lHeightPixels; // Height In Pixels
GLint glMaxTexDim ; // Holds Maximum Texture Size
The next section of code takes the filename and checks to see if it's a web URL or a file path. We do this by checking to see if the filename contains http://. If the filename is a web URL, we copy the name to szPath.
If the filename does not contain a URL, we get the working directory. If you had the demo saved to C:\wow\lesson41 and you tried to load data\wall.bmp the program needs to know the full path to the wall.bmp file not just that the bmp file is saved in a folder called data. GetCurrentDirectory will find the current path. The location that has both the .EXE and the 'data' folder.
If the .exe was stored at "c:\wow\lesson41"... The working directory would return "c:\wow\lesson41". We need to add "\\" to the end of the working directory along with "data\wall.bmp". The "\\" represents a single "\". So if we put it all together we end up with "c:\wow\lesson41" + "\" + "data\wall.bmp"... or "c:\wow\lesson41\data\wall.bmp". Make sense?
if (strstr(szPathName, "http://")) // If PathName Contains http:// Then...
{
strcpy(szPath, szPathName); // Append The PathName To szPath
}
else // Otherwise... We Are Loading From A File
{
GetCurrentDirectory(MAX_PATH, szPath); // Get Our Working Directory
strcat(szPath, "\\"); // Append "\" After The Working Directory
strcat(szPath, szPathName); // Append The PathName
}
So we have the full pathname stored in szPath. Now we need to convert the pathname from ASCII to Unicode so that OleLoadPicturePath understands the path name. The first line of code below does this for us. The result is stored in wszPath.
CP_ACP means ANSI Code Page. The second parameter specifies the handling of unmapped characters (in the code below we ignore this parameter). szPath is the ANSI (multibyte) string to be converted. The 4th parameter is the length of that string. If this value is set to -1, the string is assumed to be NULL terminated (which it is). wszPath is where the translated wide-character string will be stored and MAX_PATH is the maximum size of our file path (260 characters).
After converting the path to Unicode, we attempt to load the image using OleLoadPicturePath. If everything goes well, pPicture will point to the image data and the result code will be stored in hr.
If loading fails, the program will exit.
MultiByteToWideChar(CP_ACP, 0, szPath, -1, wszPath, MAX_PATH); // Convert From ASCII To Unicode
HRESULT hr = OleLoadPicturePath(wszPath, 0, 0, 0, IID_IPicture, (void**)&pPicture);
if(FAILED(hr)) // If Loading Failed
return FALSE; // Return False
Now we need to create a temporary device context. If all goes well, hdcTemp will hold the compatible device context. If the program is unable to get a compatible device context pPicture is released, and the program exits.
hdcTemp = CreateCompatibleDC(GetDC(0)); // Create The Windows Compatible Device Context
if(!hdcTemp) // Did Creation Fail?
{
pPicture->Release(); // Decrements IPicture Reference Count
return FALSE; // Return False (Failure)
}
Now it's time to query the video card and find out what the maximum texture dimension supported is. This code is important because it will attempt to make the image look good on all video cards. Not only will it resize the image to a power of 2 for you. It will make the image fit in your video cards memory. This allows you to load images with any width or height. The only drawback is that users with bad video cards will loose alot of detail when trying to view high resolution images.
On to the code... we use glGetIntegerv(...) to get the maximum texture dimension (256, 512, 1024, etc.) supported by the users video card. We then check to see what the actual image width is. pPicture->get_Width(&lWidth) is the image's width.
We use some fancy math to convert the image width to pixels. The result is stored in lWidthPixels. We do the same for the height. We get the image height from pPicture and store the pixel value in lHeightPixels.
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &glMaxTexDim); // Get Maximum Texture Size Supported
pPicture->get_Width(&lWidth); // Get IPicture Width (Convert To Pixels)
lWidthPixels = MulDiv(lWidth, GetDeviceCaps(hdcTemp, LOGPIXELSX), 2540);
pPicture->get_Height(&lHeight); // Get IPicture Height (Convert To Pixels)
lHeightPixels = MulDiv(lHeight, GetDeviceCaps(hdcTemp, LOGPIXELSY), 2540);
Next we check to see if the image width in pixels is less than the maximum width supported by the video card.
If the image width in pixels is less than the maximum width supported, we resize the image to a power of two based on the current image width in pixels. We add 0.5f so that the image is always made bigger if it's closer to the next size up. For example... If our image width was 400 and the video card supported a maximum width of 512... it would be better to make the width 512. If we made the width 256, the image would loose alot of it's detail.
If the image size is larger than the maximum width supported by the video card, we set the image width to the maximum texture size supported.
We do the same for the image height. The final image width and height will be stored in lWidthPixels and lHeightPixels.
// Resize Image To Closest Power Of Two
if (lWidthPixels <= glMaxTexDim) // Is Image Width Less Than Or Equal To Cards Limit
lWidthPixels = 1 << (int)floor((log((double)lWidthPixels)/log(2.0f)) + 0.5f);
else // Otherwise Set Width To "Max Power Of Two" That The Card Can Handle
lWidthPixels = glMaxTexDim;
if (lHeightPixels <= glMaxTexDim) // Is Image Height Greater Than Cards Limit
lHeightPixels = 1 << (int)floor((log((double)lHeightPixels)/log(2.0f)) + 0.5f);
else // Otherwise Set Height To "Max Power Of Two" That The Card Can Handle
lHeightPixels = glMaxTexDim;
Now that we have the image data loaded and we know the height and width we want to make the image, we need to create a temporary bitmap. bi will hold our bitmap header information and pBits will hold the actual image data. We want the bitmap we create to be a 32 bit bitmap with a width of lWidthPixels and a height of lHeightPixels. We want the image encoding to be RGB and the image will have just one bitplane.
// Create A Temporary Bitmap
BITMAPINFO bi = {0}; // The Type Of Bitmap We Request
DWORD *pBits = 0; // Pointer To The Bitmap Bits
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); // Set Structure Size
bi.bmiHeader.biBitCount = 32; // 32 Bit
bi.bmiHeader.biWidth = lWidthPixels; // Power Of Two Width
bi.bmiHeader.biHeight = lHeightPixels; // Make Image Top Up (Positive Y-Axis)
bi.bmiHeader.biCompression = BI_RGB; // RGB Encoding
bi.bmiHeader.biPlanes = 1; // 1 Bitplane
Taken from the MSDN: The CreateDIBSection function creates a DIB that applications can write to directly. The function gives you a pointer to the location of the bitmap's bit values. You can let the system allocate the memory for the bitmap.
hdcTemp is our temporary device context. bi is our Bitmap Info data (header information). DIB_RGB_COLORS tells our program that we want to store RGB data, not indexes into a logical palette (each pixel will have a red, green and blue value).
pBits is where the image data will be stored (points to the image data). the last two parameters can be ignored.
If for any reason the program was unable to create a temporary bitmap, we clean things up and return false (which exits the program).
If things go as planned, we end up with a temporary bitmap. We use SelectObject to attach the bitmap to the temporary device context.
// Creating A Bitmap This Way Allows Us To Specify Color Depth And Gives Us Imediate Access To The Bits
hbmpTemp = CreateDIBSection(hdcTemp, &bi, DIB_RGB_COLORS, (void**)&pBits, 0, 0);
if(!hbmpTemp) // Did Creation Fail?
{
DeleteDC(hdcTemp); // Delete The Device Context
pPicture->Release(); // Decrements IPicture Reference Count
return FALSE; // Return False (Failure)
}
SelectObject(hdcTemp, hbmpTemp); // Select Handle To Our Temp DC And Our Temp Bitmap Object
Now we need to fill our temporary bitmap with data from our image. pPicture->Render will do this for us. It will also resize the image to any size we want (in this case... lWidthPixels by lHeightPixels).
hdcTemp is our temporary device context. The first two parameters after hdcTemp are the horizontal and vertical offset (the number of blank pixels to the left and from the top). We want the image to fill the entire bitmap, so we select 0 for the horizontal offset and 0 for the vertical offset.
The fourth parameter is the horizontal dimension of destination bitmap and the fifth parameter is the vertical dimension. These parameters control how much the image is stretched or compressed to fit the dimensions we want.
The next parameter (0) is the horizontal offset we want to read the source data from. We draw from left to right so the offset is 0. This will make sense once you see what we do with the vertical offset (hopefully).
The lHeight parameter is the vertical offset. We want to read the data from the bottom of the source image to the top. By using an offset of lHeight, we move to the very bottom of the source image.
lWidth is the amount to copy in the source picture. We want to copy all of the data horizontally in the source image. lWidth covers all the data from left to right.
The second last parameter is a little different. It's a negative value. Negative lHeight to be exact. What this means is that we want to copy all of the data vertically, but we want to start copying from the bottom to the top. That way the image is flipped as it's copied to the destination bitmap.
The last parameter is not used.
// Render The IPicture On To The Bitmap
pPicture->Render(hdcTemp, 0, 0, lWidthPixels, lHeightPixels, 0, lHeight, lWidth, -lHeight, 0);
So now we have a new bitmap with a width of lWidthPixels and a height of lHeightPixels. The new bitmap has been flipped right side up.
Unfortunately the data is stored in BGR format. So we need to swap the Red and Blue pixels to make the bitmap an RGB image. At the same time, we set the alpha value to 255. You can change this value to anything you want. This demo does not use alpha so it has no effect in this tutorial!
// Convert From BGR To RGB Format And Add An Alpha Value Of 255
for(long i = 0; i < lWidthPixels * lHeightPixels; i++) // Loop Through All Of The Pixels
{
BYTE* pPixel = (BYTE*)(&pBits[i]); // Grab The Current Pixel
BYTE temp = pPixel[0]; // Store 1st Color In Temp Variable (Blue)
pPixel[0] = pPixel[2]; // Move Red Value To Correct Position (1st)
pPixel[2] = temp; // Move Temp Value To Correct Blue Position (3rd)
pPixel[3] = 255; // Set The Alpha Value To 255
}
Finally, after all of that work, we have a bitmap image that can be used as a texture. We bind to texid, and generate the texture. We want to use linear filtering for both the min and mag (max) filters (looks nice).
We get the image data from pBits. When generating the texture, we use lWidthPixels and lHeightPixels one last time to set the texture width and height.
After the 2D texture has been generated, we can clean things up. We no longer need the temporary bitmap or the temporary device context. Both of these are deleted. We can also release pPicture... YAY!!!
glGenTextures(1, &texid); // Create The Texture
// Typical Texture Generation Using Data From The Bitmap
glBindTexture(GL_TEXTURE_2D, texid); // Bind To The Texture ID
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); // (Modify This For The Type Of Filtering You Want)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // (Modify This For The Type Of Filtering You Want)
// (Modify This If You Want Mipmaps)
glTexImage2D(GL_TEXTURE_2D, 0, 3, lWidthPixels, lHeightPixels, 0, GL_RGBA, GL_UNSIGNED_BYTE, pBits);
DeleteObject(hbmpTemp); // Delete The Object
DeleteDC(hdcTemp); // Delete The Device Context
pPicture->Release(); // Decrements IPicture Reference Count
return TRUE; // Return True (All Good)
}
The following code checks to see if the users video card support the EXT_fog_coord extension. This code can ONLY be called after your OpenGL program has a Rendering Context. If you try to call it before you set up the window, you will get errors.
The first thing we do is create a string with the name of our extension.
We then allocate enough memory to hold the list of OpenGL extensions supported by the users video card. The list of supported extensions is retrieved with the command glGetString(GL_EXTENSIONS). The information returned is copied into glextstring.
Once we have the list of supported extensions we use strstr to see if our extension (Extension_Name) is in the list of supported extensions (glextstring).
If the extension is not supported, FALSE is returned and the program ends. If everything goes ok, we free glextstring (we no longer need the list of supported extensions).
int Extension_Init()
{
char Extension_Name[] = "EXT_fog_coord";
// Allocate Memory For Our Extension String
char* glextstring=(char *)malloc(strlen((char *)glGetString(GL_EXTENSIONS))+1);
strcpy (glextstring,(char *)glGetString(GL_EXTENSIONS)); // Grab The Extension List, Store In glextstring
if (!strstr(glextstring,Extension_Name)) // Check To See If The Extension Is Supported
return FALSE; // If Not, Return FALSE
free(glextstring); // Free Allocated Memory
At the very top of this program we defined glFogCoordfEXT. However, the command will not work until we attach the function to the actual OpenGL extension. We do this by giving glFogCoordfEXT the address of the OpenGL Fog Extension. When we call glFogCoordfEXT, the actual extension code will run, and will receive the parameter passed to glFogCoordfEXT.
Sorry, this is one of them bits of code that is very hard to explain in simple terms (at least for me).
// Setup And Enable glFogCoordEXT
glFogCoordfEXT = (PFNGLFOGCOORDFEXTPROC) wglGetProcAddress("glFogCoordfEXT");
return TRUE;
}
This section of code is where we call the routine to check if the extension is supported, load our texture, and set up OpenGL.
By the time we get to this section of code, our program has an RC (rendering context). This is important because you need to have a rendering context before you can check if an extension is supported by the users video card.
So we call Extension_Init( ) to see if the card supports the extension. If the extension is not supported, Extension_Init( ) returns false and the check fails. This will cause the program to end. If you wanted to display some type of message box you could. Currently the program will just fail to run.
If the extension is supported, we attempt to load our wall.bmp texture. The ID for this texture will be texture[0]. If for some reason the texture does not load, the program will end.
Initialization is simple. We enable 2D texture mapping. We set the clear color to black. The clear depth to 1.0f. We set depth testing to less than or equal to and enable depth testing. The shademodel is set to smooth shading, and we select nicest for our perspective correction.
BOOL Initialize (GL_Window* window, Keys* keys) // Any GL Init Code & User Initialiazation Goes Here
{
g_window = window; // Window Values
g_keys = keys; // Key Values
// Start Of User Initialization
if (!Extension_Init()) // Check And Enable Fog Extension If Available
return FALSE; // Return False If Extension Not Supported
if (!BuildTexture("data/wall.bmp", texture[0])) // Load The Wall Texture
return FALSE; // Return False If Loading Failed
glEnable(GL_TEXTURE_2D); // Enable Texture Mapping
glClearColor (0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth (1.0f); // Depth Buffer Setup
glDepthFunc (GL_LEQUAL); // The Type Of Depth Testing
glEnable (GL_DEPTH_TEST); // Enable Depth Testing
glShadeModel (GL_SMOOTH); // Select Smooth Shading
glHint (GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Set Perspective Calculations To Most Accurate
Now for the fun stuff. We need to set up the fog. We start off by enabling fog. The rendering mode we use is linear (nice looking). The fog color is set to fogColor (orange).
We then need to set the fog start position. This is the least dense section of fog. To make things simple, we will use 1.0f as the least dense value (FOG_START). We will use 0.0f as the most dense area of fog (FOG_END).
According to all of the documentation I have read, setting the fog hint to GL_NICEST causes the fog to be rendered per pixel. Using GL_FASTEST will render the fog per vertex. I personally do not see a difference.
The last glFogi(...) command tells OpenGL that we want to set our fog based on vertice coordinates. This allows us to position the fog anywhere in our scene without affecting the entire scene (cool!).
We set the starting camz value to -19.0f. The actual hallways is 30 units in length. So -19.0f moves us almost the beginning of the hallway (the hallway is rendered from -15.0f to +15.0f on the Z axis).
// Set Up Fog
glEnable(GL_FOG); // Enable Fog
glFogi(GL_FOG_MODE, GL_LINEAR); // Fog Fade Is Linear
glFogfv(GL_FOG_COLOR, fogColor); // Set The Fog Color
glFogf(GL_FOG_START, 0.0f); // Set The Fog Start (Least Dense)
glFogf(GL_FOG_END, 1.0f); // Set The Fog End (Most Dense)
glHint(GL_FOG_HINT, GL_NICEST); // Per-Pixel Fog Calculation
glFogi(GL_FOG_COORDINATE_SOURCE_EXT, GL_FOG_COORDINATE_EXT); // Set Fog Based On Vertice Coordinates
camz = -19.0f; // Set Camera Z Position To -19.0f
return TRUE; // Return TRUE (Initialization Successful)
}
This section of code is called whenever a user exits the program. There is nothing to clean up so this section of code remains empty!
void Deinitialize (void) // Any User DeInitialization Goes Here
{
}
Here is where we handle the keyboard interaction. Like all previous tutorials, we check to see if the ESC key is pressed. If it is, the application is terminated.
If the F1 key is pressed, we toggle from fullscreen to windowed mode or from windowed mode to fullscreen.
The other two keys we check for are the up and down arrow keys. If the UP key is pressed and the value of camz is less than 14.0f we increase camz. This will move the hallway towards the viewer. If we went past 14.0f, we would go right through the back wall. We don't want this to happen :)
If the DOWN key is pressed and the value of camz is greater than -19.0f we decrease camz. This will move the hallway away from the viewer. If we went past -19.0f, the hallway would be too far into the screen and you would see the entrance to the hallway. Again... this wouldn't be good!
The value of camz is increased and decreased based on the number of milliseconds that have passed divided by 100.0f. This should force the program to run at the same speed on all types of processors.
void Update (DWORD milliseconds) // Perform Motion Updates Here
{
if (g_keys->keyDown [VK_ESCAPE]) // Is ESC Being Pressed?
TerminateApplication (g_window); // Terminate The Program
if (g_keys->keyDown [VK_F1]) // Is F1 Being Pressed?
ToggleFullscreen (g_window); // Toggle Fullscreen Mode
if (g_keys->keyDown [VK_UP] && camz<14.0f) // Is UP Arrow Being Pressed?
camz+=(float)(milliseconds)/100.0f; // Move Object Closer (Move Forwards Through Hallway)
if (g_keys->keyDown [VK_DOWN] && camz>-19.0f) // Is DOWN Arrow Being Pressed?
camz-=(float)(milliseconds)/100.0f; // Move Object Further (Move Backwards Through Hallway)
}
I'm sure you are dying to get the rendering, but we still have a few things to do before we draw the hallway. First off we need to clear the screen and the depth buffer. We reset the modelview matrix and translate into the screen based on the value stored in camz.
By increasing or decreasing the value of camz, the hallway will move closer or further away from the viewer. This will give the impression that the viewer is moving forward or backward through the hall... Simple but effective!
void Draw (void)
{
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity (); // Reset The Modelview Matrix
glTranslatef(0.0f, 0.0f, camz); // Move To Our Camera Z Position
The camera is positioned, so now it is time to render the first quad. This will be the BACK wall (the wall at the end of the hallway).
We want this wall to be in the thickest of the fog. If you look at the Init section of code, you will see that GL_FOG_END is the most dense section of fog... and it has a value of 1.0f.
Fog is applied the same way you apply texture coordinates. GL_FOG_END has the most fog, and has a value of 1.0f. So for our first vertex we pass glFogCoordfEXT a value of 1.0f. This will give the bottom (-2.5f on the Y-Axis) left (-2.5f on the X-Axis) vertex of the furthest wall (wall you will see at the end of the tunnel) the most dense fog (1.0f).
We assign 1.0f to the other 3 glFogCoordfEXT vertices as well. We want all 4 points (way in the distance) to be in dense fog.
Hopefully by now you understand texture mapping coordinates and glVertex coordinates. I should not have to explain these :)
glBegin(GL_QUADS); // Back Wall(0.0f, 1.0f); glVertex3f(-2.5f, 2.5f,-15.0f);
glEnd();
So we have a texture mapped back wall in very dense fog. Now we will draw the floor. It's a little different, but once you spot the pattern it will all become very clear to you!
Like all quads, the floor has 4 points. The Y value is always -2.5f. The left vertex is -2.5f, the right vertex is 2.5f, and the floor runs from -15.0f on the Z-Axis to +15.0f on the Z-Axis.
We want the section of floor way in the distance to have the most fog. So once again we give these glFogCoordfEXT vertices a value of 1.0f. Notice that any vertex drawn at -15.0f has a glFogCoordfEXT value of 1.0f...?
The sections of floor closest the viewer (+15.0f) will have the least amount of fog. GL_START_FOG is the least dense fog and has a value of 0.0f. So for these points we will pass a value of 0.0f to glFogCoordfEXT.
What you should see if you run the program is really dense fog on the floor near the back and light fog up close. The fog is not dense enough to fill the entire hallway. It actually dies out halfway down the hall, even though GL_START_FOG is 0.0f.
glBegin(GL_QUADS); // Floor roof is drawn exactly the same way the floor was drawn, with the only difference being that the roof is drawn on the Y-Axis at 2.5f.
glBegin(GL_QUADS); // Roof right wall is also drawn the same way. Except the X-Axis is always 2.5f. The furthest points on the Z-Axis are still set to glFogCoordfEXT(1.0f) and the closest points on the z-Axis are still set to glFogCoordfEXT(0.0f).
glBegin(GL_QUADS); // Right();
Hopefully by now you understand how things work. Anything in the distance will have more fog, and should be set to a value of 1.0f. Anything up close should be set to 0.0f.
Of course you can always play around with the GL_FOG_START and GL_FOG_END values to see how they affect the scene.
The effect does not look convincing if you swap the start and end values. The illusion is created by the back wall being completely orange! The effect looks best in dead ends or tight corners where the player can not face away from the fog!
This type of fog effect works best when the player can see into the room that has fog, but can not actually go into the room. A good example would be a deep pit covered with some type of grate. The player could look down into the pit, but would not be able to get in to the pit.
glBegin(GL_QUADS); // Left();
glFlush (); // Flush The GL Rendering Pipeline
}
I really hope you enjoy this tutorial. It was created over a period of 3 days... 4 hours a day. Most of the time was spent writing the text you are currently reading.
I wanted to make a 3D room with fog in one corner of the room. Unfortunately, I had very little time to work on the code.
Even though the hallway in this tutorial is very simple, the actual fog effect is quite cool! Modifying the code for use in projects of your own should take very little effort.
This tutorials shows you how to use the glFogCoordfEXT. It's fast, looks great and is very easy to use! It is important to note that this is just ONE of many different ways to create volumetric fog. The same effect can be created using blending, particles, masks, etc.
As always... if you find mistakes in this tutorial let me know. If you think you can describe a section of code better (my wording is not always clear), send me an email!
A lot of the text was written late at night, and although it's not an excuse, my typing gets a little worse as I get more sleepy. Please email me if you find duplicate words, spelling mistakes, etc.
The original idea for this tutorial was sent to me a long time ago. Since then I have lost the original email. To the person that sent this idea in... Thank Rob Dieffenbach ) * DOWNLOAD Linux/SDL Code For This Lesson. ( Conversion by Anthony Whitehead ) * DOWNLOAD Python Code For This Lesson. ( Conversion by Brian Leair ) * DOWNLOAD Visual Studio .NET Code For This Lesson. ( Conversion by Joachim Rohde )
< Lesson 40Lesson 42 >
NeHe™ and NeHe Productions™ are trademarks of GameDev.net, LLC
OpenGL® is a registered trademark of Silicon Graphics Inc. | http://nehe.gamedev.net/tutorial/volumetric_fog__ipicture_image_loading/18007/ | CC-MAIN-2015-35 | refinedweb | 5,701 | 65.93 |
ReactJs patterns - A study based on google search
ReactJS is among the most used JavaScript libraries; according to GitHub it is one of the most starred repositories. Given its popularity, it is expected that the community around it develops techniques, guides and tutorials around patterns.
Inspired by the systematic literature review in [1], which collects a broad overview of software engineering success and failure factors, this post aims to answer the following questions:
- Q1. What is the most popular ReactJS pattern?
- Q2. What are the themes that appears related to the patterns?
In contrast to the scientific method used by those authors (their research mined scientific databases, namely IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Scopus and Engineering Village), this post is a collection based on Google search.
Besides answering those questions, this post also aims to be a reference for deciding which ReactJS pattern to learn first, and a guide that gives beginners a picture of the patterns developers talk about the most.
Finding reactjs patterns articles
Google blocks crawling of its search results, so the approach taken was Google Custom Search (). The Custom Search () API allows developers to query Google search programmatically; it behaves like the regular search, the difference being that it can be called from code.
The first interaction with the API showed a particular behavior of this service: as pointed out by [2], the total number of results reported is not the real number, it is an approximation.
The search string used to explore the first research question was “reactjs patterns”. Searching for this string on google.com, the first page shows 904 results. Executing the same query through the programmable search gives 161 results, and mining the results through the API gives 96. This behavior is expected, as pointed out by [2].
To mine the results, a JavaScript script was developed; it executes recursively over the pages that Google returns until the last page of results for a given search string. In total 96 links were found and saved to an XLS file [3] for the analysis described in the next section. The code used to search and generate the XLS file is available on GitHub.
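The recursion over result pages can be sketched as follows. This is a simplified model: `fetchPage` stands in for the real Custom Search HTTP call, and the page shape (`items`, `nextStart`) is hypothetical.

```javascript
// Collect all results by recursing until the API reports no further page.
// fetchPage is a stand-in for the real Custom Search request.
function collectAll(fetchPage, start = 1, acc = []) {
  const { items, nextStart } = fetchPage(start);
  const all = acc.concat(items);
  // Recurse while the API hands back a cursor for the next page.
  return nextStart ? collectAll(fetchPage, nextStart, all) : all;
}

// Fake two-page API for illustration.
const fakeApi = start =>
  start === 1
    ? { items: ['a', 'b'], nextStart: 3 }
    : { items: ['c'], nextStart: null };

console.log(collectAll(fakeApi)); // ['a', 'b', 'c']
```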
Mining results
A manual analysis of the results was made in the following steps:
- Removed results that do not discuss ReactJS patterns (i.e., that neither explain nor list a pattern).
- Classify each search result into a category.
For 1, each item in the search results was manually reviewed against the exclusion criteria. As a result, 14 items were removed [3] (the removed content can be found in the tab “mined”).
For 2, the following categories were created to group the search results: book, course, meetup, post, question, slides, video. The categories were generated based on a manual review of each item.
The category `post` is the most popular, followed by `course` and `book`; `question` and `video` have the same number of items (2), and in the last two spots are `meetup` and `slides`.
Analyzing the results
This section dives into the results found and presents a brief explanation of each item in the list. It does not cover the categories `question`, `meetup`, `video` and `slides`, as they present fewer than three items.
The post category is the most popular, with 66 items. As a first exploration the posts were read, and each of them was manually assigned a pattern name based on its content. Most posts have more than one pattern associated with them; for example, the first post in the list covered 22 patterns.
This process was repeated for each post in the list. Once the classification was done, the word cloud [4] visualization was generated (the process is described next).
The raw classification was processed using the following steps:
- Similar words were normalized: the words “component” and “components” were normalized to the singular form, resulting in “component”
- Words with capital letter were normalized to use the lower case.
- Different words used with a common meaning were normalized; for example, higher order component is commonly abbreviated as HoC, so the shorter version was used.
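The three steps can be sketched as a small normalization function. The alias table below is a hypothetical fragment, not the full mapping used in the study.

```javascript
// Hypothetical fragment of the alias table (step 3).
const ALIASES = { 'higher order component': 'hoc' };

function normalize(label) {
  let word = label.toLowerCase();   // step 2: lower-case
  word = ALIASES[word] || word;     // step 3: map long forms to their alias
  word = word.replace(/s$/, '');    // step 1: strip trailing "s" (singular form)
  return word;
}

console.log(normalize('Components'));             // 'component'
console.log(normalize('Higher Order Component')); // 'hoc'
```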
The word cloud depicts the most cited patterns in the dataset. Considering single words only, the most cited pattern is component, followed by props.
Discussion
This section dives deeper into the results depicted in the previous section; the first subsection focuses on Q1 and the second on Q2.
Q1
For Q1, based on the pure pattern classification, the results point to “component” as the most popular pattern, followed by “props”. These are the foundation of ReactJS: everything is a component, and communication happens via props. A first hypothesis for this result is that those two patterns are repeated to explain more complex ones; for developers starting with ReactJS, components and props are the first principles to understand.
In addition, complex patterns such as hooks and higher-order components appear surrounding the component and props patterns. Those patterns require from the developer a previous understanding of props and components, as they are more complex, which in turn can lead to less content related to them.
Q2
For Q2 the surrounding themes are the focus; for example, terms like best practices and design were found related to ReactJS patterns.
As such, [5], entitled “Clean Code vs. Dirty Code: React Best Practices”, enumerates 14 sections about best practices. Those sections cover code standards, JavaScript features, naming variables, and also industry standards to follow when coding, such as DRY.
Related work
This section dives into the content of each mined post and groups them by the patterns found. The same post may appear in different sections, as its content might explore more than one pattern at a time.
Container component
[6] uses Jason Bonta’s definition of the container component: the container component fetches data, and then renders its corresponding sub-component. That’s it. [7] and [8] agree with [6] and add that the container component is the place to connect to redux.
The following code (adapted from [6]) depicts the container component, here written as a function component with hooks. [9] also offers a code example.
```jsx
import React, { useState, useEffect } from 'react';

// Presentational component: only knows how to display the comments it receives.
const CommentList = ({ comments }) => (
  <ul>
    {comments.map((comment, i) => (
      <li key={i}>
        {comment.body}-{comment.author}
      </li>
    ))}
  </ul>
);

// Container component: fetches the data, then renders the sub-component.
function CommentListContainer() {
  const [comments, setComments] = useState([]);

  useEffect(() => {
    fetch('/my-comments.json')
      .then(response => response.json())
      .then(comments => setComments(comments));
  }, []);

  return <CommentList comments={comments} />;
}
```
[10] elaborates on the container component with hooks, alongside a to-do list app that implements the pattern. The definition followed is the same as in [6]; there seems to be a consensus that Jason Bonta defined the container component pattern, and developers point to him. [11] rates the container component as a pattern that provides separation of concerns, reusability and testability.
[12] expands on the idea that the container component is aware of redux, the same argument made by [6], but in this case the author credits “the internet” with agreeing on that. [13] has a somewhat less clean definition of the container component; the text also mixes HoC with presentational components and other patterns.
Conditional rendering
[14] gives his opinion on the conditional rendering pattern and states that it is a natural step for developers to separate logic from the actual return code (the example given is a ternary if). As an alternative to the ternary, the author suggests using the JSX alternative with `&&`. [15] expands on the JSX alternatives for conditional rendering.
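The difference between the two styles can be modeled in plain JavaScript, since React renders whatever value a JSX expression produces. This is a sketch; `badge` is a hypothetical element modeled here as a string.

```javascript
// A hypothetical element, modeled as a string so the sketch runs without React.
const badge = '<span>unread</span>';

// With a boolean condition, a false left side renders nothing:
console.log(false && badge); // false — React renders nothing

// Gotcha: a numeric condition short-circuits to 0, and React DOES render
// the text "0". Coerce to a boolean comparison first:
const count = 0;
console.log(count && badge);     // 0 — would show up in the UI
console.log(count > 0 && badge); // false — renders nothing
```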
Compound Components
[16] and [17] share the same definition: compound components are components that are distinct but do not work without each other; they only make sense together. Furthermore, [16] mentions the HTML `select` and `option` elements as an example of compound components. [18] uses the compound component pattern to build a radio group component, in which the user can select only one of the available options.
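A minimal sketch of the idea, modeling components as plain functions that return strings so it runs without React. The `RadioGroup`/`Option` names are hypothetical; a real implementation would share the selection through context or `React.Children`.

```javascript
// Option is useless on its own: it needs the selection state its parent holds.
const Option = ({ value, selected }) =>
  `[${selected === value ? 'x' : ' '}] ${value}`;

// RadioGroup wires the shared state into each child, so the two components
// only make sense together — like <select> and <option>.
const RadioGroup = ({ selected, options }) =>
  options.map(value => Option({ value, selected })).join('\n');

console.log(RadioGroup({ selected: 'b', options: ['a', 'b'] }));
// [ ] a
// [x] b
```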
Decorated component
The decorated component is a pattern that does not appear often. [19] demonstrates the decorated component as a way to decouple components, or even to enhance component features; Redux uses the decorated component pattern to enhance the component props.
The decorated pattern can therefore be used to decouple the component that fetches data from the one that actually uses the data. [13] states that the decorated pattern is the same as HoC.
Higher-Order-Component
[14] and [13] agree on the definition that HoCs are, in a sense, decorators. [16], though, argues that the HoC name is a misnomer, based on his own thoughts.
A HoC takes a ReactJS component, enhances it, and then returns the new enhanced component to be used [17] [9] [20] [21] [22] [23].
[24] has a different, wider definition: the HoC receives a component as an argument and returns another component. Often the HoC enhances the functionality of the component it receives and returns that same component with added behavior; under [24]’s definition it would be possible to receive A and return B.
Therefore, [25] states that the HoC is responsible for fetching data and then propagating it to child components. This definition adds to the previously agreed one, but does not restrict the pattern to data fetching only. [26] adds that the HoC is used to fetch data and also to split data fetching from data presentation. For the first time, the HoC is compared to the container pattern rather than the decorated component pattern.
[27] explores the HoC in the new era of React hooks. [28] has no definition statement, though the content is accompanied by HoC code examples.
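The shared definition above can be sketched in a few lines. Components are modeled as plain functions returning strings so the sketch runs without React; `withData` is a hypothetical HoC name.

```javascript
// A React component is conceptually a function from props to UI.
const CommentList = ({ comments }) => comments.join(', ');

// withData is a hypothetical HoC: it takes a component and returns a new,
// enhanced component that receives the data as a prop.
function withData(Component, data) {
  return props => Component({ ...props, comments: data });
}

const EnhancedList = withData(CommentList, ['first!', 'nice post']);
console.log(EnhancedList({})); // first!, nice post
```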
Render Props / Render callback
The render props (or render callback) pattern is used to render a given component based on a function callback [6] [17] [7] [20] [11]; or, as [29] states, instead of rendering the children (a common technique in ReactJS), this pattern renders the prop. Even though [29] states that the render prop renders the prop instead of the children, [27], [26] and [30] describe the render prop using the children explicitly.
Furthermore, [14] states that the render props and HoC patterns are interchangeable. The term render callback is clearer about the intention of the pattern, but the term render props got more adoption from the community [31]. On the other hand, [9] states that there is discussion about the effectiveness of the pattern.
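The pattern can be sketched as follows, again modeling components as functions returning strings so it runs without React; with JSX the render prop would return elements instead.

```javascript
// The component owns the logic but delegates WHAT to render to a function prop.
const MouseTracker = ({ render }) => {
  const position = { x: 10, y: 20 }; // state a real component would track
  return render(position);           // the caller decides how to render it
};

// Two different renderings of the same shared logic:
const asText = MouseTracker({ render: ({ x, y }) => `cursor at ${x},${y}` });
const asJson = MouseTracker({ render: p => JSON.stringify(p) });

console.log(asText); // cursor at 10,20
console.log(asJson); // {"x":10,"y":20}
```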
Provider / Context
The provider pattern is used in libraries such as react-redux and react-router. The idea behind the provider pattern is to avoid passing props to each component in the tree; instead, the pattern makes the prop available to the whole tree under the provider, regardless of depth [16]. The provider pattern is an answer to the problem called props-drilling [33] [34].
The provider pattern is often related to ReactJS context [14] [33], as this is the feature that comes out of the box with ReactJS.
[35] says that if a component needs to share props more than two levels deep, the recommended approach is to use ReactJS context. [36] uses the provider/context pattern to implement a translation engine.
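The idea can be modeled without React as follows. This is a deliberate simplification: React's real `createContext`/`useContext` are subscription-based and behave differently in detail.

```javascript
// Toy model of context: a Provider sets a value, and any descendant can
// read it directly — no props threaded through the components in between.
function createContext(defaultValue) {
  let current = defaultValue;
  return {
    Provider: (value, renderChildren) => {
      current = value;
      return renderChildren();
    },
    read: () => current, // stand-in for useContext
  };
}

const ThemeContext = createContext('light');

// Deeply nested component reads the context directly (no props drilling).
const Button = () => `button[theme=${ThemeContext.read()}]`;
const Toolbar = () => Button();

console.log(ThemeContext.Provider('dark', () => Toolbar()));
// button[theme=dark]
```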
Hooks
Hooks are the highlight feature introduced in ReactJS 16.8, mainly focused on sharing logic between components without class syntax; instead, a functional approach is the preferred way [37][38]. [39] gives an introduction followed by ReactJS hooks best practices; the material is recommended for developers of any level who want to understand hooks, and can serve as a refresher for those who already know them. [40] also states that hooks replace the mixins pattern for sharing code.
[27] depicts the difference between class components and functional components with hooks, and the benefits of using them. [25] and [26] compare fetching data with the class approach versus the functional approach with hooks, while [10] refactors the class-style container pattern using hooks.
[41] integrates the facade design pattern into a JavaScript implementation, and then combines the pattern with ReactJS hooks. [42] builds a to-do app using hooks, storing custom hooks in a folder called models, and tries to relate this structure to the MVC pattern.
[43] and [44] focus on state management. [43] dives into mocking the redux implementation using hooks. The approach is interesting for learning purposes; however, both authors’ implementations are simplifications of the more complex implementation of redux. Implementing state management by hand brings benefits but also a drawback: redux, while more complex, is a standard for state management with a wide community that has created different libraries to work with it (e.g. redux-offline).
Finally, [45] converts the BLoC pattern to be used with ReactJS. The BLoC pattern was created to share code between Flutter and AngularDart.
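The logic-sharing idea behind hooks can be modeled with a minimal `useState` implementation. This is a toy model for illustration only, not how React implements hooks.

```javascript
// Toy useState: state lives in slots indexed by call order, reset per render.
function makeUseState() {
  const slots = [];
  let cursor = 0;
  function useState(initial) {
    const i = cursor++;
    if (!(i in slots)) slots[i] = initial;
    const setState = v => { slots[i] = v; };
    return [slots[i], setState];
  }
  useState.reset = () => { cursor = 0; }; // React does this between renders
  return useState;
}

const useState = makeUseState();

// Custom hook: reusable stateful logic, no classes, no HoC wrapper.
function useCounter(step) {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + step) };
}

// "Render" twice, as React would around a state update.
let counter = useCounter(5);
counter.increment();
useState.reset();
counter = useCounter(5);
console.log(counter.count); // 5
```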
Redux
The redux pattern is an implementation of FLUX, the state management pattern created by Facebook to handle global state [52][53]. [54] provides an introduction to redux and its main components, namely: Action, Reducer and Store.
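The three building blocks named in [54] can be sketched as follows. This is a toy reduction of redux's real `createStore` (no subscriptions or middleware).

```javascript
// Action: a plain object describing what happened.
const increment = { type: 'INCREMENT' };

// Reducer: pure function (state, action) -> next state.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    default: return state;
  }
}

// Store: holds the state and dispatches actions through the reducer.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: action => { state = reducer(state, action); },
  };
}

const store = createStore(counter);
store.dispatch(increment);
console.log(store.getState()); // 1
```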
Conclusion
ReactJS is among the most used UI libraries; as a result it has a lot of content created by the community and by Facebook (the company behind ReactJS). The proposed study showed the most used ReactJS patterns as well as the themes that surround them. As it turns out, the most popular patterns are components and props, which are the ReactJS foundation, not advanced patterns for experienced ReactJS developers. On the other hand, patterns like higher-order components, hooks and the container component require some previous knowledge to be used effectively, and those patterns that require more experience are also the less popular ones.
References
- [1]D. A. Tamburri et al., “Success and Failure in Software Engineering: a Followup Systematic Literature Review,” 2020.
- [2]D. (G. Employee), “How can I get 2500 results in one request in Google Search API?,” 2020 [Online]. Available at:. [Accessed: 25-Jun-2020]
- [3]M. Marabesi, “ReactJs patterns - A study based on google search,” 2020 [Online]. Available at:. [Accessed: 05-Aug-2020]
- [4]A. Mueller, “word_cloud,” 2020 [Online]. Available at:. [Accessed: 05-Aug-2020]
- [5]D. West, “Clean Code vs. Dirty Code: React Best Practices - American ...,” 2017 [Online]. Available at:. [Accessed: 16-Nov-2017]
- [6]M. Chan, “React Patterns on GitHub,” 2020 [Online]. Available at:. [Accessed: 11-Jul-2020]
- [7]G. Matheus, “React Component Patterns - Level Up Coding,” 2017 [Online]. Available at:. [Accessed: 26-Oct-2017]
- [8]C. Yick, “Simple React Design Patterns: Container/View - serendipidata,” 2019 [Online]. Available at:. [Accessed: 15-Feb-2019]
- [9]L. Reis, “Simple React Patterns,” 2017 [Online]. Available at:. [Accessed: 08-Oct-2017]
- [10]S. Recio, “Implementing the Container Pattern using React Hooks,” 2019 [Online]. Available at:. [Accessed: 31-Dec-2019]
- [11]B. Williams, “Introduction to React Design Patterns | DrupalCon,” 2018 [Online]. Available at:. [Accessed: 11-Apr-2018]
- [12]S. DeBenedetto, “The React + Redux Container Pattern,” 2016 [Online]. Available at:. [Accessed: 16-Nov-2016]
- [13]B. Kulbida, “2019 ReactJS Best Practices,” 2019 [Online]. Available at:. [Accessed: 09-Mar-2019]
- [14]A. Moldovan, “Evolving Patterns in React,” 2018 [Online]. Available at:. [Accessed: 04-Feb-2018]
- [15]C. Rippon, “React Conditional Rendering Patterns | Building SPAs,” 2018 [Online]. Available at:. [Accessed: 17-Apr-2018]
- [16]K. C. Dodds, “Advanced React Component Patterns,” 2017 [Online]. Available at:. [Accessed: 05-Dec-2017]
- [17]Y. Aabed, “Five Ways to Advanced React Patterns - DEV,” 2019 [Online]. Available at:. [Accessed: 02-Apr-2019]
- [18]T. Deekens, “Seven patterns by example: The many ways to type=’radio’ in React,” 2017 [Online]. Available at:. [Accessed: 20-Dec-2017]
- [19]Zemuldo, “Zemuldo Blog - Patterns For Testable React Components,” 2019 [Online]. Available at:. [Accessed: 30-Dec-2019]
- [20]L. Maldonado, “Advanced Patterns in React,” 2019 [Online]. Available at:. [Accessed: 09-Apr-2019]
- [21]Krasimir, “React.js in patterns,” 2016 [Online]. Available at:. [Accessed: 20-Jul-2016]
- [22]J. Franklin, “Higher-order Components: A React Application Design Pattern ...,” 2017 [Online]. Available at:. [Accessed: 08-Sep-2017]
- [23]R. O. B. I. N. WIERUCH, “React Component Types: A complete Overview - RWieruch,” 2019 [Online]. Available at:. [Accessed: 12-Mar-2019]
- [24]T. Konrády, “React patterns | React and Ramda patterns,” 2018 [Online]. Available at:. [Accessed: 27-Sep-2020]
- [25]G. Sayfan, “Patterns for data fetching in React - LogRocket Blog,” 2019 [Online]. Available at:. [Accessed: 24-Mar-2019]
- [26]A. Mansour, “5 React Data-Fetching Patterns - Nordschool,” 2019 [Online]. Available at:. [Accessed: 23-Oct-2019]
- [27]N. Kulas, “How advanced React patterns changed with hooks | Sunscrapers,” 2019 [Online]. Available at:. [Accessed: 01-Jul-2019]
Appendix
This section presents extra resources created during the development of this content.
Mined content
In the Title column, the original title from the source was preserved, and on the right a short abstract was provided to illustrate what the source content is about.
Web almanac css
The Web Almanac [65] is a research project that focuses on the web features used in the wild.
Library sample chapters
Beginning XML
Well-formed XML (2)
Every Start-tag Must Have an End-tag
One of the problems with HTML is that end-tags are often optional - leaving them out was allowed, and sometimes even encouraged. The browser then has to guess where each element ends, and different browsers do it differently, leading to incompatibilities.
For now, just remember that in XML the end-tag is required, and has to exactly match the start-tag.
Tags Can Not Overlap
Because XML is strictly hierarchical, tags can not overlap: an element must be closed before any element that contains it is closed. For example, to make "formatted text" bold and "text, but" italic, the <EM> element is closed and then reopened, so that it never crosses the <STRONG> element:

<P>Some <STRONG>formatted <EM>text</EM></STRONG><EM>, but</EM> ...</P>

XML also has rules about names - for elements, and for the other constructs you'll meet later. Consider the element <name>John</name>:
- Names can start with letters (including non-Latin characters) or the "_" character, but not numbers or other punctuation characters.
- After the first character, numbers are allowed, as are the characters "-" and ".".
- Names can't contain spaces.
- Names can't contain the ":" character. Strictly speaking, this character is allowed, but the XML specification says that it's "reserved". You should avoid using it in your documents, unless you are working with namespaces (which are covered in Chapter 8).
- Names can't start with the letters "xml", in uppercase, lowercase, or mixed - you can't start a name with "xml", "XML", "XmL", or any other combination.
- There can't be a space after the opening "<" character; the name of the element must come immediately after it. However, there can be space before the closing ">"character, if desired.
Here are some examples of valid names:
<first.name> <résumé>
And here are some examples of invalid names:
<xml-tag>
which starts with xml,
<123>
which starts with a number,
<fun=xml>
because the "=" sign is illegal, and:
<my tag>
which contains a space.
Remember these rules for element names - they also apply to naming other things in XML.
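These naming rules aren't just conventions - any conforming XML parser enforces them. As a quick illustration (using Python's standard library, which is not part of this chapter), a parser accepts the valid names above and rejects the invalid ones:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the parser accepts the document."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# names with letters, "." and non-Latin characters are fine
assert is_well_formed("<first.name>John</first.name>")
assert is_well_formed("<résumé>text</résumé>")

# a name starting with a digit, or containing "=" or a space, is rejected
assert not is_well_formed("<123>x</123>")
assert not is_well_formed("<fun=xml>x</fun=xml>")
assert not is_well_formed("<my tag>x</my tag>")
```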
Analyzing Roam Research Attribute Tables with Python
By: Brad Lindblad
Image Source: roamresearch.com
Roam Research is a revolutionary note-taking tool for networked thought. As a data scientist and reader, I take many notes on many different topics. I’ve tried many different note-taking apps from Evernote and OneNote to the bare bones of Simple Note. I’ve always bumped into a problem with these tools, which is that the design and format of the tool restricted how I took notes.
For example, with Evernote you are forced to more or less put an idea on a single card, with little freedom in relating that note to other notes in your Evernote corpus. The search function allowed you to find specific words, but each idea is and was indefinitely separated from the others.
The folders and tagging provide some semblance of structure, but eventually your work or life will change such that you’ll want to rearrange that structure. Not fun.
The human mind doesn’t work that way. Studies have shown that the neurological structure of the brain forms an incomprehensibly complex network that modern machine learning barely intimates.
Why Roam?
Roam allows you to sputter ideas without having to worry about which folder to place them in, or if an idea could fit into multiple folders. You don’t have to pick just one location for that idea to languish in. For instance, if you have a nice python code snippet for working in Databricks that you’d like to save for later, you don’t have to worry about whether to place it in your databricks snippet folder or your python snippet folder; you simply tag the code with both and it will appear in both. An idea can live in two places concurrently, no sweat.
This is huge for allowing you to simply take the note and trust the system to organize for you. Roam has allowed me to consolidate the following activities and functions under a single tool: – Code snippets and cheats – Commonplace book – Bible study – Short-form writing (like this article) – Data science lab notebook – Goal setting and tracking – And, for the purposes of this article, habit tracking.
Habit tracking in Roam
I wanted to track the arthritis in one of my hands along with a few other variables to look for any indication of a relationship. There are many tools and apps that are made for this very thing, but my goal is to do as much as I can in Roam.
We use a feature in Roam called attribute tables to accomplish this. This article on Reddit does a great job of explaining how to set up attribute tables, so check that out if you’ve never made one before. The output of a habit tracking table looks like this:
If you were to look under the hood at the table, you would find that it looks an awful lot like an html table. The Pandas python library has a nice little function for parsing simple html tables called
read_html(), and don’t ya know it parses this Roam attribute table real slick.
The python script
The best way I found to parse the table was to download the actual html page with your browsers download function; in Brave it’s as simple as right-clicking on the page, hitting Save as > Complete Webpage, and saving to a location. I like to have a daily page open which usually just has one table. If you have multiple tables, you will have to modify the last line of the script a bit to select it.
After that, this little python script reads your Roam attribute table into a pandas dataframe:
import pandas as pd

# download daily page html to local
FILE = "/home/brad/Desktop/July 12th, 2021.html"
html = pd.read_html(FILE)
df = html[0]
and gives us:
Now you can do any analysis on your habits that you wish, all within the comforts of Roam and python.
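For instance, a first pass at the arthritis question might look like the sketch below. The column names here are invented for illustration - they are not from the original table:

```python
import pandas as pd

# hypothetical habit table, shaped like what read_html would return;
# the attribute column names are made up for this sketch
df = pd.DataFrame({
    'Date': ['2021-07-10', '2021-07-11', '2021-07-12'],
    'Arthritis pain': [3, 2, 4],
    'Hours of sleep': [6, 8, 5],
})

# coerce the tracked attributes to numeric and look for a relationship
cols = ['Arthritis pain', 'Hours of sleep']
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
print(df[cols].corr())
```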
C <stdio.h> - fopen() Function
The C <stdio.h> fopen() function opens a file indicated by filename and returns a file stream associated with that file. The mode is used to determine the file access mode.
The file stream can be disassociated from the file by calling the fclose() or freopen() functions. All opened files are automatically closed on normal program termination.
Syntax
FILE * fopen ( const char * filename, const char * mode );
Parameters

filename - C string containing the name of the file to be opened.

mode - C string describing the requested file access mode. The standard modes are "r" (read; the file must exist), "w" (write; create a new file or truncate an existing one), "a" (append; create the file if it does not exist), and their update variants "r+", "w+" and "a+", which open the file for both reading and writing.
- The above discussed mode specifiers open a file as a text file. To open a file as a binary file, a "b" character has to be included in the mode string. This additional "b" character can either be appended at the end of the string ("rb", "wb", "ab" OR "r+b", "w+b", "a+b") or be inserted between the letter and the "+" sign for the mixed modes ("rb+", "wb+", "ab+").
- File access mode flag "x" can optionally be appended to "w" or "w+" specifiers. This flag forces the function to fail if the file exists, instead of overwriting it (since C2011).
- The behavior is undefined if the mode is not one of the strings listed above. Some implementations define additional supported modes.
Return Value
On success, returns a pointer to a FILE object that can be used to identify the stream on future operations. On error, returns a null pointer. On most library implementations, the errno variable is also set to a system-specific error code on failure.
Example:
Lets assume that we have a file called test.txt. This file contains following content:
This is a test file. It contains dummy content.
In the example below, file is opened using fopen() to read the content of the file.
#include <stdio.h>

int main (){
  //open the file in read mode
  FILE *pFile = fopen("test.txt", "r");

  //make sure the file was actually opened before using it
  if (pFile == NULL) {
    perror("fopen");
    return 1;
  }

  //first character in the file
  int c = getc(pFile);

  //if first character is not EOF, reads and writes
  //characters from the file until EOF is reached
  while (c != EOF) {
    putchar(c);
    c = getc(pFile);
  }

  //close the file
  fclose(pFile);
  return 0;
}
The output of the above code will be:
This is a test file. It contains dummy content.
#include <sys/param.h>
#include <sys/mount.h>

The vfs_scanopt() function scans the option's value, using the given format, into the specified variable arguments. The value must be a string (i.e., NUL terminated).
The vfs_copyopt() function creates a copy of the option's value. The len argument must match the length of the option's value exactly (i.e., a larger buffer will still cause vfs_copyopt() to fail with EINVAL).
The vfs_setopt() and vfs_setopt_part() functions copy new data into the option's value. In vfs_setopt(), the len argument must match the length of the option's value exactly (i.e., a larger buffer will still cause vfs_copyout() to fail with EINVAL).
The vfs_setopts() function copies a new string into the option's value. The string, including the NUL byte, must be no longer than the option's length.

The vfs_copyopt() and vfs_setopt() functions return 0 if the copy was successful, EINVAL if the option was found but the lengths did not match, and ENOENT if the option was not found.
The vfs_setopts() function returns 0 if the copy was successful, EINVAL if the option was found but the string was too long, and ENOENT if the option was not found.
C++ Plus(+) Operator Overloading Program
Hello Everyone!
In this tutorial, we will learn how to demonstrate the concept of
+ Operator Overloading, in the C++ programming language.
To understand the concept of Operator Overloading in CPP, we will recommend you to visit here: C++ Operator Overloading, where we have explained it from scratch.
Code:
#include <iostream>
using namespace std;

//defining the class Cuboid to demonstrate the concept of Plus Operator Overloading in CPP
class Cuboid
{
    //Declaring class member variables as public to access from outside the class
public:
    double length;  // Length of Cuboid
    double breadth; // Breadth of Cuboid
    double height;  // Height of Cuboid

public:
    double getVolume(void)
    {
        return length * breadth * height;
    }

    void setLength(double l)
    {
        length = l;
    }

    void setBreadth(double b)
    {
        breadth = b;
    }

    void setHeight(double h)
    {
        height = h;
    }

    // Overload + operator to add two Cuboid objects with each other.
    Cuboid operator + (const Cuboid & c)
    {
        Cuboid cuboid;
        cuboid.length = this -> length + c.length;
        cuboid.breadth = this -> breadth + c.breadth;
        cuboid.height = this -> height + c.height;
        return cuboid;
    }
};

//Defining the main method to access the members of the class
int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to demonstrate the Plus Operator Overloading, in CPP ===== \n\n";

    //Declaring the Class objects to access the class members
    Cuboid c1;
    Cuboid c2;
    Cuboid c3;

    //To store the volume of the Cuboid
    double volume = 0.0;

    // Setting the length, breadth and height for the first Cuboid object: c1
    c1.setLength(3.0);
    c1.setBreadth(4.0);
    c1.setHeight(5.0);

    // Setting the length, breadth and height for the second Cuboid object: c2
    c2.setLength(2.0);
    c2.setBreadth(5.0);
    c2.setHeight(8.0);

    // Finding the Volume of the first Cuboid: c1
    cout << "Calling the getVolume() method to find the volume of Cuboid c1\n";
    volume = c1.getVolume();
    cout << "Volume of the Cuboid c1 is : " << volume << "\n\n\n";

    // Finding the Volume of the second Cuboid: c2
    cout << "Calling the getVolume() method to find the volume of Cuboid c2\n";
    volume = c2.getVolume();
    cout << "Volume of the Cuboid c2 is : " << volume << "\n\n\n";

    // Adding the two Cuboid objects c1 and c2 to form the third object c3:
    c3 = c1 + c2;

    // Printing the dimensions of the third Cuboid: c3
    cout << "Length of the Cuboid c3 is : " << c3.length << endl;
    cout << "Breadth of the Cuboid c3 is : " << c3.breadth << endl;
    cout << "Height of the Cuboid c3 is : " << c3.height << endl;

    // Finding the Volume of the third Cuboid: c3
    cout << "\n\nCalling the getVolume() method to find the volume of Cuboid c3\n";
    volume = c3.getVolume();
    cout << "Volume of the Cuboid c3 is : " << volume << endl;

    cout << "\n\n\n";
    return 0;
}
Output:

Welcome to Studytonight :-)


 ===== Program to demonstrate the Plus Operator Overloading, in CPP =====

Calling the getVolume() method to find the volume of Cuboid c1
Volume of the Cuboid c1 is : 60


Calling the getVolume() method to find the volume of Cuboid c2
Volume of the Cuboid c2 is : 80


Length of the Cuboid c3 is : 5
Breadth of the Cuboid c3 is : 9
Height of the Cuboid c3 is : 13


Calling the getVolume() method to find the volume of Cuboid c3
Volume of the Cuboid c3 is : 585
We hope that this post helped you develop a better understanding of the concept of Operator Overloading in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-plus-operator-overloading-program | CC-MAIN-2021-04 | refinedweb | 475 | 65.66 |
8.10. Deep Recurrent Neural Networks¶
Up to now, we only discussed recurrent neural networks with a single unidirectional hidden layer. In it the specific functional form of how latent variables and observations interact was rather arbitrary. This isn’t a big problem as long as we have enough flexibility to model different types of interactions. With a single layer, however, this can be quite challenging. In the case of the perceptron we fixed this problem by adding more layers. Within RNNs this is a bit more tricky, since we first need to decide how and where to add extra nonlinearity. Our discussion below focuses primarily on LSTMs but it applies to other sequence models, too.
- We could add extra nonlinearity to the gating mechanisms. That is, instead of using a single perceptron we could use multiple layers. This leaves the mechanism of the LSTM unchanged. Instead it makes it more sophisticated. This would make sense if we were led to believe that the LSTM mechanism describes some form of universal truth of how latent variable autoregressive models work.
- We could stack multiple layers of LSTMs on top of each other. This results in a mechanism that is more flexible, due to the combination of several simple layers. In particular, data might be relevant at different levels of the stack. For instance, we might want to keep high-level data about financial market conditions (bear or bull market) available at a high level, whereas at a lower level we only record shorter-term temporal dynamics.
Beyond all this abstract discussion it is probably easiest to understand the family of models we are interested in by reviewing the diagram below. It describes a deep recurrent neural network with \(L\) hidden layers. Each hidden state is continuously passed to the next time step of the current layer and the next layer of the current time step.
Fig. 8.14 Architecture of a deep recurrent neural network.
8.10.1. Functional Dependencies¶
At time step \(t\) we assume that we have a minibatch \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs: \(d\)). The hidden state of hidden layer \(\ell\) (\(\ell=1,\ldots,L\)) is \(\mathbf{H}_t^{(\ell)} \in \mathbb{R}^{n \times h}\) (number of hidden units: \(h\)), the output layer variable is \(\mathbf{O}_t \in \mathbb{R}^{n \times q}\) (number of outputs: \(q\)) and \(f_\ell\) is the hidden layer activation function for layer \(\ell\). We compute the hidden state of layer \(1\) as before, using \(\mathbf{X}_t\) as input. For all subsequent layers the hidden state of the previous layer is used in its place:

\[\mathbf{H}_t^{(1)} = f_1\left(\mathbf{X}_t, \mathbf{H}_{t-1}^{(1)}\right), \qquad \mathbf{H}_t^{(\ell)} = f_\ell\left(\mathbf{H}_t^{(\ell-1)}, \mathbf{H}_{t-1}^{(\ell)}\right) \text{ for } \ell > 1.\]
Finally, the output of the output layer is only based on the hidden state of hidden layer \(L\). We use the output function \(g\) to address this:

\[\mathbf{O}_t = g\left(\mathbf{H}_t^{(L)}\right).\]
Just as with multilayer perceptrons, the number of hidden layers \(L\) and the number of hidden units \(h\) are hyperparameters. In particular, we can pick a regular RNN, a GRU or an LSTM to implement the model.
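The stacked hidden-state update described above can be sketched in plain NumPy, using tanh for every \(f_\ell\); this is an illustration, not the book's implementation:

```python
import numpy as np

def deep_rnn_step(X, H_prev, W_x, W_h, b):
    """One time step of an L-layer tanh RNN.

    X: (n, d) minibatch.  H_prev: list of L hidden states, each (n, h).
    W_x[0] maps the inputs; for l > 0, W_x[l] maps the hidden state of
    the layer below.  W_h[l] and b[l] are the recurrent weights and bias.
    """
    H_new = []
    inp = X
    for l in range(len(H_prev)):
        # hidden state of layer l depends on the input from below
        # and on this layer's own previous hidden state
        H = np.tanh(inp @ W_x[l] + H_prev[l] @ W_h[l] + b[l])
        H_new.append(H)
        inp = H  # the layer above consumes this layer's state
    return H_new

# tiny smoke test: n=2 examples, d=3 inputs, h=4 hidden units, L=2 layers
rng = np.random.default_rng(0)
n, d, h, L = 2, 3, 4, 2
W_x = [rng.normal(size=(d, h)), rng.normal(size=(h, h))]
W_h = [rng.normal(size=(h, h)) for _ in range(L)]
b = [np.zeros(h) for _ in range(L)]
H = [np.zeros((n, h)) for _ in range(L)]
H = deep_rnn_step(rng.normal(size=(n, d)), H, W_x, W_h, b)
print([Hl.shape for Hl in H])  # [(2, 4), (2, 4)]
```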
8.10.2. Concise Implementation¶
Fortunately many of the logistical details required to implement multiple layers of an RNN are readily available in Gluon. To keep things simple we only illustrate the implementation using such built-in functionality. The code is very similar to the one we used previously for LSTMs. In fact, the only difference is that we specify the number of layers explicitly rather than picking the default of a single layer. Let’s begin by importing the appropriate modules and data.
In [1]:
import sys
sys.path.insert(0, '..')

import d2l
from mxnet import nd
from mxnet.gluon import rnn

corpus_indices, vocab = d2l.load_data_time_machine()
The architectural decisions (parameters, etc.) are very similar to those of previous sections. We pick the same number of inputs and outputs as we have distinct tokens, i.e. vocab_size. The number of hidden units is still 256 and we retain a learning rate of 100. The only difference is that we now select a nontrivial number of layers num_layers = 2. Since the model is somewhat slower to train we use 3000 iterations.
In [2]:
num_inputs, num_hiddens, num_layers, num_outputs = len(vocab), 256, 2, len(vocab)
ctx = d2l.try_gpu()
num_epochs, num_steps, batch_size, lr, clipping_theta = 500, 35, 32, 5, 1
prefixes = ['traveller', 'time traveller']
8.10.3. Training¶
The actual invocation logic is identical to before and we re-use train_and_predict_rnn_gluon. The only difference is that we now instantiate two layers with LSTMs. This rather more complex architecture and the large number of epochs slow down training considerably.
In [3]:
lstm_layer = rnn.LSTM(hidden_size=num_hiddens, num_layers=num_layers)

epoch 125, perplexity 9.005890, time 8.14 sec
epoch 250, perplexity 1.046033, time 7.10 sec
 - traveller smiled. 'are you sure we can move freely in space
 - time traveller smiled. 'are you sure we can move freely in space
epoch 375, perplexity 1.024058, time 6.68 sec
epoch 500, perplexity 1.036108, time 6.71 sec
 - traveller smiled. 'are you sure we can move freely in space
 - time traveller smiled. 'are you sure we can move freely in space
8.10.4. Summary¶
- In deep recurrent neural networks, hidden state information is passed to the next time step of the current layer and the next layer of the current time step.
- There exist many different flavors of deep RNNs, such as LSTMs, GRUs or regular RNNs. Conveniently these models are all available as parts of the rnn module in Gluon.
- Initialization of the models requires care. Overall, deep RNNs require considerable amount of work (learning rate, clipping, etc) to ensure proper convergence.
8.10.5. Exercises¶
- Try to implement a two-layer RNN from scratch using the “single layer implementation” we discussed in an earlier section.
- Replace the LSTM by a GRU and compare the accuracy.
- Increase the training data to include multiple books. How low can you go on the perplexity scale?
- Would you want to combine sources of different authors when modeling text? Why is this a good idea? What could go wrong? | http://d2l.ai/chapter_recurrent-neural-networks/deep-rnn.html | CC-MAIN-2019-18 | refinedweb | 1,004 | 58.08 |
I wanted to ask, how can i make a square rotate based on event.mouse.x and event.mouse.y?
The square is al_draw_rectangle(x-15,y-15,x+15,y+15,al_map_rgb(0,0,255),3.0);
Is there any way to do that? The square cannot move with mouse, only with keyboard.
Your question is a bit incomplete... let me use my A5 mind reading addon and guess what you actually want.
You can do that using transformations:
ALLEGRO_TRANSFORM trans;
al_identity_transform(&trans);
al_translate_transform(&trans, -cx, -cy); //cx,cy are set to the mouse coordinates in the mouse event
al_rotate_transform(&trans, theta);
al_translate_transform(&trans, cx, cy);
al_use_transform(&trans);
al_draw_rectangle(x - 15, y - 15, x + 15, y + 15, al_map_rgb(0, 0, 255), 3.0);
/* Reset transform */
al_identity_transform(&trans);
al_use_transform(&trans);
"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]
Your code works perfect. But i need to rotate the square. Here is what i want to do: The square must be at the center and wont move. If i move the mouse, it wont move again. It will only rotate based on the mouse e.mouse.x and e.mouse.y
Sorry, what? You need to be clearer.
What I gather so far:

1) The square is centered on the center of the screen.
2?) You want to rotate the square according to the direction from the center of the square to the mouse position?
double theta_radians = atan2(event.mouse.y - square_center_y , event.mouse.x - square_center_x);
Then use a transform to center on the center of the square, then add in a rotate transform to rotate by theta_radians, then draw the square.

I used the AMCerasoli video of networking, because it is clearer. You see the player rotates when he moves the mouse? I want to make the same thing, just with a square.
Also, the code of siege worked, but the center was not the square, but the top left corner of the screen. How to change the center?
How about this:
ALLEGRO_TRANSFORM trans;
al_identity_transform(&trans);
al_rotate_transform(&trans, theta);
al_translate_transform(&trans, x, y);
al_use_transform(&trans);
al_draw_rectangle(-15, -15, 15, 15, al_map_rgb(0, 0, 255), 3.0);
/* Reset transform */
al_identity_transform(&trans);
al_use_transform(&trans);
You'd compute the theta using the formula Edgar posted.
Same thing again, it just moved a little
Who in the world understands this from the manual
"Apply a translation to a transformation."
Also, i tried scale transform and it didnt work at all with the mouse.
atan2 is wrong in my configuration after including math.h which wasnt even mentioned to use:

double theta_radians = atan2(e.mouse.y - (x + 15), e.mouse.x - (y - 15));
Same thing again, it just moved a little
Then you're doing something wrong, as this example shows that it works fine:
atan2 doesnt work. I dont know why. I am using visual studio 2012. I copy pasted your code and it doesnt work on my machine only because of atan2
Are you linking with the math library?

I added math.lib in the Linker -> General -> Additional Library Dependencies
Used this path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include;%(AdditionalLibraryDirectories)
Then i tried this other one: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include
Still the same
theta = atan2(my - y, mx - x);
I have the same problem.

Try this:
theta = atan2((float)(my - y), (float)(mx - x));
Theta in this case must be float
I am using visual studio 2012
o.O Visual Studio for Windows 8? Is there Intellisense? I'm using 2008, because 2010 haven't got Intellisense :/
And I think, that this is better way to do rotate something:
Theta in this case must be float
There is no integer version of atan2, that's just nonsensical.
What does this code print?
#include <iostream>
#include <typeinfo>
#include <math.h>
int main()
{
std::cout << typeid(atan2((int)0, int(0))).name() << std::endl;
return 0;
}
And I think, that this is better way to do rotate something:
My way is better for primitives and in fact works for all drawables, while your way only works for bitmaps.
Oh... but there is double, long double and float atan2 version!
In fact, you can create bitmap from all drawable primitives and then rotate it
While I compile your code, there is error:
error C2668: 'atan2' : ambiguous call to overloaded function
c:\program files\microsoft visual studio 9.0\vc\include\math.h(547): could be 'long double atan2(long double,long double)'
c:\program files\microsoft visual studio 9.0\vc\include\math.h(499): or 'float atan2(float,float)'
c:\program files\microsoft visual studio 9.0\vc\include\math.h(110): or 'double atan2(double,double)'
while trying to match the argument list '(int, int)'
I must do cast to float.
Sorry, but my previous words are logical
In fact, you can create bitmap from all drawable primitives and then rotate it
Inefficient and unnecessary.
While I compile your code, there is error:
Well a compile error makes sense. It compiling without error and then not functioning does not. Either way, the original code is C, not C++.
But if you using bitmaps in this case, application works better. Processor has less to count. At least I think so...
But if you using bitmaps in this case, application works better
If you only need to rotate a single pre-existing bitmap, then yeah... no need for transformations.
Well, I'm currently not using transformations at all...
@codestepper WOW THANKS! It works perfect! Can you make some notes in english to the program so i can understand it better? I mean especially the transformation, because i dont understand it well.
Visual Studio 2012 is the same as 2010, with some changes and it has intellisense. I have had 2010 and it had intellisense, i dont know what version you had. I have worked on both for Allegro 5. I recently moved a game from VC9 to VC10 and it was very easy and fast.
I dont get it why it is a bad thing to do it like that, because it works 100%
Also, if anyone can make a pacman which opens its mouth using the al_draw_pieslice that would be great as i was trying to do it today without any success.
@cerasoli What is your method of doing it?
Hmm... i think i don't make any bugs in comments If yes, sorry
Sorry for variable names - some of them don't reflect their roles
So, code for pac-man primitives version with comments:
Wow, you are amazing! Thank you very much man, it worked perfect
I forgot about this:
redraw = false;
Paste this before:
al_clear_to_color(al_map_rgb_f(0, 0, 0));
And it will run better | https://www.allegro.cc/forums/thread/610900/964159 | CC-MAIN-2018-30 | refinedweb | 1,128 | 65.32 |
Tk_RestackWindow - Change a window's position in the stacking order
#include <tk.h>
int
Tk_RestackWindow(tkwin, aboveBelow, other)
Tk_Window tkwin (in) Token for window to restack.
int aboveBelow (in)     Indicates new position of tkwin relative
                        to other; must be Above or Below.

Tk_Window other (in)    Tkwin will be repositioned just above or
                        below this window. Must be a sibling of
                        tkwin or a descendant of a sibling. If
                        NULL then tkwin is restacked above or
                        below all siblings.
_________________________________________________________________
Tk_RestackWindow changes the stacking order of window relative to its siblings. If other is specified as NULL then window is repositioned at the top or bottom of its stacking order, depending on whether aboveBelow is Above or Below.
above, below, obscure, stacking order
Tk Tk_RestackWindow(3) | http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/Restack.3.html | crawl-003 | refinedweb | 124 | 57.98 |
There are times when you have written your code but, when you execute it, it might not run. These situations occur when the input is inappropriate, when you try to open a file with a wrong path, or when you try to divide a number by zero. Due to some error or incorrect command, the output will not be displayed. This is because of errors and exceptions, which are a part of the Python programming language. Learn about such concepts and gain further knowledge by joining a Python Programming Course.
What is Exception Handling?
Python raises exceptions when it encounters errors during execution. A Python Exception is basically a construct that signals any important event, such as a run-time error.
Exception Handling is the process of responding to exceptions during computation, which often interrupt the usual flow of executing a program. It can be performed both at the software level, as part of the program, and at the hardware level, using built-in CPU mechanisms.
Why is Exception Handling Important?
Although exceptions might be irritating when they occur, they play an essential role in high level languages by acting as a friend to the user.
An error at the time of execution might lead to two things— either your program will die or will display a blue screen of death. On the other hand, exceptions act as communication tools. It allows the program to answer the questions — what, why and how something goes wrong and then terminates the program in a delicate manner.
In simple words, exception handling protects against uncontrollable program failures and increases the potency and efficiency of your code. If you want to master yourself in programming, the knowledge of exceptions and how to handle them is very crucial, especially in Python.
What are the Errors and Exceptions in Python?
Python doesn’t like errors and exceptions and displays its dissatisfaction by terminating the program abruptly.
There are basically two types of errors in the Python language-
- Syntax Error.
- Errors occurring at run-time, or Exceptions.
Syntax Errors
Syntax Errors, also known as parsing errors, occur when the parser identifies an incorrect statement. In simple words, syntax error occurs when the proper structure or syntax of the programming language is not followed.
An example of a syntax error:
>>> print( 1 / 0 ))
  File "", line 1
    print( 1 / 0 ))
                  ^
SyntaxError: invalid syntax
Exceptions
Exceptions occur during run-time. Python raises an exception when your code has a correct syntax but it encounters a run-time issue which it is not able to handle.
There are a number of built-in exceptions defined in Python which are used in specific situations. Some of the built-in exceptions are ZeroDivisionError, TypeError, ValueError, IndexError, KeyError, FileNotFoundError and ImportError.
There is another type of built-in exceptions called warnings. They are usually issued in situations where the user needs to be alerted of some condition. A warning does not raise an exception and does not terminate the program; it only informs the user, and execution continues.
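A quick illustration (not from the article) that a warning only alerts the user while execution continues:

```python
import warnings

def old_api():
    # alert the caller, but keep running
    warnings.warn("old_api() is deprecated", DeprecationWarning)
    return 42

# the warning is printed (or filtered), yet the call still completes
result = old_api()
assert result == 42
print("program continues running")
```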
What is a Python KeyError?
Before getting into KeyError, you must know the meaning of dictionary and mapping in Python.
Dictionary (dict) is an unordered collection of objects which deals with data type key. They are Python’s implementation of data structures and are also known as associative arrays. They comprise key-value pairs, in which each pair maps the key to its associated value.
Dictionary is basically a data structure that maps one set of values into another and is the most common mapping in Python.
Exception hierarchy of KeyError:
->BaseException
->Exception
->LookupError
->KeyError
A Python KeyError is raised when you try to access an invalid key in a dictionary. In simple terms, when you see a KeyError, it denotes that the key you were looking for could not be found.
An example of KeyError:
>>> prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25}
>>> prices['Eraser']
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    prices['Eraser']
KeyError: 'Eraser'
Here, dictionary prices is declared with the prices of three items. The KeyError is raised when the item ‘Eraser’ is being accessed which is not present in prices.
Whenever an exception is raised in Python, it is done using traceback, as you can see in the example code above. It tells why an exception is raised and what caused it.
Let’s execute the same Python code from a file. This time, you will be asked to give the name of the item whose price you want to know:
# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') print(f'The price of {item} is {prices[item]}')
You will get a traceback again but you’ll also get the information about the line from which the KeyError is raised:
Get price of: Eraser Traceback (most recent call last): File "prices.py", line 5, in print(f'The price of {item} is {prices[item]}') KeyError: 'Eraser'
The traceback in the example above provides the following information:
- A KeyError was raised.
- The key ‘Eraser’ was not found.
- The line number which raised the exception along with that line.
Where else will you find a Python KeyError?
Although most of the time, a KeyError is raised because of an invalid key in a Python dictionary or a dictionary subclass, you may also find it in other places in the Python Standard Library, such as in a zipfile. However, it denotes the same semantic meaning of the Python KeyError, which is not finding the requested key.
An example of such:
>>> from zipfile import ZipFile >>> my_zip_file = ZipFile('Avengers.zip') >>> my_zip_file.getinfo('Batman')
Traceback (most recent call last): File "<pyshell#1>", line 1, in File "myzip.py", line 1119, in getinfo 'There is no item named %r in the archive' % name) KeyError: "There is no item named 'Batman' in the archive"</pyshell#1>
In this example, the zipfile.ZipFile class is used to derive information about a ZIP archive ‘Batman’ using the getinfo() function.
Here, the traceback indicates that the problem is not in your code but in the zipfile code, by showing the line which caused the problem. The exception raised here is not because of a LookUpError but rather due to the zipfile.ZipFile.getinfo()function call.
When do you need to raise a Python KeyError?
In Python Programming, it might be sensible at times to forcefully raise exceptions in your own code. You can usually raise an exception using the raise keyword and by calling the KeyError exception:
>>> raise KeyError('Batman')
Here, ‘Batman’ acts as the missing key. However, in most cases, you should provide more information about the missing key so that your next developer has a clear understanding of the problem.
Conditions to raise a Python KeyError in your code:
- It should match the generic meaning behind the exception.
- A message should be displayed about the missing key along with the missing key which needs to be accessed.
How to Handle a Python KeyError?
The main motive of handling a Python KeyError is to stop unexpected KeyError exceptions to be raised. There are a number of number of ways of handling a KeyError exception.
Using get()
The get()is useful in cases where the exception is raised due to a failed dictionary LookupError. It returns either the specified key value or a default value.
# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') price = prices.get(item) if price: print(f'The price of {item} is {prices[item]}') else: print(f'The price of {item} is not known')
This time, you’ll not get a KeyError because the get() uses a better and safer method to retrieve the price and if not found, the default value is displayed:
Get price of: Eraser
The price of Eraser is not known
In this example, the variable price will either have the price of the item in the dictionary or the default value ( which is None by default ).
In the example above, when the key ‘Eraser’ is not found in the dictionary, the get() returns None by default rather than raising a KeyError. You can also give another default value as a second argument by calling get():
price = prices.get(item,0)
If the key is not found, it will return 0 instead of None.
Checking for Keys
In some situations, the get() might not provide the correct information. If it returns None, it will mean that the key was not found or the value of the key in Python Dictionary is actually None, which might not be true in some cases. In such situations, you need to determine the existence of a key in the dictionary.
You can use the if and in operator to handle such cases. It checks whether a key is present in the mapping or not by returning a boolean (True or False) value:
dict = dictionary() for i in range(50): key = i % 10 if key in dict: dict[key] += 1 else: dict[key] = 1
In this case, we do not check what the value of the missing key is but rather we check whether the key is in the dictionary or not. This is a special way of handling an exception which is used rarely.
This technique of handling exceptions is known as Look Before You Leap(LBYL).
Using try-except
The try-except block is one of the best possible ways to handle the KeyError exceptions. It is also useful where the get() and the if and in operators are not supported.
Let’s apply the try-except block on our earlier retrieval of prices code:
# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known')
Here, in this example there are two cases— normal case and a backup case. try block corresponds to the normal case and except block to the backup case. If the normal case doesn’t print the name of the item and the price and raises a KeyError, the backup case prints a different statement or a message.
Using try-except-else
This is another way of handling exceptions. The try-except-else has three blocks— try block, except block and else block.
The else condition in a try-except statement is useful when the try condition doesn’t raise an exception. However, it must follow all the except conditions.
Let us take our previous price retrieval code to illustrate try-except-else:
# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of:') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known') else: print(f'There is no error in the statement')
First, we access an existing key in the try-except block. If the Keyerror is not raised, there are no errors. Then the else condition is executed and the statement is displayed on the screen.
Using finally
The try statement in Python can have an optional finally condition. It is used to define clean-up actions and is always executed irrespective of anything. It is generally used to release external sources.
An example to show finally:
# prices.py prices = { 'Pen' : 10, 'Pencil' : 5, 'Notebook' : 25} item = input('Get price of: ') try: print(f'The price of {item} is {prices[item]}') except KeyError: print(f'The price of {item} is not known') finally: print(f'The finally statement is executed')
Remember, the finally statement will always be executed whether an exception has occurred or not.
How to raise Custom Exceptions in Python?
Python comprises of a number of built-in exceptions which you can use in your program. However, when you’re developing your own packages, you might need to create your own custom exceptions to increase the flexibility of your program.
You can create a custom Python exception using the pre-defined class Exception:
def square(x): if x<=0 or y<=0: raise Exception('x should be positive') return x * x
Here, the function square calculates the square of a number. We raise an Exception if either the input number is negative or not.
Disadvantages of Exception Handling
Though exception handling is very useful in catching and handling exceptions in Python, it also has several disadvantages. Some of which are as follows—
- It can trap only run-time errors.
- When you use try-except, the program will lose some performance and slow down a bit.
- The size of the code increases when you use multiple try, except, else and finally blocks.
- The concept of try-catch might be a little difficult to understand for beginners.
- It is useful only in exceptional error cases.
Other than these disadvantages, understanding the concept of Exception Handling can ease your career as a programmer in the world of Python.
Conclusion
Since you have now become quite an expert in handling KeyError exceptions, you can easily debug actual errors and reduce the number of bugs in your code.
Let us sum up what we’ve learnt in the article so far:
- Exception Handling and its importance.
- Different types of exceptions.
- Python KeyError
- Finding and raising a Python KeyError.
- Handling Python KeyError.
- Custom Exceptions.
- Demerits of Exception Handling.
Exceptions are considered as the tools of communication that guard you from potential damage. If you’re clear in the understanding of exceptions, they will act as a guide to your solutions.
So next time when you see a Python KeyError raised, you’ll find all the information about the location of your error and how to handle that. You will easily know how to access the key using the safer get() or the more general try-except-else blocks to control your program’s flow more efficiently and predictably.
However, if you wish to know more about errors and exceptions, you can look into the full documentation of Python Standard Library’s Errors and Exceptions and Exception Handling or register for the Python Certification Course at KnowledgeHut. You can also learn more about Python Programming in this complete Python Tutorial.
Source: knowledgehut | https://learningactors.com/python-keyerror-exceptions-and-how-to-handle-them/ | CC-MAIN-2020-10 | refinedweb | 2,342 | 61.36 |
Use dotted characters in Windows
I have strings, which contains following dotted characters: "áéöüóőúű". I try to use paste() method to put it to field.
I found solution ALT key codes, but I don't know where these characters located in string.
I found paste(unicode()) method, but it isn't present in newest SNAPSHOT version.
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Solved by:
- Manfred Hampl
- Solved:
- 2019-01-22
- Last query:
- 2019-01-22
- Last reply:
- 2019-01-21
I tried it, but got: [error] NoMethodError ( (NoMethodError) undefined method `unicd' for main:Object )
since I cannot see, how you run your script (a main is not needed for SikuliX scripts), I cannot tell you what your problem is.
ucode() and unicd() are defined in Sikuli.py, which is auto-imported if you run the script in the SikuliX-style.
In other cases you might need
from sikuli import *
I use SikuliXIDE 1.1.4-SNAPSHOT.
this does what it should:
App.focus(
wait(2)
paste(unicd(
in Notepad window there is then:
áéöüóőúű
I see, but there are more than one inut fields in that application. Input field is already in focus in my script.
Please check image: https:/
You probably need certain require/include statements.
I suggest you look at https:/
@Manfred Hampl: I added require 'Lib/sikulix' and include Sikulix, but got same error message.
Sorry, didnot See that it is Ruby.
In IDE no additional require/inclusive needed.
... but the mentioned functions are only available in Python scripting.
You have to find out how to encode an utf8 string
@RaiMan : Bad news :( Could you offer some ways please, how could I implement it?
Hi Akos,
I have one idea that can help you.
Lets imagine you have 2 keyboard layout installed. I have English and Bulgarian. The hot key for switching is windows key + space key.
The idea is to switch between languages and type in English with key that will produce your desired output. Here is example in Cyrillic but will work for any language
# staring with English
switchApp(
# now switch to your second language
type(Key.SPACE, KeyModifier.WIN)
# type Latin letters that correspond to your second language letters
type("zdrawej")
Here is the video demonstrating above approach - https:/
Note in the bottom-right corner the language indicator
Hope this helps
@TestMechanic : I can't use type(). because computer behaves weird after command is executed. Unable to take a single click after that.
Where do these strings come from?
What happens if you try the simple paste("áéöüóőúű") statement?
https:/
Maybe one of these works:
paste("
paste("
or something similar.
@Akos,
So you cannot execute even type('test') in pure English?
@TestMechanic: I can, that is ok.
@Manfred Hampl: Strings are pathes and come from a txt file. If I paste those dotted characters I get non-readable characters in field. I already tried these practices, but has same or very similar effect. Path is pasted in a modified format.
Which encoding does the text file with the path names really have?
Which operating system and program was used for creating this text file?
What happens if you try opening this file in Microsoft Word, and play around with the different encoding possibilities, which one do you have to select that you get a correct preview in the import window?
Maybe you need other encodings, perhaps
paste(myPathnam
@Manfred Hampl: Text file has ANSI encoding. It was created with command prompt, it is output of a command. I opened it in Notepad++, but all other encodings contain not readable characters, if I change it.
I already tried that code, but doesn't worked for me, I got unreadable characters in form.
For an attempt to find the real encoding scheme, can you provide the values of "á".bytes, "é".bytes, "ö".bytes, ...
@Manfred Hampl: Thanks a lot for your help, that line: paste(myPathnam
Thanks Manfred Hampl, that solved my question.
it is
paste(ucode("some text with unicode characters"))
or
paste(unicd("some text with unicode characters")) | https://answers.launchpad.net/sikuli/+question/677799 | CC-MAIN-2019-09 | refinedweb | 685 | 65.42 |
In-Memory Virtual Filesystems
I really like his use case for having such a filesystem:
Having a virtual memory system to plug into would be fantastic for unit testing. You could create files to your heart’s content and the file access would be fast while also saving you from all the annoying issues with deleting temporary files, Windows file locking, etc.
Makes sense to me. In the comments to his blog entry, a few existing solutions are mentioned, in particular, Commons VFS and the JBoss Microcontainer project. Also, another use case for a filesystem of this kind can be found there:
It may also be useful when running on operating systems that don’t have /tmp or equivalent on swap.
Another solution, much less known in this context, is provided by two standalone JARs that are used within the NetBeans Platform (full disclosure: I work there). These two JARs, org-openide-filesystems.jar and org-openide-util.jar, can simply be dropped in your classpath and then... you have access to an in-memory virtual filesystem. Those two JARs are all that it takes. In other words, there are no dependencies of any kind on any part of the NetBeans Platform. These two JARs can simply be copied/pasted from the NetBeans Platform (i.e., which is a folder within NetBeans IDE) to your own classpath.
The filesystem that you then have is hierarchical. Basically, imagine an XML file in memory and you've got the idea. You can write folders and files into this filesystem and then you can ascribe attributes to those files. These can then be used as part of your unit tests, exactly as described above by Alex. Here's a small example:
package demo;
import java.io.IOException;
import junit.framework.TestCase;
import org.junit.Before;
import org.junit.Test;
import org.openide.filesystems.FileObject;
import org.openide.filesystems.FileSystem;
import org.openide.filesystems.FileUtil;
import org.openide.util.Exceptions;
public class FsJFrameTest extends TestCase {
FileSystem fs = FileUtil.createMemoryFileSystem();
FileObject root = fs.getRoot();
@Before
@Override
public void setUp() {
try {
//Create a virtual folder:
FileObject testDataFolder = root.createFolder("TestData");
//Create a virtual file:
FileObject testData1 = testDataFolder.createData("testData1");
//Create three virtual attributes for the file:
testData1.setAttribute("name", "John");
testData1.setAttribute("age", 27);
testData1.setAttribute("employed", true);
//Create a second virtual file:
FileObject testData2 = testDataFolder.createData("testData2");
//Create three virtual attributes for the file:
testData2.setAttribute("name", "Jane");
testData2.setAttribute("age", 34);
testData2.setAttribute("employed", false);
} catch (IOException ex) {
Exceptions.printStackTrace(ex);
}
}
//This test will pass because all attributes match the test data.
@Test
public void testData1() {
FileObject testData1 = root.getFileObject("TestData/testData1");
assertEquals(testData1.getAttribute("name"), "John");
assertEquals(testData1.getAttribute("age"), 27);
assertEquals(testData1.getAttribute("employed"), true);
}
//This test will fail because age = 34 in the test data.
@Test
public void testData2() {
FileObject testData2 = root.getFileObject("TestData/testData2");
assertEquals(testData2.getAttribute("name"), "Jane");
assertEquals(testData2.getAttribute("age"), 33);
assertEquals(testData2.getAttribute("employed"), false);
}
}
Here you can see that even though you haven't written any actual physical files anywhere on disk and even though you haven't retrieved anything from actual physical files on disk, you do have the important thing: the test data that you need for your unit tests. Plus, that data can be organized as easily as if one were working with a physical hierarchical filesystem. The two JARs provide a lot more besides, such as access to ZIP/JAR archives, via filesystems provided by the selfsame JARs. Just get them from your NetBeans IDE distro and then remove the distro and use another IDE, if that's your preference. (By the way, a screencast of the NetBeans Platform's filesystem [together with a transcript] can be found here.) Looking forward to seeing an in-memory virtual filesystem of this kind available in the JDK, though.
- Login or register to post comments
- 5541 reads
- Printer-friendly version
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
Fuqiang Zhao replied on Wed, 2008/11/12 - 8:00pm
radek.jun replied on Thu, 2008/11/13 - 2:53am
Roman Pichlik replied on Thu, 2008/11/13 - 8:59am
Geertjan Wielenga replied on Thu, 2008/11/13 - 10:44am
in response to: rp117107
[quote=rp117107]+1 for in-memory file system. Maybe i missed something, but i don't see any addition value in using of Open IDE API. Usually you test some API depending on JDK abstraction as java.io.File, so classes from org.openide.filesystems do not help at all ;-). [/quote]
Classes from org.openide.filesystems give you an in-memory virtual filesystem.
kackbratze replied on Sat, 2008/11/15 - 1:28pm
in response to: geertjan
I think rp117107 is raising the same point that I would raise: that having an in-memory "file system" is all very well, but if it doesn't have the same interface (or a bridging interface available) to make it look like regular Java file access (and/or JSR203), it's not much help for code already written that writes/reads files to the file system in the usual Java way!
Aaron Digulla replied on Tue, 2008/11/25 - 8:32am
Michaelz replied on Tue, 2009/06/09 - 2:37pm?
thank you,
Dress Up Games
Michaelz replied on Wed, 2009/06/10 - 6:23am
Virtual File System is an interface providing a clearly defined link ... kernel's memory is the same for all File System implementations.
thank you,
Cheap Flights
Michaelz replied on Sun, 2009/06/14 - 6:59am
The results indicate that variable-size cache mechanisms work well when virtualmemory - and file-intensive programs are run in sequence; the cache is able to change in size in order to provide overall performance no worse than that provided by a small fixed-size cache.
cheers,
College Degrees
eugeneba replied on Mon, 2009/06/15 - 4:40am
When I started designing my new 3D Engine, I realized that I needed some kind of file system. Not only some file classes or so, but something I call a Virtual File System (VFS), my own file system that supports nice features like compression, encryption, fast access times, and so on.
thank you,
commercial mailboxes
eugeneba replied on Mon, 2009/06/15 - 3:34pm
A computer system having a kernel interface that provides a technique for creating memory descriptors that provides a single way of representing memory objects and provides a common interface to operations upon those objects.
great,
FL health insurance
jakslime replied on Wed, 2009/06/17 - 2:58am
cooljin replied on Wed, 2009/06/17 - 3:58am
eugeneba replied on Wed, 2009/06/17 - 5:40am
What's all that junk for? I would presume it's some sort of encrypted version of your file system, but I've no means of handling it so can't do anything with it...
cheers,
Cheesecake Recipes
knowledgebase replied on Wed, 2009/06/17 - 9:24am
You can take a disk file, format it as an ext2, ext3, or reiser filesystem, and then mount it, just like a physical drive. It's then possible to read and write files to this newly-mounted device.
regards,
knowledge base
jakslime replied on Thu, 2009/06/18 - 3:37am
knowledgebase replied on Fri, 2009/06/19 - 2:39pm
inmemvfs.tcl implements an in-memory virtual file system. This is
useful for creating a directory hierarchy in which to store small files
and mount other file systems.
regards,
Model Railroads
knowledgebase replied on Thu, 2009/06/25 - 12:14pm
I have a command-line executable which I need to run from Java on Windows XP. It uses files as input and output. But I want to avoid the overhead of file IO, so I thought of an in-memory RAM file system.
health insurance leads
dany123 replied on Fri, 2009/06/26 - 1:48pm
Like many technologies in the history of computing, virtual memory was not accepted without challenge. Before it could be implemented in mainstream operating systems, many models, experiments, and theories had to be developed to overcome the numerous problems.
cheap concert tickets
jakslime replied on Sun, 2009/06/28 - 2:51am
knowledgebase replied on Mon, 2009/06/29 - 4:02am
This module makes extensive use of the functions in File::Spec to be portable, so it might trip you up if you are developing on a linux box and trying to play with '/foo' on a win32 box :)
gratis inserate
dany123 replied on Mon, 2009/06/29 - 9:36pm
Variable-size cache mechanisms work well when virtualmemory - and file-intensive programs are run in sequence; the cache is able to change in size in order to provide overall performance no worse than that provided by a small fixed-size cache.
hot tubs
dany123 replied on Thu, 2009/07/02 - 7:16am
Note that "virtual memory" is more than just "using disk space to extend physical memory size" - that is merely the extension of the memory hierarchy to include hard disk drives.
cobra insurance | http://java.dzone.com/news/in-memory-virtual-filesystems | crawl-002 | refinedweb | 1,506 | 55.44 |
Functions and classes for creating thick line geometries in a application using SceneKit.
Introduction
SCNLine is a class for drawing lines of a given thickness in 3D space.
In most situations in 3D projects the typically used method would use GL_LINES; which can be exercised in SceneKit with this primitive type. However glLineWidth is now depricated, which is initially why I made this class.
For more information on drawing primitive types in OpenGL or otherwise, please refer to this document, or if you want to see how it’s applied in SceneKit, check out my first Medium article about building primitive geometries.
Please feel free to use and contribute this library however you like.
I only ask that you let me know when you’re doing so; that way I can see some cool uses of it!
Import
Add to Podfile:
pod 'SCNLine', '~> 1.0'
Add to .swift file:
import SCNLine
Example
It’s as easy as this to a line geometry:
let lineGeometry = SCNGeometry.line(points: [ SCNVector3(0,-1,0), SCNVector3(0,-1,-1), SCNVector3(1,-1,-1) ], radius: 0.1).0
Or using the node directly SCNLineNode:
drawingNode = SCNLineNode( with: [SCNVector3(0,-1,0), SCNVector3(0,-1,-1), SCNVector3(1,-1,-1)], radius: 0.01, edges: 12, maxTurning: 12 ) drawingNode.add(point: SCNVector3(1,-2,-2))
The latter is recommended if you want to update the line at a later time by adding a point to it.
This will draw a line of radius 10cm from below the origin, forwards and then to the right in an ARKit setup.
The y value is set to -1 just as an example that assumes the origin of your scene graph is about 1m above the ground.
Other parameters that can be passed into SCNGeometry.path:
While in the examples below it shows that this can be used for drawing apps I would not recommend this class for that in its current state, because the current class regathers vertices from the very beginning of the line right to the end, which is very inefficient as most will remain the same.
Here’s some basic examples of what you can do with this Pod:
Latest podspec
{ "name": "SCNLine", "version": "1.0.3", "summary": "SCNLine lets you draw tubes.", "description": "draw a thick line in SceneKit", "homepage": "", "license": "MIT", "authors": "Max Cobb", "source": { "git": "", "tag": "1.0.3" }, "swift_versions": "5.0", "platforms": { "ios": "11.0" }, "source_files": "SCNLine/*.swift", "swift_version": "5.0" }
Fri, 24 May 2019 10:30:10 +0000 | https://tryexcept.com/articles/cocoapod/scnline | CC-MAIN-2020-45 | refinedweb | 416 | 65.42 |
Progressive loading for modern web applications via code splitting
Are your users tired of waiting when your app is loading and they close the tab? Let’s fix it with the progressive loading!
I will use webpack for bundling and React for a demonstration.
I am compiling and bundling all my javascript files (sometimes css and images too) into ONE HUGE bundle.js . I guess you are doing this too, aren’t you? It is a pretty common approach for making modern web applications.
But this approach has one (sometimes very important) drawback : first loading of your app may take too much time. As a web browser have to (1) load large file and (2) parse a lot of javascript code. And loading can take really much time if a user has bad internet connection. Also, your bundled file can have components that user will never see (e.g. user will never open some parts of your application).
Progressive Web Apps?
One of the good solutions for better UX is Progressive Web App . Google this term if you don’t know it yet. There are tons of good posts and videos about it. So Progressive Web has several core ideas, but right now I want to focus on Progressive Loading and implement it .
The idea of Progressive Loading is very simple:
- Make “initial load” as fast as possible.
- Load UI components only when they are required.
Let us assume we have React Application that draws some charts on a page:
Chart components are very simple:
These charts can be very heavy. Both of them have react-konva as a dependency (and konva framework as a dependency of react-konva ).
Please note that LineChart and BarChart are not visible on the first load. To see them a user needs to toggle checkbox:
So it is possible that the user will NEVER toggle that checkbox. And this is a very common situation in real world web application: when a user never opens some parts of the app (or open them later). But with a current approach, we have to bundle all components and all their dependencies into one file. In this example we have: root App component, React, Chart components, react-konva, konva.
280kb for bundle.js and 3.5 seconds for initial loading with a regular 3g connection.
Implementing Progressive Loading
How can we remove these chart components from bundle.js and load them later and draw something meaningful as fast as possible? Say hello to good old AMD (asynchronous module definition)! And webpack has good support for code splitting .
I suggest to define HOC (hight order component) that will load chart only when a component is mounted into DOM (with componentDidMount lifecycle callback). Let’s define LineChartAsync.js:
Then instead of
import LineChart from ‘./LineChart’;
We should write:
import LineChart from ‘./LineChartAsync’;
Let us see what we have after bundling:
We have bundle.js that includes a root App component and React.
1.bundle.jsand 2.bundle.js are generated by webpack and they include LineChart and BarChart . But, wait, why is the total sum bigger? 143kb+143kb+147kb = 433kb vs 280kb from previous approach. That is because dependencies of LineChart and BarChart are included TWICE ( react-konva and konva defined in both 1.bundle.js and 2.bundle.js ), we can avoid this with webpack.optimize.CommonsChunkPlugin :
new webpack.optimize.CommonsChunkPlugin({
children: true,
// (use all children of the chunk)
async: true,
// (create an async commons chunk)
}),
Now dependencies of LineChart and BarChart are moved in another file 3.bindle.js , total size is almost the same (289kb):
Now 1.75 seconds for initial loading. It is much better then 3.5 seconds.
Refactoring
To make the code better I would like to refactor LineChartAsync and BarChartAsync. First, let’s define basic AsyncComponent :
And BarChartAsync (and LineChartAsync ) can be rewritten into simpler component:
But we can improve Progressive Loading even more! When application is initially loaded we can schedule loading of additional component on background, so it is possible that they will be loaded before user toggled checkbox
And loader.js will be something like this:
Also, we can define components that will visible on the first screen, but in fact loaded asynchronously later and a user may see beautiful placeholder while a component is loading. Please note that placeholder is not for API call. It is exactly for loading component’s module (its definition and all its dependencies).
const renderPlaceholder = () =>
<div style={{textAlign: ‘center’}}>
<CircularProgress/>
</div>
export default (props) =>
<AsyncComponent
{…props}
loader={loader}
renderPlaceholder={renderPlaceholder}
/>
Conclusion
As a result of all improvements:
- Initial bundle.js has a smaller size. That means a user will see some working UI components faster.
- Additional components can be loaded asynchronously in the background.
- While a component is loading it can be replaced with some placeholder components .
- For exactly this approach Webpack is required . But you can use it not only with React, but with other frameworks too.
Take a look for full source and webpack configurations. | http://126kr.com/article/6vrt9lmvdru | CC-MAIN-2016-50 | refinedweb | 833 | 67.15 |
=begin
== Migration issues between 1.8.x and 1.9
This page lists backward-incompatible changes between the 1.8.x series of ruby and the current 1.9.

Unfortunately, this page is new and wasn't maintained during 1.9 development. Please contribute to help others migrate to 1.9.
=== Inheritance evaluation order
class Base
def self.inherited(klass)
klass.instance_variable_set(:@x, :base)
end
end
Derived = Class.new(Base) do
@x = :derived
end
#ruby 1.8: @x => :base
#ruby 1.9: @x => :derived
See #4273 for more details
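A runnable version of the example above. On 1.9 and later, the inherited hook fires when the class object is created, before the block passed to Class.new is evaluated, so the block's assignment wins:

```ruby
class Base
  def self.inherited(klass)
    klass.instance_variable_set(:@x, :base)
  end
end

Derived = Class.new(Base) do
  @x = :derived   # evaluated after the inherited hook on 1.9+
end

result = Derived.instance_variable_get(:@x)
puts result   # :derived on 1.9 (was :base on 1.8)
```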
=== switch case
case foo
when x: bar
end
becomes
case foo
when x then bar
end
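A minimal runnable sketch of the new form (the method and values are hypothetical):

```ruby
def describe(n)
  case n
  when 0 then "zero"            # `then` replaces the old `:` form
  when 1..9 then "single digit"
  else "big"
  end
end

puts describe(0)    # zero
puts describe(42)   # big
```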
=== Date#to_s doesn't rely on locales anymore
If you were relying on Date#to_s giving a certain output, it might change now.
Ruby 1.8 took the system's locale into account, so Date#to_s would change its appearance depending on it.
TODO: find commit ID ?
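On 1.9, Date#to_s always produces an ISO 8601 date, independent of the locale:

```ruby
require "date"

d = Date.new(2009, 1, 30)
puts d.to_s   # "2009-01-30" on 1.9, regardless of the system locale
```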
=== Encoding issues
Ruby 1.9 now associates an encoding with every string it handles.

Since encodings can't always be converted without losing information,
ruby might raise an exception on string concatenation, or if you try to
write to a file with a different encoding.

This generally just exposes an issue your application already had,
but that wasn't handled properly.
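A minimal sketch of such an unlocked issue: concatenating two strings with incompatible encodings raises on 1.9 (the string contents here are purely illustrative):

```ruby
utf8   = "héllo"                                   # UTF-8
latin1 = "caf\xE9".force_encoding("ISO-8859-1")    # "café" in ISO-8859-1

error = begin
  utf8 + latin1            # incompatible encodings => exception
rescue Encoding::CompatibilityError => e
  e
end
puts error.class           # Encoding::CompatibilityError

merged = utf8 + latin1.encode("UTF-8")   # convert explicitly first
puts merged                # héllocafé
```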
References:
=== Hash#select returns a Hash
ruby 1.8:
{:a=>1}.select{|*x| p x; true}
[:a, 1]
=> [ [:a, 1] ]
ruby 1.9:
{:a=>1}.select{|*x| p x; true}
[:a, 1]
=> {:a=>1}
=== Block parameters are block-local
If you relied on the fact that block parameters override parent variables, it might be an issue, but in general you're better off with the new behavior.
ruby 1.8:
x = 3
=> 3
y = proc {|x|}
=> #<Proc:0x0000000000000000@(irb):2>
y.call(4)
=> nil
x
=> 4
ruby 1.9:
x = 3
=> 3
y = proc{|x|}
=> #<Proc:0x0000010109cb60@(irb):14>
y.call(4)
=> nil
x
=> 3
=== String is not Enumerable anymore
Since String is no longer an array of bytes, you may want to iterate on bytes or characters.
ruby 1.9:
"foo".each
NoMethodError: undefined method `each' for "foo":String
"foo".each_char
=> #<Enumerator: "foo":each_char>
"foo".bytes
=> #<Enumerator: "foo":bytes>
=== Hash Iteration in Ruby 1.8 vs Ruby 1.9
This will affect all the postgresql users out there that work with Ruby ;/
=== Multilingualization (m17n) issues between Ruby 1.8 and Ruby 1.9
* String class
  * Ruby 1.8: Array of bytes + include Enumerable => String can be used like an Array
    * 'String#[0]' returns ascii code
  * Ruby 1.9: encoded characters (not Array, but the [] method is available)
    * String class behavior changes depending on the encoding
    * 'String#[0]' does not return ascii code (ex. use String#unpack('C*')[0], String.bytes.to_a[0])
    * does not include the Enumerable module (ex. String#each does not work)
    * we cannot use binary data (byte data) directly through the String class
    * we have to set 'ascii-8bit' as the encoding for binary data
* Magic comment
  * we have to write the encoding (magic comment) in each script
  * 'File.read' cannot read a binary file anymore (Ruby 1.8 can)
  * ex. instead of 's = File.read("input.dat")' use s = open("input.dat","rb"){|f| f.read}
* Careful points from 1.8 to 1.9 (encoding)
  * String#[0] does not return ascii code (this does not output an error, the behavior changes)
  * Put a magic comment (encoding) if there are non-ascii characters in the source code (in most cases, an error comes)
  * Pay attention to the encoding when using the String class, binary data, regular expressions
  * We cannot use Enumerable methods on a String instance
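The String#[0] change above can be seen directly (illustrative snippet, not from the original page):

```ruby
s = "ABC"

# Ruby 1.9: String#[0] returns a one-character String, not a byte value.
puts s[0].inspect        # "A"   (was 65 in Ruby 1.8)
puts s.bytes.to_a[0]     # 65
puts s.unpack('C*')[0]   # 65
```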
Also see:
=== subtle differences in dbi between Ruby 1.9.1 and Ruby 1.9.2
=== Gem compatibility
Not all rubygems are compatible with 1.9. There is an effort to list all working gems, but I'm not sure if it's officially supported.
=end | https://bugs.ruby-lang.org/projects/ruby-trunk/wiki/MigrationIssuesFrom18 | CC-MAIN-2018-51 | refinedweb | 653 | 67.55 |
SolidJS is a declarative UI library for building web applications, much like React, Angular, or Vue. It is built using brutally efficient fine-grained reactivity (no Virtual DOM), an ephemeral component model, and the full expressiveness of JavaScript (TypeScript) and JSX. While understandably no one is really in the market for a new JavaScript UI library, Solid is exceptional, a true standout amongst its competition. These are 5 reasons you should at least be aware of SolidJS.
1. It's the fastest...
JS Framework Benchmark Feb 2020
Bold claim, and sure some small experimental renderers can pull better numbers in certain cases but Solid is a benchmark king. It's been at the top of the JS Frameworks Benchmark for over a year now, neck and neck with the most optimally hand-written plain JavaScript implementation. This includes surpassing the fastest low-level Web Assembly implementations and this is with a declarative UI library.
And I'm sure at this point you are like what about ____. Go take a look, everyone's there. Solid outpaces Inferno, LitHTML, Svelte, Vue 3.0, React, Angular, WASM-bindgen, you name it. (EDIT: Raw imperative WASM is now too close to call)
Into Web Components? It's the fastest there as well according to All the Ways to Make a Web Component
Solid is now the fastest on the server as well. Using the Isomorphic UI Benchmark it has pulled out in front of the competition.
See How we wrote the fastest JavaScript UI Framework, Again
2. It's the smallest...
Realworld Demo Initial JS Bundle Size
While it won't win on size in toy demos and benchmarks where everything happens in a single Component (that honor probably goes to Svelte), when it comes to larger actual applications Solid has almost no overhead on Components (more like a VDOM library rather than a Reactive one). So it scales exceptionally well. For example, SolidJS is currently the smallest implementation of the renowned Realworld Demo. Its initial JS payload is 11.1kb. This implementation doesn't leave anything out, using the Context API and Suspense. Svelte's version is 33% larger at 14.8kb. Solid's compiler does a great job of managing tree shaking, and its codebase, built off the same powerful primitives as the renderer, keeps the runtime small and completely scalable.
3 It's expressive...
Solid apps are built using JavaScript (TypeScript) and JSX. The compiler optimizes the JSX but nothing else. This means you have the full language at your disposal. You are not limited to premade helpers and directives to control how your view renders (although Solid ships with some). You don't get to rewrite v-for the way you write a component; there are ways to write custom directives or precompiler hooks, but in Solid it's just another component. If you don't like how <For> works, write your own. Solid's renderer is built on the same reactive primitives that the end user uses in their applications.
Solid's reactive primitives manage their own lifecycle outside of the render system. This means they can be composed into higher-order hooks, be used to make custom Components, and store mechanisms. It is completely consistent whether working in local scope or pulling from a global store.
4 It's fully featured...
Solid still considers itself a library rather than a framework so you won't find everything you might in Angular. However, Solid supports most React features like Fragments, Portals, Context, Suspense, Error Boundaries, Lazy Components, Async and Concurrent Rendering, Implicit Event Delegation, SSR and Hydration(although there is no Next.js equivalent yet). It supports a few things not yet in React like Suspense for Async Data Loading, and Streaming SSR with Suspense.
For the reasons mentioned above, it has taken less effort to develop these more advanced features with Solid given its reactive foundation. React clones like Preact and Inferno would require significant changes to their VDOM core to offer the same so it has been a much longer road. And the same is true with new directions React has been doing in its experiments as async rendering and multiple roots are trivial with Solid. In general Solid's approach lets it adapt easily, as it becomes a matter of granularity so it can apply similar diffing as VDOM libraries as necessary and not where it is not.
5 It's familiar...
import { createSignal, onCleanup } from "solid-js";
import { render } from "solid-js/web";

const CounterComponent = () => {
  const [count, setCount] = createSignal(0),
    timer = setInterval(() => setCount(c => c + 1), 1000);
  onCleanup(() => clearInterval(timer));
  return <div>{count()}</div>;
};

render(() => <CounterComponent />, document.getElementById("app"));
While a new UI library is supposed to jump out and break the mould, Solid doesn't stand out when it comes to APIs or developer experience. If you've developed with React Hooks before, Solid should seem very natural. In fact, more natural, as Solid's model is much simpler, with no Hook rules. Every Component executes once, and it is the Hooks and bindings that execute many times as their dependencies update.
Solid follows the same philosophy as React with unidirectional data flow, read/write segregation, and immutable interfaces. It just has a completely different implementation that forgoes using a Virtual DOM.
Too good to be true?
It's the real deal. Solid has been in development for over 4 years. But it is still in its infancy when it comes to community and ecosystem. I hope you agree there is great potential here. It's always difficult to stand out in an overcrowded space, and more so for Solid as it doesn't look very different on the surface. But I hope this article gives you insight into why SolidJS is secretly the best JavaScript UI library you've never heard of.
Check it out on Github:
solidjs / solid
A declarative, efficient, and flexible JavaScript library for building user interfaces.
Key Features
- Real DOM with fine-grained updates (No Virtual DOM! No Dirty Checking Digest Loop!).
- Declarative data
- Simple composable primitives without the hidden rules.
- Function Components with no need for lifecycle methods or specialized configuration objects.
- Render once mental model.
- Fast
- Almost indistinguishable performance vs optimized painfully imperative vanilla DOM code. See Solid on JS Framework Benchmark.
- Fastest at Server Rendering in the Isomorphic UI Benchmarks
- Small! Completely tree-shakeable Solid's compiler will only include parts of the library you use.
- Supports and is built on TypeScript.
- Supports modern features like JSX, Fragments, Context, Portals, Suspense, Streaming SSR…
Discussion (24)
I've been using this library for a big project at work and I much prefer it to React.
Although for most of the app the extra speed did not change much, it did make a big difference for screens with large tables or large graphs (drawing dependency graphs with D3). I will be using it again for another app that I'm starting now.
So my hat goes to you @ryansolid for this wonderful library.
Thank you for a gentle introduction to the library Ryan. I have a quick question about how being fast / slow affects the end user experience, pragmatically? Is it really perceivable to end users? There has to be boundary beyond which everything will be similar for an end user.
Thank you for taking the time to respond. I don't think that in general raw speed is very noticeable in typical cases on typical systems. There has been a decent amount of talk about bundle sizes and low power devices, and even there 10-20kb is only a couple hundred milliseconds. So what difference is it that say Solid renders a 5000 dom elements on a page 50ms faster than React on a Core I7. Sure it's more like 300ms on my Core I5 laptop. Realistically you are only going to notice this on initial load. And then maybe not. What's the difference between 1.4 seconds and 1.7 seconds perceptually? Almost nothing. In the Realworld Demo the gap between Solid and React Redux, the slowest library I tested, was only about 800ms on resource loading under 3G simulation and CPU throttling. Mind you TTI(Time to Interactive) differed upwards of 4 seconds.
Performance is an easy metric, only easier one is kb weight. I started working on Solid because I liked fine grained reactivity as found in libraries like KnockoutJS which had these patterns a decade ago. When I saw React basically copy it with React Hooks (and the Vue crowd finally acknowledge they had these primitives all along), I knew it was a good time to start promoting the approach as IMHO it is much better at doing very similar things by any metric. I created a library DOM Expressions to allow any reactive library(MobX, KnockoutJS, among others) benefit from my work but continued to develop Solid as the pinnacle of this approach.
So while I acknowledge leading with performance sort of cheapens it I hope that if anyone spends the time to look at Solid will see that we have a very well designed and thoughtful approach to effectively build user interfaces.
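The fine-grained reactive primitives discussed in this thread can be sketched in a few lines of plain JavaScript. The names echo Solid's API, but this toy implementation is purely illustrative, not Solid's actual code:

```javascript
// A minimal signal/effect sketch: reads register the running effect as a
// subscriber; writes notify subscribers directly (no Virtual DOM diffing).
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect); // track dependency
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // re-run only dependent effects
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn();                 // run once to collect dependencies
  currentEffect = null;
}

// Usage: the effect re-runs only when a signal it read changes.
const [count, setCount] = createSignal(0);
let log = [];
createEffect(() => log.push(count()));
setCount(1);
setCount(2);
console.log(log); // [0, 1, 2]
```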
I really appreciate your work. Thank you.
It may be the best library to use if you want to make a declarative first-person shooter, where performance really matters if you want the best graphics and gameplay mechanics. :D
Svelte seems quite interested in smooth animations, I think that plays into Solid quite well too. I wonder how animations would be best written in solid and how that compares to other frameworks and their popular animation libraries.
Funny enough, I think Solid+Dart could steal some thunder from Flutter. Flutter is just gross, but, Dart is pretty nice TBH. Unfortunately there is the compiler aspect here, so it wouldn't be easy, but at some point any framework needs some answer to Native apps, in some way. NativeScript is interesting but Dart's VM is like WOW amazing. But then you inevitably see Flutter code and I just see all the horrors of Angular+OOP all over again.
I was looking at NativeScript a little. Only my JSX is compiled so it's a matter of figuring out how to best output NativeScript from it. But.. yeah I haven't dug deep at all. If people are interested I'm sure things will go there. I've been a web first guy myself mostly (although I've created and supported a React Native app). I was secretly hoping PWAs would gain more ground quickly by the time I needed to worry about this. But doesn't quite seem like it.
Animations is a place I will need to do more work. Svelte has figured out how to compile it into the template. I can do some similar stuff here but I have less explicit hooks. Svelte basically constructs a minimal lifecycle based components that it uses to anchor different stages of a components life. Solid doesn't really acknowledge components exist and relies on the scheduling of the reactive system. So far I've created solid-transition-group which basically copies React or Vue's equivalent. I've done a few other simple demos with animations but as you can imagine they work today very similar to how they'd work in React or Vue. Using state changes to drive CSS or Web Animations or what ever library you want to use. It definitely isn't as packaged in as Svelte.
I actually don't know all that much about how Svelte does animations, only that animating in svelte was nice. I think mimicking the age old transition-group is great, since it's easier to migrate from react (idk what else people use for animating in react, again, not super well researched on animation libs).
Any thoughts on Dart in general? I've been envisioning TypeScript as a bridge to Dart, to be honest. I guess I don't know how to gradually convert a codebase to Dart, but my sense is that once you are comfortable with TypeScript, if you see the Flutter/Dart demos, it's quite an appealing option over TypeScript. "Add these types, with a syntax built into the language, and get all these extra benefits of a faster experience everywhere (super fast hot reloading, AOT compilation, etc)"
It'd be really great to be able to use Flutter Components inside a theoretical Solid.dart, but there might need to be a Flutter-Solid connecting thing akin to react-redux. I'd imagine that any level of HOC-ish Flutter component would be a no-go, however, since the semantics of updating the UI begin to clash (not sure how hard it would be to bridge the two, but surely it's possible in some way)
Dart even avoids any locks during garbage collection, which is amazing. As cool as concurrent mode is, Dart might make concurrent mode/suspense/etc a minor 1% improvement once the performance improvements from Dart are accounted for. Who knows. Just a fanboy.
I honestly haven't had much of an opportunity/reason to look at Dart. TypeScript admittedly was mostly just a pragmatic choice since it seemed like a reasonable thing to do as a library author. It wasn't because of any personal like of TypeScript.
To be fair while the reactivity is a big part of what I do with Solid, my focus on the web came from knowing I could make a tangible difference there. Even SSR wasn't a place I was expecting have such success. I'm not sure how Solid would fit into Dart/Flutter ecosystem as it feels like they already have a good thing going.
I have fanboy'd over dart/flutter, google's presentations on dart/flutter show some amazing tech, but I think Dart is really the star of the show. I look at Flutter code and it's all the horrors of angular all over again.
What I really want is actually-native level performance (AOT compiled dart), great dev experience (dart dev compiler, ddc), BUT in a react/component/jsx paradigm.
Solid gives you that for sure. Also @lume/element gives you one more level higher: custom elements with templating powered by Solid.
I'm also planning to implement JSX for AssemblyScript unless someone else gets to it first, and then compile the implementations of my custom elements to WebAssembly.
The big milestone will be when we can also compile to native (run outside of the browser too, but with a web-first API that is easy for existing web developers).
To be fair I don't really consider CM a thing for raw performance. Unless you are maxing your CPU cycles scheduling is going to be slower for the raw performance. I think of it more of having a distributed way to model possible futures, without trying to coordinate everything directly in the parent.
Over 200ms is where people start noticing it, though I theorize if you can get over 70ms a few people will notice.
Rendering time does factor into SEO rankings, for both bot & hand-tested.
Though with SSR, perhaps none of this matters except for costs if your server time is metered.
How does it work with legacy apps? The beauty of vuejs is that it can work nicely with apps that have html separated and generated by the server code and also support loading dynamic html to certain extent. When we moved from angularjs 1.x, vuejs was the only option with that kind of support.
I mean Vue does make that easy so it broadcasts it as a use case, but any modern library is up to the task as you can set the entry point. You can have a statically rendered site in PHP and then have React control 5 independent parts of the page by importing the bundle and calling render after page load on those 5 locations. The reason I imagine we don't hear people talking about this is the build process. Vue lets you just import a script tag and start writing some components. Solid's Tagged Template Literals allow you to skip the build process as well.
Here's a quick codepen of Solid with no compiler running completely in the browser. codepen.io/ryansolid/pen/KKprxBN.
Although I built Solid with modern browsers in mind. So without polyfills it is not supporting Internet Explorer. Obviously it's pretty easy to make a build with that in mind it just hasn't been a priority. Certain features like proxies are not easily polyfilled so they would not work in that environment anyway.
Vue is very unique and its ability to integrate with external HTML is why I love it. No other framework provides such capability.
Yeah it's a hold out from the old days. KnockoutJS used to do that too with it's HTML string data-binding. AlpineJS and Stimulus do this sort of manipulation as well but only Vue of the modern frameworks offer both this ability and the modern built template setup. One of the benefits from still prescribing to data-binding that is completely HTML compliant.
Great work with the library! Coming from the React world it felt very easy to grab the essence. Was looking for a most performant library with Typescript, JSX and all other goodies and SolidJS hit all the bells. Will give it a try!
This looks so good. Please promote it extensively. Also some good courses on platforms like Youtube and Udemy will help.
I love a lot of things about the library. Definitely going to check it out.
Thanks, will try solidjs
I wonder how it compares to lit-html
In what aspects? There are a lot of considerations. More than I highlighted in this article. I did look for a lit-html Realworld demo when I worked on a previous article but the only one I found was Lit Element + MobX which doesn't seem representative. In a handful of micro benchmarks I've done Solid is smaller but that is inconclusive. Pure performance definitely favours Solid as highlighted in the benchmark above. I've separately done benchmarks where I've shown even out of non-compiled tagged template approaches Solid's tagged template version is faster than libraries like lit-html lighterhtml etc...
Of course there are other things that make comparison difficult as lit-html is a library designed to handle rendering and doesn't worry about components and state management so feature sets are not equivalent. Lit-html has close relationship with Polymer and supported by Google so already a huge leg up. The author also works closely with those working on standards so it is very aligned with the future of the platform. More stars on Github, larger community, and already stable 1.0 release all favourable. Definitely a good library to be looking at. | https://dev.to/ryansolid/introducing-the-solidjs-ui-library-4mck | CC-MAIN-2022-21 | refinedweb | 3,156 | 64.1 |
29 October 2010 09:27 [Source: ICIS news]
LONDON (ICIS)--Halliburton and BP had tests that showed the cement used to seal the Macondo oil well in the Gulf of Mexico was unstable.
In a letter to President Barack Obama's national commission on the BP Deepwater Horizon oil spill and offshore drilling, its chief counsel, Fred Bartlit, said the cement job may have been pumped without any laboratory results indicating that the foam cement slurry would be stable.
These finding were contrary to earlier claims made by oil contractor Halliburton that tests had shown the cement was stable.
“Halliburton and BP both had results in March showing that a very similar foam slurry design to the one actually pumped at the Macondo well would be unstable, but neither acted upon the data,” Bartlit said.
The rig explosion on 20 April killed 11 workers. It caused a huge oil leak, which led to the pollution of the shoreline and disruption to fishing, before the leaking well was successfully plugged in August.
The Deepwater Horizon oil rig was owned by Transocean and was under contract to BP.
Bartlit said in the letter that even if the commission’s concerns regarding the foam slurry design were well founded, “the story of the blowout does not turn solely on the quality of the Macondo cement job”.
He added that cementing failures are not uncommon and that the oil industry has developed tests to identify these failures and has methods in place to remedy deficient cement jobs.
“BP and/or Transocean personnel misinterpreted or chose not to conduct such tests at the Macondo well,” said Bartlit.
On Wed, Jul 22, 2020 at 07:22:34PM +0200, Peter Krempa wrote:
> On Wed, Jul 22, 2020 at 19:14:01 +0200, Pavel Hrdina wrote:
> > On Wed, Jul 22, 2020 at 06:51:58PM +0200, Peter Krempa wrote:
> > > On Thu, Jul 16, 2020 at 11:59:31 +0200, Pavel Hrdina wrote:
> > > > Signed-off-by: Pavel Hrdina <phrdina at redhat.com>
> > > > ---
> > >
> > > [...]
> > >
> > > > +foreach name : keyname_list
> > > > +  rst_file = custom_target(
> > > > +    'virkeyname-@0@.rst'.format(name),
> > > > +    input: keymap_src_file,
> > > > +    output: 'virkeyname-@0@.rst'.format(name),
> > > > +    command: [
> > > > +      meson_python_prog, python3_prog, keymap_gen_prog, 'name-docs',
> > > > +      '--lang', 'rst',
> > > > +      '--title', 'virkeyname-@0@'.format(name),
> > > > +      '--subtitle', 'Key name values for @0@'.format(name),
> > > > +      '@INPUT@', name,
> > > > +    ],
> > > > +    capture: true,
> > > > +    build_by_default: true,
> > > > +  )
> > > > +
> > > > +  docs_man_files += {
> > > > +    'name': 'virkeyname-@0@'.format(name), 'section': '7', 'install': true, 'file': rst_file,
> > > > +  }
> > > > +endforeach
> > > > +
> > > > +docs_man_conf = configuration_data()
> > > > +docs_man_conf.set('SYSCONFDIR', sysconfdir)
> > > > +docs_man_conf.set('RUNSTATEDIR', runstatedir)
> > > > +
> > > > +foreach data : docs_man_files
> > > > +  rst_in_file = '@0@.rst.in'.format(data['name'])
> > > > +  html_in_file = '@0@.html.in'.format(data['name'])
> > > > +  html_file = '@0@.html'.format(data['name'])
> > > > +
> > > > +  if data.has_key('file')
> > > > +    rst_file = data['file']
> > > > +  else
> > > > +    rst_file = configure_file(
> > > > +      input: rst_in_file,
> > > > +      output: '@0@.rst'.format(data['name']),
> > > > +      configuration: docs_man_conf,
> > > > +    )
> > > > +  endif
> > >
> > > I must say it feels weird process these through configure_file. Also
> > > it's super weird that they've overloaded 3 modes into configure_file.
> > >
> > > What's the difference to generator() by the way, since we use it for
> > > rst->html conversion? I'd expect that we could use configure_file there
> > > or generator here then.
> >
> > The main difference is that configure_file() is done during meson setup
> > but generator() is executed while building the project. Another main
>
> So how does it then handle if the file is modified prior to another
> build? Is 'ninja' re-running the setup phase?

Correct, if the source file changes ninja will re-run meson setup phase.

> > difference is that generator() outputs the files into target-private
> > directory.
> >
> > There is a note in documentation:
> >
> > "NOTE: Generators should only be used for outputs that will only be used
> > as inputs for a build target or a custom target. When you use the
> > processed output of a generator in multiple targets, the generator will
> > be run multiple times to create outputs for each target. Each output
> > will be created in a target-private directory @BUILD_DIR@."
> >
> > The reason why I went with configure_file is that we create a dictionary
> > of placeholders that should be replaced in the input file. The other
> > option would be custom_target() and calling 'sed'.
> >
> > The reason why we can use generator() for the rst->html.in conversion is
> > that the output is used in custom_target to create html from html.in
> > files.
> >
> > > Also is there a possibility where the input and output file will have
> > > the same name? I'm not very fond of the .rst.in files.
> >
> > It should be possible but I prefer using .in suffix to make it clear
> > that this file needs to be processed.
>
> I think we should keep the source files named .rst. The .rst format is
> good for human consumption even in the pre-processed state.
>
> You can also see different behaviour e.g. when viewing them via gitlab
> web interface:
>
> vs
>
> vs current state

OK, this is a solid point which convinced me to remove the .in suffix.

Pavel
I'm looking at some Python code which used the
@ symbol, but I have no idea what it does. I also do not know what to search for as searching python docs or Google does not return relevant results when the
@ symbol is included.
It indicates that you are using a decorator. Here is Bruce Eckel's example from 2008.
The
@ symbol is used for class, function and method decorators.
Read more here:
The most common Python decorators you'll run into are:
When I began to answer I didn't saw the first answer, is exactly that than you need, respectly in java is a different concept and as you can read for example here java, annotation tutorial
In java this is an annotation and as you can read is used is completely different than in python sorry for the trouble.
Edit: Original post and as said in the comments I made a mistake with the option I choose. It is a decorator like in the Java language you use it with for the declaration and use of abstract methods. The difference is than in Python the abstract method could have an implementation.
Definition from docs.python.org
This code:
def decorator(func): return func @decorator def some_func(): pass
Is equivalent to this code:
def decorator(func): return func def some_func(): pass some_func = decorator(some_func)
In the decorator's definition you can add things that the function wouldn't normally return.
I admit it took more than a few moments to fully grasp this concept for me, so I'll share what I've learned to save others the trouble.
The name decorator - the thing we define using the
@ syntax before a function definition - was probably the main culprit here.
class Pizza(object): def __init__(self): self.toppings = [] def __call__(self, topping): # when using '@instance_of_pizza' before a function def # the function gets passed onto 'topping' self.toppings.append(topping()) def __repr__(self): return str(self.toppings) pizza = Pizza() @pizza def cheese(): return 'cheese' @pizza def sauce(): return 'sauce' print pizza # ['cheese', 'sauce']
What this shows is that the
function/
method/
class you're defining after a decorator is just basically passed on as an
argument to the
function/
method immediatelly after the
@ sign.
The microframework Flask introduces decorators from the very beginning in the following format:
from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!"
This in turn translates to:
rule = "/" view_func = hello # they go as arguments here in 'flask/app.py' def add_url_rule(self, rule, endpoint=None, view_func=None, **options): pass
Realizing this finally allowed me to feel at peace with flask.
To say what others have in a different way: yes, it is a decorator.
In Python, it's like:
This can be used for all kinds of useful things, made possible because functions are objects and just necessary just instructions.
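As one concrete illustration (my own example, not from the thread): a decorator that counts calls to the function it wraps.

```python
import functools

def count_calls(func):
    """Decorator that records how many times the wrapped function runs."""
    @functools.wraps(func)          # preserve the wrapped function's name/docstring
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@count_calls
def greet(name):
    return f"Hello, {name}!"

greet("Ada")
greet("Bob")
print(greet.calls)  # 2
```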
In Python 3.5 you can overload @ as an operator. It is named __matmul__ because it is designed to do matrix multiplication, but it can be anything you want. See PEP 465 for details.
This is a simple implementation of matrix multiplication.
class Mat(list):
    def __matmul__(self, B):
        A = self
        return Mat([[sum(A[i][k]*B[k][j] for k in range(len(B)))
                     for j in range(len(B[0]))]
                    for i in range(len(A))])

A = Mat([[1,3],[7,5]])
B = Mat([[6,8],[4,2]])

print(A @ B)
This code yields
[[18, 14], [62, 66]]
Starting with Python 3.5, the '@' is used as a dedicated infix symbol for matrix multiplication (see PEP 0465).
NAME¶
madvise - give advice about use of memory
SYNOPSIS¶
#include <sys/mman.h>
int madvise(void *addr, size_t length, int advice);
madvise():
- Since glibc 2.19:
- _DEFAULT_SOURCE
- Up to and including glibc 2.19:
- _BSD_SOURCE
DESCRIPTION¶
The madvise() system call is used to give advice or directions to the kernel about the address range beginning at address addr and with size length bytes. In most cases, the goal of such advice is to improve system or application performance.
- MADV_REMOVE (since Linux 2.6.16)
- Free up a given range of pages and its associated backing store. In the initial implementation, only tmpfs supported MADV_REMOVE; but since Linux 3.5, any filesystem which supports the fallocate(2) FALLOC_FL_PUNCH_HOLE mode also supports MADV_REMOVE. Hugetlbfs fails with the error EINVAL and other filesystems fail with the error EOPNOTSUPP.
- MADV_MERGEABLE (since Linux 2.6.32)
- Enable Kernel Samepage Merging (KSM) for the pages in the range specified by addr and length. See the Linux kernel source file Documentation/admin-guide/mm/ksm.rst for more details.
- MADV_HWPOISON (since Linux 2.6.32)
- Poison the pages in the range specified by addr and length, and handle subsequent references to those pages like a hardware memory corruption. This operation is available only if the kernel was configured with CONFIG_MEMORY_FAILURE.
- MADV_HUGEPAGE (since Linux 2.6.38)
- Enable Transparent Huge Pages (THP) for pages in the range specified by addr and length. Currently, Transparent Huge Pages work only with private anonymous pages (see mmap(2)). The kernel will regularly scan the areas marked as huge page candidates to replace them with huge pages. The kernel will also allocate huge pages directly when the region is naturally aligned to the huge page size (see posix_memalign(2)).
- This feature is primarily aimed at applications that use large mappings of data and access large regions of that memory at a time (e.g., virtualization systems such as QEMU). It can very easily waste memory (e.g., a 2 MB mapping that only ever accesses 1 byte will result in 2 MB of wired memory instead of one 4 KB page).
RETURN VALUE¶
On success, madvise() returns zero. On error, it returns -1 and errno is set appropriately.
ERRORS¶
VERSIONS¶
Since Linux 3.18, support for this system call is optional, depending on the setting of the CONFIG_ADVISE_SYSCALLS configuration option.
CONFORMING TO¶
NOTES¶
Linux notes¶
getrlimit(2), mincore(2), mmap(2), mprotect(2), msync(2), munmap(2), prctl(2), posix_madvise(3), core(5)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/madvise.2.en.html | CC-MAIN-2022-40 | refinedweb | 282 | 59.3 |
JavaScript function pattern

We all know the problem: lots of JavaScript functions and a polluted global namespace. To manage this I have built a pattern that assigns anonymous functions to variables for client-side use in an application.

The basic requirement is that it is easy to stack the functions for initialisation calls, and that the functions within each such function are effectively namespace-protected from each other.
I am assuming you are familiar with the basic use of an anonymous function as the value of a variable:
var functionName = (() => { }) ();
To this we add some initialisation management:
var functionName = (() => {
let initDone = false;
let loadPrevious;
let init = () = {
if (!initDone) {
initDone = true;
…
if (functionName.loadPrevious) functionName.loadPrevious();
}}
Add a return to the function to expose internal variables:
  return { init: init, loadPrevious: loadPrevious };
})();
And after the function definition:
functionName.loadPrevious = window.onload; window.onload = functionName.init;
Replace the … with anything that needs to be done to initialise. Any asynchronous calls should have a callback function. You can get a degree of callback hell from this, but there are ways to manage that (not the subject of this post). Calling loadPrevious should happen inside a callback function if you have async calls.

Of course, other functions can be defined (as variables, to avoid hoisting) within the function and exposed using the return value.
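Putting the pieces together, here is a hedged, runnable sketch of the whole pattern. The names counter, increment, and value are illustrative, not from the text above, and window is stubbed so the sketch runs outside a browser:

```javascript
// Stand-in for the browser global; in a real page, drop this
// line and use the actual window object.
const window = { onload: null };

var counter = (() => {
  let initDone = false;
  let count = 0; // namespace-protected: invisible to other scripts

  let increment = () => ++count;
  let value = () => count;

  let init = () => {
    if (!initDone) {
      initDone = true;
      // ... one-time initialisation work goes here ...
      if (counter.loadPrevious) counter.loadPrevious();
    }
  };

  return { init: init, increment: increment, value: value };
})();

// Stack onto any onload handler a previously loaded script installed:
counter.loadPrevious = window.onload;
window.onload = counter.init;

window.onload(); // simulate the page load event
console.log(counter.increment()); // 1
console.log(counter.value()); // 1
```

Because each module chains to the previous window.onload before replacing it, any number of these functions can be stacked and every one of them gets initialised on load.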
This simple and effective pattern has allowed us to build an extensive managed functional programming environment. | https://medium.com/@rickmarshall_57431/javascript-function-pattern-78dce6d2786 | CC-MAIN-2022-05 | refinedweb | 231 | 52.49 |
America's leading supplier of commercial solar LED lights that meet your needs for various outdoor lighting applications, from commercial solar street lights and solar parking lot lights to residential pathway and signage projects.

Import China LED light factory products from various high-quality Chinese LED light suppliers and manufacturers on Global Sources: streetlights, solar street lights, LED floodlights, LED high bay lights, LED street lights, and more.

By Lennert van den Berg. The Soluxio is one of the most advanced solar-powered streetlights on the market. Technologically speaking, the light post will work with any type of battery chemistry. Other solar street light manufacturers sell and produce their products with lead/VRLA/AGM batteries by default.

At Greenshine New Energy, we offer three state-of-the-art systems perfect for solar street lighting. In our Supera, Brighta, and Lumina series, you'll find high-quality LED fixtures that couple with solar panels for maximum efficiency and savings.

We make the most reliable solar and solar/wind-powered streetlights. Our Smart Off-Grid technology lets you monitor, control, and proactively service them over the internet: Illumient solar streetlights with intelligent lighting control.

One of the few manufacturers in China who can produce all the main parts of a solar street light itself, including the solar panel, gel battery, lighting pole, controller, and LED lamp.

Wista LED Lighting offers 5 different lines of wall packs, from full cut-off to non-cut-off, and from glass-lens to polycarbonate models. All are UL and DLC listed. Wista wall packs already reach 128 lm/W, perfect for replacing old HPS and HID fixtures.

Leadsun is a leading manufacturer of solar lighting products which can be used in parking lots, small roadways, streets, and industrial and residential applications.

Solar Lighting International designs and manufactures solar LED street lighting products and solutions that will exceed your expectations. Solar Lighting International is an American-based manufacturer, employing skilled craftsmen, welders, and an engineering staff providing a host of solar solutions, from lighting to security power and commercial applications.

Dialight's StreetSense LED street light fixtures bring roadway and parking-area illumination into the 21st century. Our patented optics maximize light distribution and placement with exceptional, low-glare illumination on the intended area, while minimizing light trespass into nearby homes and businesses.

LED light engine: an IP66-rated LED light engine eliminates the need for a glass lens, for maximum light output. Use of latest-generation LEDs means more light from fewer LEDs, resulting in lower costs.

The solar LED lighting experts: SELS designs attractive solar products to light where you need it, with no wiring, no trenching, and no power bills. From residential to commercial, garden lights to streetlights, SELS has the solution for your lighting and connectivity needs. Start your project today.

Solar outdoor lighting, solar traffic-calming solutions, roadway safety systems, and LED replacement fixtures: SolarPath Sun Solutions is a worldwide leader in the manufacturing and engineering of renewable-energy (solar) products, active roadway safety products, and LED replacement solutions.

Solar street light / solar area light / solar lamppost, 10 W, FCC certified for use in the USA. See also the 25 W solar area lighting with PIR motion sensor and light sensor: a waterproof IP65 LED street light, dusk to dawn, for outdoor garage/courtyard/garden use, 2,500 lumens, rated 4.7 out of 5 stars.

MaestroZigbee, LLC is the most rapidly growing supplier of LED lighting, interior shades, and controls. We have the largest selection of LED lights in the USA.

LED street light: BBE LED street lights adopt Philips power supplies and Seoul and Cree light sources of excellent quality, applied on highways, expressways, primary roads, subordinate roads, and other similar square and road lighting.

Solar street light, 10,000 lumens, LED with remote: the solar street light is 10,000 lumens and has dimensions of 76.1" x 16.6" x 8.9". The unit has aluminum-alloy and tempered-glass construction and an integrated LED.

By being at the forefront of LED and battery technology, DX3's solar-power lighting solutions provide reliable and sustainable lighting anywhere, anytime. Solar-powered street lights are an environmentally friendly choice for all of your street lighting, parking lot, pathway, recreational, and commercial lighting.

The high-power solar panel used in the LED solar dusk-to-dawn security lights offers 8-10 hours of continuous light off one full charge, giving out a powerful light when the built-in motion detector senses movement within the range of the premises.