Creating and Opening Scenes
All scenes created via Harmony Stand Alone are independent and local to the computer. You can create or open a scene using the Welcome screen or the File menu.
How to create a scene from the Welcome screen:
- In the Name field, type the scene's name.
- To select the scene's location, in the Location section, click Browse.
- From the Camera Size menu, select a scene resolution and click Create Scene.
A new scene is created.
How to create a scene from the File menu:
- Do one of the following:
The New Scene dialog box opens.
- In the Project Name field, type the scene's name.
- Select a scene directory by clicking the Browse button.
- In the Resolution window, select the scene's resolution and click Create.
A new scene is created.
How to create a scene with a custom resolution:
- Create a new scene from the Welcome screen or from the File menu in Harmony.
- Set the scene resolution by doing one of the following:
- In the New Resolution dialog box, fill in the following fields and click Create.
How to open a scene from the File menu:
- Do one of the following:
The Open Scene browser opens.
- Browse and select the desired *.xstage file.
- Click Open.
How to open a scene from the Welcome screen:
- In the Recent Scenes section, click Open.
The Open Scene browser opens.
- Browse and select the desired *.xstage file.
- Click Open.
How to open a recent scene:
- From the Welcome screen, in the Open a Scene section, select a scene from the list.
- From the top menu, select File > Open Recent, then select a scene from the displayed list.
RevokeCertificate
Revokes a certificate that you issued by calling the IssueCertificate operation. If you enable a certificate revocation list (CRL) when you create or update your private CA, information about the revoked certificates will be included in the CRL. ACM PCA writes the CRL to an S3 bucket that you specify. For more information about revocation, see the CrlConfiguration structure. ACM PCA also writes revocation information to the audit report. For more information, see CreateCertificateAuthorityAuditReport.
Request Syntax
{ "CertificateAuthorityArn": "
string", "CertificateSerial": "
string", "RevocationReason": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- CertificateAuthorityArn
Amazon Resource Name (ARN) of the private CA that issued the certificate to be revoked.
- CertificateSerial
Serial number of the certificate to be revoked. This must be in hexadecimal format. You can retrieve the serial number by calling GetCertificate with the Amazon Resource Name (ARN) of the certificate you want and the ARN of your private CA. The GetCertificate operation retrieves the certificate in the PEM format. You can use the following OpenSSL command to list the certificate in text format and copy the hexadecimal serial number.
openssl x509 -in file_path -text -noout
You can also copy the serial number from the console or use the DescribeCertificate operation in the AWS Certificate Manager API Reference.
Type: String
Length Constraints: Minimum length of 0. Maximum length of 128.
Required: Yes
- RevocationReason
Specifies why you revoked the certificate.
Type: String
Valid Values:
UNSPECIFIED | KEY_COMPROMISE | CERTIFICATE_AUTHORITY_COMPROMISE | AFFILIATION_CHANGED | SUPERSEDED | CESSATION_OF_OPERATION | PRIVILEGE_WITHDRAWN | A_A_COMPROMISE
Required: Yes
Errors
- InvalidStateException
The private CA is in a state during which a report cannot be generated.
HTTP Status Code: 400
- RequestAlreadyProcessedException
Your request has already been completed.
HTTP Status Code: 400
Example
Sample Request
POST / HTTP/1.1
Content-Length: 238
X-Amz-Target: ACMPrivateCA.RevokeCertificate
X-Amz-Date: 20180226T200035Z
Authorization: AWS4-HMAC-SHA256 ..., Signature=ab19c4301eb2e8e9f188f3d478cb1d5a28bfb41de3d54b5006c0738d411cfd86
{
   "CertificateSerial": "e8:cb:d2:be:db:12:23:29:f9:77:06:bc:fe:c9:90:f8",
   "RevocationReason": "KEY_COMPROMISE",
   "CertificateAuthorityArn": "arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012"
}
Sample Response
This function does not return a value.
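The same operation can also be exercised from the AWS CLI. The following is a hedged sketch using the sample values from the request above; the ARN and serial number are placeholders that you would replace with your own:
aws acm-pca revoke-certificate \
    --certificate-authority-arn "arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012" \
    --certificate-serial "e8:cb:d2:be:db:12:23:29:f9:77:06:bc:fe:c9:90:f8" \
    --revocation-reason "KEY_COMPROMISE"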
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_RevokeCertificate.html | 2018-07-15T23:20:47 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.aws.amazon.com |
- User access options
- User authentication
- Optimize the user experience
- StoreFront high availability and multi-site configuration
StoreFront includes features designed to enhance the user experience. These features are configured by default when you create new stores and their associated Citrix Receiver for Web sites, Desktop Appliance sites, and XenApp Services URLs.
When delivering applications with XenDesktop and XenApp, consider the following options to enhance the experience for users when they access their applications through your stores. For more information about delivering applications, see Create a Delivery Group application. | https://docs.citrix.com/ko-kr/storefront/3-11/plan/optimize-user-experience.html | 2018-07-15T23:23:08 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.citrix.com |
MSFTSM_ReferencedProfile class
Associates two instances of MSFTSM_RegisteredProfile where one of the registered profiles references the other.
The following syntax is simplified from Managed Object Format (MOF) code and includes all of the inherited properties.
Syntax
[Dynamic, Version("1.0.0"), provider("MSiSCSITargetProv")] class MSFTSM_ReferencedProfile : CIM_ReferencedProfile { CIM_RegisteredProfile REF Antecedent; CIM_RegisteredProfile REF Dependent; };
The MSFTSM_ReferencedProfile class has these types of members:
Properties
The MSFTSM_ReferencedProfile class has these properties.
Antecedent
Data type: CIM_RegisteredProfile
Access type: Read-only
Specifies the CIM_RegisteredProfile instance that is referenced by the Dependent profile.
This property is inherited from CIM_ReferencedProfile.
Dependent
Data type: CIM_RegisteredProfile
Access type: Read-only
Specifies a CIM_RegisteredProfile instance that references other profiles.
This property is inherited from CIM_ReferencedProfile. | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/iscsitarg/msftsm-referencedprofile | 2018-07-16T00:14:12 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.microsoft.com |
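As a rough sketch, you can enumerate instances of this association from PowerShell with a query such as the following. The namespace shown is an assumption — verify where the MSiSCSITargetProv provider registers on your system:
# Hypothetical namespace; adjust to match the provider registration.
Get-WmiObject -Namespace "root\wmi" -Class MSFTSM_ReferencedProfile |
    Select-Object Antecedent, Dependent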
The truncatetext filter truncates a MySQL BLOB field. The length is determined by the length parameter in the properties:
replicator.filter.truncatetext=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.truncatetext.script=${replicator.home.dir}/samples/extensions/javascript/truncatetext.js
replicator.filter.truncatetext.length=4000
Statement-based events are ignored, but row-based events are processed for each column value, checking the column type with the isBlob() method and then truncating the contents when they are identified as larger than the configured length. To confirm the type, the value is compared against the Java class com.continuent.tungsten.replicator.extractor.mysql.SerialBlob, the class for a serialized BLOB value. These need to be processed differently as they are not exposed as a single variable.
if (value.getValue() instanceof com.continuent.tungsten.replicator.extractor.mysql.SerialBlob)
{
    blob = value.getValue();
    if (blob != null)
    {
        valueBytes = blob.getBytes(1, blob.length());
        if (blob.length() > truncateTo)
        {
            blob.truncate(truncateTo);
        }
    }
}
Download SnapProtect Documentation
The latest revision of the SnapProtect Documentation is available for download.
Downloading the Documentation Package
The downloadable documentation package includes all of the documentation pages that are available on our Documentation Web site, and includes full search capabilities.
Before You Begin
Make sure that your computer has at least 800 MB of free disk space.
Procedure
- To download the documentation package (zip file) click the following link:
- Save the documentation package to your local drive.
- Navigate to the location where you saved the documentation package, and then extract the contents to a folder.
- Record the location where the documentation package was extracted.
You will need the location details when you set up the documentation Web site for offline access (this does not apply for the Only PDF Files package).
Setting Up the Documentation Web Site for Offline Access
You can use the documentation package as an offline version of the live documentation. This configuration is supported only on Windows computers.
Prerequisites
You must have Java 7 (JRE 1.7) installed on your local computer. Later Java versions are not supported.
To download the Java software, see the Oracle Web site. If you already have Java installed on your computer, use the following command to check the version currently installed:
java -version
Steps
- Open the command prompt and navigate to <Location of Extracted Files>/bin.
Tip: If you are not a user with administrative privileges, we recommend that you run the command prompt as an administrator to prevent errors during the setup process.
- Run the following command to set up the documentation Web site for offline access:
startService.bat -instance_name Instance001 -service_name "NetAppOfflineDocs" -service_display_name "NetApp Offline Docs v10" -start_params "start;-port;8080;-cv.solr.jetty.deployall;true;-sysprop.hosted.mode;offline;-sysprop.solr.url;"
port is an available HTTP port. If port 8080 is being used by other applications, specify a different port number.
A message appears that tells you that Instance001 was started successfully. For example:
"NetAppOfflineDocs(Instance001)" is started.
Fri 03/28/2014_20:13:53.58 startService completes.
- Open your Web browser and type the web address in the following format:
For example,.
- If a message states "Web page not available", retry after a few seconds.
You can now read and search the documentation pages in the same way that you do on the SnapProtect Documentation Web site.
If you want to configure the CommCell Console to access the offline documentation Web site, see Configuring CommCell Console to Access Offline Documentation.
Updating the Documentation Setup with New Content
Documentation files are continuously updated with new content. We recommend that you download the documentation package every time a new service pack is available.
Use the following steps to update the documentation package on your local computer:
- Open the command prompt and navigate to <Location of Old Extracted Files>/bin.
- Run the following command to uninstall Instance001:
uninstallEmbededJettyService.bat -instance_name Instance001 -service_name "NetAppOfflineDocs"
A message appears that tells you that Instance001 has been removed. For example:
"NetAppOfflineDocs(Instance001)" has been removed.
Fri 03/28/2014_20:13:53.58 uninstallService completes
- Remove the folder containing the old extracted files.
- Download the new documentation package. See Downloading the Documentation Package for instructions.
- Set up the offline documentation Web site. See Setting Up the Documentation Web Site for Offline Access for instructions. | http://docs.snapprotect.com/netapp/v10/article?p=features/misc/downloads.htm | 2018-07-15T23:17:14 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.snapprotect.com |
Designing thread pickling or “the Essence of Stackless Python”
Note from 2007-07-22: This document is slightly out of date and should be turned into a description of pickling. Some research is necessary to get rid of explicit resume points, etc…
Thread pickling is a unique feature in Stackless Python and should be implemented for PyPy pretty soon.
What is meant by pickling?
I’d like to define thread pickling as a restartable subset of a running program. The re-runnable part should be based upon Python frame chains, represented by coroutines, tasklets or any other application level switchable subcontext. It is surely possible to support pickling of arbitrary interplevel state, but this seems to be not mandatory as long as we consider Stackless as the reference implementation. Extensions of this might be considered when the basic task is fulfilled.
Pickling should create a re-startable coroutine-alike thing that can run on a different machine, same Python version, but not necessarily the same PyPy translation. This belongs to the harder parts.
What is not meant by pickling?
Saving the whole memory state and writing a loader that reconstructs the whole binary with its state in memory is not what I consider a real solution. In some sense, this can be a fall-back if we fail in every other case, but I consider it really nasty for the C backend.
If we had a dynamic backend that supports direct creation of the program and its state (example: a Forth backend), I would see it as a valid solution, since it is relocatable. It is of course a possible fall-back to write such a backend of we fail otherwise.
There are some simple steps and some more difficult ones. Let’s start with the simple.
Basic necessities
Pickling of a running thread involves a bit more than normal object pickling, because there exist many objects which don’t have a pickling interface, and people would not care about pickling them at all. But with thread pickling, these objects simply exist as local variables and are needed to restore the current runtime environment, and the user should not have to know what goes into the pickle.
Examples are
- generators
- frames
- cells
- iterators
- tracebacks
to name just a few. Fortunately most of these objects already have got a pickling implementation in Stackless Python, namely the prickelpit.c file.
It should be simple and straightforward to redo these implementations. Nevertheless there is a complication. The most natural way to support pickling is providing a __getstate__/__setstate__ method pair. This is ok for extension types like coroutines/tasklets which we can control, but it should be avoided for existing types.
Consider for instance frames. We would have to add a __getstate__ and a __setstate__ method, which is an interface change. Furthermore, we would need to support creation of frames by calling the frame type, which is not really intended.
For other types which are already callable, things get more complicated because we need to make sure that creating new instances does not interfere with existing ways to call the type.
Directly adding a pickling interface to existing types is quite likely to produce overlaps in the calling interface. This happened for instance, when the module type became callable, and the signature was different from what Stackless added before.
For Stackless, I used the copyreg module, instead, and created special surrogate objects as placeholders, which replace the type of the object after unpickling with the right type pointer. For details, see the prickelpit.c file in the Stackless distribution.
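To make the surrogate idea concrete, here is a minimal, hedged sketch in present-day Python of registering a pickler for a type (a closure cell) without touching the type itself. It mirrors the approach, not the prickelpit.c implementation:
import copyreg

# The cell type is not exposed directly in older Pythons;
# recover it from a throwaway closure.
CellType = type((lambda x: lambda: x)(0).__closure__[0])

def _make_cell(value):
    # Surrogate factory: rebuild a cell by closing over the value.
    return (lambda: value).__closure__[0]

def _reduce_cell(cell):
    # Called by pickle; returns the surrogate factory and its state.
    return _make_cell, (cell.cell_contents,)

copyreg.pickle(CellType, _reduce_cell)
The type's calling interface is untouched; only the pickling machinery learns how to take the cell apart and put it back together.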
As a conclusion, pickling of tasklets is an addition to Stackless, but not meant to be an extension to Python. The need to support pickling of certain objects should not change the interface. It is better to decouple this and to use surrogate types for pickling which cannot collide with future additions to Python.
The real problem
There are currently some crucial differences between Stackless Python (SLP for now) and the PyPy Stackless support (PyPy for now) as far as it has grown. When CPython does a call to a Python function, there are several helper functions involved for adjusting parameters, unpacking methods, and more. SLP takes great pains to remove all these C functions from the C stack before starting the Python interpreter for the function. This change of behavior is done manually for all the helper functions by figuring out which variables are still needed after the call. It turns out that in most cases, it is possible to let all the helper functions finish their work and return from the function call before the interpreter is started at all.
This is the major difference which needs to be tackled for PyPy. Whenever we run a Python function, quite a number of functions incarnate on the C stack, and they get not finished before running the new frame. In case of a coroutine switch, we just save the whole chain of activation records - c function entrypoints with the saved block variables. This is ok for coroutine switching, but in the sense of SLP, it is rather incomplete and not stackless at all. The stack still exists, we can unwind and rebuild it, but it is a problem.
Why a problem?
In an ideal world, thread pickling would just be building chains of pickled frames and nothing else. For every different extra activation record like mentioned above, we have the problem of how to save this information. We need a representation which is not machine or compiler dependent. Right now, PyPy is quite unstable in terms of which blocks it will produce, what gets inlined, etc. The best solution possible is to try to get completely rid of these extra structures.
Unfortunately this is not even possible with SLP, because there are different flavors of state which make it hard to go without extra information.
SLP switching strategies
SLP has undergone several rewrites. The first implementation aimed at complete collaboration: a new frame's execution was deferred until all of the preparational C function calls had left the C stack, so that no extra state had to be saved. This is only partially achievable; there are situations where a recursive call cannot be avoided, because supporting them would require heavy rewriting of the implementation. Examples are:
- things like operator.__add__ can theoretically generate a wild pattern of recursive calls while CPy tries to figure out if it is a numeric add or a sequence add, and other callbacks may occur when methods like __coerce__ get involved. This will never be solved for SLP, but might get a solution by the strategy outlined below.
The two strategies that emerged from these rewrites are the stack-slicing Hard (switching) and the collaborative Soft (switching). Note that an improved version of Hard is still the building block for greenlets, which makes them not really green - I'd name it yellow.
The latest SLP rewrites combine both ideas, trying to use Soft whenever possible, but using Hard when nested interpreters are in the way.
Nota bene: it was never attempted to pickle tasklets when Hard was involved. In SLP, pickling works with Soft. To gather more pickleable situations, you need to invent new frame types or write replacement Python code and switch it using Soft.
Analogies between SLP and PyPy
Right now, PyPy saves C state of functions in tiny activation records: the alive variables of a block, together with the entry point of the function that was left. This is an improvement over storing raw stack slices, but the pattern is similar: The C stack state gets restored when we switch.
In this sense, it was the astonishing conclusion, when Richard and I discussed this last week, that PyPy essentially does a variant of Hard switching! At least it does a compromise that does not really help with pickling.
On the other hand, this approach is half the way. It turns out to be an improvement over SLP not to have to avoid recursions in the first place. Instead, it seems to be even more elegant and efficient to get rid of unnecessary state right in the context of a switch and no earlier!
Ways to handle the problem in a minimalistic way
Comparing the different approaches of SLP and PyPy, it appears to be not necessary to change the interpreter in the first place. PyPy does not need to change its calling behavior in order to be cooperative. The key point is to find out which activation records need to be stored at all. This should be possible to identify as a part of the stackless transform.
Consider the simple most common case of calling a normal Python function. There are several calls to functions involved, which do preparational steps. Without trying to be exact (this is part of the work to be done), involved steps are
- decode the arguments of the function
- prepare a new frame
- store the arguments in the frame
- execute the frame
- return the result
Now assume that we do not execute the frame, but do a context switch instead, then right now a sequence of activation records is stored on the heap. If we want to re-activate this chain of activation records, what do we really need to restore before we can do the function call?
- the argument decoding is done, already, and the fact that we could have done the function call shows, that no exception occurred. We can ignore the rest of this activation record and do the housekeeping.
- the frame is prepared, and arguments are stored in it. The operation succeeded, and we have the frame. We can ignore exception handling and just do housekeeping by getting rid of references.
- for executing the frame, we need a special function that executes frames. It is possible that we need different flavors due to contexts. SLP does this by using different registered functions which operate on a frame, depending on the frame’s state (first entry, reentry after call, returning, yielding etc)
- after executing the frame, exceptions need to be handled in the usual way, and we should return to the issuer of the call.
Some deeper analysis is needed to get these things correct. But it should have become quite clear, that after all the preparational steps have been done, there is no other state necessary than what we have in the Python frames: bound arguments, instruction pointer, that’s it.
My proposal is now to do such an analysis by hand, identify the different cases to be handled, and then trying to find an algorithm that automatically identifies the blocks in the whole program, where the restoring of the C stack can be avoided, and we can jump back to the previous caller, directly.
A rough sketch of the necessary analysis:
for every block in an RPython function that can reach unwind:
    analyze control flow; it should lead immediately to the return block
    with only one output variable. All other alive variables should have
    ended their liveness in this block.
I think this will not work in the first place. For the bound frame arguments for instance, I think we need some notation that these are held by the frame, and we can drop their liveness before doing the call, hence we don’t need to save these variables in the activation record, and hence the whole activation record can be removed.
As a conclusion of this incomplete first analysis, it seems to be necessary to identify useless activation records in order to support pickling. The remaining, irreducible activation records should then be those which hold a reference to a Python frame. Such a chain is pickleable if its root points back to the context switching code of the interp-level implementation of coroutines.
As an observation, this transform not only enables pickling, but also is an optimization, if we can avoid saving many activation records.
Another possible observation which I hope to be able to prove is this: The remaining irreducible activation records which don’t just hold a Python frame are those which should be considered special. They should be turned into something like special frames, and they would be the key to make PyPy completely stackless, a goal which is practically impossible for SLP! These activation records would need to become part of the official interface and need to get naming support for their necessary functions.
I wish to stop this paper here. I believe everything else needs to be tried in an implementation, and this is so far all I can do just with imagination.
best - chris
Just an addition after some more thinking
Actually it struck me after checking this in, that the problem of determining which blocks need to save state and which do not is not really a Stackless problem. It is a system-immanent problem of a missing optimization that we still have not tried to solve.
Speaking in terms of GC transform, and especially the refcounting, it is probably easy to understand what I mean. Our current refcounting implementation is naive, in the sense that we do not try to do the optimizations which every extension writer does by hand: We do not try to save references.
This is also why I’m always arguing that refcounting can be and effectively is efficient, because CPython does it very well.
Our refcounting is not aware of variable lifeness, it does not track references which are known to be held by other objects. Optimizing that would do two things: The refcounting would become very efficient, since we would save some 80 % of it. The second part, which is relevant to the pickling problem is this: By doing a proper analysis, we already would have lost references to all the variables which we don’t need to save any longer, because we know that they are held in, for instance, frames.
I hope you understand that: If we improve the life-time analysis of variables, the sketched problem of above about which blocks need to save state and which don’t, should become trivial and should just vanish. Doing this correctly will solve the pickling problem quasi automatically, leading to a more efficient implementation at the same time.
I hope I told the truth and will try to prove it.
ciao - chris | http://pypy.readthedocs.io/en/latest/discussion/howtoimplementpickling.html | 2018-07-15T22:45:08 | CC-MAIN-2018-30 | 1531676589022.38 | [] | pypy.readthedocs.io |
Building and Distributing Workflows in SharePoint Products and Technologies for Use in Customer and Partner Environments
Summary: Learn about the steps that are necessary to successfully deliver a workflow into a customer or partner environment using SharePoint Products and Technologies. (24 printed pages)
David Mann
June 2008
Applies to: Windows SharePoint Services 3.0, Microsoft Office SharePoint Server 2007, Microsoft Visual Studio 2008
Download the sample code for this article.
Contents
Overview of Workflows in Customer and Partner Environments
Planning and Designing for Multi-Environment Deployment
Defensive Coding
Supporting Customizations
When Things Go Wrong
Logging
Deployment
Documenting Your Workflow
Summary
Additional Resources
Overview of Workflows in Customer and Partner Environments
Writing and supporting software that is intended to run in a customer’s or partner’s environment can be a somewhat scary prospect. Lack of control, disparate environment specifications, unknown end users, and lack of direct interaction are just a few of the potential issues that you must deal with. For independent software vendors, or ISVs, these issues are a fact of life. This is just one more item to be concerned about when they introduce workflow into their application.
This article does not deal directly with the basic process of building and deploying a workflow. Instead, it focuses on the extra steps necessary to successfully deliver a workflow into a customer or partner environment.
The goal of this article is to make you think about workflows differently: to think about them as part of your product or application—not as an afterthought. I’ll raise many issues, give you a lot to think about, and point out some (not all) possible paths to a solution. The final direction you take depends on your application, your company, and your clients. It is likely that not all of the points made in this article will apply to your situation, and that is fine; just make it a conscious decision to disregard them—don’t just ignore them.
Planning and Designing for Multi-Environment Deployment
Before writing any code, you must thoroughly plan and design a workflow that is destined for external deployment—more so than in an in-house scenario. The simple fact that you have less knowledge of, and less control over, the environment within which your workflows may have to operate means that your up-front planning must be more deliberate and more thorough.
There are several elements to this—starting with identifying which version of Microsoft SharePoint Products and Technologies must be in place before your workflow can even run. You will revisit this information more than once during the development life cycle to keep it current, but this is the point at which you can lay the foundation.
Identifying the Development Tools
It may seem a little strange to start with choosing your development tool. Surely, it is more important to understand your requirements before you select a tool? Usually, yes, this would be the case. However, for SharePoint workflows, we flip things around a bit; with good reason, as you will soon see.
As you probably already know, there are two tools available for creating workflows in the 2007 Microsoft Office system—Microsoft Visual Studio and Microsoft Office SharePoint Designer. The question is, which of these will you use to develop your workflow? Before you think too much about this, here is some good news: this is a trick question. This is not a decision that you have to make. There is not really a viable means of deploying Office SharePoint Designer workflows that enables you to package your workflow for distribution, so this really is not an option for us in this case. You have no choice but to use Visual Studio to develop the workflow.
Because of this, the rest of this article focuses entirely on Visual Studio workflows.
Identifying the Environment
This is your opportunity to define the most basic requirements that must be met for your workflow to be supported. Naturally, for SharePoint workflows, you must have SharePoint Products and Technologies. But the question is, which version do you need? Will your workflow require anything that is available only in Microsoft Office SharePoint Server, or will the base-level Windows SharePoint Services 3.0 be sufficient? If Office SharePoint Server is required, do you need Enterprise or Standard?
Keeping with the theme of making your workflow available as widely as possible, unless you know right from the beginning that your workflow needs something available only in Office SharePoint Server, you should start by targeting Windows SharePoint Services only. This means that your development and testing environment should have only Windows SharePoint Services installed—not Office SharePoint Server. For a great breakdown of what is available in the different versions of SharePoint Products and Technologies that might help make this decision, see Microsoft Office SharePoint Server 2007 products comparison download.
One difference between Office SharePoint Server and Windows SharePoint Services that you should not consider here is InfoPath Forms Services. This component is available in Office SharePoint Server but not in Windows SharePoint Services. There are two reasons to ignore this component for now:
The functionality provided by Forms Services, that is, rendering workflow forms in the browser, is available without it. The difference is that you manually create the ASPX forms instead of creating the form in Microsoft Office InfoPath (a richer forms development environment) and letting the Forms Server convert it to HTML/ASPX for you. This requires more work to build your forms because you must build and integrate the ASPX forms by hand, but it is not overly difficult.
If you absolutely must develop your forms in InfoPath and have them converted for you (although I cannot imagine why this would be the case, I certainly do not presume to know all of your business requirements), you can still avoid requiring Office SharePoint Server by simply requiring Forms Server, a separate release within the SharePoint Products and Technologies family. Although this is more than simply Windows SharePoint Services, it is still less onerous (and costly) than the full version of Office SharePoint Server.
Unless otherwise noted, the rest of this article assumes that you are developing your workflow by targeting Windows SharePoint Services without Forms Server.
Workflow Paradigm
Windows Workflow Foundation (WF), the foundation upon which SharePoint workflows are built, supports two types of workflows out of the box: sequential and state machine. Although no one outside your development team will likely ever know which approach you choose, it is nonetheless a far-reaching decision, important enough to spend a little time understanding the difference between the two.
Sequential Workflows
Perhaps the best way to describe a sequential workflow is as a flowchart. Sequential workflows have a beginning, an ending, and a fairly clearly defined path between the two. Although they can support multiple levels of branching and conditional logic, sequential workflows tend to become confusing and unwieldy when they grow overly involved. Sequential workflows are also potentially more difficult to maintain and extend once they are defined and built.
Figure 1 shows an example of a simple sequential workflow. Without too much effort, you can trace the logic that is defined by the workflow.
Figure 1. A simple sequential workflow
Sequential workflows are best used in support of highly structured processes. They imply a straightforward path that has few conditional execution elements and that will not change much in the future. As you will see shortly, it is also somewhat more difficult to write bulletproof sequential workflows.
State Machine Workflows
State machine workflows are entirely different from sequential workflows. Made up of a collection of conditions (known as “states,” hence the name of the paradigm), the transitions between those conditions, and the events that trigger those transitions, state machines excel at defining complex processes. Figure 2 shows an example of a simple state machine workflow.
Figure 2. A simple state machine workflow
You can see from Figure 2 that there is no type of defined path through the process. Instead, the path is entirely dependent upon the events that occur.
Hidden within each state of the workflow is the secret to a state machine’s power. If you double-click the StateInitialization, EventDrivenActivity, or StateFinalization activities within a state, you see the regular sequential workflow designer, as shown in Figure 3. In a sense, then, a state machine workflow is made up of multiple associated sequential workflows.
Figure 3. Hidden inside each state in the state machine is a series of regular sequential workflows
This is what makes this approach so powerful. Each state in a state machine can have multiple related processes. Two of them—StateInitialization and StateFinalization—always process, when the state is entered or exited, respectively. The third is in the EventDrivenActivity. It executes when the event associated with the state occurs.
Workflow Paradigm Summary
With an understanding of the two approaches available, we can now examine the best approach to take to meet our goals. On the one hand, it is important to understand that there really is no clear answer here; either option will work. On the other hand, some features of state machines (particularly their simplified support for branching and conditional logic) make them better suited to writing the highly controlled, highly reliable workflows that we want. Naturally, that affinity comes at a price. State machines are harder to become accustomed to; many developers find this approach harder to adopt than a simple flowchart.
In addition to this, there are a number of other important factors to consider. To choose the best approach for your particular situation, you also need to answer a few questions:
How stable is your product? If the product itself is new, and it is growing and adapting rapidly as it continues to evolve based on market conditions and demands, you probably have a more volatile environment. Keep this in mind as you answer the next question.
What is the volatility of the process being modeled? If it is likely to change frequently, you should choose a state machine because it will be easier to maintain.
What is the complexity of the process being modeled? Do you have to support multiple execution paths with conditional logic, looping, and branching? If so, then again, state machines support this complexity more easily. However, if your process consists primarily of single-path processing, sequential may be a better option. You certainly need a good understanding of your business process in order to answer this question completely.
What is the skill level and workload of your development staff? State machines are typically more complicated to build and often require more development discipline. If this does not fit your development environment, perhaps you want to avoid this complexity. The difficulty in answering this question accurately stems from the fact that the workflow you are building may exist at client locations for many years and outlive more than one development staff. Instead of thinking about specific individuals currently on staff, instead think about the general type of individual your organization tends to hire and the typical workload.
Do you need to dynamically create tasks and assign them based upon criteria known only at run time? If so, you must use a sequential workflow because the Replicator activity cannot operate correctly in a state machine workflow.
Answers to those questions will help you decide which type of workflow to support—sequential or state machine. Understand, too, that these options are not mutually exclusive. You could support a sequential workflow for one process and a state machine workflow for another process all within the same application. Choose the one that best fits each process that you must support.
I will close this section by making the following recommendation: unless you have a compelling reason to build a sequential workflow, the best approach is likely to be a state machine workflow.
Workflow User Interfaces
The user interface for SharePoint workflows is delivered entirely through forms. We can create them manually or let InfoPath Forms Server create them for us. As mentioned earlier, there are significant implications for the environment required for your workflow based on this decision. We’ll cover each option here.
Whichever option you choose (InfoPath or ASPX), you can provide four forms as part of your workflow:
Association Used for making a workflow template available on a specific list or library. Users can then create instances of that template connected to a specific document or list item.
Initiation Used for creating an instance of a workflow template on a specific document or list item.
Task A central part of most SharePoint workflows is the act of assigning work items to users. This form lets you customize the experience and the capabilities exposed by the Task form.
Modification Facilitates an advanced capability of SharePoint workflows, that is, the ability to change a workflow while it is running (for example, changing task assignments, changing processing instructions, and so on).
InfoPath Forms
Although the result of either form-development approach is almost identical, InfoPath forms are easier to develop. Part of this is because the InfoPath client application is a very rich form development tool. Part of it is because the SharePoint Products and Technologies product team enhanced the interaction capabilities between InfoPath forms and the workflow host. As you will see, this is the benefit of using InfoPath forms.
A detailed review of the process for building an InfoPath form for use in a workflow is beyond the scope of this article. For more information, see Additional Resources in this article.
ASPX Forms
Although all workflow forms are ultimately delivered through an ASPX page, the difference here is that in this case, you must build the form manually in Visual Studio. Although this is not a difficult task, it is still more difficult than an InfoPath form. The hard part lies largely in the back-end form processing. The front end is simply building an ASPX form, which is not very difficult or different from any other ASPX form you have ever built. As with the InfoPath forms, a detailed description of the process is beyond the scope of this article.
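That said, a brief sketch of the back-end piece may help orient you. When a hand-built ASPX task form is submitted, it typically hands the collected values to the workflow by calling SPWorkflowTask.AlterTask. The following is a hedged example; the control and variable names (btnComplete, taskListItem) are hypothetical:
using System;
using System.Collections;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

// Code-behind for a hand-built ASPX task form.
protected void btnComplete_Click(object sender, EventArgs e)
{
    // Values the workflow's OnTaskChanged activity will receive.
    Hashtable taskData = new Hashtable();
    taskData["Outcome"] = "Approved";   // custom key read by the workflow
    taskData["Status"] = "Completed";

    // taskListItem is the SPListItem for the workflow task being edited.
    SPWorkflowTask.AlterTask(taskListItem, taskData, true);
}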
Workflow Security
Security in SharePoint workflows is a double-edged sword. On one side, it is a non-event. All Visual Studio workflows run as the System Account so that the workflow can do whatever it needs to: reading and writing lists and libraries, manipulating content and structure, and so on. You do not have to worry about permissions or impersonation. On the other hand, all workflows run as the System Account and can do anything. This is a problem if it exposes functionality that you do not want your users to have.
From a design perspective, you must think about both sides of this to make sure that your needs are met and, more important, that you do not unintentionally expose your customer’s or partner’s system to a security vulnerability. To manage this part of the design correctly, you must look at your workflow from a security perspective after you map out the process itself.
Look for areas in which you are reading from, or writing to, your core application—either through Web services or through an object model API. Look for places where you are accepting user input, and make sure that you properly analyze and encode it before you interact with it, process it, or store it.
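As a small illustration, text captured from a form field should be encoded before it is written back to a list or rendered. A hedged one-liner (rawComment is a placeholder for any user-supplied string):
using Microsoft.SharePoint.Utilities;

// HTML-encode user input before storing or displaying it.
string safeComment = SPHttpUtility.HtmlEncode(rawComment);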
Also, you must consider the fact that for your workflow to run in the customer environment, any custom activities that you build must be installed on the customer’s server. There is nothing to prevent a customer from using one of those activities in a workflow of their own. Does this present a problem? Are your activities an exposure point that could give a customer access to information from your application that they should not have? Carefully consider what properties you expose on your custom activities to make sure that they have to be exposed, and ensure that they cannot be used in a way that was not intended by your design.
This security review is no different from a regular vulnerability assessment that you would do on any other part of your application. The important thing is to make sure that your workflow processes are included in the security assessment, especially since the workflow will run as the System Account.
Workflow Error Handling
Error handling is largely an implementation activity, but from a design and planning perspective, it is important to spend some time thinking about it. What areas of your process are most likely to present the potential for error? How will you handle errors when you catch them? How will you test various scenarios to ensure that your error handling is adequate? You should address these and other questions in your planning so that you have the answers readily available when it comes time to begin development.
In development, you must include error handling as part of the standard development process—not as a piece that is added later. Again, like many things here, this is not something unique to workflows running in an external environment. However, it is as important to have proper error handling in what may be considered an auxiliary process as it is in your main application. Think about errors, talk about errors, plan for errors and you’ll be more ready to handle errors.
One final element to think about regarding workflow error handling is that the final piece of error handling is completely out of your control. In SharePoint Products and Technologies, like any ASP.NET application, the final configuration of error handling is done in the Web.config file for the Web application. Your application cannot assume that the Web.config file is set up correctly. Other applications or inexperienced administrators could easily change the settings in Web.config and expose pieces of your application to scrutiny that you would rather not have. If for no other reason, this reason is enough to warrant the extra time required to do correct error handling.
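For reference, the relevant switches live in the <system.web> section of the Web application's Web.config file; SharePoint adds its own SafeMode element as well. A hedged example of locked-down settings (values shown are the SharePoint defaults, which your installer cannot assume are still in place):
<customErrors mode="On" />
<SafeMode MaxControls="200" CallStack="false" DirectFileDependencies="10"
          TotalFileDependencies="50" AllowPageLevelTrace="false">
  ...
</SafeMode>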
For more information about how to do error handling in workflows, see the Error Handling and Cancellation Handling sections in this article.
Workflow Dependencies
Finally, with all of the above out of the way, it is time to look at the dependencies that your workflow might have. Do you need to have a certain list available? Does it need to be at a predefined URL? Do you need a connection string from Web.config? How about configuration information from a list or some other source?
As part of your design and planning, you must list these dependencies, the implications of them not being available, and possible courses of action if they are not. Error handling is driven, in part, by this list. But more important, it touches upon your strategy and standards for defensive coding and your plan for making your process self healing.
For now, just examine your process and list every dependency, no matter how small. Later, this article covers some defensive coding strategies for dealing with unmet dependency situations and some approaches for preventing problems from occurring in the first place.
Planning and Designing Summary
While the preceding discussions are not unique to the process of planning a workflow targeted towards an external environment, they are the ones that are important to nail right away. A mistake here in any development process is problematic. In a workflow to be run in an external environment, they can be disastrous.
The rest of your design and planning process is going to be largely uneventful, no different from any other workflow. You need to think about all of the items that are unique to your process and your application. Completely map out your processes; know what will happen at every decision point or at every event transition.
With all of that said, there is one more important decision item to take care of. It is during this step of the process that your organization must make a difficult decision. After the initial design is complete, you must make a preliminary decision whether or not to continue. If your initial list of requirements is too cumbersome and unrealistic, and your design is too complex to be readily developed and supported for the lifetime of your product, perhaps it is unwise to continue. We will cover ways to avoid this throughout this article, but if it is the case, an alternative may be to deliver functionality in the form of modular workflow components and let your customers or partners develop their own full workflows.
Defensive Coding
There are few places where practicing good defensive coding is more important than in a collaborative application running in an environment over which you have no control. Any number of things could happen after your workflow is installed and activated that would render it totally, or perhaps worse, partially non-functional. Potentially dozens if not hundreds of people have the ability to break your workflow, either maliciously or accidentally, and you can do little to stop them. For this reason, your workflow needs to be cautious, non-trusting, and as much as possible, self-healing.
We’ll cover some techniques for dealing with this potential problem in the following sections. As with many other things in this article, some of these elements are not unique to workflow development. They are simply good programming practices. The twist added by workflow is that you are working in a highly visual environment. It may not occur to you to think along these lines because a large part of what you’re doing does not involve writing code—it is simply configuring pre-built activities.
Verify Before Use
What happens to a workflow if the history list is deleted in the middle of processing? What if a user or another workflow deletes the list item that your workflow is processing against? These are just two very simple examples of why it is important to verify your objects before you use them. This is going to be different from simple error handling because ideally you want to try some sort of recovery that lets you continue processing. If a list you need does not exist or was deleted, perhaps you can recreate it. If a document is locked, you can wait for it to become unlocked, or you can perhaps notify the user who has it locked that you are waiting for it to be available. There are any number of possible paths you can take if we can first identify a problem before it is too late.
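To make the "recreate it" idea concrete, here is a hedged helper sketch (the list name and description are placeholders) that verifies a list exists and rebuilds a bare version when it does not:
// Verify that a list the workflow depends on exists; recreate it if not.
private SPList EnsureList(SPWeb web, string listTitle)
{
    SPList list = null;
    try
    {
        // The indexer throws ArgumentException when the list is missing.
        list = web.Lists[listTitle];
    }
    catch (ArgumentException)
    {
        // Recreate a bare version of the list so processing can continue.
        Guid newListId = web.Lists.Add(listTitle,
            "Recreated automatically by the workflow",
            SPListTemplateType.GenericList);
        list = web.Lists[newListId];
    }
    return list;
}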
The problem is that you are working with pre-built components. If, for example, you are using the out-of-the-box LogToHistoryListActivity, how can you ensure that the History list for the workflow (which is not known until run time) exists and is available?
There are three approaches to this and other similar problems that fall within the purview of verify before use. The first two of these are available for both sequential and state machine workflows. The third option, available only for state machines, is one of the primary reasons that state machines are the preferred approach for building rock-solid workflows.
Make judicious use of Code activities. If you put one of the out-of-the-box Code activities before every single activity in your workflow that reaches outside of itself or your workflow, you could verify that things are as you need them to be. This has the serious downside of instantly doubling the number of activities in your workflow and muddying the presentation in the designer.
Most activities in your workflows (at least the out-of-the-box activities) have a MethodInvoking property. This property lets you specify a method that the workflow host will call just before it runs the activity. You can add whatever code you need into this method. That code executes just prior to the activity. It has the same effect as placing a Code activity before every other activity in the workflow without cluttering up the designer with multiple extra activities (a short sketch of this option appears after this list). This approach is nearly perfect for your needs, except for two problems:
There is no corresponding method that occurs after the activity is otherwise finished processing. Therefore, anything you do after the activity must be placed into the MethodInvoking property of the next activity. This is not very intuitive, and it is potentially a big problem if the next activity is not known at design time (conditional branching) or if the workflow is updated.
If you uncover a problem that you cannot remedy by using code, the only available options in the MethodInvoking property are to throw an error (we will cover error handling shortly) or cancel the workflow. Neither is particularly appealing.
Take advantage of the fact that state machines have essentially multiple individual sequential workflows—StateInitialization, EventDriven, and StateFinalization—contained within each state. This lets you check the environment in the StateInitialization activity before the main activity in the state executes, similar to the MethodInvoking option. However, unlike with MethodInvoking, you can also do some processing after the activity processes—via StateFinalization—or you can easily switch to a different state and therefore a different stream of processing.
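To make the second option concrete, here is a minimal, hedged sketch of a MethodInvoking handler (logHistory is a hypothetical activity name) that reuses the EnsureList helper shown earlier:
// Runs just before the logHistory activity executes.
private void logHistory_MethodInvoking(object sender, EventArgs e)
{
    // Verify (and if necessary recreate) the list the activity depends on.
    EnsureList(workflowProperties.Web, "Workflow History");
}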
Of the three approaches, the third is fairly obviously the best because it gives us the most flexibility. An example might help clarify this approach.
Imagine that you have a workflow operating on a document. At a certain point in the process, the status of the document changes to “Under Review.” There is a column in the SharePoint document library where you record the status, so the workflow has to update the value in the column. Your workflow designer looks something like Figure 4.
Figure 4. Subset of the Document Review workflow shown as a simple sequential workflow
This part of the process is simple. The review of a document starts, the workflow sets the document status to “Under Review,” and it waits for the review to be finished. The code in the set_StatusCode activity is as follows.
this.workflowProperties.Item["Status"] = "Under Review";
this.workflowProperties.Item.Update();
You test the workflow, and it runs without a problem in your development, Test, and QA environments. So you package it and release it to a pilot group of customers.
Within a week of releasing the code, you start receiving bug reports back reporting failures. Sometimes the process seems to work; sometimes it fails. Can you see the problem? Can you see why it is destined to fail intermittently?
Here’s the problem: every once in a while when your workflow tries to update the status of the document, some other user has the document checked out. When that happens, the workflow fails. Even running as the System Account does not let you sneak past the “Checked Out” barrier.
The solution to this problem is to check whether the document is checked out before you try to update it. If it is not, you should check it out, so that no one else can access it while you work. If it is checked out, you can do any or all of the following:
Send e-mail to the user asking him or her to release the document.
Switch states to one in which you are waiting for the item to be released.
Force the document to be checked in so that you can check it out.
Any other action necessary for our process.
Although you could put this code directly into your Code activity, that approach buries the functionality in code and makes things a little harder to maintain. Instead, a better approach might be to use the power of a state machine and spread the functionality across StateInitialization, StateFinalization, and multiple states.
Let’s examine this approach. First of all, the designer for this solution looks like Figure 5.
Figure 5. A subset of the state machine to demonstrate proper defensive coding in a workflow
In the interest of keeping things focused, only those states that are necessary for this part of the example are shown here: UnderReview and WaitForCheckIn. This part of the process shows the workflow coming into the UnderReview state (the arrow at the top left that points downward). By viewing this simplistic subset of the process, you can see that there are two basic paths through the process:
The “main” flow of <Enter> … UnderReview … <Finish>
The alternate flow of <Enter> … UnderReview … Wait … UnderReview
In the case of the second path, if the document is not available to check out, you enter this secondary flow and wait for it to be available. When it is available, you loop back and reenter the UnderReview state. This can continue for as long as necessary for the document to become available. Every change to the document that is saved back into Office SharePoint Server triggers a re-check, but the process only continues when you can obtain exclusive access to the document.
Starting with the UnderReview state, the StateInitialization activity (initStateUnderReview) is where you see whether you can check out the document to update its Status column. The internal process of this activity looks like Figure 6.
Figure 6. StateInitialization designer for the workflow
The heart of this part of the process is the top Code activity (codeDoCheckOut) and the ifElseActivity (ifDocCheckedOut). The Code activity is responsible for seeing if a user currently has the document checked out. If the document is not checked out, the Code activity checks the document out to the System Account. In either case, the name of the user that has the document checked out (either a user or “System Account” if the workflow checked it out) is then stored in a class-level string variable.
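The body of codeDoCheckOut might look something like the following - a hedged sketch that assumes the workflow item is a document in a library:
private string sCheckedOutBy = string.Empty;

private void codeDoCheckOut_ExecuteCode(object sender, EventArgs e)
{
    SPFile file = workflowProperties.Item.File;

    if (file.CheckOutStatus == SPFile.SPCheckOutStatus.None)
    {
        // No one has it; take the check-out as the System Account.
        file.CheckOut();
    }

    // Record who holds the check-out now - "System Account" if we got it.
    sCheckedOutBy = file.CheckedOutBy.Name;
}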
After the Code activity does its thing, the process continues on to the ifDocCheckedOut activity. As Figure 6 shows, this ifElseActivity has two branches. The left side of the workflow process was able to successfully check out the document, otherwise the right side occurs. This check is performed with a very simple set of code in the Condition for the ifProcess branch of the ifElseActivity.
private void verifyCheckedOutTo(object sender, ConditionalEventArgs e) { if (sCheckedOutBy.ToLower() == "system account") { e.Result = true; } else { e.Result = false; } }
If the string variable that contains the name of the user that has the document currently checked out indicates System Account, the condition is true; otherwise it is false. Based upon this, the appropriate branch of the IfElseActivity executes.
The first two activities in the ifProcess (left) branch do just what their names imply: set the status of the document and check it back in. Because you have made it to this point in the process, you can be certain that you will not have a problem updating the properties of the document because you have already checked it out. After you set the status as you need to, you check it back in so that other users, or other processes, can access it.
The next activity, setStateFinal is really the next piece that is of interest. Again, the activity is aptly named. It is responsible for transitioning our sample workflow to its final (completed) state. Because you have successfully updated the document properties, your work here is finished. The process can now continue on to its next step, or else, as in the case of the sample process, simply end.
The final piece of the sample that we need to examine is the other branch of the ifDocCheckedOutifElse activity. Looking at this, we can see what happens if our Code activity earlier was unable to check out the document.
Similar to the last activity of the first branch, this branch contains a SetState activity. In this case, it transitions our process to a new state: stateWaitforCheckIn. Here again, this state makes use of the multiple sub-processes available with the state machine, in this case, StateInitialization and EventDriven. In the StateInitialization phase, we take our actions to try to get the document checked in. In this example, that means sending e-mail to the person who has the document checked out. In your case, it could be forcing the document to be checked in, or any number of other possibilities.
The nature of a state machine means that the next phase of this state only executes when the event it is tied to triggers it—hence the event part of the EventDriven name. In this example, we have configured this event to be a change to the document the workflow is running against by using an instance of the onWorkflowItemChanged default activity, the first activity in Figure 7.
Figure 7. EventDriven process in the workflow fires when the payload document is changed
When this activity occurs, something has changed with the payload document; we do not know exactly what has changed, because any change on the document will trigger this activity. As you can see in the previous figure, this part of the EventDriven activity is fairly straightforward. On any change to the document, set the state back to UnderReview.
The first thing the UnderReview state does is check whether or not it can check out the document. However, this time through, because the document is available to be checked out (assuming that the user who had it checked out previously has in fact released their hold on it), the second branch of the IfElse activity fires, and the workflow is transitioned to the Final state. Now you're finished.
Although this is a simplistic example, it demonstrates well the idea of defensive coding for workflows. As your “main” flow of the process progresses, it must check to ensure that it can continue. If it cannot, it should divert processing off to a secondary (or tertiary, and so on) flow that can deal with the blocking conditions and when appropriate, redirect processing back to the main process flow to “try again.”
Proactive Error Avoidance
Perhaps Proactive Error Avoidance sounds a little grandiose for a workflow, but in reality, it is nothing out of the ordinary—nothing that any other application you write does not likely do already. All it means is that your application actively tries to prevent errors from occurring. Consider this your first line of defense, but it in no way diminishes the need for good defensive coding. It is also important to note that this activity takes place outside of the workflow itself.
The adage that states the best defense is a strong offense applies here. The best way to prevent problems from disrupting your workflow process is to take steps to prevent the problems in the first place. The key to this element is to know what your dependencies are. This is an element that we discussed earlier. It is now time to examine your dependencies list more closely.
Look for items that you can deal with programmatically, independent of your actual workflow process itself. These are going to be proactive attempts at heading off problems before your workflow is running, as opposed to the more reactive approach of the defensive coding strategies discussed above. For the most part, these items fall into the “required item x does not exist” category: something that your application or workflow created or installed has since been deleted or otherwise made inaccessible.
The mechanism for this type of approach is based entirely upon SharePoint event receivers. For example, if your workflow is dependent upon a certain list item existing, it is important that your workflow installer:
Creates or verifies the existence of the list.
Adds or verifies the item.
Registers an Event Receiver to prevent the item from being deleted.
You can take a similar approach for any of the items supported by SharePoint Event Receivers:
List Items
Webs
Sites
List Columns
For example, if your workflow creates a custom column named InternalID that links items in a SharePoint list to your core application, you want to make sure that this column is not removed from lists. To do this, your application should include an Event Receiver such as the following.
public override void FieldDeleting(SPListEventProperties properties) { base.FieldDeleting(properties); if (properties.FieldName.ToLower() == "internalid"); { properties.Cancel=true; properties.ErrorMessage = @"The InternalID field is required by the Contoso CaseTrak application. Please see the CaseTrak documentation for options and instructions for deleting this field"; } }
Now, when a user (any user, even an administrator) tries to delete this column from your list, they see the following error page.
Figure 8. Error page shown when Event Receiver prevents deletion of a column
Furthermore, when your workflow runs, it is quite likely that the InternalID field is still available to you. Mission accomplished.
Supporting Customizations
Regardless of the nature of your application, one size does not fit all for workflows. It is important to support a certain level of customization, which will likely take one of the following forms:
Configuration settings to enable your workflow to run in different environments
Customizations to alter the flow of your process based on the customer’s needs
The recommended approach is to support both options
Configuration Settings
Supporting configuration settings means that you can let your workflow execute in different environments without having to alter the workflow itself. It is fairly obviously a good idea. Essentially this means that instead of hard-coding settings, you retrieve them from a source that can be edited externally from your workflow. These types of settings could include things such as:
Server names (E-mail, Database, Application, and so on)
Domain names
Impersonation account settings
SharePoint Server details: site collections, webs, lists, URLs, and so on
Logging details (level, location, and so on)
Administrator names
This is just a sample list—you can add to it anything you need.
In looking at this list, it is evident that some of this information is available through the SharePoint Server object model. This is a perfectly acceptable way of supporting configuration settings. For example, if you need to look up an administrator name, you can call SPSite.Owner or iterate through the membership of the SPWeb.AssociatedOwnerGroup property.
The problem comes in for settings that are not available in SharePoint Products and Technologies, for example, impersonation information, domain names, and logging details. Where can you get that information? The answer is simply from a configuration file—no different from any other ASP.NET application. Provided that the information is neither too voluminous (large Web.config files are inefficient) nor too volatile (to edit the file, you must have rights on the server itself), the best place to store it is right in the Web.config itself. If this is not an option, any configuration file is sufficient.
Flow Control
Flow control settings are somewhat different from configuration settings. In configuration settings, the settings information is likely to be stored one time and accessed by every workflow in your farm. In flow control settings, the information is likely to be different for every workflow in your application. Because of this, the best place to capture this type of configuration information is on a workflow form (either Association or Initiation). The information is then stored with the workflow and accessible as follows.
XmlSerializer serializer = new XmlSerializer(typeof(InitForm)); XmlTextReader rdrInitForm = new XmlTextReader(new System.IO.StringReader (workflowProperties.InitiationData)); InitForm frmInit = (InitForm)serializer.Deserialize(rdrInitForm); sMyInitFormPropertyValue = frmInit.MyInitFormProperty;
(InitForm is the class name of your workflow initiation form.)
With what type of information might this option be concerned? Any of the following are possibilities:
Reviewer names
Approver names
Workflow instructions
Timeframes for workflow steps, task escalation, and so on
You could add to this list any information known to either the person initiating the workflow or the administrator who initially configures it. It all depends on your workflow and the business process it is supporting.
Rules Engine
One final method for customizing your workflow is far more advanced than either of the other two. In this case, you take advantage of the Rules Engine built into Windows Workflow Foundation and a custom application that enables the rules to be managed externally from the workflow itself. This approach is most appropriate in threshold or timeframe based customizations, such as escalating this task to the user’s manager after x days or assigning this task to the department manager when the purchase order amount is greater than x dollars. A business user can edit the value of x in either scenario without having any contact with or impact upon the other pieces of the workflow. It is also something that your end users can manage separately for different instances of a workflow.
Building a rules management application is beyond the scope of this article. See the Additional Resources section in this article for a link to a sample application that includes full source code and a description. If externally manageable threshold-based rules are supported, or could be supported, in your process, I recommend that you look into this further because it provides your workflows with a significant amount of flexibility at minimal cost.
When Things Go Wrong
Debugging and error handling both rear their ugly heads when things do not go as you planned them in your workflow. So now what do you do? If you are smart, you throw in the towel right now and take up something far more conducive to good mental health—like being a crash test dummy. If you are a glutton for punishment, you try to figure out what went wrong and fix it.
OK, maybe it’s not quite that bad.
Debugging
Let’s start with simple debugging. Fortunately, there is nothing really any different from any other Visual Studio debugging. Attach Microsoft Visual Studio to the W3WP process (with a type of workflow), set a breakpoint where you need it, run your code by launching an instance of the workflow on a document, and your breakpoint is hit just like any other assembly. You have the full power of Visual Studio debugging at your fingertips.
There are a few unique things that you should know:
If your workflow fails with an error of Failed on Start, it typically means that your workflow never started, so debugging does not get you anywhere. Look for reasons that would prevent the workflow host from finding and starting your assembly. Check workflow.xml, assembly versions, strong names, and so on. This is likely a deployment issue rather than a code issue.
If your workflow fails with an error of Error Occurred, it typically means that your workflow started but encountered an unhandled error while processing. Debugging will likely help in this case, so start the debugger and start stepping through your code. When you find the problem, spend a few minutes figuring out how you can blame it on “bad requirements.”
That one was easy.
Error Handling
First, you must get your terminology straight. In workflows, you do not have “errors,” because error sounds much too negative, as if you actually did something wrong. Instead, you have faults. Isn’t that a much kinder, gentler way of saying somebody messed up?
So, you have fault handling. In workflows, there are two distinct means of handling faults. If you are writing code (either in a Code activity or in a MethodInvoking handler for some other activity), you make use of the standard Microsoft .NET Framework try...catch...finally construct. You would handle these types of errors according to your company’s error-handling policies.
It is different when you are working within the graphical workflow designer. The goal remains the same: you want to handle your errors and let your workflow continue as best as possible, or at a minimum, to shut down gracefully. For information about how to configure fault handling in your workflows, see the Additional Resources section in this article.
Cancellation Handling
Because workflows are inherently long-running processes, there is the possibility that the workflow will be stopped by a user or administrator. Because of this, we need to make sure that we can handle the premature cancellation of our workflow. It would not look good if our workflow started having problems or causing problems because someone canceled it in mid-execution; especially when canceling workflows is an option available right in the user interface. We support allowing our workflow to be canceled by configuring a cancellation handler.
Like the fault handler, the cancellation handler is handled through the visual workflow designer. For information about how to configure cancellation handling in your workflows, see the Additional Resources section at the end of this article.
Logging
Logging is important in any application—more so in an application that is running in a foreign environment. The difference is that, in this type of environment, you are not around to see what is going on. Your application must record the information for you to look at later in the event of a problem. Similarly, if an administrator at the client or partner site is troubleshooting a problem with your software or with the environment in general, the administrator may need to be able to see what your application is doing as part of their troubleshooting.
There are two logical places for your workflow process to record information about its processing: the Windows Event Log and the SharePoint ULS Log. How do you decide which one to use? In general, the answer is straightforward. Use the ULS logs for tracing type information and all errors; use the Event Log only for system errors or critical errors that affect SharePoint Server as a whole or prevent your entire workflow from running on any document.
The difference between internal errors and system errors is somewhat vague and probably depends upon your application and your process. As a rule, an internal error is something that can only reasonably be expected to affect your application and is likely investigated only as part of troubleshooting a problem with your specific process. Examples include:
Inability to access a document or list item.
SharePoint Server permission problems.
Incorrect or missing data (e-mail address, and so on).
System errors, on the other hand, affect more than just your application. System errors may have implications for the whole SharePoint Server environment or even cross out of the SharePoint Server space and impact other unrelated applications. Examples could include:
Your main application being inaccessible to your workflow components.
Databases not being available.
Active Directory Domain Services (AD DS) problems.
Other applications not being available.
Losing Internet or network connectivity.
One important point to keep in mind is that company IT departments often have processes to monitor the Event Logs on their servers. It is almost a foregone conclusion that they do not monitor the SharePoint Server ULS logs the same way (not that they should not). If your application encounters a situation that is larger than your application or is of such import that you need to raise a flag, writing the information to the Event Log is the way to go.
For information about writing to the Event Log or the ULS Log, see the Additional Resources section in this article.
Phoning Home
For problems strictly within the purview of your application, you might want to consider one additional step: having your application automatically send the information about the error back to your company. This is a potentially slippery slope, fraught with potential legal and privacy implications, but one that you should consider. At the very least, this must be an opt-in mechanism that customers must agree to and one that they can easily change their minds about later.
Consider the following options with this approach:
Frequency of reporting Should your workflow application report the problem immediately or wait and send error reports in a batch?
Content of the report Can customers configure exactly what data points (on an individual basis) are included in the report, or can they simply configure it by level (minimum, typical, full)?
Transport mechanism How will the information be transmitted back to your company—e-mail (probably a bad idea), HTTP, FTP, and so on?
**Customer notification **Will customers be notified (and perhaps copied on) error reports?
Level of detail in report Will the specific error information be reported back to your company, or will you just be notified that an error (or perhaps a type of error) occurred and the details simply recorded in a file that can be retrieved at a later date?
Other items will be included on that list of options depending upon your situation. The answers to all of these questions will be highly dependent and likely left as options for clients or partners to set up according to their needs or policies.
However, this type of functionality is a potential treasure trove of information both for making your product better and for outstanding customer service. For that reason alone, you should consider it. It is not unheard of: Microsoft includes a similar facility for many of their products, including SharePoint Products and Technologies, and many other companies do also.
Deployment
When you are deploying to a customer or partner environment, it is important to have a simple deployment process. If the workflow is part of a standalone deployment, it must be easy for customers to install your workflow. If the workflow is installed with the rest of your application (or a service pack, for example), it must fit seamlessly into your current installation process.
The biggest question to be answered first is if it were not for the workflow elements, would your core application be installing anything to the SharePoint Products and Technologies environment? The answer to this question will drive much of the deployment plans for your workflow. If the answer is yes, your workflow deployment must fit in seamlessly with the rest of your SharePoint Products and Technologies deployment; it cannot feel like an add-on. If the answer is no, your standalone workflow installer must still feel logically like it belongs to your application. You cannot have a highly sophisticated, highly customized installer for your core application and simply a few batch files, and a couple of manual steps for your workflow installer. The impression of your workflow will be that it is not part of your core application and is merely an afterthought that does not receive a whole lot of attention from your company. Although that may not be true, it will be the perception, and unfortunately, in many cases, perception is reality. Therefore, make sure that you put your best foot forward with your workflow installer.
Because it is unlikely that your core application is installed with a batch file, your workflow cannot be either. Unfortunately, the current state of SharePoint deployment practices depend highly on the simple batch file executing a series of Stsadm commands. Fortunately, a few efforts are under way to make SharePoint solutions more acceptable in the deployment world. For more information, see Additional Resources.
Regardless of how your workflow is actually deployed, you must take certain steps to package it for deployment. Any of the deployment options discussed in this article or in the Additional Resources section require that you package your workflow as a SharePoint solution (WSP file) containing a collection of Features. Explaining Features and Solutions in great detail is beyond the scope of this article, but the rest of this section covers them briefly, and describes how to get to the point of having a packaged solution. It assumes that your workflow is finished and fully tested, your assemblies are strongly named and compiled, your workflow forms (either Office InfoPath or ASP.NET) are complete, and any ancillary files or resources are prepared.
At a high level, the steps are as follows:
Creating Features
Creating solutions
Preparing for deployment
Features
A workflow Feature is only slightly different from most other Features in SharePoint Products and Technologies. There are only a few unique elements, but for the sake of clarity, this article covers the whole Feature-building process. A SharePoint Workflow Feature consists of at least two XML files—Feature.xml and your Feature element manifest, typically called Workflow.xml.
Feature.xml
Feature.xml is the file that tells SharePoint Server about your Feature. The structure of this file is largely the same for every Feature you create, differing only in the actual values for the elements. A complete Feature file looks like the following example.
<Feature Id="06de9857-23cb-459e-9b5e-8732fd15b507" ImageUrl=”DefensiveCodingSample_Logo.jpg” <ElementFile Location="Forms\InitForm.xsn"/> </ElementManifests> <Properties> <Property Key="GloballyAvailable" Value="true" /> <Property Key="RegisterForms" Value="Forms\*.xsn" /> </Properties> </Feature>
The elements of Feature.xml are shown in Table 1.
Table 1. Feature.xml elements
A skeleton Feature file is created as part of the Visual Studio 2008 project template, so all that you have to do is fill in the blanks. There are other options available for your Feature file, some of which may apply to your workflows for either Windows SharePoint Services or Office SharePoint Server. For more information, see the Additional Resources section in this article.
The Feature file is now complete so you can move on to the element manifest.
Workflow.xml
Although it can have any name—as long as the file name matches what is specified in Feature.xml—the element manifest for workflows is typically called workflow.xml. The purpose of this file is to provide SharePoint Server with the specific information for the workflow. As with Feature.xml, the structure of this file is largely the same for every Feature you create, differing only in the actual values for the elements. A typical workflow.xml file looks like the following example.
<Elements xmlns=""> <Workflow Name="DefensiveCodingSample" Description=" This feature is a workflow that demonstrates proper defensive coding strategies” <Categories/> <AssociationData /> <MetaData> <AssociateOnActivate>False</AccociateOnActivate> <StatusPageUrl>_layouts/WrkStat.aspx</StatusPageUrl> <ExtendedStatusColumnValues> <StatusColumnValue>Under Review</StatusColumnValue> <StatusColumnValue>Waiting For Check In</StatusColumnValue> </ExtendedStatusColumnValues> <!--Include the following if using InfoPath forms--> <Instantiation_FormURN>urn:schemas-microsoft- com:office:infopath:InitForm:-myXSD-2007-11-06T15-31- 39</Instantiation_FormURN> </MetaData> </Workflow> </Elements>
Table 2 describes the pieces of this file.
Table 2. Workflow.xml elements
As with the Feature.xml, Visual Studio 2008 creates a skeleton version of this file for you, and all you need to do is fill in the appropriate pieces. There are a couple of other options available for your element manifest file. For more information, see the Additional Resources section at the end of this article.
Solutions
Building a SharePoint solution is not a difficult process, but it is very exacting. You must set up all the pieces exactly right to build the .wsp (solution) file correctly and enable your solution to be deployed properly. We will walk through the process of building the .wsp file at the end of this section. However, before going there, it is important that you understand the purpose and function of a solution package and take a look at the elements that make up a solution: the manifest and the directive (.ddf) file.
Solution packages are simply a mechanism for deploying new functionality to SharePoint Products and Technologies. They pick up a few bells and whistles along the way, and there are a few benefits to using them, but fundamentally, that is all they are; not that it isn’t enough, and it is considerably better than what has been available in the past, but they’re not solving global warming or your grandmother’s rheumatism. However, as a deployment mechanism, they rock, and if you want your workflow components to look and act like a first-rate member of your application you have to use them. No quibbling, so let’s look at how to get you to this utopia of application deployment.
There are two ways to build your solution: manually or by using tools. Normally, I’m a big fan of templates—they can save you a lot of time and effort. However, I think it is more important to understand what those tools are doing for you, so we will cover the manual process here. The Additional Resources section at the end of this article provides links to some tools that can take a lot of the pain out of this process.
Manifest File
The manifest file defines the solution and lists all of the elements that make up the solution. Like Feature.xml and Workflow.xml, the manifest for your solution is an XML file, this time adhering to the schema specified in Solution Schema. The following shows a sample manifest file.
<Solution xmlns="" SolutionId="78de9c59-2fab-4a1e-9b7a-1732431cbe07" > <Assemblies> <Assembly DeploymentTarget="GlobalAssemblyCache" Location="DefensiveCodingSample.dll" /> </Assemblies> <FeatureManifests> <FeatureManifest Location="DefensiveCodingSample\Feature.xml"/> </FeatureManifests> </Solution>
Table 3 provides details on the elements of Manifest.xml.
Table 3. Manifest.xml elements
Directive (DDF) File
The .ddf file is the final piece necessary for building your solution. It is this file that is actually used to define the structure of the solution file. During the process of building your .wsp (solution) file, the MakeCAB utility (more on this in a moment) reads the .ddf file and packages all the files for as specified in the .ddf file.
Although your solution file has a file extension of .wsp, it is really nothing more than a standard CAB file with a different extension. If you change the extension back to .cab, you can open the file in Windows Explorer and examine the contents.
The following shows a sample .ddf file.
.OPTION EXPLICIT .Set CabinetNameTemplate=DefensiveCodingSample.wsp .Set DiskDirectoryTemplate=CDROM .Set CompressionType=MSZIP .Set UniqueFiles="ON" .Set Cabinet=on .Set DiskDirectory1=Package manifest.xml manifest.xml GAC\DefensiveCodingSample.dll DefensiveCodingSample.dll TEMPLATE\FEATURES\DefensiveCodingSample\feature.xml DefensiveCodingSample\feature.xml TEMPLATE\FEATURES\DefensiveCodingSample\workflow.xml DefensiveCodingSample\workflow.xml ;*** add Workflow forms: TEMPLATE\FEATURES\DefensiveCodingSample\Forms\InitForm.xsn DefensiveCodingSample\Forms\InitForm.xsn
State of the Workflow
SharePoint workflows report their status by creating a column in the default view for the list or library that the workflow is running in. The column is named with the name of the workflow. The status is also shown on the workflow status page. Except for errors, the default values typically seen for status are In Progress or Completed. For a sequential workflow, this is likely sufficient, and it is really all one would expect because there really is not any other defined state for the workflow to be in.
State machine workflows, on the other hand, by their very nature have additional defined states. Wouldn’t it be great to report a status of the actual state the workflow is in at that moment? Well, it’s time to celebrate, because you can do just this, quite easily.
Previously, when we were covering the contents of the workflow.xml file, we looked briefly at the <ExtendedStatusColumnValues> tag within the <Metadata> element. By specifying custom values here that match the names of your states, you can set those values into the status column in your list. For example, you can specify values such as the following.
<ExtendedStatusColumnValues> <StatusColumnValue>Under Review</StatusColumnValue> <StatusColumnValue>Waiting For CheckIn</StatusColumnValue> </ExtendedStatusColumnValues>
Now, in your workflow, you can reference those values and have them appear in the status field. You do this by using the “other” Set State activity—that is, not the one used by a state machine workflow to transition from one state to another. It is the activity in the toolbox that has an icon that looks like this:
After you add that activity to your workflow at the appropriate spot (typically in StateInitialize), you can set your status to one of the new entries you added in workflow.xml by creating a MethodInvoking method and adding the following code to it.
((Microsoft.SharePoint.WorkflowActions.SetState)sender).State = ((Int32)SPWorkflowStatus.Max) + 1;
You explicitly cast to the SetState activity that you want (referencing the full name of the class to avoid confusion with the state machine SetState activity). The +1 added to the SPWorkflowStatus.Max indicates the second entry in your workflow.xml ("Waiting for Check In"), because the list is zero-based.
Figure 9. Waiting for check-in status in Workflow Information
The top of the file sets up some variables (see Table 4) that configure how the .wsp file is generated; they are generally self-explanatory.
Table 4. Variables in .ddf file that configure generation of the .wsp file
Typically, this top part of the .ddf file is the same (except for the CabinetNameTemplate parameter) for all solutions that you build.
The rest of the file is unique to the specific solution that you are building. It specifies the files that are packaged into the .wsp in addition to the following:
The first part of each line is the source location so that MakeCAB can read them to add them to the CAB. Note that this location is relative to the current directory, which is typically going to be either the location of the DDF or else the location of the MakeCAB utility.
The second part of each line is the destination location of each file within the CAB itself. In the example above, the Manifest.xml and our DLL will each be in the root of the CAB, the Feature.xml file will be in the root of our Feature (“DefensiveCodingSample”) folder, ACTIONS file will go into a folder path of 1033\Workflow. Remember that when the Solution is deployed, this location is referenced in the TemplateFile element of Manifest.xml and is relative to the \Template folder.
For the files portion of your .ddf file, it is important that you include each source/destination pair on one line. They are broken into two lines in our listing here just for readability.
To understand the file system structure that is required for this .ddf file to work properly, examine Figure 10. It shows a view of our project with all of the folders and files laid out as the DDF and MakeCAB utilities expect to find them. Notice the Utilities folder, which contains a few batch files for building and testing your solution file, and also the MakeCab.exe application. See Additional Resources for information about how to download a copy of MakeCAB.exe, as part of the Microsoft Cabinet SDK. The rest of this solution is available in the source code download for this article.
Figure 10. The directory structure required for the solution
With that, your .ddf file is complete. You can move on to building your .wsp file.
Building the Solution File
The last step to cover is actually building the solution (.wsp) file itself. This article has covered how to construct the files and the file structure you need for this. Now you can create the file itself. The good news is that you can handle building the .wsp file with two lines in your MakeCAB.cmd file (available in the source code download for this article).
cd ..
Utilities\MakeCab /F DefensiveCodingSample.ddf
After you run this batch file, your .wsp file is available in the DeploymentFiles\Package directory in your project folder. If you want to view the contents of the .wsp file, you can change the extension to .cab and open it in Windows Explorer; just make sure to change the extension back when you are finished.
Wrapping Up Deployment
As mentioned at the beginning of our deployment coverage, it is important that the deployment process for your workflow look professional and look like it belongs with the rest of your application deployment. Because of this, the typical batch files used for deployment (including the Deploy.cmd and the UnDeploy.cmd files in the source code download, which are good for testing your solution package) are not sufficient. Instead, you must look into something a little more polished. See Additional Resources for links to a few options.
Documenting Your Workflow
As you probably expect, documentation is critical. It is rare that a company lets you run anything that is not well documented in their environment. Even if you could somehow go past that barrier, it is not in the best interests of your workflow to be undocumented.
Taking that one-step further, there should be at least five levels of documentation:
End-user documentation Information for the people who will use your workflow processes.
Administrator documentation Information for the people who will install, configure, and maintain, your workflow process in the client or partner environment.
Support documentation Information for people in your company who are responsible for assisting customers with problems or questions about your workflow.
Technical documentation Information for the developers and engineers who are responsible for maintaining, fixing bugs in, and enhancing your workflow.
Source code The ultimate documentation of any custom application is the source code. The barrier to entry for this documentation is fairly high, but the buck stops here.
The first two of these are obviously customer facing, so they must play the dual role of being both marketing material and support material. They must be well written, contain many pictures, and not make any assumptions as to the technical aptitude, ability, or interest of the reader. Although they obviously cannot be condescending, they also cannot assume that the client understands technology. So, for example, a sentence such as, "Upon reaching a state in which the process is waiting upon an external event, it will automatically be dehydrated by the workflow host and written to persistent storage until the event is raised, at which point it will be rehydrated and continue processing" is not appropriate for this type of documentation. (Come to think of it, a sentence such as that may not be appropriate for any type of documentation that you actually want people to read.) Make your customer-facing documentation clear and concise, and your workflow and application will stand out from the crowd.
User documentation needs only to help people use the workflows. If your workflows are configurable, this documentation probably has to be custom written for each installation. This is not to say that you cannot start with elements largely written or even templates that just have customization details filled in. However, it does mean that you must revisit this documentation for each client or partner. You may even want to consider making this online documentation that is deployed as part of your workflow and available to users directly on or from the screens in which they interact with the workflow. Primarily this means adding elements directly to the workflow forms or else providing the documentation in separate files and linking to those files from the workflow forms. The more context-sensitive you can make this documentation, the better. Rather than simply providing a monolithic HTML file, look for ways in which you can point users directly to the information about the specific screen they are on or area of the process they are in.
The administrator documentation probably does not have to be so customized. It must cover all the options for configuring and maintaining your workflow, but it is sufficient to have a worksheet included that records all the customizations that were made for the specific implementation.
The next level of documentation, support documentation, is targeted at internal personnel, so it does not need to have quite the same degree of marketing spin as the previous documentation. This documentation must be focused primarily on the "what" aspect of your application (what features and functionality are available) and some of the "how" (the steps necessary to achieve certain functionality).
The intended audience for the fourth type of documentation is strictly technical, and so the document must be as well. Technical documentation explains the "how" aspects of your workflow in painstaking detail. People reading this level of documentation understand technology and expect it to be the main part of what they read. There should be no high-level overviews, and no feature or functionality lists; for that information, they can read the other types of documentation.
The final type of documentation is the source code itself. Perhaps it is not typical to think of source code as documentation. But by this point, if the code is well written, commented, and maintained, that is exactly what it is. If the higher-level documents say that the application can perform function x, but it does not appear to do so, the final judge as to whether x is available is the source code itself.
Summary
To a certain extent, building workflows for a customer or partner environment is no different from building solid workflows for your company’s enterprise. You still must produce a solid design and deliver a solid product. The difference comes into play when you think about the lack of control you have in an external environment. Things that, perhaps, you can handle through business rules or separately set security in your own environment must be locked down programmatically for external environments.
In many ways, too, deploying to an external environment means that you cannot make assumptions about the environment. Each assumption highlights defined dependency requirements, and if you have too many, you will make the barrier to entry too high, and clients or partners will not deploy your solution.
We covered a lot of ground in this article. To summarize it all here would be difficult, but I will leave you with the following. The four most important elements of producing a workflow for a customer or partner environment are:
Planning Spend time thinking about what you are going to build and support before you go any further. Know and understand the strengths and weaknesses of SharePoint workflows and how they will affect your project.
Error Handling Just expect that errors will happen, and make sure that you can recover gracefully. Simple, intuitive error messages and an easy way for administrators at the client or partner site to troubleshoot will go a long way to enhancing the reputation of your product.
Security It just plain needs to be rock solid. Workflows introduce another potential attack vector for your application. Just make sure you lock it down.
Simplicity From deployment to administration to usage, if it is too complicated, people will not use it. Think about this as you progress through every step of your workflow development. Test for this. Look for ways to make forms, processes, or other user interactions easier. Let the application do the heavy lifting; do not make your users or administrators think too much.
Additional Resources
Throughout this article, I touched upon a number of topics that were not directly related to our task at hand. Rather than leave you at the mercies of a search engine, here are some links that will help you find the information you are looking for:
MSDN Code Gallery: Building and Distributing Workflows in SharePoint Products and Technologies
InfoPath Forms for Workflows
Blog: SharePoint and the Office System: Workflow
-
Deployment Options: SharePoint Solution Installer
Feature Element (Feature).
Workflow Definition Schema
Tools
WSPBuilder - Facilitates the generation of the manifest.xml, DDF, and WSP files.
STSDev - Includes a wizard to generate Visual Studio projects. Version 1.3 and later supports both sequential and state machine workflows.
MakeCAB: Microsoft Cabinet Software Development Kit
Event Receivers (Ted Pattison, MSDN Magazine, November 2007) | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/cc511934(v=office.12) | 2018-07-16T00:05:05 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['images/cc511934.d62e7e0e-8862-4b9c-b79d-fc2f20e99a0f%28en-us%2coffice.12%29.gif',
'A simple sequential workflow A simple sequential workflow'],
dtype=object)
array(['images/cc511934.76d2074b-d2b4-43a8-be91-b15092d7e883%28en-us%2coffice.12%29.gif',
'A simple state machine workflow A simple state machine workflow'],
dtype=object)
array(['images/cc511934.a99a1c57-8cd7-4f1e-b2be-f8cd5bb966c2%28en-us%2coffice.12%29.gif',
'Sequential workflows hidden inside each state Sequential workflows hidden inside each state'],
dtype=object)
array(['images/cc511934.f316cf13-d781-419a-be71-0b4a208bc992%28en-us%2coffice.12%29.gif',
'Subset of the Document Review workflow Subset of the Document Review workflow'],
dtype=object)
array(['images/cc511934.2d936911-c09a-437b-a379-27d08f06f12c%28en-us%2coffice.12%29.gif',
'Subset of State Machine workflow Subset of State Machine workflow'],
dtype=object)
array(['images/cc511934.652ca053-44cd-40b5-bab6-96dd5403ef3d%28en-us%2coffice.12%29.gif',
'StateInitialization designer for sample workflow StateInitialization designer for sample workflow'],
dtype=object)
array(['images/cc511934.56106387-8fb0-4c55-8f71-c48ff285b1e7%28en-us%2coffice.12%29.gif',
'EventDriven process fires when document is changed EventDriven process fires when document is changed'],
dtype=object)
array(['images/cc511934.8a268ce4-27e2-4e65-b67f-3ce3835df088%28en-us%2coffice.12%29.gif',
'Error shown when Event Receiver prevents deletion Error shown when Event Receiver prevents deletion'],
dtype=object)
array(['images/cc511934.7cd420d5-cd9b-4a6a-bcd3-04dc4f024ee1%28en-us%2coffice.12%29.gif',
'Toolbox activity icon Toolbox activity icon'], dtype=object)
array(['images/cc511934.4374dd92-47bd-46fa-b501-5a9c1999abb3%28en-us%2coffice.12%29.gif',
'Sample DDF file Sample DDF file'], dtype=object)
array(['images/cc511934.d8c41cb2-6150-484d-a032-62295726622e%28en-us%2coffice.12%29.gif',
'Directory structure required for solution Directory structure required for solution'],
dtype=object) ] | docs.microsoft.com |
The physical design consists of characteristics and decisions that support the logical design. The design objective is to deploy a fully functional Cloud Management Portal with High Availability and the ability to provision to both Regions A and B.
To accomplish this design objective, you deploy or leverage the following in Region A to create a cloud management portal of the SDDC.
2 vRealize Automation Server Appliances
2 vRealize Automation IaaS Web Servers.
2 vRealize Automation Manager Service nodes (including the DEM Orchestrator)
2 DEM Worker nodes
2 IaaS Proxy Agent nodes. All the components that make up the Cloud Management Portal, along with their network connectivity, are shown in the following diagrams.
| https://docs.vmware.com/en/VMware-Validated-Design/4.1/com.vmware.vvd.sddc-design.doc/GUID-DF1205F1-9B2D-4BD4-9988-87868F35E759.html | 2018-07-15T23:35:14 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['images/GUID-1EA8D983-27DF-4A78-AF43-EB8DF429E10E-high.png',
'vRealize Automation Design for Region A'], dtype=object)
array(['images/GUID-974CD5CD-A641-4103-8A2B-E2D375E68E96-high.png', None],
dtype=object) ] | docs.vmware.com |
dhtmlxCombo provides the autocomplete feature, i.e. it can work in the filtering mode.
In this mode values in the list are filtered by the current text entered in the edit control. To enable filtration, the next command should be called:
myCombo.enableFilteringMode(true);
By default, combobox uses options which have been already loaded in it, but it is possible to define an external script which will be called for generating the list of suggestions.
myCombo a part of suggestion. The combobox can automatically send additional requests and add more data to the list of options.
myCombo.enableFilteringMode(true,"php/complete.php",cache,true);
The server-side script will be called by the URL below:
"php/complete.php?pos=START&mask=TEXT"
For all additional sub fetches returned XML must have the "add" attribute of the main tag:
<?xml version="1.0" ?> <complete add="true"> ...
To provide custom filtering rules in Combo, use the onDynXLS event. The event fires when Combo gets data from the server and allows you to specify a function for processing this data.
myCombo.enableFilteringMode(true,"dummy"); myCombo.attachEvent("onDynXLS", myComboFilter); function myComboFilter(text){ // where 'text' is the text typed by the user into Combo myCombo.clearAll(); dhtmlxAjax.get("data.php?mask="+text, function(xml){ myCombo.load(xml.xmlDoc.responseText); }) };
You can define a custom filtering function. For this purpose, use the code as in:
myCombo.filter(function(text){ if (text.indexOf("a") === 0) return true; return false; })
To keep option in the list, you need to return true.Back to top | http://docs.dhtmlx.com/combo__filtering.html | 2018-07-15T23:01:15 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.dhtmlx.com |
Uses named arguments that match the property names of the domain class to produce a query that returns the first result. This method behaves just like findWhere except that it will never return null. If a matching instance cannot be found then a new instance will be created, populated with values from the query parameters and returned. The difference between this method and findOrSaveWhere is that this method will not save a newly created instance where findOrSaveWhere does.
null
Given the domain class:
class Book {
String title
Date releaseDate
String author
static constraints = {
releaseDate nullable: true
}
}
You can query in the form:
def book = Book.findOrCreateWhere(author: "Stephen King", title: "The Stand")
Parameters:
queryParams - A Map of key/value pairs to be used in the query. If no matching instance is found then this data is used to initialize a new instance.
queryParams
Map | http://docs.grails.org/3.3.x/ref/Domain%20Classes/findOrCreateWhere.html | 2018-07-15T22:59:40 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.grails.org |
The Molecular Editor allows you to view and edit molecules using Avogadro libraries.
Using control panel on the left you can change the view parameters, edit molecule, or measure molecules. There are three tabs on this panel: Display, Edit, and Measure. The buttons along the bottom of the window can be used to , , , and the window. The downloaded files will be saved in your
Documents folder from where you can load them into the editor.
Statistics pane shows name (if available), formula, and weight of the molecule.
The Display tab can be used to change Quality of the image (, , or , can be useful on low-end system), Style (can be , , , or ), 2nd Style (can be , , , or ), and Labels (can be , , , or ).
The Edit tab allows you to edit the molecule. You can add elements by choosing them in the Element drop-down list then clicking with the mouse button on the view panel on the right. The molecule can be optimized by clicking button.
The Measure tab can be used to measure distances and angles in the molecule. To make the measurement use the instructions shown on the tab.
On the right of the Molecular Editor window the molecule will be shown. Use mouse button to rotate molecule, mouse button to move it, and mouse button to zoom.
The shows you the isotopes of the elements.
There are different kinds of isotopes, some are stable, some are not. The unstable isotopes can decay as alpha-rays are two different beta-rays. These differences are encoded by using different colors.
Kalzium can display the isotopes of a range of elements
The dialog allows you to plot some information about elements. The X-axis represents a range of elements (from one number to a higher number). You set this range using the First Element and Last Element fields on the dialog.
Kalzium can plot some data about a range of elements.
The Perform Calculation is the Kalzium calculator. This calculator contains a variety of calculators for different tasks performing different calculations.
You can find the following calculators in Kalzium
- Molecular mass calculator
This calculator helps you calculate the molecular masses of different molecules.
You can specify short form of the molecule names add more such aliases.
Kalzium calculates molecular mass of phenol.
- Concentrations calculator
You can calculate quantities which include
Amount of substance
Volume of solvent
Concentration of substance
There are a wide range of units to choose from and different methods to specify quantities.
Kalzium calculates solution parameters.
- Nuclear calculator
This calculator makes use of the nuclear data available in Kalzium to predict the expected masses of a material after time.
Kalzium calculates parameters of uranium decay.
- Gas calculator
This calculator can calculate the values of Temperature, pressure, volume, amount of gas etc. for various ideal as well as non-ideal gases.
Kalzium calculates gas parameters.
- Titration calculator
This calculator tries to find out the equivalence point of a pH-meter followed titration best fitting it with an hyperbolic tangent. You can also let it solve an equilibrium system of equations and see how the concentration of a species changes in function of another one.
There are two tabs on the calculator page, namely:
- Experimental Values
You can use this calculator to draw the plot of your experimental data obtained during a titration and find out the volume of equivalence. It's strongly recommended to insert a even number of points, because of the best fit algorithm, sorted by volume (the X axis value).
- Theoretical Equations
Here you can fill the table with the equations you have previously obtained for the chemical equilibrium.
For example, if you have this reaction A + B -> C + D then you will have the equation K=(C*D)/(A*B) so you must write
Kin the Parameter column and
(C*D)/(A*B)in the Value column. If you want to assign a known value to a parameter you can simply write the numeric value in the Value field.
For example, you can use the system
A=(C*D)/(B*K)
K=10^-3
C=OH
OH=(10^-14)/H
H=10^-4
B=6*(10^-2)
Then you have to write
Das X axis and
Aas Y axis: so you will find out how the concentration of A changes as a function of D concentration.
Note
Please don't use parenthesis for exponents:
10^-3is correct, while
10^(-3)is wrong.
The results can be visualized by pressing button. The plot shows in red the curve that comes from theoretical equations, in blue the experimental points, and in green the approximated curve for experimental points. You can save the plot as SVG image.
Predefined example of titration results.
- Equation Balancer
The enables the user to solve chemical equations. This is an example:
aH2O + bCO2 -> cH2CO3
The computed equation will be displayed on the top of the window. As you can see in the first example you can also define the value of one or more coefficients. The other coefficients will be adjusted. Furthermore, it is possible to use brackets around elements or electronic charges as shown in the last two examples.
Kalzium calculates equation balance.
The R/S Phrases, also known as Risk and Safety Statements, R/S statements, R/S numbers, and R/S sentences, is a system of hazard codes and phrases for labeling dangerous chemicals and compounds. The R/S phrase of a compound consists of a risk part (R) and a safety part (S), each followed by a combination of numbers. Each number corresponds to a phrase. The phrase corresponding to the letter/number combination has the same meaning in different languages.
Kalzium can display Risk/Security Phrases
The Glossary gives you definitions of the most used tools in chemistry as well as some knowledge data. On the left side of the windows you can see the tree of items. On top, there are chemical terms, below that there is a second tree of laboratory-tools.
On the top of the widget you can see a searchbar. If you type in the bar the trees will be adjusted immediately. The small button in the right end of the searchbar will clear it.
The Tables shows you the tables for Greek alphabet which is used to denote some chemical and physical entities, and for Latin prefixes and Roman numbers which correspond to common Arabic numbers.
The Overview tab is the first one and it shows you an overview of the element the mouse is over.
The View tab is the second in the navigation panel.
You are first presented with the following icons and text:
Kalzium can show you which elements are solid/liquid/vaporous at a given temperature.
The View tab can be used to filter PSE. For example, this feature allows you to explore the elements of the set time period. This is great for getting a feel for how the PSE evolved over time, as more and more elements were discovered. Choose from Gradient list. If you move the slider you will notice that color of some elements disappear if you move it to the left and reappear if you move it to the right. Furthermore the number will change constantly.
The number represents the date you are looking at. If you move the slider to e.g. 1856 you will only see the elements which where known in the year 1856.
The PSE back in time (elements known in 1856) | https://docs.kde.org/trunk4/en/kdeedu/kalzium/tools.html | 2016-07-23T15:01:08 | CC-MAIN-2016-30 | 1469257823072.2 | [array(['/trunk4/common/top-kde.jpg', None], dtype=object)
array(['screenshot-mol-edit.png', 'the “Molecular Editor”'], dtype=object)
array(['screenshotnuclidboard.png', 'the “Isotope Table” window'],
dtype=object)
array(['screenshot5.png', 'the “Plot Data” Dialog'], dtype=object)
array(['screenshot-rs-phrases.png', 'the “R/S Phrases” window'],
dtype=object)
array(['screenshot7.png', 'the “Glossary”'], dtype=object)
array(['screenshot-tables.png', 'the “Tables” window'], dtype=object)
array(['sidebar1.png', 'Overview'], dtype=object)
array(['screenshot2.png', 'the “State of Matter” Dialog'], dtype=object)
array(['screenshot6.png', 'the “Discovery date” gradient'], dtype=object)] | docs.kde.org |
Diagnose "SMTP" checks availabilty of a Mailserver and sends a testmail.
You can use this test, to check your configured Appliance and follow the SMTP Steps to view detailed information to possible Errors.
Further on it is useful to test the Appliance for incoming Mailflow, before switching on Port Forwarding on your Firewall.
Use following parameter for testing:
Target Host: <configured IP Address of your Appliance>
Sender: <configured sender Address of your Appliance (i.e. sf-engine@local)>
Recipient: <configured Administrator Address>
Subject: <any subject>
Message: <any Text>
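If you prefer to drive the same check from a script rather than the Diagnose Center, a rough equivalent using Python's standard library looks like this (the host and addresses are placeholders matching the parameters above):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sf-engine@local"      # configured sender address of the appliance
msg["To"] = "admin@example.com"      # configured administrator address
msg["Subject"] = "SMTP diagnose test"
msg.set_content("Test mail sent to verify the appliance accepts mail.")

# Target host: the configured IP address of your appliance
with smtplib.SMTP("192.0.2.10", 25, timeout=10) as smtp:
    smtp.set_debuglevel(1)  # prints each SMTP step, mirroring the diagnose output
    smtp.send_message(msg)
```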
This guide describes how to develop extensions and customize Alfresco Process Services.
Before beginning, you should read the Administering section to make sure you have an understanding of how Alfresco Process Services is installed and configured.
To learn more about Alfresco Process Services architecture, see our Alfresco ArchiTech Talks video.
- Alfresco Process Services high-level architectureAlfresco Process Services is a suite of components on top of the Activiti BPMN 2.0 platform that can be run on-premise or hosted on a private or public cloud, single, or multitenant.
- Security configuration overrides
Quick Guide
Most of this document is various notes explaining how to do all kinds of things with the test suite. But if you’re new to the
fastai test suite, here is what you need to know to get started.
Step 1. Setup and check you can run the test suite:
```
git clone https://github.com/fastai/fastai
cd fastai
tools/run-after-git-clone    # python tools\run-after-git-clone on windows
pip install -e ".[dev]"
make test                    # or pytest
```
Step 2. Run a specific test module and a specific test of that module
The following will run all tests inside
tests/test_vision_transform.py:
pytest -sv tests/test_vision_transform.py
If you want to run just this
test_points_data_augof that test module:
pytest -sv tests/test_vision_transform.py::test_points_data_aug
Step 3. Write a new test, or improve an existing one.
fastaitest modules are named mostly to be the same as the python modules they test, so for example
test_vision_transform.pytests
fastai/vision/transform.py(but not always).
Locate an existing test that is similar to what you need, copy it, rename and modify it to test what you feel needs to be tested.
Let's assume you took `test_points_data_aug` and converted it into `test_quality` in the same module. Test that it works:

```
pytest -sv tests/test_vision_transform.py::test_quality
```

If it reproduces a problem, i.e. an assert fails, then add:

```
@pytest.mark.skip(reason="fix me: brief note describing the problem")
def test_quality():
    ...
```
The best way to figure out how to test, is by looking at existing tests. And the rest of this document explains how to do all kinds of things that you might want to do in your tests.
Step 4. Submit a PR with your new test(s)
You won’t be able to PR from this plain checkout, so you need to switch to a forked version of fastai and create a new branch there. Follow the easy instructions here to accomplish that.
Note that this guide helps you to write tests with a plain git checkout, without needing to fork and branch, so that you can get results faster and more easily. But once you're ready, switch to your own fork and branch as explained in the guide above. You can just copy the files over to the new branch. Of course, feel free to start by making a PR branch first - whatever is easiest for you.
Handy things
Here is a bunch of useful pytest extensions to install (most are discussed somewhere in this document):
pip install pytest-xdist pytest-sugar pytest-repeat pytest-picked pytest-forked pytest-flakefinder pytest-cov nbsmoke
Only
pytest-sugar will automatically change
pytest’s behavior (in a nice way), so remove it from the list if you don’t like it. All the other extensions need to be explicitly enabled via
pytest flag to have an impact, so are safe to install.
Automated tests
At the moment there are only a few automated tests, so we need to start expanding it! It’s not easy to properly automatically test ML code, but there’s lots of opportunities for unit tests.
We use pytest. Here is a complete pytest API reference.
The tests have been configured to automatically run against the
fastai directory inside the
fastai git repository and not pre-installed
fastai. i.e.
tests/test_* work with
../fastai.
Running Tests
Choosing which tests to run
For nuances of configuring pytest’s repo-wide behavior see collection.
Here are some most useful ways of running tests.
Run all
pytest
or:
make test
or:
python setup.py test
Run specific test module
To run an individual test module:
pytest tests/test_core.py
Run specific tests
Run tests by keyword expressions:
pytest -k "list and not listify" tests/test_core.py
For example, if we have the following tests:
```
def test_whatever():
def test_listify(p, q, expected):
def test_listy():
```
it will first select
test_listify and
test_listy, and then deselect
test_listify, resulting in only the sub-test
test_listy being run.
A better way, which avoids unintentional multiple matches, is to use the test node approach:
pytest tests/test_basic_train.py::test_save_load tests/test_basic_data.py::test_DataBunch_oneitem
It’s really just the test module followed by the specific test name, joined by
::.
Run only modified tests
Run the tests related to the unstaged files or the current branch (according to Git):

pytest --picked

Automatically rerun failing tests on source modification

`pytest-xdist` provides a loop-on-failing mode: `pytest -f` (or `pytest --looponfail`) reruns the failing tests in a subprocess loop, re-running them whenever files change. Changes are detected by looking at `looponfailroots` root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option in `setup.cfg`:

[tool:pytest]
looponfailroots = fastai tests
or
pytest.ini or
tox.ini files:
[pytest]
looponfailroots = fastai tests
This would lead to only looking for file changes in the respective directories, specified relatively to the ini-file’s directory.
pytest-watch is an alternative implementation of this functionality.
Skip integration tests
To skip the integration tests in order to do quick testing while you work:
pytest --skipint
Skip a test module
If you need to skip a certain test module temporarily you can either tell
pytest which tests to run explicitly, so for example to skip any test modules that contain the string
link, you could run:
pytest `ls -1 tests/*py | grep -v link`
Clearing state
On CI builds, and whenever isolation is more important than speed, the cache should be cleared:
pytest --cache-clear tests
Running tests in parallel
This can speed up the total execution time of the test suite.
pip install pytest-xdist
```
$ time pytest
real    0m51.069s

$ time pytest -n 6
real    0m26.940s
```
That’s twice the speed of the normal sequential execution!
XXX: We just need to fix the temp file creation to use a unique string (pid?), otherwise at times some tests collide in a race condition over the same temp file path. There is also a bisect-like module that can reduce a long sequence of tests leading to a failure down to the minimal failing set.
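One possible fix for the race condition mentioned above (a sketch, not what the test suite currently does) is to derive the temp path from the process id, or better, to use pytest's built-in per-test temporary directories:

```python
import os, tempfile

def test_export_unique_path():
    # unique per worker process, so `pytest -n 6` workers can't collide
    path = os.path.join(tempfile.gettempdir(), f"export-{os.getpid()}.pkl")
    ...

def test_export_tmp_path(tmp_path):
    # pytest's tmp_path fixture hands each test its own fresh directory
    path = tmp_path / "export.pkl"
    ...
```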
Plugins:
Repeat tests:
pip install pytest-repeat
Now 2 new options becomes available:
```
--count=COUNT           Number of times to repeat each test
--repeat-scope={function,class,module,session}
                        Scope for repeating tests
```
e.g.:
pytest --count=10 tests/test_fastai.py
pytest --count=10 --repeat-scope=function tests
Here is another similar module pytest-flakefinder:
pip install pytest-flakefinder
And then run every test multiple times (50 by default):
pytest --flake-finder --flake-runs=5
Run tests in a random order:
pip install pytest-random-order
Important: the mere presence of `pytest-random-order` will automatically randomize tests; no configuration change or command line options are required.

XXX: we need to find a package, or write our own `pytest` extension, to be able to randomize at will, since the two available modules that do this force randomization by default once installed.
As explained earlier, this allows detection of coupled tests, where one test's state affects the state of another. When `pytest-random-order` is installed, it will shuffle the files on the module level. It can also shuffle on the `class`, `package`, `global` and `none` levels. For the complete details please see its documentation.
Randomization alternatives: see, e.g., the `pytest-randomly` module.

Running tests on CPU

To force tests to run on CPU even when a GPU is available, hide the GPUs from `torch` via an environment variable:

CUDA_VISIBLE_DEVICES="" pytest tests/test_vision.py
To do the same inside the code of the test:
fastai.torch_core.default_device = torch.device('cpu')
To switch back to cuda:
fastai.torch_core.default_device = torch.device('cuda')
Make sure you don’t hard-code any specific device ids in the test, since different users may have a different GPU setup. So avoid code like:
fastai.torch_core.default_device = torch.device('cuda:1')
which tells
torch to use the 2nd GPU. Instead, if you’d like to run a test locally on a different GPU, use the
CUDA_VISIBLE_DEVICES environment variable:
CUDA_VISIBLE_DEVICES="1" pytest tests/test_vision.py
Report each sub-test name and its progress
For a single or a group of tests via
pytest (after
pip install pytest-pspec):
pytest --pspec tests/test_fastai.py
pytest --pspec tests
For all tests via
setup.py:
python setup.py test --addopts="--pspec"
This also means that meaningful names for each sub-test are important.
To send test results to JUnit format output:
py.test tests --junitxml=result.xml
Color control
To have no color (e.g. yellow on white bg is not readable):
pytest --color=no tests/test_core.py
Sending test report to online pastebin service
Creating a URL for each test failure:
pytest --pastebin=failed tests/test_core.py
Writing tests
When writing tests:
- Avoid mocks; instead, think about how to create a test of the real functionality that runs quickly
- Use module scope fixtures to run init code that can be shared amongst tests. When using fixtures, make sure the test doesn’t modify the global object it received, otherwise other tests will be impacted. If a given test modifies the global fixture object, it should either clone it or not use the fixture and create a fresh object instead.
- Avoid pretrained models, since they have to be downloaded from the internet to run the test
- Create some minimal data for your test, or use data already in repo’s data/ directory
Important: currently, in the test suite we can only use modules that are already in the required dependencies of fastai (i.e. conda dependencies). No other modules are allowed, unless the test is skipped if some new dependency is used.
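For example, a test that needs an optional module can guard itself with `pytest.importorskip` (a sketch; `somemod` is a placeholder module name, and this pattern is covered again in the skipping section below):

```python
import pytest

def test_feature_needing_extra_dep():
    somemod = pytest.importorskip("somemod")  # skipped cleanly if not installed
    assert somemod.do_something() is not None
```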
Test Registry
fastai has a neat feature where users while reading the API documentation can also discover which tests exercise the function they are interested to use. This provides extra insights at how the API can be used, and also provides an incentive to users to write tests which are missing or improving the existing ones. Therefore, every new test should include a single call of
this_tests.
The following is an actual test, that tests
this_tests, so you can quickly see how it should be used:
```
from fastai.gen_doc.doctest import this_tests

def test_this_tests():
    # function by reference (and self test)
    this_tests(this_tests)

    # multiple entries: same function twice on purpose, should result in just one entry,
    # but also testing multiple entries - and this test tests only a single function.
    this_tests(this_tests, this_tests)

    import fastai
    # explicit fully qualified function (requires all the sub-modules to be loaded)
    this_tests(fastai.gen_doc.doctest.this_tests)

    # explicit fully qualified function as a string
    this_tests('fastai.gen_doc.doctest.this_tests')

    # special case for cases where a test doesn't test fastai API
    this_tests('na')

    # not a real function
    func = 'foo bar'
    try: this_tests(func)
    except Exception as e: assert f"'{func}' is not a function" in str(e)
    else: assert False, f'this_tests({func}) should have failed'

    # not a function as a string that looks like a fastai function, but it is not
    func = 'fastai.gen_doc.doctest.doesntexistreally'
    try: this_tests(func)
    except Exception as e: assert f"'{func}' is not a function" in str(e)
    else: assert False, f'this_tests({func}) should have failed'

    # not a fastai function
    import numpy as np
    func = np.any
    try: this_tests(func)
    except Exception as e: assert f"'{func}' is not in the fastai API" in str(e)
    else: assert False, f'this_tests({func}) should have failed'
```
When you use this function ideally try to use live objects
obj.method and not
class.method approach, because if the API changes and classes get renamed behind the scenes the test will still work without requiring any modification. Therefore, instead of doing this:
```
def test_get_preds():
    learn = fake_learner()
    this_tests(Learner.get_preds)
```
it’s better to write it as:
```
def test_get_preds():
    learn = fake_learner()
    this_tests(learn.get_preds)
```
You can make the call
this_tests anywhere in the test, so if the object becomes available at line 10 of the test, add
this_tests after it.
And there is a special case for situations where a test doesn’t test fastai API or it’s a non-callable attribute, e.g.
learn.loss_func, in which case use
na (not applicable):
```
def test_non_fastai_func():
    this_tests('na')
```
But we still want the call to be there, since we run a check to make sure we don’t miss out on any tests, hence each test needs to have this call.
The test registry is located at
fastai/test_registry.json and it gets auto-generated or updated when
pytest is run.
Expensive object reuse
Reusing objects, especially those that take a lot of time to create, helps to keep the test suite fast. If the test suite is slow, it’ll not be run and developers will tend to commit code without testing it first. Therefore, it’s OK to prototype things in a non-efficient way. But once the test is working, please spend extra effort to optimize its speed. Having hundreds of tests, a few extra seconds of unnecessary slowness per test quickly adds up to minutes. And chances are, you won’t want to wait for 20min before you can commit a shiny new code you have just written.
Currently we mostly use
module scoped fixtures (global variables scoped to the test module). For example:
```
@pytest.fixture(scope="module")
def learn():
    learn = ...  # create a learn object
    return learn
```
Now we can use it, in multiple tests of that module, by passing the fixture’s function name as an argument to the test function:
```
def test_opt_params(learn):
    learn.freeze()
    assert n_params(learn) == 2

def test_val_loss(learn):
    assert learn.validate()[1] > 0.3
```
You can have multiple fixtures and combine them too. For example, in the following code we create 2 fixtures:
path and
learn, and the
learn fixture receives the
path argument that is fixture itself, just like a test function will do. And then the example shows how you can pass one or more fixtures to a test function.
```
@pytest.fixture(scope="module")
def path():
    path = untar_data(URLs.MNIST_TINY)
    return path

@pytest.fixture(scope="module")
def learn(path):
    data = ImageDataBunch.from_folder(path, ds_tfms=([], []), bs=2)
    learn = cnn_learner(data, models.resnet18, metrics=accuracy)
    return learn

def test_val_loss(learn):
    assert learn.validate()[1] > 0.3

def test_path(path):
    assert path

def test_something(learn, path):
    assert learn.validate()[1] > 0.3
    assert path
```
If we want test-suite global objects, e.g.
learn_vision,
learn_text, we can pre-create them from
conftest.py:
```
from fastai.vision import *

@pytest.fixture(scope="session", autouse=True)
def learn_vision():
    path = untar_data(URLs.MNIST_TINY)
    data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), num_workers=2)
    data.normalize()
    learn = Learner(data, simple_cnn((3,16,16,16,2), bn=True), metrics=[accuracy, error_rate])
    learn.fit_one_cycle(3)
    return learn
```
Now, inside, for example,
tests/test_vision_train.py we can access the global session-wide fixture in the same way the module-scoped one:
```
def test_accuracy(learn_vision):
    assert accuracy(*learn_vision.get_preds()) > 0.9
```
If we use:
@pytest.fixture(scope="session", autouse=True)
all global objects will be pre-created no matter whether the running tests need them or not, so we probably don’t want
autouse=True. Without this setting these fixture objects will be created on demand.
There is a cosmetic issue with having
learn_vision,
learn_text, since now we either have to spell out:
```
def test_accuracy(learn_vision):
    assert accuracy(*learn_vision.get_preds()) > 0.9
```
or rename:
```
def test_accuracy(learn_vision):
    learn = learn_vision
    assert accuracy(*learn.get_preds()) > 0.9
```
Neither option is great. We want to be able to copy-n-paste quickly, and ideally it should always be `learn.foo`, especially since there are usually many such calls.
Another important nuance related to fixtures is that those global objects shouldn’t get modified by tests. If they do this can impact other tests that rely on a freshly created object. If that’s the case, let the test create its own object and do anything it wants with it. For example, our most commonly used
learn object is almost guaranteed to be modified by any method that calls it. If, however, you’re reusing global variables that don’t get modified, as in this example:
```
@pytest.fixture(scope="module")
def path():
    path = untar_data(URLs.MNIST_TINY)
    return path
```
then there is nothing to worry about.

Skipping tests

You can skip:

The whole test unconditionally:
```
@pytest.mark.skip(reason="this bug needs to be fixed")
def test_feature_x():
```
or the `xfail` way:

```
@pytest.mark.xfail
def test_feature_x():
```

or from within the test, the `pytest.xfail` way:

```
def test_feature_x():
    pytest.xfail("expected to fail until bug XYZ is fixed")
```
Skip all tests in a module if some import is missing:
docutils = pytest.importorskip("docutils", minversion="0.3")
Skip if a condition isn't met, e.g. a minimum python version:

```
import sys

@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
```
or the whole module:
```
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestPosixCalls(object):
    def test_feature_x(self):
```
More details, example and ways are here.
Custom markers
Normally, you should be able to declare a test as:
```
import pytest

@pytest.mark.mymarker
def test_mytest():
    ...
```
You can then restrict a test run to only run tests marked with
mymarker:
pytest -v -m mymarker
Running all tests except the
mymarker ones:
$ pytest -v -m "not mymarker"
Custom markers should be registered in
setup.cfg, for example:
```
[tool:pytest]
# force all used markers to be registered here with an explanation
addopts = --strict
markers =
    marker1: description of its purpose
    marker2: description of its purpose
```
fastai custom markers
These are defined in
tests/conftest.py.
The following markers override normal marker functionality, so they won’t work with:
pytest -m marker
and may have their own command line option to be used instead, which are defined in
tests/conftest.py, and can also be seen in the output of
pytest -h in the “custom options” section:
```
custom options:
  --runslow   run slow tests
  --runcpp    run cuda cpp extension tests
  --skipint   skip integration tests
```
- `slow` - skip tests that can be quite slow (especially on CPU); to run them, add:

pytest --runslow

- `integration` - used for longer, multi-step integration tests (skippable via `--skipint`). These are usually declared on the test module level, by adding at the top of the file:
pytestmark = pytest.mark.integration
And to skip those use:
pytest --skipint
- `cuda` - mark tests as requiring a CUDA device to run (skipped if no such device is present). These tests check CUDA-specific code, e.g., compiling and running kernels or the GPU version of a function's `forward`/`backward` methods. Example:
```
@pytest.mark.cuda
def test_cuda_something():
    pass
```
After test cleanup
To ensure some cleanup code is always run at the end of the test module, add to the desired test module the following code:
```
@pytest.fixture(scope="module", autouse=True)
def cleanup(request):
    """Cleanup the tmp file once we are finished."""
    def remove_tmp_file():
        file = "foobar.tmp"
        if os.path.exists(file):
            os.remove(file)
    request.addfinalizer(remove_tmp_file)
```
The
autouse=True tells
pytest to run this fixture automatically (without being called anywhere else).
Use
scope="session" to run the teardown code not at the end of this test module, but after all test modules were run, i.e. just before
pytest exits.
Another way to accomplish the global teardown is to put in
tests/conftest.py:
```
def pytest_sessionfinish(session, exitstatus):
    # global tear down code goes here
```
To run something before and after each test, add to the test module:
```
@pytest.fixture(autouse=True)
def run_around_tests():
    # Code that will run before your test, for example:
    some_setup()
    # A test function will be run at this point
    yield
    # Code that will run after your test, for example:
    some_teardown()
```
autouse=True makes this function run for each test defined in the same module automatically.
For creation/teardown of temporary resources for the scope of a test, do the same as above, except have `yield` return that resource.
```
@pytest.fixture(scope="module")
def learner_obj():
    # Code that will run before your test, for example:
    learn = Learner(...)
    # A test function will be run at this point
    yield learn
    # Code that will run after your test, for example:
    del learn
```
You can now use that function as an argument to a test function:
```
def test_foo(learner_obj):
    learner_obj.fit(...)
```

Testing the stdout/stderr output

If you need the captured output to be identical with and without the `-s` option, you have to make an extra cleanup of the captured output, using `re.sub(r'^.*\r', '', buf, 0, re.M)`. You can use a test helper function for that:
```
from utils.text import *
output = apply_print_resets(output)
```
But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has some
\rs in it or not, so it’s a simple:
```
from utils.text import *

with CaptureStdout() as cs:
    function_that_writes_to_stdout()
print(cs.out)
```
Here is a full test example:
```
from utils.text import *

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final+"\n", f"captured: {cs.out}, expecting {final}"
```
If you’d like to capture
stderr use the
CaptureStderr class instead:
```
from utils.text import *

with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```
If you need to capture both streams at once, use the parent
CaptureStd class:
```
from utils.text import *

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```
Testing memory leaks
This section is currently focused on GPU RAM since it’s the scarce resource, but we should test general RAM too.
Utils
Memory measuring helper utils are found in
tests/utils/mem.py:
from utils.mem import *
- Test whether we can use GPU:
use_gpu = torch.cuda.is_available()
`torch.cuda.is_available()` checks if we can use an NVIDIA GPU. It automatically handles the case when the `CUDA_VISIBLE_DEVICES=""` env var is set, so even if CUDA is available it will return False; thus we can emulate a non-CUDA environment.
- Force `pytorch` to preload cuDNN and its kernels to claim unreclaimable memory (~0.5GB) if it hasn't done so already, so that we get correct measurements. This must run before any tests that measure GPU RAM. If you don't run it you will get erratic behavior and wrong measurements.
torch_preload_mem()
- Consume some GPU RAM:
gpu_mem_consume_some(n)
`n` is the size of the `torch.ones` matrix. When `n=2**14` it consumes about 1GB, but that's too much for the test suite, so use small numbers: e.g. `n=2000` consumes about 16MB.
- Alias for `torch.cuda.empty_cache()`:
gpu_cache_clear()
It’s absolutely essential to run this one, if you’re trying to measure real used memory. If cache doesn’t get cleared the reported used/free memory can be quite inconsistent.
- This is a combination of `gc.collect()` and `torch.cuda.empty_cache()`:
gpu_mem_reclaim()
Again, this one is crucial for measuring the memory usage correctly. While normal objects get destroyed and their memory becomes available/cached right away, objects with circular references only get freed up when python invokes `gc.collect`, which happens periodically. So if you want to make sure your test doesn't get caught in the inconsistency of whether `gc.collect` happened to run during that test or not, call it yourself. But remember that if you have to call `gc.collect()` there could be a problem that you will be masking by calling it, so before using it, understand what it is doing.
After `gc.collect()` is called, this function clears the cache that potentially grew due to the objects released by `gc`, so that we get the real used/free memory at all times.
- This is a wrapper for getting the used memory for the currently selected device.
gpu_mem_get_used()
Concepts
Taking into account cached memory and unpredictable `gc.collect` calls. See above.
Memory fluctuations. When measuring either general or GPU RAM there is often a small fluctuation in reported numbers, so when writing tests use functions that approximate equality, but do think deep about the margin you allow, so that the test is useful and yet it doesn’t fail at random times.
Also remember that rounding happens when Bs are converted to MBs.
Here is an example:
```
from math import isclose

used_before = gpu_mem_get_used()
# ... some gpu consuming code here ...
used_after = gpu_mem_get_used()
assert isclose(used_before, used_after, abs_tol=6),    "testing absolute tolerance"
assert isclose(used_before, used_after, rel_tol=0.02), "testing relative tolerance"
```
This example compares used memory size (in MBs). The first assert checks that the absolute difference between the two numbers is no more than 6. The second assert does the same but uses a relative tolerance in percent: `0.02` in the example means `2%`, so the accepted difference between the two numbers is no more than `2%`. Often absolute numbers provide a better test, because a percent-based approach could result in quite a large gap if the numbers are big.
Getting reproducible results
In some situations you may want to remove randomness from your tests. To get identical reproducible results, you'll need to set `num_workers=1` (or 0) in your DataLoader/DataBunch and, depending on whether you are using `torch`'s random functions, python's (`numpy`'s), or both, set the corresponding random seeds.
Debugging tests
To start a debugger at the point of the warning, do this:
pytest tests/test_vision_data_block.py -W error::UserWarning --pdb
Tests requiring jupyter notebook environment
If pytest-ipynb pytest extension is installed it’s possible to add
.ipynb files to the normal test suite.
Basically, you just write a normal notebook with asserts, and
pytest just runs it, along with normal
.py tests, reporting any assert failures normally.
We currently don’t have such tests, and if we add any, we will first need to make a conda package for it on the fastai channel, and then add this dependency to fastai. (note: I haven’t researched deeply, perhaps there are other alternatives)
Here is one example of such test.
Coverage
When you run:
make coverage
it will run the test suite directly via
pytest and on completion open a browser to show you the coverage report, which will give you an indication of which parts of the code base haven’t been exercised by tests yet. So if you are not sure which new tests to write this output can be of great insight.
Remember that coverage only indicates which parts of the code the tests have exercised. It can't tell anything about the quality of the tests. As such, you may have 100% coverage and still have very poorly performing code.
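Since `pytest-cov` is in the extensions list above, you can also produce a coverage report directly via `pytest`, for example:

```
pytest --cov=fastai --cov-report=html tests
```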
Notebook integration tests
The two places you should check for notebooks to test your code with are:
In each case, look for notebooks that have names starting with the application you’re working on - e.g. ‘text’ or ‘vision’.
docs_src/*ipynb
The `docs_src` notebooks can be executed as a test suite. You need at least 8GB available on your GPU to run all of the tests, so make sure you shut down any unnecessary jupyter kernels, so that the output of `nvidia-smi` shows that you have at least 8GB free.
```
cd docs_src
./run_tests.sh
```
To run a subset:
./run_tests.sh callback*
There are a lot more details on this subject matter in this document.
examples/*ipynb
You can run each of these interactively in jupyter, or as CLI:
jupyter nbconvert --execute --ExecutePreprocessor.timeout=600 --to notebook examples/tabular.ipynb
This set is examples, and there is no pass/fail criterion other than visual observation.
Qualifiers
Sometimes, the same word can mean multiple things depending on the context. Because Lacona commands are a largely context-free environment, this capability can be handled through Qualifiers.
Imagine the Lacona command "call Vicky". This is a simple command, unless "Vicky" has multiple phone numbers, or I have multiple contacts named "Vicky". This is when you would use qualifiers. If I have contacts "Vicky A" and "Vicky B", who each have mobile and home numbers, Lacona would display:
- call Vicky (A, Mobile)
- call Vicky (A, Home)
- call Vicky (B, Mobile)
- call Vicky (B, Home)
These qualifiers are only used if multiple options exist with the exact same text. If there is only one option, qualifiers will not be displayed.
An Example
To use this functionality, simply assign the `qualifier` or `qualifiers` prop to any element, or to an `item` used by a `<list />`. If multiple qualifiers are assigned together, only the first unique qualifier will be displayed.
```
describe () {
  return (
    <sequence>
      <literal text='call ' />
      <choice>
        <literal text='Vicky' qualifiers={['A', 'Mobile']} />
        <literal text='Vicky' qualifiers={['A', 'Home']} />
        <literal text='Vicky' qualifiers={['B', 'Mobile']} />
        <literal text='Vicky' qualifiers={['B', 'Home']} />
      </choice>
    </sequence>
  )
}
```
Throttling pattern
Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. This can allow the system to continue to function and meet service level agreements, even when an increase in demand places an extreme load on resources.
Context and problem
The load on a cloud application typically varies over time based on the number of active users or the types of activities they are performing. For example, more users are likely to be active during business hours, or the system might be required to perform computationally expensive analytics at the end of each month. There might also be sudden and unanticipated bursts in activity. If the processing requirements of the system exceed the capacity of the resources that are available, it'll suffer from poor performance and can even fail. If the system has to meet an agreed level of service, such failure could be unacceptable.
There are many strategies available for handling varying load in the cloud, depending on the business goals for the application. One strategy is to use autoscaling to match the provisioned resources to the user needs at any given time. This has the potential to consistently meet user demand, while optimizing running costs. However, while autoscaling can trigger the provisioning of additional resources, this provisioning isn't immediate. If demand grows quickly, there can be a window of time where there's a resource deficit.
Solution
The system could implement several throttling strategies, including:
Rejecting requests from an individual user who's already accessed system APIs more than n times per second over a given period of time. This requires the system to meter the use of resources for each tenant or user running an application (a minimal limiter sketch follows this list). For more information, see the Service Metering Guidance.
Disabling or degrading the functionality of selected nonessential services so that essential services can run unimpeded with sufficient resources. For example, if the application is streaming video output, it could switch to a lower resolution.
Using load leveling to smooth the volume of activity (this approach is covered in more detail by the Queue-based Load Leveling pattern). In a multi-tenant environment, this approach will reduce the performance for every tenant. If the system must support a mix of tenants with different SLAs, the work for high-value tenants might be performed immediately. Requests for other tenants can be held back, and handled when the backlog has eased. The Priority Queue pattern could be used to help implement this approach.
Deferring operations being performed on behalf of lower priority applications or tenants. These operations can be suspended or limited, with an exception generated to inform the tenant that the system is busy and that the operation should be retried later.
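As a rough illustration of the first strategy above, a per-user fixed-window rate limiter can be sketched as follows (Python is used purely for illustration; the class and method names are invented for this sketch):

```python
import time
from collections import defaultdict

class RequestRateLimiter:
    """Rejects a user's request once they exceed max_per_second in the current window."""
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.windows = defaultdict(lambda: (0, 0))  # user -> (window_start, count)

    def allow(self, user_id):
        now = int(time.time())
        window_start, count = self.windows[user_id]
        if window_start != now:            # a new one-second window has started
            self.windows[user_id] = (now, 1)
            return True
        if count >= self.max_per_second:   # over the limit: throttle this request
            return False
        self.windows[user_id] = (now, count + 1)
        return True
```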
The figure shows an area graph for resource use (a combination of memory, CPU, bandwidth, and other factors) against time for applications that are making use of three features. A feature is an area of functionality, such as a component that performs a specific set of tasks, a piece of code that performs a complex calculation, or an element that provides a service such as an in-memory cache. These features are labeled A, B, and C.
The area immediately below the line for a feature indicates the resources that are used by applications when they invoke this feature. For example, the area below the line for Feature A shows the resources used by applications that are making use of Feature A, and the area between the lines for Feature A and Feature B indicates the resources used by applications invoking Feature B. Aggregating the areas for each feature shows the total resource use of the system.
The previous figure illustrates the effects of deferring operations. Just prior to time T1, the total resources allocated to all applications using these features reach a threshold (the limit of resource use). At this point, the applications are in danger of exhausting the resources available. In this system, Feature B is less critical than Feature A or Feature C, so it's temporarily disabled and the resources that it was using are released. Between times T1 and T2, the applications using Feature A and Feature C continue running as normal. Eventually, the resource use of these two features diminishes to the point when, at time T2, there is sufficient capacity to enable Feature B again.
The autoscaling and throttling approaches can also be combined to help keep the applications responsive and within SLAs. If the demand is expected to remain high, throttling provides a temporary solution while the system scales out. At this point, the full functionality of the system can be restored.
The next figure shows an area graph of the overall resource use by all applications running in a system against time, and illustrates how throttling can be combined with autoscaling.
At time T1, the threshold specifying the soft limit of resource use is reached. At this point, the system can start to scale out. However, if the new resources don't become available quickly enough, then the existing resources might be exhausted and the system could fail. To prevent this from occurring, the system is temporarily throttled, as described earlier. When autoscaling has completed and the additional resources are available, throttling can be relaxed.
Issues and considerations
You should consider the following points when deciding how to implement this pattern:
Throttling an application, and the strategy to use, is an architectural decision that impacts the entire design of a system. Throttling should be considered early in the application design process because it isn't easy to add once a system has been implemented.
Throttling must be performed quickly. The system must be capable of detecting an increase in activity and react accordingly. The system must also be able to revert to its original state quickly after the load has eased. This requires that the appropriate performance data is continually captured and monitored.
If a service needs to temporarily deny a user request, it should return a specific error code so the client application understands that the reason for the refusal to perform an operation is due to throttling. The client application can wait for a period before retrying the request (see the client sketch after this list).
Throttling can be used as a temporary measure while a system autoscales. In some cases it's better to simply throttle, rather than to scale, if a burst in activity is sudden and isn't expected to be long lived because scaling can add considerably to running costs.
If throttling is being used as a temporary measure while a system autoscales, and if resource demands grow very quickly, the system might not be able to continue functioning—even when operating in a throttled mode. If this isn't acceptable, consider maintaining larger capacity reserves and configuring more aggressive autoscaling.
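For instance, a throttled HTTP service commonly signals this with status 429 and a Retry-After header; a client honoring it might look roughly like this (illustrative only, using Python's standard library):

```python
import time
import urllib.request
from urllib.error import HTTPError

def get_with_retry(url, attempts=3):
    for _ in range(attempts):
        try:
            return urllib.request.urlopen(url).read()
        except HTTPError as e:
            if e.code != 429:              # not a throttling response
                raise
            wait = int(e.headers.get("Retry-After", "1"))
            time.sleep(wait)               # back off as instructed, then retry
    raise RuntimeError("still throttled after retries")
```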
When to use this pattern
Use this pattern:
To ensure that a system continues to meet service level agreements.
To prevent a single tenant from monopolizing the resources provided by an application.
To handle bursts in activity.
To help cost-optimize a system by limiting the maximum resource levels needed to keep it functioning.
Example
The final figure illustrates how throttling can be implemented in a multi-tenant system. Users from each of the tenant organizations access a cloud-hosted application where they fill out and submit surveys. The application contains instrumentation that monitors the rate at which these users are submitting requests to the application.
In order to prevent the users from one tenant affecting the responsiveness and availability of the application for all other users, a limit is applied to the number of requests per second the users from any one tenant can submit. The application blocks requests that exceed this limit.
Related patterns and guidance
The following patterns and guidance may also be relevant when implementing this pattern:
- Instrumentation and Telemetry Guidance. Throttling depends on gathering information about how heavily a service is being used. Describes how to generate and capture custom monitoring information.
- Service Metering Guidance. Describes how to meter the use of services in order to gain an understanding of how they are used. This information can be useful in determining how to throttle a service.
- Autoscaling Guidance. Throttling can be used as an interim measure while a system autoscales, or to remove the need for a system to autoscale. Contains information on autoscaling strategies.
- Queue-based Load Leveling pattern. Queue-based load leveling is a commonly used mechanism for implementing throttling. A queue can act as a buffer that helps to even out the rate at which requests sent by an application are delivered to a service.
- Priority Queue pattern. A system can use priority queuing as part of its throttling strategy to maintain performance for critical or higher value applications, while reducing the performance of less important applications.
Figures referenced above:
- Figure 1 - Graph showing resource use against time for applications running on behalf of three users
- Figure 2 - Graph showing the effects of combining throttling with autoscaling
- Figure 3 - Implementing throttling in a multi-tenant application
This is not possible; the only way to do that is to create a different order for each subscription a user wants to activate.
Yes, you can, but you need the premium version of the plugin. Administrators can set a maximum amount of times users can pause a plan, and the maximum number of days for each pause.
Yes, they can buy multiple subscription-type products at the same time.
NOTE: However not if you use PayPal as a payment method. The system used to integrate PayPal recurring payments does not allow multiple subscriptions for the same order.
Yes. If you use a variable subscription product, you can set up one variation with a subscription option and one with a regular price.
This product can be discovered by the Enterprise version of BMC Discovery, but you can still download the free Community Edition to discover other products.
An Extended Discovery pattern, which models discovered Oracle applications and websites as SoftwareComponents and JDBC Resources as Detail Nodes, is available for this product.
The NSX dashboard simplifies troubleshooting by providing visibility into the overall health of NSX components in one central view.
You can access the dashboard from vCenter Web Client > Networking & Security > Dashboard.
The dashboard checks the following states:
NSX infrastructure—NSX Manager status
Component status for following services is monitored
Database service
Message bus service
Replicator service—Also monitors for replication errors
NSX manager disk usage:
Yellow (disk usage >80%)
Red (disk usage >90%)
NSX infrastructure—NSX Controller status
Controller node status (running/deploying/removing/failed/unknown)
Controller peer connectivity status
Controller VM status (powered off/deleted)
Controller disk latency alerts
NSX infrastructure—Host status
Deployment related:
Number of clusters with installation failed status
Number of clusters that need upgrade
Number of clusters where installation is in progress
Firewall:
Number of clusters with firewall disabled
Number of clusters where firewall status is red/yellow
VXLAN:
Number of clusters with VXLAN not configured
Number of clusters where VXLAN status is red/yellow
NSX services—Firewall publish status
Number of hosts with firewall publish status failed.
NSX services—Logical Networking status
Number of logical switches with status Error, Warning
Flag if backing DVS portgroup is deleted for a virtual wire
You can follow several Auto Deploy best practices, set up networking, configure vSphere HA, and otherwise optimize your environment for Auto Deploy.
See the VMware Knowledge Base for additional best practice information.
Auto Deploy and vSphere HA Best Practices
You can improve the availability of the virtual machines running on hosts provisioned with Auto Deploy by following best practices.
Some environments configure the hosts provisioned with Auto Deploy with a distributed switch or configure virtual machines running on the hosts with Auto Start Manager. In such environments, deploy the vCenter Server system so that its availability matches the availability of the Auto Deploy server. Several approaches are possible.
Install vCenter Server on a Windows virtual machine or physical server or deploy the vCenter Server Appliance. Auto Deploy is deployed together with the vCenter Server system.
Deploy the vCenter Server system on a virtual machine. Run the vCenter Server virtual machine in a vSphere HA enabled cluster and configure the virtual machine with a vSphere HA restart priority of high. Include two or more hosts in the cluster that are not managed by Auto Deploy and pin the vCenter Server virtual machine to these hosts by using a rule (vSphere HA DRS required VM to host rule). You can set up the rule and then disable DRS if you do not want to use DRS in the cluster. The greater the number of hosts that are not managed by Auto Deploy, the greater your resilience to host failures.Note:
This approach is not suitable if you use Auto Start Manager. Auto Start Manager is not supported in a cluster enabled for vSphere HA.
Auto Deploy Networking Best Practices
Prevent networking problems by following Auto Deploy networking best practices.
Auto Deploy and IPv6
Because Auto Deploy takes advantage of the iPXE infrastructure, it requires that each host has an IPv4 address. After the deployment you can manually reconfigure the hosts to use IPv6 and add them to vCenter Server over IPv6. However, when you reboot a stateless host, its IPv6 configuration is lost.
IP Address Allocation
Use DHCP reservations for address allocation. Fixed IP addresses are supported by the host customization mechanism, but providing input for each host is not recommended.
VLAN Considerations
Use Auto Deploy in environments that do not use VLANs.
If you intend to use Auto Deploy in an environment that uses VLANs, make sure that the hosts that you want to provision can reach the DHCP server. How hosts are assigned to a VLAN depends on the setup at your site. The VLAN ID might be assigned by the switch or the router, or might be set in the host's BIOS or through the host profile. Contact your network administrator to determine the steps for allowing hosts to reach the DHCP server.
Auto Deploy and VMware Tools Best Practices

When you provision hosts with Auto Deploy, you can select between two types of image profiles:

xxxxx-standard: An image profile that includes the VMware Tools binaries. This image is usually named esxi-version-xxxxx-standard.
xxxxx-no-tools: An image profile that does not include the VMware Tools binaries. This image profile is usually smaller, has a lower memory overhead, and boots faster in a PXE-boot environment. This image is usually named esxi-version-xxxxx-no-tools.
With vSphere 5.0 Update 1 and later, you can deploy ESXi using either image profile.
If the network boot time is of no concern, and your environment has sufficient extra memory and storage overhead, use the image that includes VMware Tools.
If you find the network boot time too slow when using the standard image, or if you want to save some space on the hosts, you can use the image profile that does not include VMware Tools, and place the VMware Tools binaries on shared storage. See, Provision ESXi Host by Using an Image Profile Without VMware Tools.
Auto Deploy Load Management Best Practices
Simultaneously booting large numbers of hosts places a significant load on the Auto Deploy server. Because Auto Deploy is a Web server at its core, you can use existing Web server scaling technologies to help distribute the load. For example, one or more caching reverse proxy servers can be used with Auto Deploy. The reverse proxies serve up the static files that make up the majority of an ESXi boot image. Configure the reverse proxy to cache static content and pass all requests through to the Auto Deploy server. For more information, watch the video "Using Reverse Web Proxy Servers for Auto Deploy Scalability".
Use multiple TFTP servers to point to different proxy servers. Use one TFTP server for each reverse proxy server. After that, set up the DHCP server to send different hosts to different TFTP servers.
When you boot the hosts, the DHCP server redirects them to different TFTP servers. Each TFTP server redirects hosts to a different server, either the Auto Deploy server or a reverse proxy server, significantly reducing the load on the Auto Deploy server.
After a massive power outage, bring up the hosts on a per-cluster basis. If you bring multiple clusters online simultaneously, the Auto Deploy server might experience CPU bottlenecks. All hosts might come up after a delay. The bottleneck is less severe if you set up the reverse proxy.
vSphere Auto Deploy Logging and Troubleshooting Best Practices
To resolve problems that you encounter with vSphere Auto Deploy, use the Auto Deploy logging information from the vSphere Web Client and set up your environment to send logging information and core dumps to remote hosts.
Auto Deploy Logs
Download the Auto Deploy logs by going to the Auto Deploy page in the vSphere Web Client. See Download Auto Deploy Logs.
Setting Up Syslog
Set up a remote syslog server. See the vCenter Server and Host Management documentation for syslog server configuration information. Configure the first host you boot to use the remote syslog server and apply that host's host profile to all other target hosts. Optionally, install and use the vSphere Syslog Collector, a vCenter Server support tool that provides a unified architecture for system logging, enables network logging, and lets you combine logs from multiple hosts.
Setting Up ESXi Dump Collector
Hosts provisioned with Auto Deploy do not have a local disk to store core dumps on. Install ESXi Dump Collector and set up your first host so that all core dumps are directed to ESXi Dump Collector, and apply the host profile from that host to all other hosts. See Configure ESXi Dump Collector with ESXCLI.
Using Auto Deploy in a Production Environment
When you move from a proof of concept setup to a production environment, take care to make the environment resilient.
Protect the Auto Deploy server. See Auto Deploy and vSphere HA Best Practices.
Protect all other servers in your environment, including the DHCP server and the TFTP server.
Follow VMware security guidelines, including those outlined in Auto Deploy Security Considerations.
Virtual machines with more than one virtual CPU are called SMP (symmetric multiprocessing) virtual machines. ESXi supports up to 128 virtual CPUs per virtual machine.
Project Conifer has been live on Evergreen for just over a year now, and as one of the primary technologists I have had to work closely with the OpenSRF infrastructure during that time. As such, I am in a position to identify some of the strengths and weaknesses of OpenSRF based on our experiences.
As a service infrastructure, OpenSRF has been remarkably reliable. We initially deployed Evergreen on an unreleased version of both OpenSRF and Evergreen due to our requirements for some functionality that had not been delivered in a stable release at that point in time, and despite this risky move we suffered very little unplanned downtime in the opening months. On July 27, 2009 we moved to a newer (but still unreleased) version of the OpenSRF and Evergreen code, and began formally tracking our downtime. Since then, we have achieved more than 99.9% availability - including scheduled downtime for maintenance. This compares quite favourably to the maximum of 75% availability that we were capable of achieving on our previous library system due to the nightly downtime that was required for our backup process. The OpenSRF "maximum request" configuration parameter for each service that kills off drone processes after they have served a given number of requests provides a nice failsafe for processes that might otherwise suffer from a memory leak or hung process. It also helps that when we need to apply an update to a Perl service that is running on multiple servers, we can apply the updated code, then restart the service on one server at a time to avoid any downtime.
As promised by the OpenSRF infrastructure, we have also been able to tune our
cluster of servers to provide better performance. For example, we were able to
change the number of maximum concurrent processes for our database services
when we noticed that we were seeing a performance bottleneck with database access.
Making a configuration change go live simply requires you to restart the
opensrf.setting service to pick up the configuration change, then restart the
affected service on each of your servers. We were also able to turn off some of
the less-used OpenSRF services, such as
open-ils.collections, on one of our
servers to devote more resources on that server to the more frequently used
services and other performance-critical processes such as Apache.
The support for logging and caching that is built into OpenSRF has been particularly helpful with the development of a custom service for SFX holdings integration into our catalogue. Once I understood how OpenSRF works, most of the effort required to build that SFX integration service was spent on figuring out how to properly invoke the SFX API to display human-readable holdings. Adding a new OpenSRF service and registering several new methods for the service was relatively easy. The support for directing log messages to syslog in OpenSRF has also been a boon for both development and debugging when problems arise in a cluster of five servers; we direct all of our log messages to a single server where we can inspect the complete set of messages for the entire cluster in context, rather than trying to piece them together across servers.
The primary weakness of OpenSRF is the lack of either formal or informal documentation for OpenSRF. There are many frequently asked questions on the Evergreen mailing lists and IRC channel that indicate that some of the people running Evergreen or trying to run Evergreen have not been able to find documentation to help them understand, even at a high level, how the OpenSRF Router and services work with XMPP and the Apache Web server to provide a working Evergreen system. Also, over the past few years several developers have indicated an interest in developing Ruby and PHP bindings for OpenSRF, but the efforts so far have resulted in no working code. Without a formal specification, clearly annotated examples, and unit tests for the major OpenSRF communication use cases that could be ported to the new language as a base set of expectations for a working binding, the hurdles for a developer new to OpenSRF are significant. As a result, Evergreen integration efforts with popular frameworks like Drupal, Blacklight, and VuFind result in the best practical option for a developer with limited time — database-level integration — which has the unfortunate side effect of being much more likely to break after an upgrade.
In conjunction with the lack of documentation that makes it hard to get started with the framework, a disincentive for new developers to contribute to OpenSRF itself is the lack of integrated unit tests. For a developer to contribute a significant, non-obvious patch to OpenSRF, they need to manually run through various (undocumented, again) use cases to try and ensure that the patch introduced no unanticipated side effects. The same problems hold for Evergreen itself, although the Constrictor stress-testing framework offers a way of performing some automated system testing and performance testing.
These weaknesses could be relatively easily overcome through contributions from people with the right skill sets. This article arguably offers a small set of clear examples at both the networking and application layer of OpenSRF. A technical writer who understands OpenSRF could contribute a formal specification to the project. With a formal specification at their disposal, a quality assurance expert could create an automated test harness and a basic set of unit tests that could be incrementally extended to provide more coverage over time. If one or more continuous integration environments are set up to track the various OpenSRF branches of interest, then the OpenSRF community would have immediate feedback on build quality. Once a unit testing framework is in place, more developers might be willing to develop and contribute patches, as they could sanity-check their own code without an intense effort before exposing it to their peers.
public interface MethodSecurityExpressionHandler
Facade which isolates Spring Security's requirements for evaluation method-security expressions from the implementation of the underlying expression objects.
ExpressionParser getExpressionParser()
EvaluationContext createEvaluationContext(Authentication authentication, org.aopalliance.intercept.MethodInvocation mi)
Object filter(Object filterTarget, Expression filterExpression, EvaluationContext ctx)
Parameters:
- filterTarget - the array or collection to be filtered.
- filterExpression - the expression which should be used as the filter condition. If it returns false on evaluation, the object will be removed from the returned collection.
- ctx - the current evaluation context (as created through a call to createEvaluationContext(Authentication, MethodInvocation))
void setReturnObject(Object returnObject, EvaluationContext ctx)
returnObject- the return object value
ctx- the context within which the object should be set (as created through a call to
createEvaluationContext(Authentication, MethodInvocation) | http://docs.spring.io/spring-security/site/docs/3.0.x/apidocs/org/springframework/security/access/expression/method/MethodSecurityExpressionHandler.html | 2017-07-20T18:49:16 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.spring.io |
0.12.5 (Mar 2017)¶
Bokeh Version
0.12.5 is an incremental update that adds a few important
features and fixes several bugs. Some of the highlights include:
- New general capability for Python/JS events (#3210, #3748, #5278)
- Bokeh apps easily viewable inline in Jupyter Notebooks (#3461)
- Confusing
--hostparameter no longer necessary (#5692)
- Interactive legends can now control glyph visibility (#2274, #3715)
- Many fixes and improvements to
GMapPlotincluding a new
gmapfunction for creating Google Maps plots easily (#2822, #2940, #3737, #4835, #5592, #5826, #5845)
CustomJSTransformnow available for CDS columns (#5015)
- Sophisticated “pivot” example app contributed (#5894)
- Themes now work with
componentsand in Jupyter notebooks (#4722, #4952, ).
Known Issues¶
The ClientSession APIs defined in bokeh.client.session, such as push_session, don’t currently support the new on_event interface for UI event callbacks in Bokeh Server application. You may track the issue on GitHub (#6092).
MPL COMPATIBLITY IS DEPRECATED
Bokeh’s MPL compatibility was implemented using a third-party library that
only exposes a small fraction of all Matplotlib functionality, and is
now no longer being actively maintained. The Bokeh team unfortunately does
not have the resources to continue supporting this functionality, which
was never more than extremely limited in capability, and often produced
substandard results. Accordingly, in order to support the long term health
of the project, it has been decided to remove all MPL compatibility support
on the occasion of a 1.0 release. Any code that currently uses
to_bokeh
will continue to work with a deprecation warning until that time.
The
bokeh.embed.standalone_html_page_for_models method has been deprecated
in place of
bokeh.embed.file_html. For details see pull request 5978.
The
validate keyword argument to
bokeh.io.save has been deprecated.
Future usage of
bokeh.io.save will always validate the document before
outputting a file.
Deprecations removed¶
All previous deprecations up to
0.12.0 have be removed. Below is the
complete list of removals.
Modules and functions and classes that have been removed:
The methods
bokeh.document.add and
push_notebook of
ColumnDataSource have been removed.
The
bokeh.io.output_server function has been also been removed.
Additionally,
bokeh.io.push and other supporting functions or
properties that are useless without
output_server have been
removed. This includes any transitive imports of these functions
from other modules.
Additionally, the property
bokeh.charts.builder.Builder.sort_legend was
removed, as well as the following properties of
Plot
border_fill
background_fill
logo
responsive
title_text_align
title_text_alpha
title_text_baseline
title_text_color
title_text_font
title_text_font_size
Host Parameter Obsoleted¶
The
--host parameter is now unnecessary. For compatibility, supplying
it will currently cause a warning to be displayed, but it will otherwise
be ignored, and apps will run as normal. In a future release, supplying it
will result in an error.
The
--host parameter for the Bokeh server was confusing and difficult to
explain. As long as the Bokeh server relied on the HTTP “host” header to
provide URLs to resources, the
--host parameter was a necessary precaution
against certain kinds of HTTP spoofing attacks. However, the Bokeh server
has been updated to no longer require the use of the “host” header (and this
is maintained under test). Accordingly, there is no need to have any check
on the value of the “host” header, and so
--host is no longer needed.
Document and Model Refactoring¶
In order that
document.py and
models.py only contain things that might
be of usual interest to users, some changes and rearrangements were made.
The
abstract class decorator was moved from
models.py to
has_props.py. The class decorator now also adds an admonition to the
docstring of any class marked abstract that it is not useful to instantiate
directly.
The metaclass
Viewable has been renamed to
MetaModel.
The
document.py module has been split up, and parts that would not be of
normal interest to most users have been moved to better locations.
These changes are not expected to impact user code in any way. For complete details see pull request 5786.
JQuery and underscore.js removed from BokehJS¶
JQuery has been removed as a build dependency of BokehJS. The variable Bokeh.$ is no longer available. If you require JQuery (i.e. for a custom extension or when using the JavaScript API) you will need to provide it explicitly.
underscore.js has been removed as a build dependency of BokehJS. The variable Bokeh._ is no longer available. If you require underscore.js (i.e. for a custom extension or when using the JavaScript API) you will need to provide it explicitly.
Both of these removals together result in a ~10% reduction in the size of the minified BokehJS library.
Default tooltip position for lines changed to nearest point¶
When showing tooltips for lines, the new default is to label the nearest point, instead of the previous point, which used to be the default.
HTTP Request information for apps limited to query arguments¶
The
request previously attribute was added to session contexts as a way to
expose HTTP query parameters. It was discovered that providing the entire
request is not compatible with the usage of
--num-procs. A method was
found to satisfy the original feature request for query arguments, together
with
--num-procs (but only for query arguments). Accordingly the only
attribute that can now be accessed on
request is
.arguments, e.g.:
curdoc().session_context.request.arguments
Attempting to access any other attribute on
request will result in an
error.
Default save file¶
If user-specified or default destination cannot be written to, a temporary
file is generated instead. This mostly affects using
output_file in an
interactive session which formerly could result in a
PermissionError.
For details see pull request 5942.
The
bokeh.io.save method will now only accept a
LayoutDOM object and
no longer a
Document object for its
obj argument. This aligns the
bokeh.io.save argument types with
bokeh.io.show.
Reorganization of bokeh’s examples¶
Low-level examples, located under
examples/models, were split into
file
and
server examples and are available under
examples/models/file and
examples/models/server respectively (similarly to plotting examples). | https://docs.bokeh.org/en/0.12.11/docs/releases/0.12.5.html | 2020-08-03T21:15:25 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.bokeh.org |
Chart Settings Section
The Chart Settings section displays all charts available, provides options for creating and configuring charts, and allows the user to add or remove charts in the Dashboards section.
In the Chart Settings section you can view:
- The chart Type.
- The Name of the chart.
- A Description of the chart.
- The chart's Dashboard Family.
- A toggle that determines if the chart appears in the Dashboard. Click the option to toggle between Yes and No.
Click on a chart to edit the chart. | https://docs.tenable.com/nnm/5_10/Content/ChartSettingsSection.htm | 2020-08-03T21:24:04 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.tenable.com |
After a failover of a SAP, there will be error messages in the SAP logs. Many of these error messages are normal and can be ignored.
On Failure of the DB
BVx: Work Process is in reconnect status – This error message simply states that a work progress has lost the connection to the database and is trying to reconnect.
BVx: Work Process has left reconnect status – This is not really an error, but states that the database is back up and the process has reconnected to it.
Other errors – There could be any number of other errors in the logs during the period of time that the database is down.
On Startup of the CI
E15: Buffer SCSA Already Exists – This error message is not really an error at all. It is simply telling you that a previously created shared memory area was found on the system which will be used by SAP.
E07: Error 00000 : 3No such process in Module rslgsmcc (071) – See SAP Note 7316 – During the previous shutdown, a lock was not released properly. This error message can be ignored.
During a LifeKeeper In-Service Operation
The following messages may be displayed in the LifeKeeper In Service Dialog during an in-service operation:
error: permission denied on key ‘net.unix.max_dgram_qlen’
error: permission denied on key ‘kernel.cap-bound’
These errors occur when saposcol is started and can be ignored (see SAP Note 201144).
Post your comment on this topic. | http://docs.us.sios.com/spslinux/9.4.0/en/topic/sap-error-messages-during-failover-or-in-service | 2020-08-03T21:32:20 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.us.sios.com |
Muting a participant's audio
There are several ways the audio being sent from one or more participants can be muted by an administrator or a meeting Host:
This is known as being "administratively muted". When a participant's audio has been administratively muted, Pexip Infinity still receives their audio stream but does not add it to the mix being sent to all other participants. This is different to a participant muting their own microphone locally, when the local client or endpoint does not send any audio to Pexip Infinity.
Administrators can also customize Infinity Connect clients so that the microphone is muted locally by default. In these cases, participants are able to subsequently unmute and mute themselves.
Note that low-level, almost imperceptible background noise is added to the audio mix in all conferences. This creates a similar effect to an open mic and gives reassurance that the conference is alive, even if all participants are muted. This background noise is not configurable and cannot be disabled.
Using the Administrator interface
Administrators can use the Pexip Infinity Administrator interface to either mute individual participants, or mute all Guest participants.
Muting an individual participant
To use the Pexip Infinity Administrator interface to mute a participant's audio:
- Select the participant. You can do this in two ways:
- Go toand select the participant to mute.
- Go to Virtual Meeting Room or Virtual Auditorium that the participant is in. From the tab, select the participant to mute.and select the
- At the bottom right of the screen, select.
Muting all Guest participants
To use the Pexip Infinity Administrator interface to mute the audio from all Guest participants:
- Go to Virtual Meeting Room or Virtual Auditorium.and select the
- At the bottom left of the screen, select.
All Guest participants currently in the conference, and any Guest participants who subsequently join the conference, will be muted. Individual Guest participants can still be unmuted and muted by a conference Host or system administrator.
Using Infinity Connect
You must have Host privileges to use this feature.
Host participants can mute an individual participant, or mute all Guests simultaneously. Note that it does not mute the participant's speakers, so they will still hear all other unmuted participants, but what that muted participant says will not be heard.
When Infinity Connect has been used to mute a participant, the Audio administratively muted? column of the Conference status page of the Pexip Infinity Administrator interface will show YES.
- Participants will not be notified that they have been muted or unmuted, although Infinity Connect participants will see a muted icon next to themselves in the participant list.
- Participants can mute and unmute themselves using Infinity Connect, but only if they have Host privileges.
- An Infinity Connect user can unmute a participant previously muted by another Infinity Connect user.
Muting an individual participant
To use Infinity Connect to mute or unmute an individual participant's audio:
From the Participant list, select the participant and then select or .
When muted, a
icon is shown next to the participant's name.
Host participants using Infinity Connect can also use the commands /mute [participant] and /unmute [participant].
Muting all Guest participants
To use Infinity Connect to mute the audio coming from all Guest participants:
From the top of the side panel, select
and then select Mute all Guests.
Host participants using Infinity Connect can also use the commands /muteall and /unmuteall.
All Guest participants currently in the conference, and any Guest participants who subsequently join the conference, will be muted
icon next to their name)
Using DTMF
If DTMF controls have been enabled, Host participants can mute and unmute all Guest participants. The default DTMF entry to do this is *5 but this may have been customized. For more information, see Using a DTMF keypad to control a conference. | https://docs.pexip.com/admin/muting_participant.htm | 2020-08-03T20:15:38 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.pexip.com |
2D Map
The 2D Map plot type can be used to visualize geographic data at any scale, from worldwide to street level. You can overlay data on top of the map as points or as a heatmap.
Required Features and Dimensions
The 2D Map plot type requires a feature that represents longitude mapped to X and a feature that represents latitude mapped to Y. The coordinates must be in the form of Decimal Degrees (DD) — coordinates in other formats must be converted before loading the data set.
Controls
To move the 2D map, hold down the left mouse button and drag (desktop) or point at the map and move the control stick (VR). To rotate the map (to expose the Z axis) hold down the right mouse button and drag (desktop) or point away from the map and move the control stick (VR). To zoom in, roll the mouse wheel (desktop) or select the Zoom In / Zoom Out buttons in plot settings (desktop and VR).
Settings
In 2D Map mode you can change:
- Zoom level – Changes zoom level
- Region Selection – Create a selection on the map and zoom to the area selected
- Heatmap – Superimpose a heatmap on the plot
If you right-click on the 2D Map plot, you can bring up the Contextual Menu to choose your desired Map Style. This can also be changed from the Maps section of the Preferences window.<<
Map Provider Attributions
ArcGIS by Esri
Open Street Map
Map tiles by © OpenStreetMap contributors, under CC BY 3.0. Data by OpenStreetMap, under ODbL.
Stamen Design
Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL. | https://docs.virtualitics.com/plot-types/2d-map/ | 2020-08-03T19:58:57 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['https://docs.virtualitics.com/wp-content/uploads/2018/05/2DMaps-1.gif',
None], dtype=object) ] | docs.virtualitics.com |
WYSIWYG Export
- 3 minutes to read
In this mode, an exported document retains the layout of grid cells. Grid data shaping features in the exported document are not supported, in comparison to the data-aware export. This mode uses the Printing-Exporting library to export data.
Export Data in Code
The GridControl allows you to export its data to a file or stream. The following code sample exports the GridControl's data to a PDF file:
void Button_Click_Export(object sender, RoutedEventArgs e) { view.ExportToPdf(@"c:\Example\grid_export.pdf"); }
*these methods use the data-aware export mode. To enable the WYSIWYG mode in these methods, do one of the following:
Set the ExportSettings.DefaultExportType property to ExportType.WYSIWYG to enable the WYSIWYG export mode for all export methods.
Call a method with the XlsExportOptionsEx.ExportType, XlsxExportOptionsEx.ExportType, or CsvExportOptionsEx.ExportType property set to WYSIWYG.
Export Data with Print Preview
The Print Preview window allows end users to print document and export it to a file in the required format.
Customize Appearance
To customize the exported document, use approaches from the Customize Printed Document Appearance topic. | https://docs.devexpress.com/WPF/118842/controls-and-libraries/data-grid/printing-and-exporting/wysiwyg-export | 2020-08-03T21:58:56 | CC-MAIN-2020-34 | 1596439735833.83 | [array(['/WPF/images/printpreviewwindowexport128995.png',
'PrintPreviewWindowExport'], dtype=object) ] | docs.devexpress.com |
Settings storage¶
pretix is highly configurable and therefore needs to store a lot of per-event and per-organizer settings. For this purpose, we use django-hierarkey which started out as part of pretix and then got refactored into its own library. It has a comprehensive documentation which you should read if you work with settings in pretix.
The settings are stored in the database and accessed through a
HierarkeyProxy instance. You can obtain
such an instance from any event or organizer model instance by just accessing
event.settings or
organizer.settings, respectively.
Any setting consists of a key and a value. By default, all settings are strings, but the settings system includes serializers for serializing the following types:
Built-in types:
int,
float,
decimal.Decimal,
dict,
list,
bool
datetime.date,
datetime.datetime,
datetime.time
LazyI18nString
References to Django
Fileobjects that are already stored in a storage backend
References to model instances
In code, we recommend to always use the
.get() method on the settings object to access a value, but for
convenience in templates you can also access settings values at
settings[name] and
settings.name.
See the hierarkey documentation for more information.
To avoid naming conflicts, plugins are requested to prefix all settings they use with the name of the plugin
or something unique, e.g.
payment_paypal_api_key. To reduce redundant typing of this prefix, we provide
another helper class:
- class
pretix.base.settings.
SettingsSandbox(typestr: str, key: str, obj: django.db.models.base.Model)¶
Transparently proxied access to event settings, handling your prefixes for you.
- Parameters
typestr – The first part of the pretix, e.g.
plugin
key – The prefix, e.g. the name of your plugin
obj – The event or organizer that should be queried
When implementing e.g. a payment or export provider, you do not event need to create this sandbox yourself, you will just be passed a sandbox object with a prefix generated from your provider name.
Forms¶
Hierarkey also provides a base class for forms that allow the modification of settings. pretix contains a subclass that also adds support for internationalized fields:
You can simply use it like this:
class EventSettingsForm(SettingsForm): show_date_to = forms.BooleanField( label=_("Show event end date"), help_text=_("If disabled, only event's start date will be displayed to the public."), required=False ) payment_term_days = forms.IntegerField( label=_('Payment term in days'), help_text=_("The number of days after placing an order the user has to pay to " "preserve his reservation."), ) | https://docs.pretix.eu/en/latest/development/implementation/settings.html | 2020-08-03T21:06:00 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.pretix.eu |
The Oracle Recovery Kit does not include support for Connection Manager and Oracle Names features
The LifeKeeper Oracle Recovery Kit does not include support for the following Oracle Net features of Oracle: Oracle Connection Manager, a routing process that manages a large number of connections that need to access the same service; and Oracle Names, the Oracle-specific name service that maintains a central store of service addresses.
The LifeKeeper Oracle Recovery Kit does protect the Oracle Net Listener process that listens for incoming client connection requests and manages traffic to the server. Refer to the LifeKeeper for Linux Oracle Recovery Kit Administration Guide for LifeKeeper configuration specific information regarding the Oracle Listener.
The Oracle Recovery Kit does not support the ASM or grid component features
The Oracle Automatic Storage Manager (ASM) feature provided in Oracle is not currently supported with LifeKeeper. In addition, the grid components are not protected by the LifeKeeper Oracle Recovery Kit. Support for raw devices, file systems, and logical volumes are included in the current LifeKeeper for Linux Oracle Recovery Kit. The support for the grid components can be added to LifeKeeper protection using the gen/app recovery kit.
The Oracle Recovery Kit does not support NFS Version 4
The Oracle Recovery Kit supports NFS Version 3 for shared database storage. NFS Version 4 is not supported at this time due to NFSv4 file locking mechanisms.
Oracle listener stays in service on primary server after failover
Network failures may result in the listener process remaining active on the primary server after an application failover to the backup server. Though connections to the correct database are unaffected, you may still want to kill that listener process.
Post your comment on this topic.
Post your comment on this topic. | http://docs.us.sios.com/spslinux/9.4.0/en/topic/oracle-known-issues-restrictions | 2020-08-03T20:28:08 | CC-MAIN-2020-34 | 1596439735833.83 | [] | docs.us.sios.com |
Branches allow you to keep track of changes that may affect the branch that your project is linked to. There are two types of branches:
- Long lived branches which allow you to see the full dashboard and history of your project
- Short lived branches which allow you to the the delta in issues between your branch and your master/production/target
Github and Bitbucket
When creating an Analysis Project from a GitHub or Bitbucket repository in CodeScanCloud, you will be asked to choose the branch that the project is linked to. This branch will become your Main Branch. If you also selected Check Pull requests, any changes that affect this main branch will be reflected in this project with the branch functionality.
Pull Requests
If you have set your Main Branch to master, any relevant pull request will be analysed when it is created and every time it is updated. The results will be displayed in the same page as your Main Branch analysis under the branches drop down menu.
Visual Studio Team Services
When a pull request is created in a VSTS repository or in a repository tracked by your VSTS Build Definition, a new branch will be created on your CodeScan Cloud project. Please note that any remote repositories will have to have Continuous Integration and Pull Request Validation checked for the appropriate branches in the Build Definition’s Triggers menu.
Salesforce
When creating an Analysis Project from Salesforce, the Org or Sandbox you authorize when creating the project will become your master. Sandboxes can be added to this project as “branches” by editing the project from the Project Analysis page. This allows for easy comparison between Production Orgs or Sandboxes, and is especially good for checking features before going into production.
Comparing Branches
Here you can see the branch has 4 new violations. If the branch is selected, the details of the violation/s can be seen. All new branches that are added in this way will be deleted in 30 days if they are not analysed again.
Managing branches
In the Administration drop down menu for your Analysis Project, you will find a link to the Branches page. This page allows users to delete any new branches and also change the branch that the Analysis Project is checking.
To delete a branch, click the Actions icon next to the branch and click Delete Branch.
To change the branch that the Analysis Project is tracking, click the Actions Button next to the Main Branch and click Rename Branch. Enter the name of the branch you would like to begin tracking. Changes will only be reflected on the project’s Overview page once the analysis has been performed again.
If you have any further questions about CodeScan Cloud, please contact us. | https://docs.codescan.io/hc/en-us/articles/360011898452-Understanding-branches-in-CodeScan-Cloud | 2020-09-18T13:28:55 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://d33wubrfki0l68.cloudfront.net/c0bd74ad1d68f59c28acd3af098c9d949ba94c85/8c386/img/branches/new-branch-issues.png',
None], dtype=object) ] | docs.codescan.io |
Configure Your Data Model¶
On this page
Overview¶
In modern applications, users expect their data to be accurate no matter which device or browser they connect from. Data modeling ensures the accuracy of application data by performing type and format checks on fields.
MongoDB Realm allows you to define, synchronize, and enforce your data model in two formats:
- Realm Schema, which defines your data as MongoDB documents to enforce validation and synchronization in the cloud.
- Realm data model, which defines your data as native objects in your mobile application.
Having these two forms of data models allows your data to be consistent and synced regardless of which clients write the data.
Approaches¶
There are two alternative approaches to configuring both data models:
- Create a Realm Object Model from a Realm Schema: If you have data in your MongoDB Atlas cluster already, MongoDB generates a Realm Schema by sampling your data. MongoDB Realm can then translate that Realm Schema into a Realm Object Model to use in your mobile application.
- Create a Realm Schema from a Realm Object Model: Alternatively, if you are developing mobile-first and do not already have data in your Atlas cluster, you can translate your Realm Object Model into a Realm Schema for use with Atlas. Regardless of the approach that you take, when you configure both your Atlas cluster and Mobile application to use the respective data model, changes to the data model between the server and client are auto-updated.
Create a Realm Object Model from a Realm Schema¶
Link a MongoDB Atlas Data Source
Your Realm app must have at least one linked Atlas data source in order to define a Realm Data Model.
Define a Realm Schema¶
To get started, ensure you have a Realm Schema defined. MongoDB Realm will translate this Realm Schema into a Realm Object Model to be configured and utilized in your mobile application.
Primary Key _id Required
To work with Realm Sync, your data model must have a primary key field
called
_id.
_id can be of type
string,
int, or
objectId.
Note
To learn how to define a Realm Schema for at least one collection in the synced cluster, see the Enforce a Document Schema page.
View and Fix Schema Errors¶
Realm may fail to generate some or all of your Realm Object Model based on your Realm Schema. You can view a list of the errors in your Realm Schema that prevented Realm from generating the Realm Object Model on the SDKs page of the Realm UI.
Common errors include mismatched types, and differences in the way relationships are represented in the two respective models. For each error or warning, modify your Realm Schema to fix the specified issue.
Note
To learn more about the technical differences between the two data models read the Realm Object Model Restrictions page.
Open a Realm with the Realm Object Model¶
You can immediately use the generated Realm Object Model in your client application. In order to begin enforcing data validation with your data model, you can open a Realm with the Realm Object Model. This will prevent improper data from entering your database from your mobile client. Click Copy on the right-hand side of the Realm Object Model for the Object Model you want to integrate into your mobile application code. This will copy the Realm Object Model code for the SDK of your choice into your clipboard. Open your mobile application code in your IDE and paste the Realm Object Model code in.
- JavaScript
See also
See also
See also
See also
Sync Data on React Native
Create a Realm Schema from a Realm Object Model¶
Link a MongoDB Atlas Cluster
Your Realm app must have at least one linked Atlas data source in order to define a Realm Data Model.
Enable Development Mode Sync¶
First, enable development mode sync.
You can alter or define a Realm Object Model through your mobile client SDK. Changes to your Realm Object Model are only allowed when Development Mode is on in the MongoDB Realm UI. MongoDB Realm will reflect these changes to your Realm Object Model in your Realm Schema used for Atlas.
Edit Your Realm Object Model¶
As you continue to develop your application, you will need to modify your data model with it to enforce different data validation rules based on those changes. While Development Mode is on, you can edit your Realm Object Model in your client code. Data Validation occurs when Development Mode is off, so MongoDB Realm does not accept changes to your Realm Object Model while Development Mode is not on.
Primary Key _id Required
To work with Realm Sync, your data model must have a primary key field
called
_id.
_id can be of type
string,
int, or
objectId.
Example
A group is developing a social media application. When the group first developed their application, a user’s birthday was a required field of the User’s data model. However, due to privacy concerns over the amount of user data that is stored, management creates a new requirement to make the user’s birthday field an optional field. Application developers turn on Development Mode in the MongoDB Realm UI and then edit their user model within their client code.
See also
See also
See also
See also
Sync Data on React Native
Update Your Realm Schema with the Realm Object Model Changes¶
While Development Mode is on, MongoDB Realm doesn’t validate writes against your data model, allowing you to freely update your Realm Object Model. When you turn off Development Mode, MongoDB Realm automatically updates your Realm Schema and starts to enforce data validation for your Atlas cluster based on it.
Click the “Turn Dev Mode Off” button on the top banner or in the Sync screen to turn off Development Mode. Once you turn off Development Mode, the “Development Mode is OFF” modal will appear. The modal indicates that MongoDB Realm has stopped accepting new data model changes from clients. Click the “View my Realm Schema” button on the modal to view your updated Realm Schema.
Note
To make future data model updates from your mobile client code, you can follow this procedure again.
Summary¶
- MongoDB Realm uses two different data models: a Realm Object Model for mobile and a Realm Schema for Atlas. Changes to one data model match the other data model.
- If you already have data in Atlas, MongoDB Realm creates a Realm Schema by sampling that data. That Realm Schema can be translated into a Realm Object Model to be used in your Realm mobile application code.
- If you do not have data in Atlas or are developing with a mobile-first approach, you can turn on Development Mode to allow for data model changes from a Realm mobile client. When you finish developing your Realm Object Model, you can turn off Development Mode, and your Realm Schema will be auto-updated with your updated data model configuration. Atlas will begin using this updated data model configuration for data validation on your cluster immediately. | https://docs.mongodb.com/realm/sync/configure-your-data-model/ | 2020-09-18T12:50:50 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.mongodb.com |
Manage Service Add-on Store
Service add-on store enables you to organize individual service add-ons into groups that can be used as a paid resource for the buckets. This allows you to easily create groups which can be added to the bucket to limit the amount or types of service add-ons that are available to a user.
Ensure that Service Add-on Groups permissions are on before managing service add-on Store. For more information about permissions refer to the Permissions section of this guide.
Service Add-On Group Management
The service add-on groups have hierarchical (tree) structure:
- service add-on group
- child group
- service add-ons
Click the service add-on group's label to expand the list of child groups, then click the service add-on group's label to view the list of service add-ons, respectively.
Add Service Add-On Groups
To add a service add-on group:
- Go to your Control Panel > Cloud > Service Add-ons menu > Store.
- Click the "+" button in the upper right corner of the page.
- Give a name to your group.
- Upload the service add-on group icon (click Choose File to select a necessary image).
- Click Save.
- You can add child service add-on groups to your service add-on group by clicking the "+" button > Add Child next to your service add-on group.
Assign Service Add-ons to Service Add-on Groups
To assign a service add-on to a service add-on group:
- Go to your Control Panel > Cloud > Service Add-ons menu > Store.
- Click the "+" button next to the required child group's label, then select Add Service Add-on.
- Choose the service add-on from the drop-down box at the Service add-on section.
- Click Save.
Remove Service from Service Add-on Group
To remove a service add-on from a service add-on group:
- Go to your Control Panel > Cloud > Service Add-ons menu > Store.
- Click the service add-on group's label, then click the name of the service add-on group from which you wish to remove a service add-on.
- Сlick the Delete icon next to a service add-on you want to remove.
- Confirm the deletion.
View/Edit/Delete a Service Add-on Group
To view/edit/delete a service add-on group:
- Go to your Control Panel > Cloud > Service Add-ons menu > Store.
- On the page that follows, you'll see the list of all service add-on groups created within your cloud:
- Click the group's label, then click the child group label to see the list of service add-ons assigned to this group.
- Click the Edit icon next to a group to edit its name or upload a service add-on group icon.
- Click Delete icon to delete a group. | https://docs.onapp.com/agm/latest/service-add-ons/manage-service-add-on-store | 2020-09-18T13:15:07 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.onapp.com |
Microsoft Dynamics GP on Windows Azure!
I’m excited to announce that Microsoft Dynamics GP 2013 is now available from partners in the cloud hosted on Windows Azure! This means that starting today Microsoft Dynamics GP customers can take advantage of easy to use, quick to implement business solutions from Microsoft with the added benefit of knowing their solution is hosted on secure, enterprise-class cloud infrastructure from a trusted provider.
You can read more about this exciting announcement here:
And you can get great information, training and documents on PartnerSource here:
It’s important to note that there’s no change in our partner model here, meaning that Microsoft Dynamics GP continues to be available only through our partners, and not direct from Microsoft.
Jay Manley | https://docs.microsoft.com/en-us/archive/blogs/gp/microsoft-dynamics-gp-on-windows-azure | 2020-09-18T13:48:44 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.microsoft.com |
FlushApiCache
Flushes an
ApiCache object.
Request Syntax
DELETE /v1/apis/
apiId/FlushCache HTTP
-: | https://docs.aws.amazon.com/appsync/latest/APIReference/API_FlushApiCache.html | 2020-09-18T15:38:20 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.aws.amazon.com |
Key Features
Execute Service Checks - Sensu can monitor application and system services, detecting those in an unhealthy state. Service checks can be used, for example, to determine if a service like HAProxy is up or down, or if a web application is responding to requests..
Dynamic Client Registry - Sensu's use of the pubsub pattern of communication allows for automated registration & de-registration of ephemeral systems — allowing you to dynamically scale your infrastructure up and down without fear of generating false-positive alert storms.
Send Notifications - Sensu notifies your team about events before your customers do, using services such as Email, PagerDuty, Slack, IRC, etc.
Documented API - Sensu’s API provides access to event and client data, the ability to request check executions, and resolve events. The API also provides a key/value store which can be leveraged to extend Sensu's functionality in a variety of ways.
Self-Service Monitoring - Sensu provides support for centralized and decentralized (or distributed) monitoring, enabling operations teams to maintain a standard service level for the entire organization without placing unnecessary restrictions on developers. Give your users access to monitoring without relinquishing control.
Secure Connectivity - Sensu leverages transports that offer SSL encryption, authentication, and granular ACLs. Sensu's connections traverse complex network topologies, including those that use NAT and VPNs.
External Input - Sensu’s monitoring agent (sensu-client) provides a TCP and UDP socket that can accept external JSON data. Applications can leverage this interface to report errors directly to Sensu or ship application-specific metric data.
Uninstalling Sensu
You must update the configuration file on the Sensu server.
Obtain access to the Sensu server.
Procedure
On the Sensu server, go to the checks configuration directory. By default, the directory is located at:
/etc/sensu/conf.d
For each check, delete the BigPanda handler. For example, you would remove the
"bigpanda"handler from this check definition.
{ "checks": { "<check_name>": { "handlers": ["<handler_1_name>", "<handler_2_name>", ..., "bigpanda"]
- Remove the Bigpanda Handler and configuration files, which are located at:
/etc/sensu/handlers/bigpanda.rb
/etc/sensu/conf.d/handler_bigpanda.json
- Restart the Sensu server by running the following command:
sudo /etc/init.d/sensu-server restart
Post-Requisites
Delete the integration in BigPanda to remove the Sensu integration from your UI.
Updated 11 months ago | https://docs.bigpanda.io/docs/sensu?utm_source=site-partners-page | 2020-09-18T14:49:08 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.bigpanda.io |
Invite users to your app and content by adding them to a group.
By sorting your users into groups, you can decide who gets access to what.
Add users to a group if you want to invite them to download your Falkor app. Invited users are sent an account activation code and download link.
If your app is public, users can sign up without being invited.
Private apps require users to sign in (no sign up option). All private app users must be invited to the app by being added to a group.
Invited users are sent an account activation code and download link.
If you have a public Falkor app users can sign up to your app without being added or invited to a group.
To see which users have signed up, open Groups and then select the "Users" tab from the top menu. This will show a list of all signed up users and their related details. You can use the export option to extract the user list.
Any user who signs up will not belong to a group (unless later invited to a group). The user will, therefore, show as belonging to "No Group". | https://docs.falkor.io/groups/managing-groups | 2020-09-18T12:44:04 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.falkor.io |
Reports¶
There are 4 built-in reports in Welle - User Accounts & Roles, Users Without Managers, Orphan Accounts and Access Review Results.
User Accounts & Roles¶
This report lists down all users and their assigned roles. Managers or application owners can make use of this report to validate assigned roles offline.
Users Without Managers¶
This report lists down all the users found without managers. Administrators will be notified via emails and are expected to assign managers to users.
Note
Read Users and Email Templates for more information.
Orphan Accounts¶
This report lists down all the orphan accounts found from all applications. Application owners can make use of this report to clean up the accounts in their applications.
Access Review Results¶
This report lists down the actions taken by all reviewers for a particular access review campaign. Sometimes, IT auditors will request for this report for auditing purpose and to follow up on remediation actions carried out in the subsequent year.
Note
Only campaigns that have been the
Closed will appear here.
| https://welle.readthedocs.io/en/latest/ig/reports.html | 2020-09-18T13:49:48 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['../_images/reports-ua-0.png', '../_images/reports-ua-0.png'],
dtype=object)
array(['../_images/reports-nm-0.png', '../_images/reports-nm-0.png'],
dtype=object)
array(['../_images/reports-oa-0.png', '../_images/reports-oa-0.png'],
dtype=object)
array(['../_images/reports-ar-0.png', '../_images/reports-ar-0.png'],
dtype=object)
array(['../_images/reports-ar-1.png', '../_images/reports-ar-1.png'],
dtype=object) ] | welle.readthedocs.io |
Web Framework & Plugins¶
TL;DR: Examples¶
The following examples include their documentation in the HTML page and source. Read the source and install them as plugins (see below) to see how they work.
API usage
Installing Plugins¶
The framework supports installing user content as pages or extensions to pages. To install an example:
- Menu Config → Web Plugins
- Add plugin: type “Page”, name e.g. “dev.metrics” (used as the file name in
/store/plugin)
- Save → Edit
- Set page to e.g. “/usr/dev/metrics”, label to e.g. “Dev: Metrics”
- Set the menu to e.g. “Tools” and auth to e.g. “None”
- Paste the page source into the content area
- Save → the page is now accessible in the “Tools” menu.
Hint: use the standard editor (tools menu) in a second session to edit a plugin repeatedly during test / development.
Basics¶
The OVMS web framework is based on HTML 5, Bootstrap 3 and jQuery 3. Plenty documentation and guides on these basic web technologies is available on the web, a very good resource is w3schools.com.
For charts the framework includes Highcharts 6. Info and documentation on this is available on Highcharts.com.
The framework is “AJAX” based. The index page
/ loads the framework
assets and defines a default container structure including a
and a
#main container. Content pages are loaded into the
#main
container. The window URL includes the page URL after the hash mark
#:– this loads page
/statusinto
#main– this loads the dashboard and activates night mode
Links and forms having id targets
#… are automatically converted to
AJAX by the framework:
<a href="/edit?path=/sd/index.txt" target="#main">Edit index.txt</a>– load the editor
Pages can be loaded outside the framework as well (e.g.). See index source on framework scripts and
styles to include if you’d like to design standalone pages using the
framework methods.
If file system access is enabled, all URLs not handled by the system or
a user plugin (see below) are mapped onto the file system under the
configured web root. Of course, files can be loaded into the framework
as well. For example, if the web root is
/sd (default):– load file
/sd/mypage.htminto
#main– load directory listing
/sd/logsinto
#main
Important Note: the framework has a global shared context (i.e. the
window object). To avoid polluting the global context with local
variables, good practice is to wrap your local scripts into closures.
Pattern:
<script> (function(){ … insert your code here … })(); </script> | https://docs.openvehicles.com/en/latest/components/ovms_webserver/docs/index.html | 2020-09-18T12:46:37 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.openvehicles.com |
Requirements
Your LifeKeeper configuration must meet the following requirements prior to the installation of the LifeKeeper for Linux NAS Recovery Kit. Please see SIOS Protection Suite Installation Guide for specific instructions regarding the configuration of your LifeKeeper hardware and software.
Hardware Requirements
- Servers – LifeKeeper for Linux supported servers configured in accordance with the requirements described in SIOS Protection Suite Installation Guide and SPS for Linux Release Notes.
- IP Network Interface Cards – Each server requires at least one Ethernet TCP/IP-supported network interface card. Remember, however, that a LifeKeeper cluster requires.
-. | http://docs.us.sios.com/spslinux/9.5.0/en/topic/nas-recovery-kit-hardware-and-software-requirements | 2020-09-18T14:53:21 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.us.sios.com |
public static class ListDrgsRequest.Builder extends Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public ListDrgsRequest.Builder invocationCallback(com.oracle.bmc.util.internal.Consumer<javax.ws.rs.client.Invocation.Builder> invocationCallback)
Set the invocation callback for the request to be built.
invocationCallback- the invocation callback to be set for the request
public ListDrgsRequest.Builder retryConfiguration(RetryConfiguration retryConfiguration)
Set the retry configuration for the request to be built.
retryConfiguration- the retry configuration to be used for the request
public ListDrgsRequest.Builder copy(ListDrgsRequest o)
Copy method to populate the builder with values from the given instance.
public ListDrgsRequest build()
Build the instance of ListDrgsRequest as configured by this builder
Note that this method takes calls to
invocationCallback(com.oracle.bmc.util.internal.Consumer) into account, while the method
buildWithoutInvocationCallback() does not.
This is the preferred method to build an instance.
public ListDrgsRequest.Builder compartmentId(String compartmentId)
public ListDrgsRequest.Builder limit(Integer limit)
public ListDrgsRequest.Builder page(String page)
public ListDrgsRequest buildWithoutInvocationCallback()
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.12.3/com/oracle/bmc/core/requests/ListDrgsRequest.Builder.html | 2020-09-18T15:34:20 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.cloud.oracle.com |
Start using Fastah's IP Geolocation API
- Sign Up for Fastah IP Geolocation API - City, Country, Timezone, Maps on AWS Marketplace
- You will be automatically redirected to Fastah's Developer Portal to set your password and obtain an API key.
- Use the keys generated to configure the REST API in your web app or server code.
- Note that the REST API endpoint will always be:
Run in Postman
Postman app lovers, look no farther!
Need More Help?
Drop us a line at [email protected] with any questions, bug reports, training requests or if you just want to chat!
Updated 5 months ago | https://docs.getfastah.com/docs | 2020-09-18T14:41:03 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['/img/emojis/rocket.png', ':rocket: :rocket:'], dtype=object)] | docs.getfastah.com |
Docker
Threat Bus ships as pre-built Docker image. It can be used without any modifications to the host system. The Threat Bus executable is used as the entry-point of the container. You can transparently pass all command line options of Threat Bus to the container.
The pre-built image comes with all required dependencies and all existing plugins pre-installed.
Configuration Inside the ContainerConfiguration Inside the Container
Threat Bus requires a config file to operate. That file has to be made available inside the container, for example via mounting it.
The working directory inside the container is
/opt/tenzir/threatbus. To mount
a local file named
my-custom-config.yaml from the current directory into the
container, use the
--volume (
-v) flag.
See the configuration section to get started with a custom config file or refer to the detailed plugin documentation for fine tuning.
Port BindingsPort Bindings
Depending on the installed plugins, Threat Bus binds ports to the host system.
The used ports are defined in your configuration file. When running Threat
Bus inside a container, the container needs to bind those ports to the host
system. Use the
--port (
-p) flag repeatedly for all ports you need to bind. | https://docs.tenzir.com/threatbus/deployment/docker/ | 2020-09-18T14:30:37 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.tenzir.com |
LUIS Library
LUIS stands for "Loadable User Interface System." When you access a LUIS-compatible device from the LUIS app (available for ioS and Android), this device sends a
library offers a persistent, convenient storage for your device's settings (operational parameters). It can also play a pivotal role in the device control and monitoring. The library is extremely easy to use — just define a list of all desired settings using a setting configurator and employ simple API calls to work with them.
Setting configurator allows you to specify the names, types, value constraints, etc. of your device's settings and the STG library uses this to automatically calculate memory addresses for storing settings, protect the settings with a checksum, verify the validity of their values, etc. You code is then able to reference setting values by their names, like this: s=stg_get("IP",0), stg_set("IP",0,"192.168.1.40").
The library keeps your settings in the non-volatile memory or RAM. The non-volatile memory used can be the EEPROM memory (stor.) or the flash disk (fd.). For RAM, you can choose to go with "regular" RAM (the one that stores variables), or "custom" RAM, for which you can create your own access routines. | https://docs.tibbo.com/taiko/lib_luis | 2020-09-18T13:38:56 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.tibbo.com |
Mobile Wallets¶
Mobile Decred wallets are available for both Android and iOS.
Unlike desktop wallets, the mobile wallets do not download and validate the entire Decred blockchain. Instead, they run using SPV mode, which drastically decreases the use of the mobile devices system resources and data plan.
It is not currently possible to participate in Proof-of-Stake using mobile wallets.
Both of the mobile wallets are maintained by planetdecred.org developers. | https://docs.decred.org/wallets/mobile-wallets/ | 2020-09-18T12:45:57 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.decred.org |
When deploying your service to a 3rd party provider you will need to generate a service token. Unlike your personal other tokens used by the the Doppler CLI, a service token provides read-only access to a single config.
Let's walk through how to create, use, and revoke a service token.
Creating a Token
To create a token, go to a project and then select a config. From there you want to select the access tab. You should now see a screen similar to this.
Next click on the Generate Service Token button. This will open a dialog where you can name the service token.
After naming the service token and clicking the Generate button, you should see the service token appear. You can revoke this service token at any time.
Once you have copied your service token, it will not be shown again.
Using a Sevice Token
By default the Doppler CLI will look try to configure itself if the environment variable
DOPPLER_TOKEN is provided. You can also pass the service token directly to the CLI. This will fetch and save the configured project and config to the CLI's config file for later use.
doppler setup --no-prompt
Revoking a Token
Revoking a service token is non-reversible and will immediately shutdown all access to the config. To revoke a token, click the "Revoke" button on the token you'd like to remove.
Updated 2 days ago | https://docs.doppler.com/docs/enclave-service-tokens | 2020-09-18T12:59:38 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://files.readme.io/879ff74-Screen_Shot_2019-11-20_at_6.11.54_PM.png',
'Screen Shot 2019-11-20 at 6.11.54 PM.png'], dtype=object)
array(['https://files.readme.io/879ff74-Screen_Shot_2019-11-20_at_6.11.54_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/399b796-Screen_Shot_2019-11-20_at_6.10.40_PM.png',
'Screen Shot 2019-11-20 at 6.10.40 PM.png'], dtype=object)
array(['https://files.readme.io/399b796-Screen_Shot_2019-11-20_at_6.10.40_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/c2489f6-Screen_Shot_2019-12-15_at_3.53.37_AM.png',
'Screen Shot 2019-12-15 at 3.53.37 AM.png'], dtype=object)
array(['https://files.readme.io/c2489f6-Screen_Shot_2019-12-15_at_3.53.37_AM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/8e72b82-Screen_Shot_2019-12-15_at_3.50.51_AM.png',
'Screen Shot 2019-12-15 at 3.50.51 AM.png'], dtype=object)
array(['https://files.readme.io/8e72b82-Screen_Shot_2019-12-15_at_3.50.51_AM.png',
'Click to close...'], dtype=object) ] | docs.doppler.com |
Constructor: channelAdminLogEventActionParticipantInvite
Back to constructors index
A user was invited to the group
Attributes:
Type: ChannelAdminLogEventAction
Example:
$channelAdminLogEventActionParticipantInvite = ['_' => 'channelAdminLogEventActionParticipantInvite', 'participant' => ChannelParticipant];
Or, if you’re into Lua:
channelAdminLogEventActionParticipantInvite={_='channelAdminLogEventActionParticipantInvite', participant=ChannelParticipant}
This site uses cookies, as described in the cookie policy. By clicking on "Accept" you consent to the use of cookies. | https://docs.madelineproto.xyz/API_docs/constructors/channelAdminLogEventActionParticipantInvite.html | 2020-09-18T14:34:35 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.madelineproto.xyz |
Building the CoApp alpha bits — [...] | https://docs.microsoft.com/en-us/archive/blogs/garretts/building-the-coapp-alpha-bits | 2020-09-18T13:54:27 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.microsoft.com |
- Realm >
- React Native SDK >
- The Realm Data Model
Notifications¶
On this page
Overview¶
Any modern app should be able to react when data changes, regardless of where that change originated. When a user adds a new item to a list, you may want to update the UI, show a notification, or log a message. When someone updates that item, you may want to change its visual state or fire off a network request. Finally, when someone deletes the item, you probably want to remove it from the UI. Realm’s notification system allows you to watch for and react to changes in your data, independent of the writes that caused the changes.
Realm emits three kinds of notifications:
- Realm notifications whenever a specific Realm commits a write transaction.
- Collection notifications whenever any Realm object in a collection changes, including inserts, updates, and deletes.
- Object notifications whenever a specific Realm object changes, including updates and deletes.
Generally, this is how you observe a Realm, collection, or object:
- Create a notification handler for the Realm, collection, or object notification.
- Add the notification handler to the Realm, collection, or object that you want to observe.
- Receive a notification token from the call to add the handler. Retain this token as long as you want to observe.
- When you are done observing, invalidate the token.
Realm Notifications¶
You can register a notification handler on an entire Realm. Realm Database calls the notification handler whenever any write transaction involving that Realm is committed. The handler receives no information about the change.
This is useful when you want to know that there has been a change but do not care to know specifically what changed. For example, proof of concept apps often use this notification type and simply refresh the entire UI when anything changes. As the app becomes more sophisticated and performance-sensitive, the app developers shift to more granular notifications.
Example
Suppose you are writing a real-time collaborative app. To give the sense that your app is buzzing with collaborative activity, you want to have an indicator that lights up when any change is made. In that case, a realm notification handler would be a great way to drive the code that controls the indicator.
- JavaScript
Collection Notifications¶
You can register a notification handler on a specific collection within a Realm. The handler receives a description of changes since the last notification. Specifically, this description consists of three lists of indices:
- The indices of the objects that were deleted.
- The indices of the objects that were inserted.
- The indices of the objects that were modified.
Order Matters
In collection notification handlers, always apply changes in the following order: deletions, insertions, then modifications. Handling insertions before deletions may result in unexpected behavior.
Realm Database emits an initial notification after retrieving the collection. After that, Realm Database delivers collection notifications asynchronously whenever a write transaction adds, changes, or removes objects in the collection.
Unlike Realm notifications, collection notifications contain detailed information about the change. This enables sophisticated and selective reactions to changes. Collection notifications provide all the information needed to manage a list or other view that represents the collection in the UI.
Example
The following code shows how to observe a collection for changes in order to update the UI.
- JavaScript
Object Notifications¶
You can register a notification handler on a specific object within a Realm. Realm Database notifies your handler:
- When the object is deleted.
- When any of the object’s properties change.
The handler receives information about what fields changed and whether the object was deleted.
Example
The following code shows how to open a default realm, create a new instance of a class, and observe that instance for changes.
- JavaScript
Summary¶
- Notifications allow you to watch for and react to changes on your objects, collections, and Realms.
- When you add a notification handler to an object, collection, or Realm that you wish to observe, you receive a token. Retain this token as long as you wish to keep observing.
- Realm Database has three notification types: Realm, collection, and object notifications. Realm notifications only tell you that something changed, while collection and object notifications allow for more granular observation. | https://docs.mongodb.com/realm/react-native/notifications/ | 2020-09-18T14:45:46 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.mongodb.com |
Admin Console Enhancements
Payara Server. | https://docs.payara.fish/community/docs/4.1.2.181/documentation/payara-server/admin-console/admin-console.html | 2020-09-18T12:51:52 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.payara.fish |
Install BizTalk Server 2020
Install BizTalk Server on a single computer.
Before you begin
System Administrator: When you install SQL Server, setup automatically grants your signed-in account System Administrator rights. Since these rights are also required to install BizTalk Server, do one of the following:
- Use the same account you used when you installed SQL Server.
- Give the signed-in account System Administrator rights.
- Confirm the signed-in account is a member of the local administrators group.
Account names: Use the default account names whenever possible. The BizTalk Server setup automatically enters the default accounts. If there are multiple BizTalk Server groups within the Domain, change the account names to avoid conflicts. If you change the names, BizTalk Server supports only NetBIOS domain name\user for service accounts and Windows groups.
Account names with BAM Management Web Service: BizTalk Server does not support built-in accounts or accounts without passwords for the BAM Management Web Service User. The web service accesses the BizTalk Server database and these accounts may suggest a security threat.
Note
Configuring BizTalk Server with these types of accounts may succeed, but the BAM Management Web Service fails. Built-in accounts or accounts without passwords can be used for the BAM Application pool.
Install and Uninstall: Uninstalling BizTalk Server requires manually deleting the BizTalk Server databases. If you are installing BizTalk Server as a developer or evaluator, use a virtual machine. If you need to reinstall, you can easily roll back the virtual machine without having to uninstall and delete the databases.
32-bit and 64-bit computers: There are few differences when installing BizTalk Server on 32-bit or 64-bit computer. The installation and configuration covers 32-bit and 64-bit computers. Any differences are noted.
Workgroups - Installing and configuring BizTalk Server in the workgroup environment on a single computer is supported. SQL Server and BizTalk Server features and components are installed and configured on the same computer.
Install BizTalk Server
Close any open programs. Run the BizTalk Server setup as an Administrator.
Select Install Microsoft BizTalk Server 2020.
Enter your User name, your Organization, and your product key. Select Next.
Accept the license agreement, and select Next.
Choose to participate in the Customer Experience Improvement Program, and select Next.
Choose the components you want to install:
Be sure to select Additional Software. You can also change the installation location:
Depending on the components you choose, there may be some additional prerequisites, such as ADOMD.NET. You need to manually install the missing prerequisites.
Select Next.
Review the summary page. To make any changes, select Back to check or uncheck any components.
To enable auto-logon after a system reboot, select Set, and enter the sign-in account. This is only enabled during the BizTalk setup. When setup is complete, this setting is disabled.
Select Install.
To complete Developer Tools component installation, install BizTalk Server extension in Visual Studio.
To configure BizTalk now, check Launch BizTalk Server Configuration. If you don't want to configure BizTalk now, then uncheck this option, and select Finish to close the installation wizard.
A setup log file is generated in a temp folder, similar to:
C:\Users\*username*\AppData\Local\Setup(011217 xxxxxx).htm.
Check the installation
- BizTalk Server is listed in Programs and Features.
- The
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BizTalk Server\3.0registry key lists the BizTalk Server version, the install path, the edition, and other details.
- BizTalk Server Administration, Configuration, and other components are listed in your Apps. | https://docs.microsoft.com/en-us/biztalk/install-and-config-guides/install-biztalk-server-2020 | 2022-08-07T23:26:51 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
.jpg)
Installing Master Data Services in an AlwaysOn Environment
SQL Server Technical Article
Writers: Jim van de Erve
Technical Reviewers: Alexander Tolpin, Anand Subbaraj, Minh Pham, Bogdan Pienescu
Published: November 2012
Applies to: SQL Server 2012 SP1
Summary: This white paper describes how to set up a Microsoft SQL Server 2012 AlwaysOn environment with SQL Server Master Data Services. This requires Master Data Services in SQL Server 2012 SP1. You can set up this infrastructure with either a shared storage configuration or a nonshared storage configuration.
To review the document, please download the Installing Master Data Services in an AlwaysOn Environment Word document.
Ask a question in the SQL Server Forums | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/jj884069(v=msdn.10)?redirectedfrom=MSDN | 2022-08-07T22:35:08 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['images/jj215886.sqlserver2012_resized(en-us,msdn.10', None],
dtype=object) ] | docs.microsoft.com |
Pull Requests¶
Acceptance Criteria¶
Because we want to create the best possible product for our users, we have a set of criteria to ensure that all submissions are acceptable:
- Features and improvements must be fully implemented so that they can be released at any time without additional work
- Automated unit and/or acceptance tests are mandatory to ensure the changes work as expected and to reduce repetitive manual work
- Frontend components must be responsive to work and look properly on phones, tablets, and desktop computers; you must have tested them on all major browsers and different devices
- Documentation and translation updates should be provided if needed
- In case you submit database-related changes, they must be tested and compatible with SQLite 3 and MariaDB 10.5.12+
These guidelines are not intended as a filter or barrier to participation. If you are unfamiliar with Open Source development, we will help you.
Contributor License Agreement¶
After you submit your first pull request, you will be asked to accept our Contributor License Agreement (CLA). Visit photoprism.app/cla to learn more.
How to Create and Submit a Pull Request¶
Fork our repository¶
- Click the Fork button in the header of our main repository
- Clone the forked repository on your local computer:
git clone[your username]/photoprism
- Connect your local to our "upstream" main repository by adding it as a remote:
git remote add upstream
- Create a new branch from
develop- it should have a short and descriptive name (not "patch-1") that does not already exist, for example:
git checkout -b feature/your_feature_name
- See also
Make your changes¶
- While you are working on it and your pull request is not merged yet, pull in changes from "upstream" often so that you stay up to date and there is a lower risk for merge conflicts:
git fetch upstream
git merge upstream/develop
- We recommend running tests after each change to make sure you didn't break anything:
make test
- Add tests for any new code
- If you have questions about how to do this, please ask in your pull request
- Run
make fmtto ensure code is properly formatted according to our standards
- If all tests are green and you see no other errors, commit your changes. To reference related GitHub issues, please end your commit message with the issue ID like
#1234:
git status -s
git add .
git commit -m "Your commit message #1234"
When you are ready...¶
- Verify you didn't forget to add / commit files, output of
git status -sshould be empty
- Push all commits to your forked remote repository on GitHub:
git push -u origin feature/your_feature_name
- Create a pull request with a helpful description of what it does
- Wait for us to perform a code review and fix the remaining issues, if any
- Update and/or add documentation if needed
- Sign the Contributor License Agreement (CLA)
You can also create a pull request if your changes are not yet complete or working. Just let us know it's in progress, so we don't try to merge them. We can help you with a code review or other feedback if needed. Please be patient with us.
Reviewing, testing, and finally merging pull requests requires significant resources on our side. If it's not just a small fix, it can take several months.
Privacy Notice¶
We operate a number of web services that help us develop and maintain our software in collaboration with the open source community, such as Weblate hosted at translate.photoprism.app to keep translations up to date.
Because many of these apps and tools were originally developed for internal use without a high level of privacy in mind, we ask that you do not enter personal information such as your real name or personal email address if you want it to remain private.
Be aware that such information may unexpectedly show up in logs, source code, translation files, commit messages, and pull request comments. | https://docs.photoprism.app/developer-guide/pull-requests/ | 2022-08-07T22:15:39 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.photoprism.app |
Removing Files Permanently¶
You can permanently delete photo and video files you do not want to keep from your filesystem. Photos and videos must be archived before they can be deleted permanently.
Before you start, make sure the Delete feature is enabled in Settings.
Delete Files¶
- Go to Archive
- Select the photos and videos your want to delete
- Click context menu
- Click
- Confirm | https://docs.photoprism.app/user-guide/organize/delete/ | 2022-08-07T21:35:30 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.photoprism.app |
Read the Docs Blog - Posts tagged community 2020-05-18T00:00:00Z ABlog Shipping a CDN on Read the Docs Community 2020-05-18T00:00:00Z 2020-05-18T00:00:00Z Eric Holscher <div class="section" id="shipping-a-cdn-on-read-the-docs-community"> <p>You might have noticed that our Read the Docs Community site has gotten faster in the past few weeks. How much faster likely depends on how far away you live from Virginia, which is where our servers have traditionally lived.</p> <p>We have recently enabled a CDN on <strong>all Read the Docs Community sites</strong>, generously sponsored by <a class="reference external" href="">CloudFlare</a>. This post will talk a bit more about how we implemented this, and why we’re excited about it.</p> <p>We are also offering the CDN option to our Read the Docs for Business users on the Enterprise plan, you can <a class="reference external" href="mailto:support%40readthedocs.org">reach out</a> to us if you’re interested.</p> <div class="section" id="hosting-thousands-of-domains-is-hard"> <h2>Hosting thousands of domains is hard</h2> <p>Traditionally the largest problem that we’ve had with rolling out features to all of our documentation sites is scale. We host thousands of custom domains for our users, and any solution needs to work across all of them. This has presented a number of issues in the past, but CDN is one of the most complicated.</p> <p>To make a CDN function across all our sites, every data center that the CDN has needs to work with all our custom domains. Imagine we have 5,000 domains and there are 100 data centers across the world, this means that <code class="docutils literal notranslate"><span class="pre">5,000</span> <span class="pre">*</span> <span class="pre">100</span> <span class="pre">=</span> <span class="pre">500,000</span></code> different endpoints need to be configured for this to work at scale.</p> <p>We are lucky that CloudFlare has the global scale to be able to donate us this service. Specifically, their <a class="reference external" href="">SSL for SAAS</a> service is what we’re using for both SSL and CDN across all our custom domains. This allows us to offload the complexity to CloudFlare, and only focus on our integration.</p> </div> <div class="section" id="implementation"> <h2>Implementation</h2> <p>One of the coolest things that CloudFlare’s CDN offers is something called <a class="reference external" href="">Cache Tags</a>. 
This lets us add a <code class="docutils literal notranslate"><span class="pre">Cache-Tag</span></code> header to all the documentation that we serve, and invalidate the cache using just that tag.</p> <p>An example, when you load our docs, we will return a header:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">GET</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">docs</span><span class="o">.</span><span class="n">readthedocs</span><span class="o">.</span><span class="n">io</span><span class="o">/</span><span class="n">en</span><span class="o">/</span><span class="n">latest</span><span class="o">/</span> <span class="o">...</span> <span class="n">Cache</span><span class="o">-</span><span class="n">Tag</span><span class="p">:</span> <span class="n">docs</span><span class="p">,</span><span class="n">docs</span><span class="o">-</span><span class="n">latest</span> </pre></div> </div> <p>We return a tag that matches both the <code class="docutils literal notranslate"><span class="pre">$project</span></code>, and the <code class="docutils literal notranslate"><span class="pre">$project-$version</span></code>. This allows us to invalidate the cache for a specific version, or across the entire project.</p> <p>As you know, cache invalidation is one of the harder problems, so we take a pretty conservative approach. We invalidate the cache in the following scenarios:</p> <ul class="simple"> <li>Project cache on Project or Domain save</li> <li>Version cache on documentation builds for specific versions</li> </ul> <p>The client code is quite simple, a single HTTP request to Cloudflare’s API with a list of <code class="docutils literal notranslate"><span class="pre">tags</span></code> to clear.</p> </div> <div class="section" id="outcomes"> <h2>Outcomes</h2> <p>There are two important outcomes from this work:</p> <ul class="simple"> <li>Page loads are much faster for users around the world</li> <li>Our server load has gone down quite a lot because we handle fewer requests</li> </ul> <p>The biggest winner here is our users. Docs are faster for everyone, and we are looking at implementing additional features into our documentation hosting code now that we have reduced load.</p> <p>We hope that Read the Docs Community has gotten noticeably faster, and that in the near future it will continue to get better with the new features that this enables.</p> </div> <> | https://blog.readthedocs.com/archive/tag/community/atom.xml | 2022-08-07T22:38:42 | CC-MAIN-2022-33 | 1659882570730.59 | [] | blog.readthedocs.com |
Deposit BTT/WBTT to BTFS
Deposit BTT to
BTTC ADDRESS
BTTC ADDRESS
As mentioned in the How do I get BTT/WBTT on the BTTC Mainet section, you probably already transferred certain amount of BTT to your own BTTC wallet, or to the
BTTC ADDRESS directly.
If you hold BTT in your own BTTC wallet, you can send certain amount of BTT to your
BTTC ADDRESS in your MetaMask. The
BTTC ADDRESS can be found in the dashboard, or via the
btfs id command.
Get BTTC ADDRESS in the dashboard
# Get BTTC ADDRESS via the command $ btfs id { "ID": "16Uiu2HAkuked4mhvUTmWB6wdTbrwy41b1CVsroF17fGJgkscZzUB", # ... # your bttc address "BttcAddress": "0xAf749476CFc24F2baA3B8CCE8b85FdbbCAe69543", "VaultAddress": "0x0BCA8D7525Dbcb2d9645606FB2f53Cd07233016A" }
Another complicated way is to import the
BTTC ADDRESS into your MetaMask by importing your BTTC node's Private Key, which can be found in the dashboard's setting page. Then you can treat it as a common BTTC account. That means you can deposit/withdraw BTT to/from this account, and you can also swap BTT to WBTT in the MetaMask.
Get Private Key in the dashboard
Note that if your node serves as a renter, you need to deposit enough amount of WBTT into your
BTTC ADDRESS, because you need to pay the host nodes when uploading files.
Deposit WBTT to
VAULT ADDRESS
VAULT ADDRESS
Your
VAULT ADDRESS serves as a wallet in the BTFS network. You pay the host nodes when you uploading files, and the hosts will cash from your
VAULT ADDRESS after they received your cheque. In brief, the WBTT token serves as currency in the BTFS network.
If you are a host node, you don't need to deposit WBTT to your
VAULT ADDRESS. If some renters uploaded files to your node, they will pay you certain amount of WBTT in the form of a cheque. After you cashing the cheques , you can transfer the WBTT from your
VAULT ADDRESSto
BTTC ADDRESSby just clicking the "Withdraw" button, or via the
btfs vault withdraw <amount>command.
But if you are a renter, you must deposit enough WBTT to your
VAULT ADDRESSfrom your
BTTC ADDRESSby clicking the "Deposit" button, or via the
btfs vault deposit <amount>comand. This means you must have deposited enough WBTT to your
BTTC ADDRESSas mentioned above.
Deposit or withdraw WBTT
# check the WBTT balance of the vault, note that the result is multiplied by 1e18 $ btfs vault balance the vault available balance: 5000000000000000000 # deposit 1 WBTT to vault for example $ btfs vault deposit 1000000000000000000 # == 1*1e18 the hash of transaction: 0x... # withdraw 6 WBTT from vault $ btfs vault withdraw 6000000000000000000 # == 1*1e18 the hash of transaction: 0x...
Updated 5 months ago | https://docs.btfs.io/docs/deposit-btt-to-btfs-1 | 2022-08-07T22:33:06 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['https://files.readme.io/4e6d8b4-1645522937680.jpg',
'1645522937680.jpg 1680'], dtype=object)
array(['https://files.readme.io/4e6d8b4-1645522937680.jpg',
'Click to close... 1680'], dtype=object)
array(['https://files.readme.io/8e452a5-1645523188281.jpg',
'1645523188281.jpg 2520'], dtype=object)
array(['https://files.readme.io/8e452a5-1645523188281.jpg',
'Click to close... 2520'], dtype=object)
array(['https://files.readme.io/086821c-1645523281222.jpg',
'1645523281222.jpg 3128'], dtype=object)
array(['https://files.readme.io/086821c-1645523281222.jpg',
'Click to close... 3128'], dtype=object) ] | docs.btfs.io |
Agent Third-Party Deployment: Uninstalling FlexNet inventory agent from UNIX
FlexNet Manager Suite 2022 R1 (On-Premises)
If you need to manually uninstall FlexNet inventory agent from a managed device, use the following command lines.
Note: Uninstallation leaves behind the folder
/var/opt/managesoftsince it may contain a package cache that a future re-install of the FlexNet inventory agent can use.
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/FA3-UninstallUNIX.html | 2022-08-07T22:24:27 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
UserDefinedOracleHome
FlexNet Manager Suite 2022 R1 (On-Premises)
Command line | Registry
UserDefinedOracleHomeis available only on UNIX-like platforms, and is only relevant when a symbolic link was included in the start-up path for an Oracle database instance targeted for inventory collection by the locally-installed tracker (ndtrack). The use of a symbolic link can 'mask' the database instance so it is not visible to the tracker, and inventory cannot be collected. There are two ways you can work around this:
- You can ensure that the Oracle home specified in the /etc/oratab file represents the ORACLE_HOME path used to start the database instance. With this work-around, no other settings are needed, and
UserDefinedOracleHomemay be set to
falseif you so desire.
- The account running the database instance (say OSUser4Oracle) may set an environment variable within its login profile specifying the
ORACLE_HOMEpath (including the symbolic link) which was used to start the database instance. To test this setting, the following command should display the correct
ORACLE_HOMEpath:
su -OSUser4Oracle -c "echo \$ORACLE_HOME"Tip: If this environment variable is set for any account on the database server, it is applied to all database instances started by the same account on this server. Any mismatch between the (non-empty) environment variable, and the actual path used to start any of these database instances, prevents the collection of database inventory from the mismatched instance by the locally-installed inventory component (ndtrack). Conversely, you can prevent the environment variable option being used for all accounts on the target Oracle server by setting the
UserDefinedOracleHomepreference (details of this preference are included in the Gathering FlexNet Inventory PDF, available through the title page of online help).
UserDefinedOracleHome=true(or when the setting is omitted, with default
true), the tracker (in addition to attempting its other normal detection methods) attempts to recover the value of the
$ORACLE_HOMEenvironment variable for the account running the database instance. If this attempt succeeds, the value recovered replaces any value for
ORACLE_HOMEfor this instance collected by any other means (for details of the methods used to detect the
ORACLE_HOMEvalue, see the Oracle Discovery and Inventory chapter of the FlexNet Manager Suite System Reference PDF, available through the title page of online help).
If for some reason you wish to prevent the tracker checking for this environment variable, set
UserDefinedOracleHome=falseon the target device. However, be aware that if the value of
ORACLE_HOMEcannot be determined for a database instance, Oracle inventory cannot be collected for the database instance by the locally-installed tracker.
Important: This preference controls behavior of the tracker across all Oracle database instances running on the current server (inventory device). If it happens that you have used multiple accounts for starting separate database instances on this server, and
this means that any mismatch between the
UserDefinedOracleHome=true, the tracker searches for the
$ORACLE_HOMEenvironment variable for each of these accounts, and for all of the database instances started by each of them. Since the priority order of data sources for the Oracle home path for each database instances is:
$ORACLE_HOMEenvironment variable in the account starting and running a database instance on this server
- The /etc/oratab file value for the ORACLE_HOME path
- The absolute path in use by the process currently running the database instance,
$ORACLE_HOMEenvironment variable and the path actually used to start and run the database instance causes database inventory collection to fail. This includes (for example) having an environment variable that identifies a symbolic link used for one database instance, even after a possibly-different database instance has been re-started by the same account but using an absolute path. A complete match (with either a symbolic link in both places, or an absolute path in both places) is required for every database instance.
Values
Command line
Registry
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/PMD-UserDefinedOracleHome.html | 2022-08-07T23:19:08 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
rabbitmq – Retrieve messages from an AMQP/AMQPS RabbitMQ queue¶
New in version 2.8.
Synopsis¶
This lookup uses a basic get to retrieve all, or a limited number
count, messages from a RabbitMQ queue.
Requirements¶
The below requirements are needed on the local master node that executes this lookup.
The python pika package.
Notes¶¶
- name: Get all messages off a queue debug: msg: "{{ lookup(('rabbitmq', url='amqp://guest:[email protected]:5672/%2F', queue='hello', count=2) }}" - name: Dump out contents of the messages debug: var: messages
Return Values¶
Common return values are documented here, the following are the fields unique to this lookup:
Status¶
This lookup is not guaranteed to have a backwards compatible interface. [preview]
This lookup is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/2.9/plugins/lookup/rabbitmq.html | 2022-08-07T22:41:24 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.ansible.com |
With its hierarchical representation (see the diagram below), the ISO standard 25010 provides a good checklist for top-level quality “topics”.
As a more practical alternative, consider the the arc42 subproject “quality requirements examples” (see tip 1-15) for more than 60 real-world examples of quality requirements.
Some common “quality topics” are:
- availability
- modifiability
- maintainability
- reliability / robustness
- performance (runtime efficiency)
- security
- safety
- usability
- testability | https://docs.arc42.org/tips/1-14/ | 2022-08-07T23:20:44 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['/images/01-ISO-25010-EN.png', None], dtype=object)] | docs.arc42.org |
Service
Fabric Management Client. Long Running Operation Retry Timeout Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets the retry timeout in seconds for Long Running Operations. Default value is 30.
public Nullable<int> LongRunningOperationRetryTimeout { get; set; }
member this.LongRunningOperationRetryTimeout : Nullable<int> with get, set
Public Property LongRunningOperationRetryTimeout As Nullable(Of Integer)
Property Value
- System.Nullable<System.Int32>
Implements
LongRunningOperationRetryTimeout Microsoft.Rest.Azure.IAzureClient.LongRunningOperationRetryTimeout | https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.servicefabric.servicefabricmanagementclient.longrunningoperationretrytimeout?view=azure-dotnet | 2022-08-07T21:52:11 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.azure.cn |
Problem Interface
This page defines the common problem interface. There are certain rules that can be applied to any function definition, and this page defines those behaviors.
In-place vs Out-of-Place Function Definition Forms
Every problem definition has an in-place and out-of-place form, commonly referred throughout DiffEq as IIP (
isinplace) and OOP (out of place). The in-place form is a mutating form. For example, on ODEs, we have that
f!(du,u,p,t) is the in-place form which, as its output, mutates
du. Whatever is returned is simply ignored. Similarly, for OOP we have the form
du=f(u,p,t) which uses the return.
Each of the problem types have that the first argument is the option mutating argument. The SciMLBase system will automatically determine the functional form and place a specifier
isinplace on the function to carry as type information whether the function defined for this
DEProblem is in-place. However, every constructor allows for manually specifying the in-placeness of the function. For example, this can be done at the problem level like:
ODEProblem{true}(f,u0,tspan,p)
which declares that
isinplace=true. Similarly this can be done at the DEFunction level. For example:
ODEFunction{true}(f,jac=myjac)
Type Specifications
Throughout DifferentialEquations.jl, the types that are given in a problem are the types used for the solution. If an initial value
u0 is needed for a problem, then the state variable
u will match the type of that
u0. Similarly, if time exists in a problem the type for
t will be derived from the types of the
tspan. Parameters
p can be any type and the type will be matching how it's defined in the problem.
For internal matrices, such as Jacobians and Brownian caches, these also match the type specified by the user.
jac_prototype and
rand_prototype can thus be any Julia matrix type which is compatible with the operations that will be performed.
Functional and Condensed Problem Inputs
Note that the initial condition can be written as a function of parameters and initial time:
u0(p,t0)
and be resolved before going to the solver. Additionally, the initial condition can be a distribution from Distributions.jl, in which case a sample initial condition will be taken each time
init or
solve is called.
In addition,
tspan supports the following forms. The single value form
t is equivalent to
(zero(t),t). The functional form is allowed:
tspan(p)
which outputs a tuple.
Examples
prob = ODEProblem((u,p,t)->u,(p,t0)->p[1],(p)->(0.0,p[2]),(2.0,1.0)) using Distributions prob = ODEProblem((u,p,t)->u,(p,t)->Normal(p,1),(0.0,1.0),1.0)
Lower Level
__init and
__solve
At the high level, known problematic problems will emit warnings before entering the solver to better clarify the error to the user. The following cases are checked if the solver is adaptive:
- Integer times warn
- Dual numbers must be in the initial conditions and timespans
- Measurements.jl values must be in the initial conditions and timespans
If there is an exception to these rules, please file an issue. If one wants to go around the high level solve interface and its warnings, one can call
__init or
__solve instead.
Modification of problem types
Problem-related types in DifferentialEquations.jl are immutable. This helps, e.g., parallel solvers to efficiently handle problem types.
However, you may want to modify the problem after it is created. For example, to simulate it for longer timespan. It can be done by the
remake function:
prob1 = ODEProblem((u,p,t) -> u/2, 1.0, (0.0,1.0)) prob2 = remake(prob1; tspan=(0.0,2.0))
A general syntax of
remake is
modified_problem = remake(original_problem; field_1 = value_1, field_2 = value_2, ... )
where
field_N and
value_N are renamed to appropriate field names and new desired values. | https://docs.sciml.ai/stable/modules/DiffEqDocs/basics/problem/ | 2022-08-07T21:44:22 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.sciml.ai |
The gateway is a secure server instrumented for IT environment management. Gateway instrumentation collects the data needed to manage IT resources and sends the data to the cloud.
Data retention policy
On loss of network connectivity, the gateway retains alert data for the last 24 hours and retains metric data for the last one hour.
Additional reading
FAQs
What significant improvements I can see after upgrading the gateway?
The upgraded version of the gateway maintains compatibility with SaaS upgrades. Few elements have in-depth monitoring templates.
What if the upgrade is not successful?
The platform is tested end-to-end for gateway upgrades. However, in case of failure, upgrades are rolled back.
What are the steps involved in upgrading the gateway?
- The gateway downloads the latest firmware from the cloud.
- the gateway installs the firmware. When the install is in progress, monitoring is stopped for two to three minutes. The tunnel is re-connected.
- The gateway is updated across all clients. The update requires less than five to 10 minutes per gateway.
Which disruptions are possible during the upgrade process?
During the upgrading process, when a gateway is updated, it is re-connected to the tunnel. For failures associated with connecting the gateway to the cloud, contact.
- The download time depends on the available network bandwidth – downloads are typically in the size range of 200-300 MB.
Phase 2: Firmware upgrade
- With this upgrade, the firmware is actually applied to the gateway.
- This process takes five minutes to complete.
- The gateway shuts down during this five-minute interval and does not monitor your devices.
- The gateway restarts itself after the.
Can I install or update the software/packages on the gateway?
You should not install, update, or remove gateway software/packages. Doing so can lead to a gateway malfunction and OpsRamp is not responsible for fixing the problem. Also, you are not permitted to change any gateway configurations. | https://jpdemopod2.docs.opsramp.com/platform-features/gateways/ | 2022-08-07T22:15:06 | CC-MAIN-2022-33 | 1659882570730.59 | [] | jpdemopod2.docs.opsramp.com |
Getting Started
Background information
A description of the Poplar SDK with instructions for downloading and installing
An introduction to the IPU architecture, programming model and tools available
User guide for the pre-built Graphcore Docker containers for Poplar SDK components
A Dictionary of Graphcore Terminology
A dictionary of specialised terms related to Graphcore technology
Instruction Set Architecture
Instruction set architecture containing a subset of the IPU instruction set used by the Worker threads.
Cloud partners
Getting Started with Graphcloud
How to access IPUs and run ML applications on Graphcloud
G-Core Labs Cloud: Getting Started with IPUs
How to access IPUs and run ML applications on G-Core Labs Cloud
Graphcloud quick starts
An overview of the Graphcloud hardware and software features
How to access and log in to Graphcloud
Basic setup steps to make working on Graphcloud easier
Jupyter Notebook Quick Start
Run an application on IPUs from a Jupyter Notebook
Run a TensorFlow 2 application on Graphcloud
Run a TensorFlow 1 application on Graphcloud
Run a PyTorch application on Graphcloud
Run a PopART application on Graphcloud
Run an application directly in the Poplar Graph Programming Framework
An overview of the tools that are available to monitor running programs
Some tips on how to troubleshoot possible hardware problems
Visualisation tools that enable you to monitor and optimise your application
Pod system getting started guides
Getting Started with Bow Pod Systems
Installing the Poplar SDK and setting up the Bow Pod ready to run your application
Getting Started with IPU-POD Systems
Installing the Poplar SDK and setting up the IPU-POD ready to run your application
Getting Started with IPU-POD4 DA and IPU-POD16 DA
Installing the Poplar SDK and setting up the IPU-POD DA ready to run your application | https://docs.graphcore.ai/en/latest/getting-started.html | 2022-08-07T22:56:39 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.graphcore.ai |
PyTorch CustomOp API¶
This document describes API exposed to write custom PyTorch Operators for the Habana Accelerator.
Overview¶
The API provides an ability to implement a custom HPU Kernel (Habana Processing Unit) for new PyTorch operators. This allows PyTorch to execute a given Operator on a Habana Gaudi.
Prerequisites¶
TPC is a fully programmable core designed for workloads that do not map to matrix multiplication operations. TPC kernel refers to a concrete implementation that performs a desired operation. It is the user’s responsibility to prepare the TPC kernel before working with PyTroch CustomOp API.
Note
This document does not describe how to implement custom TPC kernels.
For information on how to write TPC kernels, please refer to the following:
API Overview¶
The main part of the public interface resides in
hpu_custom_op.h header file. They contain all the necessary declarations to define custom HPU PyTorch Kernel.
The following lists the most important classes and structs to interact with:
HabanaCustomOpDescriptor - Descriptor with all the needed information for all custom kernel.
NodeDesc - PyTorch CustomOp info description.
InputDesc - PyTorch CustomOp inputs info.
OutputDesc - PyTorch CustomOp outputs info.
Basic Workflow¶
In order to define custom HabanaCustomOpDescriptor, call REGISTER_CUSTOM_OP_ATTRIBUTES macro:
Define input vector InputDesc for all inputs of kernel.
Define output vector OutputDesc for all outputs of kernel.
Call Macro with schema name, tpc guid, inputs, outputs and user param callback function.
Create the main excution function for CustomOp:
Access HabanaCustomOpDescriptor registered in previous step using getCustomOpDescriptor.
Call execute with vector of IValue inputs.
Define PyTroch schema for CustomOp using TORCH_LIBRARY and TORCH_LIBRARY_IMPL.
Define op schema using TORCH_LIBRARY.
Define PyTorch dispatcher function with the function from the previous section using TORCH_LIBRARY_IMPL.
API Limitations¶
Single TPC Kernel Definition per HabanaCustomOpDescriptor¶
It is the main assumption of this API. HabanaCustomOpDescriptor can define only a single TPC kernel within its implementation..
Output Shape¶
If the user does not set the output shape callback function, the output shape will be the same as input 0 Tensor shape.
Inputs Types to CustomOp¶
Currently, only Tensor and Scalar are supported as input types to CustomOp. Meaning, no arrays of any type are supported.
Habana Mixed Precision (HMP)¶
CustomOp is not integrated with the Habana Mixed Precision (HMP) package, hence mixed training support via the HMP package will not be applicable for CustomOp. If a CustomOp is required to be executed with BF16, then it can be explicitly written with the BF16 data type and invoked by the user.
Examples¶
Once the CustomOp is built, it needs to be loaded in the topology in Python. PyTorch has a util function to load the library:
import torch # it is important to load the module before loading custom op libs torch.ops.load_library(custom_op_lib_path) # output = torch.ops.<custom_op_schema>(<inputs>) a_topk_hpu, a_topk_indices_hpu = torch.ops.custom_op.custom_topk(a_hpu, 3, 1, False)
An example of how to use the API can be found in PyTorch Model References GitHub page. | https://docs.habana.ai/en/latest/PyTorch/PyTorch_CustomOp_API/page_index.html | 2022-08-07T21:42:42 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.habana.ai |
16.1.2.6 Adding Replicas to a Replication Topology
You can add another replica to an existing replication configuration without stopping the source server. To do this, you can set up the new replica by copying the data directory of an existing replica, and giving the new replica a different server ID (which is user-specified) and server UUID (which is generated at startup).
To duplicate an existing replica:
Stop the existing replica and record the replica status information, particularly the source's binary log file and relay log file positions. You can view the replica status either in the Performance Schema replication tables (see Section 25.12.11, “Performance Schema Replication Tables”), or by issuing
SHOW SLAVE STATUSas follows:
mysql> STOP SLAVE; mysql> SHOW SLAVE STATUS\G
Shut down the existing replica:
shell> mysqladmin shutdown
Copy the data directory from the existing replica to the new replica, including the log files and relay log files. You can do this by creating an archive using tar or
WinZip, or by performing a direct copy using a tool such as cp or rsync.Important
Before copying, verify that all the files relating to the existing replica actually are stored in the data directory. For example, the
InnoDBsystem tablespace, undo tablespace, and redo log might be stored in an alternative location.
InnoDBtablespace files and file-per-table tablespaces might have been created in other directories. The binary logs and relay logs for the replica might be in their own directories outside the data directory. Check through the system variables that are set for the existing replica and look for any alternative paths that have been specified. If you find any, copy these directories over as well.
During copying, if files have been used for the replication metadata repositories (see Section 16.2.4, “Relay Log and Replication Applier Metadata Repositories”), which is the default in MySQL 5.7, ensure that you also copy these files from the existing replica to the new replica. If tables have been used for the repositories, the tables are in the data directory.
After copying, delete the
auto.cnffile from the copy of the data directory on the new replica, so that the new replica is started with a different generated server UUID. The server UUID must be unique.
A common problem that is encountered when adding new replicas is that the new replica_replica_hostname-relay-bin' to avoid this problem. 071118 16:44:10 [ERROR] Failed to open the relay log './old_replica_hostname-relay-bin.003525' (relay_log_pos 22940879) 071118 16:44:10 [ERROR] Could not find target log during relay log initialization 071118 16:44:10 [ERROR] Failed to initialize the master info structure
This situation can occur if the
relay_logsystem variable is not specified, as the relay log files contain the host name as part of their file names. This is also true of the relay log index file if the
relay_log_indexsystem variable is not used. For more information about these variables, see Section 16.1.6, “Replication and Binary Logging Options and Variables”.
To avoid this problem, use the same value for
relay_logon the new replica that was used on the existing replica. If this option was not set explicitly on the existing replica, use
. If this is not possible, copy the existing replica's relay log index file to the new replica and set the
existing_replica_hostname-relay-bin
relay_log_indexsystem variable on the new replica to match what was used on the existing replica. If this option was not set explicitly on the existing replica, use
. Alternatively, if you have already tried to start the new replica after following the remaining steps in this section and have encountered errors like those described previously, then perform the following steps:
existing_replica_hostname-relay-bin.index
If you have not already done so, issue
STOP SLAVEon the new replica.
If you have already started the existing replica again, issue
STOP SLAVEon the existing replica as well.
Copy the contents of the existing replica's relay log index file into the new replica's relay log index file, making sure to overwrite any content already in the file.
Proceed with the remaining steps in this section.
When copying is complete, restart the existing replica.
On the new replica, edit the configuration and give the new replica a unique server ID (using the
server_idsystem variable) that is not used by the source or any of the existing replicas.
Start the new replica server, specifying the
--skip-slave-startoption so that replication does not start yet. Use the Performance Schema replication tables or issue
SHOW SLAVE STATUSto confirm that the new replica has the correct settings when compared with the existing replica. Also display the server ID and server UUID and verify that these are correct and unique for the new replica.
Start the replication threads by issuing a
START SLAVEstatement:
mysql> START SLAVE;
The new replica now uses the information in its source metadata repository to start the replication process. | https://www.docs4dev.com/docs/en/mysql/5.7/reference/replication-howto-additionalslaves.html | 2022-08-07T21:51:01 | CC-MAIN-2022-33 | 1659882570730.59 | [] | www.docs4dev.com |
Installation
If you are (relatively) new to installing python packages, please jump to the getting started tutorial (The getting started with cellpy tutorial (opinionated version)) for an opinionated step-by-step procedure.
Stable release
The preferred way to install
cellpy is by using conda:
$ conda install cellpy --channel conda-forge
This will also install all of the critical dependencies, as well as
jupyter
that comes in handy when working with
cellpy.
If you would like to install only
cellpy, you should install using pip.
You also need to take into account that
cellpy uses several packages
that are a bit cumbersome to install on
windows. It is therefore recommended to install one of the
anaconda
python packages (python 3.8 or above) before installing
cellpy.
If you chose
miniconda, you should install
scipy,
numpy and
pytables using
conda:
$ conda install scipy numpy pytables
Then install
cellpy, by running this command in your terminal:
$ pip install cellpy
You can install pre-releases by adding the
--pre flag.
If you are on Windows and plan to work with Arbin files, we recommend that you try to install pyodbc (Python ODBC bridge). Either by using pip or from conda-forge:
$ pip install pyodbc
or:
$ conda install -c conda-forge pyodbc
Some of the utilities in
cellpy have additional dependencies:
Using the
ocv_rlxutilities requires
lmfitand
matplotlib.
For using the
batchutilities efficiently,
holoviewsis needed, as well as
bokehand
matplotlibfor plotting.
If this is the first time you install
cellpy, it is recommended
that you run the setup script:
$ cellpy setup -i
This will install a
.cellpy_prms_USER.conf file in your home directory
(USER = your user name).
Feel free to edit this to fit your needs.
If you are OK with letting
cellpy select your settings, you can omit
the -i (interactive mode).
Hint
It is recommended to run the command also after
each time you upgrade
cellpy. It will keep the settings you already
have in your prms-file and, if the newer version
has introduced some new parameters, it will add those too.
Hint
You can restore your prms-file by running
cellpy setup -r if needed
(i.e. get a copy of the default file copied to your user folder).
Caution
Since Arbin (at least some versions) uses access database files, you
will need to install
pyodbc, a python ODBC bridge that can talk to database
files. On windows, at least if you don´t have a newer version of office 365,
you most likely need to use Microsoft’s dll for handling access
database formats, and you might run into 32bit vs. 64bit issues.
The simplest solution is to have the same “bit” for python and
the access dll (or office). More advanced options are explained in more details
in the getting-started tutorial. For Posix-type systems, you will need to download
and install
mdbtools. If you are on Windows and you cannot get your
pyodbc to work, you can try the same there also (search for Windows
binaries and set the appropriate settings in your
cellpy config file).
From sources
The sources for
cellpy can be downloaded from the Github repo.
You can clone the public repository by:
$ git clone git://github.com/jepegit/cellpy
Once you have a copy of the source, you can install in development mode using pip:
$ pip install -e .
(assuming that you are in the project folder, i. e. the folder that contains the setup.py file)
Further reading
You can find more information in the Tutorials, particularly in The getting started with cellpy tutorial (opinionated version). | https://cellpy.readthedocs.io/en/204-restructure-txt-loaders/installation.html | 2022-08-07T22:12:15 | CC-MAIN-2022-33 | 1659882570730.59 | [] | cellpy.readthedocs.io |
Disabled (schedule component)
FlexNet Manager Suite 2022 R1 (On-Premises)
Command line | Registry
Disabled determines whether the schedule component is
disabled on the managed device. By default, this value is set to
False,
enabling the schedule component.
If a date and time in the future is specified, the schedule component is disabled, and
remains disabled until this date and time. Settings are not cleared as a given date and time
passes, so that a date and time that has passed is also a possible value. Such past values are
ignored, and have the same effect as the value
False (that is, when the date
is in the past, the schedule agent is enabled).
On Windows devices, management of this preference is different for user schedules and machine schedules:
- For user schedules, the user can run the schedule agent on the managed device. If users open a command line window, they can navigate to $(ProgramFiles)\ManageSoft\Schedule Agent, and run ndschedag.exe, without parameters. This presents a user interface where they can set the Disabled check box. When the schedule agent window closes (with the check box set), the schedule agent adds the number of seconds in the
DisablePeriodpreference to the current date and time, and writes the result to this
Disabled(schedule agent) setting.Note: User-based inventory (and therefore user-based scheduling) are deprecated, and are included only for backward compatibility. Use of machine schedules is recommended.
- Machine schedules cannot be disabled through any user interface, nor through command line interaction. If you need to temporarily disable a machine schedule, enter an appropriate date and time string in ISO format in the Computer preference registry setting listed in the Registry table below.
Values
Command line
Registry
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/PMD-Disabled-SchedAg.html | 2022-08-07T22:14:05 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
Disaster Recovery best practices for SharePoint Server and Access Services
APPLIES TO:
2013
2016
2019
Subscription Edition
SharePoint in Microsoft 365
The article explains how to successfully implement a disaster recovery (DR) strategy for Access Services service applications for SharePoint Server.
Thanks to Neil Hodgkinson, Microsoft Senior Program Manager for testing this disaster recovery strategy and providing the content for this article.
Overview of Access Services and Disaster Recovery for SharePoint Server.
Regardless of your choice of technologies there are a few requirements and best-practices for configuring a disaster recovery farm to support Access Services. These are detailed below.
Important
Before you can use any of the Microsoft PowerShell cmdlets detailed in the steps below, verify that you meet all of the requirements in Permissions.
Step 1: Setting up SharePoint Server for Disaster Recovery.
a. Use the same Authentication RealmPowerShell command:
Get-SPAuthenticationRealm PowerShell commands:
Set-SPAuthenticationRealm -Realm 4a2cc8f8-51ab-4367-8a76-ab629c882a68 Restart-Service sptimerv4 Restart-Service spadminv4
Important
Restarting the SharePoint Timer service and SharePoint Admin service is recommended after changing the Authentication Realm. You may need to schedule time during which you can perform an IISReset (SharePoint sites will be unavailable until the successful end of an IISReset).
b. Use the same Database Server ReferenceID
Access Services in SharePoint Server 2013 and SharePoint Server 2016 use a SQL Server to host the individual databases that support Access-based Apps. Internally, these database servers aren't referenced by name, but by a ReferenceID.
Important
It's critical to the success of your disaster recovery strategy that the database servers in the secondary data center be registered as application server hosts using the exact same ReferenceID as their primary partner. This can only be done by registering the database servers by using PowerShell.
Register the secondary farm's Access Services database server
$serverGroupName = 'DEFAULT' $DatabaseServerName = "<Secondary Access Database Server>" $PrimaryServerRefID = "<Primary Server Reference ID>" $DatabaseServerName -DatabaseServerGroup $serverGroupName -ServerReferenceId $PrimaryServerRefID -AvailableForCreate $true
You can reference as many Access Services Application Database Servers as you need. In this simple scenario we only have one. If you have many, make sure you track the registrations and ensure that, in recovery, the databases are recovered correctly to the matched server in the DR site.
c. Know the databases that support the Access Services Service Application
Rather than having their own service application databases, Access Services in SharePoint Server.
Step 2: Recovery After Failover
After failing-over to the secondary datacenter, you need to use the five different databases listed in Step 1 to regenerate the Access Services app infrastructure on the disaster recovery farm.
Note
This article only deals with the five database types listed in the table above. To successfully recover a full SharePoint Server farm after a data center failover, additional steps are needed and the reader is directed to review the steps in Plan for high availability and disaster recovery for SharePoint Server..
a. Recreate the service applications from the restored/recovered databases
Use the following.
b. Attach the content database(s)
Mount the failover content databases can to the appropriate web application on the DR farm by using PowerShell:
Before you can use any of the Microsoft PowerShell cmdlets, verify that you meet all of the requirements in Permissions.
Mount-SPContentDatabase -WebApplication "<>" -Name "<Database name>" -DatabaseServer "<SecondaryDatabaseServerName>" Configure hybrid OneDrive.
Step 3: Configuration Actions Post Failover.
a. Setup.
Important
Failing to set the App Domain will result in a DNS lookup failure and a site not found error in the browser.
b. as the Access Application Database Server Platforms.
SharePoint Server 2016 has been tested in a disaster recovery scenario using. | https://docs.microsoft.com/en-us/SharePoint/administration/disaster-recovery-best-practices-for-sharepoint-server-and-access-services?redirectedfrom=MSDN | 2022-08-07T22:09:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
Beginning with release 9.0.0, you can choose between the classic dashboard and Dashboard 2.0. This sections covers how to set up and use the classic dashboard.
Variables and templates
Using variables, dashboards can be reused widgets but with different resource category values. Variables can be defined in the widgets for values associated with the following resource group categories:
- Device Group
- Service Group
- Site
- Kubernetes
- Docker
Built-in templates are provided for Kubernetes and Docker resource groups. | https://jpdemopod2.docs.opsramp.com/platform-features/feature-guides/dashboards/classic/ | 2022-08-07T22:49:21 | CC-MAIN-2022-33 | 1659882570730.59 | [] | jpdemopod2.docs.opsramp.com |
Elasticsearch Connector
Elasticsearch is a popular full-text search engine. You can use Elasticsearch as both a source and a sink with the Jet API.
Installing the Connector
This connector is included in the full distribution of Hazelcast.
To use this connector in the slim distribution, you must have one of the following modules on your members' classpaths:
hazelcast-jet-elasticsearch-6
hazelcast-jet-elasticsearch-7
Permissions
Enterprise
If security is enabled, your clients may need permissions to use this connector. For details, see Securing Jobs.
Elasticsearch as a Source
The Elasticsearch connector source provides a builder and several convenience factory methods. Most commonly you need to provide the following:
A client supplier function, which returns a configured instance of `RestClientBuilder` (see Elasticsearch documentation),
A search request supplier, specifying a query to Elasticsearch,
A mapping function from `SearchHit` to a desired type.
```java
BatchSource<String> elasticSource = ElasticSources.elasticsearch(
    () -> client("user", "password", "host", 9200),
    () -> new SearchRequest("my-index"),
    hit -> (String) hit.getSourceAsMap().get("name")
);
```
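The `client(...)` call above is a user-supplied helper, not part of the connector API. A minimal sketch of such a helper, assuming HTTP basic authentication (the Apache HttpClient credentials classes ship with the Elasticsearch low-level REST client):

```java
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

static RestClientBuilder client(String user, String password, String host, int port) {
    // Register basic-auth credentials that the client sends with every request
    CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(user, password));
    return RestClient.builder(new HttpHost(host, port))
            .setHttpClientConfigCallback(httpClientBuilder ->
                    httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
}
```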
For all configuration options use the builder:
```java
BatchSource<String> elasticSource = new ElasticSourceBuilder<String>()
        .name("elastic-source")
        .clientFn(() -> RestClient.builder(new HttpHost(
                "localhost", 9200
        )))
        .searchRequestFn(() -> new SearchRequest("my-index"))
        .optionsFn(request -> RequestOptions.DEFAULT)
        .mapToItemFn(hit -> hit.getSourceAsString())
        .slicing(true)
        .build();
```
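To run the source, use it like any other batch source in a pipeline — a sketch (here `Sinks.logger()` simply prints each item):

```java
Pipeline p = Pipeline.create();
p.readFrom(elasticSource)
 .writeTo(Sinks.logger());

HazelcastInstance hz = Hazelcast.bootstrappedInstance();
hz.getJet().newJob(p).join();
```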
By default, the connector uses a single scroll to read data from Elasticsearch. There is only a single reader on a single node in the whole cluster.
Slicing can be used to parallelize reading from an index with multiple shards. The number of slices is equal to `globalParallelism`.
If Hazelcast members and Elasticsearch nodes are located on the same machines, the connector will use co-located reading, avoiding the overhead of the physical network.
Failure Scenario Considerations
The connector uses retry capability of the underlying Elasticsearch client. This allows the connector to handle some transient network issues but it doesn’t cover all cases.
The source uses Elasticsearch’s Scroll API. The scroll context is stored on a node with the primary shard. If this node crashes, the search context is lost and the job can’t reliably read all documents, so the job fails.
If there is a network issue between Hazelcast and Elasticsearch the Elasticsearch client retries the request, allowing the job to continue.
However, there is an edge case where the scroll request is processed by the Elasticsearch server, moves the scroll cursor forward, but the response is lost. The client then retries and receives the next page, effectively skipping the previous page. The recommended way to handle this is to check the number of processed documents after the job finishes, possibly restart the job when not all documents are read.
These are known limitations of Elasticsearch Scroll API. There is an ongoing work on Elasticsearch side to fix these issues.
Elasticsearch as a Sink
The Elasticsearch connector sink provides a builder and several convenience factory methods. Most commonly you need to provide:
A client supplier, which returns a configured instance of
RestHighLevelClient(see Elasticsearch documentation),
A mapping function to map items from the pipeline to an instance of one of
IndexRequest,
UpdateRequestor
DeleteRequest.
Suppose type of the items in the pipeline is
Map<String, Object>, the
sink can be created using the following:
Sink<Map<String, Object>> elasticSink = ElasticSinks.elasticsearch( () -> client("user", "password", "host", 9200), item -> new IndexRequest("my-index").source(item) );
For all configuration options use the builder:
Sink<Map<String, Object>> elasticSink = new ElasticSinkBuilder<Map<String, Object>>() .clientFn(() -> RestClient.builder(new HttpHost( "localhost", 9200 ))) .bulkRequestFn(BulkRequest::new) .mapToRequestFn((map) -> new IndexRequest("my-index").source(map)) .optionsFn(request -> RequestOptions.DEFAULT) .build();
The Elasticsearch sink doesn’t implement co-located writing. To achieve
maximum write throughput, provide all nodes to the
RestClient
and configure parallelism.
Failure Scenario Considerations
The sink connector is able to handle transient network failures, failures of nodes in the cluster and cluster changes, e.g., scaling up.
Transient network failures between Hazelcast and Elasticsearch cluster are handled by retries in the Elasticsearch client.
The worst case scenario is when a master node containing a primary of a shard fails.
First, you need to set
BulkRequest.waitForActiveShards(int) to ensure
that a document is replicated to at least some replicas. Also, you can’t
use the auto-generated ids and need to set the document id manually to
avoid duplicate records.
Second, you need to make sure new master node and primary shard is allocated before the client times out. This involves:
configuration of the following properties on the client:
org.apache.http.client.config.RequestConfig.Builder.setConnectionRequestTimeout org.apache.http.client.config.RequestConfig.Builder.setConnectTimeout org.apache.http.client.config.RequestConfig.Builder.setSocketTimeout
and configuration of the following properties in the Elasticsearch cluster:
cluster.election.max_timeout cluster.fault_detection.follower_check.timeout cluster.fault_detection.follower_check.retry_count cluster.fault_detection.leader_check.timeout cluster.fault_detection.leader_check.retry_count cluster.follower_lag.timeout transport.connect_timeout transport.ping_schedule network.tcp.connect_timeout
For details see Elasticsearch documentation section on cluster fault detection. | https://docs.hazelcast.com/hazelcast/5.1/integrate/elasticsearch-connector | 2022-08-07T23:01:47 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.hazelcast.com |
xdmp:value( expr as xs:string, [map as map:map?], [context as item()?] ) as item()*
Evaluate an expression in the context of the current evaluating statement.
This differs from
xdmp:eval in that
xdmp:value
preserves all of the context from the calling query, so you do not
need to re-define namespaces, variables, and so on. Although the expression
retains the context from the calling query, it is evaluated in its own
transaction with same-statement isolation.
You can only evaluate expressions with
xdmp:value; no
prolog definitions (namespace declarations, function definitions,
module imports, and so on) are allowed.
If the expression references something not in the context of either the calling query or the value expression, then an error is thrown. For example, the following throws an undefined variable exception:
xdmp:value("$y")
It is not recommended to use this with an inline function as static analysis of
inline functions do not look inside strings passed to
xdmp:value.
let $var := 5 return xdmp:value("$var") => 5
xquery version "1.0-ml"; xdmp:document-insert("/test.xml", <root> <step1>this is step1</step1> <step2>this is step2</step2> </root>) ; (: use xdmp:value to dynamically specify a step in an XPath expression :) for $x in ("step1", "step2") return /root/xdmp:value($x) => <step1>this is step1</step1> <step2>this is step2</step2>
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/9.0/xdmp:value | 2022-08-07T21:35:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.marklogic.com |
An increasing reliance on technology, and the fact we are all living and working in a progressively digital world, inevitably means that whilst we can enjoy many benefits in terms of operational productivity and business growth, there are more and more threats to deal with. Being on your guard and prepared to deal with the biggest risks is vital. But what are those risks, and how to deal with them?
Every day, new IT related risks and cyber security threats are evolving and putting businesses at risk. A survey conducted by the World Economic Forum revealed that cyber-attacks are top of the worry list for executives in Europe and other developed nations.
The trouble is that cyber criminals don’t just hack emails. They are now capable of bringing entire systems down and holding organisations to ransom. This is precisely why it is so important to be aware of how these criminals can take hold and pose a threat. To know what the IT risks to business are, and to respond with a robust IT risk management strategy.
Let’s take a look at the top IT risks for business.
1. Social Engineering
Social engineering is a tactic that involves gaining the trust of an individual, ahead of launching some form of cyber-attack.
This could be anything from mass spam phishing, to voice phishing or spear phishing, or whaling, which targets high value targets. Angler phishing is social media based, with attackers imitating a trusted organisation’s customer service department. Conversations are sparked, when are then hijacked and diverted to private messages, where the attacks are advanced.
Search engine phishing places links to fake websites top of the search results, whilst URL phishing uses tactics to mask web links so they look genuine. There’s also in-session phishing that interrupts regular web browsing with the likes of fake login pop-ups.
Baiting attacks take advantage of natural curiosity to coax individuals into giving away sensitive information. Strategies include USB flash drives left in public places, or email attachments offering something for free.
Social engineering is on the rise and, unfortunately, even the most robust cyber security measures are no match. But with a good dose of employee education, and some clearly laid out processes, you can boost the battle.
2. Third-Party Exposure
Whilst you may have your cyber-security, data protection and IT risk management policies off-pat, there still remains the risk of third party exposure.
If you use third parties for the likes of payment processing or bookings management, and those parties are subject to a data breach or cyber-attack, then you will be responsible for that breach or attack should your customers be affected. This means that you will be legally and financially liable, and legally required to notify your regulators, as well as facing the potential of fines and penalties.
It is therefore vital to take steps to monitor the policies and procedures of third party suppliers, and to do your due diligence on their commitment to cyber and data security.
3. Failure to Manage Updates
A large proportion of cyber-attacks occur due to outdated software and operating systems.
If you fail to install updates and the latest software patches, then your organisation will become seriously vulnerable to all sorts of security breaches.
Cyber criminals actively seek holes in software security, so be sure to keep on top of all your updates.
4. Bring Your Own Device Working
The trend for allowing staff to work from their own familiar devices may have increased productivity, flexibility and employee satisfaction. But it has brought with it heightened exposure to cyber security breaches.
With personal devices often falling off the radar of organisation cyber security protocols, and often easier to hack, this can leave them exposed to security breaches, and acting as a route in to company networks.
It is therefore crucial to put a BYOD policy in place, and ensure that all staff are adequately informed and trained to minimise the risks involved.
5. Remote Working
With so many employees working from home, the risk for cyber-attacks has increased. As staff log in to networks remotely, so there are more opportunities for attackers to find in-roads.
Setting up a virtual private network (VPN) is essential to secure the connections made into your organisation’s systems.
6. Internet of Things (IoT)
The Internet of Things is a network of connected devices that can send, receive and store data. From voice assistants to smart security, from wireless inventory trackers to connected appliances, the fact that all these devices are capable of producing data, and that they are all connected to the internet, poses a risk in itself.
With hackers increasingly finding ways to compromise IoT connected devices to steal data, it is crucial that steps are taken to protect these devices, such as setting secure passwords.
7. Outdated Hardware
Software and operating systems are not always responsible for cyber-attacks. As ageing hardware becomes obsolete, it becomes unable to support newer, more secure security measures. This can put company systems and its data at risk.
Monitoring devices and replacing or upgrading hardware on a regular basis is therefore vital.
Cyber security threats are showing any signs of abating. If anything, they are on the rise, and becoming more and more intricate, leading to more devastating consequences. It is therefore imperative for businesses to take active steps to protect their data and networks courtesy of good IT risk management strategies.
Expert cyber security support from PC Docs
At PC Docs we offer a comprehensive package of cyber security solutions, all of which can be tailored to suit your specific levels of IT risk.
From anti-malware and adware protection, to firewall and antivirus systems and other software, our services cover the entire spectrum of IT risks. We also offer tailored guidance on good IT risk management.
To learn how we can help safeguard your organisation against all the latest IT risks, you are welcome to get in touch. | https://www.pc-docs.co.uk/7-common-it-risks-to-business-and-how-to-combat-them/ | 2022-08-07T22:51:42 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['https://www.pc-docs.co.uk/wp-content/uploads/2021/04/Depositphotos_83880900_s-2019-e1619389735990.jpg',
None], dtype=object) ] | www.pc-docs.co.uk |
v0.3.5 (February 22, 2021)
Bugfixes
Very minor one, fixes to the way
Terminalaccesses the
pilot_db.jsonfile to use
Terminal.pilotsproperty that makes a new pilot_db.json file if one doesn’t exist, but otherwise loads the one that is found in
prefs.get('PILOT_DB')
Reorganized
Terminalsource to group properties together & minor additions of type hinting
Fixed some bad fallback behavior looking for files in old hardcoded default directories, eg. in the ye olde
utils.get_pilotdb()
v0.3.4 (December 13, 2020)
Improvements
Unify the creation of loggers!!!! See the docs ;)
autopilot.core.loggers:
Unify prefs, including sensible defaults, refactoring of scripts into a reasonable format, multiprocess-safety, and just generally a big weight off my mind. Note that this is a breaking change to the way prefs are accessed. Previously one would do prefs.PREF_NAME, but that made it very difficult to provide default values or handle missing prefs. the new syntax is prefs.get(‘PREF_NAME’) which returns defaults with a warning and None if the pref is not set:
completely clean up scripts, and together that opened the path to clean up setup as well. so all things configuration got a major promotion
We’re on the board with CI and automated testing with a positively massive 3% code coverage!!!
new scripts to eg. create autopilot alias:
Bugfixes
cleanup scripts on object deletion:
don’t drop ‘floats’ from gui when we say we can use them…:
pigpio scripts dont like floats:
Docs
Clarification of supported systems:
Solved an ancient sphinx riddle of how to get data objects/constants to pretty-print:
Clarify hardware prefs
what numbering system do we use:
Logging
catch pigpio script init exception:
more of it idk
v0.3.3 (October 25, 2020)
Bugfixes
Fix layout in batch reassign gui widget from python 3 float division
Cleaner close by catching KeyboardInterrupt in networking modules
Fixing audioserver boot options – if ‘AUDIOSERVER’ is set even if ‘AUDIO’ isn’t set in prefs, should still start server. Not full fixed, need to make single plugin handler, single point of enabling/disabling optional services like audio server
Fix conflict between polarity and pull in initializing pulls in pilot
Catch
tables.HDF5ExtErrorif local .h5 file corrupt in pilot
For some reason ‘fs’ wasn’t being replaced in the jackd string, reinstated.
Fix comparison in LED_RGB that caused ‘0’ to turn on full becuse ‘value’ was being checked for its truth value (0 is false) rather than checking if value is None.
obj.next()to
next(obj)`in jackdserver
Improvements
Better internal handling of pigpiod – you’re now able to import and use hardware modules without needing to explicitly start pigpiod!!
Hopefully better killing of processes on exit, though still should work into unified process manager so don’t need to reimplement everything (eg. as is done with launching pigpiod and jackd)
Environment scripts have been split out into
setup/scripts.pyand you can now run them with
python -m autopilot.setup.run_script(use
--helpto see how!)
Informative error when setup is run with too narrow terminal:
More loggers, but increased need to unify logger creation!!!
Cleanup
remove unused imports in main
__init__.pythat made cyclical imports happen more frequently than necessary
single-sourcing version number from
__init__.py
more cleanup of unnecessary meta and header stuff left from early days
more debugging flags
filter
NaturalNameWarningfrom pytables
quieter cleanups for hardware objects
v0.3.2 (September 28, 2020)
Bugfixes - previously, I attempted to package binaries for the lightly modified pigpio and for jackd (the apt binary used to not work), but after realizing that was the worst possible way of going about it I changed install strategies, but didn’t entirely remove the vestiges of the prior attempt. The installation expected certain directories to exist (in autopilot/external) that didn’t, which crashed and choked install. Still need to formalize a configuration and plugin system, but getting there. - the jackd binary in the apt repos for the raspi used to not work, so i was in the habit of compiling jackd audio from source. I had build that into the install routine, but something about that now causes the JACK-Client python interface to throw segfaults. Somewhere along the line someone fixed the apt repo version of jackd so we use that now.
previously I had only tested in a virtual environment, but now the installation routine properly handles not being in a venv.
Cleanup
remove bulky static files like fonts and css from /docs/ where they were never needed and god knows how they got there
use a forked sphinx-sass when building docs that doesn’t specify a required sphinx version (which breaks sphinx)
removed skbuild requirements from install
fixed pigpio install requirement in requirements_pilot.txt
included various previously missed files in MANIFEST.in
added installation of system libraries to the pilot configuration menu
v0.3.1 (August 4, 2020)
Practice version!!! still figuring out pypi
v0.3.0 (August 4, 2020)
Major Updates
Python 3 - We’ve finally made it to Python 3! Specifically we have brought Autopilot up to compatibility with Python 3.8 – though the Spinnaker SDK is currently only available through Python 3.7, so we have formally required 3.7 for now while we work on moving acquisition to Aravis. I will not attempt to keep Autopilot compatible with Python 2, but no decision has been made about compatibility with other versions of Python 3. Until then, expect that Autopilot will attempt to keep up with major version changes. The switch also let up update PySide (Qt library used for the GUI) to PySide2, which uses Qt5 and has a whole raft of other improvements.
Continuous Data Handling - The
Subjectclass and
networkingmodules have been improved to handle continuous data (eg. streaming data, generally non-trialwise or non-event-sampled data). Continuous data can be set in a Task description either with a
tablescolumn descriptor as trial data is, but also can be set as
'infer', for which the
Subjectclass will wait until it receives the first data and automatically create a
tablescolumn depending on its type and shape. While previously we intended to nudge users to be explicit about declaring their data, this was necessary to allow for data that might be variable in type and shape to be included in a Task – eg. it should be possible to record video data without needing to specify the resolution or bit depth as a hardcoded parameter in a task class. I have come to like type inference, and may make it a general practice for all types of data. That would potentially allow tasks to be written without explicitly declaring the data that they produce at all, but I haven’t decided if that’s a good thing or not yet.
The GPIO engine has been rebuilt, relying more on
pigpio’s function interface. This means that GPIO timing is now ~microsecond precise, important for reward delivery, LED flashing, and a number of other basic infrastructural needs. The reorganization of hardware modules resulted in general
GPIO,
Digital_Inand
Digital_Outmetaclasses, making common operations like setting polarity, triggers, and pullup/down resistors much easier.
Setup has been greatly improved. This includes proper packaging and installation with setuptools & sk-build, allowing us to finally join PyPI :) . Setup has been unified into a single npyscreen-based set of prompts that allow the user to run scripts to install libraries or configure their environment (also see
run_script()and
list_scripts()), set
prefs, configure hardware objects (based on some very fun signature introspection), setup autopilot as a systemd service, etc. Getting started with Autopilot is now three commands!:
pip install auto-pi-lot autopilot.setup.setup_autopilot ~/autopilot/launch_autopilot.sh
Minor Updates
Logging level is now set from
prefs, so where before, eg. every message through the networking modules would be logged to stdout, now only warnings and exceptions are. This gives a surprisingly large performance boost.
Logging has also been much improved in
networkingmodules, where rather than an awkward
do_loggingflag that was used to avoid logging performance-critical events like streaming data, logging is controlled by log level throughout the system. By default, logging of most messages is set at
debuglevel so they don’t drown out important messages in the logs as they used to.
Networking modules now only deserialize messages if they are the final recipient, saving lots of processing time – particularly with streamed arrays.
Messageobjects also only re-serialize messages if they have been changed. Message structure has been changed such that serialized messages are now of the general format:
[sender, (optional) intermediate_node_1, intermediate_node_2, ... final_recipient, message_contents]
Configuration will continue to be a point of improvement, but a few minor updates were made:
prefs.CONFIGwill be used to signal multiple, potentially overlapping agent configurations, each of which may have their own system dependencies, external daemons, etc. Eg. a Pilot could be configured to play audio (which requires a jackd daemon to be started before Autopilot) and video (which requires Autopilot to be started in a X session). Checks of
prefs.CONFIGare now
inrather than
==to reflect that.
prefs.PINSwas renamed
prefs.HARDWARE, and now allows hardware to be configured with dictionaries rather than integers only. Initially
PINSwas meant to just contain pin numbering for GPIO objects, but having a single point of hardware configuration is preferable.
Task.init_hardware()now respects all parameters set in
prefs.
Throughout the code, minimal
get_thistype methods have begun to be replaced with
@propertyattributes. This is because a) I love them and think they are magical, but b) will also be building Autopilot’s closed-loop infrastructure around a Qt-style signal/slot architecture that wraps
@propertyattributes so they can be
.connectedto one another easily.
Previously it was possible to control presentation by groups of stimuli, but now it is possible to control the presentation frequency of individual stimuli.
PySide2has proper support for CSS Stylesheets, so the design of Autopilot’s GUI has been marginally improved, a process that will continue in the ceaseless quest for aesthetic perfection.
Several setup routines have been added to make installation of opencv, pyspin, etc. easier. I also wrote a routine to
download_box()files from a URL, which is mysteriously hard to do.
The To-Do page now reflects the full ambition of Autopilot, where before this vision was contained only in the whitepaper and a disorganized plaintext file in the repo.
The
Subjectclass can now export trial data
to_csv(). A very minor update, but one that is the first in a number of planned improvements to data export.
I have also opened up a message board in google groups to make feature requests and discuss use and development, hope to see you there :)
New Features
TRANSFORMS have been introduced!!!
Transformobjects have a
process()method that, well, transforms data in some way. Multiple transforms can be added together to make a transformation chain. This module is still very young and doesn’t have a developed API, but will be built to to automatic type compatibility checking, coersion, parallelization, and rhythm (FIFO/FILO) control. Transforms are implemented with different modalities (image, selection, logical) that imply different types of input and output data structures, but the hierarchical structure of the modules is still quite flat.
Autopilot is now integrated with DeepLabCut-live!!!! You can now use realtime pose tracking in your experiments. See the dlclive_example
HARDWARE has been substantially refactored to give objects an appropriate inheritance structure. This substantially reduces effort duplication across hardware objects and makes a bunch of obvious capabilities available to all of them, for example all hardware objects are now network (
init_networking()) and logging (
init_logging()) capable.
Cameras: The
cameras.Camera_CVclass allows webcams/other simple cameras to be accessed through OpenCV, and the
cameras.Camera_Spinnakerclass allows FLIR and other cameras to be accessed through the Spinnaker SDK. Cameras are capable of encoding videos locally (with x264), streaming frames over the network, and making acquired frames available to other objects on the same computer. The
Camera_Spinnakerclass provides simple
@propertysetter/getter methods for common parameters, but also makes all
PySpinattributes available to the user with its
get()and
set()methods. The
cameras.Camerametaclass is written so that new camera types can be added by overriding a few methods. A new
Video_Childcan be used to run a camera on a Child agent.
9DOF Motion Sensor: The
i2c.I2C_9DOFclass can use the LSM9DS1 sensor to collect accelerometer, magnetometer, and gyroscopic data to compute unambiguous position and orientation information. We will be including calibration and computation routines that make it easier to extract properties of interest – eg. computing vertical motion by combining readings from the three sensors.
Temperature Sensor: The
i2c.MLX90640class can use the MLX90640 sensor to measure temperature. The sensor is 32x24px, which the class can
interpolate(). The class also allows frames to be integrated and averaged over time, substantially reducing noise. I modified the driver library to enable capture at the full 64fps on the Raspberry Pi.
NETWORKING modules can stream continuous data better in a few ways:
Net_Nodemodules were given a
get_stream()method that lets objects, well, stream data. Specifically, they are given a
queue.Queueto shovel data into, which is then picked up by a dedicated
zmq.Socketin its own thread, which handles batching, serialization, and load balancing. Streamed messages are batched (ie. contain multiple messages), but behave like normal message when received – they are split and contain an
inner_keythat is used to call the
listenwith each message (see
l_stream()).
networkingobjects also now compress arrays-in-transit with the superfast blosc compression library. This increases their throughput dramatically, as many data streams in neuroscience are relatively low-entropy (eg. the pixels in a video of a mostly-white arena are mostly unchanged frame-to-frame and are thus highly compressible). See the
Message._serialize_numpy()and
Message._deserialize_numpy()methods.
STIMULI - The
JackClientcan now play continuous sounds rather than discrete sounds. An example can be found in the
Nafc_Gaptask, which plays continuous white noise. All sounds now have a
play_continuous()method, which continually dumps samples in a cycle into a queue for the
JackClient. The continuous sound will be interrupted if another sound has its
Jack_Sound.play()method called, but the continuous sound will resume seamlessly even if number of samples in the played sound aren’t a multiple of the jack buffer size. We use this for gaps in noise (using the new
Gapclass), which we have confirmed are sample-accurate.
UI & VIZ
A
Videowindow has been created to display streaming video. The
Terminal_Networking.l_continuous()method meters frames such that even if high-speed video is being acquired, frames are only sent at a rate of
prefs.DRAWFPS. The
Videoclass uses the
ImageItem_TimedUpdateobject, a slight modification of
pyqtgraph.ImageItem, that calls its
updatemethod according to a
PySide2.QtCore.QTimer.
A
plots_menumenu has been added to the Terminal, and a GUI dialog (
gui.Psychometric) has been added to create simple psychometric curves with the
viz.psychometricmodule, which uses altair. Plans for developing visualization are described in To-Do.
A general
gui.pop_dialog()function simplifies displaying messages to the user using the Terminal UI. This was an initial step towards improving status/error reporting from other agents, further detailed in To-Do.
Bugfixes
Some objects, particularly several
guiobjects, had the old mouse/mice terminology updated to subject/subjects.
Net_Nodeobjects were only implicitly destroyed by their
releasemethod which ends the threaded loop by setting the
closingevent.
Embarassingly,
Pilotobjects were not prevented from running multiple tasks at a time. This led to some very confusing and hard-to-debug problems, as well as frequent conflicts over hardware access and resources. Typically what would happen is the Terminal would send a
STARTmessage to begin a task, and if it wouldn’t received a message receipt quickly enough would resend it, resulting in two tasks being started – but this would happen whenever two
STARTmessages were sent to a pilot. This was fixed with a simple check of
Pilot.statebefore a task is initialized. Similar bugs were fixed in
Plotobjects.
The
Subjectclass would sometimes fail to get and increment the trial session. This has been fixed by saving the session number as an attribute in the
infonode.
The
Subjectclass would reset the session counter even when the same task was being reassigned (eg. if updated), now it preserves session number if the protocol name is unchanged.
The
update_protocols()method didn’t report which subjects had their protocols updated, and so if there was some exception when setting new protocols it happened silently, making it so a user would never know their task was never updated. This was fixed with a noisier protocol update method for the Subject class and by displaying a list of subjects that were updated after the method is called.
Correction trials were being calculated incorrectly by the
Stim_Manager, such that rather than only repeating a stimulus if the subject got the previous trial incorrect, the stimulus was always repeated at least once.
Code Structure
Modified versions of external libraries have been added as git submodules in autopilot/external.
Requirements files have been split out to better differentiate between different agents and use-cases. eg. requirements for Terminal agents are in
requirements/requirements_terminal.txt, requirements for build the docs are in
requirements/requirements_docs.txt, etc. This is a temporary arrangement, as a future design goal is restructuring setup routines so that they can flexibly install components as-needed (see To-Do)
autopilot.core.hardwarehas been refactored into its own module,
autopilot.hardware, and split by device type, currently…
autopilot.cameras
autopilot.gpio- devices that use the GPIO pins for standard digital I/O logic
autopilot.i2c- devices that use the GPIO pins for I2C
autopilot.usb
The docs are hosted on readthedocs again, so the docs structure has been collapsed to a single folder without built documentation
The autopilot user directory is now
~/autopilotrather than
/usr/autopilot, which was always a mistake anyway. Autopilot creates a wayfinder
~/.autopilotfile that is used to find the user directory if it’s set elsewhere
External Libraries
External libraries can now be built and packaged along with autopilot using cmake, see CMakeLists.txt. Still uh having a little bit of trouble getting this to work, so code is in place to build and package the custom pigpio repo and jack audio but this will likely need some more work.
pigpio
Added the ability to return absolute timestamps rather than system ticks. pigpio typically returns 1 32-bit integer of ticks since the daemon started, absolute timestamps are 64-bit, so the pigpio daemon and python interface (pi) were given two new methods:
synchronize gets several (default 5) sets of paired timestamps and ticks using get_sync_time. It then computes an offset for translating ticks to timestamps
ticks_to_timestamp converts ticks to timestamps based on the offset found with synchronize
get_current_time sends two requests to the daemon to get the seconds and microseconds of the complete timestamp and returns an isoformatted string
mlx90640-library
Removed building examples by default which require additional dependencies
When using the raspi I2C driver, the baudrate would never be set to 1MHz, which is necessary to achieve full 64fps. This was fixed to use 1MHz by default.
Regressions
Message confirmation (holding a message to resend if confirmation isn’t received) was causing a huge amount of problems and needed to be rethought. There are in general very low rates (near-zero) of messages being dropped without some larger bug causing them, so confirmation has been disabled for now.
The same is true of
heartbeat()- which polled for status of connected pilots. this will be repaired and restored, as the terminal currently has a pretty bad idea of the status of what’s connected to it. this will be part of a broader networking overhaul | https://docs.auto-pi-lot.com/en/main/changelog/v0.3.0.html | 2022-08-07T22:37:05 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.auto-pi-lot.com |
Create Competition¶
The first thing you'll need to do, whether organising a road race, XC event, track & field meeting, or anything else, is to create the competition.
New Competition Button¶
Starting from the homepage, click the NEW COMPETITION+ button.
You may be prompted to sign up or log in at this point.
Filling the Form¶
Complete the form on the next page.
Competition Form
Field Descriptions¶
- Name (english) - Enter the name of your competition
- Date - Input the date it will be taking place. If it is a multi-day competition, input the first day. You can add the end date later.
- Competition Type - Select the type, e.g. track, indoor, XC, etc
- Organiser - Start typing the organisation (e.g. athletics club) that will be running the competition and then select it from the resulting options.
- Request new Organiser - If the organisation does not appear in the list above, input the requested organisation here
- Location - Input the location of the competition, whether it is the track, stadium, HQ, start point, etc
- Slug - Choose a slug. This is the part of the website URL that identifies your competition. It should be short and obviously related to your competition, e.g. Welsh Indoor Senior Championships could be "wasi". Do not include the year in your slug; it is already elsewhere in the URL. See here for guidance.
Finishing Up¶
Click CREATE, and your competition
You will next be prompted to complete the basic info of your competition. | https://docs.opentrack.run/cms/competition/create/ | 2022-08-07T22:26:44 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.opentrack.run |
The
tectonic_bridge_freetype2 crate
This crate is part of the Tectonic project. It exposes the C API of the FreeType font rendering engine within the Rust/Cargo build framework, with no Rust bindings.
There are a variety of other low-level FreeType-related crates available, including:
This package is distinctive because:
- It uses Tectonic’s dependency-finding framework, which supports both pkg-config and vcpkg.
- It ensures that FreeType’s C API is exposed to Cargo.
Ideally, one day this crate will be superseded by one of the above crates.
If your project depends on this crate, Cargo will export for your build script
an environment variable named
DEP_FREETYPE2_INCLUDE, which will be the name of
a directory containing the
ft2build.h header.
You will need to ensure that your Rust code actually references this crate in
order for the linker to include linked libraries. A
use statement will
suffice:
#[allow(unused_imports)] #[allow(clippy::single_component_path_imports)] use tectonic_bridge_freetype2;
Cargo features
At the moment this crate does not provide any Cargo features. It is intended that eventually it will, to allow control over whether the FreeType library is vendored or not. | https://docs.rs/crate/tectonic_bridge_freetype2/0.1.0 | 2022-08-07T21:38:16 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.rs |
op:join-cross-product( $leftPlan as map:map, $rightPlan as map:map, [$condition as map:map?] ) as map:map
This method yields one output row set that concatenates every left row with every right row. Matches other than equality matches (for instance, greater-than comparisons between keys) can be implemented with a condition on the cross product.-cross-product($expenses ) => op:where(op:eq(op:view-col("employees", "EmployeeID"), op:view-col("expenses", "EmployeeID"))) => op:order-by(op:view-col("employees", "EmployeeID")) => op:result()
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/op:join-cross-product | 2022-08-07T22:39:59 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.marklogic.com |
Cite This Page
Bibliographic details for XMPP
- Page name: XMPP
- Author: NixNet contributors
- Publisher: NixNet, .
- Date of last revision: 14 July 2022 18:18 UTC
- Date retrieved: 7 August 2022 22:51 UTC
- Permanent URL:
- Page Version ID: 2330
Citation styles for XMPP
APA style
XMPP. (2022, July 14). NixNet, . Retrieved 22:51, August 7, 2022 from.
MLA style
"XMPP." NixNet, . 14 Jul 2022, 18:18 UTC. 7 Aug 2022, 22:51 <>.
MHRA style
NixNet contributors, 'XMPP', NixNet, , 14 July 2022, 18:18 UTC, <> [accessed 7 August 2022]
Chicago style
NixNet contributors, "XMPP," NixNet, , (accessed August 7, 2022).
CBE/CSE style
NixNet contributors. XMPP [Internet]. NixNet, ; 2022 Jul 14, 18:18 UTC [cited 2022 Aug 7]. Available from:.
Bluebook style
XMPP, (last visited August 7, 2022).
BibTeX entry
@misc{ wiki:xxx, author = "NixNet", title = "XMPP --- NixNet{,} ", year = "2022", url = "", note = "[Online; accessed 7-August-2022]" }
When using the LaTeX package url (
\usepackage{url} somewhere in the preamble) which tends to give much more nicely formatted web addresses, the following may be preferred:
@misc{ wiki:xxx, author = "NixNet", title = "XMPP --- NixNet{,} ", year = "2022", url = "\url{}", note = "[Online; accessed 7-August-2022]" } | https://docs.nixnet.services/index.php?title=Special:CiteThisPage&page=XMPP&id=2330 | 2022-08-07T22:51:22 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.nixnet.services |
Discretization
MethodOfLines.MOLFiniteDifference— Type
MOLFiniteDifference(dxs, time=nothing; approx_order = 2, advection_scheme = UpwindScheme(), grid_align = CenterAlignedGrid(), kwargs...)
A discretization algorithm.
Arguments
dxs:but not a
StepRangeLen.
time: Your choice of continuous variable, usually time. If
time = nothing, then discretization yeilds a
NonlinearProblem. Defaults to
nothing.
Keyword Arguments
approx_order: The order of the derivative approximation.
advection_scheme: The scheme to be used to discretize advection terms, i.e. first order spatial derivatives and associated coefficients. Defaults to
UpwindScheme(). This is the only relevant scheme at present.
grid_align: The grid alignment types. See
CenterAlignedGrid()and
EdgeAlignedGrid().
kwargs: Any other keyword arguments you want to pass to the
ODEProblem.
MethodOfLines.DiscreteSpace— Type
DiscreteSpace(domain, depvars, indepvars, discretization::MOLFiniteDifference)
A type that stores informations about the discretized space. It takes each independent variable defined on the space to be discretized and create a corresponding range. It then takes each dependant variable and create an array of symbolic variables to represent it in its discretized form.
Arguments
domain: The domain of the space.
depvars: The independent variables to be discretizated.
indepvars: The independent variables.
discretization: The discretization algorithm.
Fields
ū: The vector of dependant variables.
args: The dictionary of the operations of dependant variables and the corresponding arguments, which include the time variable if given.
discvars: The dictionary of dependant variables and the discrete symbolic representation of them. Note that this includes the boundaries. See the example below.
time: The time variable.
nothingfor steady state problems.
x̄: The vector of symbolic spatial variables.
axies: The dictionary of symbolic spatial variables and their numerical discretizations.
grid: Same as
axiesif
CenterAlignedGridis used. For
EdgeAlignedGrid, interpolation will need to be defined
±dx/2above and below the edges of the simulation domain where dx is the step size in the direction of that edge.
dxs: The discretization symbolic spatial variables and their step sizes.
Iaxies: The dictionary of the dependant variables and their
CartesianIndicesof the discretization.
Igrid: Same as
axiesif
CenterAlignedGridis used. For
EdgeAlignedGrid, one more index will be needed for extrapolation.
x2i: The dictionary of symbolic spatial variables their ordering.
Examples
julia> using MethodOfLines, DomainSets, ModelingToolkit julia> using MethodOfLines:DiscreteSpace julia> @parameters t x julia> @variables u(..) julia> Dt = Differential(t) julia> Dxx = Differential(x)^2 julia> eq = [Dt(u(t, x)) ~ Dxx(u(t, x))] julia> bcs = [u(0, x) ~ cos(x), u(t, 0) ~ exp(-t), u(t, 1) ~ exp(-t) * cos(1)] julia> domain = [t ∈ Interval(0.0, 1.0), x ∈ Interval(0.0, 1.0)] julia> dx = 0.1 julia> discretization = MOLFiniteDifference([x => dx], t) julia> ds = DiscreteSpace(domain, [u(t,x).val], [x.val], discretization) julia> ds.discvars[u(t,x)] 11-element Vector{Num}: u[1](t) u[2](t) u[3](t) u[4](t) u[5](t) u[6](t) u[7](t) u[8](t) u[9](t) u[10](t) u[11](t) julia> ds.axies Dict{Sym{Real, Base.ImmutableDict{DataType, Any}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}} with 1 entry: x => 0.0:0.1:1.0 | https://docs.sciml.ai/dev/modules/MethodOfLines/api/discretization/ | 2022-08-07T22:24:29 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.sciml.ai |
The IP Manager uses Internet Control Message Protocol (ICMP) and Simple Network Management Protocol (SNMP) to discover and poll IPv4-only, IPv6-only, and dual-stack systems in the managed network environment. The IP Manager supports IPv4 ICMP (also known as ICMPv4), IPv6 ICMP (also known as ICMPv6), IPv4 SNMP, and IPv6 SNMP.
Unless specified otherwise, the term ICMP is used to refer to IPv4 and IPv6 ICMP polling, and the term SNMP is used to refer to IPv4 and IPv6 SNMP polling.
Because many Management Information Bases (MIBs) have not yet been modified to accommodate IPv6 addresses, the following IP Manager features are supported for IPv4 only:
Autodiscovery
Light discovery for satellite Domain Managers
Creation of CLI device access objects for satellite Domain Managers
IP tagging and IP tag filters
IPSec tunnel discovery
Virtual router discovery
Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) discovery, monitoring, and analysis. | https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/ip-manager-concepts-guide/GUID-068E256E-255D-4473-9E03-8588B7D70097.html | 2022-08-07T22:34:25 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.vmware.com |
A wildcard pattern is a series of characters that are matched against incoming character strings. You can use these patterns when you define pattern matching criteria.
Matching is done strictly from left to right, one character or basic wildcard pattern at a time. Basic wildcard patterns are defined in Basic wildcard patterns. Characters that are not part of match constructs match themselves. The pattern and the incoming string must match completely. For example, the pattern abcd does not match the input abcde or abc.
A compound wildcard pattern consists of one or more basic wildcard patterns separated by ampersand (&) or tilde (~) characters. A compound wildcard pattern is matched by attempting to match each of its component basic wildcard patterns against the entire input string. For compound wildcard patterns, see Compound wildcard patterns.
If the first character of a compound wildcard pattern is an ampersand (&) or tilde (~) character, the compound is interpreted as if an asterisk (*) appeared at the beginning of the pattern. For example, the pattern ~*[0-9]* matches any string not containing any digits. A trailing instance of an ampersand character (&) can only match the empty string. A trailing instance of a tilde character (~) can be read as “except for the empty string.”
Spaces are interpreted as characters and are subject to matching even if they are adjacent to operators like “&”.
Special characters for compound wildcard patterns are summarized in Compound wildcard patterns. | https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/ip-manager-reference-guide/GUID-1B285D17-A3B0-468D-8F28-4871DA7BF766.html | 2022-08-07T23:33:11 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.vmware.com |
In addition to an unspecified duplex mode setting, the following properties also determine whether a network adapter is monitored for utilization:
If the network adapter is unmanaged, it is not monitored.
If the network adapter uses the description of “*Vlan” for a Cisco device, it is not monitored.
If the interface is a subinterface, it is not monitored unless subinterface performance analysis is enabled, which is explained in the Generic Interface/Port Performance setting or the Ethernet Interface/Port Performance setting. | https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/ip-manager-reference-guide/GUID-5396E2EF-D23D-4DFE-A856-0E3A7142A87A.html | 2022-08-07T22:15:32 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.vmware.com |
MMDetection¶
The MMDetection library is a popular library for object detection. It provides implementations for many popular object detection approaches like Faster-RCNN and Mask-RCNN in addition to cutting edge methods from the research community.
model-hub makes it easy to use MMDetection with Determined while keeping the developer experience as close as possible to what it’s like working directly with MMDetection. Our library serves as an alternative to the trainer used by MMDetection (see mmcv’s runner) and provides access to all of Determined’s benefits so you don’t have to worry about provisioning resources or babysitting your experiments.
Given the benefits above, we think this library will be particularly useful to you if any of the following apply:
You want to perform object detection using a powerful integrated platform that will scale easily with your needs.
You are an Determined user that wants to get started quickly with MMDetection.
You are a MMDetection user that wants to easily run more advanced workflows like multi-node distributed training and advanced hyperparameter search.
You are a MMDetection user looking for a single platform to manage experiments, handle checkpoints with automated fault tolerance, and perform hyperparameter search/visualization.
The easiest way to use MMDetection is to start with the provided experiment configuration for Faster-RCNN. The associated README is a tutorial on how to use MMDetection with Determined and covers how to modify the configuration for custom behavior. | https://docs.determined.ai/latest/model-hub-library/mmdetection/overview.html | 2022-08-07T23:10:39 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.determined.ai |
24. Resources
You may find the following sources of information useful.
24.1. Graphcore
The latest versions of all the Graphcore software documentation including user guides and technical notes
The Graphcore GitHub examples repository has application examples
The Graphcore GitHub tutorials repository has examples of features and simple applications, and tutorials
There are Graphcore videos with explanations and demos of Graphcore software
The source code for TensorFlow for the IPU on GitHub
24.2. TensorFlow
24.3. Other
The TensorFlow model repository on GitHub an excellent reference for various model implementations
The book Hands-on Machine Learning with Scikit-Learn and TensorFlow a very useful practical guide to developing and training network models using TensorFlow | https://docs.graphcore.ai/projects/tensorflow1-user-guide/en/latest/tensorflow/references.html | 2022-08-07T23:03:36 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.graphcore.ai |
Solving Nonlinear Systems
A nonlinear system $f(u) = 0$ is specified by defining a function
f(u,p), where
p are the parameters of the system. For example, the following solves the vector equation $f(u) = u^2 - p$ for a vector of equations:
using NonlinearSolve, StaticArrays f(u,p) = u .* u .- p u0 = @SVector[1.0, 1.0] p = 2.0 probN = NonlinearProblem{false}(f, u0, p) solver = solve(probN, NewtonRaphson(), tol = 1e-9)
where
u0 is the initial condition for the rootfind. Native NonlinearSolve.jl solvers use the given type of
u0 to determine the type used within the solver and the return. Note that the parameters
p can be any type, but most are an AbstractArray for automatic differentiation.
Using Bracketing Methods
For scalar rootfinding problems, bracketing methods exist. In this case, one passes a bracket instead of an initial condition, for example:
f(u, p) = u .* u .- 2.0 u0 = (1.0, 2.0) # brackets probB = NonlinearProblem(f, u0) sol = solve(probB, Falsi()) | https://docs.sciml.ai/stable/modules/NonlinearSolve/tutorials/nonlinear/ | 2022-08-07T21:40:49 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.sciml.ai |
Financial Crime and Corruption
Скачать
1.89 Mb.
Название
Financial Crime and Corruption
страница
Financial Crime and Corruption
3
rd
EDITION
Sam Vaknin, Ph.D.
Editing and Design:
Lidija Rangelovska
Lidija Rangelovska
A Narcissus Publications Imprint, Skopje 2009
Not for Sale! Non-commercial edition.
© 2002-9!
ISBN: 9989-929-36-X
Created by: LIDIJA RANGELOVSKA
REPUBLIC OF MACEDONIA
C O N T E N T S
Slush Funds
Corruption and Transparency
Money Laundering in a Changed World
Hawala, the Bank that Never Was
Straf – Corruption in Central and Eastern Europe
The Kleptocracies of Eastern and Central Europe
Russia’s Missing Billions
The Enrons of the East
The Typology of Financial Scandals, Asset Bubbles,
and Ponzi (Pyramid) Schemes
The Shadowy World of International Finance
Maritime Piracy
Legalizing Crime
Nigerian Scams - Begging Your Trust in Africa
Organ Trafficking in east Europe
Arms Sales to Rogue States
The Industrious Spies
Russia’s Idled Spies
The Business of Torture
The Criminality of Transition
The Economics of Conspiracy Theories
The Demise of the Work Ethic
The Morality of Child Labor
The Myth of the Earnings Yield
The Future of the SEC
Trading from a Suitcase – Shuttle Trade
The Blessings of the Black Economy
Public Procurement and Very Private Benefits
Crisis of the Bookkeepers
Competition Laws
The Benefits of Oligopolies
Anarchy as an Organizing Principle
Narcissism in the Boardroom
The Revolt of the Poor and Intellectual Property Rights
The Kidnapping of Content
The Economics of Spam
Microsoft’s Third Front
NGOs – The Self-appointed Altruists
Who is Guarding the Guards
The Honorary Academic
Rasputin in Transition
The Eureka Connection
The Treasure Trove of Kosovo
Milosevic’s Treasure Island
Macedonia’s Augean Stables
The Macedonian Lottery
Crime Fighting Computer Systems and Databases
Using Data from Nazi Medical Experiments
Surviving on Nuclear Waste
Human Trafficking in Eastern Europe
The Mendicant Journalists
Moral Hazard and the Survival Value of Risk
Private Armies and Private Military Companies (PMCs)
The Con-man Cometh
The Author
About "After the Rain" | http://lib.convdocs.org/docs/index-4038.html | 2018-07-15T20:43:45 | CC-MAIN-2018-30 | 1531676588972.37 | [] | lib.convdocs.org |
Storefronts
This article applies to Contextual Commerce. (Looking for Classic Commerce documentation?)
Understanding the Difference Between Stores and Storefronts
FastSpring uses the terms "Store" and "Storefront" (among others) to refer to certain components of your account. Although the terms are similar, they have different meanings.
Stores
A newly-created FastSpring account comes with a Store already created for a faster and easier setup. The Store incorporates your products and offers and controls what happens before, during and after the order. You can have any number of Stores for your account, but many FastSpring clients find they only need a single Store. If you need to add additional Stores, please contact support@fastspring.com for assistance.
If you are migrating from Classic Commerce to the Contextual Commerce product, your account will have a new Store added to the dashboard.
When is it a good idea to create a new Store?.
For more information, see When Should You Use Multiple Storefronts?.
What's a "path"?
Each Storefront is available by a direct link which consists of the domain containing your vendor name and a "path". Your default Storefront will be accessible as http://<company name>.onfastspring.com and all additional Storefronts will add a "path" - a word (or a few). For example, if your company name was "example", then a Storefront created for a Christmas special might have a link of - in this case "christmas" is a "path". For more information on linking to your Storefronts, see Storefront URLs.
In this example, the "Christmas" Storefront might contain product variations with discounts specifically created for this Storefront. Linking to this Storefront allows your visitors to purchase items with discount while your regular Storefront remains intact. | http://docs.fastspring.com/storefronts | 2018-07-15T20:52:20 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.fastspring.com |
Administering vRealize Log Insight provides information about administering VMware® vRealize™ Log Insight™, including how to manage user accounts and how to integrate Log Insight Agents with other VMware products. It also includes information about managing product security and upgrading your deployment.
The information is written for experienced Windows or Linux system administrators who are familiar with virtual machine technology and datacenter operations. | https://docs.vmware.com/en/vRealize-Log-Insight/4.3/com.vmware.log-insight.administration.doc/GUID-4F6ACCE8-73F4-4380-80DE-19885F96FECB.html | 2018-07-15T21:37:28 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.vmware.com |
public:
property bool SwapRunFromCD { bool get(); void set(bool value); };
public bool SwapRunFromCD { get; set; }
member this.SwapRunFromCD : bool with get, set
Public Property SwapRunFromCD As Boolean
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our new feedback system is built on GitHub Issues. Read about this change in our blog post. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.vcprojectengine.vclinkertool.swaprunfromcd?view=netframework-1.1&viewFallbackFrom=visualstudiosdk-2017 | 2018-07-15T21:23:59 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.microsoft.com |
Contents
- Clustering and Job Priority
- Supervisor Configuration
- SmartShare
- Worker Configuration
- Client Configuration
- Permissions
- Auto-Wrangling
- Job Preemption
- Log Files
- Per-User and Per-Pgrp Instance Limits
- System-Wide Resource Tracking
- Flight-Checks (Pre and Post)
- Universal Callbacks
- Queuing Algorithms
- Lights Out Management (LOM)
- Shotgun Integration
Configuration Files
The Qube! configuration files allow you to specify the settings for the Qube! Supervisor, Workers, and Clients. These files are located in the following directories depending on the platform:
- Linux & OS X:
/etc
- Windows Vista/2008:
C:\ProgramData\pfx\qube
- Windows XP/2003:
C:\Windows
qb.conf (Supervisor, Worker, Client)
The installer programs place a template
qb.conf in a suitable location, depending upon the platform. Examination of this file will reveal (Supervisor only)
The
qb.lic file must contain a license key string issued by PipelineFX. Additional key strings can be added to the file in no particular order. Whenever adding a new key to the file, always back it up to a safe location, just in case there is a problem.
qbwrk.conf (Supervisor only)
The
qbwrk.conf file is an optional file on the Supervisor that centrally configures all of the Qube! Workers under the Supervisor. See the section on "Centrally Configuring Workers" for more information.
Configuration Dialog
The Configuration dialog, launched from theWranglerViewI's "Administration->Configure (local)", allows one to configure the local machine for Qube! It exposes all of the options that the standalone Configuration GUI exposed and more. For centralized Worker configuration, see the Host Layout's popup menu item "Configure" when logged into the Supervisor. | http://docs.pipelinefx.com/pages/viewpage.action?pageId=4237733 | 2020-10-20T06:55:50 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.pipelinefx.com |
After you install Appeon Mobile, the Appeon Sample database is automatically installed and defined as a data source in the ODBC Administrator. You use the Appeon Sample database in this tutorial.
Appeon Sample is a SQL Anywhere database that is accessed through ODBC. In this section you create the database profile for the Appeon Sample database. PowerBuilder stores database profile parameters in the registry.
Click the Database Profile (
) button in the PowerBar
or
Select Tools > Database Profile from the menu bar.
PowerBuilder displays the Database Profiles dialog box, which includes a tree view of the installed database interfaces and defined database profiles for each interface. You can click the + signs or double-click the icons next to items in the tree view to expand or contract tree view nodes.
Select ODB ODBC in the tree view of the Database Profiles dialog box and click New.
PowerBuilder displays the Connection tab of the Database Profile Setup dialog box.
On the Connection tab of the Database Profile Setup dialog box, select the AppeonSample data source from the Data Source drop-down list and type AppeonSample in the Profile Name text box.
Select the Preview tab.
The PowerScript connection syntax for the profile is shown on the Preview tab. If you change the profile connection options, the syntax changes accordingly.
Click the Test Connection button.
A message box tells you that the connection is successful. Click OK to close the message box.
If the message box tells you the connection is not successful
Close the message box and verify that the information on the Connection page of the Database Profile Setup dialog box is correct. Then check the configuration of the data source in the ODBC Administrator. You can run the ODBC Administrator by expanding the Utilities folder under the ODB ODBC node of the Database Profile painter and double-clicking the ODBC Administrator item.
Click OK to close the Database Profile Setup dialog box. The AppeonSample database profile is created under the ODB ODBC node.
Select the AppeonSample database profile and click Connect.
What happens when you connect
When you connect to a database in the development environment, PowerBuilder writes the connection parameters to the Windows registry. Each time you connect to a different database, PowerBuilder overwrites the existing parameters in the registry with those for the new database connection. When you open a PowerBuilder painter that accesses the database, you automatically connect to the last database used. PowerBuilder determines which database this is by reading the registry.
Click Close to close the Database Profiles dialog box. | https://docs.appeon.com/2015/getting_started/ch03s03s02.html | 2020-10-20T06:21:14 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.appeon.com |
This section describes BizTalk360 from a somewhat higher perspective. This consists of the following parts:
- What is BizTalk360
- Architecture of BizTalk360
What is BizTalk360
Once
Each year we bring 3 to 4 major releases, which brings the number of features over 70. Here, a number of the most important features are mentioned..
| https://docs.biztalk360.com/v1/docs/en/what-is-biztalk360 | 2020-10-20T06:03:32 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['https://cdn.document360.io/253f6006-3994-42d0-98cd-fdc637f51791/Images/Documentation/53652f1f-4e31-48e2-bd93-0009fa962680.png',
'BizTalk360-Home-Page.png'], dtype=object)
array(['https://cdn.document360.io/253f6006-3994-42d0-98cd-fdc637f51791/Images/Documentation/59ad9b4e-017a-4f58-86e4-9032f863dbe5.png',
'BizTalk360-Architecture.png'], dtype=object)
array(['https://cdn.document360.io/253f6006-3994-42d0-98cd-fdc637f51791/Images/Documentation/483ea05c-9615-4ac5-ba5c-6599229200a9.png',
'BizTalk360-Architecture-Scalability.png'], dtype=object) ] | docs.biztalk360.com |
Moviri Integrator for TrueSight Capacity Optimization - Entuity
The integration supports the extraction of both performance and configuration data across network components monitored by Entuity (also marketed as “Entuity Network Monitoring and Analytics for BMC ProactiveNet Performance Management” by BMC). Furthermore the connector is able to replicate relationships and logical dependencies among entities defined in Entuity into BMC TrueSight Capacity Optimization, thus enabling grouping and multiple perspectives.
The connector supports configuration parameters that allow devices filtering, metrics filtering, historical recovery, limiting data volume processed in a single run and many other settings.
The documentation is targeted at BMC TrueSight Capacity Optimization administrators, in charge of configuring and monitoring the integration between BMC TrueSight Capacity Optimization and Entuity.
Data collection from Entuity 17 systems is supported only when you apply Service Pack 1 (11.5.01) of TrueSight Capacity Optimization 11.5.
Requirements
Official name of data source software
“Entuity” (product marketed by Entuity company)
or
“Entuity Network Monitoring and Analytics for BMC ProactiveNet Performance Management” (product marketed by BMC through Market Zone program)
or
“Entuity for TrueSight Operations Management” (product marketed by BMC through Market Zone program)
Supported versions of data source software
- Entuity Network Monitoring and Analytics Version 9.0.00 to 14.0.00
- Entuity for TrueSight Operations Management 14.5.00 to 17.00
- Entuity 12.5 to 17.0
Supported configurations of data source software
Moviri – Entuity Extractor requires the configuration of the Data Export component. The supported DBMS is MySQL.
Installation
Downloading the additional package
ETL moduls
In order to make data available to third party solutions, Entuity provide a data export module, which can export different (configurable) datasets to an external MySQL database.
The integration between BMC TrueSight Capacity Optimization and Entuity takes advantage of this feature. If there are active export jobs already in place, the BMC TrueSight Capacity Optimization-job can re-use the existing database instance; otherwise a MySQL database instance has to be identified in order to host a database schema in which Entuity job will write.
This is the step-by-step procedure to enable the daily data export:
- Create an empty MySQL Database (default name is eye_bco) that is reachable from Entuity server and BMC TrueSight Capacity Optimization ETL Engine
Check which version of Entuity is installed: from Entuity menu choose Help → About Entuity
- Open the package:
“Moviri Integrator Version 2.5.00 for TrueSight Capacity Optimization.zip”
and extract the file “sw_data_export_def_bco_*****.cfg “ according to your Entuity version and installed modules:
- Use sw_data_export_def_bco_165_QoS.cfg for Entuity version>=16.5 and QoS module enabled
- Use sw_data_export_def_bco_165_noQoS.cfg for Entuity version>=16.5 and QoS module disabled
- Use sw_data_export_def_bco_16orLess_QoS.cfg for Entuity version<=16.0 and QoS module enabled
- Use sw_data_export_def_bco_16orLess_noQoS.cfg for Entuity version<=16.0 and QoS module disabled
- The extracted cfg file contains Entuity data export configuration (datasets and job), which enable the daily data export for BMC TrueSight Capacity Optimization. Open it, edit the job connection parameters (see below) and rename the file to sw_data_export_def_bco_Entuity.cfg
# BMC TrueSight Capacity Optimization Export Job configuration
[DataExportJob ~BCO Export Job]
Description=BCO Export Job
DbName=eye_bco #Enter here the name of MySQL export DB
DbUser=<username> #Enter here username of MySQL export DB
DbPassword=<password> #Enter here password of MySQL export DB
DbServer=<hostname>:<port> #Enter here hostname and port of MySQL export DB
ViewName=All Views
Backfill= 604800 #Enter time period (in seconds) of how far data
#export should go to get data
Ageout=1209600 #Enter time period (in seconds) of how long data
#should be retained
Schedule=Daily
Datasets=~BCO Devices,~BCO Modules,~BCO Processors,~BCO Processor Utilization,~BCO Processor Utilization Avg,~BCO Device Memory Utilization,~BCO Ports,~BCO Port Traffic,~BCO Port Fault,~BCO Router Resources,~BCO Switch Resources,~BCO Device View Membership,~BCO Memories,~BCO Memory,~BCO View,~BCO Latency,~BCO SNMP RespTime,~BCO MonitoredDevice,~BCO ClassMap,~BCO PolicyMap,~BCO QoS Class Map,~BCO Root
IsEnabled=0
For more details please see Entuity Data Export guide
- Place configuration file “sw_data_export_def_bco.cfg” file into the ‘etc’ folder under Entuity/ENMA installation folder (e.g. /opt/entuity/etc).
- Edit the ‘etc/sw_data_export_def_site_specific.cfg’ file to add the following line:
!sw_data_export_def_bco.cfg
- Shut the Entuity/ENMA package/service down if it is not already down.
- Run the ‘install/configure’ utility making sure that the Data Export module is enabled.
- Start the Entuity/ENMA package running.
Log into the web interface using the ‘’admin” user (default password is “admin”). Display the list of default Data Export jobs using Administration > Data Export > Jobs . This list should look as follows:
The configuration of the “~BCO Export Job” can be viewed and should look as follows:
- Go back to the Jobs page. An export can be initiated manually by clicking the “Run” for the “~BCO Export Job”. Check the status of the job by clicking “History”. If the job hasn’t finished try again in a minute or so.
- The data will have been written to tables the export database.
- The automatic daily scheduled running of the ~BCO Export Job can be enabled by checking the appropriate box on the Jobs page. It is preconfigured to maintain a 2 weeks rolling window of statistics and the first time the export is run it will export the previous 7 days. In a multi EYE-Server deployment the procedure needs to be applied on all EYE-Severs whose monitored devices have to be imported into BMC TrueSight Capacity Optimization. In this case, each EYE-Server deployment needs to target a different database. Multiple databases can be hosted by the same server.
Connector configuration
Connection parameters configuration
The Entuity ETL needs to be configured in order to successfully connect to MySQL export database. Select the following properties under the ETL run configuration, Connection parameters panel:
- Database type: “Other database”
- JDBC driver: your MySQL JDBC driver of choice, that must be present on the ETL engine in the following path: <<BMC TrueSight Capacity Optimization installation directory>>/etl/libext (e.g. /opt/bmc/BCO/etl/libext)
Multiple data export job configuration
Entuity supports environments with different Entuity server instances, in order to manage them in BMC TrueSight Capacity Optimization, it must be performed the same data export configuration procedure (see par. 3.6) in all the Entuity instances and all the instances are properly configured through the Entuity page Administration -> Multi-Server Administration (for detail see Entuity documentation).
The BMC TrueSight Capacity Optimization export jobs (one on each Entuity server) can write data
- in the same MySQL database (the ETL consider each Job as a different source) -> one ETL instance is enough to read all data
- in different MySQL database -> is needed one ETL instance for each MySQL database (different configurations in the connection parameters panel, see par. 3.6.7) and different configurations in Entuity export file (see par 3.6.4)
- Is NOT supported to configure export jobs of two different Entuity schema versions (Entuity 12.5-14.5 and EYE 2012) in the same MySQL database.
Connector configuration attributes
Guidelines to choose the right ETL configuration
In order to set the ETL configuration that match your needs, in this paragraph some common use cases are presented.
The choice of correct configuration brings you benefits also in terms of shorter execution time and saved disk space on database.
Every Entuity ETL instance configuration applies to all jobs/devices selected, if there are different scenario needs for different jobs/devices, is recommended to create many ETL instances with different configuration which filters on particular jobs/devices (e.g. There are both a big and stable environment (scenario 1) and the introduction of a new service (scenario2))
Data volume considerations
The following table is a reference to estimate the data volume produced daily by Entuity ETL for each device, interface and module based on which metric filter in ETL configuration is active (see “Filters on metric to import” in par. 5.3)
As a reference, it has been chosen a device with 24 active interfaces
In a data center with 3 network interfaces for each server and devices with 24 interfaces, the volume of samples collected using System+ traffic filter (as in par 5.3.1 – scenario 1) is approximately :
Historical data extraction
In order to perform a historical recovery of data, please use “recovery mode properties” settings:
Recovery mode active= true and filling the two properties “from date” and “to date” in order to specify the time window. When recovery mode is active, the lastcounter will not be updated.
If the ETL is newly created and has no lastcounter defined, the “default lastcounter” is used.
The pre-condition for an historical extraction is the presence of data inside the Entuity export DB.
Supported Platforms
For all metrics imported by the ETL and described below in this section, the following apply:
- Supported BMC TrueSight Capacity Optimization Entity Type: Network device, Router, Switch, Firewall
Troubleshooting
For general administration please refer to Working with ETLs.
A more specific common error is: ERROR 1130 (00000): Host ''xxx.xxx.xxx.xxx'' is not allowed to connect to this MySQL server
To solve this please check that the external export MySQL DBMS is set to allow access from external IPs.
Missing data can also represent a common problem; in this case a WARNING would be associated to the ETL task. This can be due to a number of scenarios, among which:
- Entuity data export job “~BCO Export Job” has not successfully completed, verify if the Entuity job connection parameters is correctly configured (try test connection button in par. 5.1 figure 14 of this document)
- Entuity data export job “~BCO Export Job” is not enabled, manually run the job and enable the automatic execution, when the job is completed try to run the ETL again
- BMC TrueSight Capacity Optimization ETL Lastcounter has reached the last data exported by Entuity data export job, in the export DB no more data are available, try to manually run the “~BCO Export Job” and enable the automatic execution, when the job is completed try to run the ETL again
- BMC TrueSight Capacity Optimization ETL Recovery mode is active, but in the export DB there are no more data (check Ageout property in the export job par. 5.1)
- The filters in BMC TrueSight Capacity Optimization ETL custom properties configuration are too strict and does not extract any device, check ETL configuration (par. 6.3)
The parameter “Max days to extract” is set (greater then 0), if the lastcounter is too far in the past to find data: the oldest data available must be lower than the lastcounter+max days to extract (see Figure 38)
Configuration and Performance Metrics Mapping
The ETL connects to the Entuity data export DB and extracts data from a set of defined tables. The metrics supported by the ETL are stored in tables populated by the job “~BCO Export Job”.
There are some differences in the data model between Entuity 12.5 or later versions and EYE 2012, in the following tables are present two columns that specify which metrics belongs to which version
Performance metrics
Configuration metrics
Lookup Fields
The ETL for Entuity supports the multiple lookup methodology available in BMC TrueSight Capacity Optimization. This means that, for each BMC TrueSight Capacity Optimization entity, more than one lookup values is stored.
Lookup values for Systems
Considering a system named hq01 with Entuity internal identifier 1-730, the lookup table is filled-up by the ETL with the following fields:
Lookup values for Domains
Considering a domain named “FW & VPN” with Entuity internal identifier “Firewalls & VPN”, the lookup table is filled-up by the ETL with the following fields:
Entuity “My network” view (Regional in EYE 2012) became “All networks” in BMC TrueSight Capacity Optimization and all Entuity user-defined views are placed into the folder “Entuity views”
Hi,
This version of product supposed to be compatible with Entuity version 17.
There is new Moviri Adapter released on 22-March-2019, i.e. version 2.8.
Can someone form BMC Documentation Team help us updating this document?
Regards, Anuparn Padalia
Hi Anuparn,
Documentation is now updated to mention support for Entuity 17.
Regards,
Bipin Inamdar | https://docs.bmc.com/docs/capacityoptimization/btco115/moviri-integrator-for-truesight-capacity-optimization-entuity-830156762.html | 2020-10-20T06:06:59 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.bmc.com |
View and Modify Cloudera Search Configuration
Learn about viewing and editing. You can also restart dependent services using the Restart Stale Services wizard. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/search-troubleshooting/topics/search-configure-log-files.html | 2020-10-20T07:08:19 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.cloudera.com |
»
2015
»
Related Documents
»
Acts
»
2015 Wisconsin Act 170
Up
Up
(a) "Accompanied" has the meaning given in s. 23.33 (1) (a).
(b) "Agricultural purpose" includes a purpose related to the transportation of farm implements, equipment, supplies, or products on a farm or between farms.
(c) "Alcohol beverages" has the meaning specified under s. 125.02 (1).
(d) "Alcohol concentration" has the meaning given in s. 340.01 (1v).
(dm) "All-terrain vehicle" has the meaning given in s. 340.01 (2g).
(e) "All-terrain vehicle route" has the meaning given in s. 23.33 (1) (c).
(f) "All-terrain vehicle trail" has the meaning given in s. 23.33 (1) (d).
(g) "Approved public treatment facility" has the meaning specified under s. 51.45 (2) (c).
(gk) "Controlled substance" has the meaning given in s. 961.01 (4).
(gm) "Controlled substance analog" has the meaning given in s. 961.01 (4m).
(h) "Electric personal assistive mobility device" has the meaning given in s. 340.01 (15pm).
(hm) "Hazardous inhalant" means.
(i) "Highway" has the meaning given in s. 340.01 (22).
(j) "Immediate family" means persons who are related as spouses, who are related as siblings, or who are related as parent and child.
(k) "Intoxicant" means any alcohol beverage, hazardous inhalant, controlled substance, controlled substance analog, or other drug or any combination thereof.
(L) "Intoxicated operation of an off-highway motorcycle law" means sub. (12) (a) or (b) or a local ordinance in conformity therewith or, if the operation of an off-highway motorcycle is involved, s. 940.09 or 940.25.
(m) "Junked" means dismantled for parts or scrapped.
(n) "Law enforcement officer" has the meaning given in s. 23.33 (1) (ig).
(o) "Limited use off-highway motorcycle" means an off-highway motorcycle that is not registered by the department of transportation for use on highways.
(p) "Local governmental unit" means a city, village, town, or county.
(q) "Off-highway motorcycle" means a 2-wheeled motor vehicle that is straddled by the operator, that is equipped with handlebars, and that is designed for use off a highway, regardless of whether it is also designed for use on a highway.
(qm) "Off-highway motorcycle association" means a club or other association consisting of individuals that promotes the recreational operation of off-highway motorcycles.
(r) "Off-highway motorcycle club" means a club consisting of individuals that promotes use of off-highway motorcycles for recreational purposes off the highways within this state.
(s) "Off-highway motorcycle corridor" means an off-highway motorcycle trail or other established off-highway motorcycle corridor that is open to the public for the operation of off-highway motorcycles for recreational purposes but does not include an off-highway motorcycle route.
(t) "Off-highway motorcycle dealer" means a person who is engaged in this state in the sale of off-highway motorcycles for a profit at retail.
(u) "Off-highway motorcycle route" means a highway or sidewalk designated for recreational use by operators of off-highway motorcycles by the governmental agency having jurisdiction.
(v) "Off-highway motorcycle trail" means a marked corridor on public property or on private lands subject to public easement or lease, designated for recreational use by operators of off-highway motorcycles by the governmental agency having jurisdiction.
(y) "Off the highways" means off-highway motorcycle corridors, off-highway motorcycle routes, and areas where operation is authorized under sub. (10) or (11).
(z) "Operate" means to exercise physical control over the speed or direction of an off-highway motorcycle or to physically manipulate or activate any of the controls of an off-highway motorcycle necessary to put it in motion.
(zb) "Operation" means the exercise of physical control over the speed or direction of an off-highway motorcycle or the physical manipulation or activation of any of the controls of an off-highway motorcycle necessary to put it in motion.
(zc) "Operator" means a person who operates an off-highway motorcycle, who is responsible for the operation of an off-highway motorcycle, or who is supervising the operation of an off-highway motorcycle.
(zd) "Owner" means a person who has lawful possession of an off-highway motorcycle by virtue of legal title or an equitable interest in the off-highway motorcycle which entitles the person to possession of the off-highway motorcycle.
(zdm) "Proof," when used in reference to evidence of a registration document, safety certificate, nonresident trail pass, or temporary trail use receipt, means the original registration document, safety certificate, nonresident trail pass, or temporary trail use receipt issued by the department or an agent appointed under sub. (4) (f) 2. or (6) (e) 1. or any alternative form of proof designated by rule under s. 23.47 (1).
(ze) "Purpose of authorized analysis" means for the purpose of determining or obtaining evidence of the presence, quantity, or concentration of any intoxicant in a person's blood, breath, or urine.
(zf) "Refusal law" means sub. (12) (h) or a local ordinance in conformity therewith.
(zg) "Registration document" means an off-highway motorcycle registration certificate, a temporary operating receipt, or a registration decal.
(zgm) "Restricted controlled substance" means any of the following:
1. A controlled substance included in schedule I under ch. 961 other than a tetrahydrocannabinol.
2. A controlled substance analog of a controlled substance described in subd. 1.
3. Cocaine or any of its metabolites.
4. Methamphetamine.
5. Delta-9-tetrahydrocannabinol.
(zi) "Snowmobile" has the meaning given in s. 340.01 (58a).
(zj) "Snowmobile route" has the meaning given in s. 350.01 (16).
(zk) "Snowmobile trail" has the meaning given in s. 350.01 (17).
(zkm) "Temporary operating receipt" means a receipt issued by the department or an agent under sub. (4) (g) 1. a. that shows that an application and the required fees for a registration certificate have been submitted to the department or an agent appointed under sub. (4) (f) 2.
(zL) "Test facility" means a test facility or agency prepared to administer tests under s. 343.305 (2).
(zLm) "Utility terrain vehicle" has the meaning given in s. 23.33 (1) (ng).
(2)
(a)
Requirement.
No person may operate an off-highway motorcycle, and no owner may give permission for the operation of an off-highway motorcycle, off the highways unless the off-highway motorcycle is registered with the department under this section or is exempt from registration or the person operating the off-highway motorcycle holds a temporary operating receipt provided by an off-highway motorcycle dealer under sub. (3) (b).
(b)
Exemptions.
An off-highway motorcycle is exempt from the registration requirement under par. (a) if any of the following applies:
1. The off-highway motorcycle is covered by a valid registration of a federally recognized American Indian tribe or band, and all of the following apply:
a. The registration program of the tribe or band is covered by an agreement under s. 23.35.
b. The off-highway motorcycle displays the registration decal required by the tribe or band.
2. The off-highway motorcycle displays a plate or sign attached in the manner authorized under sub. (5) (c).
3. The off-highway motorcycle will be operated exclusively in racing on a raceway facility or as part of a special off-highway motorcycle event as authorized under sub. (10) (b).
4. The off-highway motorcycle is present in this state, for a period not to exceed 15 days, and is used exclusively as part of an advertisement being made for the manufacturer of the off-highway motorcycle.
5. The off-highway motorcycle is specified as exempt from registration by department rule.
(c)
Weekend exemption.
A person may operate an off-highway motorcycle off the highways in this state during the first full weekend in June of each year without registering the off-highway motorcycle as required under par. (a).
(3)
Registration; application process.
(a)
Public or private use.
Only the department may register off-highway motorcycles for off-highway operation. Any off-highway motorcycle may be registered for public use. An off-highway motorcycle may be registered for private use if the operation is limited to any of the following:
1. Operation for agricultural purposes.
2. Operation by the owner of the motorcycle or a member of his or her immediate family only on land owned or leased by the owner or a member of his or her immediate family.
(b)
Registration; sales by dealers.
If the seller of an off-highway motorcycle is an off-highway motorcycle dealer, the dealer shall require each buyer to whom he or she sells an off-highway motorcycle to complete an application for registration for public or private use and collect the applicable fee required under sub. (4) (d) at the time of the sale if the off-highway motorcycle will be operated off the highways and is not exempt from registration under sub. (2) (b). The department shall provide application and temporary operating receipt forms to off-highway motorcycle dealers. Each off-highway motorcycle dealer shall provide the buyer a temporary operating receipt showing that the application and accompanying fee have been obtained by the off-highway motorcycle dealer. The off-highway motorcycle dealer shall mail or deliver the application and fee to the department no later than 7 days after the date of sale.
(c)
Registration; other sales.
If an off-highway motorcycle is sold or otherwise transferred by a person other than an off-highway motorcycle dealer and is not registered with the department, the buyer or transferee shall complete an application for registration for public or private use if the buyer or transferee intends to operate the off-highway motorcycle off the highways and the off-highway motorcycle is not exempt from registration under sub. (2) (b).
(d)
Registration; action by department.
Upon receipt of an application for registration of an off-highway motorcycle on a form provided by the department, and the payment of any applicable fees under sub. (4) (d) and of any sales or use taxes that may be due, the department shall issue a registration certificate to the applicant.
(e)
Transfers of registered motorcycles.
Upon transfer of ownership of an off-highway motorcycle that is registered for public or private use, the transferor shall deliver the registration certificate to the transferee at the time of the transfer. The transferee shall complete an application for transfer on a form provided by the department and shall mail or deliver the form to the department within 10 days after the date of the transfer if the transferee intends to operate the off-highway motorcycle off the highways.
(f)
Transfers; action by department.
Upon receipt of an application for transfer of an off-highway motorcycle registration certificate under par. (e), and the payment of the fee under sub. (4) (d) 3. and of any sales or use taxes that may be due, the department shall transfer the registration certificate to the applicant.
(g)
Trades; registration required.
An off-highway motorcycle dealer may not accept a limited use off-highway motorcycle in trade unless the off-highway motorcycle is currently registered by the department or is exempt from being registered by the department under sub. (2) (b).
(4)
Registration; certificates and decals.
(a)
Period of validity; expiration.
1. A registration certificate issued under sub. (3) for public use is valid beginning on April 1 or the date of issuance or renewal and ending March 31 of the 2nd year following the date of issuance or renewal.
1m. A registration certificate issued under sub. (3) for private use is valid from the date of issuance until ownership of the off-highway motorcycle is transferred.
2. For renewals of registration certificates for public use, the department shall notify each owner of the upcoming date of expiration at least 2 weeks before that date.
(b)
Content of certificate.
Each registration certificate shall contain the registration number, the name and address of the owner, and any other information that the department determines is necessary.
(bm)
Display of registration
. The operator of an off-highway motorcycle shall have in his or her possession at all times while operating the vehicle proof of the registration certificate or, for an off-highway motorcycle the owner of which has received a temporary operating receipt but has not yet received the registration certificate, proof of the temporary operating receipt. The operator of an off-highway motorcycle shall display this proof upon demand for inspection by a law enforcement officer.
(c)
Decal required.
1. Each registration certificate issued under sub. (3) shall be accompanied by a registration decal. No person may operate an off-highway motorcycle for which registration is required without having the decal affixed as described in subd. 3., except as provided in subd. 4.
2. The decal shall contain a reference to the state and to the department, the vehicle identification number, and the expiration date of the registration, if the off-highway motorcycle is being registered for public use.
3. The person required to register an off-highway motorcycle shall affix the registration decal with its own adhesive in a position on the exterior of the motorcycle where it is clearly visible and shall maintain the decal so that it is in legible condition.
4. A person may operate an off-highway motorcycle without having a registration decal affixed if the owner has been issued a temporary operating receipt that shows that an application and the required fees for a registration certificate have been submitted to the department, and the person operating the off-highway motorcycle has the receipt in his or her possession. The person shall exhibit the receipt, upon demand, to any law enforcement officer.
(d)
Fees for certificates and decals.
1. The fee for the issuance or renewal of a registration certificate for public use and the accompanying decal is $30. The department shall impose an additional late fee of $5 for the renewal of a registration certificate under this subdivision that is filed after the expiration date of the registration certificate unless the renewal is included with an application for transfer of the registration certificate.
2. The fee for the issuance or renewal of a registration certificate for private use and the accompanying decal is $15.
3. The fee for transferring a certificate under sub. (3) (e) is $5.
(e)
Duplicate certificates and decals.
1. If a registration certificate issued under sub. (3) or accompanying decal is lost or destroyed, the holder of the certificate or decal may apply for a duplicate on a form provided by the department. Upon receipt of the application and the fee required under subd. 2., the department shall issue a duplicate certificate or decal to the applicant.
2. The fee for the issuance of a duplicate certificate for public or private use is $5, and the fee for a duplicate decal is $5.
(f)
Registration issuers.
For the issuance of original or duplicate registration documents, for the issuance of reprints under s. 23.47 (3), and for the transfer or renewal of registration documents, the department may do any of the following:
1. Directly issue, transfer, or renew the registration documents with or without using the service specified in par. (g) 1. and directly issue the reprints.
2. Appoint persons who are not employees of the department as agents of the department to issue, transfer, or renew the registration documents using either or both of the services specified in par. (g) 1. and to issue the reprints.
(g)
Methods of issuance.
1. For the issuance of original or duplicate registration documents and for the transfer or renewal of registration documents, the department may implement either or both of the following procedures to be provided by the department and any agents appointed under par. (f) 2.:
a. A procedure under which the department or an agent appointed under par. (f) 2. accepts applications for registration documents and issues temporary operating receipts at the time applicants submit applications accompanied by the required fees.
b. A procedure under which the department or an agent appointed under par. (f) 2. accepts applications for registration documents and issues to each applicant all or some of the registration documents at the time the applicant submits the application accompanied by the required fees.
2. Under either procedure under subd. 1., the department or agent shall issue to the applicant any remaining registration documents directly from the department at a later date. Any registration document issued under subd. 1. b. is sufficient to allow the vehicle for which the application is submitted to be operated in compliance with the registration requirements under this subsection.
(h)
Registration; supplemental fee
. In addition to the applicable fee under par. (d) 1., 2., or 3. or (e) 2., each agent appointed under par. (f) 2. who accepts an application to renew registration documents in person shall collect an issuing fee of 50 cents and a transaction fee of 50 cents each time the agent issues renewal registration documents under par. (g) 1. or 2. The agent shall retain the entire amount of each issuing fee and transaction fee the agent collects.
Down
Down
/2015/related/acts/170
true
acts
/2015/related/acts/170/12/_31
section
true
»
2015
»
Related Documents
»
Acts
»
2015 Wisconsin Act 170 | https://docs.legis.wisconsin.gov/2015/related/acts/170/12/_31?down=1 | 2020-10-20T06:41:05 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.legis.wisconsin.gov |
Exchange Server supportability matrix
The Exchange Server supportability matrix provides a central source for Exchange administrators to easily locate information about the level of support available for any configuration or required component for supported versions of Microsoft Exchange Server.
Release model
The following table identifies the release model for each supported version of Exchange.
In Exchange Server 2010 and earlier, each update rollup package (RU) is cumulative. An RU for Exchange Server 2010 includes all fixes for Exchange Server from all previous update rollup packages, so you only need to install the latest RU to apply all of the fixes that were released up to that point. However, individual updates or hotfixes for Exchange 2010 or earlier do not contain all previous fixes for Exchange Server. The updated files that are included in an individual update or hotfix include all updates that were applied only to those specific files by all previous updates, but any other files on Exchange Server will not be updated. For more information, see Exchange 2010 Servicing.
In Exchange Server 2013 or later, we changed the way we deliver hotfixes and service packs by using a scheduled delivery model. In this model, cumulative updates (CUs) are released quarterly (every three months). Each CU is a full installation of Exchange that includes updates and changes from all previous CUs, so you don't need to install any previous CUs or Exchange Server RTM first. For more information, see Updates for Exchange Server.
Note
At this time, no additional CUs are planned for Exchange Server 2013 and no additional RUs are planned for Exchange Server 2010.
Support lifecycle
For more information about the support lifecycle for specific versions of Exchange, Windows Server, or Windows client operating systems, see the Microsoft Support Lifecycle page. For more information about the Microsoft Support Lifecycle, see the Microsoft Support Lifecycle Policy FAQ.
Exchange Server 2007 End-of life
Exchange 2007 reached end of support on April 11, 2017, per the Microsoft Lifecycle Policy. There will be no new security updates, non-security updates, free or paid assisted support options, or online technical content updates. Furthermore, as adoption of Microsoft 365 or Office 365 accelerates and cloud usage increases, custom support options for Office products will not be available. This includes Exchange Server, as well as Microsoft Office, SharePoint Server, Office Communications Server, Lync Server, Skype for Business Server, Project Server, and Visio. At this time, we encourage customers to complete their migration and upgrade plans. We recommend that customers leverage deployment benefits provided by Microsoft and Microsoft Certified Partners including Microsoft FastTrack for cloud migrations, and Software Assurance Planning Services for on-premises upgrades.
Supported operating system platforms
The following tables identify the operating system platforms on which each version of Exchange can run.
Important
Releases of Windows Server and Windows client that aren't listed in the tables below are not supported for use with any version or release of Exchange.
Note
Client operating systems only support the Exchange management tools.
Supported Active Directory environments
The following table identifies the Active Directory environments that Exchange can communicate with. An Active Directory server refers to both writable global catalog servers and to writable domain controllers. Read-only global catalog servers and read-only domain controllers are not supported.
Web browsers supported for use with the premium version of Outlook Web App or Outlook on the web
The following table identifies the web browsers supported for use together with the premium version of Outlook Web App or Outlook on the web.
* Current release of Firefox or Chrome refers to the latest version or the immediately previous version.
Web browsers supported for use with the basic version of Outlook Web App or Outlook on the web
The following table identifies the web browsers supported for use together with the light (basic) version of Outlook Web App or Outlook on the web.
Note
Outlook Web App Basic (Outlook Web App Light) is supported for use in mobile browsers. However, if rendering or authentication issues occur in a mobile browser, determine whether the issue can be reproduced by using Outlook Web App Light in the full client of a supported browser. For example, test the use of Outlook Web App Light in Safari, Chrome, or Internet Explorer. If the issue can't be reproduced in the full client, we recommend that you contact the mobile device vendor for help. In these cases, we collaborate with the vendor as appropriate.
* Current release of Firefox or Chrome refers to the latest version or the immediately previous version.
Web browsers supported for use of S/MIME with Outlook Web App or Outlook on the web
The following table identifies the web browsers supported for the use of S/MIME together with Outlook Web App or Outlook on the web.
Clients
The following tables identify the mail clients that are supported for use together with each version of Exchange.
1 Requires the latest Office service pack and the latest public update.
2 Requires Outlook 2010 Service Pack 1 and the latest public update.
3 Requires Outlook 2007 Service Pack 3 and the latest public update.
4 EWS only. There is no DAV support for Exchange 2010.
Microsoft .NET Framework
The following tables identify the versions of the Microsoft .NET Framework that can be used with the specified versions of Exchange.
Important
Versions of the .NET Framework that aren't listed in the tables below are not supported on any version of Exchange. This includes minor and patch-level releases of the .NET Framework.
If you are upgrading Exchange Server from an unsupported CU to the current CU and no intermediate CUs are available, you should first upgrade to the latest version of .NET that's supported by your version of Exchange Server.
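A quick way to verify which .NET Framework 4.x release a server currently has, before applying a CU, is to read the documented Release DWORD from the registry. The following PowerShell sketch is illustrative only; the registry path is the one Microsoft documents for .NET Framework 4.x detection, and the 528040 threshold for .NET Framework 4.8 is one commonly published mapping, so compare the value against Microsoft's mapping table for your target version:

# Read the .NET Framework 4.x Release value (path documented by Microsoft)
$ndp = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
# A Release value of 528040 or higher generally indicates .NET Framework 4.8 or later
if ($ndp.Release -ge 528040) {
    Write-Output ".NET Framework 4.8 or later is installed (Release $($ndp.Release))"
} else {
    Write-Output "Release value is $($ndp.Release); check Microsoft's mapping table"
}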
Exchange 2019
Exchange 2016
* .NET Framework 4.6.1 also requires a hotfix, and a different hotfix is required for different versions of Windows. For more information, see Released: June 2016 Quarterly Exchange Updates.
Exchange 2013
* .NET Framework 4.6.1 also requires a hotfix, and a different hotfix is required for different versions of Windows. For more information, see Released: June 2016 Quarterly Exchange Updates.
Exchange 2010 SP3
1 On Windows Server 2012, you need to install the .NET Framework 3.5 before you can use Exchange 2010 SP3.
2 Exchange 2010 uses only the .NET Framework 3.5 and the .NET Framework 3.5 SP1 libraries. It doesn't use the .NET Framework 4.5 libraries if they're installed on the server. We support the installation of any version of the .NET Framework 4.5 (for example, .NET Framework 4.5.1, .NET Framework 4.5.2, etc.) as long as the .NET Framework 3.5 or the .NET Framework 3.5 SP1 is also installed on the server.
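As an illustration of footnote 1, the .NET Framework 3.5 feature can typically be added on Windows Server 2012 with the Server Manager cmdlets. This is a sketch, not an Exchange-specific procedure; NET-Framework-Core is the standard feature name, and the -Source parameter may be needed if the feature payload is not staged locally:

# Install .NET Framework 3.5 on Windows Server 2012 (feature name: NET-Framework-Core)
Install-WindowsFeature NET-Framework-Core
# If the binaries are not staged, point -Source at the installation media's sxs folder:
# Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs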
Windows PowerShell
Exchange 2013 or later requires the version of Windows PowerShell that's included in Windows (unless otherwise specified by an Exchange Setup-enforced prerequisite rule).
Exchange 2010 requires Windows PowerShell 2.0 on all supported versions of Windows.
Exchange does not support the use of Windows Management Framework add-ons on any version of Windows PowerShell or Windows.
If there are other installed versions of Windows PowerShell or PowerShell Core that support side-by-side operation, Exchange will use only the version that it requires.
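To confirm which version of Windows PowerShell a server is running before Exchange Setup enforces its prerequisite rules, the built-in $PSVersionTable automatic variable can be inspected. This is a generic PowerShell check, not an Exchange-specific test:

# Display the installed Windows PowerShell version
$PSVersionTable.PSVersion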
Microsoft Management Console
The following table identifies the version of Microsoft Management Console (MMC) that can be used together with each version of Exchange.
Windows Installer
The following table identifies the version of Windows Installer that is used together with each version of Exchange. | https://docs.microsoft.com/en-us/exchange/plan-and-deploy/supportability-matrix?view=exchserver-2019&redirectSourcePath=%252fhu-hu%252foffice%252fc89774d6-0722-4c93-a547-ef45e693e006 | 2020-10-20T07:08:37 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.microsoft.com |
NOT returns the logical opposite of the argument it is passed.
Function category: Logical
NOT(arg1)
Let's say we're given a response with the following fleet deployment information:
{"data":{"fleet_ready":{"fleet_1": false}}}
If we want to transform the return value in the first example into its opposite value, use the following function.
# Transform a value into its opposite
NOT(data.fleet_ready.fleet_1)
# Returns true
Let's say the API doesn't respond with a Boolean value but provides a string, as in the following.
{"data_2":{"fleet_ready":{"fleet_1": "not ready"}}}
Since strings are interpreted as true, we can transform "not ready" into false using the following function.
# Transform a string to false
NOT(data_2.fleet_ready.fleet_1)
# Returns false