Example
```
VARIABLE_COLUMN_LOG_PARSE(
  <character-expression>,
  <columns>,
  <delimiter-string>
  [, <escape-string>, <quote-string> ] )

<columns> ::= '<column description> [, ...]'
```
Note: Parsing of binary files is not supported.
The arguments
Since SQLstream supports Unicode character literals, a tab can also be a delimiter, specified using a Unicode escape, e.g., u&'\0009', which is a string consisting only of a tab character.
Specifying a
Note that the
When a list of columns is supplied as the second parameter, the output columns take the names and types given in that list. By default, the output columns are named COLUMN1, COLUMN2, COLUMN3, etc., each of SQL data type VARCHAR(1024).
See also the REGEX_LOG_PARSE and other such write-ups in this SQL Reference Guide.
This integration collects data from Traefik in order to check its health and monitor:
If you are using Agent v6.8+, follow the instructions below to install the Traefik check. To build the traefik package, run:
ddev -e release build traefik
Download and launch the Datadog Agent.
Run the following command to install the integrations wheel with the Agent:
datadog-agent integration install -w <PATH_OF_TRAEFIK_ARTIFACT_>/<TRAEFIK_ARTIFACT_NAME>.whl
Configure your integration like any other packaged integration.
Edit the traefik.d/conf.yaml file in the conf.d/ folder at the root of your Agent’s configuration directory to start collecting your Traefik metrics!
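A minimal conf.yaml for this check might look like the sketch below. The instance keys shown (host, port, path) are assumptions for illustration; consult the conf.yaml.example shipped with the check for the authoritative parameter names:

```yaml
init_config:

instances:
    # Where Traefik's health endpoint is reachable (parameter names are assumptions)
  - host: localhost
    port: 8080        # hypothetical: Traefik API/dashboard port
    path: "/health"   # hypothetical: health-check path
```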
Add or Edit Time Off
Use this dialog to add a new time off or edit an existing one.
If you select Time Off on the Add Calendar Item screen, then the Choose Time Off screen appears when you click Next.
- Select a time-off type from the Choose Time Off list.
This list shows all of the time-off types that are configured for the selected site.
- Select the Show all check box to display all time-off types. Clear this check box to display only the time-off types that are applicable to the selected agent.
Important: If you select a time-off type that is not associated with a selected agent, then WFM assigns the time off but does not enforce the time-off balance rules, except for limits. If the time-off type counts toward time-off limits, then WFM takes this time-off item into account when calculating limits.
- Optional: Select the Full Day check box if the time-off preference is for a full day off.
- Optional: Adjust the Start time and End time for this time off, if the default values are not correct.
- Select the check box Specify Start/End to enable the Start Time and End Time fields, then click inside each field to modify the default values for hours, minutes, and AM/PM.
- Select Next Day to the right of the End Time text box if the time-off ends on the day after it begins.
- Optional: Specify a nonstandard length of your full-day time off.
- Select the Specify Paid Hours check box to enable the Paid Hours field, then click inside and enter or select a value to specify the exact number of hours in a full day for this particular time off. The Specify Paid Hours check box is enabled only if you selected a paid time-off type in the Choose Time Off list.
- Optional: Select the Wait-list check box to specify that the request stays in a Preferred status if a time-off request is denied because the time-off limits have already been reached.
The request could eventually be granted by a Supervisor, if an opening becomes available, although this is not guaranteed.
- Optional: Enter a comment inside the Comments text box.
- Click Finish.
The Calendar reappears, displaying the new or edited time-off item.
Important: If you are adjusting part-day time-off preferences, remember that all part-day time off must comply with all settings for at least one qualifying shift, including meal parameters. You may not need to adjust this value manually. If you selected a single agent, then the default is the number of paid hours/minutes configured for the agent's time-off rule, for the type of Time Off being inserted. If you selected multiple agents, the default value is 0 (zero).
Important: If you are entering multiple part-day time-off items, these cannot overlap each other or any part-day exception.
This page was last edited on August 10, 2017, at 13:45.
Installation¶
We offer several ways to install Imandra:
Installer (macOS and Linux only): If you’re new to the OCaml ecosystem, the simplest way to go from zero to a working Imandra installation is to use the Imandra installer.
Opam (macOS and Linux only): If you have the opam package manager available, you can install Imandra using our public opam repo.
Docker Image: If you don't want to bother with setting up a native environment, you can use our docker image.
Jupyter Notebook: If you'd rather use Imandra via a Jupyter Notebook rather than our REPL, check out the Jupyter Notebook setup.
Add-ons:
- VSCode extension: Once you've got a working Imandra installation, you may find it helpful to install our VSCode extension to aid development. | https://docs.imandra.ai/imandra-docs-dev/notebooks/installation/ | 2019-12-05T20:20:47 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.imandra.ai |
Symptom:
If you have the Enterprise Edition of the Men & Mice Suite, you can back up the Men & Mice Central data tables using the command line interface. Launch the Men & Mice CLI (a shell command called mmcmd in version 5.5 and above, qdnscmd in earlier versions; if you don’t have it, you can download it from our FTP server).
Execute the following commands within the command line shell:
login server administrator password
backup -f /some/path
exit
The -f will force mmcmd to overwrite an existing backup file. Without the -f the backup command will fail when a backup file mmsuite.db.bak already exists.
Otherwise, you’ll have to back up the data using the instructions below.
When backing up and restoring the data of Men & Mice Central (except for backing up via the command line interface), be sure to stop the service beforehand. Otherwise, the data tables will be corrupted.
The instructions below involve shell commands. These can be executed either directly on the server in a Terminal window (/Applications/Utilities/Terminal.app) or remotely via an ssh session. These instructions are for version 5.1 or later of the Men & Mice Suite; for version 5.0, the stop and start commands are different.
Solution
Backing Up
Stop the Men & Mice Central service, using the following command:
sudo /Library/StartupItems/mmCentral/mmCentral stop
Back up the directory /var/mmsuite/mmcentral in version 5.6 and above, or /var/qdns/qdnscentral in 5.1. For example, to create an archive in your home folder on the server:
In 5.6 and above
sudo tar czC /var/mmsuite -f ~/central.tgz mmcentral
In 5.1
sudo tar czC /var/qdns -f ~/central.tgz qdnscentral
Then start the service again:
sudo /Library/StartupItems/mmCentral/mmCentral start
Restoring
Stop the service as outlined above. Then restore the data directory. If you followed the instructions above exactly, you can use this shell command:
In 5.6 and above
sudo tar xzC /var/mmsuite -f ~/central.tgz
In 5.1
sudo tar xzC /var/qdns -f ~/central.tgz
Then start up the service again, as outlined in the Backing Up section above. | https://docs.menandmice.com/pages/viewpage.action?pageId=6360946 | 2019-12-05T19:46:15 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.menandmice.com |
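The tar-based backup and restore above can be sanity-checked end to end in a throwaway directory. This self-contained sketch mirrors those commands; the real paths (/var/mmsuite, ~/central.tgz) are replaced by temp-dir stand-ins, and sudo plus the service stop/start steps are omitted:

```shell
#!/bin/sh
set -e

# Temp-dir stand-ins for the real locations (hypothetical paths)
work=$(mktemp -d)
mkdir -p "$work/var/mmsuite/mmcentral" "$work/restore"
echo "central data" > "$work/var/mmsuite/mmcentral/mmsuite.db"

# Backup, mirroring: sudo tar czC /var/mmsuite -f ~/central.tgz mmcentral
tar czC "$work/var/mmsuite" -f "$work/central.tgz" mmcentral

# Restore, mirroring: sudo tar xzC /var/mmsuite -f ~/central.tgz
tar xzC "$work/restore" -f "$work/central.tgz"

cat "$work/restore/mmcentral/mmsuite.db"   # prints: central data
```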
Telemetry uses notifications to collect Orchestration service meters. Perform these steps on the controller node.
Edit the /etc/heat/heat.conf file and complete the following actions:
In the [oslo_messaging_notifications] section, enable notifications:
[oslo_messaging_notifications]
...
driver = messagingv2
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/ceilometer/rocky/install/heat/install-heat-ubuntu.html | 2019-12-05T19:32:17 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.openstack.org |
App Frontend Application Architecture
Description of the Application architecture for App-Frontend
App Frontend is a Single Page Application based on React.
This application is responsible for presenting a UI to the end user. The application consists of several different features, each responsible for handling the UI for a different step in the workflow.
Each App developed in Altinn Studio contains its own copy of App Frontend as part of the Docker container created during the build/deploy process to an Altinn Apps environment. This means that two different deployed Apps can run different versions of App Frontend.
React Architecture
The React architecture used for App Frontend combines different JavaScript frameworks with React, each handling a different responsibility. The architecture tries to follow React best practices.
The diagram below shows the architecture.
Store
A store holds the whole state tree of your application. The only way to change the state inside it is to dispatch an action on it.
Reducers
Reducers specify how the application’s state changes in response to actions sent to the store. Remember that actions only describe what happened, but don’t describe how the application’s state changes.
Action Creators
An action creator is, quite simply, a function that creates an action. Do not confuse the two terms—again, an action is a payload of information, and an action creator is a factory that creates an action.
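The three concepts above can be sketched in a few lines of plain JavaScript (no Redux dependency; the SET_FORM_VALUE action and field names are illustrative, not Altinn's actual action types):

```javascript
// Action creator: a factory that returns an action (a payload of information).
function setFormValue(field, value) {
  return { type: 'SET_FORM_VALUE', field, value };
}

// Reducer: a pure function describing how state changes in response to an action.
function formReducer(state = {}, action) {
  switch (action.type) {
    case 'SET_FORM_VALUE':
      return { ...state, [action.field]: action.value };
    default:
      return state;
  }
}

// A minimal store: holds the state tree; dispatching actions is the only way to change it.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
}

const store = createStore(formReducer);
store.dispatch(setFormValue('firstName', 'Ada'));
console.log(store.getState()); // { firstName: 'Ada' }
```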
App Frontend Features
The App Frontend SPA is separated into several features, each a collection of components and containers that support a given functional area for an App. Typically, a feature is connected to a type of workflow step, such as form filling or signing.
Support for new types of workflow steps is added over time.
Instantiate
This feature is responsible for presenting the UI the user needs to create a new instance of the app.
UI Render (FormFiller)
The UI rendering component is responsible for rendering the UI designed in Altinn Studio.
This feature uses the form layout for an app together with other metadata about the data model.
Based on the content of the form layout file, UI Render renders the correct components, such as text boxes and file uploads.
Receipt
This feature is responsible for showing the summary of the instance when an app reaches the end state of the process flow.
Configuration files
The App Frontend requires some configuration files to work correctly. These files are loaded through the API.
FormLayout
The form layout is used by the UI render feature. It defines the layout elements. App Frontend has access to the form layout through the API.
See details about FormLayout.json
Language
Contains all text resources
ServiceMetadata
Contains information about the data model and is used by UI-render to map the fields to the data model.
See details about ServiceMetadata.json | https://docs.altinn.studio/architecture/application/altinn-apps/app/app-frontend/ | 2019-12-05T20:50:24 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.altinn.studio |
chainerx.fixed_batch_norm

chainerx.fixed_batch_norm(x, gamma, beta, mean, var, eps=2e-5, axis=None)
Batch normalization function with fixed statistics.
This is a variant of batch_norm(), where the mean and variance statistics are given by the caller as fixed variables.
- Parameters
x (ndarray) – Input array.
gamma (ndarray) – Scaling parameter of normalized data.
beta (ndarray) – Shifting parameter of scaled normalized data.
mean (ndarray) – Shifting parameter of input.
var (ndarray) – Square of scaling parameter of input.
eps (float) – Epsilon value for numerical stability.
axis (int, tuple of int or None) – Axis over which normalization is performed. When axis is None, the first axis is treated as the batch axis and will be reduced during normalization.
Note
During backpropagation, this function does not propagate gradients. | https://docs.chainer.org/en/stable/chainerx/reference/generated/chainerx.fixed_batch_norm.html | 2019-12-05T19:38:16 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.chainer.org |
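Functionally, the operation is just elementwise normalization with caller-supplied statistics. A pure-Python stand-in (lists instead of ndarrays, no axis handling) makes the formula y = gamma * (x - mean) / sqrt(var + eps) + beta concrete:

```python
import math

def fixed_batch_norm(x, gamma, beta, mean, var, eps=2e-5):
    """Normalize each element of x with fixed, caller-supplied statistics.

    Pure-Python stand-in for chainerx.fixed_batch_norm, operating on flat
    lists of floats rather than ndarrays (no batch axis or broadcasting).
    """
    inv_std = [1.0 / math.sqrt(v + eps) for v in var]
    return [g * (xi - m) * s + b
            for xi, g, b, m, s in zip(x, gamma, beta, mean, inv_std)]

# One element: x=3.0, mean=1.0, var=4.0 -> (3-1)/2 = 1.0; scaled by 2, shifted by 0.5
print(fixed_batch_norm([3.0], [2.0], [0.5], [1.0], [4.0], eps=0.0))  # [2.5]
```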
Connect SNS to Datadog in order to:
If you haven’t already, set up the Amazon Web Services integration first.
In the AWS integration tile, ensure that SNS is checked under metric collection.
Add the following permissions to your Datadog IAM policy in order to collect Amazon SNS metrics. For more information on SNS policies, review the documentation on the AWS website.
Install the Datadog - AWS SNS integration.
On the Topics section of the SNS Management console, select the desired topic and click Create Subscription.
Select HTTPS and enter the following webhook URL: https://app.datadoghq.com/intake/webhook/sns?api_key=<API KEY>
Do not enable “raw message delivery”.
SNS does not emit its own logs; however, you can process logs and events that transit through the SNS service.
Configure a new SNS subscription: select the topic the messages come from, select “Lambda” as the protocol, and enter the ARN of the Lambda function you want to use.
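A Lambda subscribed this way receives SNS records wrapped in an event envelope. This hypothetical handler (not Datadog's official forwarder) just unwraps each message; the shape of event['Records'][n]['Sns'] follows the standard SNS-to-Lambda event format:

```python
import json

def handler(event, context):
    """Hypothetical forwarder: unwrap SNS messages from the Lambda event."""
    messages = []
    for record in event.get("Records", []):
        sns = record["Sns"]
        body = sns["Message"]
        # SNS messages are plain strings; many publishers send JSON inside.
        try:
            body = json.loads(body)
        except (ValueError, TypeError):
            pass
        messages.append({"subject": sns.get("Subject"), "message": body})
    return messages

# Minimal sample event, shaped like an SNS-triggered Lambda invocation
sample = {"Records": [{"Sns": {"Subject": "alert", "Message": '{"ok": true}'}}]}
print(handler(sample, None))  # [{'subject': 'alert', 'message': {'ok': True}}]
```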
Each of the metrics retrieved from AWS is assigned the same tags that appear in the AWS console, including but not limited to host name, security-groups, and more.
The AWS SNS integration includes events for topic subscriptions. See the example event below:
The AWS SNS integration does not include any service checks.
Currently, we do not support SNS notifications from Datadog to topics in GovCloud or China.
Need help? Contact Datadog support. | https://docs.datadoghq.com/integrations/amazon_sns/ | 2019-12-05T20:47:34 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.datadoghq.com |
Easily search the event schedule
Introduction
Searching an event schedule has never been easier! Simply follow the instructions below to find exactly what you are looking for.
Navigation Path
Overview
- Filter by Track: A track is what would normally be referred to as a theme. If you wish to search the schedule for sessions related to a specific theme, you can do so here by typing in the theme.
- Filter by Tag: Session tags are similar to themes but
- Session Types: Search the schedule for the type of session you would like to attend. For example, if you really want to attend a panel discussion you can search panel discussion here and see all of the panel discussions scheduled during the event.
- Locations: See which sessions are taking place in specific rooms or venues by searching location.
- Other Filters: Select the top box to only show sessions that will require a sign-up to attend. Select the bottom box to view new or recently updated sessions. | https://docs.grenadine.co/schedule-search.html | 2019-12-05T20:28:12 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['images/filter_schedule.jpg', None], dtype=object)] | docs.grenadine.co |
Interface ReconnectListener
- All Superinterfaces:
GloballyAttachableListener
- Functional Interface:
- This is a functional interface and can therefore be used as the assignment target for a lambda expression or method reference.
@FunctionalInterface
public interface ReconnectListener extends GloballyAttachableListener

This listener listens to reconnected sessions. Reconnecting a session means that it is likely that events were lost and therefore all objects were replaced.
Method Detail
onReconnect
void onReconnect(ReconnectEvent event)

This method is called every time a connection is reconnected.
- Parameters:
event- The event. | https://docs.javacord.org/api/v/3.0.5/org/javacord/api/listener/connection/ReconnectListener.html | 2019-12-05T20:45:05 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.javacord.org |
Creating URL Rewrites
The URL Rewrite tool can be used to create product, category, and CMS content pages. Internally, the system always references products and categories by their ID. No matter how often the URL changes, the ID remains the same. Here are some ways you can use URL rewrites:
To Configure Java JDK (Windows)
In Search, search for System (Control Panel) and select it.
Click Advanced System Settings.
Click Environment Variables.
In System Variables, select the PATH environment variable.
Click Edit.
If the PATH environment variable does not exist, click New.
In Edit System Variable (or New System Variable), make sure that the JDK 1.8.0/bin directory is the first item in your PATH environment variable, and click OK.
Click OK.
To verify that your installation was configured correctly, reopen the command prompt window, and type:
> java -version
This should print the version of the java tool.
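On a correctly configured machine, the output looks something like the following (illustrative only; your exact version and build numbers will differ):

```
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
```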
If the version is other than your desired JDK, or you get the error "java: Command not found", then the JDK is not properly installed. | https://docs.mulesoft.com/studio/7.2/jdk-requirement-wx-workflow | 2019-12-05T20:01:19 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.mulesoft.com |
TRUNCATE TABLE (Transact-SQL)
SQL Server
Azure SQL Database
Azure Synapse Analytics (SQL DW)
Parallel Data Warehouse
Applies to: SQL Server (SQL Server 2016 (13.x) through current version)
To truncate a partitioned table, the table and indexes must be aligned (partitioned on the same partition function).
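As a usage sketch (the table name dbo.Orders is hypothetical), a plain truncate and a partition-scoped truncate look like this:

```sql
-- Remove all rows from the table
TRUNCATE TABLE dbo.Orders;

-- SQL Server 2016+ only: truncate selected partitions of an aligned partitioned table
TRUNCATE TABLE dbo.Orders
WITH (PARTITIONS (2, 4, 6 TO 8));
```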
VS.NET Project Templates
web new
All ServiceStack Project Templates can be found and installed using the web new .NET Core tool that can be installed with:
$ dotnet tool install --global web
Then run
web new to view the list of available project templates:
$ web new
To upgrade to the latest version, run:
$ dotnet tool update -g web
dotnet-new
If you don’t have .NET Core installed you can find and create all project templates using the dotnet-new npm tool that can be installed with
$ npm install -g @servicestack/cli
Then run
dotnet-new to view the list of available project templates:
$ dotnet-new
To upgrade to the latest version, run:
$ npm install -g @servicestack/cli@latest
VS.NET Templates
A limited selection of project templates can be created inside VS.NET using the ServiceStackVS VS.NET Extension
- ServiceStack ASP.NET Empty
- ServiceStack ASP.NET MVC4
- ServiceStack ASP.NET Razor with Bootstrap
- ServiceStack ASP.NET Sharp Pages with Bootstrap
- Self Host Empty
- Self Host with Razor
- Windows Service Empty
- ASP.NET Empty
- AngularJS 1.5 App
Visual Studio 2019
The latest ServiceStackVS from v2+ includes support for Visual Studio 2019:
Visual Studio 2015 or 2013
If you’re still using VS.NET 2015 or 2013 you can use the earlier VS.NET ServiceStackVS extension:
Available Project Templates
ServiceStack is available in a number of popular starting configurations below:

- Angular App
- Aurelia App
- React App
- React Desktop Apps
- Vue App
- AngularJS 1.5 Lite App
Website Templates
2017, 2015 and 2013
Example Projects
The Example projects below contain a working demo including further documentation about each of their templates they were built with:
TypeScript React VS.NET Template
TypeScript React Template incorporates today’s best-in-class technologies for developing rich, complex JavaScript Apps within VS.NET, encapsulated within ServiceStack’s new TypeScript React VS.NET template.
This section guides you through creating a BPEL process using the WSO2 Developer Studio and the BPS tooling plug-in.
- Install the tooling plug-in if you have not already done so.
- To create a business process, follow the steps below.
- Right-click on the Project Explorer and go to New > Project.
Clicking Project opens the New Project window.
Click Next to create the BPEL project.
- Generate a business process with the synchronous business process template. Use the following steps to accomplish this.
- Select a wizard from the subsequent screen that appears after creating the BPEL project.
- Select the New BPEL Process File and click Next.
- In the resulting screen, enter the BPEL Process Name, the Namespace and the Template as shown in the below image.
- Click Next.
- Create a WSDL file by filling out the Service Name, Port Name, Service Address and Binding Protocol as shown in the following image.
- Click Next and finish the process. The following screen appears.
- Drag and drop Receive, Assign and Reply activities such that you get the following process.
- Click on the Assign activity and assign ‘input’ from the input variable to ‘result’ from the output variable.
- Similarly, complete the next Receive, Assign and Reply activity by using the same partner link ‘client’. Now you have completed the process logic. Next you need to add the correlation set to two Receive activities.
- Create a correlation set by clicking on the ‘+’ sign next to Correlation Sets.
- Select the correlation set and select Properties and click Add.
- You see the Select a Property wizard. Click New.
- Give a Name to our correlation property and select Simple Type under the Defined As section.
- Select the data type for our correlation property. In this instance, you have selected a ‘Simple Type’.
- Click Browse. Now we have to select the data type from XML schema types. Select string as our type.
- Click OK. Now a pop-up box appears and asks for the prefix to be used for the XML schema namespace. Enter the prefix as ‘xsd’.
- Click on the New button next to Alias to define the property alias. The Create Property Alias window appears.
- Select Message Type from the available options.
- Click on Browse. Also select input string for query.
- Click OK and finish the process.
Now you have finished defining the correlation set, property and alias.
Note that you selected only one alias here because you are using the same message for both receive activities.
Now we have to add the correlation set to the receive activities. On the first Receive activity, which creates the process instance, initialize the correlation set. On the next Receive activity, it is not necessary to initialize the correlation set. Click on the Receive activity, go to Properties, click Add, and select the correlation set. Since there is only one correlation set, it is the only one that appears. In the Initiation section, select Yes.
- On the next correlation activity, set Initiation to No.
- Next, generate the deployment descriptor.
- Next, select all files related to the project and create a ZIP package. Upload the BPEL package. Once the process is deployed successfully, you can use TryIt to send a request to the process.
- Now navigate to the instance view for this process. The instance has completed up to the Reply activity and is waiting on the next Receive activity. Under the correlation properties, you can see the value sent in the request.
- Now send the same request from TryIt again. The process instance goes to the completed state. You can follow the same steps used here to add correlation sets to any asynchronous business process you implement. Correlation sets can be added to Receive, Invoke, and Pick activities.
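In the generated .bpel source, the pieces configured above correspond roughly to markup like the following sketch (BPEL 2.0 syntax; the set name, property, partner link, and operation names are hypothetical):

```xml
<correlationSets>
  <!-- The set created in the Properties step; tns:corrProp is the string property -->
  <correlationSet name="CorrelationSet" properties="tns:corrProp"/>
</correlationSets>

<!-- First receive: creates the instance and initiates the correlation set -->
<receive name="receiveInput" partnerLink="client" operation="process"
         variable="input" createInstance="yes">
  <correlations>
    <correlation set="CorrelationSet" initiate="yes"/>
  </correlations>
</receive>

<!-- Second receive: same set, not initiated, so it routes to the same instance -->
<receive name="receiveAgain" partnerLink="client" operation="process"
         variable="input">
  <correlations>
    <correlation set="CorrelationSet" initiate="no"/>
  </correlations>
</receive>
```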
Search other guides
Documentation for developers and administrators on installing, configuring, and using the features and capabilities of DSE Graph.
Reference commands and other information for DSE Graph.
Reference guide to graph commands.
Explain terminology specific to DSE Graph.
Insert data and run traversals.
An overview of DSE Graph architecture.
Explain OLTP and OLAP relationship in DSE Graph.
Examine common mistakes made with DSE Graph.
Introduce graph data modeling.
Create graph schema, load external data files, and do advanced graph traversals.
Load schema and data using the DSE Graph Loader.
Perform OLAP analytics jobs on graph data using DSE.
Configure DSE Graph.
Compare DSE Graph and relational databases (RDBMS).
Compare DSE Graph and Cassandra.
Introduce tools available for DSE Graph.
How to troubleshoot common issues with DSE Graph.
How to specify an edge.
How to specify a vertex.
How to read or write a graph using low-level input/output.
How to specify a transaction-wide option.
Reference guide to schema commands.
Reference guide to system commands.
Describes the DSE Graph data types.
Describes graph storage in Cassandra at a high level.
Describe the Apache TinkerPop framework.
graph commands add data to an existing graph.
graph | https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/graph/reference/refGraphAPI.html | 2019-12-05T19:31:45 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.datastax.com |
Do one of the following:
- From the top menu, select Windows > Script Editor.
- From any of the other views, click the Add View button and select Script Editor.
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Initiates the asynchronous execution of the DescribeEC2InstanceLimits operation.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginDescribeEC2InstanceLimits and EndDescribeEC2InstanceLimits.
Namespace: Amazon.GameLift
Assembly: AWSSDK.GameLift.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the DescribeEC2InstanceLimits | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/GameLift/MGameLiftDescribeEC2InstanceLimitsAsyncDescribeEC2InstanceLimitsRequestCancellationToken.html | 2018-03-17T13:03:17 | CC-MAIN-2018-13 | 1521257645069.15 | [] | docs.aws.amazon.com |
If you do not have an ESXi installation CD/DVD, you can create one.
About this task
You can also create an installer ISO image that includes a custom installation script. See Create an Installer ISO Image with a Custom Installation or Upgrade Script.
Procedure
- Download the ESXi installer from the VMware Web site at.
ESXi is listed under Datacenter & Cloud Infrastructure.
- Confirm that the md5sum is correct.
See the VMware Web site topic Using MD5 Checksums at.
- Burn the ISO image to a CD or DVD. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.install.doc/GUID-048CCB07-7E27-4CE8-9E6A-1BF655C33DAC.html | 2018-03-17T13:02:09 | CC-MAIN-2018-13 | 1521257645069.15 | [] | docs.vmware.com |
Running Locust without the web UI
You can run locust without the web UI - for example if you want to run it in some automated flow, like a CI server - by using the --no-web flag together with -c and -r:
locust -f locust_files/my_locust_file.py --no-web -c 1000 -r 100
-c specifies the number of Locust users to spawn, and -r specifies the hatch rate (number of users to spawn per second).
Setting a time limit for the test
Note
This is a new feature in v0.9. For 0.8, use -n to specify the number of requests.
If you want to specify the run time for a test, you can do that with --run-time or -t:
locust -f locust_files/my_locust_file.py --no-web -c 1000 -r 100 --run-time 1h30m
Locust will shut down once the time is up.
Running Locust distributed without the web UI
If you want to run Locust distributed without the web UI, you should specify the --expect-slaves option when starting the master node, to specify the number of slave nodes that are expected to connect. It will then wait until that many slave nodes have connected before starting the test.
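Put together, a distributed headless run might look like the command lines below (the --master, --slave, and --master-host flags match Locust's CLI of this era; verify against your installed version):

```
# Master: wait for 2 slaves, then hatch 1000 users at 100/s for 30 minutes
locust -f locust_files/my_locust_file.py --no-web --master --expect-slaves 2 \
       -c 1000 -r 100 --run-time 30m

# Each slave: connect to the master (192.0.2.10 is a placeholder address)
locust -f locust_files/my_locust_file.py --slave --master-host=192.0.2.10
```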
3 Time-of-Day
Gregor’s time struct represents a time-of-day, irrespective of date or time zone. As with Gregor’s date struct, the name time also conflicts with an existing, incompatible definition in racket/base. The situation here is somewhat different, however. While Gregor completely replaces the functionality offered by the built-in date, it does not replace that of the built-in time function, which is used for measuring the time spent evaluating programs.
To mitigate problems that might be caused by this conflict, Gregor does not provide time-related bindings from the gregor module. Instead, they are provided by the gregor/time module.
Note the contract on second; a time is unable to represent times that fall on added UTC leap-seconds. For a discussion of Gregor’s relationship to UTC, see Time Scale. | https://docs.racket-lang.org/gregor/time.html | 2018-03-17T12:15:45 | CC-MAIN-2018-13 | 1521257645069.15 | [] | docs.racket-lang.org |
Supported integration interfaces

ServiceNow provides a number of interfaces to be able to directly integrate with the platform. These interfaces are considered part of the platform and are provided at no additional charge.

Table 1. Supported Integration Interfaces
Interface: Email, JDBC, JSON, LDAP, SOAP, REST API, SSO - SAML 2.0, Digest token authentication, ODBC, Data Export, CTI, Syslog probe

Computer Telephony Integration: Computer Telephony Integration (CTI) is accomplished by the CTI client on the user machine sending a URL to the instance.
Create a simple reminder email: With a date/time field, a business rule, and an email notification, you can set up a very simple reminder that sends a user an email at a specified time based on information in an incident.
Integrating ServiceNow with your Intranet: There are several ways you can add a ServiceNow login link to your intranet.
JDBCProbe: A JDBC probe runs on the MID Server to query an external database via JDBC and returns results to ServiceNow.
Build a search provider for your instance: ServiceNow Search Providers allow you to search this Wiki and our Forums from the IE and Firefox search bar.
Syslog probe: The ServiceNow Syslog probe uses the MID Server to deliver log messages from a ServiceNow instance to another machine, such as a dedicated log server, using the syslog protocol over an IP network.
TRADACOMS EDI
TRADACOMS EDI lets you convert TRADACOMS files to and from DataWeave-compatible representations using lists and maps.
Release Notes: TRADACOMS Connector Release Notes
TRADACOMS EDI includes:
TRADACOMS file reading, file validation, and file writing
Integration with DataSense and DataWeave
TRADACOMS message pack for all standard files: ACKMNT, AVLDET, CORDER, CRAINF, CREDIT, CUSINF, DELIVR, DLCDET, DRAINF, EXCINF, GENRAL, INVOIC, LPRDET, ORDERS, PAYINF, PICKER, PPRDET, PRIINF, PROINF, SADDET, SNPSTS, SRMINF, UCNDET, UPLIFT, and UTLBIL
The ability to define your own schemas or customize the base TRADACOMS schemas
Compatibility
To get started using the connector, follow these steps:
Install the TRADACOMS Connector
Customize Schemas if your implementation differs from the standard.
Configure TRADACOMS EDI Connector for your trading partner according to your implementation convention.
Use TRADACOMS EDI Inside Mule Flows
This page helps provides guidance for each of these steps.
Install the TRADACOMS Connector
The following sections explain how to install the TRADACOMS connector.
Installing TRADACOMS EDI.
Using the TRADACOMS EDI via Maven
To use the TRADACOMS EDI in conjunction with Maven:
Add the following repositories to your POM:
Add the module as a dependency to your project. This can be done by adding the following under the dependencies element your POM:
If you plan to use this module inside a Mule application, you need add it to the packaging process. That way the final ZIP file that contains your flows and Java code also contains this module and its dependencies. Add an special inclusion to the configuration of the Mule Maven plugin for this module as follows:
Customize Schemas
Customize schemas to describe your messages according to your implementation.
EDI Schema Language
The TRADACOMS EDI uses a YAML format called ESL (for EDI Schema Language) to represent EDI schemas. Basic ESLs define the structure of TRADACOMS files in terms of structures (messages, in TRADACOMS terminology), groups, segments, composites, and elements. ESLs for the most recent TRADACOMS versions are included:
If your implementation convention differs from the standard, you can define an overlay schema so that TRADACOMS is configured to work with your files. An overlay schema is a special form of ESL that allows you to modify a base schema, such as an TRADACOMS INVO.
TRADACOMS EDI schemas have a few differences from the generic EDI format. TRADACOMS files are composed of multiple messages, which occur in a fixed order and only one type is allowed to repeat within a file. To represent these constraints in ESL TRADACOMS adopts some conventions:
Each TRADACOMS file is defined in a separate ESL schema document
The schema version is the file name with the version number appended (so for CORDER at version 6 this would be
'CORDER6')
Each message used in the file is a structure definition in the schema, with the structure name consisting of the six-character message name, followed by a colon character, followed by the message version number (which may differ from the file schema version). So the CORHDR message used by CORDER would have the name
'CORHDR:6'.
Each structure definition uses a class value which is the order of the corresponding message within the file. So for the CORHDR message within CORDER this would be
'1', meaning it’s the first message within the file. By definition, the only message allowed to repeat within a file is the second message, with class
'2'.
Defining your Implementation Convention with an Overlay Schema
To specify a schema according to your implementation convention, you can follow the following process:
Create an "overlay" schema which imports the base schema you want to customize - for example, TRADACOMS INVOIC.
Add new messages as part of the file.
Customize the structure of individual messages -.
For example, here’s a sample overlay schema modifying the basic TRADACOMS INVOIC file definition:
This sample. Here’s the form taken:
Segment Overlays
A segment overlay details modifications to the base schema definition. Most often these modifications take the form of changing the usage of elements or composites in the base definition. Here is a full overlay modifying a segment of a message: TRADACOMS Schema Location
To use the connector, you need to know the locations of the schemas in your project. If you’re using the out of the box TRADACOMS schemas and not customizing anything, the schema location follows the
/tradacoms/{file}.esl pattern. For example, if you’re using the INVOIC file, your schema location is
/tradacoms/INVO/INVOIC.esl, your schema location is
${app.home}/mypartner/INVOIC.esl.
The Mule Runtime automatically checks
src/main/app for any locations that contain the
${app.home} value.
Configure TRADACOMS EDI Connector
After you install the connector and configure your schema customizations (if any), you can start using the connector. Create separate configurations for each implementation convention.
Topics:
Studio Visual Editor
Follow these steps to create a global TRADACOMS EDI configuration in a Mule application:
Click the Global Elements tab at the base of the canvas, then click Create.
In the Choose Global Type wizard, use the filter to locate and select, TRADACOMS EDI: Configuration, then click OK.
Configure the parameters according to the sections that follow.
Click OK to save the global connector configurations.
Return to the Message Flow tab in Studio.
Setting your TRADACOMS Identification
You can configure your STX identification information in the connector so that it automatically checks when a file is being received or set when a file is being sent.
This is the same setup as with X12 and EDIFACT. The message headers include both sender and recipient identification. The "Self" configuration should match the recipient identification in incoming messages. TRADACOMS uses the "Self" as the sender identification in outgoing messages, while the "Partner" configuration is the reverse.
For example, if we put the XYZ company as the Partner Sender, TRADACOMS uses that information to validate incoming messages. If the message is from the XYZ company, the message passes. If not the message fails.
The STX identification information is set in these fields:
Partner identification
Partner Sender/Recipient Code (STX FROM or UNTO Code):
Partner Sender/Recipient Name (STX FROM or UNTO Name):
For Partner identification, if a code is not specified, Transmission Recipient Code is not checked in received transmissions. Similarly, if a name is not specified, the Transmission Sender Name is not checked in received transmissions.
The Partner Sender/Recipient Code identifies a partner. When this value is specified, it is used both to validate the Transmission Sender Code in received transmissions and to set the Transmission Recipient Code in sent transmissions (if not already specified in map data). If not specified the Transmission Sender Code is not checked in received transmissions.
The Partner Sender/Recipient Name identifies a partner. When this value is specified it is used both to validate the Transmission Sender Name in received transmissions and to set the Transmission Recipient Name in sent transmissions (if not already specified in map data). If not specified the Transmission Sender Name is not checked in received transmissions.
Self identification
Self Sender/Recipient Code (STX FROM or UNTO Code):
Self Sender/Recipient Name (STX FROM or UNTO Name):
The "Self identification" parameters identify your side of the trading partner relationship, while the "Partner identification" parameters identify your trading partner. The values you set are used when writing TRADACOMS files to set the sender and recipient code and name, and are verified in order to receive files. If you don’t want to restrict incoming files, you can leave these blank, and set the values for outgoing files in the actual outgoing file data. Values set in the file data override the connector configuration.
The Self Sender/Recipient Code, identifies self. When this value is specified it is used both to validate the Transmission Recipient Code in received transmissions and to set the Transmission Sender Code in sent transmissions (if not already specified in map data). If not specified the Transmission Recipient Code is not checked in received transmissions.
The Self Sender/Recipient Name is used to identify self. When this value is specified, it is used both to validate the Transmission Recipient Name in received transmissions and to set the Transmission Sender Name in sent transmissions (if not already specified in map data). If not specified the Transmission Recipient Name is not checked in received transmissions.
Setting Sender Defaults
You can also configure the connector with defaults for other STX values. These defaults are used when writing TRADACOMS files to set the Sender’s and Recipient’s Transmission References, the Application Reference, and the Transmission Priority Code if not already set in the outgoing data.
Defaults are specified in these fields:
Sender Reference - Sender’s Transmission Reference used when writing a transmission. If specified, this value is used as a default if the required Sender’s Transmission Reference value is not specified in map data for a send transmission (write operation).
Recipient Reference - Recipient’s Transmission Reference used when writing a transmission. If specified, this value is used as a default if an optional Recipient’s Transmission Reference value is not specified in map data for a send transmission (write operation).
Application Reference - Application Reference used when writing a transmission. If specified, this value is used as a default if an optional Application Reference value is not specified in map data for a send transmission (write operation).
Priority Code - Transmission Priority Code used when writing a transmission. If specified, this value is used as a default if an optional Transmission Priority Code value is not specified in map data for a send transmission (write operation).
Setting Parser Options
You can set the following options if needed:
Fail when value length outside allowed range - Fail when the receive value lengthis outside the allowed range. If
true, a transmission with this error is rejected; if
false, the value is used anyway and the transmission is not rejected. In either case, the error is logged and reported in the returned error list.
Fail when unknown segment in transmission - Fail when an unknown segment is present in a transmission. If
true, a transmission with this error is rejected; if
false, the segment is ignored and the transmission is not rejected. In either case the error is logged and reported in the returned error list.
Fail when unused segment included in transmission - Fail when a segment marked as Unused is included in a transmission. If
true, a transmission with this error is rejected; if
false, the transmission is not rejected and the unused segment is ignored. In either case the error is logged and reported in the returned error list.
Fail when segment out of order in transmission - Fail when a segment is out of order in a transmission. If
true, a transmission with this error is rejected; if
falseand the segment can be reordered the transmission is not rejected. In either case the error is logged and reported in the returned error list.
XML Editor or Standalone
Ensure that you have included the EDI namespaces in your configuration file.
Follow these steps to configure TRADACOMS EDI in your application:
Create a global configuration outside and above your flows, using the following global configuration code:
Setting Your TRADACOMS Identification
You can configure the STX identification for you and your trading partner on the TRADACOMS EDI connector configuration.
The "Self identification" parameters identify your side of the trading partner relationship, while the "Partner identification" parameters identify your trading partner. The values you set are used when writing TRADACOMS files to supply the Sender/Recipient Code and Name, and are verified in receive files. If you don’t want to restrict incoming files you can leave these blank, and set the values for outgoing files in the data. Values set directly in the data override the connector configuration.
Self identification parameters:
Partner identification parameters:
Setting Your Schema Locations the message structure, operations, and acknowledgments.
Use TRADACOMS EDI Inside Mule Flows
You can use TRADACOMS EDI connector in your flows for reading and writing messages, and sending acknowledgments.
Topics:
Understanding TRADACOMS Message Structure
The TRADACOMS connector enables reading or writing of TRADACOMS.
Reading and Validating a TRADACOMS File
To read an TRADACOMS file, search the palette for "TRADACOMS EDI" and drag the TRADACOMS EDI building block into a flow. Then, go to the properties view, select the connector configuration you previously created and select the Read operation:
This operation reads any byte stream into the structure described by your TRADACOMS schemas.
TRADACOMS EDI validates the message structure when it reads it in. Message validation includes checking the syntax and content of the STX and all messages of the file, including component segments of the messages. Normally errors are logged and accumulated, and the message data is only supplied as output if no fatal errors occur in parsing the input. Errors reading the input data cause exceptions to be thrown.
Error data entered in the receive data map uses the TradacomsError class, a read-only JavaBean with the following properties:
Error data is returned by the read operation as an optional list with the "Errors" key.
Example Use Case
The following use case reads and writes TRADACOMS messages. A complete listing of the Mule flow is in Example Source Code.
Topics:
Read a TRADACOMS Order
To read a TRADACOMS order:
Create a new Mule Project in Anypoint Studio.
Drag an HTTP connector to the canvas, click the green plus sign to the right of Connector Configuration, and click OK to accept the default settings for Host and Port.
Locate and drag Set Payload next to the HTTP connector and set the Value to a TRADACOMS message as a string, for example:
#["STX=ANAA:1+12345678901234:XYZ COMPANY+43210987654321:ABC COMPANY"...]
See Example Source Code for the complete string.
Locate and drag Logger to the canvas. Set the Message to the
#[payload]value.
Locate and drag TRADACOMS EDI to the canvas. Click the green plus next to Connector Configuration, and click OK to accept the default values.
Locate and drag an Object to JSON transformer to the canvas. No settings are required.
Locate and drag Logger to the canvas. Set the Message to the
#[payload]value.
Write a TRADACOMS Order
Drag a HTTP Connector to the canvas and configure the following parameters:
Locate and drag Data Weave Transformer next to the HTTP connector.
Drag a Tradacoms EDI connector next Data Weave component and select write operation.
Create a new Tradacoms EDI connector configuration, and add /tradacoms/ORDERS.esl schema. If you refresh metadata you see the Orders Input Metadata.
In the Dataweave Transformer, set the following output parameters:
Drag Object to String next to the TRADACOMS EDI connector, and write the payload to a String.
Deploy the application, open a web browser and make a request to the URL.
If the input Map was successfully written, you should receive a TRADACOMS message as a String response in the web browser. | https://docs.mulesoft.com/anypoint-b2b/edi-tradacoms | 2018-03-17T12:30:39 | CC-MAIN-2018-13 | 1521257645069.15 | [array(['./_images/tradacoms-read-operation.png',
'tradacoms-read-operation'], dtype=object)
array(['./_images/tradacoms-write-order.png', 'tradacoms-write-order'],
dtype=object) ] | docs.mulesoft.com |
Project importing from GitLab.com to your private GitLab instance
You can import your existing GitLab.com projects to your GitLab instance. But keep in mind that it is possible only if GitLab support is enabled on your GitLab instance. You can read more about GitLab support here To get to the importer page you need to go to "New project" page.
Note: If you are interested in importing Wiki and Merge Request data to your new instance, you'll need to follow the instructions for project export
Click on the "Import projects from GitLab.com" link and you will be redirected to GitLab.com for permission to access your projects. After accepting, you'll be automatically redirected to the importer.
To import a project, you can simple click "Import". The importer will import your repository and issues. Once the importer is done, a new GitLab project will be created with your imported data. | https://docs.gitlab.com/ee/workflow/importing/import_projects_from_gitlab_com.html | 2017-02-19T21:14:29 | CC-MAIN-2017-09 | 1487501170253.67 | [] | docs.gitlab.com |
XPath
Mule 3.6.0 and above
Supported XPath Versions
Mule 3.6.0 and newer provide basic support for version 3.0 of the spec. This means that any feature of the spec is supported as long as it doesn’t rely on schema awareness, high order functions, or streaming.
Mule 3.6.0 and newer also provide improved support for XPath 2.0, XSLT 2.0 and XQuery 1.0.
Mule 3.6.0 and newer include the
xpath3() function which provides full support for XPath 3.0 and 2.0. However, versions 3.0 and 2.0 are not fully compatible with version 1.0; for this reason, the old `xpath() function is only supported until Mule 4.0.
In Mule 3.6.0, the following XPath components were deprecated:
xpathexpression evaluator
xpath2expression evaluator
beanexpression evaluator
jxpathfilter
jxpathextractor transformer
jaxenfilter
Since XPath 3.0 is completely backwards-compatible with XPath 2.0, the new function works for 2.0 expressions as well. XPath 1.0 expressions, however, are not guaranteed to work. Compatibility mode is limited, and disabled by default.
For a description of the new xpath3() function, see the next section.
The xpath3() Function
The
xpath3() function supports Function Parameters and Query Parameters.
The function takes the following form:
xpath3(xpath_expression, input_data, return_type)
Function Parameters
Input Data (Object, Optional)
If not provided, this defaults to the message payload. It supports the following input types:
org.w3c.dom.Document
org.w3c.dom.Node
org.xml.sax.InputSource
OutputHandler
byte[]
InputStream
String
XMLStreamReader
DelayedResult
If input is not of any of the above types, the function attempts to use a registered transformer to transform the input into a DOM document or node. If no such transformer is found, then an
IllegalArgumentException is thrown.
Additionally, this function verifies if the input is of a consumable type, such as streams or readers. Evaluating the expression over a consumable input causes the input source to be exhausted, so in cases in which the input value was the actual message payload (whether left blank by the user or provided by default), the function updates the output message payload with the result obtained from consuming the input.
Return Type (String, Optional)
If not provided, defaults to String.
This parameter was introduced to allow for the different intents you may have when invoking the
xpath3() function, such as retrieving the actual data or just verifying if a specific node exists. This feature conforms to the JAXP API JSR-206, which defines the standard way for a Java application to handle XML, and therefore to execute XPath expressions. This parameter allows you to leverage this feature of the JAXP API without delving into the API’s complexities.
You can select from the following list of possible output types:
BOOLEAN: Returns the effective boolean value of the expression as a
java.lang.String. Equivalent to wrapping the expression in a call of the XPath `boolean() ` function.
STRING: Returns the result of the expression converted to a string, as a
java.lang.String. Equivalent to wrapping the expression in a call to the XPath
string()function.
NUMBER: Returns the result of the expression converted to a double as a
java.lang.Double. Equivalent to wrapping the expression in a call of the XPath
number()function.
NODE: Returns the result as a node object.
NODESET: Returns a DOM NodeList object.
Query Parameters
The
xpath3() function supports passing parameters into the XPath query. The example below shows a query which returns all the LINE elements which contain a given word:
//LINE[contains(., $word)]
The $ sign marks the parameter. The function automatically resolves that variable against the current flow variables. The example below returns all occurrences of the word "handkerchief:"
Namespace Manager
The
xpath3() function is
namespace-manager-aware, which means that all namespaces configured through a
namespace-manager component is available on XPath evaluation.
The example below shows how to perform an XPath evaluation on a document with multiple namespaces.
The above sample document has several namespaces, which the Xpath engine needs to be aware of in order to navigate the DOM tree. The relevant
xpath3() function is shown below.
XQuery
The xquery-transformer element retains the same syntax as in previous versions. You can select the XQuery version using xquery version in the XQuery script, as shown below:
XQuery 3.0 introduces support for several new features in the XQuery transformer, such as using an XQuery script to operate on multiple documents at once. For more information, see XQuery Transformer. | https://docs.mulesoft.com/mule-user-guide/v/3.8/xpath | 2017-02-19T21:05:09 | CC-MAIN-2017-09 | 1487501170253.67 | [] | docs.mulesoft.com |
PIG 1 - PIG purpose and guidelines¶
- Author: Christoph Deil
- Created: December 20, 2017
- Accepted: January 9, 2018
- Status: accepted
- Discussion: GH 1239
Abstract¶
PIG stands for “proposal for improvement of Gammapy”. This is PIG 1, describing the purpose of using PIGs as well as giving some guidelines on how PIGs are authored, discussed and reviewed.
What is a PIG?¶
This article is about the design document . For other uses, see Pig (disambiguation)
Proposals for improvement of Gammapy (PIGs) are short documents proposing a major addition or change to Gammapy.
PIGs are like APEs, PEPs, NEPs and JEPs, just for Gammapy. Using such enhancement proposals is common for large and long-term open-source Python projects.
The primary goal of PIGs is to have an open and structured way of working on Gammapy, forcing the person making the change to think it through and motivate the proposal before taking action, and for others to have a chance to review and comment on the proposal. The PIGs will also serve as a record of major design decisions taken in Gammapy, which can be useful in the future when things are re-discussed or new proposals to do things differently arrive.
We expect that we will not use PIGs very often, but we think they can be useful e.g. in the following cases:
- Outline design for something that requires significant work, e.g. on topics like “Modeling in Gammapy” or “High-level user interface in Gammapy”. These PIGs can be rather long, i.e. more than one page, explaining the design in detail and even explaining which alternatives were considered and why the proposed solution was preferred.
- Have a conscious decision and make sure all interested parties are aware for things that might be controversial and have long-term effects for Gammapy, e.g. when to drop Python 2 support or defining a release cycle for Gammapy. These PIGs can usually be very short, a page or less.
Writing a PIG¶
Anyone is welcome to write a PIG!
PIGs are written as RST files in the
docs/development/pigs folder in the
main Gammapy repository, and submitted as pull requests.
Most discussions concerning Gammapy will happen by talking to each other directly (calls or face-to-face), or online on the mailing list or Github. If you’re not sure if you should write a PIG, please don’t! Instead bring the topic up in discussions first with other Gammapy developers, or on the mailing list, to get some initial feedback. This will let you figure out if writing a PIG will be helpful or not to realise your proposal.
When starting to write a PIG, we suggest you copy & paste & update the header at
the top of this file, i.e. the title, bullet list with “Author” etc, up to the
Abstract section. A PIG must have a number. If you’re not sure what the next
free number is, put
X and filename
pig-XXX.rst as placeholders, make the
pull request, and we’ll let you know what number to put.
Please make your proposal clearly and keep the initial proposal short. If more information is needed, people that will review it will ask for it.
In term of content, we don’t require a formal structure for a PIG. We do suggest that you start with an “Abstract” explaining the proposal in one or a few sentences, followed by a section motivating the change or addition. Then usually there will be a detailed description, and at the end or interspersed there will often be some comments about alternative options that have been discussed and explanation why proposed one was favoured. Usually a short “Decision” section will be added at the end of the document after discussion by the reviewers.
If you’re not sure how to structure your proposal, you could have a look at at
the APE template or some well-written APEs or PEPs. APE 5, APE 7 and
APE 13 are examples of “design documents”, outlining major changes /
extensions to existing code in Astropy. APE 2 and APE 10 are examples of
“process” proposals, outlining a release cycle for Astropy and a timeline for
dropping Python 2 support. PEP 389 is a good example proposing an improvement
in the Python standard library, in that case by adding a new module
argparse, leaving the existing
optparse alone for backward-compatibility
reasons. In Gammapy many PIGs will also be about implementing better solutions
and a major question will be whether to change and improve the existing
implementation, or whether to just put a new one, and in that case what the plan
concerning the old code is. PEP 481 is an example of a “process” PEP,
proposing to move CPython development to git and Github. For Gammapy, we might
also have “process” PIGs in the future, e.g. concerning release cycle or package
distribution or support for users and other projects relying on Gammapy. It’s
good to think these things through and write them down to make sure it’s clear
how things work and what the plan for the future is.
Writing a PIG doesn’t mean you have to implement it. That said, we expect that most PIGs will propose something that requires developments where someone has the intention to implement it within the near future (say within the next year). But it’s not required, e.g. if you have a great idea or vision for Gammapy that requires a lot of development, without the manpower to execute the idea, writing a PIG could be a nice way to share this idea in some detail, with the hope that in collaboration with others it can eventually be realised.
PIG review¶
PIG review happens on the pull request on Github.
When a PIG is put up, an announcement with a link to the pull request should be sent both to the Gammapy mailing list and the Gammapy coordinator list.
Anyone is welcome to review it and is encouraged to share their thoughts in the discussion!
Please note that Github hides inline comments after they have been edited, so we suggest that you use inline comments for minor points like spelling mistakes only. Put your main feedback as normal comments in the “Conversation” tab, so that for someone reading the discussion later they will see your comment directly.
The final decision on any PIG is made by the Gammapy coordination committee. We expect that in most cases, the people participating in the PIG review will reach a consensus and the coordination committee will follow the outcome of the public discussion. But in unusual cases where disagreement remains, the coordination committee will talk to the people involved in the discussion with the goal to reach consensus or compromise, and then make the final decision.
PIG status¶
PIGs can have a status of:
- “draft” - in draft status, either in the writing or discussion phase
- “withdrawn” - withdrawn by the author
- “accepted” - accepted by the coordination committee
- “rejected” - rejected by the coordination committee
When a PIG is put up for discussion as a pull request, it should have a status of “draft”. Then once the discussion and review is done, the status will change to one of “withdrawn”, “accepted” or “rejected”. The reviewers should add a section “Decision” with a sentence or paragraph summarising the discussion and decision on this PIG. Then in any case, the PIG should be merged, even if it’s status is “withdrawn” or “rejected”.
Final remarks¶
This PIG leaves some points open. This is intentional. We want to keep the process flexible and first gain some experience. The goal of PIGs is to help the Gammapy developer team to be more efficient, not to have a rigid or bureaucratic process.
Specifically the following points remain flexible:
- When to merge a PIG? There can be cases where the PIG is merged quickly, as an outline or design document, even if the actual implementation hasn’t been done yet. There can be other cases where the PIG pull request remains open for a long time, because the proposal is too vague or requires prototyping to be evaluated properly. Note that this is normal, e.g. Python PEPs are usually only accepted once all development is done and a full implementation exists.
- Allow edits of existing PIGs? We don’t say if PIGs are supposed to be fixed or live documents. We expect that some will remain fixed, while others will be edited after being merged. E.g. for this PIG 1 we expect that over the years as we gain experience with the PIG process and see what works well and what doesn’t, that edits will be made with clarifications or even changes. Whether to edit an existing PIG or whether to write a new follow-up PIG will be discussed on a case by case basis.
- What to do if the coordination committee doesn’t agree on some PIG? For now, we leave this question to the future. We expect that this scenario might arise, it’s normal that opinions on technical solutions or importance of use cases or projects to support with Gammapy differ. We also expect that Gammapy coordination committee members will be friendly people that can collaborate and find a solution or at least compromise that works for everyone. | https://docs.gammapy.org/0.12/development/pigs/pig-001.html | 2019-06-16T05:30:36 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.gammapy.org |
1 Introduction
This how-to explains how to create an Unsupported Widget action for the Mendix text box widget. In a standard situation, the first step is checking if ATS supports the widget.
The how-to assumes that you must build your own action.
The how-to applies to all widgets like the text box widget, which means that, if ATS needs to enter text in a widget, you can follow this how-to. Keep in mind that it might need some adjustments depending on the widget!
This how-to will teach you how to do the following:
- Approach a widget in which ATS must enter text
- Create a custom action for entering text in the text box

Entering text in the text box involves clicking the text box and entering the text. The clicking part is something a user does to focus the text box so they can enter text. After that, you press Enter or click somewhere to unfocus the text box.
This is the text box focused:
This is the text box unfocused:
Now you know that you must focus, enter text, and unfocus the widget. You perform these tasks on the `input` element that is available inside all input widgets. The `input` element with the type `text` makes it possible to type inside a widget.
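To make the sequence concrete, here is a minimal JavaScript sketch of the focus, type, and unfocus steps, run against a mock object standing in for the real `input` element. The function and mock are illustrative only; they are not ATS or Mendix APIs:

```javascript
// Illustrative mock of the user approach: focus, clear, type, unfocus.
// "inputEl" stands in for the widget's input element with type "text".
function enterText(inputEl, value) {
  inputEl.focused = true;  // clicking the text box focuses the input
  inputEl.value = "";      // a fresh entry starts from an empty value
  inputEl.value += value;  // typing appends the characters
  inputEl.focused = false; // pressing Enter or clicking away unfocuses it
  return inputEl.value;
}

const mockInput = { value: "", focused: false };
console.log(enterText(mockInput, "hello")); // → "hello"
```

The custom action you build below performs the same four steps, but through ATS actions on the live DOM element instead of a mock.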
4 Creating the Action Structure
In the previous step, you wrote down the user approach for the text box widget. Now you are going to create this approach in ATS with actions.
To create the action structure, follow these steps:
Start by checking the parent element, which is always the element with mx-name when creating an unsupported widget action. If the widget does not have mx-name, look for the highest div element that is still referencing the widget. The parent element of the text box looks like this in the debugger:
The debugger creates the border around the selected element in the app:
The parent element is not an input element. Find a child element that ATS can use to enter text in the widget. When you look at the parent element, you will see it has an input child node. You must know if ATS can find the input element within the text box widget. Use the debugger to simulate what ATS does. Since the Find Widget Child Node action uses the mx-name to find the parent, you must also use the mx-name in your code.
Use jQuery to find out if ATS can find the element. Enter the following code in the console of the debugger:

$('.mx-name-textBox2 input')
Add the Find Widget Child Node action and use input as the child node selector, then enter the test step description and output description:
Test step 1 provides the input element that you need for the other steps. Now, add the Focus and Clear Element Value action. Enter the output of step 1 as the input, and give it a proper description:
After focusing the input element, enter the text. When entering text in an input element, use the Send Keys action. Add the action, connect the input element from step 1, and give it a proper description:
The last action you add is Mendix Wait. You trigger a possible event in the widget by entering text, so you need to ensure that ATS waits for all the background processes to finish. This action does not need an output parameter.
Connect the input parameters to the correct actions. Start with the Widget Name and Search Context parameters for the Find Widget Child Node action:
The last parameter to connect is the Value parameter. Connect this input parameter to the Send Keys action:
There is no need to add logic to this custom action. It only involves entering text in the text box widget.
This setting disables OfficeScan client console access from the system tray or Windows Start menu. The only way users can access the OfficeScan client console is by clicking PccNT.exe from the <Client installation folder>. After configuring this setting, reload the OfficeScan client for the setting to take effect.
This setting does not disable the OfficeScan client. The OfficeScan client runs in the background and continues to provide protection from security risks. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_policy_templates/osce_client/client_priv_sett_all/prmt_self_prot_cnsol.aspx | 2019-06-16T04:35:11 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
VM snapshots
Citrix Hypervisor provides a convenient mechanism that can take a snapshot of a VM's storage and metadata at a given time. Snapshots are supported on all storage types. However, for the LVM-based storage types the following requirements must be met:
- If the storage repository was created on a previous version of Citrix Hypervisor, it must have been upgraded
- The volume must be in the default format (you cannot take a snapshot of type=raw volumes)
The snapshot operation is a two-step process:
Capturing metadata as a template.
Creating a VDI snapshot of the disks.
Three types of VM snapshots are supported: regular, quiesced, and snapshots with memory.
Regular snapshots
Regular snapshots are crash consistent and can be performed on all VM types, including Linux VMs.
Quiesced snapshots
Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate application-consistent point-in-time snapshots. The VSS framework helps VSS-aware applications (for example, Microsoft SQL Server) flush data to disk and prepare for the snapshot before it is taken.
Citrix Hypervisor supports quiesced snapshots on:
Windows Server 2016
Windows Server 2012 R2
Windows Server 2012
Windows Server 2008 R2
Windows Server 2008 (32/64-bit)
Windows 10, Windows 8.1, and Windows 7 are not supported for quiesced snapshots. For more information about quiesced snapshots, see Advanced Notes for Quiesced Snapshots.
Snapshots with memory
In addition to saving the VM's memory (storage) and metadata, snapshots with memory also save the VM's state (RAM). This feature can be useful when you upgrade or patch software, but you also want the option to revert to the pre-change VM state (RAM). Reverting to a snapshot with memory does not require a reboot of the VM.
You can take a snapshot with memory of a running or suspended VM via the management API, the xe CLI, or by using XenCenter.
Create a VM snapshot
Before taking a snapshot, see the following information about any special operating system-specific configuration and considerations:
First, ensure that the VM is running or suspended so that the memory status can be captured. The simplest way to select the VM on which the operation is to be performed is by supplying the argument vm=name or vm=vm uuid.
Run the vm-snapshot and vm-snapshot-with-quiesce commands to take a snapshot of a VM.
xe vm-snapshot vm=vm uuid new-name-label=vm_snapshot_name

xe vm-snapshot-with-quiesce vm=vm uuid new-name-label=vm_snapshot_name
Create a snapshot with memory
Run the vm-checkpoint command, giving a descriptive name for the snapshot with memory, so that you can identify it later:
xe vm-checkpoint vm=vm uuid new-name-label=name of the checkpoint
When Citrix Hypervisor has completed creating the snapshot with memory, its uuid is displayed.
For example:
xe vm-checkpoint vm=2d1d9a08-e479-2f0a-69e7-24a0e062dd35 \
    new-name-label=example_checkpoint_1

b3c0f369-59a1-dd16-ecd4-a1211df29886
A snapshot with memory requires at least 4 MB of disk space per disk, plus the size of the RAM, plus around 20% overhead. So a checkpoint with 256 MB RAM would require approximately 300 MB of storage.
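As a rough sketch, the estimate above can be computed as follows (the 4 MB-per-disk figure and the ~20% overhead are the approximations stated in this section; actual sizes vary):

```python
def checkpoint_size_mb(ram_mb, num_disks=1):
    """Rough storage estimate for a snapshot with memory (checkpoint):
    at least 4 MB per disk, plus the RAM size, plus around 20% overhead."""
    return (4 * num_disks + ram_mb) * 1.2

# A checkpoint of a VM with 256 MB RAM and one disk:
print(round(checkpoint_size_mb(256)))  # 312 -- roughly the "approximately 300 MB" above
```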
Note:
During the checkpoint creation process, the VM is paused for a brief period, and cannot be used during this period.
To list all of the snapshots on your Citrix Hypervisor pool
Run the snapshot-list command:
xe snapshot-list
This command lists all of the snapshots in the Citrix Hypervisor pool.
To list the snapshots on a particular VM
Get the uuid of the particular VM by running the vm-list command.
xe vm-list
This command displays a list of all VMs and their UUIDs. For example:
xe vm-list

uuid ( RO): 116dd310-a0ef-a830-37c8-df41521ff72d
     name-label ( RW): Windows Server 2012 (1)
     power-state ( RO): halted

uuid ( RO): 96fde888-2a18-c042-491a-014e22b07839
     name-label ( RW): Windows 2008 R2 (1)
     power-state ( RO): running

uuid ( RO): dff45c56-426a-4450-a094-d3bba0a2ba3f
     name-label ( RW): Control domain on host
     power-state ( RO): running
VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted selects all VMs whose power-state field is equal to 'halted'. Where multiple VMs are matching, the option --multiple must be specified to perform the operation. Obtain the full list of fields that can be matched by using the command xe vm-list params=all.
Locate the required VM and then enter the following:
xe snapshot-list snapshot-of=vm uuid
For example:
xe snapshot-list snapshot-of=2d1d9a08-e479-2f0a-69e7-24a0e062dd35
This command lists the snapshots currently on that VM:
uuid ( RO): d7eefb03-39bc-80f8-8d73-2ca1bab7dcff
        name-label ( RW): Regular
 name-description ( RW):
      snapshot_of ( RO): 2d1d9a08-e479-2f0a-69e7-24a0e062dd35
    snapshot_time ( RO): 20090914T15:37:00Z

uuid ( RO): 1760561d-a5d1-5d5e-2be5-d0dd99a3b1ef
        name-label ( RW): Snapshot with memory
 name-description ( RW):
      snapshot_of ( RO): 2d1d9a08-e479-2f0a-69e7-24a0e062dd35
    snapshot_time ( RO): 20090914T15:39:45Z
Restore a VM to its previous state
Ensure that you have the uuid of the snapshot that you want to revert to, and then run the snapshot-revert command:
Run the snapshot-list command to find the UUID of the snapshot or checkpoint that you want to revert to:
xe snapshot-list
Note the uuid of the snapshot, and then run the following command to revert:
xe snapshot-revert snapshot-uuid=snapshot uuid
For example:
xe snapshot-revert snapshot-uuid=b3c0f369-59a1-dd16-ecd4-a1211df29886
After reverting a VM to a checkpoint, the VM is suspended.
Notes:
-
If there’s insufficient disk space available to thickly provision the snapshot, you cannot restore to the snapshot until the current disk’s state has been freed. If this issue occurs, retry the operation.
-
It is possible to revert to any snapshot. Existing snapshots and checkpoints are not deleted during the revert operation.
Delete a snapshot
Ensure that you have the UUID of the checkpoint or snapshot that you want to remove, and then run the following command:
Run the snapshot-list command to find the UUID of the snapshot or checkpoint that you want to remove:
xe snapshot-list
Note the UUID of the snapshot, and then run the snapshot-uninstall command to remove it:
xe snapshot-uninstall snapshot-uuid=snapshot-uuid
This command alerts you to the VM and VDIs that are deleted. Type yes to confirm.
For example:
xe snapshot-uninstall snapshot-uuid=1760561d-a5d1-5d5e-2be5-d0dd99a3b1ef
The following items are about to be destroyed
VM : 1760561d-a5d1-5d5e-2be5-d0dd99a3b1ef (Snapshot with memory)
VDI: 11a4aa81-3c6b-4f7d-805a-b6ea02947582 (0)
VDI: 43c33fe7-a768-4612-bf8c-c385e2c657ed (1)
VDI: 4c33c84a-a874-42db-85b5-5e29174fa9b2 (Suspend image)
Type 'yes' to continue
yes
All objects destroyed
If you only want to remove the metadata of a checkpoint or snapshot, run the following command:
xe snapshot-destroy snapshot-uuid=snapshot-uuid
For example:
xe snapshot-destroy snapshot-uuid=d7eefb03-39bc-80f8-8d73-2ca1bab7dcff
Snapshot templates
Create a template from a snapshot
You can create a VM template from a snapshot. However, its memory state is removed.
Use the command snapshot-copy and specify a new-name-label for the template:
xe snapshot-copy new-name-label=vm-template-name \ snapshot-uuid=uuid of the snapshot
For example:
xe snapshot-copy new-name-label=example_template_1 snapshot-uuid=b3c0f369-59a1-dd16-ecd4-a1211df29886
Note:
This command creates a template object in the SAME pool. This template exists in the Citrix Hypervisor database for the current pool only.
To verify that the template has been created, run the command template-list:
xe template-list
This command lists all of the templates on the Citrix Hypervisor server.
Export a snapshot to a template
When you export a VM snapshot, a complete copy of the VM (including disk images) is stored as a single file on your local machine. This file has a .xva file name extension.
Use the command snapshot-export-to-template to create a template file:
xe snapshot-export-to-template snapshot-uuid=snapshot-uuid \
    filename=template-filename
For example:
xe snapshot-export-to-template snapshot-uuid=b3c0f369-59a1-dd16-ecd4-a1211df29886 \ filename=example_template_export
The VM export/import feature can be used in various ways:

As a convenient backup facility for your VMs. An exported VM file can be used to recover an entire VM in a disaster scenario.
For more information about the use of templates, see Create VMs and also the Managing VMs section in the XenCenter Help.
Advanced notes for quiesced snapshots
Note:
Do not forget to install the Xen VSS provider in the Windows guest to support VSS. This installation is done using the install-XenProvider.cmd script provided with the Citrix VM Tools. For more information, see Windows VMs.
In general, a VM can only access VDI snapshots (not VDI clones) of itself using the VSS interface. A Citrix Hypervisor administrator can add an attribute of snapmanager=true to the VM other-config map to allow that VM to import snapshots of VDIs from other VMs.
Warning:
This configuration opens a security vulnerability. Use it with care. With it, an administrator can attach VSS snapshots using an in-guest transportable snapshot ID as generated by the VSS layer to another VM for the purposes of backup.
VSS quiesce timeout: the Microsoft VSS quiesce period is set to a non-configurable value of 10 seconds. It is possible that a snapshot cannot complete in time. For example, if the XAPI daemon has queued extra blocking tasks such as an SR scan, the VSS snapshot may time out and fail. If this timeout happens, retry the operation.
Note:
The more VBDs attached to a VM, the more likely it is that this timeout may be reached. We recommend attaching no more than 2 VBDs to a VM to avoid reaching the timeout. However, there is a workaround to this problem. The probability of taking a successful VSS-based snapshot of a VM with more than 2 VBDs can be increased when all VDIs for the VM are on different SRs.
VSS snapshot of all the disks attached to a VM: to store all data available at the time of a VSS snapshot, the XAPI manager takes a snapshot of all disks and the VM metadata associated with a VM that can be snapshotted by using the Citrix Hypervisor storage manager API. If the VSS layer requests a snapshot of only a subset of the disks, a full VM snapshot is not taken.
vm-snapshot-with-quiesce: Produces bootable snapshot VM images: The Citrix Hypervisor VSS hardware provider makes snapshot volumes writable, including the snapshot of the boot volume.
VSS snap of volumes hosted on dynamic disks in the Windows Guest: the vm-snapshot-with-quiesce CLI and the Citrix Hypervisor VSS hardware provider do not support snapshots of volumes hosted on dynamic disks on the Windows VM.
Scheduled snapshots
The Scheduled Snapshots feature provides a simple backup and restore utility for your critical service VMs. Regular scheduled snapshots are taken automatically and can be used to restore individual VMs. Scheduled Snapshots work by having pool-wide snapshot schedules for selected VMs in the pool. When a snapshot schedule is enabled, Snapshots of the specified VM are taken at the scheduled time each hour, day, or week. Several Scheduled Snapshots may be enabled in a pool, covering different VMs and with different schedules. A VM can be assigned to only one snapshot schedule at a time.
XenCenter provides a range of tools to help you use this feature:
To define a Scheduled Snapshot, use the New snapshot schedule wizard.
To enable, disable, edit, and delete Scheduled Snapshots for a pool, use the VM Snapshot Schedules dialog box.
To edit a snapshot schedule, open its Properties dialog box from the VM Snapshot Schedules dialog box.
To revert a VM to a scheduled snapshot, select the snapshot on the Snapshots tab and revert the VM to it.
For more information about Scheduled Snapshots, see XenCenter Help. | https://docs.citrix.com/en-us/citrix-hypervisor/dr/snapshots.html | 2019-06-16T06:44:16 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.citrix.com |
ReservationCollection Class
Definition
Manages the collection of space reservations made in a record sequence.
public ref class ReservationCollection abstract : System::Collections::Generic::ICollection<long>, System::Collections::Generic::IEnumerable<long>
public abstract class ReservationCollection : System.Collections.Generic.ICollection<long>, System.Collections.Generic.IEnumerable<long>
type ReservationCollection = class interface ICollection<int64> interface seq<int64> interface IEnumerable
Public MustInherit Class ReservationCollection Implements ICollection(Of Long), IEnumerable(Of Long)
- Inheritance
-
- Implements
-
Examples
Reservations can be performed in two ways as shown in the following examples. You can adopt the practices in the samples for robust processing. Notice that this task can only be performed when using the CLFS-based LogRecordSequence class.
// Using the ReserveAndAppend method
ReservationCollection reservations = recordSequence.CreateReservationCollection();
long[] lengthOfUndoRecords = new long[] { 1000 };

recordSequence.ReserveAndAppend(recordData,
                                userSqn,
                                previousSqn,
                                RecordSequenceAppendOptions.None,
                                reservations,
                                lengthOfUndoRecords);

recordSequence.Append(undoRecordData,   // If necessary …
                      userSqn,
                      previousSqn,
                      RecordSequenceAppendOptions.ForceFlush,
                      reservations);

// Using the manual approach
ReservationCollection reservations = recordSequence.CreateReservationCollection();
reservations.Add(lengthOfUndoRecord);

try
{
    recordSequence.Append(recordData, userSqn, previousSqn, RecordAppendOptions.None);
}
catch (Exception)
{
    reservations.Remove(lengthOfUndoRecord);
    throw;
}

recordSequence.Append(undoRecordData, userSqn, previousSqn,
                      RecordAppendOptions.ForceFlush, reservations);
Remarks
This class represents a set of reservation areas that are made in a record sequence. Adding items to the collection allocates new reservations. Removing items from the collection frees those reservations.
An application reserves space in the log when it has data that is to be written to the log in the future, but cannot write it immediately. Reservations provide a guarantee that the data can be written to the log when the data is available to be written. When using logs, applications often reserve one or more log records in a marshaling area. You must reserve records prior to appending them.
Reservations can be used to guarantee that an operation can be completed before the data is committed; otherwise, the changes are rolled back. It can also be used to record an "undo action" in the log. During a rollback operation, a transactional resource manager (RM) must be able to recover its state if the RM is interrupted during the rollback operation. By using a reservation area, an RM can reserve space in a log before it is used.
The ReserveAndAppend method can either reserve space or append data, or both, depending on the parameters that are specified when making the call. As work progresses in a transaction, an application can append the undo information and reserve space for compensation records. During a rollback operation, compensation records that are created indicate what has been undone on the disk. The records are appended using space that has been previously reserved. This guarantees that an RM does not run out of log space, which is a fatal condition, while performing a rollback operation. If a log fills up during a transaction, an application can safely roll back a transaction without corrupting durable data.
CLFS is an ARIES-compliant logging system, meant for write-ahead logging. In write-ahead logging, an application writes an undo record before it performs the operation, reserving the amount of space it takes in the log to write a compensating record, which may be used during rollback. Later, the reserved space is used when the compensation record is actually written.
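The space-accounting idea behind this pattern can be illustrated with a short conceptual sketch (plain Python, not the .NET types; the capacity and record sizes are arbitrary): because space for the compensation record is reserved before the forward operation, appending the rollback record can never fail for lack of log space.

```python
class SketchLog:
    """Toy log with reserve-then-append accounting (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.reserved = 0

    def reserve(self, size):
        # Fail early, before the transaction does any work.
        if self.used + self.reserved + size > self.capacity:
            raise RuntimeError("log full")
        self.reserved += size

    def append(self, size, from_reservation=False):
        if from_reservation:
            # Space was guaranteed up front; consume the reservation.
            self.reserved -= size
        elif self.used + self.reserved + size > self.capacity:
            raise RuntimeError("log full")
        self.used += size

log = SketchLog(capacity=10)
log.reserve(4)                        # reserve room for a compensation record
log.append(5)                         # forward record still fits (5 + 4 <= 10)
log.append(4, from_reservation=True)  # rollback record cannot run out of space
print(log.used, log.reserved)         # 9 0
```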
Applications can either reserve space or append records into reserved space at any given time (they are mutually exclusive operations). After a commit record is written to the log, an application can free up the reservations for the compensation records. This action can be done by calling either the FreeReservation or ReserveAndAppend method. Calling the ReserveAndAppend method guarantees that the operation is atomic, while calling the FreeReservation method does not.
When you free records, you must free the same records that you reserved together in a previous call to the ReserveAndAppend method.
Note
Your implementation of IRecordSequence must implement the MakeReservation and FreeReservation methods to perform the actual reservation allocation and deallocation. In addition, your implementation must also call ReservationFreed when a record is written into a reserved space. | https://docs.microsoft.com/en-gb/dotnet/api/system.io.log.reservationcollection?view=netframework-4.8&viewFallbackFrom=netcore-2.0 | 2019-06-16T05:26:59 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
Opt-In to Microsoft Update
You can opt a computer in to the Microsoft Update service and then register that service with Automatic Updates.
The scripting sample in this topic shows you how to use Windows Update Agent (WUA) to register the Microsoft Update service with Automatic Updates. Alternatively, to register the service, the user can visit Microsoft Update.
Before you attempt to run this sample, verify that the version of WUA that is installed on the computer is version 7.0.6000 or a later version. For more information about how to determine the version of WUA that is installed, see Determining the Current Version of WUA.
Example
The following scripting sample shows you how to use the Windows Update Agent (WUA) to register the Microsoft Update service with Automatic Updates. The sample allows deferred or offline processing if needed.

Set ServiceManager = CreateObject("Microsoft.Update.ServiceManager")
ServiceManager.ClientApplicationID = "My App"

'add the Microsoft Update service by GUID
Set NewUpdateService = ServiceManager.AddService2("7971f918-a847-4430-9279-4a52d1efe18d",7,"")
In earlier versions of WUA (a minimum WUA version of 7.0.6000), you can simplify the opt-in process by using a registry setting. After the registry key and values are configured, the Microsoft Update opt-in process occurs the next time WUA performs a search. The opt-in process may be triggered by Automatic Updates or by an API caller.
For example, the full path of the registry key and values to set for the opt-in process are as follows:
HKLM\Software\Microsoft\Windows\CurrentVersion\WindowsUpdate\PendingServiceRegistration\7971f918-a847-4430-9279-4a52d1efe18d
ClientApplicationID = My App
RegisterWithAU = 1
Note
The registry key is respected only once, when WUA is updated from a version that is earlier than version 7.0.6000 to version 7.0.6000 or to a later version. We recommend discretion when overwriting existing registry values because overwriting the values may change the result of an earlier service registration request.
Creating this registry key requires administrative credentials. For Windows Vista, the caller must create the registry key in an elevated process. | https://docs.microsoft.com/en-us/windows/desktop/Wua_Sdk/opt-in-to-microsoft-update | 2019-06-16T04:48:10 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
Additional Issue hierarchies
This page describes advanced Jira and eazyBI integration. If you need help with this please contact eazyBI support.
Available from eazyBI version 4.1.
On this page:
Configuration
The Issue dimension has the following standard hierarchies:
- the default hierarchy with Project and Issue levels,
- the Sub-task hierarchy with Project, Parent and Sub-task levels,
- and if you have imported the standard Epic Link field then also
the Epic hierarchy with Project, Epic, Parent and Sub-task levels.
You can define additional Issue dimension hierarchies using the issue fields that are imported. Quite often additional hierarchies are defined using imported issue link custom fields.
Additional hierarchies are defined in eazyBI advanced settings.
As an example, let's create an additional Feature hierarchy with Feature, Parent, and Sub-task levels. Let's assume that we have issue links from parents to features and have already configured a custom field for this issue link:
[jira.customfield_feature]
name = "Feature"
outward_link = "is a feature for"
issue_type = "Feature"
update_from_issue_key = "parent_issue_key"
Now we can define an additional hierarchy in the following way:
[[jira.issue_hierarchies]]
name = "Feature"
all_member_name = "All Issues by features"
levels = [
  {name="Feature",key_column="customfield_feature",issue_type="Feature"},
  {name="Parent",key_column="subtask_parent_key"},
  {name="Sub-task",key_column="subtask_key"}
]
Now, if you select and import the Feature custom issue link field, the corresponding additional Feature hierarchy will also be defined for the Issue dimension.
Examples
Hierarchy with epics and themes
If you are using Jira Software (formerly Jira Agile) epics (and have imported the Epic Link custom field) and are linking epics to higher-level features or themes, you can define the following hierarchy.
Let's assume that epics are linked to themes (with the Theme issue type). First, define an issue link custom field:
[jira.customfield_theme]
name = "Theme"
outward_link = "is a theme for"
issue_type = "Theme"
update_from_issue_key = "epic_key"
As we have specified to update customfield_theme from epic_key, all stories and story sub-tasks within the epic will have the corresponding customfield_theme value as well.
Then we can define an additional hierarchy with Theme, Epic, Parent and Sub-task levels:
[[jira.issue_hierarchies]]
name = "Theme"
all_member_name = "All Issues by themes"
levels = [
  {name="Theme",key_column="customfield_theme",issue_type="Theme"},
  {name="Epic",key_column="epic_key"},
  {name="Parent",key_column="epic_parent_key"},
  {name="Sub-task",key_column="subtask_key"}
]
Epic hierarchy without projects
In the Issue dimension Epic hierarchy, epics are split by projects based on their stories; therefore, epics could be included in the report more than once. To avoid splitting them by projects while still using the benefits of the Issue dimension Epic hierarchy, you can define an Issue dimension Epic hierarchy without the project level.
If you are using Jira Software (formerly Jira Agile) epics (and have imported the Epic Link custom field), you do not need to define a new custom field; just define hierarchy levels based on the existing levels. Add the following advanced settings:
[[jira.issue_hierarchies]]
name = "Epics without project"
all_member_name = "All Issues by epics without project"
levels = [
  {name="Epic",key_column="epic_key"},
  {name="Parent",key_column="epic_parent_key"},
  {name="Sub-task",key_column="subtask_key"}
]
Portfolio for Jira hierarchy import (one level above Jira epics)
Portfolio for Jira custom field data import is available as a default import option starting from eazyBI version 4.6 (for Jira Server) and in eazyBI for Jira Cloud (limited support).

Use the solution below only for earlier eazyBI versions.
When the Portfolio for Jira solution is used, a new summary level (or several) can be introduced above epics in the Portfolio structure, for example, Initiative or Program. To analyze data by this level, you may want to import the corresponding issue type as a new hierarchical level to roll up values from lower level issues.
Let's say we have a new level Initiative (based on issue type Initiative) above epics in Portfolio, and the full structure is Initiative - Epic - Story - Sub-task. To import this structure as a new hierarchy in eazyBI Issue dimension, do the following steps.
1. Get a custom field with the Initiative issue key.

To import a new hierarchy level, you should use the key of the parent issue. When using Portfolio, parent issue information (in this case, the Initiative) is stored in the custom field "Parent link". Examine this field in the Epic issue screen in Jira.

- If the "Parent link" field contains only the Initiative key (without the initiative name), you can use the "Parent link" custom field for the next step.
- If the field contains the Initiative key together with the initiative name or other information, create a new scripted custom field "Initiative" to extract the parent issue key from the Parent Link. For example, you can use a Jira Misc Custom Fields add-on calculated text custom field with the following code:
<!-- @@Formula:
if (issue.get("customfield_NNNNN") != null) {
  return issue.get("customfield_NNNNN").getKey();
} else {
  return "";
}
-->

2. Define the custom field and the new hierarchy in eazyBI advanced settings:
#Precalculated custom field with Parent link key or Jira Portfolio Parent link field
[jira.customfield_AAAAA]
data_type = "string"
check_calculated_value = true
update_from_issue_key = "epic_key"

[[jira.issue_hierarchies]]
name = "Initiative"
all_member_name = "All Issues by Initiative"
levels = [
  {name="Initiative",key_column="customfield_AAAAA",issue_type="Initiative"},
  {name="Epic",key_column="epic_key"},
  {name="Parent",key_column="epic_parent_key"},
  {name="Sub-task",key_column="subtask_key"}
]
Instead of customfield_AAAAA, use the ID of the custom field containing the Initiative key (either the Portfolio custom field "Parent link" or the scripted custom field "Initiative"). Also, use the correct issue_type.
You can use the values in the name sections according to your needs.
3. Import the new hierarchy in eazyBI.
Once both the custom field and hierarchy definitions are added to eazyBI advanced settings, select the custom field "Parent Link" or "Initiative" as a property for import and run an import. The import will create a new Issue dimension hierarchy in your account.
Hierarchy with several levels above standard
When there are several Portfolio for Jira summary levels above epics (or above the parent issue if epics are not used) in the Portfolio structure, or this multi-level hierarchy is maintained by issue links, you may want to import the corresponding issue types as new hierarchical levels to roll up values from lower level issues.
General approach
- In general, to create a new hierarchy, each lower level issue should know all it's parent and grandparents issue keys (for instance, a sub-task should "know" its task, theme, initiative, and programm issue keys).
Most usually, each issue has only its parent's key, and then the value should be inherited to lower level issues during data import from Jira to eazyBI. This chain must be defined in advanced settings using the update_from_issue_key setting (see below).
- To create a hierarchy, each issue type should be included only in one hierarchy level. To ensure that, parent issue keys from different issue types (programms, initiatives, themes) should be retrieved in separate custom fields: each custom field would be used to define a particular level in the hierarchy.
Information about the parent issue type is not stored in the child issue data; therefore, it should already be precalculated in Jira using some scripted field, for instance, with the Jira Misc Custom Fields add-on.
- Instead of the Portfolio Parent link, you may use other ways to store an issue's parent issue key: specific custom fields or Jira issue links. The general principles stay the same: define a separate custom field for a parent link from a specific issue type (or types) that is inherited to the lower level issues. Inheritance could be maintained in Jira by scripted fields or during data import to eazyBI according to the definition in advanced settings.
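The inheritance step described above can be sketched as follows (plain Python, illustrative only, not eazyBI internals; the issue keys and field name are made up): chained update_from_issue_key settings effectively copy an ancestor's key down to each child issue during import.

```python
# Hypothetical issue data: only the Initiative knows its own key;
# every other issue only knows its direct parent.
issues = {
    "INIT-1":  {"parent": None,      "initiative": "INIT-1"},
    "EPIC-1":  {"parent": "INIT-1",  "initiative": None},
    "STORY-1": {"parent": "EPIC-1",  "initiative": None},
    "SUB-1":   {"parent": "STORY-1", "initiative": None},
}

changed = True
while changed:  # repeat until the key has trickled down every chain
    changed = False
    for issue in issues.values():
        if issue["initiative"] is None and issue["parent"] is not None:
            inherited = issues[issue["parent"]]["initiative"]
            if inherited is not None:
                issue["initiative"] = inherited
                changed = True

print(issues["SUB-1"]["initiative"])  # INIT-1
```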
Importing more than one hierarchical level above the standard issue hierarchies might be complex. Please contact support@eazybi.com if you need assistance with that!
Example with Jira Portfolio
Let's say, you have a hierarchy Initiative - Programm - Theme - Epic - Story - Sub-task, where all levels above Epic are Portfolio levels.
1. Get custom fields with parent issue keys.
Create three custom fields to retrieve the Initiative, Programm, and Theme issue keys. In Jira, each issue would have a value only in one field (according to its parent issue type); values in the other custom fields would be calculated during data import.
For example, you can use a Jira Misc Custom Fields add-on calculated text custom field. The code for all three custom fields is almost the same; only use the correct parent issue type name in the issuetypename.equals function.

The following code would retrieve the "Programm" issue type parent issue key from the Portfolio Parent link field:
<!-- @@Formula:
import com.atlassian.jira.component.ComponentAccessor;
import org.apache.commons.lang.StringUtils;

if (issue.get("customfield_NNNNN") != null) {  // parent link custom field ID
  parentlinkID = issue.get("customfield_NNNNN").getId();
  parentlinkissue = ComponentAccessor.getIssueManager().getIssueObject(parentlinkID);
  issuetypename = parentlinkissue.getIssueType().getName();
  if (issuetypename.equals("Programm"))
    return parentlinkissue.getKey();
  else
    return null;
}
-->
#JIRA MISC field with Parent Initiative key [jira.customfield_AAAAA] data_type = "string" dimension = true check_calculated_value = true update_from_issue_key = "customfield_BBBBB" #JIRA MISC field with Parent Programm key [jira.customfield_BBBBB] data_type = "string" dimension = true check_calculated_value = true update_from_issue_key = "customfield_CCCCC" #JIRA MISC field with Parent Theme key [jira.customfield_CCCCC] data_type = "string" dimension = true check_calculated_value = true update_from_issue_key = "epic_key" #hierarchy definition [[jira.issue_hierarchies]] name = "All Issues by Initiatives" all_member_name = "All Issues" levels = [ {name="Initiative",key_column="customfield_AAAAA",issue_type="Initiative"}, {name="Programm",key_column="customfield_BBBBB",issue_type="Programm"}, {name="Theme",key_column="customfield_CCCCC",issue_type="Theme"}, {name="Epic",key_column="epic_key"}, {name="Task",key_column="epic_parent_key"}, {name="Sub-task",key_column="subtask_key"} ]
Instead of
customfield_AAAAA,
customfield_BBBBB, and
customfield_CCCCC, use corresponding IDs of the custom fields containing Parent Initiative, Programm, and Theme issue keys. Also, use correct
issue_type.
You can use values in
name sections according your needs.
3. Import the new hierarchy in eazyBI.
Once the definitions of custom fields and hierarchy are added to eazyBI advanced settings, in eazyBI import setting screen, select all three custom fields as properties for import and run data import. Import will create a new Issue dimension hierarchy in your account.
Troubleshooting
There might be cases when issue configuration parameters seem not working as expected and debugging is needed. Here follow hints which might help faster find out the reasons why hierarchy does not show up.
- If your hierarchy is built on issue links, check if you have correctly used inward_link or outward_link.
The general rule is that you should build the hierarchy with the link from the child perspective. For instance, if you need to link Epics to a higher level Feature you could have defined link as shown in the image. You should use the Inward end of the link, in this case, to build the hierarchy level above the Epic since it is the way how the Epic is linked to the Feature.
Incorrect issue link definition is one of the causes for failing issue hierarchy. How to troubleshoot the most common problems with issue link import, read in Import issue links troubleshooting article.
- Create a test report to check whether parent issue keys are imported correctly for each issue: use issues in rows and properties containing issue parent key in columns. In that way you ensure that parent key import is correct.
Once the properties contain correct keys of the parent issues of intended hierarchy you can proceed with the hierarchy configuration parameters.
- Whenever you change the advanced settings for a custom field or hierarchy, it is recommended to either do a full data reimport in the account, or to clear the previously loaded custom field configuration (run the import with the custom field un-checked) and load it again (run the import with the custom field checked).
The hierarachy might not show up also when other configuration parameters are wrong. Please check, if you have correct configuration parameters: update_from_issue_key for the custom field, key_column for the hierarchy, and Issue type.
Rules for defining update_from_issue_key correctly:
when define parent custom field, link it to the field storing issue key of the next lower issue level
if the next lower level is Epics, use
update_from_issue_key = "epic_key"
if the next lower level is a standard issue level (parent), set
update_from_issue_key = "parent_issue_key"
- Rules for defining key_column correctly:
- If the hierarchy is defined upon epic hierachy
{name="Epic",key_column="epic_key"}, {name="Parent",key_column="epic_parent_key"}, {name="Sub-task",key_column="subtask_key"}
- if the hierarchy is defined upon sub-task hierarchy
{name="Task",key_column="subtask_parent_key"}, {name="Sub-task",key_column="subtask_key"}
Use check_calculated_value setting if custom fields you use for creating the new hierarchy are scripted fields or Jira Portfolio Parent link field. It would guaranty that the changed link between upper level issues would be changed also in the inheritance chain. | https://docs.eazybi.com/eazybijira/data-import/advanced-data-import-options/additional-issue-hierarchies | 2019-06-16T04:39:37 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
At StepChange Debt Charity, we help people with a variety of different types of debt, including people who have fallen behind on their household bills. Of everyone we helped in 2017, 2 in 5 were behind on at least one of these. This included 38,135 people with arrears on their electricity bill and 22,186 on their gas.
In our new briefing Behind on the Basics, we looked at which households were most at risk of falling behind on their household bills, and why. However, given the ongoing focus on addressing fuel poverty and the welcome steps taken to improve affordability in the energy sector, we wanted to take a closer look at what was happening when it came to these bills in particular.
Electricity and gas arrears in 2017
When it comes to StepChange clients, we’ve seen a small rise in the proportion who were behind on their electricity (up from 13.3% in 2013 to 14.3% in 2017).
In contrast, the proportion of clients in arrears on their gas bill has decreased slightly (from 12.9% in 2013 to 11.5% in 2017).
We have also seen a gradual increase in the average amount of arrears that our clients have.
Which households were most at risk of falling behind?
When we examined the rate of gas and electricity arrears by different household characteristics - such as age, family composition, income and housing tenure – we found that there were not hugely significant differences between the proportions of clients falling into arrears. This suggests that the focus should continue to be on affordability across the board, to help any customers struggling to keep up.
However, there were some small differences which suggested that young people, single parents and those on low and moderate incomes may be more likely to fall into arrears than other groups. Ensuring that support reaches these groups is therefore key.
We also found that people who had an additional vulnerability on top of their financial difficulties – such as a mental health problem, long-term health condition like cancer or a terminal illness – were more likely to be falling behind. A continued focus on customers in vulnerable circumstances is therefore vital, and that’s why we so warmly welcome the independently chaired Commission for Customers in Vulnerable Circumstances.
Preventing people from falling behind on the basics
The experiences of StepChange clients set out in Behind on the Basics reveal that many are still struggling to get by when it comes to essential household costs. Whilst the incidence of energy arrears may be lower than for some other bills, we are ambitious that there is still more that can be done to help people keep up with payments.
Initiatives such as the Warm Home Discount and extension of the vulnerable customer safeguard tariff are important steps, and we also know that many providers are working hard to help people in financial difficulty.
As ever, it will be important that providers continue to show the appropriate forbearance to customers who fall behind, and take proactive steps to identify and (if possible) reduce the bills of those struggling to pay. The energy industry have demonstrated what a difference they can make to customers in financial hardship, and the challenge now is to keep growing this support so it reaches everyone who needs it. We look forward to continuing to work with the industry and others to achieve this.
Grace Brownfield, Senior Public Policy Advocate, StepChange Debt Charity | https://docs.energy-uk.org.uk/media-and-campaigns/energy-uk-blogs/6725-a-spotlight-on-energy-arrears-who-is-at-risk-of-falling-behind.html | 2019-06-16T05:26:31 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.energy-uk.org.uk |
Repair-SPOSite
Syntax
Repair-SPOSite [-Confirm] -Identity <SpoSitePipeBind> [-RuleId <Guid>] [-RunAlways] [-WhatIf] [<CommonParameters>]
Description.
For permissions and the most current information about Windows PowerShell for SharePoint Online, see the online documentation at ().
Examples
-----------------------EXAMPLE 1-----------------------------
Repair-SPOSite
This example runs all the site collection health checks in repair mode on the site collection.
-----------------------EXAMPLE 2-----------------------------
Repair-SPOSite -RuleID "ee967197-ccbe-4c00-88e4-e6fab81145e1"
This example runs the Missing Galleries Check rule in repair mode on the site collection.
Parameters
Prompts you for confirmation before running the cmdlet.
Specifies the SharePoint Online site collection on which to run the repairs.
Specifies a health check rule to run.
Displays a message that explains the effect of the command instead of executing the command.
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Related Links
Comentarios
Envíe sus comentarios sobre:
Cargando comentarios... | https://docs.microsoft.com/es-es/powershell/module/sharepoint-online/Repair-SPOSite?view=sharepoint-ps | 2019-06-16T04:46:40 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
dotTEST DTP Engine can run on either a local or a network license. The license type can be configured in the .
properties configuration file in the
INSTALL_DIR (or another location; see Configuration Overview for details).
Network License
There are two types of network licenses:
dtp: This license is stored in DTP. Your DTP license limits analysis to the number of files specified in your licensing agreement. This is the default type when
license.use_networkis set to
true.
ls: This is a "floating" or "machine-locked" license that limits usage to a specified number of machines. This type of license is stored in License Server.. | https://docs.parasoft.com/display/DOTTEST1033/Setting+the+License | 2019-06-16T05:36:36 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.parasoft.com |
Connect a device to Pybytes by flashing Pybytes library
In this section, we will explain to you how to connect your device to Pybytes by flashing the Pybytes library. Use this, if you want to have full control over the Pybytes library on your device.
Pybytes firmware already contains Pybytes library. That means that you can add your device quickly without the need of flashing Pybytes library.
Step 1: Flash stable firmware to your device with Pycom firmware updater tool
- Open Pycom firmware updater tool
- Select a stable firmware
- Click on continue
Here's more information about firmware updates.
Step 2: Download your Pybytes Library
You can download Pybytes library at the device's settings page:
Navigate to your device in Pybytes;
Click on the settings tab;
Click on the Download button at Pybytes library section;
Step 3. Flash your device with Pymakr
In case you haven't installed Pymakr plugin, follow these instructions. We recommend to install Pymakr in Atom.
- Connect your device to the computer with a USB cable.
- Open zip archive of Pybytes library and extract a containing folder.
- Check your
flash/pybytes_config.jsonfile. It should be pre-filled with your Pybytes credentials (deviceToken, WiFi credentials, ...)
- Open Pybytes library folder as a project folder in Atom.
- Click on the Connect button in Pymakr. Pymakr should connect to your device.
Upload code to your device by clicking on the Upload button in Pymakr.
After all the Pybytes library files are uploaded to your device, the device will restart and connect to Pybytes. | https://docs.pycom.io/pybytes/connect/flash.html | 2019-06-16T05:41:54 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['../../.gitbook/assets/pybytes/flash-pybytes-library/settingsTab.png',
None], dtype=object) ] | docs.pycom.io |
7.3
Reconstruct-Template
Source code:.
Provides ~list/ctx and ?list/ctx as a pattern-expander and a template-metafunction. These two go together so that if x is a syntax-list:
You can think of these as similar to (e ...) as a syntax-pattern or syntax-template, where ctx saves the lexical context, source location, and syntax properties of the parens, and transfers them over to the result syntax object.
A syntax-parse pattern that matches a syntax-list like (pattern ...). However, it also binds the ctx-id as a pattern variable that saves the lexical-context information, source-location, and syntax-properties of the parentheses.
Examples:
A syntax template form that constructs a syntax-list like (template ...). However, it attaches the information saved in the ctx-id onto the parentheses of the new syntax object, including its lexical-context, source-location, and syntax properties.
Examples:
The main intended use of ~list/ctx and ?list/ctx is for reconstructing syntax objects when making expanders for new core-form languages. This purpose is similar to syntax/loc/props from Alexis King’s blog post Reimplementing Hackett’s type language: expanding to custom core forms in Racket.
The main advantage of ~list/ctx and ?list/ctx over syntax/loc or syntax/loc/props is that they work even for stx-list templates nested deeply within a template, as well as for stx-list templates under ellipses. For example, if x is a well-formed let expression: | https://docs.racket-lang.org/reconstruct-template-list-ctx/index.html | 2019-06-16T05:14:00 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.racket-lang.org |
TINYpulse gives you the flexibility to use a variety of popular browsers, however, only certain versions are supported. Make sure you're on the most current browser versions possible for the best TINYpulse experience.
We fully support:
- Internet Explorer 11
- Chrome 51 and above
- Safari
- Firefox 47 and above
- Edge 13 and above
All other browsers are unsupported and service is not guaranteed. Users (both from the admin and employee side) may experience an unappealing user interface and/or decreased functionality by using an unsupported browser. Everyone on an unsupported browser will be automatically redirected to the old Engage user interface with a permanent banner to update the browser for additional functionality.
So update your browser already so you can take advantage of the latest and greatest TINYpulse functionality! You're definitely missing out (and losing money) if you're on an older version so update today or talk to the IT administrator at your company for assistance.
Please sign in to leave a comment. | https://docs.tinypulse.com/hc/en-us/articles/115004716574-Supported-browsers | 2019-06-16T04:31:15 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.tinypulse.com |
{"_id":"599da9e4d1222a00250be675e85ac41c8ba0e00259a44","githubsync":"","__v":0,"parentDoc":null,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-08-23T16:14:28.550Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":16,"body":"##Overview\nLearn how Cavatica uses [AWS EC2 Spot Instances]() and how this strategy could help you reduce the cost of running your tasks.\n\n##About Spot Instances\nCavatica uses two types of Amazon EC2 pricing for instances: On-Demand and Spot. On-Demand instances are purchased at a fixed rate per hour, while the price of Spot Instances varies according to supply and demand. \n\nSpot Instances allow bidding on spare AWS EC2 computing capacity, meaning you can use the instance as long as your bidding price is higher than the market price.\n\nBecause the user always pays the current market price for the Spot Instance and Spot Instances cost a fraction of the price for On-Demand instances, using Spot Instances can significantly lower the cost of running your analysis.\n\n##How Cavatica uses Spot Instances\nCavatica uses Spot instances by default.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"AWS-EC2-4.png\",\n 433,\n 260,\n \"#373737\"\n ],\n \"caption\": \"Types of AWS EC2 instances used on Cavatica\"\n }\n ]\n}\n[/block]\nWhen using Spot Instances, you pay the market price of the Spot Instance, as long as this is lower than your bid price. The strategy is to bid the On-Demand instance price for Spot Instances. This ensures you are never charged more than the On-Demand instance price for using a Spot Instance.\n\n##Analysis costs when using Spot Instances\nUsing Spot Instances can significantly reduce the cost of your analysis. Running an app using Spot Instances can cost as little as a quarter of the On-Demand price. 
\n\nThe [AWS Spot Instance bid advisor]() allows you to see projected savings and a history of outbidding frequency for specific Spot Instances to help you in your decision. Some instances get outbid more frequently than others and the longer a job runs for, the more likely it is to be terminated. \n\nAWS EC2 will terminate your Spot Instance if the bid price becomes lower than the market price for that instance. In this case interrupted jobs need to be restarted, so your analysis will taken longer to complete and the analysis cost might increase as well. See the section on Spot Instance termination to learn how this could impact the cost of your analysis.\n\n###The impact of Spot Instance termination\nBecause there is a limited number of Spot Instances, the market price of Spot Instances also increases when demand increases. If the market price becomes higher than what we bid for the Spot Instance, our instance gets terminated. You will not be charged for the partial hour when the instance was terminated.\n\nThe job(s) running on the instance at the time of termination will be interrupted and have to be [run again from the beginning](). The jobs will be restarted on an equivalent On-Demand instance to minimise time wasted in completing your task.\n\nRestarting jobs on another instance will inevitably prolong task execution time and add to the cost of running that job. The cost of re-running is greatest for long jobs (in the order of hours) that get interrupted close to completion. The possibility that a Spot Instance is terminated is why they are not recommended for running long, time-critical jobs.","excerpt":"","slug":"about-spot-instances","type":"basic","title":"About Spot Instances"} | http://docs.cavatica.org/docs/about-spot-instances | 2019-06-16T04:39:06 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.cavatica.org |
Synchronize Studio Database With Repository¶
Sometimes the Git repository and the Studio database can become out of sync, for example, if you import content into the repository from another environment. This is fixed by synchronizing the Studio database with the repository which is done automatically by Studio.
The time it takes to finish synchronizing from the repository depends on how much data needs to be synced. To find out when the system has finished synchronizing from the repository, tail the catalina log and look for the message that says:
Done syncing database with repository for site:{site_name}. Below is an example message in the log indicating it is done syncing from the repository:
[INFO] 2017-11-30 11:59:36,111 [studioSchedulerFactoryBean_Worker-4] [site.SiteServiceImpl] | Syncing database with repository for site: myawesomesite fromCommitId = deffff55157664a0895f495f472c73fbaab50f02 [INFO] 2017-11-30 11:59:36,172 [studioSchedulerFactoryBean_Worker-4] [site.SiteServiceImpl] | Done syncing database with repository for site: myawesomesite fromCommitId = deffff55157664a0895f495f472c73fbaab50f02 with a final result of: true | https://docs.craftercms.org/en/3.0/system-administrators/activities/sync-studio-database-with-repo.html | 2019-06-16T05:50:31 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.craftercms.org |
Glossary:Custom Statistics
Also known as Formula Statistics. Mathematical expressions that can involve:
- One or more predefined statistics as variables.
- Constants (but not constant expressions).
- Basic arithmetical operators (such as +, –, x, /, and %).
Glossary
This page was last modified on November 18, 2013, at 18:00. | https://docs.genesys.com/Glossary:Custom_Statistics | 2019-06-16T04:44:57 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.genesys.com |
Safely removes a Ghost installation and all related configuration & data. prompts appear:
WARNING: Running this command will delete all of your themes, images, data, and any files related to this Ghost instance!` > There is no going back! > Are you sure you want to do this? Type Y to confirm, or N to abort.
The following tasks are performed:
- stop ghost
- disable systemd if necessary
- remove the
contentfolder
- remove any related systemd or nginx configuration
- remove the remaining files inside the install folder
⚠ Running
ghost uninstall --no-promptor
ghost uninstall --forcewill skip the warning and remove Ghost without a prompt. | https://docs.ghost.org/api/ghost-cli/uninstall/ | 2019-06-16T04:59:18 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.ghost.org |
Ghost comes with a built-in integration for Google AMP which transforms all of the content on your site into a lightening-fast AMP version ⚡
The open source AMP project is led by Google, and provides a way for publishers to generate lightweight versions of their content for a faster and smoother user experience.
In Ghost, the AMP integration utilises the AMP framework inside a single Handlebars template:
amp.hbs. This template transforms each post on your site into an AMP version which can be accessed by adding
/amp to the end of any URL.
Enabling AMP in Ghost
The AMP integration in Ghost is enabled by default, but if you prefer not to use the feature then you can turn it off in the settings within Ghost Admin. When enabled, all posts on your publication will automatically have an AMP page, with a canonical link to ensure the page is correctly identified by the search engines and no duplicate content issues are found.
Customise the template with your own styles
AMP in Ghost can be styled to suit your brand and theme too! Since the Ghost theme layer is entirely customisable, that means you can also customise the way your AMP pages are rendered.
So if you would like to add some styling, branding or even monetise your AMP pages with advertisements, then you can with a few lines of code in a single Handlebars template.
Read more about customising your AMP pages in Ghost in this handy tutorial.
Validate your template in the console
There is an effective way to validate your AMP pages as you go directly in the console by adding
#development=1 to any AMP URL.
This is a useful way to ensure any customisations you make to your AMP template are valid and your AMP content is rendering correctly. | https://docs.ghost.org/integrations/google-amp/ | 2019-06-16T04:57:51 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://docs.ghost.io/content/images/2018/11/image-14.png', None],
dtype=object) ] | docs.ghost.org |
Module
Stdlib.Int64
val div : int64 -> int64 -> int64
Integer division. Raise
Division_by_zeroif the second argument is zero. This division rounds the real quotient of its arguments towards zero, as specified for
Pervasives.(/).
val rem : int64 -> int64 -> int64
Integer remainder. If
yis not zero, the result of
Int64.rem x ysatisfies the following property:
x = Int64.add (Int64.mul (Int64.div x y) y) (Int64.rem x y). If
y = 0,
Int64.rem x yraises
Division_by_zero.
val shift_left : int64 -> int -> int64
Int64.shift_left x yshifts
xto the left by
ybits. The result is unspecified if
y < 0or
y >= 64.
val.
val.
val to_int : int64 -> int
Convert the given 64-bit integer (type.
val of_float : float -> int64
Convert the given floating-point number to a 64-bit integer, discarding the fractional part (truncate towards 0). The result of the conversion is undefined if, after truncation, the number is outside the range [
Int64.min_int,
Int64.max_int].
val of_int32 : int32 -> int64
Convert the given 32-bit integer (type
int32) to a 64-bit integer (type
int64).
val to_int32 : int64 -> int32
Convert the given 64-bit integer (type
int64) to a 32-bit integer (type
int32). The 64-bit integer is taken modulo 232, i.e. the top 32 bits are lost during the conversion.
val of_nativeint : nativeint -> int64
Convert the given native integer (type
nativeint) to a 64-bit integer (type
int64).
val to_nativeint : int64 -> nativeint
Convert the given 64-bit integer (type
int64) to a native integer. On 32-bit platforms, the 64-bit integer is taken modulo 232. On 64-bit platforms, the conversion is exact.
val of_string : string -> int64
Convert the given string to a 64-bit integer. The string is read in decimal (by default, or if the string begins with
0u) or in hexadecimal, octal or binary if the string begins with
0x,
0oor
0brespectively.
The
0uprefix reads the input as an unsigned integer in the range
[0, 2*Int64.max_int+1]. If the input exceeds
Int64.max_intit is converted to the signed integer
Int64.min_int + input - Int64.max_int - 1.
The
_(underscore) character can appear anywhere in the string and is ignored. Raise
Failure "Int64.of_string"if the given string is not a valid representation of an integer, or if the integer represented exceeds the range of integers representable in type
int64.
val of_string_opt : string -> int64 option
Same as
of_string, but return
Noneinstead of raising.
- since
- 4.05
val bits_of_float : float -> int64
Return the internal representation of the given float according to the IEEE 754 floating-point 'double format' bit layout. Bit 63 of the result represents the sign of the float; bits 62 to 52 represent the (biased) exponent; bits 51 to 0 represent the mantissa.
val float_of_bits : int64 -> float
Return the floating-point number whose internal representation, according to the IEEE 754 floating-point 'double format' bit layout, is the given
int64.
val compare : t -> t -> int
The comparison function for 64-bit integers, with the same specification as
Pervasives.compare. Along with the type
t, this function
compareallows the module
Int64to be passed as argument to the functors
Set.Makeand
Map.Make. | https://docs.mirage.io/ocaml/Stdlib/Int64/index.html | 2019-06-16T05:46:17 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.mirage.io |
Splunk Enterprise version 5.0 reached its End of Life on December 1, 2017. Please see the migration information.
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
regmon-filters.conf
The following are the spec and example files for regmon-filters.conf.
regmon-filters.conf.spec
# Version 5.0.5 # # This file contains potential attribute/value pairs to use when # configuring Windows Registry monitoring. The regmon-filters.conf file # contains the regular expressions you create to refine and filter the # registry key paths you want Splunk to monitor. # #>] * The name of the filter being defined. proc = <regular expression> * If set, is matched against the process name which performed the Registry access. * Events generated by processes that do not match the regular expression are filtered out. * Events generated by processes that match the regular expression are passed through. * There is no default. hive = <regular expression> * If set, is matched against the registry key which was accessed. * Events generated by processes that do not match the regular expression are filtered out. * Events generated by processes that match the regular expression are passed through. * There is no default. type = <string> * A regular expression that specifies the type(s) of Registry event(s) that you want Splunk to monitor. * There is no default. baseline = [0|1] * Specifies whether or not to establish a baseline value for the Registry keys that this filter defines. * 1 to establish a baseline, 0 not to establish one. * Defaults to 0 (do not establish a baseline). baseline_interval = <integer> * Splunk will only ever take a registry baseline only on registry monitoring startup. * On startup, a baseline will be taken if registry monitoring was not running for baseline_interval seconds. For example, if splunk was down for some days, or if registry monitoring was disabled for some days. * Defaults to 86400 (1 day). disabled = [0|1] * Specifies whether the input is enabled or not. * 1 to disable the input, 0 to enable it. * Defaults to 0 (enabled). index = <string> * Specifies the index that this input should send the data to. * This attribute is optional. * If no value is present, defaults to the default index.
regmon-filters.conf.example
# Version 5.0.5 # # This file contains example filters for use by the Splunk Registry # monitor scripted input. # # specification outlined in # regmon-filters.conf.spec. [default] disabled = 1 baseline = 0 baseline_interval = 86400 # Monitor all registry keys under the HKEY_CURRENT_USER Registry hive for # "set," "create," "delete," and "rename" events created by all processes. # Store the events in the "regmon" splunk index [User keys] proc = c:\\.* hive = HKU\\.* type = set|create|delete|rename index = regmon
This documentation applies to the following versions of Splunk® Enterprise: 5.0.5
Is there a way of excluding a process with the 'proc' setting? | https://docs.splunk.com/Documentation/Splunk/5.0.5/admin/Regmon-filtersconf | 2019-06-16T05:03:53 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
.
Cheers Breakdown
The Cheers Breakdown section helps you understand the overall status of your recognition programs and quickly see which segments are doing well and which need more attention to improve recognition and appreciation.
The color coding is your first indication of status.
- Green = Above the Company score
- Red = Below the Company score
Look to your green groups to understand what's going well in terms of recognition since they're leading your company. Then deep dive into the red groups to find out what's going wrong to identify opportunities for improvement.
- Recognition Pulse breakdown: This calculation is the average of all pulses belonging to the Recognition question category. Segments are ranked from high to low based on their average scores so you easily know who feels recognition is sufficient and who doesn't.
- Cheers breakdowns: These three columns show you the average number of Cheers received during the past 30 days. This will tell you which groups are actually well recognized and which aren't. Since overall happiness is linked to recognition, be sure to support any red groups in promoting recognition efforts to avoid any unwanted attrition down the line.
Availability
Full access to the Cheers dashboard is available by default to all Engage Super Admins and Admins and they automatically see the Cheers menu item in their top navigation. It's also available to Engage Segment Admins and Viewers but insights are filtered to only show information from the segments they've been assigned to.
Please sign in to leave a comment. | https://docs.tinypulse.com/hc/en-us/articles/360000374494-Get-recognition-insights | 2019-06-16T04:52:21 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/hc/article_attachments/360000825694/Screen_Shot_2018-02-07_at_2.37.57_PM.png',
'Screen_Shot_2018-02-07_at_2.37.57_PM.png'], dtype=object)
array(['/hc/article_attachments/360000411853/Screen_Shot_2018-01-26_at_11.11.35_AM.png',
'Screen_Shot_2018-01-26_at_11.11.35_AM.png'], dtype=object)
array(['/hc/article_attachments/360000411833/Screen_Shot_2018-01-26_at_11.05.54_AM.png',
'Screen_Shot_2018-01-26_at_11.05.54_AM.png'], dtype=object)
array(['/hc/article_attachments/360013014734/Screen_Shot_2018-09-18_at_14.03.58.png',
'Screen_Shot_2018-09-18_at_14.03.58.png'], dtype=object)
array(['/hc/article_attachments/360000825734/Screen_Shot_2018-02-07_at_2.39.11_PM.png',
'Screen_Shot_2018-02-07_at_2.39.11_PM.png'], dtype=object) ] | docs.tinypulse.com |
XenApp and XenDesktop 7.13
XenApp and XenDesktop 7.13 (PDF Download)
Documentation for this product version is provided as a PDF because it is not the latest version. For the most recently updated content, see the Citrix Virtual Apps and Desktops current release documentation. That documentation includes instructions for upgrading from earlier versions.
Note:
Links to external websites found in the PDF above take you to the correct pages, but links to other sections within the PDF are no longer usable. | https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-13.html | 2019-06-16T06:35:11 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.citrix.com |
- The matching expenses were split from a single expense
- All matching expenses were imported from a credit card
- Matching email receipts sent to receipts@expensify.com were received with different time-stamps.
Users will have the ability to resolve these duplicates by either deleting the duplicated transactions, merging them, or ignoring them (if they are legitimately separate expenses of the same date and amount).
From the user perspective:
A violation will be shown once a potential duplicate hits a report:
The user then has the ability to click into the red exclamation point to open the expense and resolve:
Once the user is ready to resolve the duplicates, they will have the ability to either trash the duplicate, merge them by selecting the appropriate fields from both expenses, or ignore if they are not duplicates:
If the user chooses to merge the expenses, they will be able to select which specific fields they want assigned to the finalized expense:
On the mobile app, potential duplicates will appear in red on a report:
Click into a violating expense to Ignore or Merge Expense on the app.
If you'd like to remove the expense from the report instead, tap < Expense and then swipe left to Remove the expense. This will return the expense to the Expenses page, but remove it from the report.
From the approver perspective:
A Potential duplicated expense violation will appear similar to the violation the user received. The approver will have the same ability to click into the red exclamation point to click View > Resolve:
Or the approver can use Guided Review to audit any potential duplicates:
Once the approver has resolved all potential duplicates, they can safely Approve, violation free!
Troubleshooting and FAQ
Why does Concierge keep deleting my receipts? They aren't duplicates!
If two expenses are SmartScanned via the app on the same day for the same amount, Concierge will flag a duplicate unless the expenses were split from a single expense, the expenses were imported from a credit card, or matching email receipts sent to receipts@expensify.com were received with different timestamps. Keep in mind that scanning the receipt again will trigger the same duplicate flagging.
In the event that you need to recover this receipt, you have the ability to undelete it. To do this, follow the flow below:
- Log into your web account and navigate to the Expenses page.
- Use the Filters to search for deleted expenses by selecting the Deleted filter.
- Select the checkbox next to the expenses you want to restore.
- Click Undelete at the top of the page.
Will Duplicate Detection flag the same receipt, even if I accidentally SmartScan it a year later? Duplicate Detection will look at all expenses in a single user’s Expensify account regardless of the date it is uploaded.
For example, if Jane added a 'Starbucks' receipt last April for $3.50 and it was Approved in May, when Jane accidentally uploads it again in October to a new Open report, it will be flagged as a possible duplicate for both Jane and their approver. | https://docs.expensify.com/articles/8283-duplicate-detection | 2019-06-16T05:02:39 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/108159127/344c101f6325e8a6f99dfe90/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/108159957/187b15e73a6087649068c3bf/image.png',
None], dtype=object) ] | docs.expensify.com |
Web
Response.
Web Headers Response.
Web Headers Response.
Web Headers Response.
Property
Headers
Definition
When overridden in a derived class, gets a collection of header name-value pairs associated with this request.
public: virtual property System::Net::WebHeaderCollection ^ Headers { System::Net::WebHeaderCollection ^ get(); };
public virtual System.Net.WebHeaderCollection Headers { get; }
member this.Headers : System.Net.WebHeaderCollection
Public Overridable ReadOnly Property Headers As WebHeaderCollection
Property Value
An instance of the WebHeaderCollection class that contains header values associated with this response.
Exceptions
Any attempt is made to get or set the property, when the property is not overridden in a descendant class.
Examples
The following example displays all of the header name-value pairs returned in the WebResponse.
// its it's Dim myWebRequest As WebRequest = WebRequest.Create("") ' Send the 'WebRequest' and wait for response. Dim myWebResponse As WebResponse = myWebRequest.GetResponse() ' Display all the Headers present in the response received from the URl. Console.WriteLine(ControlChars.Cr + "The following headers were received in the response") ' Headers property is a 'WebHeaderCollection'. Use it's properties to traverse the collection and display each header Dim i As Integer While i < myWebResponse.Headers.Count Console.WriteLine(ControlChars.Cr + "Header Name:{0}, Header value :{1}", myWebResponse.Headers.Keys(i), myWebResponse.Headers(i)) i = i + 1 End While ' Release resources of response object. myWebResponse.Close()
Remarks
The Headers property contains the name-value header pairs returned in the response.
Note. | https://docs.microsoft.com/en-us/dotnet/api/system.net.webresponse.headers?view=netframework-4.7.2 | 2019-06-16T04:42:18 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
ChartCollection: ItemAt
Gets a chart based on its position in the collection.
Permissions
One of the following permissions is required to call this API. To learn more, including how to choose permissions, see Permissions.
HTTP request
POST /workbook/worksheets/{id|name}/charts/itemAt
Request headers
Request body
In the request body, provide a JSON object with the following parameters.
Response
If successful, this method returns
200 OK response code and WorkbookChart object in the response body.
Example
Here is an example of how to call this API.
Request
Here is an example of the request.
POST{id}/workbook/worksheets/{id|name}/charts/itemAt Content-type: application/json Content-length: 20 { "index": 8 }
Response }
SDK sample code
const options = { authProvider, }; const client = Client.init(options); const workbookChart = { index: 8 }; let res = await client.api('/me/drive/items/{id}/workbook/worksheets/{id|name}/charts/itemAt') .post({workbookChart : workbookChart});
Read the SDK documentation for details on how to add the SDK to your project and create an authProvider instance.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/graph/api/chartcollection-itemat?view=graph-rest-1.0 | 2019-06-16T05:46:24 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
DirectSelfEnergy¶
- class
DirectSelfEnergy(save_self_energies=None, lambda_min=None, sparse_threshold=None, storage_strategy=None)¶
Self-energy calculator based on the direct method. The direct method uses direct diagonalization to get all the electrode modes.
Usage Examples¶
Define that the self energy on the real contour is calculated with direct diagonalization.
device_algorithm_parameters = DeviceAlgorithmParameters( self_energy_calculator_real=DirectSelfEnergy() )
Examples on how to use the
storage_strategy parameter can be found in the
Usage Examples of RecursionSelfEnergy.
Notes¶
- DirectSelfEnergy uses a diagonalization scheme to find all propagating and decaying modes of the electrodes. The self energy matrix is then determined from the modes [tSLJB99].
- The direct method implementation in QuantumATK is stable towards a sparse electrode Hamiltonian with null spaces. The implementation uses projection operators which removes any null spaces which could be problematic. | https://docs.quantumatk.com/manual/Types/DirectSelfEnergy/DirectSelfEnergy.html | 2019-06-16T04:30:07 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.quantumatk.com |
MeanSquareDisplacement¶
- class
MeanSquareDisplacement(md_trajectory, start_time=None, end_time=None, atom_selection=None, anisotropy=None, time_resolution=None, info_panel=None)¶
Constructor for the MeanSquareDisplacement object.
Usage Examples¶
Load an MDTrajectory, and calculate the mean-square-displacement (MSD) of all aluminum atoms. Estimate the diffusion coefficient from the slope of the MSD-curve, according to \(MSD(t)=6 Dt\):
md_trajectory = nlread('alumina_trajectory.nc')[-1] msd = MeanSquareDisplacement(md_trajectory, atom_selection=Aluminum) # Get the times in ps and the MSD values in Ang**2. t = msd.times().inUnitsOf(ps) msd_data = msd.data().inUnitsOf(Angstrom**2) # Plot the data using pylab. import pylab pylab.plot(t, msd_data, label='MSD of aluminum') pylab.xlabel('t (ps)') pylab.ylabel('MSD(t) (Ang**2)') pylab.legend() pylab.show() # Fit the slope of the MSD to estimate the diffusion coefficient. # If you discover non-linear behavior at small times, discard this initial part in the fit. a = numpy.polyfit(t[5:], msd_data[5:], deg=1) # Calculate the diffusion coefficient in Ang**2/ps. diffusion_coefficient = a[0]/6.0
mean_square_displacement.py
Notes¶
The MeanSquareDisplacement is calculated
Note, that this requires a system which is equilibrated, i.e. its macroscopic properties do not change during the simulation.. | https://docs.quantumatk.com/manual/Types/MeanSquareDisplacement/MeanSquareDisplacement.html | 2019-06-16T05:30:33 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.quantumatk.com |
Deprecation: #65790 - Remove pages.storage_pid and logic¶
See Issue #65790
Description¶
The DB field “pages.storage_pid” and its TCA definition have been moved to the compatibility6 extension as the field and its functionality is discouraged.
Additionally the method
getStorageSiterootPids() within the PHP class
TypoScriptFrontendController has been marked
as deprecated. The method is currently only used if the Frontend Login plugin is used without setting
a specific folder where the fe_users records are stored in.
Impact¶
Any usage of this field in any TypoScript, page or the usage of the method mentioned above in any third-party extension will only work if the compatibility6 extension is installed.
The Frontend Login functionality will throw a deprecation warning if the TypoScript option
plugin.tx_felogin.storagePid (via TypoScript directly or the flexform configuration within the plugin) is not set. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.4/Deprecation-65790-PagesStoragePidDeprecated.html | 2019-06-16T05:55:08 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.typo3.org |
sunny health and fitness rowing machine with water in the tank this weighs approx lbs kg sf rw1205 silver.
Related Post
Cristal Stick Riveted Steel Vinyl Numbers Sunny Exercise Bike Golf Trunk Organizer Camping Cots For Two Star Wars Stuffed Animals Remote Control Toy Trucks Pop Up Tent Kid Tri Fold Sleeping Mats Lifesmart Spas Website Cap Dumbbells Kids Canvas Teepee Freestanding Bike Rack Womens Petite Golf Club Sets | http://top-docs.co/sunny-health-and-fitness-rowing-machine/sunny-health-and-fitness-rowing-machine-with-water-in-the-tank-this-weighs-approx-lbs-kg-sf-rw1205-silver/ | 2019-06-16T04:55:29 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['http://top-docs.co/wp-content/uploads/2018/02/sunny-health-and-fitness-rowing-machine-with-water-in-the-tank-this-weighs-approx-lbs-kg-sf-rw1205-silver.jpg',
'sunny health and fitness rowing machine with water in the tank this weighs approx lbs kg sf rw1205 silver sunny health and fitness rowing machine with water in the tank this weighs approx lbs kg sf rw1205 silver'],
dtype=object) ] | top-docs.co |
Navigating Around Crafter Studio¶
My Sites¶
My Sites is the first screen you will encounter after logging in to Crafter Studio. This screen lists all of the websites you have been granted permission to. From this screen you can navigate to any site’s preview or dashboard.
- You can get back to the My Sites screen by:
-
My Account¶
My Account is where you go to change your personal Crafter Studio settings like language or to change your password.
- To get to My Account:
- Click on your username in the toolbar
- Select Settings in the dropdown
Site Dashboard¶
Each site has a Site Dashboard. To view a site’s dashboard, click on the Crafter CMS logo at the top left of the screen, or click on Dashboard at the top of the Sidebar. This screen is an overview of the workflow for that given site. The site dashboard has different widgets depending on your role.
Each dashboard has a header
Expand Collapse control. Each widget can be closed and opened to hide the items shown by the widget. This setting is remembered by your browser
Widget title and count. Most widgets include a count at the end of the name for the number of items in the widget
Widget level options. Options are different on each widget
Show count. Some widgets allow the author to decide how many items they want to see in the widget
Content “type” filter: Some widgets allow you to filter them by a broad content type (All, Pages, Components, Documents)
- For the dashboard shown above, here are the widgets listed:
- Items Waiting for Approval
- Shows all items currently in workflow
- Viewable only to admins and publishers
- Approved Scheduled Items
- Shows all items approved for a specific scheduled deployment date
- Viewable only to admins and publishers
- Recently Published
- Shows all items that have been previously deployed
- Viewable only to admins and publishers
- My Recent Activity
- Shows all items recently modified by the current user
- Viewable by all users
- Icon Guide
The Icon guide is simply a legend to help authors and content managers with the iconography on the system. While it can be very complex to sum up the state and nature of content in a glance, Crafter Studio attempts to achieve a high level visual summary for each object icons. You will see these icons throughout the application whenever an object is presented to the user. The icon always shows the Current state of the object.
Describes the meaning of icons within Crafter Studio
Viewable by all users
The Icon guide breaks down icons in to their elements. You have two basic elements which can be combined to form a specific icon: the item type and the worfkflow indicator.
Item Types Item types are high level archetypes of content objects within the system. These types and the iconography associated with them provide a basic classification of the type of object at a glance.
Page: A page is exactly what you would expect, it’s a URI addressable object that represents a web page or resource.
: A component is an object that is generally not URI addressable on the website. Examples are objects like Banners, Touts, Sidebar content etc. Components are usually re-usable assets that can be assigned and shared across many pages.
: A taxonomy is an object the same as a component used for classifying items.
Below is a list of all the other item types available:
Workflow Indicators Workflow indicators help authors and content managers understand at a glance what is going on with the content at a high level. Is it Live? Is it work in progress? Is it currently checked out? In some sort of approval process?
: You will find a * asterisk at the end of a content object’s name if the content has never been pushed live. This helps authors quickly identify which objects that are in progress are already live and which ones are entirely new.
: You will find that some objects have a strike-through on their name, this means that the object is not deleted but it should not be displayed on the site. It’s essentially a logical delete. Imagine a scenario where you need to take an object down immediately because of an inaccuracy while you make corrections. Disable is perfect for this and several other scenarios.
: Any item which carries the blue flag is in some sort of workflow
Submitted for Delete: Items which carry the * red X * but are editable and previewable have been submitted for delete
: Items which carry the * red X * but are not editable and previewable are deleted. You will only see these items in dashboards which show historical data
: Edited means that the item has been edited since it was made live. Items move to edited as soon as they are created or when they are edited.
: A locked item is currently in the process of being edited by another author.
: Item is currently being handled by the system
: Item has a launch schedule associated with it.
- Selecting a dashboard item
Dashboard items have the ability to be selected. Selecting an item allows the user to interact with the selected items via the context navigation
Items in the dashboard has a state icon which shows the type and current workflow status of the item
Clicking on the item’s name will take the user to preview if the object is previewable
Edit link. Clicking edit will check out the item and open the form for the item
Preview¶
Every site has a preview. This allows users to see, edit and test the site in a safe authoring sandbox prior to publishing changes.
- Preview is a fully functional site but in a safe-to-edit environment.
- Toolbar shows workflow options for the current page
- Author can change the type of preview from one channel to another
- Author can turn on in-context and drag and drop editing features
- Author can change the targeting attributes used to view the site
- Author can view the publish status of the site
- Preview Tools
- When in preview mode your context navigation will show additional controls beside the authoring search.
- The pencil provides a shortcut to turn on/off in-context editing.
- The wrench turns on/off the preview tools palette.
- The bulls eye provides a shortcut to targeting which allows the user to view and set targeting attributes for the site.
In-Context Editing¶
The in-context editing panel gives access to a number of features:
- The ability to turn on/off in-context editing controls on the page
- A jump to region selector that makes it easy to find a region by name
- The ability to edit the current page template depending on your user account permissions
When in-context editing is turned on, pencils will show up around regions of the page that have been wired for in-context edit.
- A yellow pencil relates to a specific field in the main model e.g the page
- A blue pencil indicates that you are editing a component
- </> allows you to edit the template of a component
When a user clicks on a pencil, a dialog will be presented to the user that contains ONLY the fields wired to that specific region. The user may cancel to quit without making a change or save and close (will save your changes and close the dialog)/ save draft (will save your changes and leave the dialog open)
Template Editing¶
The template editor provides users who have the proper permission with an ability to edit the Freemarker templates that are used to construct the page. Users who do not have write access may open the editor but have no ability to save edits.
A simple syntax highlighting editor is provided.
Page Components¶
The Page Components (drag and drop panel) puts the page in component construction mode. Regions on the page that are wired to accept components (“drop zones”) are highlighted.
The user may drag a component from one region to another. The user may create new components by dragging components from the panel out and on to the screen. A dialog is presented to the user when a new component is dropped on the screen so that the author can configure the component. Crafter Studio administrators can configure what components are available in this panel.
Publishing Channel¶
The Publishing Channel preview allows an author to review the current page in the context of all channels supported by the website.
The smart phone and tablet can be rotated through the use of the purple rotation control next to the drop down box selection of publishing channel preview presets. The channels are browsable
Common Navigation Elements¶
Contextual Navigation¶
The Navigation Bar is a fixed element at the top of the page and cannot be scrolled off the page. The navigation bar provide contextual workflow and other options relative to the page you are looking at, content you have selected or tool you are using.
The basic elements of the Contextual Navigation bar are:
- Branded Logo Button: Takes the user back to the Dashboard.
- Sidebar: Opens a menu that allows navigation to all pages, components and documents in the system.
- Contextual Navigation Links: An area reserved for dynamic links that will change based off of the current page view.
- Search: Allows a user to search all site content or choose a subset of content to search from the drop-down menu (Please see the later section on Search for more details about the search field.)
- Publish Status: Allows the user to view the site’s publish status.
- Users: Allows the user (depending on permissions granted to the user) to view/edit the users.
- Sites: Allows the user (depending on permissions granted to the user) to view/edit sites accessible to the user
- Help: Provides the user a shortcut to Crafter CMS documentation and the about screen, listing the Crafter Studio version, etc.
- Username: Allows a user to log out of the system or manage settings.
Sidebar¶
The Sidebar menu/panel allows for browsing all site content in the system. This includes Pages, Components and Documents.
- The “View” menu will allow selections of separate site properties.
- The menu width can be resized freely by the user.
- Users can have multiple tree paths open at the same time.
- If closed, the menu should retain it’s last state when re-opened.
- Clicking the Sidebar menu button a second time, or clicking anywhere off the menu will close the menu with the following exceptions:
- Any action executed by a right click in the menu should be allowed to complete without closing the menu (e.g.: a copy/paste operation or a delete operation.)
- The top level blocks “Pages, Components, Documents” can be hidden from users based on their privilege settings.
- The Sidebar menu panel can be stretched and will remember where you set the length and width on your browser
- Clicking the main folders will toggle them open or closed.
- Root folders allow a user to drill in to a hierarchy of content. If the item is previewable it will also be clickable.
- Clicking on an item will take the author to a preview of the item.
- Also, tooltips featuring extended information will be available when hovering over any item in the Sidebar Menu or on the dashboard.
- Right-clicking on an item opens a contextual right click menu for that item.
Occasionally you have so many pages or components in your information architecture that it is not practical to list them or you simply want to provide your authors with a quick way to get to a specific search.
For these use cases Crafter Studio’s site dropdown IA folders support the configuration of dedicated searches. That configuration can be made by an administrator on the Crafter Studio Admin Console.
| https://docs.craftercms.org/en/3.0/content-authors/content-authors-navigating-studio.html | 2019-06-16T05:51:03 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['../_images/my-sites-screen.png',
'Navigating Studio - My Sites Screen'], dtype=object)
array(['../_images/get-to-my-sites.png',
'Navigating Studio - Get to My Sites Screen'], dtype=object)
array(['../_images/site-account.png',
'Navigating Studio - Open My Account Settings Screen'],
dtype=object)
array(['../_images/settings-account-management.png',
'Navigating Studio - Account Settings Screen'], dtype=object)
array(['../_images/site-dashboard.png',
'Navigating Studio - Site Dashboard'], dtype=object)
array(['../_images/site-dashboard-selected.png',
'Navigating Studio - Dashboard Selected'], dtype=object)
array(['../_images/site-preview.png', 'Navigating Studio - Site Preview'],
dtype=object)
array(['../_images/preview-tools.png',
'Navigating Studio - Preview Tools'], dtype=object)
array(['../_images/preview-in-context-editing.png',
'Navigating Studio - Preview In-Context Editing'], dtype=object)
array(['../_images/preview-in-context-edit.png',
'Navigating Studio - Preview Panel In-Context Edit'], dtype=object)
array(['../_images/preview-template-editing.png',
'Navigating Studio - Preview Panel Template Editing'], dtype=object)
array(['../_images/preview-page-components.png',
'Navigating Studio - Preview Panel Page Components'], dtype=object)
array(['../_images/preview-publishing-channel.png',
'Navigating Studio - Preview Panel Publishing Channel'],
dtype=object)
array(['../_images/site-context-nav.png',
'Navigating Studio - Site Context Navigation'], dtype=object)
array(['../_images/sidebar-dashboard-item-selected.png',
'Navigating Studio - Sidebar Panel'], dtype=object)
array(['../_images/sidebar-tooltips.png',
'Navigating Studio - Sidebar Tooltips'], dtype=object)
array(['../_images/sidebar-right-click-menu.png',
'Navigating Studio - Sidebar Right Click Menu'], dtype=object)
array(['../_images/crafter-studio-site-content-ia-folders.png',
'Navigating Studio - Site Content IA Folders'], dtype=object)] | docs.craftercms.org |
Google Spreadsheets
You can import data from selected Google Spreadsheets file. Data import is similar to Excel and CSV file upload and Import from REST API. You can also schedule regular daily import from provided Spreadsheet. If you are using eazyBI for Jira Server and do not see option to import data from Google Sheets, then you should enable this option in advanced settings. Google Spreadsheets application type.
If you have created already another similar Google Spreadsheets data source then you can export its definition and paste it in Import definition to create new Google Spreadsheets source application with the same parameters.
Source selection
In the next step you will need to authorize the access to your Google Drive documents and afterwards select from which Google Spreadsheets file you would like to import the data.
Selected Google Spreadsheets file should have the first row with column names and the next rows should be data rows that you would like to import.
Source columns mapping
Google Spreadsheets source columns mapping is similar to Excel or CSV file columns mapping and Import from REST API. Please review these documentation pages to learn more about columns mapping to eazyBI dimensions and measures. Google Spreadsheets source application will be queued for background import. You will see the updated count of imported rows during the import.
You can later visit Source Data tab again and click Import button again to import the latest data from Google Spreadsheet Google Spreadsheets source application definition by clicking Export definition from Source Data tab source application listing and then copy this definition and paste in Import definition field when creating different source application. | https://docs.eazybi.com/eazybi/data-import/google-spreadsheets | 2019-06-16T04:45:19 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
Slurm. the new system.
Cheaha (new hardware) run the CentOS 7 version of the Linux operating system and commands are run under the "bash" shell (the default shell). There are a number of Linux and bash references, cheat sheets and tutorials available on the web.
Typical Workflow
- Stage data to $USER_SCRATCH (your scratch directory)
- Determine #SBATCH --mail-type=FAIL #SBATCH --mail-user=$USER@uab.edu srun hostname srun sleep 60
Interactive Session-per-cpu=4096 --time=08:00:00 --partition=medium --job-name=JOB_NAME
Job List - SQUEUE
To check your job status, you can use the following command
squeue -u BLAZERID
Following fields are displayed when you run squeue
JOBID - ID assigned to your job by Slurm scheduler PARTITION - Partition your job gets, depends upon time requested (express(max 2 hrs), short(max 12 hrs), medium(max 50 hrs), long(max 150 hrs), sinteractive(0-2 hrs)) NAME - JOB name given by user USER - User who started the job ST - State your job is in. The typical states are PENDING (PD), RUNNING(R), SUSPENDED(S), COMPLETING(CG), and COMPLETED(CD) TIME - Time for which your job has been running NODES - Number of nodes your job is running on NODELIST - Node on which the job is running
For more details on squeue, go here.
Job
$ rc-sacct --allusers --starttime 2016-08-30 User JobID JobName Partition State Timelimit Start End Elapsed MaxRSS MaxVMSize NNodes NCPUS NodeList --------- ------------ ---------- ---------- ---------- ---------- ------------------- ------------------- ---------- ---------- ---------- -------- ---------- --------------- kxxxxxxx 34308 Connectom+ interacti+ PENDING 08:00:00 Unknown Unknown 00:00:00 1 4 None assigned kxxxxxxx 34310 Connectom+ interacti+ PENDING 08:00:00 Unknown Unknown 00:00:00 1 4 None assigned dxxxxxxx 35927 PK_htseq1 medium COMPLETED 2-00:00:00 2016-08-30T09:21:33 2016-08-30T10:06:25 00:44:52 1 4 c0005 35927.batch batch COMPLETED 2016-08-30T09:21:33 2016-08-30T10:06:25 00:44:52 307704K 718152K 1 4 c0005 bxxxxxxx 35928 SI medium TIMEOUT 12:00:00 2016-08-30T09:36:04 2016-08-30T21:36:42 12:00:38 1 1 c0006 35928.batch batch FAILED 2016-08-30T09:36:04 2016-08-30T21:36:43 12:00:39 31400K 286532K 1 1 c0006 35928.0 hostname COMPLETED 2016-08-30T09:36:16 2016-08-30T09:36:17 00:00:01 1112K 207252K 1 1 c0006 | https://docs.uabgrid.uab.edu/w/index.php?title=Slurm&oldid=5410 | 2019-06-16T04:35:28 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.uabgrid.uab.edu |
UDN
Search public documentation:
Lighting > Lighting Reference
UE3 Home > Level Designer > Lighting Reference
UE3 Home > Lighting Artist > Lighting Reference
UE3 Home > Level Designer > Lighting Reference
UE3 Home > Lighting Artist > Lighting Reference
Lighting Reference
- Lighting Reference
- Overview
- Legacy options
- Overview of Light Types and Options
- LightingChannels
- Light Functions
- Lightmaps
- LightVolumes
- Primitive Lighting Options
- Moving Lights And What Occurs
Overview
In Unreal Engine 3, we place light actors in the world to illuminate our scenes and characters. To manage the behavior of these lights we have created a number of different classes of lights and options to control them. These options give us a very broad spectrum of usage cases ranging from very simple baked lighting to very complex dynamic lighting. The kind of lighting you choose may vary based on the requirements of your project. This guide will serve to explain the various options so that you can understand the limitations and tradeoffs between different lighting options and make the most appropriate choices. We will also attempt to give rough estimates of the performance cost associated with enabling certain options.
Legacy options
The options on this page are accurate but not all relevant anymore. UE3 lighting has undergone several revisions and old lighting options have to be supported to maintain backwards compatibility. For static lighting, the current state of the art is to use DominantLights in conjunction with Lightmass. For moving objects, LightEnvironments are the way to go.
Overview of Light Types and Options
Global Light PropertiesHere is a quick rundown of the most common light properties. A pointlight is used as it is the most general case light. Note will be made where an option does not apply to all light types.
- Manually enter the color values for R G B. These values range from 0 through 255.
- Use the color picker (button above the red arrow).
- Pick a color from an editor viewport (button above the blue arrow).
PointLight
PointLightMovableThe same as a regular PointLight but you can move its location ingame using matinee. To do this, simply hook up the PointLight to a group in Matinee and create a float property track for it. You can also toggle a PointLightMovable on and off ingame using kismet. A PointLightMovable cannot use Lightmaps.
PointLightToggleableThis is the same as a regular PointLight but you can toggle it on and off ingame using kismet. You cannot move a PointlightToggleable ingame. A PointLightToggleable cannot use Lightmaps.
SpotlightA SpotLight is a directional cone of light.
SpotLightMovableThe same as a regular SpotLight but you can move its location ingame using matinee. To do this, simply hook up the SpotLight to a group in Matinee and create a float property track for it. You can also toggle a SpotLightMovable on and off ingame using kismet. A SpotLightMovable cannot use Lightmaps.
SpotlightToggleableThis is the same as a regular SpotLight but you can toggle it on and off ingame using kismet. You cannot move a SpotLightToggleable ingame. A SpotLightToggleable cannot use Lightmaps.
DirectionalLight
DirectionalLightToggleableThis is the same as a regular DirectionalLight but you can toggle it on and off ingame using kismet. You cannot move a DirectionalLightToggleable ingame. A DirectionalLightToggleable cannot use Lightmaps.
SkylightA SkyLight is a hemispherical light source (imagine a hemisphere around and above your map shining inward) that simulates the scattering of light from the sky. It cannot be rotated. It does not cast shadows. It is possible to create a character-only skylight using Lighting Channels. This is a relatively cheap and easy way of giving your character a constant ambient color so that you don't have to place as many character fill lights as pointlights in the world.
LightingChannels
Lighting Channels replace the previous `Light Exclusion' functionality that allowed you to exclude a light from a primitive on a per-object basis. We use lighting channels to control the influences between all lights and primitives. Both lights and primitives share the same Lighting Channel options. Here is what the LightingChannels look like for lights (located under LightComponent.LightingChannels):
LightAffectsClassificationLights now have a: Light Affects Classification. An LAC is a set of "presets" that are applied to a light's LightComponent. So instead of manually setting light flags (e.g. CastDynamicShadows) you now simply: 1 Select the light 1 Right click 1 Set what this Light Affects and then choose one of the options on that menu. (Affecting only Dynamic Objects, Affecting only Static Objects, Affecting both Dynamic and Static Objects ) The light will now be set to the flags which are appropriate for that that type of light for that specific classification. The high level idea here is to have the defaults for lights be performance friendly and make sense for what that light is commonly utilized for. Lights are now broadly categorized into the following sets: 0) Lights which affect Dynamic Primitives in the world. ( characters, movers, vehicles, physic objects bouncing around, projectiles ) 1) Lights which affect Static Primitives in the world. ( static meshes that have been placed, BSP which has been placed ) 2) Lights which affect both Dynamic and Static Primitives in the world. 3) Lights which the user has set flags which do not match any of the above. Each of the above categories is shown via a Letter on the Light Sprite Icon in the editor.
Affects Dynamics: D Affects Statics: S Affects Dynamics and Statics: D/S User Selected: U A nice way to think about lights is the following: How cast light Light Mobility / OnOffness What it Affects Directional Stationary UserSelected Point Toggleable Dynamics Spot Moveable (implies toggleable) Statics and DynamicsAndStaticsThose three categories map directly to the types of lights one places in a level. With the new LAC settings the goal is to look at a level and see: -LOTS of S classified lights. These are the lights one uses to light the level for basically no performance cost as the lighting is baked into the level. -1-2 DynamicAndStatic affecting lights per "scene" (these will probably be your directional lights which represent the sun or the major light source which is "tying" the scene together) -a small number of D classified lights per "area" which are small radius / cone and there for the doing nice window lighting, or computer screen glow -a number of U classified lights where there was a special need to go away from the norm.
Light Functions
SourceMaterialThis is the material to be applied to the light. Select it in the GenericBrowser and hit "Use."
ScaleThe material can be scaled in the X, Y, and Z dimensions. Generally you use the
LightVectorexpression in the material editor to do mapping. For point lights, just use
LightVectorto index into a cube map. For spot lights, people usually use
(LightVector.rg/LightVector.b+1)/2to index into a 2d texture, although this doesn't take the spot light's cone angle into account, and some further fudging will be required. For directional lights, you may want to try using the
TextureCoordinatesexpression.
Lightmaps
LightComponent.UseDirectLightMapIf a light has UseDirectLightMap=True, the light reaching a primitive directly from the light will be stored in the primitive's light-map. The default is True for static lights.
PrimitiveComponent.bForceDirectLightMapIf a primitive has bForceDirectLightMap=True, all lights hitting the primitive will behave as if they had UseDirectLightMap=True set. This is useful as an optimization for primitives where the difference between shadow-maps and light-maps isn't noticeable even for the brightest lights. The default is true.
LightVolumes
Light volumes allow you to restrict a light to only influence actors that are within certain volumes (inclusion volumes), or restrict a light from affecting actors in certain volumes (exclusion volumes). For this to work, bUseVolumes must be True. There are two cases that slightly change the way LightVolumes work. 1) If a light has bUseVolumes true and only has volumes listed under the InclusionVolumes array, then the light will be restricted within those volumes and will not affect actors that are outside of those volumes. 2) If a light has bUseVolumes true and only has volumes listed under the ExclusionVolumes array, then the light will be restricted to affect actors everywhere except within those volumes. In the following example image, the pointlight has bUseVolumes=True and then lists one volume under InclusionVolumes. This volume is just around the light, but the bottom of the volume extends to below the BSP floor. After rebuilding lighting, notice how the static mesh is not receiving any light, even though it is clearly within the radius of the pointlight. Since the volume extends below the BSP floor, the BSP is included in the lights influence.
Primitive Lighting Options
Most of this guide has focused on settings associated with Light Actors, but Primitives have important lighting options as well.
Lighting SubdivisionsIn Unreal Engine 2, all static meshes used vertex lighting. With low polygon objects, vertex lighting often introduces unsightly artifacts if a single vertex happens to be completely shadowed by a thin object (such as a wire). Unreal Engine 3 addresses this issue by subdividing the lighting traces on objects. This prevents harsh shadow bugs. These settings only take effect when lighting is rebuilt. The more subdivisions you use, the longer rebuilding lighting will take. bUseSubDivisions: Defaults to True. If this is false, none of the other AdvancedLighting options will have an effect. MaxSubDivisions: Specifies the maximum number of SubDivisions that will be traced when building lighting for a static mesh. MinSubDivisions: Specifies the minimum number of subdivisions that will be traced when building lighting for a static mesh. SubDivisionStepSize: The amount of subdivisions used is biased by the size of the mesh (or rather, distance between vertices). This sets the step size in unreal units for lighting traces. By default, the step size is 16, so a very small mesh will be clamped by the MinSubDivisions, and a very large mesh will subdivide to the amount specified by MaxSubDivisions. There are some cases where you might wish to disable subdivisions altogether. A case that comes to mind is in the use of vertex lighting for modular meshes. In this example, the lighting between these modular wall sections must be seamless, but because there is a large shadow affecting the majority of the wall section on the left, subdivisions are averaging that shadow across the mesh. This is a rather extreme example as using a fairly low resolution lightmap in this case fixes the problem and maintains the shadow.
Primitive Lighting ChannelsRefer to the Lighting Channels Section above for a description of these options.
Moving Lights And What Occurs
When you move a light which is accumulating light into a lightmap OR if your move an object which was lightmapped, then the lightmap(s) which were being affected will be replaced and you will see the scene as if you have not rebuilt lighting. (e.g. You have a desk with a Direct Light mapped lamp on it. Moving that lamp will invalidate the lightmap and then update in real time so you can see a good approximation of it instead of having to rebuild lighting.) This increases iteration time for the content teams. | https://docs.unrealengine.com/udk/Three/LightingReference.html | 2019-06-16T04:59:31 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['rsrc/Three/LightingReference/PointlightProperties.jpg',
'PointlightProperties.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/FalloffExponent.jpg',
'FalloffExponent.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/lightcolor.jpg', 'lightcolor.jpg'],
dtype=object)
array(['rsrc/Three/LightingReference/PointLights.jpg', 'PointLights.jpg'],
dtype=object)
array(['rsrc/Three/LightingReference/lightclasses.jpg',
'lightclasses.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/SpotLightComponents.jpg',
'SpotLightComponents.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/InnerConeAngle.jpg',
'InnerConeAngle.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/SpotLightCrisp.jpg',
'SpotLightCrisp.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/SpotDiagram.jpg', 'SpotDiagram.jpg'],
dtype=object)
array(['rsrc/Three/LightingReference/lightclasses.jpg',
'lightclasses.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/pcShadows.jpg', 'pcShadows.jpg'],
dtype=object)
array(['rsrc/Three/LightingReference/lightclasses.jpg',
'lightclasses.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/lightingchannels_light.jpg',
'lightingchannels_light.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/lightingchannels_primitive.jpg',
'lightingchannels_primitive.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/lightingchannels_stairs.jpg',
'lightingchannels_stairs.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/LightFunctionCombo.jpg',
'LightFunctionCombo.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/LightFunction.jpg',
'LightFunction.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/LightMapVsShadowMap.jpg',
'LightMapVsShadowMap.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/lightvolumes.jpg',
'lightvolumes.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/primitivelightingoptions.jpg',
'primitivelightingoptions.jpg'], dtype=object)
array(['rsrc/Three/LightingReference/subdivisionseams.jpg',
'subdivisionseams.jpg'], dtype=object) ] | docs.unrealengine.com |
.
. The role can be changed at a later point in time under Preferences, if.
A project is a top-level container for all the things that define the test environment. A project is simply a
directory in the file system that contains a
"workspace.json" file. This file is understood by both Sencha Test
as well as Sencha Cmd (6+). This file describes things like applications, packages, themes, Sencha frameworks, and
now Test Projects.
You will first need to open an existing project or create a new project. If you are using Sencha Cmd, you can
simply open your existing Sencha Cmd workspace or stand-alone application directly in Sencha Test. Sencha Cmd is
not required, so don't worry if you are not currently using it. If Sencha Cmd is installed, however, Sencha Studio
will enable its Cmd Integration feature by default. This can be disabled using the Preferences dialog. Be aware that
disabling this integration does not change the fact that both Sencha Test and Sencha Cmd will share the same
"workspace.json" file.
At present, each application, package, and workspace may contain a single test project (backed by a
test/project.json
file by default). Test projects house the test suites discussed in further detail in later guides.
You can create a new project using the “New Project” button on the welcome screen.
Once you have created (or opened) a project, Sencha Studio will display it on the welcome screen when you next launch the application. By default, Sencha Studio re-opens the previously opened project on launch (this behavior can be changed in Preferences).
If you have an existing project, for example if you pull the code from a source control repository, you would click on the “Open Project“ button and select the existing project/workspace folder.
Sencha Studio seamlessly integrates with projects/workspaces generated by Sencha Cmd, which may house applications, themes, or packages. Simply click “Open Project” to open these projects/workspaces.
Once you open a project, Sencha Studio will display its contents in the project navigation tree on the left side of the application.
In a Sencha Cmd workspace there can be applications in addition to test projects. Each application has its own test project indicated by the expandable Tests node.
The workspace itself can also contain a test project. If the test project is not yet initialized, this will be indicated the Sencha Studio text editor. Selecting unrecognized file types has no effect.
To review results from test runs associated with the project, select the Test Results tab.
The test runs listed in this tab are retrieved from the associated Sencha Test Archive Server. Selecting a run will download the results and display them in the content area.
To manage browser farms (for example, WebDriver hubs), select the Browsers tab.
From this tab, you can add new browser farm definitions. These defintions can be used in Sencha Studio's Test Runner or from the command-line using stc.
To return to the welcome screen after opening a workspace, instructions on setting up the organizational structure for your tests. See the stc usage guide for using stc to run your tests from the command-line.
If you have further questions, concerns, or bug reports, please visit the Sencha Test forums. | https://docs-devel.sencha.com/sencha_test/2.3.0/guides/getting_started.html | 2019-06-16T05:04:48 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs-devel.sencha.com |
On this page:
Related pages:
You can set up a custom action on the controller instance to integrate notification of AppDynamics health rule violations and events with an alerting or ticketing system. Use a push approach by creating custom notifications that pass the information to your alerting system.
Custom Notifications and Custom Actions
A custom notification lets you integrate alerts about AppDynamics health rule violations and events into your own alerting system. This integration extension requires:
- A custom.xml file that provides information about the custom notification
- An executable script that accepts parameters from AppDynamics about the events and health rule violations that trigger an alert
- Configuring AppDynamics events or policies to trigger the custom notification via a custom action
This topic describes how to create the script and the xml file. See the documentation on the Alert & Response features in the Related pages above for information on how to trigger the action.
Creating a Custom Action
Create the script
For each custom action that you want to implement, create an executable script (.bat extension for Windows, .sh extension for Linux) that can accept and process the parameters passed to it by AppDynamics. See Information Passed to the Custom Action Script from AppDynamics for details on the parameters. For each script:
- Set correct executable permissions for the shell scripts in a Linux environment. For example: chmod 770 script1.sh.
- Ensure that the script file has the correct character encoding. This is especially important when creating a Unix shell script on a Windows machine.
Install the script on an On Premise Controller
To install the script on an on-premises controller:
At the top level of the Controller installation directory, create a directory named "custom" with a sub-directory named "actions".
<controller_home>/custom/actions
In the <controller_home>/custom/actions directory, create a subdirectory for each custom action script that you will install. For example, for an action that interfaces with a JIRA system.
<controller_home>/custom/actions/jira
- Move the script to the appropriate subdirectory that you created in step 2.
Create the XML File
- Create a custom.xml file that describes the location and name of your custom action script. See Contents of the custom.xml File.
- For an on-premises Controller, move the file to the <controller_home>/custom/actions directory. For a SaaS Controller, contact your AppDynamics sales representative for instructions.
Verify on the Script on an on-premises Controller
- After you have installed the script and the custom.xml file, restart the Controller.
- Verify the script manually. To verify the script:
a. Open a command-line console on the Controller host machine.
b. Execute the script file from the command line console.
Create the Custom Action
Create the custom action in the AppDynamics UI to arrange how the custom action will be tiriggered. See Custom Actions.
Contents of the custom.xml File
The custom.xml file has an <actions> element for every custom action on the controller. The <type> element contains the subdirectory that contains the script file. The <executable> element contains the name of the script.
Sample custom.xml file
<custom-actions> <action> <type>jira</type> <executable>script1.bat</executable> </action> <action> <type>bugzilla</type> <executable>script2.sh</executable> </action> </custom-actions>
Information Passed to the Custom Action Script from AppDynamics
The custom action script must handle the parameters that the controller passes from the health rule violation or other event. The parameter values are passed as an array of strings.
The parameters are passed as $0 for the script name, then $1, $2, . . . $n. $1 is the first parameter (application name), $2 is the application id, and so on in the order in which they are documented in the sections below.
Health rule violations have a different set of parameters from events.
Parameters passed by a health rule violation
The parameters describe the violated health rule violation that triggered the action.
The total number of elements in the array depends on the number of entities evaluated by the health rule and the number of triggered conditions per evaluation entity. Examples of evaluation entities are application, tier, node, business transaction, JMX. For each evaluation entity, the script expects the entity type, entity name,entity id, number of triggered conditions, and for each triggered condition, the set of condition parameters.
The parameter values are passed in the order in which they are described below.
Structure of Parameters Sent by a Health Rule Violation
- APP_NAME
- APP_ID
- PVN_ALERT_TIME
- PRIORITY
- SEVERITY // INFO, WARN, ERROR
- HEALTH_RULE_NAME
- HEALTH_RULE_ ID
- PVN_TIME_PERIOD_IN_MINUTES
- AFFECTED_ENTITY_TYPE
- AFFECTED_ENTITY_NAME
- AFFECTED_ENTITY_ID
- NUMBER_OF_EVALUATION_ENTITIES; the following parameters are passed for each evaluation entity:
- EVALUATION_ENTITY_TYPE
- EVALUATION_ENTITY_NAME
- EVALUATION_ENTITY_ID
- NUMBER_OF_TRIGGERED_CONDITIONS_PER_EVALUATION_ENTITY; the following parameters are passed for each triggered condition for this evaluation entity:
- SCOPE_TYPE_x
- SCOPE_NAME_x
- SCOPE_ID_x
- CONDITION_NAME_x
- CONDITION_ID_x
- OPERATOR_x
- CONDITION_UNIT_TYPE_x
- USE_DEFAULT_BASELINE_x
- BASELINE_NAME_x
- BASELINE_ID_x
- THRESHOLD_VALUE_x
- OBSERVED_VALUE_x
- SUMMARY_MESSAGE
- INCIDENT_ID
- DEEP_LINK_URL
- EVENT_TYPE
- ACCOUNT_NAME
- ACCOUNT_ID
Definitions of Parameters Sent by a Health Rule Violation
Parameters passed by an event
The parameters describe the event that triggered the action.
The total number of elements in the array depends on the number of event types and event summaries that triggered the action.
The parameter values are passed in the order in which they are described below.
Structure of Parameters Sent by an Event
- APP_NAME
- APP_ID
- EN_TIME
- PRIORITY
- SEVERITY
- EN_NAME
- EN_ ID
- EN_INTERVAL_IN_MINUTES
- NUMBER_OF_EVENT_TYPES; the following parameters are passed for each event type:
- EVENT_TYPE_x
- EVENT_TYPE_NUM_x
- NUMBER_OF_EVENT_SUMMARIES; the following parameters are passed for each event summary:
- EVENT_SUMMARY_ID_x
- EVENT_SUMMARY_TYPE_x
- EVENT_SUMMARY_SEVERITY_x
- EVENT_SUMMARY_STRING _x
- DEEP_LINK_URL
- ACCOUNT_NAME
- ACCOUNT_ID
Definitions of Parameters Sent by an Event
Sample Custom Action Script
See the CreateServiceNow script, for an example of a script that creates ServiceNow tickets triggered by AppDynamics health rule violations. | https://docs.appdynamics.com/display/PRO43/Build+a+Custom+Action | 2019-06-16T05:41:33 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.appdynamics.com |
Use the Assign tasks using increase or decrease the total work because the task requires less or more work hours. Keep duration constant option when you are adding or removing resources, but want to keep the due date and duration the same.
Use Case:
- JeffDesign has an assigned task – Write Requirements Documentation.
- The task has a Start Date of 10/1/2012
- The task has a Due Date of 10/11/2012
- The task has a Duration of 9.0 days and Work of 72 hours.
- The task has a Richard will assign a resource to the existing task
- The Work per resource will increase, but Due Date and Duration will remain the same.
Richard updates the task as shown below. QA One has been assigned to the task at 100%.
Richard clicks the Recalculate button on the Task Master toolbar.
The task now shows Work updated to reflect the work of both JeffDesign and QA One. Due Date and Duration remain constant.
The following table defines which items are recalculated based on the type of task.
Return to Task Master Dependency Settings | https://docs.bamboosolutions.com/document/increase_or_decrease_the_total_work_in_task_master/ | 2019-06-16T05:51:47 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/wp-content/uploads/2017/06/hw45-2010-taskmodeuc1.jpg',
'hw45-2010-taskmodeuc1.jpg'], dtype=object) ] | docs.bamboosolutions.com |
Prerequisite work for implementing identity and device access policies
This article describes prerequisites that need to be implemented before you can deploy the recommended identity and device access policies. This article also discusses the default platform client configurations we recommend to provide the best SSO experience to your users, as well as the technical prerequisites for conditional access.
Prerequisites
Before implementing the recommended identity and device access policies, there are several prerequisites that your organization must meet. The following table details the prerequisites that apply to your environment.
Recommended client configurations
This section describes the default platform client configurations we recommend to provide the best SSO experience to your users, as well as the technical prerequisites for conditional access.
Windows devices
We recommend the Windows 10 (version 1703 or later), as Azure is designed to provide the smoothest SSO experience possible for both on-premises and Azure AD. Work or school-issued devices should be configured to join Azure AD directly or if the organization uses on-premises AD domain join, those devices should be configured to automatically and silently register with Azure AD.
For BYOD Windows devices, users can use Add work or school account. Note that Chrome-browser users on Windows 10 need to install an extension to get the same smooth sign-in experience as Edge/IE users. Also, if your organization has domain-joined Windows 7 devices, you can install Microsoft Workplace Join for non-Windows 10 computers Download the package to register the devices with Azure AD.
iOS devices
We recommend installing the Microsoft Authenticator app on user devices before deploying conditional access or MFA policies. At a minimum, the app should be installed when users are asked to register their device with Azure AD by adding a work or school account, or when they install the Intune company portal app to enroll their device into management. This depends on the configured conditional access policy.
Android devices
We recommend users install the Intune Company Portal app and Microsoft Authenticator app before conditional access policies are deployed or when required during certain authentication attempts. After app installation, users may be asked to register with Azure AD or enroll their device with Intune. This depends on the configured conditional access policy.
We also recommend that corporate-owned devices (COD) are standardized on OEMs and versions that support Android for Work or Samsung Knox to allow mail accounts, be managed and protected by Intune MDM policy.
Recommended email clients
The following email clients support modern authentication and conditional access.
Recommended client platforms when securing documents
The following clients are recommended when a secure documents policy has been applied.
* Learn more about using conditional access with the OneDrive sync client.
Office 365 client support
For more information about Office 365 client support, see the following articles:
- Office 365 Client App Support - Conditional Access
- Office 365 Client App Support - Mobile Application Management
- Office 365 Client App Support - Modern Authentication
Protecting administrator accounts
Azure AD provides a simple way for you to begin protecting administrator access with a preconfigured conditional access policy. In Azure AD, go to Conditional access and look for this policy — Baseline policy: Require MFA for admins. Select this policy and then select Use policy immediately.
This policy requires MFA for the following roles:
- Global administrators
- SharePoint administrators
- Exchange administrators
- Conditional access administrators
- Security administrators
For more information, see Baseline security policy for Azure AD admin accounts.
Additional recommendations include the following:
- Use Azure AD Privileged Identity Management to reduce the number of persistent administrative accounts. See Start using PIM.
- Use privileged access management in Office 365 to protect your organization from breaches that may use existing privileged admin accounts with standing access to sensitive data or access to critical configuration settings.
- Use administrator accounts only for administration. Admins should have a separate user account for regular non-administrative use and only use their administrative account when necessary to complete a task associated with their job function. Office 365 administrator roles have substantially more privileges than Office 365 services.
- Follow best practices for securing privileged accounts in Azure AD as described in this article.
Next steps
Configure the common identity and device access policies
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/microsoft-365/enterprise/identity-access-prerequisites | 2019-06-16T06:03:08 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
PublishingWeb class
Provides publishing behavior for a Web instance that supports publishing.
Inheritance hierarchy
System.Object
Microsoft.SharePoint.Client.ClientObject
Microsoft.SharePoint.Client.Publishing.PublishingWeb
Namespace: Microsoft.SharePoint.Client.Publishing
Assembly: Microsoft.SharePoint.Client.Publishing (in Microsoft.SharePoint.Client.Publishing.dll)
Syntax
'Declaration Public NotInheritable Class PublishingWeb _ Inherits ClientObject 'Usage Dim instance As PublishingWeb
public sealed class PublishingWeb : ClientObject
Remarks
The PublishingWeb class provides publishing-specific behavior for a Web that supports publishing, including access to child PublishingPage and PublishingWeb instances, variations support, navigation settings, PageLayout and Web template restrictions, and Welcome page settings. This class wraps a Web instance that has the publishing feature activated.
Instantiate this class by using the static GetPublishingWeb(ClientRuntimeContext, Web) method or by retrieving it from a PublishingWebCollection collection.
The PublishingWeb class wraps the Web class. It also directly exposes the underlying Web through the Web property so that additional Web functionality can be easily accessed.
Thread safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
See also
Reference
Microsoft.SharePoint.Client.Publishing namespace | https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-csom/jj165424%28v%3Doffice.15%29 | 2019-06-16T05:13:31 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
CostFinalize Action
The CostFinalize action ends the internal installation costing process begun by the CostInitialize action.
Sequence Restrictions.
ActionData Messages
There are no ActionData messages.
Remarks.
Related topics | https://docs.microsoft.com/en-us/windows/desktop/msi/costfinalize-action | 2019-06-16T05:46:30 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
.
Use Splunk Web
When you define a data input, you can set a source type value to be applied to all incoming data from that input. You can pick a source type from a list or enter your own source type value. Web Web software.
Caution:.
Note: If you forward data, and you want to assign a source type for a source, you must assign the source type in
props.conf on the forwarder. If you do it in
props.conf on the receiver, the override has no.
Your stanza source path regular expressions (such as
[source::.../web/....log]):
[source::/home/fflanda/....log(.\d+)?] sourcetype=my! | https://docs.splunk.com/Documentation/Splunk/6.6.1/Data/Bypassautomaticsourcetypeassignment | 2019-06-16T04:53:28 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
The graph demonstrates that putting protection in place as quickly as possible and ridding the network of post-attack vulnerabilities can minimize the devastating effects of outbreaks over time.
By using EPS and Outbreak Prevention Services, enterprises can minimize their risk and dramatically lower costs. By deploying policies early in the life cycle and before pattern file generation, an organization can dramatically reduce the cost and effort (area under the curve), in addition to increasing the overall level of protection.
Trend Micro’s expertise, architecture, and services provide a strong return on investment, improve overall protection, and increase the productivity of enterprise networks. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_ag_tm_services/understand_eps/eps_value_highlight.aspx | 2019-06-16T04:35:57 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
Configure the following event notification to notify administrators when a significant increase in DLP template matches occurred over a predefined period.
The Event Notifications screen appears.
A list of events appears.
The Significant Template Match Increase screen appears.
The selected contact groups or user accounts appear in the Selected Users and Groups list. | http://docs.trendmicro.com/en-us/enterprise/control-manager-70/detections/notifications/data-loss-prevention123/significant-template.aspx | 2019-06-16T05:08:54 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
With basic virtual machine management workflows you can perform basic operations on virtual machines, for example, create, rename or delete a virtual machine, upgrade virtual hardware, and so on.
You can access the basic virtual machine management workflows from Workflows view of the Orchestrator client.in the
Create custom virtual machine
Creates a virtual machine with the specified configuration options and additional devices.
Create simple dvPortGroup virtual machine
Creates a simple virtual machine. The network used is a Distributed Virtual Port Group.
Create simple virtual machine
Creates a virtual machine with the most common devices and configuration options.
Delete virtual machine
Removes a virtual machine from the inventory and datastore.
Get virtual machines by name
Returns a list of virtual machines from all registered vCenter Server hosts that match the provided expression.
Mark as template
Converts an existing virtual machine to a template, not allowing it to start. You can use templates to create virtual machines.
Mark as virtual machine
Converts an existing template to a virtual machine, allowing it to start.
Move virtual machine to folder
Moves a virtual machine to a specified virtual machine folder.
Move virtual machine to resource pool
Moves a virtual machine to a resource pool. If the target resource pool is not in the same cluster, you must use the migrate or relocate workflows.
Move virtual machines to folder
Moves several virtual machines to a specified virtual machine folder.
Move virtual machines to resource pool
Moves several virtual machines to a resource pool.
Register virtual machine
Registers a virtual machine. The virtual machine files must be placed in an existing datastore and must not be already registered.
Reload virtual machine
Forces vCenter Server to reload a virtual machine.
Rename virtual machine
Renames an existing virtual machine on the vCenter Server system or host and not on the datastore.
Set virtual machine performance
Changes performance settings such as shares, minimum and maximum values, shaping for network, and disk access of a virtual machine.
Unregister virtual machine
Removes an existing virtual machine from the inventory.
Upgrade virtual machine hardware (force if required)
Upgrades the virtual machine hardware to the latest revision that the host supports. This workflow forces the upgrade to continue, even if VMware Tools is out of date. If the VMware Tools is out of date, forcing the upgrade to continue reverts the guest network settings to the default settings. To avoid this situation, upgrade VMware Tools before running the workflow.
Upgrade virtual machine
Upgrades the virtual hardware to the latest revision that the host supports. An input parameter allows a forced upgrade even if VMware Tools is out of date.
Wait for task and answer virtual machine question
Waits for a vCenter Server task to complete or for the virtual machine to ask a question. If the virtual machine requires an answer, accepts user input and answers the question. | https://docs.vmware.com/en/vRealize-Orchestrator/5.5.2/com.vmware.vsphere.vco_use_plugins.doc/GUID-18FD6E4C-673B-41AD-BBBE-583C36BB2F16.html | 2017-11-18T02:35:09 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
With datacenter workflows, you can create, delete, reload, rename, or rescan a datacenter.
You can access the datacenter workflows from Workflows view of the Orchestrator client.in the
Create datacenter
Creates a new data center in a data center folder.
Delete datacenter
Deletes a data center.
Reload datacenter
Forces vCenter Server to reload data from a data center.
Rename datacenter
Renames a data center and waits for the task to complete.
Rescan datacenter HBAs
Scans the hosts in a data center and initiates a rescan on the host bus adapters to discover new storage. | https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-75691BE4-A812-4251-8162-9F5D1F7AFE9C.html | 2017-11-18T02:34:37 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
User roles
User roles define sets of permissions that allow individual users to perform specific tasks, such as view, create, or modify a file.
Role-based access controls (RBAC) regulate access to computer or network resources based on assigned user roles. Each "user role" The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. Roles are defined according to job skills, authority, and responsibility.
Related terms
- RBAC
- Role-based access control
- User
More information
- Manage user roles—add and remove users, and assign user roles. | https://docs.interana.com/lexicon/User_roles | 2017-11-18T02:54:03 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.interana.com |
StorageContext Class
The class representing the storage context of the persistent store.
The smart contract can obtain its own storage context through Storage.CurrentContext and pass the context as an argument to other contracts(as a way of authorization), allowing other contracts to call the read/write methods for its persistent store.
Note: This is different from the 1.6 version.
Namespace: Neo.SmartContract.Framework.Services.Neo
Assembly: Neo.SmartContract.Framework
Syntax
public class StorageContext
Constructor
The StorageContext object is constructed through Storage.CurrentContext. | http://docs.neo.org/en-us/sc/fw/dotnet/neo/StorageContext.html | 2017-11-18T02:52:09 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.neo.org |
Resolving Font Issues
To keep the PDF Generator Extensions installation zip as small as possible a number of fonts had to omitted from the included mPDF package, font files can be extremely large.
If you find that the generated PDF is missing some characters or the file won't generate it is most likely due to a missing font. You may find a note in the entry timeline like this:
Error: Unable to generate PDF. mPDF Error - cannot find TTF TrueType font file - /app/public/wp-content/plugins/gravityflowpdf/includes/mpdf/ttfonts/Sun-ExtA.ttf
To resolve this issue you can create a custom fonts directory in another location, such as the wp-content directory. The missing font file(s) which can be downloaded from the mPDF GitHub repository can then be added to your custom fonts directory. Using a custom fonts directory outside the plugins directory ensures your changes will not be lost during plugin updates.
For issues with Chinese, Japanese, and Korean languages you would need the Sun-ExtA.ttf file.
Once you have added the missing font(s) to a custom location you can then instruct mPDF to use those fonts by adding the following line to the sites wp-config.php file.
define( '_MPDF_SYSTEM_TTFONTS', dirname(__FILE__) . '/wp-content/folder_name_here/' );
With the missing font(s) added to a custom directory and the _MPDF_SYSTEM_TTFONTS constant defined you can then restart the step to generate the PDF and continue the workflow. | http://docs.gravityflow.io/article/160-resolving-font-issues | 2017-11-18T02:33:37 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.gravityflow.io |
Introduction to gmpy2¶
gmpy2 is a C-coded Python extension module that supports multiple-precision arithmetic. gmpy2 is the successor to the original gmpy module. The gmpy module only supported the GMP multiple-precision library. gmpy2 adds support for the MPFR (correctly rounded real floating-point arithmetic) and MPC (correctly rounded complex floating-point arithmetic) libraries. gmpy2 also updates the API and naming conventions to be more consistent and support the additional functionality.
The following libraries are supported:
GMP for integer and rational arithmetic
MPIR is based on the GMP library but adds support for Microsoft’s Visual Studio compiler. It is used to create the Windows binaries.
MPFR for correctly rounded real floating-point arithmetic
MPC for correctly rounded complex floating-point arithmetic
Generalized Lucas sequences and primality tests are based on the following code:
mpz_lucas:
mpz_prp:
gmpy2 Versions¶
This manual documents the two major versions of gmpy2. Sections that are specific to a particular version will be identified as such.
There are two versions of gmpy2. The 2.0 version is the stable release that only receives bug fixes and very minor updates. Version 2.1 is currently under active development and includes several new capabilities. Most gmpy2 2.0 code should run unchanged with gmpy2 2.1
Changes in gmpy2 2.1.0a1¶
- Thread-safe contexts are now supported. Properly integrating thread-safe contexts required an extensive rewrite of almost all internal functions. Changing the active context in one thread will no longer change the behavior in other threads.
- by mpz() to allow an unrecognized type to be converted to an mpz instance. The same is true for the other gmpy2 types.
- Support for Cython via the addition of a C-API and a gmpy2.pxd file.
Installation¶
This section will be updated soon to reflect improved support of pip and the wheel format.
Installing gmpy2 on Windows¶
Pre-compiled versions of gmpy2 are available at. Please select the installer that corresponds to the version of Python installed on your computer. Note that either a 32 or 64-bit version of Python can be installed on a 64-bit version of Windows. If you get an error message stating that Python could not be found in the registry, you have the wrong version of the gmpy2 installer.
Installing gmpy2 on Unix/Linux¶
Requirements¶
gmpy2 has only been tested with the most recent versions of GMP, MPFR and MPC. Specifically, for integer and rational support, gmpy2 requires GMP 5.0.x or later. To support multiple-precision floating point arithmetic, MPFR 3.1.x or later is required. MPC 1.0.1 or later is required for complex arithmetic.
Short Instructions¶
If your system includes sufficiently recent versions of GMP, MPFR and MPC, and you have the development libraries installed, compiling should be as simple as:
cd <gmpy2 source directory> python setup.py install
If this fails, read on.
Detailed Instructions¶
If your Linux distribution does not support recent versions of GMP, MPFR and MPC, you will need to compile your own versions. To avoid any possible conflict with existing libraries on your system, it is recommended to use a directory not normally used by your distribution. setup.py will automatically search the following directories for the required libraries:
- /opt/local
- /opt
- /usr/local
- /usr
- /sw
If you can’t use one of these directories, you can use a directory located in your home directory. The examples will use /home/case/local. If you use one of standard directories (say /opt/local), then you won’t need to specify –prefix=/home/case/local to setup.py but you will need to specify the prefix when compiling GMP, MPFR, and MPC.
Create the desired destination directory for GMP, MPFR, and MPC.
$ mkdir /home/case/local
Download and un-tar the GMP source code. Change to the GMP source directory and compile GMP.
$ cd /home/case/local/src/gmp-5.1.0 $ ./configure --prefix=/home/case/local $ make $ make check $ make install
Download and un-tar the MPFR source code. Change to the MPFR source directory and compile MPFR.
$ cd /home/case/local/src/mpfr-3.1.1 $ ./configure --prefix=/home/case/local --with-gmp=/home/case/local $ make $ make check $ make install
Download and un-tar the MPC source code. Change to the MPC source directory and compile MPC.
$ cd /home/case/local/src/mpc-1.0.1 $ ./configure --prefix=/home/case/local --with-gmp=/home/case/local --with-mpfr=/home/case/local $ make $ make check $ make install
Compile gmpy2 and specify the location of GMP, MPFR and MPC. The location of the GMP, MPFR, and MPC libraries is embedded into the gmpy2 library so the new versions of GMP, MPFR, and MPC do not need to be installed the system library directories. The prefix directory is added to the beginning of the directories that are checked so it will be found first.
$ python setup.py install --prefix=/home/case/local
If you get a “permission denied” error message, you may need to use:
$ python setup.py build --prefix=/home/case/local $ sudo python setup.py install --prefix=/home/case/local
Options for setup.py¶
- –force
- Ignore the timestamps on all files and recompile. Normally, the results of a previous compile are cached. To force gmpy2 to recognize external changes (updated version of GMP, etc.), you will need to use this option.
- –mpir
- Force the use of MPIR instead of GMP. GMP is the default library on non-Windows operating systems.
- –gmp
- Force the use of GMP instead of MPIR. MPIR is the default library on Windows operating systems.
- –shared=<...>
- Add the specified directory prefix to the beginning of the list of directories that are searched for GMP, MPFR, and MPC shared libraries.
- –static=<...>
- Create a statically linked library using libraries from the specified path, or from the operating system’s default library location if no path is specified | http://gmpy2.readthedocs.io/en/latest/intro.html | 2017-11-18T02:42:40 | CC-MAIN-2017-47 | 1510934804518.38 | [] | gmpy2.readthedocs.io |
Administrators can configure View Connection Server settings so that remote desktop and application sessions are established directly between the client system and the remote application or desktop virtual machine, bypassing the View Connection Server host. This type of connection is called a direct client connection.
With direct client connections, an HTTPS connection is still made between the client and the View Connection Server host for users to authenticate and select remote desktops and applications, but the second HTTPS connection (the tunnel connection) is not used.
Direct PCoIP connections include the following built-in security features:
PCoIP supports Advanced Encryption Standard (AES) encryption, which is turned on by default, and PCoIP uses IP Security (IPsec).
PCoIP works with third-party VPN clients.
For clients that use the Microsoft RDP display protocol, direct client connections to remote desktops are appropriate only if your deployment is inside a corporate network. With direct client connections, RDP traffic is sent unencrypted over the connection between the client and the desktop virtual machine. | https://docs.vmware.com/en/VMware-Horizon-6/6.0/com.vmware.horizon-view.planning.doc/GUID-B5A6A5F4-70E6-4D84-A665-0B74714DA65D.html | 2017-11-18T03:18:40 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
About this task
If VMware Identity Manager is configured in the AirWatch Organization Group from which you downloaded the installer, you do not need to generate the activation code. If you are activating the connector from the installer, the activation code is pre-populated in the Activation Code field. Continue with the installer.
Prerequisites
(SaaS environments) You have your VMware Identity Manager tenant URL, for example, mycompany.vmwareidentity.com. When you receive your email confirmation, go to your tenant URL and sign in using the local admin credentials you received. This admin is a local user.
Procedure
- Log in to the administration console.
- (SaaS environments) Click Accept to accept the Terms and Conditions agreement.
- Click the Identity & Access Management tab.
- Click Setup.
- On the Connectors page, click Add Connector.
- Enter a name for the connector.
- Click Generate Activation Code.
The activation code displays on the page.
- Copy the activation code and save it.
What to do next
If you are activating the VMware Identity Manager connector component while running the Enterprise Systems Connector installer, copy and paste the connector code into the VMware IDM Connector Activation page of the installer.
If you are activating the VMware Identity Manager connector component later, after installation, see Activate the VMware Identity Manager Connector. | https://docs.vmware.com/en/VMware-Identity-Manager/2.9.1/com.vmware.aw-enterpriseSystemsConn/GUID-0D204230-ACFA-4D93-B8E7-71F42096E663.html | 2017-11-18T03:10:00 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
The Scoreboard widget shows the current value for each metric of objects that you select.
How the Scoreboard Widget Works
Each metric appears in a separate box. The value of the metric determines the color of the box. You define the ranges for each color when you edit the widget. You can customize the widget to use a sparkline chart to show the trend of changes of each metric. If you point to a box, the widget shows the source object and metric data.
You edit a Scoreboard widget after you add it to a dashboard. The widget can display metrics of the objects selected during editing of the widget or selected on another widget. When the Scoreboard widget is not in Self Provider mode, it shows metrics defined in a configuration XML file that you select in the Metric Configuration. It shows 10 predefined metrics if you do not select an XML file or if the type of the selected object is not defined in the XML file.
For example, you can configure the Scoreboard widget to use the sample Scoreboard metric configuration and to receive objects from the Topology Graph widget. When you select a host on a Topology Graph widget, the Scoreboard widget shows the workload, memory, and CPU usage of the host.
To set a source widget that is on the same dashboard, you must use the Widget Interactions menu when you edit a dashboard. To set a source widget that is on another dashboard, you must use the Dashboard Navigation menu when you edit the source dashboard.
Where You Find the Scoreboard Widget
The widget might be included on any of your custom dashboards. On the menu, click Dashboards to display a list of dashboards in the left pane.
Scoreboard Widget Configuration Options
To configure a widget, click the Edit icon on the widget title bar. | https://docs.vmware.com/en/vRealize-Operations-Manager/6.6/com.vmware.vcom.core.doc/GUID-464C4018-FFD3-4D4A-8D6F-EF70C9572289.html | 2017-11-18T03:10:36 | CC-MAIN-2017-47 | 1510934804518.38 | [array(['images/GUID-EAD234AC-8201-4A44-9264-E66D36FF0DC7-low.png', None],
dtype=object) ] | docs.vmware.com |
You can set a policy to continuously listen for traps from an SNMP device.
Before you begin
Verify that you are logged in to the Orchestrator client as an administrator.
Verify that you have a connection to an SNMP device from the Inventory view.
Procedure
- Click the Policy Templates view in the Orchestrator client.
- In the workflows hierarchical list, select and navigate to the SNMP Trap policy template.
- Right-click the SNMP Trap policy template and select Apply Policy.
- In the Policy name text box, type a name for the policy that you want to create.
- (Optional) : In the Policy description text box, type a description for the policy.
- Select an SNMP device for which to set the policy.
- Click Submit to create the policy.
- On the Policies view, right-click the policy that you created and select Start policy.
Results
The trap policy starts to listen for SNMP traps.
What to do next
You can edit the trap policy. | https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-AE78D7CE-E2CE-4522-A9D4-ADE2601132DE.html | 2017-11-18T03:10:42 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.vmware.com |
Site Recovery Manager can use vSphere Replication to replicate data to servers at the recovery site..
Using vSphere Replication and Site Recovery Manager with vSphere Storage vMotion and vSphere Storage DRS.
Using vSphere Replication and VMware Virtual SAN Storage with Site Recovery Manager
You can use VMware Virtual SAN storage with vSphere Replication and Site Recovery Manager. | https://docs.vmware.com/en/vRealize-Suite/2017/com.vmware.vrsuite.srm.doc/GUID-55F74A9F-6F3F-49E9-9C88-9C7BABB11301.html | 2017-11-18T03:10:52 | CC-MAIN-2017-47 | 1510934804518.38 | [array(['images/GUID-6EBCAD73-4F29-400A-8929-3E936109000B-high.png',
'SRM architecture with vSphere Replication'], dtype=object) ] | docs.vmware.com |
NEO Namespace
The NEO Namespace contains an API provided by the NEO blockchain. Methods of the API allow querying the blockchain and manipulation of the persistent store.
Note:
New and
Deprecated tags denote changes between version 1.6 and version 2.0.
Read-only API
Blockchain Query API:
Block class API:
Transaction class API:
Account class API:
Asset class API:
Contract class API:
Storage class API:
Runtime class API:
Note: The source code can be found under
NEO in the
src/neo/SmartContract/StateReader.cs file.
Read/Write API
This type of API will modify the status of the smart contract
Note: The source code for the above API can be found under
NEO in the
src/neo/SmartContract/StateMachine.cs file. | http://docs.neo.org/en-us/sc/api/neo.html | 2017-11-18T02:53:14 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.neo.org |
Hyperadmin supports 3 different modes of content types:
The incomming media type is handled seperately from the outgoing media type. The selection of each media type can be controlled with the following HTTP headers:
Accept: Specifies the media type of the response
Content-Type: Specifies the media type of the request
Alternatively they may also be controlled by the following GET parameters:
_HTTP_ACCEPT: Specifies the media type of the response
_CONTENT_TYPE: Specifies the media type of the request
Note: Arguments passed by the GET method needs to be URL encoded. In JavaScript you can use the built-in encodeURIComponent method like so:
var contentType = encodeURIComponent('application/vnd.Collection.hyperadmin+JSON') | http://django-hyperadmin.readthedocs.io/en/latest/manual/content_types.html | 2017-11-18T02:56:05 | CC-MAIN-2017-47 | 1510934804518.38 | [] | django-hyperadmin.readthedocs.io |
Next: buildbot.util.json, Previous: buildbot.util.collections, Up: Utilities. | http://docs.buildbot.net/0.8.3/buildbot_002eutil_002eeventual.html | 2017-11-18T02:36:10 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.buildbot.net |
Digital Documents just received our 9th straight President's Club award from
Cardiff (TeleForm and LiquidOffice) for 2008!
We are one of only two Cardiff Resellers in the World to receive this prestigious award for this many consecutive years.
If you are looking for a Document Imaging Solution, you have come to the right place! We have assembled a team with over 36 Years of Experience in the field of Fax Servers, Image Processing and Document Management. We are fully certified in all the products that we represent (Cardiff TeleForm, Documentum Application Xtender and Captaris RightFax).
We specialize in Business Process Automation.
We are enthusiastic about and capable of providing solutions to improve
your throughput, accuracy and retrieval systems.
Business process automation is the key to lowering unecessary overhead expenditures and allowing the fastest response times to your clients. We have extensive knowledge in Hardware (Scanners, Optical Jukeboxes, Fax Cards), Software and the necessary
Integration Technologies to provide you with a complete solution. You can be confident that we are fully certified in all of the Document Management Software and Scanning products that we represent.
Are you bewildered by the acronyms in the Document Management industry? We are here to help you understand things like: OMR, ICR, OCR.
Case Studies:
Recent News:
12/19/2007 -
RECOGNIZED AS A CARDIFF 2008 CHANNEL PARTNER PRESIDENT’S CLUB AWARD WINNER
2/6/2007 - Cardiff President's Club
3/28/2006 - Cardiff President's Club 2005 - Number 1 U.S. Reseller; Best TeleForm Solution 2006
DD Sponsors 9th Annual Heart Ride
10/6/2005 - Anthony J. Bettencourt, CEO of Verity, Inc., presides over the opening bell to celebrate its 10-year anniversary on NASDAQ.
Digital Documents Hosts Employee Lifecycle Seminar - Click Here to View Presentation
TeleForm Software Now Includes Secure Web-based Scanning and Document Capture Capability : DD Featured in Press Release
© 2003
DigitalDocuments, Inc | http://www.d-docs.com/default.htm | 2008-05-13T19:24:21 | crawl-001 | crawl-001-006 | [] | www.d-docs.com |
Introduction
SwiftyBeaver Logging Framework is the extensible & lightweight open-source logger for Swift 2.2 and later. It is great for development & release due to its support for many (custom) logging destinations.
SwiftyBeaver Framework is also an important the foundation of the SwiftyBeaver Logging Platform which consists of:
-.
Framework Unique Feature Set
- Log to Xcode Console and / or log to a file
- Add custom log destination handlers to log to Loggly, Redis, etc.
- Send your logs with end-to-end AES256 encryption to the SwiftyBeaver Mac App
- Colored output to log file, etc.
- Uses own serial background queues/threads for a great performance
- Log levels which are below the set minimum are not executed for even better release performance
- Increases productivity & saves a lot of time thanks to "Needle in the Haystack" mode
- Easy & convenient configuration
- Use multiple logging destinations & settings, even for the same type
- Already comes with good defaults
- Fully customizable log format
- Mighty filters
- Use
log.debug("foo")syntax
- Simple installation via Carthage, CocoaPods, Swift Package Manager or download
- Very detailed logging (optional):
- time (with microsecond precision)
- level (output in color)
- thread name (if not main thread)
- filename, function & line
- message (can be string or a variable of any type)
- Native support for leading Xcode development plugins
Let’s Log!
Start logging with just 6 lines of code, follow the installation guide and basic setup mentioned below. And if you have questions: we are always here to help! | http://docs.swiftybeaver.com/article/7-introduction | 2017-06-22T14:07:47 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.swiftybeaver.com |
Product Downloads Distribution
With WHMCS, you can setup products that have downloads associated with them. This is useful if you want to offer software, templates or other files for purchasing. With the download distribution, WHMCS will automatically handle the download permissions and only allow the items to be downloaded by customers that purchase the associated required product and only when that product is active in their account.
You need to begin by adding the download to the support center downloads section. This is done in Support > Downloads > Add When adding the download, you need to tick the "Product Download" checkbox to activate the download restrictions for that download. This is shown in the screenshot below:
Once the download has been added, you can then enable the download on products in the Setup > Products/Services > Edit Product pages. This is done from the Other tab. You can select multiple downloads to be associated with a product by single clicking each item in the "Available Files" menu. A client who holds this product will be able to download the files appearing in the "Selected Files" menu. This is shown below:
When a client has an active order for the product, the download will appear on the Product Details page. This can be accessed by click the My Services Link in the client area, then the green arrow next to the product. The downloads are available on that product detail page.
Immediate Download
As standard you are required to accept an order before the client will be able to download the file. It is possible to have the download available immediately. To do this navigate to Setup > Products/Services > Edit > Module Settings tab and select the Autorelease module.
You will now be able to select the usual auto-setup options such as "Automatically setup the product as soon as the first payment is received".
Product Addons
In addition to using a product to grant access to download a file, it is possible to use a product addon to grant access to download a file. This can be useful for upselling related software to a client when ordering a product.
As above, begin by adding the download to the support center downloads section. This is done in Support > Downloads > Add When adding the download, you need to tick the "Product Download" checkbox to activate the download restrictions for that file.
Next create the addon via the Setup > Products/Services > Product Addons page and configure it in the normal way. Select the file you wish to become available from the Associated Download menu. Ctrl + Click to select multiple files.
Once the addon's status is Active, the chosen file(s) will become available for the client to download under the parent product's Downloads tab in the client area (Services > My Services > View Details > Downloads tab). | http://docs.whmcs.com/Product_Downloads_Distribution | 2017-06-22T14:20:21 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.whmcs.com |
At i-Docs we love a couple of things: sharing and our community and this post is a perfect combination of the two!
A couple of brand new slideshows from Siobhan O’Flynn which she presented at the CWC Strategic Digital Leadership Accelerator in Toronto last week, particularly useful for people interested in interactivity and transmedia:
Also, although this presentation is from way back when (2011), it’s a must-see for anyone interested or involved in interactive documentary: | http://i-docs.org/2012/11/06/interactivity-and-transmedia-new-presentations/ | 2017-06-22T14:23:03 | CC-MAIN-2017-26 | 1498128319575.19 | [] | i-docs.org |
it via pip:
$ pip install pyexcel-io
or clone it and install it:
$ git clone $ cd pyexcel-io $ python setup.py install
For individual excel file formats, please install them as you wish:
In order to manage the list of plugins installed, you need to use pip to add or remove a plugin. When you use virtualenv, you can have different plugins per virtual environment. In the situation where you have multiple plugins that does the same thing in your environment, you need to tell pyexcel which plugin to use per function call. For example, pyexcel-ods and pyexcel-odsr, and you want to get_array to use pyexcel-odsr. You need to append get_array(..., library=’pyexcel-odsr’).
Footnotes
- Saving multiple sheets as CSV format
- File formats: .csvz and .tsvz
- Working with sqlalchemy
- Working with django database
- Working with xls, xlsx, and ods formats | http://pyexcel-io.readthedocs.io/en/latest/ | 2017-06-22T14:03:40 | CC-MAIN-2017-26 | 1498128319575.19 | [] | pyexcel-io.readthedocs.io |
Creating a Menu on WordPress
Instructions to add WordPress menus.
To create a navigation menu:
1. Go to Appearance > Menus.
2. Use left pane of the menu settings to add created pages, posts, custom links and categories to a menu. Simply select the appropriate check boxes and click the Add to Menu button.
3. Use the Menu Structure pane to drag each item into the order you prefer.
4. You can change the menu name in the text box.
NOTE: Do not forget to click the Save Menu button when you finish working with Menus.
5. Click the arrow on the right of the item to reveal additional configuration options.
| https://docs.10web.io/docs/themes/adding-content/creating-menus.html | 2018-04-19T13:31:30 | CC-MAIN-2018-17 | 1524125936969.10 | [array(['http://docscdn.10web.io/themes/themes-21.png',
'WordPress Menus page'], dtype=object)
array(['http://docscdn.10web.io/themes/themes-22.png', 'Add to Menu'],
dtype=object)
array(['http://docscdn.10web.io/themes/themes-23.png', 'Menu Structure'],
dtype=object)
array(['http://docscdn.10web.io/themes/themes-24.png', 'Add Custom Link'],
dtype=object) ] | docs.10web.io |
Release Notes for Version 6.2.4
Note: This product has been discontinued. Technical Guidance ends July 15th 2018.
(based on Apache HTTP Server 2.4.27)
What’s in the Release Notes
These release notes cover the following topics:
- Important Support Lifecycle Announcements
- Important Notice for Microsoft Windows Users
- Experimental HTTP/2 Protocol Support Included
- SSL Encryption and mod_ssl Configuration
- Update to mod_bmx
- Hyperic Monitoring
- Custom Module Deployment
- Apache HTTP User Resources
- Pivotal Web Server 6.2.1 Components Updated
- Pivotal Web Server 6.2.1 Security Issues Addressed
Important Support Lifecycle Announcements
All users must note the End of Availability of the Pivotal Web Server product, effective March 1st, 2015. General Support for the Pivotal Web Server releases 5.5 and 6.2 ended on July 15, 2017. Users of previous vFabric Web Server and Pivotal Web Server releases are strongly encouraged to update to the final 6.2.4 release for security fixes.
Note that July 15, 2017 also marked the End of General Support for open source Apache HTTP Server version 2.2. No further 2.2 series releases will occur. All customers seeking ongoing support for open source Apache HTTP Server must migrate to version 2.4 to receive General Support. Only Technical Guidance is available for version 2.2 and will cease on July 15, 2018.
Refer to for Pivotal product lifecycle policy and details.
Important Notice for Microsoft Windows Users
With the launch of the Pivotal Web Server 6.2.0 product, the x86-windows and x64-windows packages are now built using Visual C++ 19.0, a component of Microsoft Visual Studio 2015 Update 1. There are two immediate impacts on Windows Users upgrading to this latest release:
- Users on Windows 7, 8 or 8.1, or on Windows Server 2008 or 2012, must obtain and install the Microsoft Update for Universal C Runtime in Windows (also distributed with the Visual C++ Redistributable for Visual Studio 2015) from Microsoft; - and be certain to observe the prerequisites noted for that package. Installing this x86 or x64 package (as applicable) from Microsoft ensures that the runtime is updated by the Windows Update service for any new security vulnerabilities of the C Runtime itself. Windows 10 and Windows Server 2016 users do not need to obtain this package, as this is a core component of these operating systems.
- Users who have provisioned third-party modules in Pivotal Web Server 6.1.0 or earlier should consider rebuilding these modules for this new release 6.2.4 under the Visual Studio 2015 Update 1 product. This ensures that any resources allocated from the C Runtime by the module and consumed by the server or core modules (or vice versa) will not subject the server to possible crash-bugs. It will also make a significant difference in the resources consumed by the running server because each module loaded into the server that was linked to a different, earlier C Runtime causes each of these different C Runtimes to be loaded in process at once. The corresponding pivotal-web-server-devel packages include the apxs.bat script which simplifies the process of compiling modules.
Experimental HTTP/2 Protocol Support Included
This release introduces the mod_proxy_http2.so module, a proxy which establishes HTTP/2 connections to backend servers, and includes mod_http2.so, a protocol module that enables HTTP/2 connections to the deployed PWS instance. These are still beta/experimental features of Pivotal Web Server, and leverage the latest nghttp2 library to provide protocol compliance. These features should not be used for production environments unless they are carefully tested in the specific deployment scenario desired, with the ability to disable them in production and revert to classic HTTP/1.1 support. The mod_http2 and mod_proxy_http2 modules are provided for your testing and evaluation. While these are not covered under Premium Support [1] production SLA, we do encourage you to test them and file any issues you encounter with Pivotal technical support. Issues with these modules will be treated as Developer Support [1] issues with regards to response time.
For detailed information on these modules, refer to:
-
-
SSL Encryption and mod_ssl Configuration
This release includes the updated default configuration and hardening recommendations originally shipped with Pivotal Web Server release 6.1.0. The default extra/httpd-ssl.conf configuration improves the robustness of SSL/TLS cryptography. Users should review existing instances to ensure that mod_ssl features including SSLProtocol and SSLCipherList meet or exceed modern guidance. It may be especially helpful to use the instance’s certificate expiration dates to trigger a periodic review of these configurations, as guidance continues to evolve as end-users update their browsers and other clients to more modern TLS capabilities.
This release includes a modification to the default certificate creation in the ./newserver instance creation utility. This tool will now identify the certificate with a SHA256 hash (rather than SHA1) and will now encrypt a copy of the certificate using AES256 (rather than DES). This change anticipates the complete deprecation of SHA1 certificate hashes in the internet ecosystem by the end of 2016, in alignment with all major browser and service providers.
Update to mod_bmx
The Pivotal Web Server 6.2.0 release included a new update of the mod_bmx modules. If upgrading from earlier versions, users are cautioned to purge old bmx data collection files, bmx_vhost.db.dir and bmx_vhost.db.pag in each server instance logs/ directories. By default, the new mod_bmx_vhost will name these files as bmx_vhost1.db.* in order to prevent such collisions, but any user overriding the BMXVHostDBMFilename must rename or purge their vhost summary collection files after upgrading to PWS 6.2.4, prior to restarting the server instances. Unintelligible summaries and even server segfaults may result from using the old format vhost summary files.
Hyperic Monitoring
Since the release of 6.1.0, the default instance leaves the mod_bmx modules not-loaded and commented out. There is a performance impact, specifically in collecting the mod_bmx_vhost summaries, and these modules should only be loaded if this data is queried. Users requiring bmx monitoring, such as for the Web Server plug-in to Hyperic, may install new instances using one of three methods:
Use two additional –subst flags to override the default, and load mod_bmx modules for monitoring at initial startup:
./newserver --subst "#LoadModule bmx=LoadModule bmx" \ --subst "#Include conf/extra/httpd-info=Include conf/extra/httpd-info" [...]
To enable mod_bmx for all new instances, uncomment these lines in the deployed product httpd-2.4/_instance/conf/httpd.conf template file, prior to invoking ./newserver.
Simply modify each desired instance’s {instance}/conf/httpd.conf file after invoking ./newserver to uncomment these lines, prior to starting the server instance.
Custom Module Deployment
The Pivotal Web Server 6.2.0 release provides the opportunity to update and introduce several subcomponents, which may introduce a possibility of incompatibility issues for users who have extended the product with their own modules that are built upon these subcomponents. This specifically includes an update from OpenSSL 1.0.1 to version 1.0.2, and the introduction of Nghttp2 to support mod_http2.
The 6.2.1 release dropped all support of SSLv2 ciphers and protocol; these are no longer compiled into the distributed openssl library component. Third party components compiled against the openssl component of an earlier PWS release may need to be recompiled if load-time errors are encountered. The 6.2.0 release httpd mod_ssl build had already dropped all references to SSLv2, so this change is for completeness as strongly recommended by the OpenSSL Project.
We encourage users who have compiled their own or third-party modules (apart from modules distributed with Pivotal Web Server) to carefully review their modules’ dependencies, and test their functionality before moving these into production on 6.2.1 or later. Most modules with no direct dependencies on these subcomponents should remain compatible between modules built for PWS 6.0 or 6.1 and this 6.2.4 release. Note that modules compiled for PWS 5.x are not binary compatible with PWS 6.x releases, these modules must be recompiled, and in some cases, ported to httpd 2.4-specific APIs.
The previous 6.0.x release -devel- packages for building add-on modules have been streamlined in 6.1.0 and later to flatten the development package build/ and include/ trees, avoiding many extraneous build-1/, apr-1/ and libxml2/ subdirectory structures. Custom module build makefiles may need to be adjusted accordingly to build against the new -devel- package structure. Module builds that rely upon apxs, {component}-config scripts or pkginfo should not be affected.
Pivotal Web Server 6.2.4 Components Updated
The following components were updated since the 6.2.0 release:
- Apache HTTP Server 2.4.27
- Apache APR library 1.6.2
- Apache APR-util library 1.6.0
- Apache Tomcat mod_jk connector 1.2.42
- Apache Tomcat tcnative connector 1.2.12
- Expat 2.2.1
- libxml2 2.9.4
- nghttp2 1.24.0
- OpenSSL 1.0.2l
- PCRE 8.41
- zlib 1.2.11
Pivotal Web Server 6.2.4 Security Issues Addressed
Apache HTTP Web Server
The following vulnerabilites are addressed since the previous Pivotal Web Server 6.2.3 release:
- CVE-2017-9788 modauthdigest: Uninitialized memory reflection
- CVE-2017-7679 mod_mime: one byte overread by malicious response headers
- CVE-2017-7668 core: unbounded token list parsing
- CVE-2017-7659 mod_http2: NULL pointer dereference from malicious request
- CVE-2017-3169 mod_ssl: NULL pointer dereference by third-party modules
- CVE-2017-3167 auth: authentication bypass by third-party modules
Refer to for all security vulnerabilities addressed since release 2.4.18, shipped with Pivotal Web Server 6.2.0.
Apache Tomcat mod_jk Connector
No vulnerabilites are addressed since release 1.2.41, shipped with Pivotal Web Server 6.2.0.
Refer to for previous security vulnerabilities.
Apache Tomcat tcnative Connector
No vulnerabilites are addressed since release 1.2.3, shipped with Pivotal Web Server 6.2.0.
Refer to for previous security vulnerabilities.
Apache APR Library
No vulnerabilites are addressed since release 1.5.2, shipped with Pivotal Web Server 6.2.0.
Refer to for previous security vulnerabilities.
Apache APR-util Library
No vulnerabilites are addressed since release 1.5.2, shipped with Pivotal Web Server 6.2.0.
Refer to for previous security vulnerabilities.
Expat Library
The following vulnerabilites are addressed since the previous Pivotal Web Server 6.2.3 release:
- CVE-2017-9233: External entity infinite loop DoS
- CVE-2016-9063: Detect integer overflow
- CVE-2016-5300: Additional enhancements for higher quality entropy
Refer to for all security vulnerabilities addressed since release 2.1.0, shipped with Pivotal Web Server 6.2.0.
nghttp2 Library
Refer to for all security vulnerabilities addressed since release 1.5.0 shipped with Pivotal Web Server 6.2.0, or addressed since release 1.18.0 shipped with the previous Pivotal Web Server 6.2.3 maintenance release.
This project does not track security fixes independent of bug fixes. The combination of nghttp2 library and modhttp2 or modproxy_http2 in Pivotal Web Server is experimental, as noted above in this release note.
OpenSSL Library
The following vulnerabilites are addressed since the previous Pivotal Web Server 6.2.3 release:
- CVE-2017-3732: carry propagating bug in x86_64 Montgomery squaring
- CVE-2017-3731: out-of-bounds read in truncated RC4-MD5 packet
- CVE-2016-7055: carry bug in Broadwell-specific Montgomery squaring
Refer to for all security vulnerabilities addressed since release 1.0.2e, shipped with Pivotal Web Server 6.2.0.
PCRE Library
The following vulnerabilites are addressed since the previous Pivotal Web Server 6.2.3 release:
- CVE-2017-7246 Stack-based buffer overflow in pcre32copysubstring
- CVE-2017-7245 Stack-based buffer overflow in pcre32copysubstring
- CVE-2017-6004 out-of-bounds read in compilebracketmatchingpath
Refer to for all security vulnerabilities addressed since release 8.38, shipped with Pivotal Web Server 6.2.0.
zlib Library
The following vulnerabilites are addressed since the previous Pivotal Web Server 6.2.3 release:
CVE-2016-9843 crc32_big function big-endian CRC calculation defect CVE-2016-9842 inflateMark function left shifts of negative integers defect CVE-2016-9841 improper pointer arithmetic in inffast.c CVE-2016-9840 improper pointer arithmetic in inftrees.c
Refer to for all security vulnerabilities addressed since release 1.2.8, shipped with Pivotal Web Server 6.2.0. | http://webserver.docs.pivotal.io/doc/60/relnotes/release-notes624.html | 2018-04-19T13:38:46 | CC-MAIN-2018-17 | 1524125936969.10 | [] | webserver.docs.pivotal.io |
All content with label datagrid+non-blocking., xa
more »
( - datagrid, - non-blocking )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/datagrid+non-blocking | 2018-04-19T13:23:20 | CC-MAIN-2018-17 | 1524125936969.10 | [] | docs.jboss.org |
Slider and MessageBox
Dialogs
There are two dialogs that replace the existing dialog box, the Slider and the MessageBox from Dynamics AX 2012:
- Slider
- MessageBox
The following sections discuss the specific goals for each concept.
Slider
The slider, or slider dialog, is a dialog box that "slides" in on top of the active page's content from the right edge of the screen. In the following screen shot, the slider is the white region that has the caption Start rental on the right side of the window. Notice that the area to the left of the slider is shaded to help the user understand that the page beneath the slider isn't currently available for interaction.
After a slider opens, the user can dismiss it in two ways:
- Perform an action within the slider that causes the underlying form to dismiss itself. For example, click Cancel, or enter required information and then click OK.
- Click outside the slider in the shaded area to the left. This cancels the slider, and no further actions are performed.
A slider contains a modeled form and is used to gather information from the user. Therefore, a slider should be used in most situations where a dialog box has been used in the past. For example, a slider is typically used when the user creates a new record, as in the preceding screen shot. However, a slider should not be used for simple notifications or messages to the user. For these situations, a MessageBox should be used, as described in the next section. To model a slider, you create a form, and then set the Style property to Dialog on the Form.Design node. You then model the form elements that you require (for example, fields and buttons). The caption is defined by Form.Design.Caption. To simplify the process for creating sliders, we have provided the SysBPStyle_Dialog form as a template for modeling slider dialogs. To use this template, copy it into a new form, and then extend it as you require.
MessageBox
A MessageBox is a type of dialog that is rendered as a "lightbox" on top of an existing page. A MessageBox appears as a full-width modal pop-up. The following screen shot shows an example of a MessageBox.
A MessageBox is the correct mechanism to use when you must interrupt the user to notify him or her about a critical situation. For example, a MessageBox is used to display a Message center error message to the user. Because a MessageBox is modal, the user can't interact with the page beneath the MessageBox until that MessageBox has been dealt with or dismissed. In the preceding screen shot, notice that the page is obscured by the MessageBox. Additionally, the areas above and below the MessageBox are shaded to help the user understand that the page isn't currently available for interaction. Be aware that, unlike a slider, the user can't dismiss a MessageBox by clicking outside it, in the shaded areas. A MessageBox can be triggered by using either the Box application programming interface (API) or any of the methods that are described earlier for triggering the display of an error. For more information, see the Message API: Message center, message bar, message details. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/user-interface/slider-messagebox | 2018-04-19T13:59:30 | CC-MAIN-2018-17 | 1524125936969.10 | [] | docs.microsoft.com |
Overview of potential upgrade issues (Visual C++)
Over the years, the Microsoft C++ compiler has undergone many changes, along with changes in the C++ language itself, the C++ Standard Library, the C runtime (CRT), and other libraries such as MFC and ATL. As a result, when upgrading an application from an earlier version of Visual Studio you might encounter compiler and linker errors and warnings in code that previously compiled cleanly. The older the original code base, the greater the potential for such errors. This overview summarizes the most common classes of issues you are likely to encounter, and provides links to more detailed information.
Note
In the past, we have recommended that upgrades that span several versions of Visual Studio should be performed incrementally one version at a time. We no longer recommend this approach. We have found that it's almost always simpler to upgrade to the most current version of Visual Studio no matter how old the code base.
Questions or comments about the upgrade process can be sent to vcupgrade@microsoft.com.
Library and toolset dependencies
Note
This section applies to applications and libraries built with Visual Studio 2013 and earlier. The toolsets used in Visual Studio 2015, Visual Studio 2017 and Visual Studio 2019 are binary compatible. For more information, see C++ Binary Compatibility between Visual Studio 2015 and Visual Studio 2019.
When upgrading an application to a new version of Visual Studio, it's both advisable and in many cases necessary to also upgrade all libraries and DLLs that the application links to. It requires either that you have access to the source code, or that the library vendor can provide new binary files compiled with the same major version of the compiler. If one of these conditions is true, then you can skip this section, which deals with the details of binary compatibility. If neither is the case, then you might not be able to use the libraries in your upgraded application. The information in this section will help you understand whether you can proceed with the upgrade.
Toolset
The .obj and .lib file formats are well-defined and rarely change. Sometimes additions are made to these file formats, but these additions generally don't affect the ability of newer toolsets to consume object files and libraries produced by older toolsets. The major exception here is if you compile using /GL (Whole Program Optimization). If you compile using /GL, the resulting object file can only be linked using the same toolset that was used to produce it. So, if you produce an object file with /GL using the Visual Studio 2017 (v141) compiler, you must link it using the Visual Studio 2017 (v141) linker. That's because the internal data structures within the object files are not stable across major versions of the toolset, and newer toolsets don't understand the older data formats.
C++ doesn't have a stable application binary interface (ABI). But Visual Studio maintains a stable C++ ABI for all minor versions of a release. Visual Studio 2015 (v140), Visual Studio 2017 (v141), and Visual Studio 2019 (v142) vary only in their minor version. They all have the same major version number, which is 14. For more information, see C++ Binary Compatibility between Visual Studio 2015 and Visual Studio 2019.
If you have an object file that has external symbols with C++ linkage, that object file may not link correctly with object files produced by a different major version of the toolset. There are many possible outcomes: the link may fail entirely (for example, if name decoration changed). The link may succeed, and things may not work at runtime (for example, if type layout changed). Or in many cases, things may happen to work and nothing will go wrong. Also note, while the C++ ABI is not stable, the C ABI and the subset of the C++ ABI required for COM are stable.
If you link to an import library, any later version of the Visual Studio redistributable libraries that preserve ABI compatibility may be used at runtime. For example, if your app is compiled and linked using the Visual Studio 2015 Update 3 toolset, you can use any Visual Studio 2017 or Visual Studio 2019 redistributable, because the 2015, 2017, and 2019 libraries have preserved backward binary compatibility. The reverse isn't true: You can't use a redistributable for an earlier version of the toolset than you used to build your code, even if they have a compatible ABI.
Libraries
If you compile a source file using a particular version of the Visual Studio C++ libraries header files (by #including the headers), the resulting object file must be linked with the same version of the libraries. So, for example, if your source file is compiled with the Visual Studio 2015 Update 3 <immintrin.h>, you must link with the Visual Studio 2015 Update 3 vcruntime library. Similarly, if your source file is compiled with the Visual Studio 2017 version 15.5 <iostream>, you must link with the Visual Studio 2017 version 15.5 Standard C++ library, msvcprt. Mixing-and-matching is not supported.
For the C++ Standard Library, mixing-and-matching has been explicitly disallowed via use of #pragma detect_mismatch in the standard headers since Visual Studio 2010. If you try to link incompatible object files, or if you try to link with the wrong standard library, the link will fail.
For the CRT, mixing-and-matching was never supported, but it often just worked, at least until Visual Studio 2015 and the Universal CRT, because the API surface did not change much over time. The Universal CRT broke backwards compatibility so that in the future we can maintain backwards compatibility. In other words, we have no plans to introduce new, versioned Universal CRT binaries in the future. Instead, the existing Universal CRT is now updated in-place.
To provide partial link compatibility with object files (and libraries) compiled with older versions of the Microsoft C Runtime headers, we provide a library, legacy_stdio_definitions.lib, with Visual Studio 2015 and later. This library provides compatibility symbols for most of the functions and data exports that were removed from the Universal CRT. The set of compatibility symbols provided by legacy_stdio_definitions.lib is sufficient to satisfy most dependencies, including all of the dependencies in libraries included in the Windows SDK. However, there are some symbols that were removed from the Universal CRT for which it's not possible to provide compatibility symbols. These symbols include some functions (for example, __iob_func) and the data exports (for example, __imp___iob, __imp___pctype, __imp___mb_cur_max).
If you have a static library that was built with an older version of the C Runtime headers, we recommend the following actions, in this order:
1. Rebuild the static library using the new version of Visual Studio and the Universal CRT headers to support linking with the Universal CRT. This approach is the fully supported (and thus best) option.

2. If you can't (or don't want to) rebuild the static library, you may try linking with legacy_stdio_definitions.lib. If it satisfies the link-time dependencies of your static library, you will want to thoroughly test the static library as it's used in the binary, to make sure that it isn't adversely affected by any of the behavioral changes that were made to the Universal CRT.

3. If your static library’s dependencies are not satisfied by legacy_stdio_definitions.lib or if the library doesn't work with the Universal CRT due to the aforementioned behavioral changes, we would recommend encapsulating your static library into a DLL that you link with the correct version of the Microsoft C Runtime. For example, if the static library was built using Visual Studio 2013, you would want to build this DLL using Visual Studio 2013 and the Visual Studio 2013 C++ libraries as well. By building the library into a DLL, you encapsulate the implementation detail that is its dependency on a particular version of the Microsoft C Runtime. You'll want to be careful that the DLL interface doesn't leak details of which C Runtime it uses, for example, by returning a FILE* across the DLL boundary or by returning a malloc-allocated pointer and expecting the caller to free it.
Use of multiple CRTs in a single process isn't in and of itself problematic (indeed, most processes will end up loading multiple CRT DLLs; for example, Windows operating system components will depend on msvcrt.dll and the CLR will depend on its own private CRT). Problems arise when you jumble state from different CRTs. For example, you should not allocate memory using msvcr110.dll!malloc and attempt to deallocate that memory using msvcr120.dll!free, and you should not attempt to open a FILE using msvcr110!fopen and attempt to read from that FILE using msvcr120!fread. As long as you don’t jumble state from different CRTs, you can safely have multiple CRTs loaded in a single process.
For more information, see Upgrade your code to the Universal CRT.
Errors due to project settings
To begin the upgrade process, open an older project/solution/workspace in the latest version of Visual Studio. Visual Studio will create a new project based on the old project settings. If the older project has library or include paths that are hard-coded to non-standard locations, it's possible that the files in those paths won’t be visible to the compiler when the project uses the default settings. For more information, see Linker OutputFile setting.
In general, now is a great time to organize your project code properly in order to simplify project maintenance and help get your upgraded code compiling as quickly as possible. If your source code is already well-organized, and your older project is compiled with Visual Studio 2010 or later, you can manually edit the new project file to support compilation on both the old and new compiler. The following example shows how to compile for both Visual Studio 2015 and Visual Studio 2017:
```xml
<PlatformToolset Condition="'$(VisualStudioVersion)'=='14.0'">v140</PlatformToolset>
<PlatformToolset Condition="'$(VisualStudioVersion)'=='15.0'">v141</PlatformToolset>
```
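If the project should also open under Visual Studio 2019, whose version number is 16.0 and whose toolset is v142, the same conditional pattern extends with one more line:

```xml
<PlatformToolset Condition="'$(VisualStudioVersion)'=='16.0'">v142</PlatformToolset>
```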
LNK2019: Unresolved external
For unresolved symbols, you might need to fix up your project settings.
- If the source file is in a non-default location, did you add the path to the project’s include directories?

- If the external is defined in a .lib file, have you specified the lib path in the project properties, and is the correct version of the .lib file located there?

- Are you attempting to link to a .lib file that was compiled with a different version of Visual Studio? If so, see the previous section on library and toolset dependencies.

- Do the types of the arguments at the call site actually match an existing overload of the function? Verify that the underlying types for any typedefs in the function’s signature and in the code that calls the function are what you expect them to be.
To troubleshoot unresolved symbol errors, you can try using dumpbin.exe to examine the symbols defined in a binary. Try the following command line to view symbols defined in a library:
```cmd
dumpbin.exe /LINKERMEMBER somelibrary.lib
```
/Zc:wchar_t (wchar_t Is Native Type)
In Microsoft Visual C++ 6.0 and earlier, wchar_t was not implemented as a built-in type, but was declared in wchar.h as a typedef for unsigned short. The C++ standard requires that wchar_t be a built-in type, so code that depends on the old typedef can fail to compile, or can mangle names differently, when it's upgraded. Prefer fixing such code over compiling with /Zc:wchar_t-. For more information, see /Zc:wchar_t (wchar_t Is Native Type).
Upgrading with the linker options /NODEFAULTLIB, /ENTRY, and /NOENTRY
The /NODEFAULTLIB linker option (or the Ignore All Default Libraries linker property) tells the linker not to automatically link in the default libraries such as the CRT. It means that each library has to be listed as input individually. This list of libraries is given in the Additional Dependencies property in the Linker section of the Project Properties dialog.
Projects that use this option present a problem when upgrading, because the contents of some of the default libraries were refactored. Because each library has to be listed in the Additional Dependencies property or on the linker command line, you need to update the list of libraries to use all the current names.
The following table shows the libraries whose contents changed starting with Visual Studio 2015. To upgrade, you need to add the new library names in the second column to the libraries in the first column. Some of these libraries are import libraries, but that shouldn’t matter.

| If your code links to this library | Add these libraries |
|---|---|
| libcmt.lib | libucrt.lib, libvcruntime.lib |
| libcmtd.lib | libucrtd.lib, libvcruntimed.lib |
| msvcrt.lib | ucrt.lib, vcruntime.lib |
| msvcrtd.lib | ucrtd.lib, vcruntimed.lib |
The same issue also applies if you use the /ENTRY option or the /NOENTRY option, which also have the effect of bypassing the default libraries.
Errors due to improved language conformance
The Microsoft C++ compiler has continuously improved its conformance to the C++ standard over the years. Code that compiled in earlier versions might fail to compile in later versions of Visual Studio because the compiler correctly flags an error that it previously ignored or explicitly allowed.
For example, the /Zc:forScope switch was introduced early in the history of MSVC. It permits non-conforming behavior for loop variables. That switch is now deprecated and might be removed in future versions. It is highly recommended to not use that switch when upgrading your code. For more information, see /Zc:forScope- is deprecated.
One example of a common compiler error you might see when upgrading is when a const argument, such as a string literal, is passed to a non-const parameter. Older versions of the compiler did not always flag it as an error. For more information, see The compiler's more strict conversions.
For more information on specific conformance improvements, see Visual C++ change history 2003 - 2015 and C++ conformance improvements in Visual Studio.
Errors involving <stdint.h> integral types
The <stdint.h> header defines typedefs and macros that, unlike built-in integral types, are guaranteed to have a specified length on all platforms. Some examples are uint32_t and int64_t. The <stdint.h> header was added in Visual Studio 2010. Code that was written before 2010 might have provided private definitions for those types and those definitions might not always be consistent with the <stdint.h> definitions.
If the error is C2371, and a stdint type is involved, it probably means that the type is defined in a header either in your code or a third-party lib file. When upgrading, you should eliminate any custom definitions of <stdint.h> types, but first compare the custom definitions to the current standard definitions to ensure you are not introducing new problems.
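The conflict typically looks like the following (an illustrative sketch): a pre-2010 private typedef collides with the standard header, and the fix is to delete the private definition and rely on the standard one.

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// A legacy private definition such as:
//   typedef unsigned int uint32_t;   // C2371: 'uint32_t': redefinition
// must be removed once <cstdint>/<stdint.h> is in the include graph.
// The standard typedef guarantees the width on every platform:
inline std::size_t BitsInUint32() {
    return sizeof(std::uint32_t) * 8;
}
```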
You can press F12 (Go to Definition) to see where the type in question is defined.
The /showIncludes compiler option can be useful here. In the Property Pages dialog box for your project, open the C/C++ > Advanced page and set Show Includes to Yes. Then rebuild your project and see the list of #includes in the output window. Each header is indented under the header that includes it.
Errors involving CRT functions
Many changes have been made to the C runtime over the years. Many secure versions of functions have been added, and some have been removed. Also, as described earlier in this article, Microsoft’s implementation of the CRT was refactored in Visual Studio 2015 into new binaries and associated .lib files.
If an error involves a CRT function, search Visual C++ change history 2003 - 2015 or C++ conformance improvements in Visual Studio to see if those articles contain any additional information. If the error is LNK2019, unresolved external, make sure the function has not been removed. Otherwise, if you are sure that the function still exists, and the calling code is correct, check to see whether your project uses /NODEFAULTLIB. If so, you need to update the list of libraries so that the project uses the new universal (UCRT) libraries. For more information, see the section above on Library and dependencies.
If the error involves printf or scanf, make sure that you are not privately defining either function without including stdio.h. If so, either remove the private definitions or link to legacy_stdio_definitions.lib. You can set this library in the Property Pages dialog under Configuration Properties > Linker > Input, in the Additional Dependencies property. If you are linking with Windows SDK 8.1 or earlier, then add legacy_stdio_definitions.lib.
If the error involves format string arguments, it's probably because the compiler is stricter about enforcing the standard. For more information, see the change history. Pay close attention to any errors here, because they can potentially represent a security risk.
Errors due to changes in the C++ standard
The C++ standard itself has evolved in ways that are not always backward compatible. The introduction in C++11 of move semantics, new keywords, and other language and standard library features can potentially cause compiler errors and even different runtime behavior.
The C++ standard now specifies that conversions from unsigned to signed integral values are considered as narrowing conversions. The compiler did not raise this warning prior to Visual Studio 2015. Inspect each occurrence to make sure the narrowing doesn't impact the correctness of your code.
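A sketch of the diagnostic and one way to make the intent explicit (the out-of-range policy below is illustrative, not prescribed by the standard):

```cpp
#include <cassert>
#include <climits>

// List-initialization now rejects the silent narrowing:
//   unsigned int u = 3000000000u;
//   int n{ u };                 // error: narrowing conversion
// Make the conversion explicit after verifying it cannot overflow:
inline int ToSignedOr(unsigned int u, int fallback) {
    return (u <= static_cast<unsigned int>(INT_MAX))
               ? static_cast<int>(u)
               : fallback;  // illustrative policy for out-of-range input
}
```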
Warnings to use secure CRT functions
Over the years, secure versions of C runtime functions have been introduced. Although the old, non-secure versions are still available, it's recommended to change your code to use the secure versions. The compiler will issue a warning for usage of the non-secure versions. You can choose to disable or ignore these warnings. To disable the warning for all projects in your solution, open View > Property Manager, select all projects for which you want to disable the warning, then right-click on the selected items and choose Properties. In the Property Pages dialog under Configuration Properties > C/C++ > Advanced, select Disable Specific Warnings. Click the drop-down arrow and then click on Edit. Enter 4996 into the text box. (Don't include the 'C' prefix.) For more information, see Porting to use the Secure CRT.
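The mechanical change usually looks like the following sketch. `strcpy_s` is the MSVC/Annex K form; the portable helper below stands in for it here, with the same always-null-terminate contract, so the snippet compiles everywhere.

```cpp
#include <cstring>
#include <cstddef>
#include <cassert>

// Non-secure call that triggers warning C4996:
//   strcpy(dest, src);
// Secure replacement on MSVC (takes the destination size):
//   strcpy_s(dest, sizeof dest, src);
// Portable stand-in used for illustration:
inline void CopyBounded(char* dest, std::size_t destSize, const char* src) {
    std::strncpy(dest, src, destSize - 1);
    dest[destSize - 1] = '\0';  // strncpy does not always terminate
}
```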
Errors due to changes in Windows APIs or obsolete SDKs
Over the years, Windows APIs and data types have been added, and sometimes changed or removed. Also, other SDKs that did not belong to the core operating system have come and gone. Older programs may therefore contain calls to APIs that no longer exist. They may also contain calls to APIs in other Microsoft SDKs that are no longer supported. If you see an error involving a Windows API or an API from an older Microsoft SDK, it's possible that an API has been removed and/or superseded by a newer, more secure function.
For more information about the current API set and the minimum supported operating systems for a specific Windows API, see Microsoft API and reference catalog and navigate to the API in question.
Windows version
When upgrading a program that uses the Windows API either directly or indirectly, you will need to decide the minimum Windows version to support. In most cases, Windows 7 is a good choice. For more information, see Header file problems. The WINVER macro defines the oldest version of Windows that your program is designed to run on. If your MFC program sets WINVER to 0x0501 (Windows XP), you will get a warning because MFC no longer supports XP, even though the compiler itself has an XP mode.
For more information, see Updating the Target Windows Version and More outdated header files.
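In code, pinning the minimum version looks like this sketch: 0x0601 selects Windows 7, and the defines must appear before any Windows or MFC header is included.

```cpp
#include <cassert>

// Declare the oldest Windows version the program supports.
// 0x0601 == Windows 7; must precede <windows.h> / <afxwin.h>:
#ifndef WINVER
#define WINVER 0x0601
#endif
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0601
#endif
// #include <windows.h> would follow here in a real build.

inline bool TargetsAtLeastWindows7() {
    return WINVER >= 0x0601;
}
```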
ATL / MFC
ATL and MFC are relatively stable APIs but changes are made occasionally. For more information, see Visual C++ change history 2003 - 2015, What's New for Visual C++ in Visual Studio, and C++ conformance improvements in Visual Studio.
LNK2005 _DllMain@12 already defined in MSVCRTD.lib
This error can occur in MFC applications. It indicates an ordering issue between the CRT library and the MFC library. MFC must be linked first so that it provides new and delete operators. To fix the error, use the /NODEFAULTLIB switch to ignore these default libraries: MSVCRTD.lib and mfcs140d.lib. Then add these same libs as additional dependencies.
32 vs 64 bit
If your original code is compiled for 32-bit systems, you have the option of creating a 64-bit version instead of or in addition to a new 32-bit app. In general, you should get your program compiling in 32-bit mode first, and then attempt 64-bit. Compiling for 64-bit is straightforward, but in some cases it can reveal bugs that were hidden by 32-bit builds.
Also, you should be aware of possible compile-time and runtime issues relating to pointer size, time and size values, and format specifiers in printf and scanf functions. For more information, see Configure Visual C++ for 64-bit, x64 targets and Common Visual C++ 64-bit Migration Issues. For additional migration tips, see Programming Guide for 64-bit Windows.
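A frequent 64-bit porting bug is a format specifier sized for 32-bit builds. A minimal sketch of the fix (the helper is illustrative):

```cpp
#include <cstdio>
#include <cstddef>
#include <cstring>
#include <cassert>

// On x64, size_t is 64 bits while int stays 32 bits, so "%d" for a
// size_t argument is wrong. "%zu" is the portable specifier:
inline void DescribeSize(char* buf, std::size_t n, std::size_t bytes) {
    std::snprintf(buf, n, "%zu bytes", bytes);
}
```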
Unicode vs MBCS/ASCII
Before Unicode was standardized, many programs used the Multibyte Character Set (MBCS) to represent characters that were not included in the ASCII character set. In older MFC projects, MBCS was the default setting, and when you upgrade such a program, you will see warnings that advise to use Unicode instead. You may choose to disable or ignore the warning if you decide that converting to Unicode isn't worth the development cost. To disable it for all projects in your solution, open View > Property Manager, select all projects for which you want to disable the warning, then right-click on the selected items and choose Properties. In the Property Pages dialog, select Configuration Properties > C/C++ > Advanced. In the Disable Specific Warnings property, open the drop-down arrow, and then choose Edit. Enter 4996 into the text box. (Don't include the 'C' prefix.) Choose OK to save the property, then choose OK to save your changes.
For more information, see Porting from MBCS to Unicode. For general information about MBCS vs. Unicode, see Text and Strings in Visual C++ and Internationalization.
See also
Upgrading projects from earlier versions of Visual C++
C++ conformance improvements in Visual Studio
ActiveDirectorySite.GetComputerSite Method
Definition
Gets the site that this computer is a member of.
public: static System::DirectoryServices::ActiveDirectory::ActiveDirectorySite ^ GetComputerSite();
public static System.DirectoryServices.ActiveDirectory.ActiveDirectorySite GetComputerSite ();
static member GetComputerSite : unit -> System.DirectoryServices.ActiveDirectory.ActiveDirectorySite
Public Shared Function GetComputerSite () As ActiveDirectorySite
Returns
An ActiveDirectorySite object that contains the caller's current site.
Exceptions
The caller's computer does not belong to a site.
A call to the underlying directory service resulted in an error.
The target server is either busy or unavailable. | https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.activedirectory.activedirectorysite.getcomputersite?view=netframework-4.7.1 | 2019-12-05T22:13:08 | CC-MAIN-2019-51 | 1575540482284.9 | [] | docs.microsoft.com |
Comm
Comm-dev is a replacement for the deprecated Comm service. This page will house the documentation for the setup until it is renamed comm. Ideally, this will replace the current comm seamlessly.
There is no password, so that can be left blank. | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Comm&oldid=6597 | 2019-12-05T21:47:49 | CC-MAIN-2019-51 | 1575540482284.9 | [] | docs.cslabs.clarkson.edu |
After clicking the icon, you switch to “Expert” mode. While in “Expert” mode, you can:
•Add new parameters (and edit the parameters) of the Action:
•See the other panels of the R box:
In the above example, we can see that Anatella will initialize the R engine with 6 variables (these 6 variables are named “myData”, “idxCenter”, “idxtop”, “GridX”, “GridY”, “GridLen”) before running the R code. All the parameters that are tables (coming from the input pins) are injected inside the R engine as “data frames”.
In particular, the “Code” panel is interesting: it contains the R code. Here is an example of R code:
You can use the "print()" or the “cat()” command to display some values inside the Anatella Log window (this is useful for debugging your code).
When the R engine prints some results inside the Anatella Log window, it assumes that the font used to display the results is of constant width (so that the columns of the results stay aligned). Inside R, there is no easy way to comment out several lines of code (i.e. there are no /*…*/ block comments); as a workaround, Anatella simply ignores all the R code located beyond the “##astop” flag. This is handy when developing new R Actions.
Here are some “good practice” rules to follow when creating a new R code:
1.It might be easier to use an interactive tool such as “R-Studio” to develop the first version of your R code (i.e. during the first “iterations” of code development). Once your R code is working more-or-less properly, you can fine-tune its integration inside an Anatella box using the Anatella GUI. Once the integration inside Anatella is complete, you’ll have a block of R code (i.e. an Anatella box) that you can re-use easily everywhere with just a simple drag&drop! (…and without even looking at the R code anymore!)
When developing a new code in R, it happens very often that the first versions of the code fail: you can then simply edit the R code and directly re-run the box inside the same R engine. This allows you to make many iterations, to quickly arrive at a working code.
2.If you need a specific data-type to run your R code, you might have to convert the input data. For example, the following line of code converts the data frame (received as input) that is named “myDataFrame” into an Array of numbers (and gets rid of the strings!) that is named “myArray”:
myArray <- apply (as.matrix (myDataFrame), 2, as.numeric);
3.If you need some specific packages to run your R code, add a few lines of code (at the top of your R script) that installs the required packages if they are not there yet. For example, this install the “kohonen” package if it’s not there yet:
if("kohonen" %in% rownames(installed.packages()) == FALSE)
{
print("installing kohonen package for first time use");
install.packages("kohonen",repos=RemoteRepository)
}
(don’t forget to use the option “repos=RemoteRepository” inside the above “install.packages” command)
When creating an R box that displays some plot window:
1.In the “Configuration” panel, check the option "This Action creates some plot windows" (otherwise the “plot windows” are destroyed as soon as the box stops running):
2.Use "x11();" to open new “plot windows” (otherwise all plots end up inside the same plot-window and you only see the last plot because it has “overwritten” all the previous ones)
3.Optional: To avoid consuming much RAM memory for nothing (while R is just busy showing your plots), add at the end of your R code a few lines to destroy all large matrices stored in RAM. For example:
#free up memory:
myDataFrame=0; # replace the large data-frame named “myDataFrame” with
# a single number (0) to reclaim RAM
gc(); # run garbage collector to force R to release RAM
To pass back some table-results as output of the R Action, use the “R_Output” variable. The data-type of the variable used as output is very precise: it must be a data frame (and not an array). To convert your variables to data frames, use the following command:
myDataFrame <- data.frame( MyVariable, stringsAsFactors=FALSE)
(don’t forget the option “stringsAsFactors=FALSE”, otherwise R does strange things!!).
Here are some example of usage of the output variable named “R_Output”:
1.To pass on output pin 0 the data frame named “myDataFrame”, simply write:
R_Output <- myDataFrame
Please ensure that the type of the variable named “myDataFrame” is indeed a “data frame” (and not an “Array”), otherwise it won’t work.
2.To pass on output pin 0 the data frame named “myDataFrame1” and to pass on output pin 1 the data frame named “myDataFrame2”, simply write:
R_Output <- list(myDataFrame1, myDataFrame2)
For the above code to work, you must also setup Anatella to have 2 output pins on your box:
The R engine is quite limited in terms of the size of the data it can analyze, because all the data must fit into the (small) RAM memory of the computer. To alleviate this limitation of R, Anatella can split the (large) table received on the first input pin into many different “little” tables (one for each partition), instead of injecting the whole table into the R engine without any “splitting”. The R engine can process easily each of these “little” tables because they only consume a small quantity of RAM memory. After the split, Anatella calls the R engine “in-a-loop”, several times: at each iteration, the R code processes one partition. (See also the corresponding section of the “AnatellaQuickGuide.pdf” where the same partitioning concept is also used.)
More precisely, when using partitioning, Anatella does the following:
1.Split the table on pin 0 into many different “little” tables
(one table for each different partition).
2.Inject into R the variables that contains that data from all the rows of the tables available on pin 1, pin 2, pin 3, etc.
3.Inject into R the variable named “partitionType” (whose value is 0, 1 or 2, depending on which “Partition Type” you are using).
4.Inject into R the variable named “iterationCount” with the value 0.
5.Inject into R the variable named “finished” with the value “false”.
6.Run the loop (i.e. process each partition):
oInject into R one of the “little” tables (that are coming from the “big” table available on the first input pin – input pin 0).
oRun your R code inside the R engine.
oGet back some output results to forward onto the output pin.
oIncrease by one the value of the R variable “iterationCount”.
oExecute the next iteration of the loop (i.e. re-start the above steps with the next “little” table).
Another example where partitioning “makes sense” is the following: Let’s assume that you are working for a large retailer (such as Carrefour, Walmart, etc.) and you need to manage the stocks of all your different products (the products are also named “SKU”=Stock Keeping Unit) at all your different Points-Of-Sale (POS). One important part of stock management involves predicting what will be the demand of each (SKU;POS) pair in the forthcoming weeks. If you predict a high demand for a specific (SKU;POS) pair, then you’d better have a significant stock for this same (SKU;POS) pair (unless you want to lose sales because of “out-of-stock” conditions). A typical large retailer has around 20,000 SKU’s at each of their POS. Let’s assume that we have 200 POS. We thus have 20,000x200=4,000,000 (SKU;POS) pairs. This means that we’ll have to compute 4,000,000 predictions (one for each (SKU;POS) pair). This also means that we’ll typically have a matrix with 4,000,000 rows: each row contains information about the past demand for a (SKU;POS) pair. Using the values available on the current row (about past demands), we’ll typically use some “time series” algorithm to predict the future demand (for the forthcoming weeks). Actually, to compute the prediction for a specific (SKU;POS) pair, you only need one row of the table. In other words, all computation can be done row-by-row. This means that we can use the “Partition Type” named “Each Row is a different partition”. Because of the Anatella-Partitioning-Algorithm, we can handle any number of SKU or POS without being limited by the R engine when it comes to handling large matrices.
There is also a “Kill R” button inside the toolbar here:
This button kills all the R engines currently running. This means that, when you click this button:
•All (R-based) plot-windows are closed
•All R computations are stopped (i.e. don’t click this button when your graph is running!).
TIP: Use the button inside the toolbar to close all the plot-windows in one click.
Once you are satisfied with your R-based Action, you can “publish it” so that it always becomes available inside the “common” re-usable Actions: See section 9.7. for more details. | http://docs.timi.eu/anatella/9_2_2_2__coding_in_r_inside_anatella.html | 2019-12-05T21:38:22 | CC-MAIN-2019-51 | 1575540482284.9 | [] | docs.timi.eu |
RequestOptions.ResourceTokenExpirySeconds Property
Definition
Gets or sets the expiry time for resource token. Used when creating/updating/reading permissions in the Azure Cosmos DB service.
public Nullable<int> ResourceTokenExpirySeconds { get; set; }
member this.ResourceTokenExpirySeconds : Nullable<int> with get, set
Public Property ResourceTokenExpirySeconds As Nullable(Of Integer)
Property Value
Remarks
When working with Azure Cosmos DB Users and Permissions, the way to instantiate an instance of DocumentClient is to get the token for the resource the user wants to access and pass it to the authKeyOrResourceToken parameter of the DocumentClient constructor.
When requesting this token, a RequestOption for ResourceTokenExpirySeconds can be used to set the length of time to elapse before the token expires. This value can range from 10 seconds to 5 hours (18,000 seconds). The default value, should none be supplied, is 1 hour (3,600 seconds).
InterlockedXor function (HLSL reference)
Performs a guaranteed atomic xor.
Syntax
void InterlockedXor( in R dest, in T value, out T original_value );

Remarks
There are two scenarios. The first is when R is a shared memory variable type: the function performs an atomic XOR of value to the shared memory register referenced by dest. The second scenario is when R is a resource variable type: in this scenario, the function performs an atomic XOR of value to the resource location referenced by dest. The overload with the original_value parameter also writes the value that dest held before the XOR into original_value.
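A minimal compute-shader sketch of the shared-memory scenario. All names, register slots, and thread counts here are illustrative assumptions, not taken from the reference above:

```hlsl
RWStructuredBuffer<uint> gResult : register(u0);  // hypothetical output buffer

groupshared uint gParity;  // shared memory register used as dest

[numthreads(64, 1, 1)]
void CSMain(uint gi : SV_GroupIndex, uint3 gid : SV_GroupID)
{
    if (gi == 0)
        gParity = 0;
    GroupMemoryBarrierWithGroupSync();

    uint original;
    // Atomic XOR of this thread's bit into the shared register; the
    // third argument receives the value dest held before the operation:
    InterlockedXor(gParity, 1u << (gi % 32u), original);

    GroupMemoryBarrierWithGroupSync();
    if (gi == 0)
        gResult[gid.x] = gParity;
}
```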
Reading Comprehension
Old Faithful is a wonderful geyser. It erupts about every hour, winter or summer, day or night. Once every hour, Old Faithful sends a fountainlike column of hot water high into the air.
It's time for Old Faithful to erupt again. There is a crowd near the geyser. Suddenly we hear a strange noise. Then we see only a small column of water. Gradually it rises higher and higher. Higher and still higher the water rises. It usually rises over 100 feet. For about four minutes this natural fountain sends a giant column of water into the air. Then it gradually dies down. The people hurry away to see the other geysers.
In another hour another crowd will come to see Old Faithful. Again, a small
column of water will rise into the air. It will rise higher and higher, then gradually die down. Old Faithful is a good name for this faithful geyser.
A. Why is it called Old Faithful?
B. National parks in the USA.
C. What makes geysers erupt?
D. What is Old Faithful?
E. Seeing Old Faithful erupt.
F. The difference between Old Faithful and other geysers.
5. Old Faithful sends a giant column of water into the air for about_________
A) an hour B) five minutes C) four minutes D) forty minutes
6. ______________ sends a taller column of water into the air.
A) The Giant Geyser B) Old Faithful
7. Old Faithful erupts ___________________
A) only in winter B) only in summer C) all the year round
Part 2
The River Thames flows from the heart of England to the east coast, and London grew up at the lowest convenient crossing place. Here the north and south banks provided firm ground for the Romans to build a bridge, soon after their invasion of Britain in AD 43. They gave their settlement a Celtic name, Llyn-din (river place), later called Londinium.
The Thames Valley was certainly inhabited from the earliest times. A Stone Age site has been uncovered in Acton; the site of an Iron Age temple lies under one of the runways at Heathrow Airport; and many pre-Roman objects have been found in the Thames. But London began with the Romans. It was not only the capital of Roman Britain, but also the sixth-largest city in the Roman Empire.
The Romans laid out military roads and the Thames itself provided a waterway for merchants trading with the Continent or the inland districts.
By AD 60 there was a sizable town there. But in that year the Iceni, an ancient British people, led by their Queen Boadicea, revolted against the Romans. They attacked London and burnt it down. The ashes of their destruction are still found when deep foundations are dug for new buildings. But such were the natural advantages of the place, that when the Romans returned the settlement was re-established.
Londinium flourished, and within a generation had become the administrative centre for the province. Houses were built inside massive walls, portions of which can be seen beside the Tower. Other pointers to the size and beauty of the city have been discovered: an altar to Diana, an ancient goddess of forest and childbirth, on the site of the Goldsmith's Hall; a temple to Mithras, an ancient god of light, in Queen Victoria Street. Also in Cannon Street is London Stone, possibly the lower part of a column which served as the central milestone for the whole Roman
Britain. The stone is set in the wall of the Bank of China.
King Alfred rebuilt the fortifications in 886 against Viking attacks, and from that point London grew in political and military importance and in trade and wealth.
By the end of the 11th century a significant new development had taken place. King Canute (1016—1035), England's only Danish king, had built a palace to the west of the city. Edward the Confessor (1042—1066) chose to live there too, and as he was a very religious man, beside the palace he built a church, his
minster in the west, which gave the name Westminster to the small area that grew up around the royal buildings.
After the Norman Conquest in 1066, William I made a separate peace with the citizens of London, promising that they could keep their own laws and customs. But at the same time he began to build the Tower of London just outside the city — to show them the King's authority.
8. A Celtic name Llyn-din means
A) the creation of a mint.
15) The citizens of London were given a right to keep their laws and customs by
A) William I.
B) Edward the Confessor.
16) The banks of the river provided firm ground for a bridge.
17) The Thames Valley was not inhabited before the Roman invasion.
18) Diana is an ancient goddess of love.
19) St Augustine was a prominent Christian missionary.
20) The Tower of London was built to remind the people about the king.
21) Westminster means a church in the west.
Part 3
Questions 22 - 30 are based on the text about Walt Disney. 9 sentences and phrases have been removed from the text. Choose from the sentences A - I the one which fits each gap.
How it all began.
History shows that childhood played an important part in the life of many great people. The life of Walter Elias Disney, known as Walt Disney, 22) ___________. Disney's father Elias was not very successful in
whatever he did: he was a carpenter, an owner of a small hotel, a village postman. 23)___________ he decided to start a new life in a new place and in 1889 Elias with his young wife Flora and their small son Herbert moved from Florida to Chicago. In Chicago two more sons were born — Raymond and Roy, then Walter — on the fifth of December, 1901, and two years later — their daughter Ruth. 24)_______________, and thinking about the future of their children, Elias and Flora moved to the country where they bought a small farm.
At that time it appeared that Walt was fond of drawing. His parents often bought him albums for drawing and colour pencils. But one of Disney's first pictures was made in coal on a white wall of their recently painted house when his parents were away.
26)_______________Elias Disney had to sell the farm, and after four years of wonderful time in the country, the Disneys moved to Kansas City. Walt and Roy helped their father in his new business. They had to get up at half past three in the morning to deliver morning newspapers and not to be late for school. Soon nineteen-year-old Roy got tired of it and decided to live and work on his uncle's farm. After his departure Walt had to deliver the newspapers himself.
27)__________He went to the cinema and theatre, spent a lot of time in the library. His favourite writers were Mark Twain, Swift, Stevenson, Dickens. Disney also admired the great talent of Charlie Chaplin.
When Walt was fourteen, he made up his mind to become an artist and his father let him attend the courses at the Kansas City Art Institute. But after the USA joined the World War I in April 1917, Disney volunteered as a Red Cross ambulance driver in France. He was seventeen when he returned home and got a job in an art studio in Kansas City. Walt's brother Roy worked in a bank in Kansas at that time and Walt could always get help or a word of advice from him. Later Disney made animated commercials at the Kansas City Ad Company and as he was very successful he decided to found his own cartoon production company.
The twenty-year-old president of the new company thought these films 28)_________________. But it appeared that the cartoons were not profitable, and soon Walt went bankrupt.
29) ___________. Roy advised him to try his luck in Hollywood, and in 1923, with forty dollars in his pockets, Walt headed for Los Angeles. That year Walt and Roy founded their own cartoon company in Hollywood and made a series of cartoons called Alice in Cartoonland. These cartoons combined a "live" Alice with cartoon characters.
30)______________, Walt had to do everything — he was a playwright, a director, an animator. Alice in Cartoonland was very successful and Walt decided to move on.
A) But still it would be wrong to say that the boy's life was dull
B) As they didn't have enough money to pay the assistants
C) Chicago was a very dangerous place at that time
D) proves the rule
E) Unfortunately good things don't last long
F) And still Disney was not going to give up
H) and it greatly influenced his senses and interests
1.
Use of English
Arthur Conan Doyle always thought he (1) ______ remembered for his historical novels. However, he became famous (2) _____ his creation of Sherlock Holmes with whom any detective bears no comparison.
Arthur Conan Doyle was born in Edinburgh, Scotland, (3) _______ the twenty-second of May, 1859. His parents were Irish Roman Catholics, and he received his early education in a Jesuit school. (4) _______ he got a medical degree at (5) ______ Edinburgh University. He started practice (6)_______ a family physician. His income was small so he began (7) ______ stories to make ends meet. In 1891 he decided to give (8) _______medicine and devote all his time to writing.
A Study in Scarlet, published in 1887, introduced Holmes and his friend Dr John Watson. The second Holmes story was The Sign of Four. In 1891 Doyle began a series of stories called “The Adventures of Sherlock Holmes”.
Sherlock Holmes has become known to movie and television audiences as a tall and lean detective, who smoked a pipe and played (9) _______ violin. He lived (10) _______ 221 Baker Street in London, where he shared a flat with Watson. And according to Doyle, it was Watson (11) ______recorded the Holmes stories for future generations.
Doyle said he created Holmes after one of his teachers in Edinburgh, Dr Joseph Bell. Bell could, for example, glance (12) _______ a man and say that he was a left-handed shoemaker. "It is all very well to say that a man is clever," Doyle wrote, "but the reader wants to see examples of it, such examples as Bell gave us every day." At last the author became bored (13) _______ Holmes and "killed" him.
invaded conquered
anything that ________22 (to remind) him in retirement 23___ his past career. Yet there was one article he could not bring himself ________24 (to sell). He took it home wrapped around 25____ a piece of sacking, silently __________ 26(to curse) his weakness in being unable to part 27____ it. Once home, he put it in a cupboard, then locked the door. The thing he couldn’t make himself ________28(to part) 29_______ was the suit of armour he used __________30 (to wear) when acting his favourite part, Macbeth.
For tasks 31 - 40 write a word beginning with S which is opposite in meaning to each of the following words.
32) meaningless - s_____________________
33) believing - s_____________________
34) complicated - s_____________________
35) doubtful - s______________________
36) careful - s______________________
37) wakefulness - s_____________________
38) rough - s______________________
39) objective - s______________________
40) laugh - s_____________________
Task 5
For tasks 41 - 47, find synonyms from the box for the following words.
unbeliever bulky ingenious
41) unsophisticated - ______________________
42) happen - _______________________
43) persuade - _______________________
44) eccentricity - ________________________
45) clever - _______________________
46) infidel - _______________________
47) large - _______________________
48) They (rose / raised) their glasses and drank to the happy couple.
49) The table has been carelessly (lain / laid).
50) Liverpool (lies / lays) on the north bank of the River Mersey.
51) It was (sensible / sensitive) of her to postpone the trip.
52) She became (conscious / conscientious) of danger.
53) Ann looked (strange /strangely) at her partner and left without saying a word.
54) Ben looked (awkward / awkwardly) in their company.
55) The journey was (exhausting / exhausted).
TRANSFER ALL YOUR ANSWERS TO YOUR ANSWER SHEET
Maximum 55
You should write about 220 - 250 words.
Time: 1 hour
________________________________________________________________________________________________
________________________________________________________________________________________________
Expert No. __________________________
Participant No.
Content
Composition
Vocabulary
Grammar
Style
Spelling and punctuation
Maximum 20
Speaking
(Monologue; Time: 1,5 - 2 minutes)
Your answers will be recorded.
Participant's card
Jury member's card
Two members of the jury and two Olympiad participants take part in the speaking contest.
All instructions to the participants of the speaking contest are given in English.
The teachers serving as jury members invite a pair of participants to their table.
The pairs are formed by random selection.
The jury members start a conversation and ask each participant 2 - 3 questions in order to relieve tension, put them at ease and prepare them for the oral task of the Olympiad.
The time allotted to this warm-up stage is 1 - 2 minutes.
Warm-up
A sample list of questions:
How are you?
What do you think about the weather?
How long have you been learning English?
What other foreign languages do you know?
What do you usually do in your spare time?
The jury members may additionally ask the participant who received Card No. 2 questions in the course of his or her answer:
Card No. 1
Participants in the dialogue should remember that this is a discussion, not a monologue. They should give each other the opportunity to exchange opinions, and should support their points and ideas with arguments and examples. | http://ru.convdocs.org/docs/index-471833.html | 2019-12-05T22:45:36 | CC-MAIN-2019-51 | 1575540482284.9 | [] | ru.convdocs.org |
Chapters
The chapters application (see What is an Application?) is the main focus of the CS Field Guide website, as it contains the majority of educational material for the project.
Chapters Overview
The application is made up of chapters and each chapter is broken down into sections.
Chapters Content Directory

The `content` directory for the chapters application contains:

- a directory for each language in which content exists, named using the Django locale code. This directory contains content Markdown files.
- a special `structure` directory which contains all configuration YAML files.
Content Files

There are 2 different types of files used for adding content to the CS Field Guide:

- Content Markdown files
- YAML configuration files

All files live inside the `chapters/content` directory.
Content Markdown files are unique for each translation language, and are stored in a directory tree specific to that language.
This directory is named using the language's Django locale code (for example: `en` or `de`).
Configuration files are shared amongst all languages, because the content structure is the same for all languages.
These files live under a special `structure` directory.
As a simple rule, structure files situated inside the `structure` directory contain no text a website user will see.
Any user-facing text lives in a Markdown file inside the locale-specific directories.
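To make this split concrete, the two locations can be sketched as path construction. This is a hypothetical illustration: `CONTENT_ROOT`, the helper names, and the per-section Markdown file name are assumptions made for the sketch, not the project's actual code.

```python
from pathlib import Path

# Root of the chapters application's content directory, as described above.
CONTENT_ROOT = Path("chapters/content")

def structure_file(chapter_key):
    # Configuration YAML is shared across all languages, so it lives
    # under the special "structure" directory.
    return CONTENT_ROOT / "structure" / chapter_key / f"{chapter_key}.yaml"

def content_file(locale, chapter_key, section_key):
    # User-facing Markdown lives under the Django locale code (en, de, ...).
    # The section-level file name used here is an assumption for illustration.
    return CONTENT_ROOT / locale / chapter_key / f"{section_key}.md"

print(structure_file("algorithms").as_posix())
# chapters/content/structure/algorithms/algorithms.yaml
print(content_file("de", "algorithms", "searching-algorithms").as_posix())
# chapters/content/de/algorithms/searching-algorithms.md
```

Note that only the second helper takes a locale: changing the language never changes which configuration files are read.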
Configuration Files

This section details configuration files within the `content/structure` directory.
These files are in YAML format. If you are not familiar with YAML, see Understanding Configuration Files.

The diagram below shows an example of YAML file locations for the `content/structure/` directory, where:

- Blue is directories.
- Red is YAML configuration files.
```
content/structure/
├── algorithms/
│   ├── sections/
│   │   └── sections.yaml
│   └── algorithms.yaml
├── artificial-intelligence/
│   ├── sections/
│   │   └── sections.yaml
│   └── artificial-intelligence.yaml
└── structure.yaml
```
In the following sections, each configuration file is explained in more detail.
Note
Some of the keys (What is a Key?) have angle brackets around them, `<like so>`. This means that they are variables and you can call them whatever you like in your configuration file (without the angle brackets). Key names should be consistent, i.e. every instance of `<chapter-key>` should be replaced with the exact same key.
Application Structure Configuration File

- File Name: `structure.yaml`
- Location: `chapters/content/structure/`
- Purpose: Defines the structure and location of all the different chapters.
- Required Fields:
  - `chapters:` A dictionary of chapters, where each key is a chapter.
    - Required Fields:
      - `<chapter-key>:` The key for a chapter.
        - Required Fields:
          - `chapter-number:` The number order for this chapter.
        - Optional Fields:
          - `glossary-folder:` The name of the glossary folder.
A complete chapter application structure file with multiple chapters may look like the following:
```yaml
chapters:
  introduction:
    chapter-number: 1
  algorithms:
    chapter-number: 2
    glossary-folder: glossary
```
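Once a structure file like the one above has been parsed (for example with PyYAML's `yaml.safe_load`), the required fields can be checked with a few lines of Python. The `ordered_chapters` helper below is a hypothetical sketch, not code from the CS Field Guide itself:

```python
def ordered_chapters(structure):
    """Validate the parsed contents of structure.yaml and return
    chapter keys sorted by their required chapter-number field."""
    chapters = structure["chapters"]
    for key, attrs in chapters.items():
        if "chapter-number" not in attrs:
            raise ValueError(f"Chapter '{key}' is missing chapter-number")
    return sorted(chapters, key=lambda k: chapters[k]["chapter-number"])

# The example file above, as it would look after YAML parsing.
parsed = {
    "chapters": {
        "algorithms": {"chapter-number": 2, "glossary-folder": "glossary"},
        "introduction": {"chapter-number": 1},
    }
}
print(ordered_chapters(parsed))  # ['introduction', 'algorithms']
```

Sorting on `chapter-number` rather than on file or key order is what lets authors list chapters in any order in the YAML file.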
Chapter Configuration File

- File Name: `<chapter-key>.yaml`
- Location: `chapters/content/structure/<chapter-key>/`
- Referenced in: `chapters/content/structure/structure.yaml`
- Purpose: Defines the attributes for a particular chapter.
- Required fields:
  - `icon:` File path to the icon for the chapter. Icons must be SVG files.
  - `sections:` File path to the configuration files for sections in the chapter.
- Optional fields:
  - `video:` URL for the video that appears at the very beginning of the chapter introduction page.
A complete chapter structure file may look like the following:

```yaml
icon: svg/introduction-icon.svg
sections: sections/sections.yaml
```
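A parsed chapter configuration can be checked against the field rules above in the same way. `check_chapter_config` is an invented helper name used only for illustration:

```python
REQUIRED = ("icon", "sections")

def check_chapter_config(config):
    """Check a parsed <chapter-key>.yaml dict against the field rules
    above; returns a list of problems (an empty list means valid)."""
    problems = [f"missing required field: {f}" for f in REQUIRED if f not in config]
    # Icons must be SVG files, so reject any other extension.
    if "icon" in config and not config["icon"].endswith(".svg"):
        problems.append("icon must be an SVG file")
    return problems

good = {"icon": "svg/introduction-icon.svg", "sections": "sections/sections.yaml"}
bad = {"icon": "png/introduction-icon.png"}
print(check_chapter_config(good))  # []
print(check_chapter_config(bad))
# ['missing required field: sections', 'icon must be an SVG file']
```

Collecting all problems into a list, rather than raising on the first one, gives content authors every error in a single run.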
Chapter Sections Configuration File

- File Name: `sections.yaml`
- Location: `chapters/content/structure/<chapter-key>/sections/`
- Referenced in: `chapters/content/structure/<chapter-key>/<chapter-key>.yaml`
- Purpose: Specify sections for a chapter and their relative order.
- Required Fields:
  - `<section-key>:` Key for the section.
    - Required Fields:
      - `section-number:` Number order for the section in the chapter.
A complete sections configuration file with multiple sections may look like the following:
```yaml
introduction-for-teachers:
  section-number: 1
further-reading:
  section-number: 2
```
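Given the parsed contents of a sections.yaml file, the display order follows from sorting on `section-number`. This sketch (with an invented helper name) also rejects duplicate numbers; that check is an assumption about how clashes should be handled rather than documented project behaviour:

```python
def ordered_sections(sections):
    """Return section keys in the order given by section-number,
    for the parsed contents of a sections.yaml file."""
    numbers = [attrs["section-number"] for attrs in sections.values()]
    if len(numbers) != len(set(numbers)):
        raise ValueError("section-number values must be unique")
    return sorted(sections, key=lambda k: sections[k]["section-number"])

# The example sections.yaml above, as it would look after YAML parsing.
parsed = {
    "further-reading": {"section-number": 2},
    "introduction-for-teachers": {"section-number": 1},
}
print(ordered_sections(parsed))  # ['introduction-for-teachers', 'further-reading']
```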