Dataset columns: content (string, 0-557k chars), url (string, 16-1.78k), timestamp (timestamp[ms]), dump (string, 9-15), segment (string, 13-17), image_urls (string, 2-55.5k), netloc (string, 7-77).
The Windows installer for Groovy 1.6.7 can now be downloaded from the Groovy website (direct link). It takes care of the gory details of a Windows installation: copying files, setting environment variables, and registering file associations. It contains the Groovy 1.6.7 binaries, API docs and a PDF snapshot of the wiki, the native launcher, Gant 1.8.1, Scriptom 1.6.0, Gaelyk 0.3.2 and the Griffon builders. These contain, among others, GFXBuilder, SwingXBuilder and JideBuilder in versions compatible with Griffon 0.2. The installation of everything except the binaries and the native launcher is optional. Currently supported languages for the installer are English, German, Spanish, French and Brazilian Portuguese.
http://docs.codehaus.org/display/GROOVY/2009/12/07/Windows-Installer+for+Groovy+1.6.7+released
2014-10-20T18:19:16
CC-MAIN-2014-42
1413507443062.21
[]
docs.codehaus.org
Revision history of "JDocument::mergeHeadData" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 11:52, 20 June 2013 JoomlaWikiBot (Talk | contribs) deleted page JDocument::mergeHeadData (cleaning up content namespace and removing duplicated API references)
https://docs.joomla.org/index.php?title=JDocument::mergeHeadData&action=history
2015-05-22T14:12:15
CC-MAIN-2015-22
1432207925201.39
[]
docs.joomla.org
3 Installing Libraries To reference an R6RS library from a top-level program or another library, it must be installed as a collection-based library in Racket. One way to produce an installed R6RS library is to create, in a collection, a file that starts with #!r6rs and that contains a library form. For example, the following file might be saved as "hello.sls" within an "examples" collection directory. Alternatively, the plt-r6rs executable with the --install flag accepts a sequence of library declarations and installs them into separate files in a collection directory, based on the declared name of each library: plt-r6rs --install ‹libraries-file› By default, libraries are installed into the user-specific collection directory (see find-user-collects-dir). The --all-users flag causes the libraries to be installed into the main installation instead (see find-collects-dir): plt-r6rs --install --all-users ‹libraries-file› You can also specify an arbitrary collections directory by using the --collections flag: plt-r6rs --install --collections ‹directory› ‹libraries-file› See Libraries and Collections for information on how R6RS library names are turned into collection-based module paths, which determines where the files are written. Libraries installed by plt-r6rs --install are automatically compiled to bytecode form. One final option is to supply a ++path flag to plt-r6rs. A path added with ++path extends the set of directories that are searched to find a collection (i.e., it sets current-library-collection-paths). If ‹dir› contains "duck" and "cow" sub-directories with "duck/feather.sls" and "cow/bell.sls", and if each file is an R6RS library prefixed with #!r6rs, then plt-r6rs ++path ‹dir› directs the R6RS library references (duck feather) and (cow bell) to the files.
Note that this technique does not support accessing "duck.sls" directly within ‹dir›, since the library reference (duck) is treated like (duck main) for finding the library, as explained in Libraries and Collections. Multiple paths can be provided with multiple uses of ++path; the paths are searched in order, and before the installation’s collections.
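The example file referenced above did not survive in this copy. A minimal sketch of what such an installable R6RS library could look like, assuming a hypothetical (examples hello) library exporting a greet procedure:

```scheme
#!r6rs
;; examples/hello.sls -- hypothetical example library for illustration
(library (examples hello)
  (export greet)
  (import (rnrs))

  ;; A trivial exported procedure.
  (define (greet)
    (display "hello from (examples hello)")
    (newline)))
```

Once installed, a program or library could refer to it with (import (examples hello)) and call (greet).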
http://docs.racket-lang.org/r6rs/Installing_Libraries.html
2015-05-22T13:01:29
CC-MAIN-2015-22
1432207925201.39
[]
docs.racket-lang.org
Show a Module on all Menu Items except selected ones From Joomla! Documentation Revision as of 16:10, 18 December 2009 by Yourmanstan (Talk | contribs) Overview One frequently requested feature (which will be added in version 1.6!) is the ability to show a Module on all Menu Items except for a selected list. That way, for example, if you want to show a Module on every page except your home page, you don't have to remember to assign it to each new Menu Item you create. Example For this example, we are going to assign a Module to every Menu Item except for our home page. We'll use the Joomla! sample website and the rhuk_milkyway template. Here are the steps: 1) Create a new template position. We'll call it "x-left". We could use an existing template position, but in some cases it might be better to create a new one. 2) Edit the file templates/rhuk_milkyway/templateDetails.xml and add this line after the "breadcrumb" position on line 63: <position>x-left</position>. This adds the new position to the template. Then edit the file templates/rhuk_milkyway/index.php and change lines 88-92 from: <div id="leftcolumn"> <?php if($this->countModules('left')) : ?> <jdoc:include type="modules" name="left" /> <?php endif; ?> </div> to: <div id="leftcolumn"> <?php if($this->countModules('left') && !$this->countModules('x-left')) : ?> <jdoc:include type="modules" name="left" /> <?php endif; ?> </div> The new condition hides the left column whenever a module is published to the "x-left" position for a specific page; the "x-left" position itself is never rendered by the template. 3) To disable the left column on a page, simply create a custom HTML module and assign it to position "x-left". You may title it whatever you like and enter any content you like...this text will not show on the website. It is recommended to use a title like "TURN OFF LEFT COLUMN" to properly notify any others who may be working on the website of the module's function.
Then simply assign the module to the pages on which you do not want the left column to appear. Additional positions You may want to create a corresponding disabling position for each of the module positions you have. This will allow you to individually turn off any module position on any page of the website.
https://docs.joomla.org/index.php?title=Show_a_Module_on_all_Menu_Items_except_selected_ones&direction=prev&oldid=68155
2015-05-22T14:07:08
CC-MAIN-2015-22
1432207925201.39
[]
docs.joomla.org
Group Communication Services & Service Spaces Overview Group Communication Services Each node hosts a Group Communication component, abstracted by: - org.codehaus.wadi.group.Dispatcher; and - org.codehaus.wadi.group.Cluster. Group Communication components can see each other and can dispatch messages, more precisely org.codehaus.wadi.group.Envelope instances, to each other. WADI provides four implementations of the Group Communication API, defined in wadi-group: - in-vm: an in-JVM implementation for testing purposes. This implementation is in wadi-group; - Tribes: leverages Tribes as the actual group communication implementation. This implementation is in wadi-tribes; - JGroups: leverages JGroups as the actual group communication implementation. This implementation is in wadi-jgroups; and - ActiveCluster: leverages ActiveCluster as the actual group communication implementation. This implementation is in wadi-activecluster. Service Space Service Spaces are components built on top of the above group communication infrastructure. They provide a logical group communication service, which restricts the view that clients have of the cluster to the subset of nodes hosting a given Service Space. For instance, if Service Space 1 is hosted by Node 1 and Node 3, clients on Node 1 using the logical group communication service of Service Space 1 only see Node 3, and can only dispatch messages to Node 3. Service Spaces are used to share the physical group communication services of a node between multiple applications, e.g. Web applications.
http://docs.codehaus.org/pages/viewpage.action?pageId=228171282
2015-05-22T13:18:36
CC-MAIN-2015-22
1432207925201.39
[array(['/download/attachments/9764983/GroupCommunicationAndServiceSpaces.jpg?version=1&modificationDate=1188701403589&api=v2', None], dtype=object) ]
docs.codehaus.org
public interface BeanResolver A bean resolver can be registered with the evaluation context and will kick in for @myBeanName style expressions. Object resolve(EvaluationContext context, String beanName) throws AccessException context - the current evaluation context beanName - the name of the bean to look up Throws: AccessException - if there is an unexpected problem resolving the named bean
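A minimal sketch of how this interface might be used with SpEL. The MapBeanResolver class below is a hypothetical illustration (not part of Spring): it backs bean resolution with a plain Map, is registered on a StandardEvaluationContext, and kicks in when an expression references @myBeanName:

```java
import java.util.Map;

import org.springframework.expression.AccessException;
import org.springframework.expression.BeanResolver;
import org.springframework.expression.EvaluationContext;
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

// Hypothetical resolver backed by a plain Map, for illustration only.
class MapBeanResolver implements BeanResolver {
    private final Map<String, Object> beans;

    MapBeanResolver(Map<String, Object> beans) {
        this.beans = beans;
    }

    @Override
    public Object resolve(EvaluationContext context, String beanName) throws AccessException {
        Object bean = beans.get(beanName);
        if (bean == null) {
            // Signal an unexpected problem resolving the named bean.
            throw new AccessException("No bean named " + beanName);
        }
        return bean;
    }
}

public class BeanResolverDemo {
    public static void main(String[] args) {
        StandardEvaluationContext ctx = new StandardEvaluationContext();
        ctx.setBeanResolver(new MapBeanResolver(Map.of("myBeanName", "the bean")));

        // The @ prefix triggers the registered BeanResolver.
        Expression exp = new SpelExpressionParser().parseExpression("@myBeanName");
        System.out.println(exp.getValue(ctx));
    }
}
```

Requires the spring-expression artifact on the classpath; any lookup strategy (a BeanFactory, a registry, etc.) can stand in for the Map.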
http://docs.spring.io/spring/docs/3.1.1.RELEASE/javadoc-api/org/springframework/expression/BeanResolver.html
2015-05-22T13:07:39
CC-MAIN-2015-22
1432207925201.39
[]
docs.spring.io
SQLAlchemy 0.9 Documentation SQLAlchemy ORM - Object Relational Tutorial¶ - Version Check - Connecting - Declare a Mapping - Create a Schema - Create an Instance of the Mapped Class - Creating a Session - Adding New Objects - Rolling Back - Querying - Building a Relationship - Working with Related Objects - Querying with Joins - Eager Loading - Deleting - Building a Many To Many Relationship - Further Reference - Mapper Configuration - Relationship Configuration - Using the Session - Events and Internals - ORM Extensions - ORM Examples Version Check¶ A quick check to verify that we are working with at least version 0.9 of SQLAlchemy: >>> import sqlalchemy >>> sqlalchemy.__version__ 0.9.0 Connecting¶ For this tutorial we will use an in-memory-only SQLite database. The return value of create_engine() is an instance of Engine, and it represents the core interface to the database, adapted through a dialect that handles the details of the database and DBAPI in use. In this case the SQLite dialect will interpret instructions to the Python built-in sqlite3 module. Lazy Connecting The Engine, when first returned by create_engine(), has not actually tried to connect to the database yet; that happens only the first time it is asked to perform a task against the database. The first time a method like Engine.execute() or Engine.connect() is called, the Engine establishes a real DBAPI connection to the database, which is then used to emit the SQL. Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it's not allowed. So if running this tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a "length" may be provided to the String type as below: Column(String(50)) The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not referenced by SQLAlchemy other than when creating tables.
Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn't generate or assume these without being instructed. For that, you use the Sequence construct: from sqlalchemy import Sequence Column(Integer, Sequence('user_id_seq'), primary_key=True) Creating a Session¶ We're now ready to start talking to the database. The ORM's "handle" to the database is the Session. When we first set up the application, at the same level as our create_engine() statement, we define a Session class which will serve as a factory for new Session objects: >>> from sqlalchemy.orm import sessionmaker >>> Session = sessionmaker(bind=engine) In the case where your application does not yet have an Engine when you define your module-level objects, just set it up like this: >>> Session = sessionmaker() Later, when you create your engine with create_engine(), connect it to the Session using configure(): >>> Session.configure(bind=engine) This custom-made Session class will create new Session objects which are bound to our database. Other transactional characteristics may be defined when calling sessionmaker as well; these are described in a later chapter. Then, whenever you need to have a conversation with the database, you instantiate a Session: >>> session = Session() The above Session is associated with our SQLite-enabled Engine, but it hasn't opened any connections yet. For example, below we create a new Query object which loads instances of User. We "filter by" the name attribute of ed, and indicate that we'd like only the first result in the full list of rows.
A User instance is returned which is equivalent to that which we've added: sql>>> our_user = session.query(User).filter_by(name='ed').first() >>> our_user <User(name='ed', fullname='Ed Jones', password='edspassword')> In fact, the Session has identified that the row returned is the same row as one already represented within its internal map of objects, so we actually got back the identical instance as that which we just added: >>> ed_user is our_user True The ORM concept at work here is known as an identity map and ensures that all operations upon a particular row within a Session operate upon the same set of data. Once an object with a particular primary key is present in the Session, all SQL queries on that Session will always return the same Python object for that particular primary key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary key within the session. We can add more User objects at once using add_all(): >>> session.add_all([ ... User(name='wendy', fullname='Wendy Williams', password='foobar'), ... User(name='mary', fullname='Mary Contrary', password='xxg527'), ... User(name='fred', fullname='Fred Flinstone', password='blah')]) Calling commit() flushes the remaining changes to the database, and commits the transaction. The connection resources referenced by the session are now returned to the connection pool. Subsequent operations with this session will occur in a new transaction, which will again re-acquire connection resources when first needed. If we look at Ed's id attribute, which earlier was None, it now has a value: After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access. In this case, the entire row was re-loaded on access because a new transaction was begun after we issued commit(). SQLAlchemy by default refreshes data from a previous transaction the first time it's accessed within a new transaction, so that the most recent state is available.
The level of reloading is configurable, as described in Using the Session. Rolling Back¶ Suppose we add an erroneous user: >>> fake_user = User(name='fakeuser', fullname='Invalid', password='12345') >>> session.add(fake_user) Querying the session, we can see that they're flushed into the current transaction: sql>>> session.query(User).filter(User.name.in_(['Edwardo', 'fakeuser'])).all() UPDATE users SET name=? WHERE users.id = ? ('Edwardo', 1) INSERT INTO users (name, fullname, password) VALUES (?, ?, ?) ('fakeuser', 'Invalid', '12345') [<User(name='Edwardo', fullname='Ed Jones', password='f8s7ccs')>, <User(name='fakeuser', fullname='Invalid', password='12345')>] Rolling back, we can see that ed_user's name is back to ed, and fake_user has been kicked out of the session. Querying¶ A Query object is created using the query() method on Session. This function takes a variable number of arguments, which can be any combination of classes and class-instrumented descriptors. Below, we indicate a Query which loads User instances. When evaluated in an iterative context, the list of User objects present is returned: sql>>> for instance in session.query(User).order_by(User.id): ... print instance.name, instance.fullname SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password FROM users ORDER BY users.id () ed Ed Jones wendy Wendy Williams mary Mary Contrary fred Fred Flinstone The Query also accepts ORM-instrumented descriptors as arguments. Any time multiple class entities or column-based entities are expressed as arguments to the query() function, the return result is expressed as tuples: The tuples returned by Query are named tuples, and can be treated much like an ordinary Python object. A common need is filtering results, which is accomplished either with filter_by(), which uses keyword arguments: ...or filter(), which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class: The Query object is fully generative, meaning that most method calls return a new Query object upon which further criteria may be added. For example, to query for users named "ed" with a full name of "Ed Jones", you can call filter() twice, which joins criteria using AND: sql>>> for user in session.query(User).\ ...
filter(User.name=='ed').\ ... filter(User.fullname=='Ed Jones'): ... print user SELECT ... With no rows found, one() raises an error; scalar() invokes the one() method, and upon success returns the first column of the row. Using Textual SQL¶ Literal strings can be used flexibly with Query, most typically via the text() construct. See also Using Text - Core description of textual segments. The behavior of the ORM Query object with regards to text() and related constructs is very similar to that of the Core select() object. Building a Relationship¶ Let's add a second table, Address, related to User via relationship() and backref: >>> from sqlalchemy.orm import relationship, backref >>> class Address(Base): ... __tablename__ = 'addresses' ... id = Column(Integer, primary_key=True) ... email_address = Column(String, nullable=False) ... user_id = Column(Integer, ForeignKey('users.id')) ... ... user = relationship("User", backref=backref('addresses', order_by=id)) ... ... def __repr__(self): ... return "<Address(email_address='%s')>" % self.email_address An implicit join between users and addresses emits: SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id FROM users, addresses WHERE users.id = addresses.user_id AND addresses.email_address = ? ('jack@google.com',) <User(name='jack', fullname='Jack Bean', password='gjffdd')> <Address(email_address='jack@google.com')> The actual SQL JOIN syntax, on the other hand, is most easily achieved using the Query.join() method: sql>>> session.query(User).join(Address).\ ... filter(Address.email_address=='jack@google.com').\ ... all() SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password FROM users JOIN addresses ON users.id = addresses.user_id WHERE addresses.email_address = ? ('jack@google.com',) [<User(name='jack', fullname='Jack Bean', password='gjffdd')>] When querying across multiple tables, if the same table needs to be referenced more than once, SQL typically requires that the table be aliased with another name, so that it can be distinguished against other occurrences of that table.
The Query supports this most explicitly using the aliased construct. Below we join to the Address entity twice, to locate a user who has two distinct email addresses at the same time: >>> from sqlalchemy.orm import aliased >>> adalias1 = aliased(Address) >>> adalias2 = aliased(Address) sql>>> for username, email1, email2 in \ ... session.query(User.name, adalias1.email_address, adalias2.email_address).\ ... join(adalias1, User.addresses).\ ... join(adalias2, User.addresses).\ ... filter(adalias1.email_address=='jack@google.com').\ ... filter(adalias2.email_address=='j25@yahoo.com'): ... print username, email1, email2SELECT users.name AS users_name, addresses_1.email_address AS addresses_1_email_address, addresses_2.email_address AS addresses_2_email_address FROM users JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id JOIN addresses AS addresses_2 ON users.id = addresses_2.user_id WHERE addresses_1.email_address = ? AND addresses_2.email_address = ? ('jack@google.com', 'j25@yahoo.com')jack jack@google.com j25@yahoo.com Using Subqueries¶ The Query is suitable for generating statements which can be used as subqueries. Suppose we wanted to load User objects along with a count of how many Address records each user has. The best way to generate SQL like this is to get the count of addresses grouped by user ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so that we get rows back for those users who don’t have any addresses, e.g.: SELECT users.*, adr_count.address_count FROM users LEFT OUTER JOIN (SELECT user_id, count(*) AS address_count FROM addresses GROUP BY user_id) AS adr_count ON users.id=adr_count.user_id Using the Query, we build a statement like this from the inside out. 
The statement accessor returns a SQL expression representing the statement generated by a particular Query - this is an instance of a select() construct, described in SQL Expression Language Tutorial: >>> from sqlalchemy.sql import func >>> stmt = session.query(Address.user_id, func.count('*').\ ... label('address_count')).\ ... group_by(Address.user_id).subquery() The func keyword generates SQL functions, and the subquery() method on Query produces a SQL expression construct representing a SELECT statement embedded within an alias (it's actually shorthand for query.statement.alias()). Once we have our statement, it behaves like a Table construct, such as the one we created for users at the start of this tutorial. The columns on the statement are accessible through an attribute called c: sql>>> for u, count in session.query(User, stmt.c.address_count).\ ... outerjoin(stmt, User.id==stmt.c.user_id).order_by(User.id): ... print u, count SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, anon_1.address_count AS anon_1_address_count FROM users LEFT OUTER JOIN (SELECT addresses.user_id AS user_id, count(?) AS address_count FROM addresses GROUP BY addresses.user_id) AS anon_1 ON users.id = anon_1.user_id ORDER BY users.id ('*',) <User(name='jack', fullname='Jack Bean', password='gjffdd')> 2 Selecting Entities from Subqueries¶ Above, we just selected a result that included a column from a subquery. What if we wanted our subquery to map to an entity? For this we use aliased() to associate an "alias" of a mapped class to a subquery: sql>>> stmt = session.query(Address).\ ... filter(Address.email_address != 'j25@yahoo.com').\ ... subquery() >>> adalias = aliased(Address, stmt) >>> for user, address in session.query(User, adalias).\ ... join(adalias, User.addresses): ... print user ...
print address SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, anon_1.id AS anon_1_id, anon_1.email_address AS anon_1_email_address, anon_1.user_id AS anon_1_user_id FROM users JOIN (SELECT addresses.id AS id, addresses.email_address AS email_address, addresses.user_id AS user_id FROM addresses WHERE addresses.email_address != ?) AS anon_1 ON users.id = anon_1.user_id ('j25@yahoo.com',) <User(name='jack', fullname='Jack Bean', password='gjffdd')> <Address(email_address='jack@google.com')> Using EXISTS¶ The EXISTS keyword in SQL is a boolean operator which returns True if the given expression contains any rows. It may be used in many scenarios in place of joins, and is also useful for locating rows which do not have a corresponding row in a related table. There is an explicit EXISTS construct, which looks like this: The Query features several operators which make usage of EXISTS automatically. Above, the statement can be expressed along the User.addresses relationship using any(); has() is the same operator for many-to-one relationships (note the ~ operator here too, which means "NOT"): sql>>> session.query(Address).\ ... filter(~Address.user.has(User.name=='jack')).all() SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id FROM addresses WHERE NOT (EXISTS (SELECT 1 FROM users WHERE users.id = addresses.user_id AND users.name = ?)) ('jack',) [] Configuring delete/delete-orphan Cascade¶ We will configure cascade options on the User.addresses relationship. Loading jack again emits: FROM users WHERE users.id = ? (5,) # remove one Address (lazy load fires off) sql>>> del jack.addresses[1] SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id FROM addresses WHERE ? = addresses.user_id (5,) # only one address remains sql>>> session.query(Address).filter( ... Address.email_address.in_(['jack@google.com', 'j25@yahoo.com']) ... ).count() DELETE FROM addresses WHERE addresses.id = ?
(2,) SELECT count(*) Building a Many To Many Relationship¶ We're moving into the bonus round here, but let's show off a many-to-many relationship. We'll sneak in some other features too, just to take a tour. We'll make our application a blog application, where users can write BlogPost items, which have Keyword items associated with them: >>> class BlogPost(Base): ... __tablename__ = 'posts' ... ... id = Column(Integer, primary_key=True) ... user_id = Column(Integer, ForeignKey('users.id')) ... headline = Column(String(255), nullable=False) ... body = Column(Text) ... ... keywords = relationship('Keyword', secondary=post_keywords, backref='posts') ... ... def __init__(self, headline, body, author): ... self.author = author ... self.headline = headline ... self.body = body ... ... def __repr__(self): ... return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author) >>> class Keyword(Base): ... __tablename__ = 'keywords' ... ... id = Column(Integer, primary_key=True) ... keyword = Column(String(50), nullable=False, unique=True) We would also like our BlogPost class to have an author field. We will add this as another bidirectional relationship, except one issue we'll have is that a single user might have lots of blog posts. When we access User.posts, we'd like to be able to filter results further so as not to load the entire collection. For this we use a setting accepted by relationship() called lazy='dynamic', which configures an alternate loader strategy on the attribute. Creating the new tables emits, among others: FOREIGN KEY(post_id) REFERENCES posts (id), FOREIGN KEY(keyword_id) REFERENCES keywords (id) ) () COMMIT Usage is not too different from what we've been doing. Let's give Wendy some blog posts: sql>>> wendy = session.query(User).\ ... filter_by(name='wendy').\ ... one() SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password FROM users WHERE users.name = ? ('wendy',) >>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy) >>> session.add(post) We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them: >>> post.keywords.append(Keyword('wendy')) >>> post.keywords.append(Keyword('firstpost')) We can now look up all blog posts with the keyword 'firstpost'.
We’ll use the any operator to locate “blog posts where any of its keywords has the keyword string ‘firstpost’”: sql>>> session.query(BlogPost).\ ... filter(BlogPost.keywords.any(keyword='firstpost')).\ ... all()INSERT INTO keywords (keyword) VALUES (?) ('wendy',) INSERT INTO keywords (keyword) VALUES (?) ('firstpost',) INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?) (2, "Wendy's Blog Post", 'This is a test') INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?) ((1, 1), (1, 2)) SELECT posts.id AS posts_id, posts.user_id AS posts_user_id, posts.headline AS posts_headline, posts.body AS posts_body FROM posts WHERE EXISTS (SELECT 1 FROM post_keywords, keywords WHERE posts.id = post_keywords.post_id AND keywords.id = post_keywords.keyword_id AND keywords.keyword = ?) ('firstpost',)[BlogPost("Wendy's Blog Post", 'This is a test', <User(name='wendy', fullname='Wendy Williams', password='foobar')>)] If we want to look up posts owned by the user wendy, we can tell the query to narrow down to that User object as a parent: sql>>> session.query(BlogPost).\ ... filter(BlogPost.author==wendy).\ ...')>)] Or we can use Wendy’s own posts relationship, which is a “dynamic” relationship, to query straight from there: sql>>> wendy.posts.\ ...')>)] Further Reference¶ Query Reference: query_api_toplevel Mapper Reference: Mapper Configuration Relationship Reference: Relationship Configuration Session Reference: Using the Session
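The tutorial's scattered snippets above can be condensed into one small, runnable sketch of the core flow: declare a mapping, create the schema, add an object, commit, and query it back. This is written against the 1.4+/2.x import locations (under 0.9, declarative_base lived in sqlalchemy.ext.declarative); table and column names follow the tutorial:

```python
# Minimal end-to-end sketch of the ORM tutorial's core flow.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    password = Column(String(50))

    def __repr__(self):
        return "<User(name='%s', fullname='%s')>" % (self.name, self.fullname)

# In-memory SQLite database; no connection is opened until first use.
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

# sessionmaker is a factory for Session objects bound to our Engine.
Session = sessionmaker(bind=engine)
session = Session()

session.add(User(name='ed', fullname='Ed Jones', password='edspassword'))
session.commit()

# The identity map guarantees one Python object per primary key per Session.
our_user = session.query(User).filter_by(name='ed').first()
```

Querying the same primary key again in this session returns the identical object, which is the identity-map behavior discussed earlier.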
http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html
2015-05-22T12:56:38
CC-MAIN-2015-22
1432207925201.39
[]
docs.sqlalchemy.org
Creating a content plugin From Joomla! Documentation Revision as of 13:57, 7 February 2009 by Infograf768 (Talk | contribs) I edited the part concerning installing the ini files, as it was confusing when the inis were placed in a language/xx-XX/ folder in the pack: it looked as if these files could be installed in the site language folder. Plugins in that regard are very different from components and modules.
https://docs.joomla.org/index.php?title=J1.5_talk:Creating_a_content_plugin&oldid=13108
2015-05-22T13:11:02
CC-MAIN-2015-22
1432207925201.39
[]
docs.joomla.org
User Roles and Permissions Be sure to review our overview of users: Introduction to Users There are three primary user roles in your Learning Management System (LMS): Admins, Group Admins, and Students. Each role has its own set of permissions and privileges in the platform, including a dashboard tailored to that role. Although each user is directed to the Dashboard upon login, a Student will see a dashboard where they can take the courses they are enrolled in, download certificates they have earned, and view orders they may have made, whereas an Admin sees the Administrator Dashboard that holds all the tools and resources to configure everything in the LMS. This guide will take a closer look at the specific permissions and platform experience each role has. In this article Admin The Admin role is the role our clients have in their platform. It can configure essentially anything in the platform: creating courses, managing students in courses, configuring the pages on the website part of the LMS, setting up automated email templates, providing organizations (Groups) with bulk seats for their employees or students, and much more. When we set up a new instance for our clients, we will create one or more Admin accounts, one for each team member that will be involved in the platform. However, as an Admin, you will be able to add as many additional Admins as you wish. To learn more about how to add a new Admin user, be sure to read our Adding a User article. Note Admins (just like Group Admins) have a certain duality to their account: yes, they are an Admin with all privileges of managing the entire LMS, but they can also take part in the LMS as a Student.
In the top right of your Admin Dashboard you will find a View As Student button that, when clicked, will temporarily swap your Admin role to a Student role, allowing you to see the Student Dashboard and take part in any courses that you are enrolled in. Once you are swapped to a student, you will see a banner at the top of the Student Dashboard that provides the option of switching back to your Admin role. Just a handful of the platform management privileges that an Admin has are: - Create and manage courses - Create Products - Add users and enroll Students in a Course - Manage eCommerce settings - Edit site pages - Create groups - View reporting - Customize account branding Group Admin Group Admins are administrators that have course and student management capabilities over the specific Group they are assigned to. In order to fully understand the role of the Group Admin, it is important to first learn more about the Groups functionality that is available in your platform. The Group Admin is a user role that manages a group of students and the set of courses that the Group has available seats for. This role is often occupied by a representative of an organization that a client has sold seats to in bulk. After providing the organization with access to the courses, the client can then delegate the enrollment and management of students in those courses to a representative of that organization (who becomes a Group Admin), and with that also provide them insight into the progress of their students. Group Admins always have to be associated with a Group in the platform and can be created in two ways: - An Admin manually creates the Group Admin by adding it to a Group. - A Group Admin registers into the platform by selecting the associated checkbox on the sign-up page. Group Admins are the only users that can change the quantities of a Product/Course on the checkout page, and every purchase of a Course a Group Admin makes will become an available seat in their Group.
In other words, Group Admins can increase the purchase quantity beyond 1, and every seat they purchase becomes available in their Group. For more information on checking out as a group, please review our guide: Navigating the Checkout Process For more information on managing a group, please review our guide: Creating and Managing Groups Note Similar to the Admin role, a Group Admin has a certain duality to their account: yes, they have their Group Admin privileges of managing the Group, but they can also take part in the Group as a Student. In the top right of their Dashboard they will find a View As Student button that, when clicked, will temporarily swap their Group Admin role to a Student role, allowing them to see the Student Dashboard and take part in any courses that they are enrolled in. Once they are swapped to a student, they will see a banner at the top of the Student Dashboard that provides the option of switching back to their Group Admin role. Important: they will only see the courses they are enrolled in on their Student Dashboard, and therefore have to make sure to use a seat for their own account by enrolling themselves. Student Students are the users that are actually taking your courses and potentially earning certificates upon completion. Their dashboard allows them to see the courses they are enrolled in, certificates they have earned, orders they have made, any running subscriptions, and announcements that you have shared with them. They can launch courses from their dashboard and move through the different modules of the curriculum in the Course Player. To get a better impression of the Student Experience, be sure to view our Video Tour of the Student Experience. For more information on checking out as a single user, please review our guide: Navigating the Checkout Process Super Admin You may also see a user with a Super Admin role appear in certain areas.
This role is restricted to Academy of Mine staff only and has the primary goal of monitoring and troubleshooting your account. Next Step Now that you understand the different types of user roles, you'll want to learn about adding a new user to the platform:
https://docs.academyofmine.com/article/99-user-roles-and-permissions
To open the Yammer admin center, select the Yammer settings icon, and then select Network Admin.

Invite users to Yammer

If you are enforcing Office 365 identity in your network, all Office 365 users that have a Yammer license are created as pending users in Yammer. If you are not enforcing Office 365 identity, users are not part of the Yammer network until they have selected the Yammer tile from Office 365 or signed in once to Yammer.

Note: If your Yammer network is in Native Mode, this action can be performed only in the Azure Active Directory (Azure AD) User Management Portal and not within the Yammer Admin portal.

Only employees with a company email address can be invited from this page: enter their email addresses and select Invite.

Invite users in bulk by specifying their email addresses

Note: This is supported only in Classic Yammer. If your Yammer network is in Native Mode, see Bulk update users by importing a .CSV file.

In the Yammer admin center, go to Users > Invite Users, enter the email addresses, and select Invite. Invited users appear as pending users until they have signed in. Pending users can be added to groups even before they have used Yammer for the first time. Pending users receive announcement notification emails from group admins. If users don't want to receive announcements from a particular group, they can sign in to their Yammer account and leave the group, or follow the unsubscribe link in the email to unsubscribe from all Yammer emails. As in all Office products, pending users will be visible in the group member list even if they have never signed in.

Remove users

Note: If your Yammer network is in Native Mode, the only reason to use the Remove Users page in the Yammer Admin portal is to process a Data Subject Request for GDPR. If you just want to remove a user from your Yammer network, this action should be performed directly within the Azure AD User Management Portal.

In the Yammer admin center, go to Users > Remove Users. Enter an existing user's name.
Select an action to take:

Deactivate this user: If the user is not using Azure AD credentials, this blocks the user from signing in until they verify their email address again. Without access to their verified email account, they cannot sign back in to Yammer. User profile information, messages, and file uploads remain. This can be a useful option for contract employees that have completed their project but can be renewed again later. Deactivated users can reactivate their account within 90 days by enabling their email account and signing in to Yammer, where they will receive an email with links to reactivate. After 90 days, the account is permanently deleted. If the user is using Azure AD credentials, manage the account through the Azure AD User Management Portal instead. Select Submit.

If there are any deactivated users, they are listed on the Remove Users page. You can reactivate or delete a user from this list.

Monitor account activity and device usage for a single user

In the Yammer admin center, go to Users > Account Activity. Type a user's name. You'll see what devices they are currently signed in on, when they last signed in on each device, and which IP address was used. You can also sign the user out of these devices.

Note: If your Yammer network is in Native Mode, this action can be performed only in the Azure AD User Management Portal and not within the Yammer Admin portal.

Bulk update users by importing a .CSV file

Note: If your Yammer network is in Native Mode, this action can be performed only in the Azure AD User Management Portal and not within the Yammer Admin portal. Changes via bulk edit can take up to 24 hours to take effect throughout your network.

Select Export. This provides a .ZIP file containing the exported .CSV files.

Users can adjust their own notification settings by selecting the three dots in the top navigation bar and going to Edit Profile > Notifications. They can also go to Edit Profile > Preferences to adjust their own message and time zone settings.

Related articles

Manage Yammer users across their lifecycle from Office 365
Can I unsubscribe myself from Yammer?
Remove a user from a group
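As a small illustration of the deactivation lifecycle described above (a deactivated account can be reactivated within 90 days, after which it is permanently deleted), here is a sketch in JavaScript. The function and state names are ours, not a Yammer API, and the treatment of day 90 itself is our assumption:

```javascript
// Illustrative model of the 90-day deactivation window described in
// the article. Not a Yammer API: the names and the boundary handling
// (day 90 counted as still reactivatable) are assumptions.
function accountState(daysSinceDeactivation) {
  if (daysSinceDeactivation < 0) return "active";          // never deactivated
  if (daysSinceDeactivation <= 90) return "reactivatable"; // within 90 days
  return "deleted";                                        // after 90 days
}
```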
https://docs.microsoft.com/en-us/yammer/manage-yammer-users/add-block-or-remove-users?redirectSourcePath=%252fpt-BR%252farticle%252fGerenciar-seus-usu%2525C3%2525A1rios-do-Yammer-guia-de-administra%2525C3%2525A7%2525C3%2525A3o-do-Yammer-0fc72b66-cbf1-4202-bcf0-f2174ea96798
Defining and Referring to Class Parameters This chapter describes how to define class parameters and how to refer to them programmatically. Introduction to Class Parameters A class parameter defines a special constant value available to all objects of a given class. When you create a class definition (or at any point before compilation), you can set the values for its class parameters. By default, the value of each parameter is the null string, but you can specify a non-null value as part of the parameter definition. At compile-time, the value of the parameter is established for all instances of a class. With rare exceptions, this value cannot be altered at runtime. Class parameters are typically used for the following purposes: To customize the behavior of the various data type classes (such as providing validation information). To define user-specific information about a class definition. A class parameter is simply an arbitrary name-value pair; you can use it to store any information you like about a class. To define a class-specific constant value. To provide parameterized values for method generator methods to use. A method generator is a special type of method whose implementation is actually a short program that is run at compile-time in order to generate the actual runtime code for the method. Many method generators use class parameters. Defining Class Parameters To add a class parameter to a class definition, add an element like one of the following to the class: Parameter PARAMNAME; Parameter PARAMNAME as Type; Parameter PARAMNAME as Type = value; Parameter PARAMNAME as Type [ Keywords ] = value; Where PARAMNAME is the name of the parameter. Note that by convention, parameters in InterSystems IRIS® system classes are nearly all in uppercase; this convention provides an easy way to distinguish parameters from other class members, merely by name. There is no requirement for you to do the same. Type is a parameter type. See the next section. 
value is the value of the parameter. In most cases, this is a literal value such as 100 or "MyValue". For some types, this can be an ObjectScript expression. See the next section. Keywords represents any parameter keywords. These are optional. For an introduction to keywords, see "Compiler Keywords," earlier in this book. For parameter keywords, see "Parameter Keywords" in the Class Definition Reference.

Parameter Types and Values

It is not necessary to specify a parameter type. If you do, note that this information is primarily meant for use by the development environment. The parameter types include BOOLEAN, STRING, and INTEGER. Note that these are not InterSystems IRIS class names. For a complete list, see "Parameter Definitions" in the Class Definition Reference. Except for the types COSEXPRESSION and CONFIGVALUE (both described in subsections), the compiler ignores the parameter types.

Class Parameter to Be Evaluated at Runtime (COSEXPRESSION)

You can define a parameter as an ObjectScript expression that is evaluated at runtime. To do so, specify its type as COSEXPRESSION and specify an ObjectScript expression as the value:

Parameter PARAMNAME As COSEXPRESSION = "ObjectScriptExpression";

where PARAMNAME is the parameter being defined and ObjectScriptExpression is the ObjectScript expression that is evaluated at runtime. An example class parameter definition would be:

Parameter DateParam As COSEXPRESSION = "$H";

Class Parameter to Be Evaluated at Compile Time (Curly Braces)

You can define a parameter as an ObjectScript expression that is evaluated at compile time. To do so, specify no type and specify the value in curly braces:

Parameter PARAMNAME = {ObjectScriptExpression};

where PARAMNAME is the parameter being defined and ObjectScriptExpression is the ObjectScript expression that is evaluated at compile time.
For example:

Parameter COMPILETIME = {$zdatetime($h)};

Class Parameter to Be Updated at Runtime (CONFIGVALUE)

You can define a parameter so that it can be modified outside of the class definition. To do so, specify its type as CONFIGVALUE. In this case, you can modify the parameter via the $SYSTEM.OBJ.UpdateConfigParam() method. For example, the following changes the value of the parameter MYPARM (in the class MyApp.MyClass) so that its new value is 42:

set sc=$system.OBJ.UpdateConfigParam("MyApp.MyClass","MYPARM",42)

Note that $SYSTEM.OBJ.UpdateConfigParam() affects the generated class descriptor as used by any new processes, but does not affect the class definition. If you recompile the class, InterSystems IRIS regenerates the class descriptor, which will now use the value of this parameter as contained in the class definition, thus overwriting the change made via $SYSTEM.OBJ.UpdateConfigParam().

Referring to Parameters of a Class

To refer to a parameter of a class, you can do any of the following:

Within a method of the associated class, use the following expression:

..#PARMNAME

You can use this expression with the DO and SET commands, or you can use it as part of another expression. The following shows one possibility:

set object.PropName=..#PARMNAME

In the next variation, a method in the class checks the value of a parameter and uses that to control subsequent processing:

if ..#PARMNAME=1 { //do something } else { //do something else }

To access a parameter in any class, use the following expression:

##class(Package.Class).#PARMNAME

where Package.Class is the name of the class and PARMNAME is the name of the parameter. This syntax accesses the given class parameter and returns its value. You can use this expression with commands such as DO and SET, or you can use it as part of another expression. The following shows an example:

w ##class(%XML.Adaptor).#XMLENABLED

This displays whether methods generated by the XML adaptor are XML enabled, which by default is set to 1.
To access the parameter, where the parameter name is not determined until runtime, use the $PARAMETER function: $PARAMETER(classnameOrOref,parameter) where classnameOrOref is either the fully qualified name of a class or an OREF of an instance of the class, and parameter evaluates to the name of a parameter in the associated class. For information on OREFs, see “Working with Registered Objects.” For more information, see the $PARAMETER page in the ObjectScript Reference.
https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_parameters
Discussion Forums have been deprecated and will be removed in a future release of Appian. Support for specifying Forum message storage with a custom XML configuration file ( forums-config*.xml) or the change-paths.bat(.sh) script has also been deprecated. Use News within the Tempo environment. Tempo offers many benefits, such as mobile support, modern web browser support, improved search functionality, and greater user control through visibility settings and message targeting. See Social Smart Services. For information about the Discussion Forums [Deprecated], see the Appian 16.2 documentation. For information about Forum message storage configurations, see the Appian 16.2 documentation.
https://docs.appian.com/suite/help/20.3/Discussion_Forums.html
Backlog and meeting agenda

Everything starting with a tension (drivers and proposals) is added to a backlog. If you're going ahead and doing it yourself, it goes to your personal operations backlog. If it's a team effort that doesn't require a team agreement (or already has one) and we can just go ahead and do it, it goes to our team operations backlog (usually a Trello or Planner board). If it's a team effort that requires a team agreement, it goes to our team governance backlog. Most teams' governance backlog is in Office 365 Planner, with buckets for tension, driver, proposal, agenda, and final agreements. When there are designed proposals on the table, these are added to the backlog and prioritised either by the meeting facilitator (responsible for sticking to the agenda and making sure decisions are made) or together with the team at the beginning of the meeting. The meeting secretary (responsible for making sure everyone is invited and knows when and where to go) sends out the agenda with the governance backlog items (i.e. proposals) at the very minimum 24 hours before the governance meeting (as described below). This kind of proactivity invites everyone to take responsibility and be prepared for the meeting. It's also a way of co-creating the agenda so that we all agree on what the meeting should be about. If something should be changed or added, this is communicated to the meeting secretary. Simply put, it saves precious time and energy.
https://docs.comprend.com/space-explorer/making-decisions-and-getting-stuff-done/backlog-and-meeting-agenda
On October 19, 2020, the merchant portal was updated to provide a fresh look at transactional data, enhancing usability, providing a snapshot of historical data, and laying the groundwork for future enhancements. Download the Merchant Portal Guide above and read about what is new in the portal and a few features coming soon.
https://docs.ncr.com/payments/merchant-portal-guide-3/
Version 3.0.0 Update Log

New Features

Undo and Redo

This was probably a big feature that everyone was waiting for. Although Sketchware is really easy to use with its drag-and-drop functionality, one wrong drag and drop could really hurt you if you forgot to save. Accidental deletion of widgets and blocks also happened often. So we introduce to you: undo and redo functionality for both widgets and blocks.

Save Frequently Used Widgets and Blocks

When we were creating more complicated projects, we often found the need to re-create the same screens again. For example, a project that requires Firebase Auth would need a login page. Because these often follow the same styling rules, we wanted to save the hassle of re-creating similar widgets over and over again. Now, you will be able to save a custom widget that you can recycle across all your projects. For example, if I created a like button that I want to re-use, I would click on the save button on the property tab to save the widget. Blocks work similarly.

Property Tab

We've completely renovated the property tab, so that you save yourself a click by editing on the spot. For new users, the old property tab had a high learning curve, so we made it more user-friendly. From this tab, you will be able to change properties and access the associated events as well.

Separation of Event and Component Tabs

Whenever working on large projects, we found the user interface to be very cluttered and complex. In version 2.x, the events and the components were placed in the same tab even though they had very different functionalities. In this version, we decided to separate these two topics to make the development experience less cluttered. In addition, we've categorized events into 5 different types: Activity, View, Component, Drawer, and MoreBlock.
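The undo/redo behavior described above is classically implemented with two stacks: performing an action pushes it on the undo stack, undoing moves it to the redo stack, and any new edit clears the redo chain. A minimal Java sketch of that structure (the class and action names are ours, not Sketchware's internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal two-stack undo/redo history, the classic structure behind
// editor undo features. Actions are modeled as plain strings here
// purely for illustration.
public class History {
    private final Deque<String> undoStack = new ArrayDeque<>();
    private final Deque<String> redoStack = new ArrayDeque<>();

    public void perform(String action) {
        undoStack.push(action);
        redoStack.clear(); // a new edit invalidates any redo chain
    }

    // Returns the undone action, or null if there is nothing to undo.
    public String undo() {
        if (undoStack.isEmpty()) return null;
        String action = undoStack.pop();
        redoStack.push(action);
        return action;
    }

    // Returns the redone action, or null if there is nothing to redo.
    public String redo() {
        if (redoStack.isEmpty()) return null;
        String action = redoStack.pop();
        undoStack.push(action);
        return action;
    }
}
```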
My Collection

Just like widgets and blocks, you probably find yourself reusing the same assets over and over, such as images, sounds, and fonts. We wanted to apply the same principle to assets. Now, you will have a "Collection" box where you can save frequently used assets, as well as widgets and blocks. These are shared assets that you can access from all your projects.

Right Drawer

We tried our best to remove unnecessary context menus and popups to make the user experience as smooth as possible. You will notice small animations here and there. Using a right drawer also gives us more room to put new functionality.

Ability to Edit Resource Files

You can now change existing resource files, such as images, fonts, and sounds, to different files without having to erase and reimport them. This feature will be very useful if you have a boilerplate for other projects.

Bulk Image Import

Now, you can import images in bulk without having to import them one by one. The imported images will follow a naming principle you assign. For example, if you import two images and name the bulk abcd, the images will be named abcd_1 and abcd_2.

Language Settings / Sharing Platform

We have users from all across the world, so we wanted to create a space where users can share their translated strings.xml files. Here, users will be able to share, download, and apply different language files in Sketchware. If you speak more than one language, please help Sketchware become localized in different countries!

New Components

Camera, Firebase Storage, and File components will be added.

... And more!

There is more to cover and not enough space, so here are a few screenshots from the new version. Hope you enjoy them!

Deprecated: Add Source Directly Block

Add Source Directly won't be available anymore starting with this update.
Existing projects with add source directly blocks will still compile, but you won't be able to add new add source directly blocks. You may find this very saddening, but please try to see this issue from our perspective. Sketchware was designed for block programming, and we want to make Sketchware more scalable.

Tutorial Tab Under Construction

Tutorials will be temporarily closed for maintenance. We found the tutorials to be very tedious, because the user doesn't have the ability to fast forward, preview, or review. We are hoping to make the tutorials really "step-by-step" this time, allowing users to learn visually by observing the actions rather than performing each action themselves. That's it! We are hoping to release the update really soon. Thank you for reading this long update log. :-)
https://docs.sketchware.io/blog/2018/06/05/new-version-3.0.0.html
Upgrade, rollback, or uninstall Genesys Engagement Service Contents Learn how to upgrade, rollback or uninstall Genesys Engagement Service. Upgrade Genesys Engagement Service GES uses a rolling upgrade approach. During a prescribed maintenance window, upgrade the Helm deployment for GES by running the following command: > helm upgrade --install -n <GES_NAMESPACE> -f <path/to/values.yaml> <GES_RELEASE_NAME> <PATH_TO_GES_HELM_CHART> Provided there are no issues or errors when running the Helm command, you can expect GES to work with any updates supplied in the latest version of the Helm charts. You can verify that GES has upgraded successfully by either looking at dashboards (if provisioned) or using the health check APIs for GES (through the internal port query ges:3050/ges/v1/health/detail) to make sure that all dependencies are working as provided. In some instances, there might be changes to the GES Helm manifest that, due to restrictions in Kubernetes, require the existing GES deployment to be uninstalled and then re-installed, rather than simply upgraded in place. Consult the Callback Release Notes to see if this will be necessary. Rollback Genesys Engagement Service To roll back to a previous iteration of GES, follow the same process you used to upgrade GES. Be sure to downgrade both the GES version and the Helm chart version that you're using. It might be necessary to use an older version of the Helm values file. Uninstall Genesys Engagement Service You can uninstall GES using the Helm uninstall command. Depending on how services like Postgres and Redis are provisioned for GES, you will have to decommission those services separately. Discussion of that process is outside the scope of this document.
https://all.docs.genesys.com/PEC-CAB/Current/CABPEGuide/Upgrade
Adding custom Javascript (JS) If you are looking to add some custom Javascript to your platform, whether for integrating 3rd-party applications or your own scripts, you can easily do so via the Integrations tab. - Navigate to the Integrations tab. You can find this in the sidebar on the left hand side of the Admin Dashboard. - Click on the Manage button for the Custom Javascript box. - Enable the Custom Javascript option by activating the toggle. - Paste your code into one of the code fields. You can choose where you'd like to add your Javascript code; either in the <head> tag (first box) or at the end of the <body> tag (second box). - Then, after having added the script, be sure to click the Save Changes button.
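As an illustration of the kind of snippet you might paste into one of the code fields, here is a small hypothetical script. Nothing in it is an Academy of Mine API; the function name and behavior are purely examples:

```javascript
// Hypothetical snippet for the custom Javascript field.
// Builds a short greeting from a user's full name; this is purely an
// illustration of custom JS, not a platform API.
function buildWelcomeMessage(fullName) {
  // Take the first word of the name and build a greeting string.
  var firstName = String(fullName).trim().split(/\s+/)[0];
  return "Welcome back, " + firstName + "!";
}

// In the <head> or <body> field you would typically run code once the
// page is ready, for example:
// document.addEventListener("DOMContentLoaded", function () {
//   console.log(buildWelcomeMessage("Jane Doe"));
// });
```

Remember to click Save Changes after pasting; the toggle must be enabled for the script to load.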
https://docs.academyofmine.com/article/51-adding-custom-javascript-js
VGridRows.AddRange(BaseRow[]) Method Namespace: DevExpress.XtraVerticalGrid.Rows Assembly: DevExpress.XtraVerticalGrid.v21.2.dll Declaration public virtual void AddRange( BaseRow[] rows ) Public Overridable Sub AddRange( rows As BaseRow() ) Parameters Remarks This method gets an array of row objects each of which is represented by an instance of the BaseRow class descendant. You can use the AddRange method to quickly add a group of rows to the rows collection represented by a VGridRows object instead of manually adding each row to the collection using the VGridRows.Add method. When using this method to add rows to the collection, you do not need to call the VGridControlBase.BeginUpdate and VGridControlBase.EndUpdate methods to optimize performance. To remove the previously added row, use its Dispose method or the collection’s VGridRows.Remove method. Use the Clear() method if you want to remove all rows from the collection.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraVerticalGrid.Rows.VGridRows.AddRange(DevExpress.XtraVerticalGrid.Rows.BaseRow--)
Products

When you open the Products page, the list of all existing products is displayed. You can add a new product by clicking Add Product.

Add New Product

- When you open the Products page, the list of existing products is shown. Click 'Add Product' to add a new product. After providing all the details, click 'Save' so that the product gets listed on the Storefront.
- A form will open up, where you need to enter the details.
- In the general section, enter the product name or title, SKU prefix, product slug, and product description.
- Next, in the more information section, you can map the product to one or more categories.
- Then, in the data section, provide the UPC, HSN, and quantity available; enable the stock status; specify the date of availability, i.e., the date on which the product should become active on the storefront; enable the status; set the sort order for the product in the product list; select the manufacturer of the product from the dropdown; and enable postal code verification if you are shipping only to particular locations (the Customer will have to verify the postal code, and only if delivery is available can they check out the product and place an order).
- Add images to the product by clicking on 'Add Images'; you can either select an image from the existing library (if already available) or click on upload and fetch it from your system.
- If you want to show related products, select them from the list and add them as related products.
- If you want to add variants to the product, click on the variants from the available list on the right-hand side. This will display the list of variant values on the right-hand side. You may only show up to five variant values for a particular variant. You may select more than one variant from the left-hand side and add up to five values for each variant.
Once the variants are added, the list of different SKUs will be generated and displayed in the following section on the same page. Each SKU is a unique identity (based on the combination of different product variants). For each SKU, you may add a unique photo, unique pricing, a barcode, and inventory, and then activate each of them (you may deactivate any SKU at any time).
- Next, in the pricing section, you can set a default price by mentioning the product cost and adding tax (either a value or a percentage). It will then show the final default price of the product.
- Then, a discount or special price can be added by mentioning the priority, the discounted or special price, and the from and to dates on which the discount should be offered. Special pricing and discount pricing can be separate and exclusive for each SKU. Special and discount pricing can be set for SKUs only after the product has already been listed on the store, so this feature is available only in 'edit product' and not 'add product'.
- Next, the SEO title, keywords, and description can be added to make the product detail page search-engine friendly.
- Then, tier pricing can be added for offering discounts and price reductions on bulk quantity purchases. Tier pricing has to be set for each SKU separately. You can select an SKU, mention the quantity threshold ('and above'), and set the price per item that applies from that quantity. You can add multiple tiers for different purchase quantities. The tier pricing has to be activated. Tier pricing can be set for SKUs only after the product has already been listed on the store, so this feature is available only in 'edit product' and not 'add product'.

List Product

- All the newly added products will be listed on this page.
- The page has features to make a product Active or Inactive.
- You can hide a product or make it visible by changing the status.

Update Product

- Locate the product that needs to be edited and then click the 'edit' icon.
- It will redirect to the edit page in edit mode.
- There you can edit your product details and they will be updated.

Delete Product

- Locate the product that needs to be deleted on the product page.
- Click the delete button to remove the product from the list.

Tax Management

- In the 'More Information' section, select the 'Price' tab.
- In the default price, under tax, select either value or percentage.
- Select percentage if you want to specify a percentage of tax.
- Select value if you want to specify a fixed amount for tax deduction.
- Enter the value.

Bulk Export Option

- You will see the list of products.
- Either select all products, or select specific products.
- Click on the 'Export' option.
- You will get the list of selected products, along with all their details, in an Excel sheet.

Product Question and Answer

- You will see the list of products.
- Look for the product for which you want to add a question and an answer.
- Under 'Action', click on the 'Question' icon.
- The Admin can view the list of questions posted by Customers. The Admin can click on a question and enable it to make it live on the product detail page.
- By clicking on a particular question, the Admin can also view the answers posted by other Customers who have already purchased the product. The Admin has to enable the answers to make them live on the product detail page.
- To add a question, the Admin has to click on 'Add Question'.
- The Admin can post a question, post an answer for it, and click on 'Post'.

Add Tier Price for a Product

- Go to the second section, i.e., More Information.
- Click on Tier Price.
- Click on the '+' option.
- Mention the quantity threshold ('and above') and the 'Price per Item' for it. For example, for quantity 500 and above, the price per product can be $10; for quantity 800 and above, the price per product can be $8; and so on.
- Click either 'Yes' or 'No' for 'Do you want to activate this Tier Price?'
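The pricing rules described above (a default price built from cost plus tax as a value or a percentage, and tier prices that apply "for quantity N and above") can be sketched in plain JavaScript. This is only an illustration of the logic, not Spurtcommerce code; the function and field names are ours:

```javascript
// Sketch of the pricing logic described in the docs (illustrative
// names, not Spurtcommerce APIs).

// Default price = product cost plus tax, where tax is either a fixed
// value or a percentage of the cost.
function finalDefaultPrice(cost, tax) {
  if (tax.type === "percentage") {
    return cost + (cost * tax.amount) / 100;
  }
  return cost + tax.amount; // tax.type === "value"
}

// Tier pricing: each tier is a "quantity and above" rule. The unit
// price is taken from the highest threshold the quantity meets; if no
// tier applies, the default price is used.
function tierUnitPrice(tiers, quantity, defaultPrice) {
  var best = null;
  for (var i = 0; i < tiers.length; i++) {
    var t = tiers[i];
    if (quantity >= t.minQty && (best === null || t.minQty > best.minQty)) {
      best = t;
    }
  }
  return best ? best.pricePerItem : defaultPrice;
}
```

With the docs' example tiers ($10 for 500 and above, $8 for 800 and above), an order of 600 units resolves to the $10 tier and an order of 900 units to the $8 tier.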
https://docs.spurtcommerce.com/Admin/Catalog/Product.html
Trifacta Wrangler Pro is no longer available. This space will be removed soon. Please visit this page instead: Replace Groups of Values

Whether data is missing, mismatched, or simply wrong, you can use a variety of methods in the Trifacta® application to replace values in one or more columns with literal values or pattern-based replacements.

Replacement methods

In the Transformer page, you can use the following methods to replace values:

Replace by selection

When you select data in the data grid, the replacement suggestions are pre-specified for you, including a number of variants available in the suggestion card.

Notes:
- Suggestions are typically conservative in the scope of their changes. Case-sensitive searches and matching of the first occurrence only are the default settings.
- Order of listing of suggestions in a suggestion card:
  - Pattern-based replacements are listed first. These replacements use Trifacta patterns, instead of regular expressions. Regular expressions can be more difficult to control.
  - Literal value replacements are listed below the pattern-based ones.

For more information, see Overview of Predictive Transformation.

Mask data

For privacy or sensitivity reasons, you may wish to mask sensitive data in one or more columns with fixed strings.

Delete whole column(s)

If you need to remove the data in an entire column, the easiest method is to delete the column. Select one or more columns and then select Delete from the column drop-down. See Remove Data.

Masking all values

You can use a transformation like the following to replace all values in a column with a simple string. In this case, the value #REDACTED# has been inserted in place of all values in the column.

NOTE: This replacement changes the data type of the column to String. If you must retain the original data type, the replacement value should be valid for the data type.

Partial masking of values

Suppose you wish to partially mask data in a column.
In the following example, data for the AcctNum column is masked, except for the last four characters (digits): Mask multiple columns based on data type You can use the following type of transformation to hide data based on data type. In this example, the values in all columns with Social Security Number (SSN) are replaced with a masking value: XXX-XX-XXXX: This method performs a simple text replacement of the data in the column(s). After this transformation has been applied to the data, the source data is no longer available, unless you step back to a step before this one. For these kinds of operations, you may find it more secure to apply these kinds of masking operations to the source data in a single recipe and then make that output available to other users to use as an imported dataset. Replace with values from another column Replace whole column You can do simple replacements of data from one column into another with transformations like the following. In this example, the values of colB are replaced with the values of colA with 0.15 added to them: Replace partial values from another column You can use the MERGE function to blend full or partial sets of columns into a new column. In the following example, the newBrandId value is concatenated with the product code in the ProdId column to create a new product identifier: Replace between positions You can perform replacements based on character positions that you specify as part of the transformation. - The beginning character value is specified as a number from 0, which starts on the left. - The ending character value must be equal to or greater than the beginning character value. In the following example, the Whse_Name column values are prepended with the value old-. Search and replace text or pattern You can search and replace content in your dataset based on literals or patterns. In the following example, the value ##CLT_NAME## is replaced with Our Customer, Inc. 
across all columns in the dataset: Replace missing values Replace missing with zeroes For numeric data, you may choose to replace values that are missing in a column with zeros. The following transformation sets missing values in the Qty and DiscountPct columns of Decimal data type to 0: Replace missing with average values. Replace mismatched values You can perform replacements based on the values in a column that are mismatched against a specified type. In the following example, Datetime values that do not match the yyyy*mm*dd format, where the asterisk ( * ) is a wildcard, are replaced. NOTE: In the above example, the Date/Time type parameter applies only to replacements that are mismatched against the Date/Time data type. This parameter is used to specify the Datetime format against which the source values are validated. The parameter does not appear in Replace mismatched values transformations for other data types. Rename columns For more information, see Rename Columns.
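The "partial masking of values" step described above (keeping only the last four digits of an account number) can be sketched in plain Python. This is a hedged illustration, not Trifacta's implementation; the `AcctNum` column name mirrors the example, and the sample account values are invented.

```python
# A minimal pure-Python sketch of "mask all but the last four characters".
# Records and values are illustrative placeholders.
records = [{"AcctNum": "1234567890"}, {"AcctNum": "4485922939"}]

def mask_all_but_last4(value: str) -> str:
    """Keep the trailing four characters; replace the rest with '#'."""
    if len(value) <= 4:
        return value
    return "#" * (len(value) - 4) + value[-4:]

for row in records:
    row["AcctNum"] = mask_all_but_last4(row["AcctNum"])

print([row["AcctNum"] for row in records])  # ['######7890', '######2939']
```

As with the Trifacta transformation, this is a one-way text replacement: once applied, the original values are gone unless you keep the unmasked source elsewhere.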
https://docs.trifacta.com/pages/viewpage.action?pageId=163020206&navigatingVersions=true
2021-11-27T09:16:32
CC-MAIN-2021-49
1637964358153.33
[]
docs.trifacta.com
(PHP 5 >= 5.3.3, PHP 7) fastcgi_finish_request — Flushes all response data to the client fastcgi_finish_request ( ) : bool This function flushes all response data to the client and finishes the request. This allows for time-consuming tasks to be performed without leaving the connection to the client open. Returns TRUE on success or FALSE on failure.
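The pattern this function enables — deliver the complete response first, then run slow follow-up work with the client already served — can be sketched language-neutrally. This Python sketch is only an analogy; the function names and task names are illustrative, not part of PHP or FastCGI.

```python
# Hedged sketch of the "finish the request, then keep working" pattern.
def handle_request(send_response):
    send_response("Hello, client!")   # analogue of fastcgi_finish_request():
                                      # the client gets the full response here
    deferred_work = []                # time-consuming post-response tasks
    for task in ("write-audit-log", "send-notification-email"):
        deferred_work.append(task)    # runs after the connection is "finished"
    return deferred_work

sent = []
deferred = handle_request(sent.append)
print(sent)      # ['Hello, client!']
print(deferred)  # ['write-audit-log', 'send-notification-email']
```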
https://getdocs.org/Php/docs/function.fastcgi-finish-request
2021-11-27T09:04:32
CC-MAIN-2021-49
1637964358153.33
[]
getdocs.org
Markdown Guide and Code This guide details how to use Markdown and custom features available for this site. You can review Markdown code in this GitHub file or in local cloned files in the /reference/template.md file. You can write content using GitHub-flavored Markdown syntax. See the Contributions Guide for details on creating and updating documentation. #Markdown Syntax To serve as an example page when styling markdown based Docusaurus sites. About File Names The doc site requires all file names in lowercase, no spaces, with dashes used as needed. This includes Markdown, images, videos, and downloadable files. #Simple Template If you need a simple template to start a new page, see the /reference/templates/markdown.md file in your local cloned files. This file gives you simple Markdown to get started. For more Markdown formats, see this page's code. #Markdown Frontmatter All Markdown files require some information at the top of every file indicating the title, id, and other attributes as needed. Copy, paste, and fill out the following in the first lines of your Markdown file: See the following options for frontmatter fields: #Headers Use hashtags # to indicate the heading level. You should not use H1, this is automatically used for the page title when building the site. Link anchors automatically also generate. For numbered lists, always start with 1.. The generator will automatically number the list correctly when building the site: - First ordered list item - Another item - Unordered sub-list. - Actual numbers don't matter, just that it's a number - Ordered sub-list - And another item. - Use asterisks *for unordered lists. You can configure your editor to always use this format for lists. For Visual Studio Code, configure the following settings: - Ordered List: Marker set to one. - Unordered List: Marker set to *. #Links See the following example code for Markdown links. URLs and URLs in angle brackets will automatically get turned into links. 
or and sometimes example.com (but not on GitHub, for example). Some text to show that the reference links can be added at the bottom: #Images To include images, save PNG (.png) files to the /static/img folder. Add an image to a markdown file using the following format: See the following examples: Reference-style: alt text Images from any folder can be used by providing a path to the file. The path should be relative to the markdown file. #Images for Dark and Light Mode You can save diagrams, animations, and images for dark and light mode to provide the best look for all users. At the bottom of your markdown file, add this code: note Depending on the location of your Markdown file, you may need more or less ../. Save your light image with a background color of white ( #ffffff) or charcoal ( #18191a). Save your images to the /static/img folder and add the image using the following code: For example: #Tabs Use the following code to create tabbed content. You can use Markdown in these tabs, including text, code content, images, and more. At the bottom of the Markdown file, add the following code: For each set of tabs, use the following code: See the following tabbed code examples: - UNet Example - MLAPI Example #Code important All code samples should meet coding standard guidelines and requirements for Unity. They should be tested, functioning, and provide live examples that developers can reuse easily. To add code inline, use single ticks: To add code sample blocks, use three ticks and the programming language. Optionally, you can add a title="name" to describe the example. To highlight a line of code, add {#} with the line number in the brackets. #Embedding Code from a Repository To embed a code sample from a file in a GitHub repository, use reference in the code block with a link to the file. The code sample is embedded using the language with a link to the original file. 
This code references a JavaScript file: ```js reference, for example: You can use a link to a file embedding the entire file, or embed a range of code lines using #L and a line range at the end of the link, such as #L105-108. #Tables Tables can have one header at the top, with a line indicating the header and alignment, and rows surrounded by pipes (|). There must be at least 3 dashes separating each header cell. You can also use inline Markdown. Custom styles are available for tables, added using <div> tags around the markdown table. <div class="table-rows"> <div class="table-columns"> supports up to 5 columns with differing shades and highlights a column on hover. #Embed Files We support embedding using the react-iframe plugin. You can use this feature for YouTube videos, Google Slides and Docs, and much more. Add the following code once in the Markdown file: #Embedding Youtube To embed a YouTube video, use the following code with a YouTube link: #Admonitions Add an admonition using three colons, the type, and closing content with colons: note This is a note. tip This is a tip. important This is important. caution This is a caution. warning This is a warning. Community Contribution Fun Fact Use for helpful facts and info. Best Practice Highlight best practices and recommendations. unity Information specific to Unity, for example license information. #Mermaid Charts Mermaid provides sequence diagrams, charts, and more. Use these charts to detail processes, workflows, inheritance, and more. See the Mermaid guide for specifics and examples, and use the live editor to generate code. See the following example code for adding Mermaid charts. You need to include the import line once per page. #MDX You can write JSX and use React components within your Markdown thanks to MDX. Docusaurus green and Facebook blue are my favorite colors. I can write Markdown alongside my JSX! 
#Import Markdown Files Using MDX, you can create markdown files with a section of content that can be imported to multiple files. For example, you may have the same notation or install instructions across multiple files. Save your files with a file name starting with an underscore (_) in a shared folder (available in each version folder). These files will be included in generated versions. Currently, any headings will not build in the on-page navigation (on the right). You can add a heading to the page then import the file. Use the following code with an absolute path to the shared folder file.
https://docs-multiplayer.unity3d.com/reference/template/index.html
2021-11-27T08:25:41
CC-MAIN-2021-49
1637964358153.33
[array(['/img/ping-animation-dark.gif?text=DarkMode', 'Example banner'], dtype=object) ]
docs-multiplayer.unity3d.com
Integrating Zoom Meetings for Live Webinars If you already use Zoom and would like to include live webinars in your Courses, this guide will help you get started. To get going with Zoom, you'll need to tackle the following steps: - Set up your Zoom API credentials - Add your Zoom API credentials to Academy of Mine - Schedule your Zoom meeting - Create your Webinar Module - Add your Module to your Course In this article - Setting Up Your Zoom API Credentials - Adding Your Zoom API Credentials to Academy of Mine - Schedule Your Zoom Meeting - Create a Webinar Module - Schedule Your Webinar Module - Add Your Webinar Module to Your Course - Instructor Experience of a Zoom Meeting in a Webinar Module - Student Experience of a Zoom Meeting in a Webinar Module Setting Up Your Zoom API Credentials - 1 Integrating Zoom with Academy of Mine requires adding a JWT App API Key and API Secret from Zoom. If you don't have them already, detailed instructions for creating the JWT App Key and Secret can be found in the Zoom Developer Documentation. - - A First, head to the Zoom App Marketplace. In the Develop menu, select Build App. - B Then choose JWT and select Create. - C Create an app by adding your company's information and developer contact information. Select Continue to move to the next section. - D Copy your API Key and API Secret for use in your Academy of Mine integration. Adding Your Zoom API Credentials to Academy of Mine - 1 Returning to your Academy of Mine account, the Zoom integration can be found on the Integrations page of your dashboard. - 2 Select Manage to link your Zoom account to Academy of Mine. - 3 After creating your JWT API Key and JWT API Secret, they can be added to the Zoom Integration Settings. Lastly, you must Enable Zoom. If Enable is turned off, your integration will not function, even if it has been set up correctly. This setting can be changed in the Zoom Settings page, or on the Integrations page itself. 
Once your Zoom account is integrated, you can use it to schedule webinars in your courses by creating Webinar Modules. Schedule Your Zoom Meeting Before you can create your Webinar Module, a Zoom Meeting must first be created inside your Zoom account. - 1 In Zoom, head to the Schedule section to create your meeting. - 2 Make sure to collect your Meeting ID and Meeting Passcode for later use in your Webinar Module. -. - 1 A Zoom Meeting must first be created inside your Zoom account. Once you have the Meeting ID and Meeting Passcode, you can schedule your webinar inside the Webinar Module. - 2 The Module requires you to set the Start Time and End Time of the meeting. Keep in mind that the Start Time and End Time reflect the time zone set for your Academy of Mine account on the General Settings page. - 3 You'll then need to add the Zoom Meeting ID with no spaces, and then the Zoom Meeting Passcode. - the duration of your webinar. - 1 The Instructor will launch the scheduled Meeting directly from their Zoom app, using their camera or screen sharing as they would normally. - 2 Inside the Zoom Meeting, the Instructor acting as the Zoom Host will be able to view the active Student participants, control audio and video sharing, and chat. - 3 Instructors will also be able to share their screen with Students. - 4 When finished, the Instructor can end the Meeting for all participants. - 1 Inside the Module, the details of the webinar will be displayed, with a Launch Webinar button that will open the Zoom meeting directly in the Student's browser. Near the Launch Webinar button, the Student will be notified of the status of the Meeting. They will be alerted that they can launch the webinar, or that the start time is Past. - 2 The Zoom Meeting will launch inside the same browser tab as the Webinar Module. If requested by the Instructor's Meeting settings, the Student will wait to be admitted by the Instructor. 
- 3 Students will have the option to control audio and video sharing, as well as chat. Screen Sharing will be turned off by default. The Student's name in the meeting will be the name they have given for their Course. - 4 When a Student leaves the Meeting, they will be returned to the main Course page. - 5 When the Student leaves the webinar, their Webinar Module is changed to 100% Completed. Create a Webinar Module Modules are the building blocks of your Course. You can learn more about Modules in this Help Docs article. Schedule Your Webinar Module Add Your Webinar Module to Your Course When you have finished scheduling your webinar, you will need to add your Webinar Module to your Course. This can be done on the Curriculum page of your Course.
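The integration above is driven by a Zoom JWT app's API Key and API Secret. For illustration only, the token such an app signs is a standard HS256 JWT whose `iss` claim carries the API Key; the sketch below builds one by hand with the standard library. The key/secret values are placeholders, and the platform normally constructs this token for you once the credentials are saved.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(api_key: str, api_secret: str, ttl_seconds: int = 3600) -> str:
    """Assemble header.payload.signature, HMAC-SHA256 signed with the secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(
        {"iss": api_key, "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(
        hmac.new(api_secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

token = make_jwt("EXAMPLE_API_KEY", "EXAMPLE_API_SECRET")
print(token.count("."))  # 2 — header.payload.signature
```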
https://docs.academyofmine.com/article/142-integrating-zoom-meetings-for-live-webinars
2021-11-27T07:55:40
CC-MAIN-2021-49
1637964358153.33
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/5fec9f24f24ccf588e3ff6eb/file-JBBYqwJheE.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/5fec921441fcb56e4047ceb3/file-bLiRnivlvR.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e888b922c7d3a7e9aea626e/images/5feb7c3b6451e825e3b8ed2e/file-N6Z8YAm93m.png', None], dtype=object) ]
docs.academyofmine.com
, and clicking “setup alert” button. This will pre-populate the new trigger form with relevant information. New/Edit Trigger Form The form for creating and editing triggers is identical. When editing, the form is pre-populated with the current trigger values. A description of each field follows (all fields are required): - Trigger Name - A name used to identify the trigger (for example ‘<consumer group name> under consumption’). Uniqueness is not enforced but you should use different names to avoid confusion. - Component Type - Currently we only support alerting on consumer group monitoring data and this is the only possible option for this field. In the future we may support alerting on other types of data. - One of “Greater than”, “Less than”, “Equal to”, “Not equal to”. - Value - The value to compare the monitored metric to. - Buffer - The delay behind real time to wait until a time window is considered for triggering (refer to Concepts for more information). After creating a trigger, you will be given the option to associate it with one or more existing actions, or if none exist, to create a new action. Actions Management
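The comparison field described above (one of "Greater than", "Less than", "Equal to", "Not equal to") amounts to a simple predicate over the monitored metric. The sketch below is a hypothetical illustration of that evaluation, not Control Center's actual code or API; the function names are invented.

```python
# Map each comparison label from the trigger form to a predicate.
COMPARISONS = {
    "Greater than": lambda metric, value: metric > value,
    "Less than":    lambda metric, value: metric < value,
    "Equal to":     lambda metric, value: metric == value,
    "Not equal to": lambda metric, value: metric != value,
}

def trigger_fires(comparison: str, monitored_metric: float, value: float) -> bool:
    """True when the monitored metric satisfies the trigger's condition."""
    return COMPARISONS[comparison](monitored_metric, value)

print(trigger_fires("Greater than", 120.0, 100.0))  # True
print(trigger_fires("Equal to", 5.0, 7.0))          # False
```

A real implementation would additionally honor the Buffer field, only evaluating time windows older than the configured delay behind real time.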
https://docs.confluent.io/3.1.1/control-center/docs/alerts.html
2021-11-27T08:48:43
CC-MAIN-2021-49
1637964358153.33
[]
docs.confluent.io
Date: Sun, 28 Dec 2003 10:50:57 -0500 From: Brian Black <BlackBsd@Mountain.Net> To: vanderlaars@online.ie Cc: freebsd-questions@freebsd.org Subject: RE: Dual Boot WinXP + FreeBSD Message-ID: <3FEEFBE1.3060103@Mountain.Net> Next in thread | Raw E-Mail | Index | Archive | Help Hi Julio, i have installed freebsd along side NTFS systems for some time now and they do work. you need to make sure in /stand/sysinstall that u have your partitions, slices etc set up correctly,(I mean set up the way that u think is correct). For Example, even though the XP slice that i have is the 3rd slice on my box(FreeBsd 5-Current, then RH9 Then WinXp) the device name for my xp slice still reads /dev/ad0s1. Though this dont seem right it works, i thought that it should read /dev/ad0s3. So to try and answer your questions. 1. Can I have another dual boot on my machine with XP (NTFS) and FreeBSD? yes u can, remember that u can mount the xp slice for "READ" not "Write". You can only read from the ntfs. "man mount" for more information. (for people who wish to correct me, there are some way to write to the ntfs, i know but as a general rule dont write to this file system type.) 2. Where can I read more about the process of instalation to keep my XP partition alive? u can read more in the handbook. if u do not tamper with your xp partition durring /stand/systinstall then the partition will not be bothered. (just remember what the device name is when you are using fips). What boot loader are u planning on using? i have tried BOOTMAGIC,FREEBSDs boot loader and also i have used grub (which is available in the ports). With grub i was not able to boot fbsd when my fbsd slice was formated with ufs2. this might have been fixed though. have fun. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=67068+0+archive/2003/freebsd-questions/20031231.freebsd-questions
2021-11-27T09:36:04
CC-MAIN-2021-49
1637964358153.33
[]
docs.freebsd.org
Monitoring Overview.
https://docs.morpheusdata.com/en/5.2.4/monitoring/monitoring.html
2021-11-27T08:21:31
CC-MAIN-2021-49
1637964358153.33
[]
docs.morpheusdata.com
Ensure that logging is enabled for all S3 Buckets to track all the requests Risk Level: Low Description This plugin identifies the S3 buckets where logging is disabled. With S3 buckets logging enabled, you can track the requests made to access and modify S3 buckets, and with the logs generated, identify potential security threats by monitoring unusual requests. As good security practice, PingSafe recommends enabling logging for S3 buckets. About the Service Amazon S3: Amazon Simple Storage Service, popularly known as Amazon S3, is a storage space available on the cloud. Using Amazon S3, you can store and retrieve any amount of data. It offers various features such as logging, using which you can keep a track of requests made. You can read more about logging here. Impact Logs are important for keeping a track of requests made to the S3 bucket. It is recommended to regularly keep a close eye on the logs for any unusual activity. In the event of data compromise, generated logs will be useful to identify unusual or unauthorized access. It can eventually lead to the attacker's details. Without logs, the security team will not have any information to begin the investigation. Compliances covered PCI: PCI requires logging of all network access to environments containing cardholder data. To comply with PCI standards, you must enable logging of the Amazon S3 bucket. HIPAA: Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires strict auditing controls around data access. S3 logging helps ensure these controls are met by monitoring access to all bucket objects. Logs should be stored in a secure, remote location. Steps to Reproduce Using AWS Console- - examine whether it is enabled or disabled for the bucket. If it is disabled, you might not be able to monitor unusual requests and track the user involved in case of unauthorized access. - Repeat steps 3 to 4 for all the S3 buckets you want to investigate. 
Steps for Remediation The steps to enable logging are- - select the Edit option beside the section title. - In the edit option, select the Enable option and enter the Target bucket where the logs will be stored in the format “s3://bucket/prefix”. Prefix stands for the subdirectory of the selected bucket. Alternatively, you can also browse to select the path. Finally, click on Save Changes. - Repeat steps 3 to 5 for all the vulnerable S3 buckets.
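Programmatically, the check this plugin performs can be reduced to inspecting the shape of an S3 `GetBucketLogging` response: in boto3, `s3.get_bucket_logging(Bucket=...)` returns a dict that contains a `LoggingEnabled` key only when logging is configured. The sketch below is a hedged illustration of that decision on sample response dicts; the bucket names are placeholders and no AWS call is made.

```python
# Decide whether server-access logging is enabled from a GetBucketLogging-style
# response. An absent "LoggingEnabled" key means logging is off.
def bucket_logging_enabled(get_bucket_logging_response: dict) -> bool:
    return "LoggingEnabled" in get_bucket_logging_response

enabled_response = {
    "LoggingEnabled": {"TargetBucket": "my-log-bucket", "TargetPrefix": "logs/"}
}
disabled_response = {}  # no LoggingEnabled key

print(bucket_logging_enabled(enabled_response),
      bucket_logging_enabled(disabled_response))  # True False
```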
https://docs.pingsafe.ai/knowledge/s3-buckets-with-logging-disabled
2021-11-27T09:27:49
CC-MAIN-2021-49
1637964358153.33
[]
docs.pingsafe.ai
An interface class for Frame constraints. More... #include <QGLViewer/constraint.h>: Frame::translate(Vec& T) { if (constraint()) constraint()->constrainTranslation(T, this); t += T; } Frame::rotate(Quaternion& Q) { if (constraint()) constraint()->constrainRotation(Q, this); q *= Q; }: // This Constraint enforces that the Frame cannot have a negative z world coordinate. class myConstraint : public Constraint { public: virtual void constrainTranslation(Vec& t, Frame * const fr) { // Express t in the world coordinate system. const Vec tWorld = fr->inverseTransformOf(t); if (fr->position().z + tWorld.z < 0.0) // check the new fr z coordinate t.z = fr->transformOf(-fr->position().z); // t.z is clamped so that next z position is 0.0 } };: myConstraint::constrainTranslation(Vec& v, Frame* const fr) { constraint1->constrainTranslation(v, fr); constraint2->constrainTranslation(v, fr); // and so on, with possible branches, tests, loops... } Definition at line 117 of file constraint.h. Virtual destructor. Empty. Definition at line 121 of file constraint.h. qglviewer::CameraConstraint, qglviewer::WorldConstraint, qglviewer::LocalConstraint, and qglviewer::AxisPlaneConstraint. Definition at line 142 of file constraint.h. qglviewer::CameraConstraint, qglviewer::WorldConstraint, qglviewer::LocalConstraint, and qglviewer::AxisPlaneConstraint. Definition at line 133 of file constraint.h.
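The "ConstraintsSolver" combination shown in the C++ snippet above — one constraint delegating to several sub-constraints in order — can be sketched in Python. This is only an analogy to clarify the pattern; the class and method names are invented and the real API is the C++ one documented here (translations are mutated lists, and the frame is reduced to its world z coordinate for brevity).

```python
# Hypothetical Python analogue of composing Constraint objects.
class ClampZ:
    """Forbid translations that would move the frame's world z below zero."""
    def constrain_translation(self, t, frame_z):
        if frame_z + t[2] < 0.0:   # check the new z coordinate
            t[2] = -frame_z        # clamp so the next z position is 0.0
        return t

class FreezeX:
    """Disallow any translation along x."""
    def constrain_translation(self, t, frame_z):
        t[0] = 0.0
        return t

class CompoundConstraint:
    """Apply sub-constraints in sequence, as myConstraint does in the C++ example."""
    def __init__(self, *constraints):
        self.constraints = constraints
    def constrain_translation(self, t, frame_z):
        for c in self.constraints:  # "and so on, with possible branches, tests, loops..."
            t = c.constrain_translation(t, frame_z)
        return t

solver = CompoundConstraint(ClampZ(), FreezeX())
print(solver.constrain_translation([1.0, 2.0, -5.0], 3.0))  # [0.0, 2.0, -3.0]
```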
http://docs.ros.org/en/jade/api/octovis/html/classqglviewer_1_1Constraint.html
2021-11-27T09:50:00
CC-MAIN-2021-49
1637964358153.33
[]
docs.ros.org
Hi. I originally had a Microsoft account. But after entering university, I created a new Microsoft account with my school account. So I deleted the original account. Although it was said that it would be completely deleted in 2021. After that, I was told to log in again due to an account error. So I all logged out, and I tried to log back in. There's a problem here. When I tried to log in again, it said no account. So when I logged in to Microsoft's official website, I could log in there. But why can't I log in to my laptop? I'm so embarrassed. I hope there is a solution. I'll be waiting for your answer. I captured the screen and attached it, can you see the image well? I hope you can see it. By the way, the images and writings I sent can only be seen by us, right?
https://docs.microsoft.com/en-us/answers/questions/158815/i-can39t-log-in.html
2021-11-27T10:01:50
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
Visual Class Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Provides rendering support in WPF, which includes hit testing, coordinate transformation, and bounding box calculations. public ref class Visual abstract : System::Windows::DependencyObject public abstract class Visual : System.Windows.DependencyObject type Visual = class inherit DependencyObject interface DUCE.IResource type Visual = class inherit DependencyObject Public MustInherit Class Visual Inherits DependencyObject - Inheritance - - Derived - Remarks. Visual class hierarchy .
https://docs.microsoft.com/en-us/dotnet/api/system.windows.media.visual?view=netframework-4.8
2021-11-27T09:44:13
CC-MAIN-2021-49
1637964358153.33
[array(['../media/visualclass01.png?view=netframework-4.8', 'Diagram of classes derived from the Visual object Diagram of classes derived from the Visual object'], dtype=object) ]
docs.microsoft.com
This release note applies to all 7.2 releases of Reporting Templates for Voice sections. This release does not contain any new features or functionality. No corrections or modifications were made between Release 7.1 or earlier releases and the initial 7.2 release. Top of Page This section provides the latest information on known issues and recommendations associated with this product. CCPulse+ mistakenly displays the Live Waiting metric in historical views in the Callback Queue report. These reported values return *Error* values because Live Waiting is a real-time metric. Live Waiting, unlike other current metrics, is calculated within CCPulse+ using its formula feature. You may safely ignore this message. (ER# 84990) The Not Rescheduled CB and Rescheduled CB metrics from the CB Operation Report may be calculated incorrectly if Scheduled Callback Requests with the same VCB_RECORD_HANDLE value arrive at the same virtual route point several times with different Connection IDs. If this occurs, the callback request is calculated as Rescheduled even though the agent has not yet accepted the callback request. (ER# 98315) limitations or restrictions for internationalization. Top of Page Additional information on Genesys Telecommunications Laboratories, Inc. is available on our Technical Support website. The following documentation also contains information about this software. Please consult the Deployment Guide first. Reporting 7 Getting Started Guide introduces the Reporting product set and describes Reporting component roles and interrelationships. Reporting Technical Reference Guide for the Genesys 7.2 Release. The Reporting 7 CCPulse+ Help file describes operation of the CCPulse+ application. Genesys 7 Migration Guide helps you plan and perform migration of Genesys 6.5, 7.0 or 7.1 software to release 7.2. Top of Page
https://docs.genesys.com/Special:Repository/vcb-report_templ72rn.html?id=7fb9758f-c8d3-42c6-b3fa-56d434d8877f
2019-06-16T04:43:30
CC-MAIN-2019-26
1560627997731.69
[]
docs.genesys.com
Integrate your Ghost publication with Setka Editor to design beautiful, UX-driven posts without having to code Setka Editor makes it possible for anyone to create standout editorial and branded content with smart design tools that make it easy to design a custom piece of content. Setka Editor users that have the Content Design Cloud plan can integrate structured layouts, art-directed posts or a magazine layout with Ghost in a few simple steps, using HTML code and the Ghost editor. Here’s how it works: Create a post in Setka Click the "Posts" tab in your account. Then, click the "Create post" button. Design your post for a highly engaging piece of visual content. Setka Editor tools make it intuitive to determine your text formats and fonts, choose colours, create dividers and more. Generate your HTML code Once your posts are ready to be integrated with your Ghost site, navigate to the Posts tab in the navigation menu and click the Export button on the post you wish to integrate with Ghost. To copy your post's HTML code, click on the Copy to clipboard button. Paste the HTML code into Ghost In a new post in the Ghost editor, paste your code into a new HTML block: Publish your post And that’s it! Ghost allows you to copy your Setka Editor post HTML code directly into the HTML block, rendering your final content with all the styles, visuals and designs you’ve created. Here's an example of the type of content you can create and integrate into a post: Making changes to a post If you decide to update the text or design of your post, make the changes using the Setka Editor's Design Cloud, and repaste all new HTML into the Ghost editor.
https://docs.ghost.org/integrations/setka/
2019-06-16T05:30:08
CC-MAIN-2019-26
1560627997731.69
[array(['https://docs.ghost.io/content/images/2019/03/setka-editor.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/01/Export-content-Setka.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/01/Ghost-cards.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/01/Visual-Post-Setka-Editor.png', None], dtype=object) ]
docs.ghost.org
Reporting Services Log Files and Sources A Reporting Services report server and report server environment support a variety of log destinations to record information about server operations and status. There are two basic categories of logging, execution logging and trace logging. Execution logging includes information about report execution statistics, auditing, performance diagnosis and optimization. Trace logging is information about error messages and general diagnostics. Applies to: Reporting Services SharePoint mode | Reporting Services Native mode The following table provides links to additional information about each log, including the log location and how to view the log contents. See Also Reporting Services Report Server (Native Mode) Errors and Events Reference (Reporting Services)
https://docs.microsoft.com/en-us/sql/reporting-services/report-server/reporting-services-log-files-and-sources?view=sql-server-2014
2019-06-16T04:41:17
CC-MAIN-2019-26
1560627997731.69
[]
docs.microsoft.com
Returns the natural logarithm of the specified number, which is the power that e must be raised to in order to equal the specified number. ln( number ) number: (Decimal) The number whose natural logarithm will be returned. Returns: Decimal. Example: ln(10) returns 2.30258
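The same function is available in most languages' standard libraries; for instance, Python's `math.log(x)` computes ln(x). A quick sketch (note that Appian's example shows the value truncated to 2.30258, while round-to-nearest at five decimals gives 2.30259):

```python
import math

# math.log with a single argument is the natural (base-e) logarithm.
print(f"{math.log(10):.5f}")  # 2.30259
print(math.log(math.e))       # ~1.0, since ln(e) = 1
```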
https://docs.appian.com/suite/help/19.3/fnc_mathematical_ln.html
2020-01-18T01:46:25
CC-MAIN-2020-05
1579250591431.4
[]
docs.appian.com
New employment services Information for Providers Government employment services are being transformed to deliver better services to job seekers and employers and a better system for providers. The new model is being trialled in two regions from July 2019 before being rolled out nationally from July 2022. Last modified on Wednesday 3 July 2019
https://docs.employment.gov.au/documents/new-employment-services-information-providers
2020-01-18T01:26:07
CC-MAIN-2020-05
1579250591431.4
[]
docs.employment.gov.au
To manage a virtual server's power options:

Reboot in Recovery - powers off and then restarts the VS in the recovery mode. For VSs with enabled encryption the temporary login is "root" and the password is "recovery". For VSs with password encryption disabled, the VS root password will be used to reboot in recovery. Windows virtual servers boot from the Linux-based recovery template in a recovery mode. You need to log in as admin via SSH or VNC console, then mount a Windows system disk manually. You cannot work with the "whole" disk (like mount -t ntfs-3g /dev/sdb1) while mounting and checking block devices inside the recovery image, as the Windows disk is split into partitions.

Boot from ISO - boots the VS from an ISO. You can boot virtual servers from your own ISOs or the ISOs that are uploaded and made publicly available by other users. As soon as you boot a VS from the ISO, OnApp cannot control any components (backups, networks, disks). The migration option is not available for VSs booted from ISO. The only available actions will be to start and stop a VS. Be aware that all the contents of the disk will be deleted.
https://docs.onapp.com/exportword?pageId=39755104
2020-01-18T00:25:33
CC-MAIN-2020-05
1579250591431.4
[]
docs.onapp.com
If you changed the keystore from the default WSO2 keystore, you need to configure the following files:

Change the wso2carbon keystore alias to the new keystore alias in the following files.

If you added a public certificate, update the Identity Provider (IDP) with the new certificate. This is needed as WSO2 IoT Server uses the JWT token for the servers to communicate with each other. Follow the steps given below to update the IDP.

If your public certificate is not in the .pem format, export it to the .pem format using the command given below:

openssl x509 -inform DER -outform PEM -in {YOUR_CERTIFICATE_NAME} -out server.crt.pem

Open the server.crt.pem you just generated and copy the content between BEGIN CERTIFICATE and END CERTIFICATE.

Open the <IOTS_HOME>/conf/identity/identity-providers/iot_default.xml file and replace the content that is under the <Certificate> property with the content you just copied.
https://docs.wso2.com/exportword?pageId=72429327
2020-01-18T01:02:02
CC-MAIN-2020-05
1579250591431.4
[]
docs.wso2.com
WSO2 products, such as Application Server, are shipped as binary packs, which contain a wide variety of features to support your enterprise requirements. You can easily download the binary distribution and get started with the product immediately. However, as developers, you can download the source code and build the product as shown below. The following topics describe this process: ...

Checking out the source

The source code of all WSO2 products is maintained in GitHub as a list of repositories. Each of the WSO2 products is built using the source code stored in several of these repositories. Given below is the WSO2 Git repository for the AS 5.3.0 release, which you can use to build a fresh product pack. Follow the steps given below to download the above repository for AS 5.3.0. ...

Clone the relevant Git repository to a folder of your choice. The location of the extracted source is referred to as <AS_SOURCE_HOME>. For example, to clone the product-as Git repository, use the following command:

git clone <product-as repository URL>

Navigate to the <AS_SOURCE_HOME> directory using the following command:

cd <AS_SOURCE_HOME>

Check out the relevant release tag (for example, v5.4.1). ...

Building the product ...
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=47532115&selectedPageVersions=16&selectedPageVersions=17
2020-01-18T00:12:32
CC-MAIN-2020-05
1579250591431.4
[]
docs.wso2.com
See Also: Interlocked

The Interlocked.Increment(Int32@) and Interlocked.Decrement(Int32@) methods increment or decrement a variable and store the resulting value in a single operation. On most computers, incrementing a variable is not an atomic operation; it requires the following steps:

1. Load the value from the variable into a register.
2. Increment the value in the register.
3. Store the new value back in the variable.

If you do not use Interlocked.Increment(Int32@) and Interlocked.Decrement(Int32@), a thread can be preempted after the first two steps, and another thread can then change the value before the first thread stores its result.

The Interlocked.Exchange(Int32@, int) method atomically exchanges the values of the specified variables. The Interlocked.CompareExchange(Int32@, int, int) method combines two operations: comparing two values and storing a third value in one of the variables, based on the outcome of the comparison. The compare and exchange operations are performed as an atomic operation.
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Threading.Interlocked
2020-01-18T01:44:43
CC-MAIN-2020-05
1579250591431.4
[]
docs.go-mono.com
Monitor Active Directory

The Active Directory (AD) database (also known as the NT Directory Service (NTDS) database) is the central repository for user, computer, network, device, and security objects in an AD domain or forest. You can use Splunk Enterprise to record changes to AD, such as the addition or removal of a user, host, or domain controller (DC). If you have Splunk Cloud, you must use the Splunk universal forwarder to collect Active Directory data. You can configure AD monitoring to watch for changes to your Active Directory forest and collect user and machine metadata. You can use this feature combined with dynamic list lookups to decorate or modify events with any information available in AD. After you have configured AD monitoring, you can search and report on the collected data.

Why monitor Active Directory?

If you maintain the integrity, security, and health of your Active Directory, then what happens with it day to day is a concern. Splunk Enterprise lets you see those changes as they occur.

What do you need to monitor Active Directory?

The following table lists the permissions you must have to monitor an Active Directory schema.

Technical considerations for monitoring Active Directory

For best results with monitoring AD, read and understand the following:

- The AD monitor is only available on Splunk Enterprise on Windows.
- While you cannot monitor AD changes from a *nix version of Splunk Enterprise, you can forward AD data from a Windows version of Splunk Enterprise. For information about the user that Splunk Enterprise should run as at installation time, see Choose the user Splunk should run as in the Installation Manual.

How the AD monitor interacts with AD

When you set up an AD monitoring input, the input connects to an AD domain controller to authenticate and, if necessary, perform any security ID (SID) translations while it gathers the AD schema or change events.
The AD monitor uses the following logic to interact with Active Directory after you set it up:

- If you specify a domain controller when you define the input (either with the targetDc setting in inputs.conf or the "Target domain controller" field in Splunk Web), the input uses that domain controller.
- If the input makes an LDAP query and receives a referral, it does not chase this referral to complete the query. An LDAP referral represents a problem with your LDAP configuration, and you or your designated administrators should determine and fix the configuration problem within AD.

Configure Active Directory monitoring

You can configure AD monitoring either in Splunk Web or by editing configuration files. More options, such as the ability to configure monitors for multiple DCs, are available when using configuration files.

Configure AD monitoring with Splunk Web

Go to the Add New page. You can get there by two routes:

- Splunk Home
- Splunk Settings

By Splunk Settings:

- Click Settings in the upper right corner of Splunk Web.
- Click Data Inputs.
- Click Active Directory monitoring.
- Click New to add an input.

By Splunk Home:

- Click the Add Data link in Splunk Home.
- Click Monitor to monitor Active Directory on the local Windows machine.

Select the input source

- In the left pane, select the Active Directory node you want the input to begin monitoring from. Use the Lightweight Directory Access Protocol (LDAP) format, for example, DC=Splunk-Docs,DC=com.
- (Optional) Select any additional options for the input.

Specify input settings

The Input Settings page lets you specify application context, default host value, and index. All of these parameters are optional. Note: Host only sets the host field in the resulting events. It does not tell the input to look for the Active Directory node on a specific host.

Configure AD monitoring with configuration files

The inputs.conf configuration file controls Active Directory monitoring configurations. Edit copies of inputs.conf in the %SPLUNK_HOME%\etc\system\local directory.
If you edit them in the default directory, your changes are overwritten when you upgrade. For more information about configuration file precedence, see "Configuration file precedence" in this manual.

- Open %SPLUNK_HOME%\etc\system\local\inputs.conf for editing. You might need to create this file if it does not exist.

Example AD monitoring configurations

The following are examples of how to use inputs.conf to monitor desired portions of your AD network. To index data from the top of the AD directory:

#Gather all AD data that this server can see
[admon://NearestDC]
targetDc =
startingNode =

When the Splunk AD monitoring utility runs, it gathers AD change events, which are then indexed by Splunk software. To view these events as they arrive, use the Search app. There are several types of AD change events that Splunk software can index. Examples of these events follow. Some of the content of these events has been obscured or altered for publication purposes.

Update event

When an AD object changes, Splunk software generates an update event.

Sync event

Splunk software tries to capture a baseline of AD metadata when it starts. It generates event type admonEventType=Sync, which represents the instance of one AD object and all its field values.
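To monitor only part of the directory tree instead of everything the server can see, a stanza can set both fields. The host name and OU below are hypothetical placeholders (the DC=Splunk-Docs,DC=com suffix follows the LDAP example used earlier on this page), not Splunk defaults:

```
# Hypothetical: watch a single OU via a specific domain controller
[admon://ComputersOU]
targetDc = dc1.splunk-docs.com
startingNode = OU=Computers,DC=Splunk-Docs,DC=com
```

With startingNode set, the monitor gathers change events only from that node down, which reduces indexing volume in large forests.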
https://docs.splunk.com/Documentation/Splunk/7.0.8/Data/MonitorActiveDirectory
2020-01-18T00:29:08
CC-MAIN-2020-05
1579250591431.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Due to various reasons, the devices registered with WSO2 IoT Server might not be able to communicate with the server continuously. When the device is not actively communicating with the server, you need to know of it to take necessary actions, such as checking if the device has any malfunctions and repairing it, or checking if the device was stolen.

To get a clear understanding, let's look at how this works in WSO2 IoT Server. If the device and the server are actively communicating, the device is shown as active. In the device management console, click View under Devices. You can see all your registered devices and their device statuses.

If the server is unable to communicate with the device within a defined time period, the device is shown as unreachable. In the device management console, click View under Devices. You can see all your registered devices and their device statuses.

If the server is still unable to communicate with the device after a defined time period, the device is shown as inactive. In the device management console, click View under Devices. You can see all your registered devices and their device statuses.

The device's status has the following lifecycle:

The device monitoring task is not applicable for all IoT devices. Therefore, you can choose to enable or disable it for your device type. Let's take a look at how you can configure WSO2 IoT Server and your device type to monitor the device status.

Open the <IOTS_HOME>/conf/cdm-config.xml file and make sure the DeviceStatusTaskConfig is enabled.
This configuration is enabled by default. If the DeviceStatusTaskConfig is enabled (or enabled on a node that is in a clustered setup), it will run the status monitoring task in the server. If the configuration is disabled, the server will not monitor the status of the devices.

<DeviceStatusTaskConfig>
	<Enable>true</Enable>
</DeviceStatusTaskConfig>

Configure the device type to go into the unreachable state and then to the inactive state after a specified time. Navigate to the <IOTS_HOME>/repository/deployment/server/devicetype directory, open the <DEVICE_TYPE>.xml file, and configure the fields given below.

The default configuration in the android.xml file is shown below:

<DeviceStatusTaskConfig>
    <RequireStatusMonitoring>true</RequireStatusMonitoring>
    <Frequency>300</Frequency>
    <IdleTimeToMarkUnreachable>300</IdleTimeToMarkUnreachable>
    <IdleTimeToMarkInactive>600</IdleTimeToMarkInactive>
</DeviceStatusTaskConfig>

In addition to the above configurations, for the device monitoring task to actively function, you need to have pending operations on the device end. When there are pending operations, the device communicates with the server to send the operation details, and through this the server keeps track that the device is active.
https://docs.wso2.com/exportword?pageId=72429328
2020-01-18T01:36:13
CC-MAIN-2020-05
1579250591431.4
[]
docs.wso2.com
PayPal Integration

In order to configure PayPal settings, open the appsettings.json file in the *.Web.Host project and fill in the fields below.

Note that the current implementation of PayPal doesn't support recurring payments. So, if a tenant wants to pay via PayPal, ASP.NET Zero will not charge the tenant's account automatically when the tenant's subscription expires. In that case, the tenant needs to pay the subscription price on every subscription cycle by entering it into the system and clicking the extend button on the subscription page.
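The exact field names live in the project's own appsettings.json; the fragment below is only a hypothetical sketch of the shape such a PayPal section typically takes — every key and value here is a placeholder, not ASP.NET Zero's actual schema:

```
"Payment": {
  "PayPal": {
    "IsActive": "true",
    "Environment": "sandbox",
    "ClientId": "<your-paypal-client-id>",
    "ClientSecret": "<your-paypal-client-secret>"
  }
}
```

Whatever the real field names are, the client ID and secret come from the PayPal developer dashboard, and the environment switches between sandbox and live endpoints.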
https://docs.aspnetzero.com/en/aspnet-core-angular/latest/Features-Angular-Subscription-PayPal-Integration
2020-01-18T00:39:09
CC-MAIN-2020-05
1579250591431.4
[]
docs.aspnetzero.com
Verifying the App Visibility agent for .NET upgrade

You can verify that the App Visibility agent for .NET upgrade was successful, and that the agent for .NET was invoked, by reviewing the log files created and updated during upgrade. For assistance with issues that you cannot solve, contact BMC Customer Support.

To review the installation logs

- Open the agentinstaller_install_log.txt file, located in the %temp% folder.
- In the log, verify whether any error messages were reported. If there are no error messages, the upgrade was successful.
- In the TrueSight console, select Administration > App Visibility Agents from the navigation pane and confirm that the agent is online.

Where to go from here

Access the TrueSight console. For more information, see Accessing and navigating the TrueSight console. Add App Visibility to the TrueSight console, if you haven't already done so.

Related topics

Upgrading App Visibility Manager components
Installing App Visibility Manager components
Setting up applications for monitoring
Troubleshooting App Visibility Manager
Known and corrected issues
https://docs.bmc.com/docs/TSOperations/110/verifying-the-app-visibility-agent-for-net-upgrade-722060518.html
2020-01-18T01:54:30
CC-MAIN-2020-05
1579250591431.4
[]
docs.bmc.com
This wiki documents Piksi v2.3.1 which was discontinued April 1st, 2017. Visit support.swiftnav.com for newer products including Piksi Multi. Piksi Integration Tutorial This is a tutorial on how to integrate Piksi with a host system, receive SwiftNav Binary Protocol (SBP) messages from Piksi and decode position outputs. We use an STM32F4 DISCOVERY Board as an example of a host system. We use the official STM toolchain and libraries to keep things as simple as possible. We use the CooCox IDE on Windows - you must use this if you wish to build the project in the Git repository without making any changes. If anything in this guide is incorrect or unclear, please email our mailing list. Contents Before you start Please review the SwiftNav Binary Protocol documentation to familiarize yourself with the protocol. We also recommend a full run through of the Piksi User Getting Started Guide before starting work on integration. The Piksi Console will allow you to set Piksi in Simulation Mode, which will cause it to send out SBP messages for a simulated position solution. We recommend using Simulation Mode while working on SBP integration, as this will be far easier than sitting outside with your Piksi so you can get a real position fix. See the following for more info: Explanation of tutorial code The code run on the STM32F4 DISCOVERY Board in this tutorial can be found at: The SBP tutorial code consists of main.c and tutorial_implementation.c/tutorial_implementation.h. The source files tutorial_implementation.c/tutorial_implementation.h contain implementation specific functions for setting up and interacting with peripherals on the STM32F4 DISCOVERY Board microcontroller. The source file main.c contains the calls and instantiations of the SBP submodule that you will need to replicate in your code. These are documented in detail at: Receiving bytes from Piksi Bytes received from Piksi via USART1 are stored in a FIFO structure. 
This allows the host to receive multiple bytes from Piksi in between the SBP submodule processing them, without losing bytes. The following code is an interrupt that is called when the host's USART receives a byte from Piksi (in tutorial_implementation.c). It appends the received byte to the FIFO, toggles the LEDs once every 250 bytes received, and clears the interrupt. void USART1_IRQHandler(void) { fifo_write(USART1->DR); DO_EVERY(250, leds_toggle(); ); USART1->SR &= ~(USART_FLAG_RXNE); } Instantiating SBP structs An sbp_state_t must be statically instantiated to keep the state of the SBP message parser. In this example we statically instantiate a matching message struct (sbp_foo_t) for each message type we want to receive from Piksi and decode. Each message type you wish to receive and decode must also have its own unique, statically allocated sbp_msg_callbacks_node_t. The sbp_msg_callbacks_node_t structs associate a specific message ID with a callback function - the SBP message parser searches through these to find the correct callback function for a specific message ID. /* * State of the SBP message parser. * Must be statically allocated. */ sbp_state_t sbp_state; /* SBP structs that messages from Piksi will feed. */ msg_pos_llh_t pos_llh; msg_baseline_ned_t baseline_ned; msg_vel_ned_t vel_ned; msg_dops_t dops; msg_gps_time_t gps_time; /* * SBP callback nodes must be statically allocated. Each message ID / callback * pair must have a unique sbp_msg_callbacks_node_t associated with it. */ sbp_msg_callbacks_node_t pos_llh_node; sbp_msg_callbacks_node_t baseline_ned_node; sbp_msg_callbacks_node_t vel_ned_node; sbp_msg_callbacks_node_t dops_node; sbp_msg_callbacks_node_t gps_time_node; Defining SBP callback functions Every SBP message you wish to receive and decode must have a callback function associated with it. 
When the SBP message parser parses a valid SBP message, it searches through its list of sbp_msg_callback_node_t's to find a callback function associated with the message's ID. It then passes the data associated with the message to the callback function. The callback function then interprets the data. In the tutorial code, the callback functions are simple - they just cast the data from the SBP message to the associated message struct type, and assign it to the instantiated message struct. They can also implement more complex functionality, such as triggering events when a particular message is received. /* * Callback functions to interpret SBP messages. * Every message ID has a callback associated with it to * receive and interpret the message payload. */ void sbp_pos_llh_callback(u16 sender_id, u8 len, u8 msg[], void *context) { pos_llh = *(msg_pos_llh_t *)msg; } void sbp_baseline_ned_callback(u16 sender_id, u8 len, u8 msg[], void *context) { baseline_ned = *(msg_baseline_ned_t *)msg; } void sbp_vel_ned_callback(u16 sender_id, u8 len, u8 msg[], void *context) { vel_ned = *(msg_vel_ned_t *)msg; } void sbp_dops_callback(u16 sender_id, u8 len, u8 msg[], void *context) { dops = *(msg_dops_t *)msg; } void sbp_gps_time_callback(u16 sender_id, u8 len, u8 msg[], void *context) { gps_time = *(msg_gps_time_t *)msg; } Initializing SBP parser and registering callbacks The SBP parser must be initialized via a call to sbp_state_init before any SBP messages can be parsed. Each message ID / callback pair must be associated with an sbp_msg_callback_node_t and registered with the SBP parser via a call to sbp_register_callback. When the SBP parser successfully parses an SBP message it will search through the list of sbp_msg_callback_node_t's that have been registered with it to find the correct callback for the message's ID. void sbp_setup(void) { /* SBP parser state must be initialized before sbp_process is called. 
*/ sbp_state_init(&sbp_state); /* Register a node and callback, and associate them with a specific message ID. */ sbp_register_callback(&sbp_state, SBP_MSG_GPS_TIME, &sbp_gps_time_callback, NULL, &gps_time_node); sbp_register_callback(&sbp_state, SBP_MSG_POS_LLH, &sbp_pos_llh_callback, NULL, &pos_llh_node); sbp_register_callback(&sbp_state, SBP_MSG_BASELINE_NED, &sbp_baseline_ned_callback, NULL, &baseline_ned_node); sbp_register_callback(&sbp_state, SBP_MSG_VEL_NED, &sbp_vel_ned_callback, NULL, &vel_ned_node); sbp_register_callback(&sbp_state, SBP_MSG_DOPS, &sbp_dops_callback, NULL, &dops_node); } Parsing SBP messages The function sbp_process must be called periodically in your program to parse the received bytes from Piksi. When it parses a valid SBP message and a callback function has been registered for this message's ID, it will call this function, passing it the data from the message. A returned value of less than zero indicates there was an error. while (1) { s8 ret = sbp_process(&sbp_state, &fifo_read); if (ret < 0) printf("sbp_process error: %d\n", (int)ret); } sbp_process must be passed a pointer to a function that provides access to the received bytes from Piksi, that conforms to the definition: - u32 get_bytes(u8 *buff, u32 n, void *context) Below is the function used in the tutorial, which reads bytes out of the FIFO: u32 fifo_read(u8 *buff, u32 n, void *context) { int i; for (i=0; i<n; i++) if (!fifo_read_char((char *)(buff + i))) break; return i; } Installing tools To build the code used in this tutorial, you must install the CooCox IDE, ARM-GCC, ST-Link, and configure CooCox IDE to use ARM-GCC. CooCox IDE - Download and install the CooCox IDE. ARM-GCC ST-Link Configure CooCox IDE to use ARM-GCC - After launching CooCox IDE, select Project->Select Toolchain Path. 
- Enter the path to the ARM-GCC binary, for example: C:\Program Files\GNU Tools ARM Embedded\4.8 2014q1\bin - Make sure to check what the actual path is on your system rather than just copying the above. Setting up the hardware The following photo shows the test setup for the integration tutorial. One of the pigtail Picoblade cables that ships with the Piksi RTK Kit is used to connect to one of the STM32F4 Discovery Board's USARTs. The wiring for the Piksi USART to the STM32F4 Discovery Board USART is as follows: Two USB cables, a micro and a mini, are required to connect to the STM32F4 Discovery Board. A micro-USB cable is connected to Piksi. All three cables should be plugged into your PC. Running the tutorial Now that you have the hardware set up, we can run the tutorial. Clone the tutorial git repository. $ git clone You'll also need to clone the libsbp submodule and update in order to build the project. $ cd sbp_tutorial $ git submodule init $ git submodule update Open the CooCox IDE project file in the sbp_tutorial folder and build the firmware: - Project -> Build Run the project via the Debug tab: - Debug -> Debug - Debug -> Run You should now see output being printed out through the Semihosting tab in the CooCox IDE, with all the fields being zero, as shown below. Set Piksi into simulation mode, as outlined in the instructions in the User's Guide: Now simulated output should be printed out in the Semihosting tab. If the STM32F4 Discovery Board is receiving data from Piksi over the UART, the four colored LEDs should be blinking.
https://docs.swiftnav.com/wiki/Piksi_Integration_Tutorial
2020-01-17T23:54:23
CC-MAIN-2020-05
1579250591431.4
[]
docs.swiftnav.com
Manage Data Sources

Create a Data Source

To create a new data source, go to Audience Data > Data Sources > Add New and complete the steps for each section described here. Administrator permissions are required to create a data source. See Data Source Settings and Menu Options for descriptions of these different controls.

Data Source Details

To complete the Data Source Details section:

- Name the data source.
- (Optional) Describe the data source. A concise description helps you define the role or purpose of the data source.
- Provide an integration code. Generally, integration codes are optional. They are required when you want to:
  - Use the Experience Cloud ID service.
  - Work with Profile Merge Rules.
- Choose an ID Type. ID Type options include:
  - Cookie
  - Device Advertising ID
  - Cross-device (Required to create a Profile Merge Rule). Note, for some customers, this selection exposes the ID Definition options.
- Choose an ID Definition option. Options include:
  - Person
  - Household

Data Export Controls

Data Export Controls are optional classification rules you can apply to a data source and destination. They prevent you from sending data to a destination when that action violates a data privacy or use agreement. Skip this section if you do not use Data Export Controls.

Data Source Settings

These settings determine how a data source is identified, used, and shared. You can also enable error reporting for inbound data files. To complete the Data Source Settings section:

- Select a Data Source Setting check box to apply an option to your data source.
- Click Save.

Delete a Data Source

Delete a data source that you no longer need. Please note the following restrictions:

- You cannot delete an Active Audience or Data Source Synced Trait.
- For customers using Adobe Analytics: Audience Manager does not allow you to delete data sources created automatically from your Analytics report suites. Use the Core Service to unmap these data sources.
- Click Audience Data > Data Sources . - Select the check box next to one or more data sources. You can use the Search box to locate the desired data sources if you have a long list. - Click , then confirm the deletion.
https://docs.adobe.com/content/help/fr-FR/audience-manager/user-guide/features/data-sources/manage-datasources.html
2020-01-18T01:19:14
CC-MAIN-2020-05
1579250591431.4
[]
docs.adobe.com
Wire Drawing¶ Drawing wires is an important aspect of creating diagrams in CertSAFE. Wires connect the outputs of components to the inputs of other components to visually describe the flow of data in a computation. Collectively, inputs and outputs of components in a diagram are called pins. You can draw wire segments to connect pins together. A wire segment is a straight horizontal or vertical line segment that conducts values from one grid location to another. Multiple wire segments can be connected to each other to create paths with corners and branches. A location where three or more pins or wire segments come together is called a junction and is drawn with a thick black dot in CertSAFE. A group of pins and wire segments that are all connected together is called a wire network. The diagram below illustrates these terms: Wire drawing mode¶ When you click on a pin or the end of a wire segment, CertSAFE enters wire drawing mode, indicated by a small yellow dot that follows the mouse cursor around in the diagram. In wire drawing mode, clicking on locations in the diagram will draw wires. Each click draws a single wire segment. You can also start wire drawing mode from any point in your diagram by right-clicking on the desired start location and choosing Start Wire Drawing Here from the menu. In wire drawing mode, CertSAFE will gently snap the end of the wire being drawn to align with existing pins and wire segments in the diagram. This helps when trying to line up different parts of a diagram. A light blue circle is drawn around the end of the wire to indicate when snapping is occurring: CertSAFE exits wire drawing mode if you do any of the following: * Connect the wire you are drawing to an existing pin or wire segment. * Click at the same grid position twice in a row. * Right-click anywhere in the diagram. * Use the Undo command. 
Editing existing wires¶ In wire drawing mode, drawing back over the wire segment currently being drawn from will shorten or delete that wire segment: Outside of wire drawing mode, wire segments can be selected in the same ways as components, by clicking on them or dragging a selection box. Wire segments can be deleted, cut, and copied just like components as well. Guess What I Mean mode¶ If you try to move a group of wires or components connected to wires, CertSAFE will perform one of two behaviors. The default behavior is called Guess What I Mean mode, because CertSAFE will attempt to guess what action you are trying to perform and change the diagram appropriately by stretching or moving wire segments: The rules CertSAFE uses to guess what you are trying to do are fairly simple by design, to avoid causing the interface to feel unpredictable. In particular, CertSAFE makes no attempt to route wires around components. If you drag at a diagonal angle instead of horizontally or vertically, CertSAFE will disconnect all the wires you are dragging; this gives a way to break connections if CertSAFE is being too aggressive with keeping wires connected. Sometimes Guess What I Mean mode is still too much. 
There is a toggle button at the bottom-left of the CertSAFE window with a magic wand icon that controls whether Guess What I Mean mode is enabled: If you turn this option off, CertSAFE will not do any wire stretching no matter how you move diagram elements: Connecting components without wire segments¶ It is also possible to connect pins together just by positioning components so that their pins line up, without drawing wire segments between them: A useful trick that works when Guess What I Mean mode is enabled is to move a component so that its pins are connected the way you want, then move it again to the position you’d like, allowing CertSAFE to fill in the connecting wire segments: Fixing common problems¶ A common mistake for those new to CertSAFE is to try to drop a component on top of a wire to connect it: This will actually result in a wire segment connecting the component’s input to its output, causing a “too many producers” error. If this happens, you can quickly fix the issue by selecting the wire segment on top of the component and deleting it: Deleting entire networks¶ Sometimes you have made a mess out of your wires and just want to start that area over. Double-clicking on a wire segment will select the entire wire network connected to that segment. Pressing Delete will then delete that entire network, leaving only the pins it connected:
https://docs.certsafe.com/reference-modeling/wire-drawing.html
2020-01-18T01:44:59
CC-MAIN-2020-05
1579250591431.4
[array(['../_images/Parts-of-a-diagram.png', 'ALT'], dtype=object) array(['../_images/Wire-drawing-1.png', 'ALT'], dtype=object) array(['../_images/Wire-drawing-2.png', 'ALT'], dtype=object) array(['../_images/Wire-drawing-3.png', 'ALT'], dtype=object) array(['../_images/Wire-drawing-4.png', 'ALT'], dtype=object) array(['../_images/Wire-drawing-5.png', 'ALT'], dtype=object) array(['../_images/Snapping.png', 'ALT'], dtype=object) array(['../_images/Shortening-wire.png', 'ALT'], dtype=object) array(['../_images/Guess-What-I-Mean-mode-on.png', 'ALT'], dtype=object) array(['../_images/Guess-What-I-Mean-button.png', 'ALT'], dtype=object) array(['../_images/Guess-What-I-Mean-mode-off.png', 'ALT'], dtype=object) array(['../_images/Adjacent-pins.png', 'ALT'], dtype=object) array(['../_images/Autofill-wires-1.png', 'ALT'], dtype=object) array(['../_images/Autofill-wires-2.png', 'ALT'], dtype=object) array(['../_images/Autofill-wires-3.png', 'ALT'], dtype=object) array(['../_images/Dropping-component-on-wire-1.png', 'ALT'], dtype=object) array(['../_images/Dropping-component-on-wire-2.png', 'ALT'], dtype=object) array(['../_images/Dropping-component-on-wire-3.png', 'ALT'], dtype=object) array(['../_images/Dropping-component-on-wire-4.png', 'ALT'], dtype=object) array(['../_images/Deleting-wire-network-12.png', 'ALT'], dtype=object) array(['../_images/Deleting-wire-network-3.png', 'ALT'], dtype=object)]
docs.certsafe.com
Activity Widgets

Widgets provide a quick and clear overview of activity in your audit source. With live widgets, you can check that everything is going well and that activity stays within a safe level. Unlike detailed reports and search queries, widgets give you a bird's-eye view of your environment.

At Cygna Labs, we developed widgets for each system individually to ensure you get the most in-demand activity metrics for your critical assets. You can always review activity widgets under a corresponding tile or add the most important charts right to the source landing page.

To learn how to add a widget to a landing page of your source, go to Enabling Widgets.
https://docs.cygnalabs.com/Features/Widgets.html
2020-01-18T00:33:22
CC-MAIN-2020-05
1579250591431.4
[]
docs.cygnalabs.com
With the InfluxDB query management features, users are able to identify currently-running queries, kill queries that are overloading their system, and prevent and halt the execution of inefficient queries with several configuration settings.

List currently-running queries with SHOW QUERIES

SHOW QUERIES lists the query ID, query text, relevant database, and duration of all currently-running queries on your InfluxDB instance.

Syntax

SHOW QUERIES

Example

> SHOW QUERIES
qid   query                              database   duration
37    SHOW QUERIES                                  100368u
36    SELECT mean(myfield) FROM mymeas   mydb       3s

Explanation of the output

qid: The ID number of the query. Use this value with KILL QUERY.
query: The query text.
database: The database targeted by the query.
duration: The length of time that the query has been running. See the Query Language Reference for an explanation of time units in InfluxDB databases.

Stop currently-running queries with KILL QUERY

KILL QUERY tells InfluxDB to stop running the relevant query.

Syntax

Where qid is the query ID, displayed in the SHOW QUERIES output:

KILL QUERY <qid>

InfluxDB Enterprise clusters: To kill queries on a cluster, you need to specify the query ID (qid) and the TCP host (for example, myhost:8088), available in the SHOW QUERIES output.

KILL QUERY <qid> ON "<host>"

A successful KILL QUERY query returns no results.

Examples

-- kill query with qid of 36 on the local host
> KILL QUERY 36
>

-- kill query on InfluxDB Enterprise cluster
> KILL QUERY 53 ON "myhost:8088"
>

Configuration settings for query management

The following configuration settings are in the coordinator section of the configuration file.

max-concurrent-queries

The maximum number of running queries allowed on your instance. The default setting (0) allows for an unlimited number of queries.
If you exceed max-concurrent-queries, InfluxDB does not execute the query and outputs the following error:

ERR: max concurrent queries reached

query-timeout

The maximum time for which a query can run on your instance before InfluxDB kills the query. The default setting ("0") allows queries to run with no time restrictions. This setting is a duration literal.

If your query exceeds the query timeout, InfluxDB kills the query and outputs the following error:

ERR: query timeout reached

log-queries-after

The maximum time a query can run before InfluxDB logs it with a Detected slow query message. With the default setting ("0"), InfluxDB never logs slow queries. This setting is a duration literal.

Example log output with log-queries-after set to "1s":

[query] 2016/04/28 14:11:31 Detected slow query: SELECT mean(usage_idle) FROM cpu WHERE time >= 0 GROUP BY time(20s) (qid: 3, database: telegraf, threshold: 1s)

qid is the ID number of the query. Use this value with KILL QUERY.

The default location for the log output file is /var/log/influxdb/influxdb.log. However, on systems that use systemd (most modern Linux distributions), those logs are sent to journalctl. You should be able to view the InfluxDB logs using the following command:

journalctl -u influxdb

max-select-point

The maximum number of points that a SELECT statement can process. The default setting (0) allows the SELECT statement to process an unlimited number of points.

If your query exceeds max-select-point, InfluxDB kills the query and outputs the following error:

ERR: max number of points reached

max-select-series

The maximum number of series that a SELECT statement can process. The default setting (0) allows the SELECT statement to process an unlimited number of series.
If your query exceeds max-select-series, InfluxDB does not execute the query and outputs the following error:

ERR: max select series count exceeded: <query_series_count> series

max-select-buckets

The maximum number of GROUP BY time() buckets that a query can process. The default setting (0) allows a query to process an unlimited number of buckets.

If your query exceeds max-select-buckets, InfluxDB does not execute the query and outputs the following error:

ERR: max select bucket count exceeded: <query_bucket_count> buckets
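The six settings described above all live together in the coordinator section of the configuration file. A minimal sketch of that section, shown with the default values from this article (the file path, typically /etc/influxdb/influxdb.conf, varies by installation):

```toml
[coordinator]
  # Maximum number of running queries (0 = unlimited)
  max-concurrent-queries = 0

  # Kill queries running longer than this duration ("0s" = no limit)
  query-timeout = "0s"

  # Log a "Detected slow query" message past this threshold ("0s" = never)
  log-queries-after = "0s"

  # Per-SELECT limits on points, series, and GROUP BY time() buckets
  # (0 = unlimited for each)
  max-select-point = 0
  max-select-series = 0
  max-select-buckets = 0
```

Restart the influxd service after editing the file so the new limits take effect.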
https://docs.influxdata.com/influxdb/v1.7/troubleshooting/query_management/
2020-01-18T01:11:10
CC-MAIN-2020-05
1579250591431.4
[]
docs.influxdata.com
Glossary (Industry 8.1)

7/8/2014

Review the glossary of terms used in the Windows Embedded 8.1 Industry (Industry 8.1) documentation.

answer file: A file that automates Windows Setup. This file enables the configuration of Windows settings, the addition and removal of components, and many Windows Setup tasks, such as disk configuration.

assigned access: A feature that allows a system administrator to manage the user's experience by limiting application entry points exposed to the user of the device.

configuration pass: A phase of Windows installation. Different parts of the Windows operating system are installed in different configuration passes. You can specify Windows unattended installation settings to be applied in one or more configuration passes.

Custom Logon: A feature that suppresses Windows 8 UI elements related to the system Welcome and shutdown screens.

device: The hardware on which Windows Embedded runs.

Dialog Filter: A feature that automatically controls which dialog boxes are displayed on the screen.

ELM: A snap-in to the Microsoft Management Console (MMC) that you can use to configure lockdown features. See also Embedded Lockdown Manager.

Embedded Lockdown Manager: A snap-in to the Microsoft Management Console (MMC) that you can use to configure lockdown features. See also ELM.

flick: A quick, straight stroke of a finger or pen on a screen. A flick is recognized as a gesture, and interpreted as a navigation or an editing command.

gesture: A quick movement of a finger or pen on a screen that the computer interprets as a command, rather than as a mouse movement, writing, or drawing.

Gesture Filter: A feature used to disable the new edge gestures introduced in Windows 8.

input method editor: A tool that lets you enter complex characters and symbols, such as those used in East Asian written languages, using a standard keyboard.

key combination: Any combination of keystrokes that can be used to perform a task that would otherwise require a mouse or other pointing device.

Keyboard Filter: A feature used to suppress undesirable keystrokes or key combinations.

language identifier: A standard international numeric abbreviation for a country or geographical region. A language identifier is a 16-bit value that consists of a primary language identifier and a secondary language identifier.

language pack: A collection of binaries that can be installed on top of the core product and enables users to select a preferred language so that the user interface and Help files appear in that preferred language.

log file: A file that stores messages generated by an application, service, or operating system. These messages are used to track the operations performed. Log files are usually plain text (ASCII) files and often have a .log extension.

OS image: A copy of the entire state of an operating system stored in a non-volatile form, such as a file.

pan: A multi-touch gesture that consists of one or two fingers moving in the same direction, parallel to each other.

pinch: A zoom-out gesture represented by two fingers with at least one of them moving towards the other finger at any angle, within an acceptable tolerance.

protected volume: A volume that has been configured to prevent data from being deleted or overwritten accidentally.

registry key: An identifier for a record or group of records in the registry.

shell: The command interpreter that is used to pass commands to the operating system.

Shell Launcher: A feature used to replace the default Windows 8 shell with a custom shell.

stretch: A zoom-in gesture represented by two fingers with at least one of them moving away from the other at any angle, within an acceptable tolerance.

tap: A gesture represented by placing a finger or stylus on the screen and then lifting it up.

target device: A device selected to receive software updates or configuration changes.

toast notification: A transient message to the user that contains relevant, time-sensitive information and provides quick access to the subject of that content in an app. It can appear whether you are in another app, the Start screen, or on the desktop. Toasts, also called banners, are an optional part of the app experience and are intended to be used only when your app is not the active foreground app. A toast notification can contain text and images, but secondary actions such as buttons are not supported.

Toast Notification Filter: A feature used to suppress toast notifications.

Unbranded Boot: A feature used to suppress Windows 8 elements that appear when the operating system starts or resumes.

Unified Write Filter: A sector-based write filter that you can use to protect your storage media by intercepting all write attempts to a protected volume and redirecting those write attempts to a virtual overlay.

USB Filter: A USB port and device base filter that you can use to allow trusted USB devices to connect to a system.

volume: An area of storage on a hard disk. A volume is formatted by using a file system, such as NTFS, and has a drive letter assigned to it. A single hard disk can have multiple volumes. Some volumes can span multiple hard disks.

.wim file: A Windows image file, which can contain one or more Windows images. See also Windows image file.

Windows 8 Application Launcher: A feature used to start a Windows 8 app immediately after a user signs in to a device and to restart the app when the app exits.

Windows image file: A file that can contain one or more Windows images. See also .wim file.

WMI provider: In Windows Management Instrumentation (WMI), a set of interfaces that provide programmatic access to management information in a system. IIS 7.0 implements a WMI provider in the namespace called WebAdministration to provide programmatic access to configuration and system settings.
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/dn602645(v=winembedded.82)?redirectedfrom=MSDN
2020-01-18T01:00:22
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
All content with label 2lcache+async+development+events+grid+hibernate_search+infinispan+installation+out_of_memory+repeatable_read+setup+tx+xsd. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, intro, pojo_cache, archetype, lock_striping, jbossas, nexus, guide, schema, listener, cache, s3, amazon, test, jcache, api, ehcache, maven, documentation, roadmap, wcm, youtube, userguide, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, clustering, eviction, gridfs, fine_grained, concurrency, examples, jboss_cache, index, configuration, hash_function, batch, buddy_replication, loader, pojo, write_through, cloud, mvcc, tutorial, notification, presentation, jbosscache3x, read_committed, xml, distribution, started, jira, cachestore, data_grid, cacheloader, cluster, br, websocket, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, non-blocking, migration, jpa, filesystem, article, user_guide, gui_demo, eventing, client_server, infinispan_user_guide, standalone, webdav, hotrod, snapshot, docs, consistent_hash, batching, store, whitepaper, jta, faq, as5, jgroups, locking, rest, hot_rod more » ( - 2lcache, - async, - development, - events, - grid, - hibernate_search, - infinispan, - installation, - out_of_memory, - repeatable_read, - setup, - tx, - xsd )
https://docs.jboss.org/author/label/2lcache+async+development+events+grid+hibernate_search+infinispan+installation+out_of_memory+repeatable_read+setup+tx+xsd
2020-01-18T00:34:45
CC-MAIN-2020-05
1579250591431.4
[]
docs.jboss.org
MouseButtonEventArgs.ButtonState Property

Definition

Gets the state of the button associated with the event.

public: property System::Windows::Input::MouseButtonState ButtonState { System::Windows::Input::MouseButtonState get(); };
public System.Windows.Input.MouseButtonState ButtonState { get; }
member this.ButtonState : System.Windows.Input.MouseButtonState
Public ReadOnly Property ButtonState As MouseButtonState

Property Value

The state the button is in.

Examples

The following example creates a mouse button event handler that changes the color of the object that handles the event. The color that is chosen depends on whether the mouse button was pressed or released.

private void MouseButtonEventHandler(object sender, MouseButtonEventArgs e)
{
    if (e.ButtonState == MouseButtonState.Pressed)
    {
        this.Background = Brushes.BurlyWood;
    }
    if (e.ButtonState == MouseButtonState.Released)
    {
        this.Background = Brushes.Ivory;
    }
}

Private Sub MouseButtonEventHandler(ByVal sender As Object, ByVal e As MouseButtonEventArgs)
    If e.ButtonState = MouseButtonState.Pressed Then
        Me.Background = Brushes.BurlyWood
    End If
    If e.ButtonState = MouseButtonState.Released Then
        Me.Background = Brushes.Ivory
    End If
End Sub

Remarks

The Mouse class provides additional properties and methods for determining the state of the mouse.
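The handler in the example above only runs once it is attached to an element's mouse button events. A minimal sketch of a hypothetical XAML hookup, assuming the handler lives in the code-behind of the containing window (MouseDown and MouseUp are the standard WPF bubbling events that deliver MouseButtonEventArgs, so attaching the same handler to both lets it observe both button states):

```xml
<!-- Hypothetical hookup: MouseDown fires with ButtonState == Pressed,
     MouseUp fires with ButtonState == Released, so one handler
     covers both branches of the example. -->
<Grid Background="Ivory"
      MouseDown="MouseButtonEventHandler"
      MouseUp="MouseButtonEventHandler" />
```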
https://docs.microsoft.com/en-us/dotnet/api/system.windows.input.mousebuttoneventargs.buttonstate?view=netframework-4.8
2020-01-18T02:11:00
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
Nested Traffic Manager profiles

Traffic Manager includes a range of traffic-routing methods that allow you to control how Traffic Manager chooses which endpoint should receive traffic from each end user. For more information, see Traffic Manager traffic-routing methods.

Each Traffic Manager profile specifies a single traffic-routing method. However, there are scenarios that require more sophisticated traffic routing than the routing provided by a single Traffic Manager profile. You can nest Traffic Manager profiles to combine the benefits of more than one traffic-routing method. Nested profiles allow you to override the default Traffic Manager behavior to support larger and more complex application deployments.

The following examples illustrate how to use nested Traffic Manager profiles in various scenarios.

Example 1: Combining 'Performance' and 'Weighted' traffic routing

Suppose that you deployed an application in the following Azure regions: West US, West Europe, and East Asia. You use Traffic Manager's 'Performance' traffic-routing method to distribute traffic to the region closest to the user.

Now, suppose you wish to test an update to your service before rolling it out more widely. You want to use the 'Weighted' traffic-routing method to direct a small percentage of traffic to your test deployment. You set up the test deployment alongside the existing production deployment in West Europe.

You cannot combine both 'Weighted' and 'Performance' traffic routing in a single profile. To support this scenario, you create a Traffic Manager profile using the two West Europe endpoints and the 'Weighted' traffic-routing method. Next, you add this 'child' profile as an endpoint to the 'parent' profile. The parent profile still uses the Performance traffic-routing method and contains the other global deployments as endpoints.
The following diagram illustrates this example: In this configuration, traffic directed via the parent profile distributes traffic across regions normally. Within West Europe, the nested profile distributes traffic to the production and test endpoints according to the weights assigned. When the parent profile uses the 'Performance' traffic-routing method, each endpoint must be assigned a location. The location is assigned when you configure the endpoint. Choose the Azure region closest to your deployment. The Azure regions are the location values supported by the Internet Latency Table. For more information, see Traffic Manager 'Performance' traffic-routing method. Example 2: Endpoint monitoring in Nested Profiles Traffic Manager actively monitors the health of each service endpoint. If an endpoint is unhealthy, Traffic Manager directs users to alternative endpoints to preserve the availability of your service. This endpoint monitoring and failover behavior applies to all traffic-routing methods. For more information, see Traffic Manager Endpoint Monitoring. Endpoint monitoring works differently for nested profiles. With nested profiles, the parent profile doesn't perform health checks on the child directly. Instead, the health of the child profile's endpoints is used to calculate the overall health of the child profile. This health information is propagated up the nested profile hierarchy. The parent profile uses this aggregated health to determine whether to direct traffic to the child profile. See the FAQ for full details on health monitoring of nested profiles. Returning to the previous example, suppose the production deployment in West Europe fails. By default, the 'child' profile directs all traffic to the test deployment. If the test deployment also fails, the parent profile determines that the child profile should not receive traffic since all child endpoints are unhealthy. Then, the parent profile distributes traffic to the other regions. 
You might be happy with this arrangement. Or you might be concerned that all traffic for West Europe is now going to the test deployment instead of a limited subset of traffic. Regardless of the health of the test deployment, you want to fail over to the other regions when the production deployment in West Europe fails. To enable this failover, you can specify the 'MinChildEndpoints' parameter when configuring the child profile as an endpoint in the parent profile. The parameter determines the minimum number of available endpoints in the child profile. The default value is '1'. For this scenario, you set the MinChildEndpoints value to 2. Below this threshold, the parent profile considers the entire child profile to be unavailable and directs traffic to the other endpoints. The following figure illustrates this configuration:

Note: The 'Priority' traffic-routing method distributes all traffic to a single endpoint. Thus, there is little purpose in a MinChildEndpoints setting other than '1' for a child profile.

Example 3: Prioritized failover regions in 'Performance' traffic routing

The default behavior for the 'Performance' traffic-routing method is that when you have endpoints in different geographic locations, end users are routed to the "closest" endpoint in terms of the lowest network latency. However, suppose you prefer that West Europe traffic fail over to West US, and only direct traffic to other regions when both endpoints are unavailable.

You can create this solution using a child profile with the 'Priority' traffic-routing method. Since the West Europe endpoint has higher priority than the West US endpoint, all traffic is sent to the West Europe endpoint when both endpoints are online. If West Europe fails, its traffic is directed to West US. With the nested profile, traffic is directed to East Asia only when both West Europe and West US fail. You can repeat this pattern for all regions.
Replace all three endpoints in the parent profile with three child profiles, each providing a prioritized failover sequence.

Example 4: Controlling 'Performance' traffic routing between multiple endpoints in the same region

Suppose the 'Performance' traffic-routing method is used in a profile that has more than one endpoint in a particular region. By default, traffic directed to that region is distributed evenly across all available endpoints in that region. Instead of adding multiple endpoints in West Europe, those endpoints are enclosed in a separate child profile. The child profile is added to the parent as the only endpoint in West Europe. The settings on the child profile can control the traffic distribution within West Europe by enabling priority-based or weighted traffic routing within that region.

Example 5: Per-endpoint monitoring settings

Suppose you are using Traffic Manager to smoothly migrate traffic from a legacy on-premises web site to a new Cloud-based version hosted in Azure. For the legacy site, you want to use the home page URI to monitor site health. But for the new Cloud-based version, you are implementing a custom monitoring page (path '/monitor.aspx') that includes additional checks. The monitoring settings in a Traffic Manager profile apply to all endpoints within a single profile. With nested profiles, you use a different child profile per site to define different monitoring settings.

FAQs

How do I configure nested profiles?
How many layers of nesting does Traffic Manager support?
Can I mix other endpoint types with nested child profiles, in the same Traffic Manager profile?
How does the billing model apply for Nested profiles?
Is there a performance impact for nested profiles?
How does Traffic Manager compute the health of a nested endpoint in a parent profile?

Next steps

Learn more about Traffic Manager profiles
Learn how to create a Traffic Manager profile
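Nested endpoints such as the one in Example 2 can also be configured from the command line. A sketch using the Azure CLI (the resource group, profile names, and subscription ID below are hypothetical placeholders; flag names follow the az network traffic-manager endpoint create command group, so verify them against your CLI version):

```shell
# Add a child profile as a nested endpoint of a parent profile,
# requiring at least 2 healthy child endpoints before the parent
# considers the child available (Example 2's MinChildEndpoints = 2).
az network traffic-manager endpoint create \
  --resource-group MyResourceGroup \
  --profile-name ParentProfile \
  --name WestEuropeChild \
  --type nestedEndpoints \
  --target-resource-id "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/trafficManagerProfiles/ChildProfile" \
  --endpoint-location westeurope \
  --min-child-endpoints 2
```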
https://docs.microsoft.com/nb-no/azure/traffic-manager/traffic-manager-nested-profiles
2020-01-18T01:21:30
CC-MAIN-2020-05
1579250591431.4
[array(['media/traffic-manager-nested-profiles/figure-4.png', 'Single Traffic Manager profile'], dtype=object) array(['media/traffic-manager-nested-profiles/figure-2.png', 'Nested Traffic Manager profiles'], dtype=object) array(['media/traffic-manager-nested-profiles/figure-3.png', 'Nested Profile failover (default behavior)'], dtype=object) array(['media/traffic-manager-nested-profiles/figure-4.png', "Nested Profile failover with 'MinChildEndpoints' = 2"], dtype=object) array(['media/traffic-manager-nested-profiles/figure-6.png', "'Performance' traffic routing with preferential failover"], dtype=object) array(['media/traffic-manager-nested-profiles/figure-7.png', "'Performance' traffic routing in-region traffic distribution (default behavior)"], dtype=object) array(['media/traffic-manager-nested-profiles/figure-8.png', "'Performance' traffic routing with custom in-region traffic distribution"], dtype=object) array(['media/traffic-manager-nested-profiles/figure-9.png', 'Traffic Manager endpoint monitoring (default behavior)'], dtype=object) array(['media/traffic-manager-nested-profiles/figure-10.png', 'Traffic Manager endpoint monitoring with per-endpoint settings'], dtype=object) ]
docs.microsoft.com
Barcode scanning

Barcodes and QR codes are methods of representing data in a visual, machine-readable form. Barcode scanning is a must-have feature for many field sales scenarios. If your device is equipped with a camera with autofocus (and most modern devices are), you can use barcode scanning in Resco Mobile CRM.

Change field format to barcode

If your entity has a field that can be read from a barcode, you must change the formatting of the field.

Use barcode in forms

If you add the field with barcode format to your form, the app displays a custom button that initiates the camera and barcode scanning. The scanned value is saved to the field.

Use barcode in views

You can enable barcode scanner functionality to find records in views, particularly lookup views. You will notice barcode scanner icons. When you tap this icon, the camera turns on, and when you focus it on the barcode or QR code, it recognizes the value and compares it with the field set to barcode formatting. If the value is found, the record is selected. There's a subtle difference between the buttons in the top caption row and the search bar:
- The caption button is used in lookup views. If the search is successful, the record is automatically added to the field from which it was called.
- The search bar button is useful in all views and serves simply as an alternative way to search and filter records.
Supported barcode formats

- Partial
- EAN2
- EAN5
- EAN8
- UPCE
- ISBN10
- UPCA
- EAN13
- ISBN13
- COMPOSITE
- Interleaved25
- DataBar
- DataBarExpanded
- Code39
- PDF417
- QRCode
- Code93
- Code128

See also

- Quick tip: Barcode scanning Blog
- Scanning codes can simplify your processes Blog
- Use QR code for quick logins Blog
- Barcode scanning in action YouTube
- Resco Mobile CRM with Barcode Scanner Support Blog
- Tailoring the mobile report: How to use customized and barcode fonts Blog
- AI Image Recognition - using AI image classification for use cases similar to barcode scanning
https://docs.resco.net/wiki/Barcode
2020-01-18T00:33:46
CC-MAIN-2020-05
1579250591431.4
[]
docs.resco.net
Producers and consumers¶

Producers and consumers in definitions¶

Producers and consumers exist on two different levels in CertSAFE. At the definition level, CertSAFE evaluates the rules listed above for producers and consumers and generates errors and warnings within each definition as appropriate. These rules are checked entirely within the diagram or stitch, using only the information from the definition and any definitions it directly references.

CertSAFE provides features to help you understand producer/consumer relationships in complicated diagrams. Right-clicking a component or wire (or a selected group of components and wires) in a diagram gives you options to "Trace Upstream From Selected" and "Trace Downstream From Selected". Choosing "Trace Downstream From Selected" will select everything that uses the values produced by the selected elements, and everything that uses the values produced by those, and so on. Choosing "Trace Upstream From Selected" will do the reverse, selecting everything that is needed to produce the values of the selected elements. For complex diagrams, including diagrams that use local variables, this provides a method of quickly seeing what is and is not relevant to a particular piece of logic.

Producers and consumers in instances¶

When you mouse over the name of an instance variable (such as when a diagram is in instance mode, or while the user is in Instance view), CertSAFE will display a list of the composite unit instances where the variable is referenced. These are categorized as "Producer", "Consumer", or "Alias".

- A "Producer" instance is a diagram that contains a non-composite unit that produces the variable.
- A "Consumer" instance is a diagram that contains a non-composite unit that consumes the variable.
- An "Alias" instance is a diagram or stitch that only passes the variable around without itself performing any logic on it.
Occasionally you may see a diagram instance that is categorized as "Producer/Consumer"; this means that the diagram contains both a non-composite unit that produces the variable and one or more non-composite units that consume the variable.

This list is the fastest way to navigate an instance hierarchy across data dependencies, since each item in the list is a hyperlink that will take you to that instance when clicked. So, for example, if you see a variable and you don't know where it is produced, you can simply mouse over the variable and then click on the "Producer" entry to go directly to the diagram that actually computes the variable's value.

Variable usage icons¶

CertSAFE displays icons and other visual cues to help you understand the producer/consumer relationships of variables in your model. See the article on variable usage icons for more information.
https://docs.certsafe.com/concepts-instances/producers-and-consumers.html
2020-01-18T01:44:24
CC-MAIN-2020-05
1579250591431.4
[array(['../_images/Trace-upstream.png', 'ALT'], dtype=object) array(['../_images/Instance-variable-references.png', 'ALT'], dtype=object) array(['../_images/Variable-usage-icons.png', 'ALT'], dtype=object)]
docs.certsafe.com
- Activate the repo in your Travis account, so it is built when we push changes to it.
- Under Travis More Options -> Settings -> Environment Variables,
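Once the repo is activated, Travis drives the build from a .travis.yml in the repository root. A minimal sketch assuming a Python-based build with the Conan client (the package user/channel below is an illustrative placeholder; CONAN_LOGIN_USERNAME and CONAN_PASSWORD are the environment variable names conventionally read by Conan CI tooling, and should be set as secrets in the Travis settings page rather than committed here):

```yaml
language: python
python: "3.7"

install:
  # Install the Conan client used to build and upload the package
  - pip install conan

script:
  # Build and test the package defined by the conanfile in this repo
  - conan create . demo/testing

# Remote credentials (e.g. CONAN_LOGIN_USERNAME, CONAN_PASSWORD) belong
# in Travis: More Options -> Settings -> Environment Variables.
```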
https://docs.conan.io/en/1.11/integrations/travisci.html
2020-01-18T00:15:09
CC-MAIN-2020-05
1579250591431.4
[]
docs.conan.io
Occurs when a callback for server-side processing is initiated. BeginCallback: ASPxClientEvent<ASPxClientBeginCallbackEventHandler<ASPxClientPopupControlBase>> The BeginCallback event handler receives an argument of the ASPxClientBeginCallbackEventArgs type. The following properties provide information specific to this event. The BeginCallback event, together with the ASPxClientPopupControlBase.EndCallback event, can be used to perform specific client-side actions (for example, to display and hide an explanatory picture) while a callback is being processed on the server side.
https://docs.devexpress.com/AspNet/js-ASPxClientPopupControlBase.BeginCallback
2020-01-18T01:35:05
CC-MAIN-2020-05
1579250591431.4
[]
docs.devexpress.com
Accessibility

Resources

Chrome Accessibility Developer Tools are useful for testing for potential accessibility problems in GitLab.

The axe browser extension (available for Firefox and Chrome) is also a handy tool for running audits and getting feedback on markup, CSS, and even potentially problematic color usages.

Accessibility best practices and more in-depth information are available on the Audit Rules page for the Chrome Accessibility Developer Tools. The "awesome a11y" list is also a useful compilation of accessibility-related resources.
https://docs.gitlab.com/ee/development/fe_guide/accessibility.html
2020-01-18T01:18:38
CC-MAIN-2020-05
1579250591431.4
[]
docs.gitlab.com
At a Glance The vEdge 100 router delivers highly secure site-to-site data connectivity to small business and home offices (SOHO). The vEdge 100 router is a fixed-port-configuration router with the following features: - Five built-in 10/100/1000 Mbps Ethernet ports - Power over Ethernet (PoE) source support on one Ethernet port - Encryption and QoS support - 100 Mbps forwarding throughput (inclusive of encryption) - Secure identification chip for anti-counterfeit and secure authentication - Integrated power supply - Kensington security lock slot to physically lock down the router - GPS input for geographical location - Desktop mount, wall mount, or rack-mountable in a 19-inch rack Chassis Views Figure 1 and Figure 2 show the front and back panels of the vEdge 100 router, indicating the location of the power interfaces, status indicators, and chassis identification labels. Figure 1: Front Panel of the vEdge 100 Router Figure 2: Back Panel of the vEdge 100 Router Additional Information Components and Specifications Planning and Installation Maintenance and Troubleshooting
https://docs.viptela.com/Product_Documentation/vEdge_Routers/02vEdge_100_Router/01At_a_Glance
Software Upgrade

Use the Software Upgrade screen to download new software images and to upgrade the software image running on a Viptela device.

From a centralized vManage NMS, you can upgrade the software on Viptela devices in the overlay network and reboot them with the new software. You can do this for a single device or for multiple devices simultaneously. When you upgrade a group of vBond orchestrators, vSmart controllers, and vEdge routers, the software upgrade and reboot is performed first on the vBond orchestrator, next on the vSmart controllers, and finally on the vEdge routers. For vEdge routers, up to five routers can be upgraded and rebooted in parallel at the same time.

You cannot include the vManage NMS in a group software upgrade operation. You must upgrade and reboot the vManage server by itself. It is recommended that you perform all software upgrades from the vManage NMS rather than from the CLI.

Screen Elements

- Top bar—On the left are the menu icon, for expanding and collapsing the vManage menu, and the vManage product name. On the right are a number of icons and the user profile drop-down.
- Title bar—Includes the title of the screen, Software Upgrade.
- Device List drop-down—Displays the list of devices in the overlay network. When you first open the Software Upgrade screen, Device List is selected by default.
- Tab bar—Includes the vEdge, Controller, and vManage tabs.
- Rows Selected—Displays the number of rows selected from the table.
- Upgrade button—Installs a new software version on the device.
- Activate button—Reboots the device and activates the new software version.
- Delete Available Software button—Deletes a software version from a device.
- Set Default Version button—Sets a software image to be the default image on the device.
- Device Group drop-down—List of all configured device groups in the network.
- Table of devices in the overlay network—To re-arrange the columns, drag a column title to the desired position.
- Repository drop-down—Click Repository from the Device List drop-down to display the list of software images on the vManage or remote server.
- Add New Software drop-down (on the Repository screen)—Upload new software images to the vManage or remote server.
- Table of software images—To re-arrange the columns, drag a column title to the desired position.
- Search box—Includes the Search Options drop-down, for a Contains or Match string.
- Refresh icon—Click to refresh data in the device table with the most current data.
- Export icon—Click to download all data to a file, in CSV format (on the Device List screen only).
- Show Table Fields icon—Click to display or hide columns from the device table. By default, all columns are displayed.

View Software Images

To view a list of software images in the repository on the vManage server or on a remote server, from the Device List drop-down, click Repository.

Upload Software Images

Before you can upgrade any devices to a new software version, you need to upload the images to the vManage server or point to a remote server on which you have downloaded the files from the Viptela website. To do so:

- Click the Repository button located on the right side of the title bar. The Software Repository screen opens.
- Click Add New Software.
- Select the location from which to download the software images.
  - If you select vManage, the Upload Software to vManage dialog box opens. Click Choose Files to select software images for vEdge routers, vSmart controllers/vEdge Cloud routers, or the vManage NMS, and then click Upload to upload the images to the repository.
  - If you select Remote Server, the Location of Software on Remote Server dialog box opens. Enter the version number of the software image, enter the URL of the FTP or HTTP server on which the software images reside, and then click Add to add the images to the repository.
- The added software images are displayed in the Repository table and are available for installing on the devices.
- To return to the Software Upgrade view, click the Device List toggle button.

Upgrade a Software Image

To upgrade the software image on a device:

- In the title bar, click the vEdge, Controller, or vManage tab.
- Select one or more devices on which to upgrade the software image.
- Click the Upgrade button. The Software Upgrade dialog box opens.
- Select the software version to install on the device. If the software is located on a Remote Server, select the VPN in which the software image is located.
- To automatically activate the new software version and reboot the device, select the Activate and Reboot checkbox.
- Click Upgrade. A progress bar indicates the status of the software upgrade. The operation has a default time limit; for vEdge 100 routers, the default time is 12 minutes.

Note: If you upgrade the vEdge software to a version higher than that running on a controller device, a warning message is displayed that software incompatibilities might occur. It is recommended that you upgrade the controller software first, before upgrading the vEdge software.

Activate a New Software Image

If you did not select the Activate and Reboot checkbox when upgrading the software image, the device continues to use the existing configuration. To activate the new software image:

- In the title bar, click the vEdge, Controller, or vManage tab.
- Select one or more devices on which to activate the new software image.
- Click the Activate button. The Activate Software dialog box opens.
- Select the software version to activate on the device.
- Click Activate. vManage NMS reboots the device and activates the new software image. The operation has a default time limit; for vEdge 100 routers, the default time is 12 minutes.

Delete a Software Image

To delete a software image from a Viptela device:

- In the title bar, click the vEdge, Controller, or vManage tab.
- Select one or more devices from which to delete a software image.
- Click the Delete Available Software button. The Delete Available Software dialog box opens.
- Select the software version to delete.
- Click Delete.

Set the Software Default Version

You can set a software image to be the default image on a Viptela device.

Export Device Data in CSV Format

To export data for all devices to a file in CSV format, click the Export button. This button is located to the right of the filter criteria in the vEdge, Controller, and vManage tabs. vManage NMS downloads all data from the device table to an Excel file in CSV format. The file is downloaded to your browser's default download location and is named viptela_download.csv.

View Log of Software Upgrade Activities

To view the status of software upgrades and a log of related activities:

- Click the Tasks icon located in the vManage toolbar. vManage NMS displays a list of all running tasks along with the total number of successes and failures.
- Click a row to see details of a task. vManage NMS opens a status window displaying the status of the task and details of the device on which the task was performed.
https://docs.viptela.com/Product_Documentation/vManage_Help/Release_17.1/Maintenance/Software_Upgrade
Third-party JDBC tools can help you browse data in tables, issue SQL commands, design new tables, and so forth. You can configure these tools to use the GemFire XD JDBC thin client driver to connect to a GemFire XD distributed system. jdbc:gemfirexd://hostname:port/ where hostname and port correspond to the -client-bind-address and -client-port value of a GemFire XD server or locator in your distributed system. If authentication is disabled, specify "app" as both the username and password values, or any other temporary value. For a full example of configuring GemFire XD with a third-party JDBC tool, see Using SQuirreL SQL with GemFire XD on the GemFire XD community site.
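As a minimal sketch of the URL format (the locator host name and port below are illustrative — substitute your own -client-bind-address and -client-port values), the thin-client connection string can be assembled like this:

```python
def gemfirexd_thin_client_url(hostname: str, port: int) -> str:
    """Build a GemFire XD JDBC thin-client connection URL.

    Format from the docs: jdbc:gemfirexd://hostname:port/
    where hostname and port match a server's or locator's
    -client-bind-address and -client-port settings.
    """
    return f"jdbc:gemfirexd://{hostname}:{port}/"

# Hypothetical locator address:
print(gemfirexd_thin_client_url("locator1.example.com", 1527))
```

The resulting string is what you would paste into a third-party tool's connection dialog, alongside the driver class and the "app"/"app" credentials when authentication is disabled.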
http://gemfirexd.docs.pivotal.io/docs/1.3.0/userguide/getting_started/3rd_party_connections.html
Go, Go, Go! Getting started with web development on Joelie is a fun, fast-paced experience. Whether you are a complete beginner or a seasoned code warrior, Joelie can dramatically speed up your creative process and bring you closer to accomplishing your development goals. Joelie Docs will make more sense if you have already signed into Joelie and had a look around, but are meant to be read by anyone. 1 Minute Crash Course To use Joelie, you simply sign in and create a site. Your Joelie site begins with only one piece of code, called the "solo" routine. You can program this routine to do anything, even just display the word "hello". To run the code and start using your site, click the "Run" button, or navigate to yoursite.joelie.org. Important Setup Instructions Feels weird, doesn't it? There really is nothing to download or set up with Joelie. You just start coding. You can begin coding your site's solo routine instantly after creating the site. There are no downloads, configuration steps or installations, period. Of course, you'll have to learn how to code your site.
http://www.docs.joelie.org/
The sun submodule contains constants, parameters, and models of the Sun. It contains astronomical and physical constants for use in SunPy or other places. These constants are taken from various sources. The structure of this module is heavily based on, if not directly copied from, the SciPy constants module, but contains solar physical constants. This module contains standard models of the sun from various sources. All data is saved in pandas DataFrames with two added attributes
http://docs.sunpy.org/en/stable/code_ref/sun.html
TestRail integration allows you to link a test in TestRail to a test in Testim. The test run results will automatically be shown in TestRail, giving you full visibility into both your manual test results and your automated tests in a single view.

Setting up TestRail integration

This process is required only once.

1. Navigate to "Settings", and then to the "Integration" tab.
2. Click the "login" to TestRail link.
3. Open TestRail, copy the domain from the URL (make sure you are logged in) and paste it into the URL field.
4. Enter your TestRail username.
5. Log into TestRail as an Admin user, navigate to "My Settings" and then to the "API Keys" tab. Click "Generate Key", enter any key name, copy the generated string and click "Save settings". Paste this key into the ApiKey field.
6. Click "Connect".
7. Choose the TestRail project to connect to.

Connecting a test in Testim to a TestRail test

Open the test you would like to connect to a TestRail test. In the setup step's properties panel, choose the TestRail test to connect to and save the test. After running the test, the result will be presented in the relevant TestRail project under the "Test run and results" tab.

Notes:
- Testim run names will always follow this convention: "Report from Testim.io - Suite\Test name"
- Only remote run results will be shown in TestRail (local runs will not be shown).
- Suite runs will be presented as one run in TestRail; click a specific run to see the results of all tests in the suite.
http://docs.testim.io/integrations/testrail-integration
The following is a high-level summary of the differences between NAT instances and NAT gateways. Supports forwarding of IP fragmented packets for the UDP protocol. Does not support fragmentation for the TCP and ICMP protocols. Fragmented packets for these protocols will get dropped.
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-comparison.html
The time zone is indicated on each scheduling page so that you know exactly when a scheduled operation will occur.

Changing the Time Zone

If you change the time zone on a computer hosting a report server, you must restart Microsoft Internet Information Services (IIS) in order for the time zone change to take effect. Timestamp values of existing generated reports that are not refreshed (for example, …

Changing the Clock Settings

…

See Also

Other Resources: Scheduling Reports and Subscriptions
Help and Information: Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms157158(v=sql.90)
Snippets are parts of templates that are common for different email notifications. Snippets are useful in 2 cases: In these cases you can create a snippet with the necessary content. Once you save the snippet, you'll be able to insert all this content by simply inserting a reference to the snippet into an email template. Note Changes in a snippet will automatically apply to all email templates that include the snippet. There are two snippets that exist by default: Header and Footer. They are used in all email notifications. By default, Header passes the variables of the theme to the template. This makes the look of your email notifications match the look of your active theme and style. Important This won't work if you remove the variables of the theme from the Header snippet, or remove the snippet itself from the email template. The list of snippets for email notifications is available under Design → Email templates on the Code snippets tab. This is the page where you add, edit, and delete snippets. To add a new snippet, click the + button in the top right corner of the screen. To edit an existing snippet, click on its name. A popup window will open. Edit the snippet here: Once you are done editing the snippet, click Create (or Save, if the snippet already exists). Important If you edit an existing snippet, the Restore button will appear next to the Save button. The Restore button returns the snippet to its initial state (as it was when the snippet was created). All your snippets are exported and imported together with email templates in one XML file. Click the gear button in the top right corner of the page and choose Import or Export. Learn more in the article about exporting and importing email templates. Important An imported snippet will overwrite a snippet that exists in your store, if both snippets have the same code. All snippets with the Active status appear among the available snippets in the email template editor.
Just click on the snippet, and it will be added to the place of the template where you left the cursor. Alternatively, you can insert a snippet manually. For example, to insert a snippet with the code test, add {{ snippet("test") }} to the template. Once you do that, the content of the snippet will appear in the email notification preview.
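For instance, an email template that inserts a shared footer after its own content might look like the fragment below (the snippet code social_links and the customer_name variable are hypothetical placeholders, used only to show where the reference goes):

```twig
<p>Hello, {{ customer_name }}!</p>

{# Inserts the content of the snippet whose code is "social_links" #}
{{ snippet("social_links") }}
```

When the notification is rendered, the snippet reference is replaced by the snippet's current content, so later edits to the snippet flow into every template that references it.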
https://docs.cs-cart.com/4.7.x/user_guide/look_and_feel/email_templates/email_snippets.html
This article describes how to create an option for a particular product. If you sell multiple products that all have the same options (such as color or size), it may be better to use global product options. In the Administration panel, go to Products → Products. Open the product editing page and then switch to the Options tab. Click the Add option button and fill in the form in a pop-up window: Click the Create button. Now if you return to the product editing page and go to the Options tab, you'll be able to create option combinations. Hint Options for a product can also be added via CSV import.
https://docs.cs-cart.com/4.7.x/user_guide/manage_products/options/product_options.html
A non-blocking implementation of the EventPublisher interface. This class is a drop-in replacement for EventPublisherImpl except that it does not synchronise on the internal map of event type to ListenerInvoker, and should handle much higher parallelism of event dispatch. One can customise the event listening by instantiating with custom listener handlers and the event dispatching through EventDispatcher. See the com.atlassian.event.spi package for more information. ListenerHandler EventDispatcher If you need to customise the asynchronous handling, you should use the AsynchronousAbleEventDispatcher together with a custom executor. You might also want to have a look at using the EventThreadFactory to keep the naming of event threads consistent with the default naming of the Atlassian Event library. Publish an event that will be consumed by all listeners which have registered to receive it. Implementations must dispatch events to listeners which have a public method annotated with EventListener and one argument which is assignable from the event type (i.e. a superclass or interface implemented by the event object). Implementations may also dispatch events to legacy EventListener implementations based on the types returned from getHandledEventClasses(). This method should process all event listeners, despite any errors or exceptions that are generated as a result of dispatching the event. Register a listener to receive events. All implementations must support registration of listeners where event handling methods are indicated by the EventListener annotation. Legacy implementations may also support listeners which implement the now-deprecated EventListener interface. Un-register a listener so that it will no longer receive events. If the given listener is not registered nothing will happen. Un-register all listeners that this publisher knows about.
https://docs.atlassian.com/DAC/javadoc/events/2.1.1/reference/com/atlassian/event/internal/LockFreeEventPublisher.html
.host

Used by companies that provide web hosting platforms and services.

- Registration and renewal period: One to ten years.
- Privacy protection (applies to all contact types: person, company, association, and public body): All information is hidden except organization name.
- Domain locking to prevent unauthorized transfers: Supported.
- Internationalized domain names: Supported for Arabic, Simplified Chinese, Traditional Chinese, Greek, Hebrew, Korean, and Thai.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/host.html
The ability to add a game becomes available after installing and activating the ACES Gambling plugin.

1) Go to "Games" – "Add New."
2) Add a game title and content.
3) "Featured Image" – A logo for the game. Recommended minimum size: 570×570 px.
4) "Excerpt" – A few sentences with the primary meaning of the content.
5) Add "Additional information":
   - "Short description" – A few words about the game. This information is displayed in game widgets/shortcodes.
6) "Game Rating" – Game rating from 1 to 5.
7) "Casinos" – From this list, select the casino with which this game is associated. This list shows the casinos that you add in the "Casinos" section.
8) "Game page style" – Select a prepared style for the game page template.
9) "Background Image" – Here, you can add a background image for the third style of the game page template. The minimum recommended size for the background image: 2000×768 px.
10) "Pros/Cons of the game" – You can add a description of the pros and cons of the game.
https://docs.mercury.is/article/how-to-add-a-game/
Changelog for package warehouse_ros 0.10.0 (2020-10-09) clang-format-10 Python3 compatibility layer for roslaunch in CMake ( #35 ) Contributors: Robert Haschke, Tyler Weaver 0.9.2 (2020-07-21) Proper cmake test for mongo/version.h Bump cmake version to 3.0.2 ROS test: Start mongodb mongo_wrapper_ros.py: Handle permission errors creating database path python3 syntax compatibility Contributors: Robert Haschke 0.9.1 (2019-09-28) Initialize mongo client only once ( #28 ) Restore mongod python wrapper Contributors: Masaki Murooka, Robert Haschke 0.9.0 (2019-08-14) Use std pointers Fix catkin issues Refactored Warehouse ROS for pluginlib Contributors: Christian Rauch, Connor Brew, Dave Coleman, Robert Haschke, v4hn 0.8.8 (2014-10-01) Merge pull request #13 from corot/master Issue #11 : Add a Python library Merge pull request #15 from v4hn/shared-static-mongodb only export MongoDB dependency for shared mongodb-library only export MongoDB dependency for shared mongodb-library libmongoclient.a uses quite a number of other libs and the exact requirements can't be read from a cmake/pc file. Therefore it makes more sense to keep the dependency hidden from ROS when we use the static lib. libwarehouse_ros then provides all required functions. ... This is a bit like creating a libmongoclient.so, but the whole problem exists because debian/ubuntu don't provide this one, right? The shared library can - and has to - be exported as a dependency to ROS. 
Missing part of : requires both mongodb and mongodb-dev Merge branch 'master' of Add kwargs also to insert so we can solves issues as Add kwargs to ensure_index so we can solves issues as Add python-pymongo dependency Issue : rospy queue_size warnings Issue #11 : Add a Python library Contributors: Ioan A Sucan, corot, v4hn 0.8.5 (2014-02-23) Fixed malloc.h inclusion on Mac OS X Rename README.rst to README.md added travis support Contributors: Acorn, Dave Hershberger, Ioan A Sucan, Marco Esposito 0.8.4 (2013-07-03) update how we find MongoDB 0.8.2 (2013-07-03) fix typo and use correct install location add config.h.in for deciding how to include mongo headers 0.8.1 (2013-07-03) fix linking issues (missing SSL symbols) in deps, undef defined macros
https://docs.ros.org/en/noetic/changelogs/warehouse_ros_mongo/changelog.html
Crate ripasso, version 0.5.1

This crate implements handling of a pass directory compatible with pass. The encryption is handled by GPGme and the git integration is with libgit2. This is the library part of ripasso; it implements the functions needed to manipulate a pass directory. Password generation is handled by a library module based on the long word list from the EFF.
https://docs.rs/ripasso/0.5.1/ripasso/
Get Settings API

Endpoint

GET /_signals/settings
GET /_signals/settings/{key}

Retrieves all Signals settings or a single setting item.

Path Parameters

{key} The configuration setting to be retrieved. See [Signals Administration](administration.md) for a list of the available settings.

Responses

200 OK The setting could be successfully retrieved. The value of the setting is returned in the response body. The response format is JSON. This means that if a setting has a simple textual value, the value will be returned in double quotes. If you specify the header Accept: text/plain in the request, you will get a plain text response with unquoted textual values.

403 Forbidden The user does not have permission to retrieve settings.

404 Not Found A setting does not exist for the particular key.

Permissions

To access the endpoint, the user needs to have the privilege cluster:admin:searchguard:signals:settings/put. This permission is included in the following built-in action groups:

- SGS_SIGNALS_ALL

Examples

GET /_signals/settings

Response

{ "active": "true", "http": { "allowed_endpoints": [ "*", "*" ] }, "tenant": { "_main": { "active": "true", "node_filter": "node.attr.signals: true" } } }

GET /_signals/settings/watchlog.index

Response

"<.signals_log_{now/d}>"
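As an illustrative sketch (not a complete client — the cluster address is a placeholder and no request is actually sent), the method, URL, and Accept header for a Get Settings call can be assembled like this:

```python
def signals_settings_request(base_url, key=None, plain_text=False):
    """Build (method, url, headers) for the Signals Get Settings endpoint.

    GET /_signals/settings returns all settings;
    GET /_signals/settings/{key} returns a single item.
    With Accept: text/plain, textual values come back unquoted
    instead of as JSON strings in double quotes.
    """
    url = f"{base_url}/_signals/settings"
    if key:
        url += f"/{key}"
    headers = {"Accept": "text/plain" if plain_text else "application/json"}
    return ("GET", url, headers)

# Hypothetical cluster address:
print(signals_settings_request("https://localhost:9200", key="active", plain_text=True))
```

The resulting triple can be handed to any HTTP client; remember that the caller also needs the cluster:admin:searchguard:signals:settings/put privilege for the request to succeed.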
https://docs.search-guard.com/latest/elasticsearch-alerting-rest-api-settings-get
Creates an object wrapper. The Object constructor creates an object wrapper for the given value. If the value is null or undefined, it will create and return an empty object, otherwise, it will return an object of a type that corresponds to the given value. When called in a non-constructor context, Object behaves identically. The following examples store an empty Object object in o: var o = new Object(); var o = new Object(undefined); var o = new Object(null); The following examples store Boolean objects in o: // equivalent to o = new Boolean(true); var o = new Object(true); // equivalent to o = new Boolean(false); var o = new Object(Boolean()); Specifies the function that creates an object's prototype. The following example creates a prototype, Tree, and an object of that type, theTree.
The example then displays the constructor property for the object theTree. function Tree(name) { this.name = name; } theTree = new Tree("Redwood"); console.log("theTree.constructor is " + theTree.constructor); This example displays the following output: theTree.constructor is function Tree(name) { this.name = name; } Allows the addition of properties to all objects of type Object. Creates new Object. value : Object (optional) The value to wrap. Returns a boolean indicating whether an object contains the specified property as a direct property of that object and not inherited through the prototype chain. Every object descended from Object inherits the hasOwnProperty method. This method can be used to determine whether an object has the specified property as a direct property of that object; unlike the in operator, this method does not check down the object's prototype chain. The following example determines whether the o object contains a property named prop: o = new Object(); o.prop = 'exists'; function changeO() { o.newprop = o.prop; delete o.prop; } o.hasOwnProperty('prop'); //returns true changeO(); o.hasOwnProperty('prop'); //returns false The following example differentiates between direct properties and properties inherited through the prototype chain: o = new Object(); o.prop = 'exists'; o.hasOwnProperty('prop'); // returns true o.hasOwnProperty('toString'); // returns false o.hasOwnProperty('hasOwnProperty'); // returns false The following example shows how to iterate over the properties of an object without executing on inherited properties. var buz = { fog: 'stack' }; for (var name in buz) { if (buz.hasOwnProperty(name)) { alert("this is fog (" + name + ") for sure. Value: " + buz[name]); } else { alert(name); // toString or something else } } prop : String The name of the property to test. Returns true if object contains specified property; else returns false.
Returns a boolean indicating whether the specified object is in the prototype chain of the object this method is called upon. prototype : Object an object to be tested against each link in the prototype chain of the object argument object : Object the object whose prototype chain will be searched Returns true if object is a prototype and false if not. Returns a boolean indicating if the internal ECMAScript DontEnum attribute is set. Every object has a propertyIsEnumerable method. This method can determine whether the specified property in an object can be enumerated by a for...in loop, with the exception of properties inherited through the prototype chain. If the object does not have the specified property, this method returns false. The following example shows the use of propertyIsEnumerable on objects and arrays: var o = {}; var a = []; o.prop = 'is enumerable'; a[0] = 'is enumerable'; o.propertyIsEnumerable('prop'); // returns true a.propertyIsEnumerable(0); // returns true The following example demonstrates the enumerability of user-defined versus built-in properties: var a = ['is enumerable']; a.propertyIsEnumerable(0); // returns true a.propertyIsEnumerable('length'); // returns false Math.propertyIsEnumerable('random'); // returns false this.propertyIsEnumerable('Math'); // returns false Direct versus inherited properties var a = []; a.propertyIsEnumerable('constructor'); // returns false function firstConstructor() { this.property = 'is not enumerable'; } firstConstructor.prototype.firstMethod = function () {}; function secondConstructor() { this.method = function method() { return 'is enumerable'; }; } secondConstructor.prototype = new firstConstructor; secondConstructor.prototype.constructor = secondConstructor; var o = new secondConstructor(); o.arbitraryProperty = 'is enumerable'; o.propertyIsEnumerable('arbitraryProperty'); // returns true o.propertyIsEnumerable('method'); // returns true o.propertyIsEnumerable('property'); // returns false o.property = 'is enumerable'; o.propertyIsEnumerable('property'); // returns true // These return false as they are on the prototype which // propertyIsEnumerable does not consider (even though the last two // are iterable with for-in) o.propertyIsEnumerable('prototype'); // returns false (as of JS 1.8.1/FF3.6) o.propertyIsEnumerable('constructor'); // returns false o.propertyIsEnumerable('firstMethod'); // returns false prop : String The name of the property to test. If the object does not have the specified property, this method returns false. Returns a string representing the object. This method is meant to be overridden by derived objects for locale-specific purposes. Object's toLocaleString returns the result of calling toString. This function is provided to give objects a generic toLocaleString method, even though not all may use it. Currently, only Array, Number, and Date override toLocaleString. Object represented as a string. Returns a string representation of the object. Object represented as a string. Returns the primitive value of the specified object. Note: Objects in string contexts convert via the toString method, which is different from String objects converting to string primitives using valueOf. All objects have a string conversion, if only "[object type]". But many objects do not convert to number, boolean, or function. Returns value of the object or the object itself. Creates a new object with the specified prototype object and properties. Below is an example of how to use Object.create to achieve classical inheritance; this is for single inheritance, which is all that Javascript supports. //Shape - superclass function Shape() { this.x = 0; this.y = 0; } Shape.prototype.move = function(x, y) { this.x += x; this.y += y; console.info("Shape moved."); }; // Rectangle - subclass function Rectangle() { Shape.call(this); //call super constructor.
} Rectangle.prototype = Object.create(Shape.prototype); var rect = new Rectangle(); rect instanceof Rectangle //true. rect instanceof Shape //true. rect.move(1, 1); //Outputs, "Shape moved." If you wish to inherit from multiple objects, then mixins are a possibility. function MyClass() { SuperClass.call(this); OtherSuperClass.call(this); } MyClass.prototype = Object.create(SuperClass.prototype); //inherit mixin(MyClass.prototype, OtherSuperClass.prototype); //mixin MyClass.prototype.myMethod = function() { // do a thing }; The mixin function would copy the functions from the superclass prototype to the subclass prototype; the mixin function needs to be supplied by the user. Using the propertiesObject argument with Object.create var o; // create an object with null as prototype o = Object.create(null); o = {}; // is equivalent to: o = Object.create(Object.prototype); // Example where we create an object with a couple of sample properties. // (Note that the second parameter maps keys to *property descriptors*.)
o = Object.create(Object.prototype, { // foo is a regular "value property" foo: { writable:true, configurable:true, value: "hello" }, // bar is a getter-and-setter (accessor) property bar: { configurable: false, get: function() { return 10 }, set: function(value) { console.log("Setting `o.bar` to", value) } }}) function Constructor(){} o = new Constructor(); // is equivalent to: o = Object.create(Constructor.prototype); // Of course, if there is actual initialization code in the Constructor function, the Object.create cannot reflect it // create a new object whose prototype is a new, empty object // and adding a single property 'p', with value 42 o = Object.create({}, { p: { value: 42 } }) // by default properties ARE NOT writable, enumerable or configurable: o.p = 24 o.p //42 o.q = 12 for (var prop in o) { console.log(prop) } //"q" delete o.p //false //to specify an ES3 property o2 = Object.create({}, { p: { value: 42, writable: true, enumerable: true, configurable: true } }); NOTE: This method is part of the ECMAScript 5 standard.
obj : Object The object on which to define or modify properties. props : Object An object whose own enumerable properties constitute descriptors for the properties to be defined or modified. Defines a new property directly on an object, or modifies an existing property on an object, and returns the object. This method allows precise addition to or modification of a property on an object. Normal property addition through assignment creates properties which show up during property enumeration (for...in loop or Object#keys method), whose values may be changed and which may be deleted. This method allows these extra details to be changed from their defaults. Both data and accessor descriptors share the following optional keys: configurable True if and only if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. Defaults to false. enumerable True if and only if this property shows up during enumeration of the properties on the corresponding object. Defaults to false. A data descriptor is an object with the following optional keys: value The value associated with the property. Can be any valid JavaScript value (number, object, function, etc.). Defaults to undefined. writable True if and only if the value associated with the property may be changed with an assignment operator. Defaults to false. An accessor descriptor is an object with the following optional keys: get A function which serves as a getter for the property, or undefined if there is no getter. The function's return value will be used as the value of the property. Defaults to undefined. set A function which serves as a setter for the property, or undefined if there is no setter. The function will receive as its only argument the new value being assigned to the property. Defaults to undefined. Bear in mind that these options are not necessarily own properties so, if inherited, they will be considered too. In order to ensure these defaults are preserved you might freeze the Object.prototype upfront, specify all options explicitly, or point to null as proto property.
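The data and accessor descriptor forms described above can be exercised with a short sketch (the object and property names here are illustrative only):

```javascript
var obj = {};

// Data descriptor: a stored value; writable, enumerable and configurable
// all default to false when omitted.
Object.defineProperty(obj, "answer", {
  value: 42
});

// Accessor descriptor: a getter/setter pair instead of a stored value.
var internal = 0;
Object.defineProperty(obj, "counter", {
  get: function () { return internal; },
  set: function (value) { internal = value; },
  enumerable: true
});

obj.counter = 5;
// obj.answer is 42; obj.counter reflects the setter; only "counter"
// shows up in Object.keys(obj) because "answer" defaulted to non-enumerable.
```

Note how the enumerable default of false keeps "answer" out of enumeration, illustrating the difference from ordinary assignment mentioned above.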
NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object on which to define the property. prop : String The name of the property to be defined or modified. descriptor : Object The descriptor for the property being defined or modified. Freezes an object: nothing can be added to or removed from the properties set of a frozen object. Any attempt to do so will fail, either silently or by throwing a TypeError exception (most commonly, but not exclusively, when in strict mode). Values cannot be changed for data properties. Accessor properties (getters and setters) work the same (and still give the illusion that you are changing the value). Note that values that are objects can still be modified, unless they are also frozen. NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object to freeze. Returns a property descriptor for an own property (that is, one directly present on an object, not present by dint of being along an object's prototype chain) of a given object. This method permits examination of the precise description of a property. A property in JavaScript consists of a string-valued name and a property descriptor. Further information about property descriptor types and their attributes can be found in Object#defineProperty. NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object in which to look for the property. prop : String The name of the property whose description is to be retrieved. A property descriptor is a record with some of the following attributes: value The value associated with the property (data descriptors only). writable True if and only if the value associated with the property may be changed (data descriptors only). get A function which serves as a getter for the property, or undefined if there is no getter (accessor descriptors only). set A function which serves as a setter for the property, or undefined if there is no setter (accessor descriptors only).
configurable true if and only if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. enumerable true if and only if this property shows up during enumeration of the properties on the corresponding object. Value of the property descriptor. Returns an array of all properties (enumerable or not) found directly upon a given object. Returns an array whose elements are strings corresponding to the enumerable and non-enumerable properties found directly upon obj. The ordering of the enumerable properties in the array is consistent with the ordering exposed by a for...in loop (or by Object#keys) over the properties of the object. The ordering of the non-enumerable properties in the array, and among the enumerable properties, is not defined. var arr = ["a", "b", "c"]; print(Object.getOwnPropertyNames(arr).sort()); // prints "0,1,2,length" // Array-like object var obj = { 0: "a", 1: "b", 2: "c"}; print(Object.getOwnPropertyNames(obj).sort()); // prints "0,1,2" // Printing property names and values using Array.forEach Object.getOwnPropertyNames(obj).forEach(function(val, idx, array) { print(val + " -> " + obj[val]); }); // prints // 0 -> a // 1 -> b // 2 -> c // non-enumerable property var my_obj = Object.create({}, { getFoo: { value: function() { return this.foo; }, enumerable: false } }); my_obj.foo = 1; print(Object.getOwnPropertyNames(my_obj).sort()); // prints "foo, getFoo" If you want only the enumerable properties, see Object#keys or use a for...in loop (although note that this will return enumerable properties not only found directly upon that object but also along the prototype chain for the object unless the latter is filtered with hasOwnProperty).
Items on the prototype chain are not listed: function ParentClass () { } ParentClass.prototype.inheritedMethod = function () { }; function ChildClass () { this.prop = 5; this.method = function () {}; } ChildClass.prototype = new ParentClass; ChildClass.prototype.prototypeMethod = function () { }; alert( Object.getOwnPropertyNames( new ChildClass() ) ); // ["prop", "method"] NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object whose enumerable and non-enumerable own properties are to be returned. Array of property names. Returns the prototype (i.e. the internal [[Prototype]]) of the specified object. NOTE: This method is part of the ECMAScript 5 standard. object : Object The object whose prototype is to be returned. Throws a TypeError exception if this parameter isn't an Object. the prototype Determines if an object is extensible (whether it can have new properties added to it). Objects are extensible by default: they can have new properties added to them, and can be modified. An object can be marked as non-extensible using Object#preventExtensions, Object#seal, or Object#freeze. // New objects are extensible. var empty = {}; assert(Object.isExtensible(empty) === true); // ...but that can be changed. Object.preventExtensions(empty); assert(Object.isExtensible(empty) === false); // Sealed objects are by definition non-extensible. var sealed = Object.seal({}); assert(Object.isExtensible(sealed) === false); // Frozen objects are also by definition non-extensible. var frozen = Object.freeze({}); assert(Object.isExtensible(frozen) === false); NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object which should be checked. True when object is extensible. Determines if an object is frozen. An object is frozen if and only if it is not extensible, all its properties are non-configurable, and all its data properties (that is, properties which are not accessor properties with getter or setter components) are non-writable.
// A new object is extensible, so it is not frozen. assert(Object.isFrozen({}) === false); // An empty object which is not extensible is vacuously frozen. var vacuouslyFrozen = Object.preventExtensions({}); assert(Object.isFrozen(vacuouslyFrozen) === true); // A new object with one property is also extensible, ergo not frozen. var oneProp = { p: 42 }; assert(Object.isFrozen(oneProp) === false); // Preventing extensions to the object still doesn't make it frozen, // because the property is still configurable (and writable). Object.preventExtensions(oneProp); assert(Object.isFrozen(oneProp) === false); // ...but then deleting that property makes the object vacuously frozen. delete oneProp.p; assert(Object.isFrozen(oneProp) === true); // A non-extensible object with a non-writable but still configurable property is not frozen. var nonWritable = { e: "plep" }; Object.preventExtensions(nonWritable); Object.defineProperty(nonWritable, "e", { writable: false }); // make non-writable assert(Object.isFrozen(nonWritable) === false); // Changing that property to non-configurable then makes the object frozen. Object.defineProperty(nonWritable, "e", { configurable: false }); // make non-configurable assert(Object.isFrozen(nonWritable) === true); // A non-extensible object with a non-configurable but still writable property also isn't frozen. var nonConfigurable = { release: "the kraken!" }; Object.preventExtensions(nonConfigurable); Object.defineProperty(nonConfigurable, "release", { configurable: false }); assert(Object.isFrozen(nonConfigurable) === false); // Changing that property to non-writable then makes the object frozen. Object.defineProperty(nonConfigurable, "release", { writable: false }); assert(Object.isFrozen(nonConfigurable) === true); // A non-extensible object with a configurable accessor property isn't frozen. 
var accessor = { get food() { return "yum"; } }; Object.preventExtensions(accessor); assert(Object.isFrozen(accessor) === false); // ...but make that property non-configurable and it becomes frozen. Object.defineProperty(accessor, "food", { configurable: false }); assert(Object.isFrozen(accessor) === true); // But the easiest way for an object to be frozen is if Object.freeze has been called on it. var frozen = { 1: 81 }; assert(Object.isFrozen(frozen) === false); Object.freeze(frozen); assert(Object.isFrozen(frozen) === true); // By definition, a frozen object is non-extensible. assert(Object.isExtensible(frozen) === false); // Also by definition, a frozen object is sealed. assert(Object.isSealed(frozen) === true); NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object which should be checked. True if the object is frozen, otherwise false. Determines if an object is sealed. An object is sealed if it is non-extensible and if all its properties are non-configurable and therefore not removable (but not necessarily non-writable). // Objects aren't sealed by default. var empty = {}; assert(Object.isSealed(empty) === false); // If you make an empty object non-extensible, it is vacuously sealed. Object.preventExtensions(empty); assert(Object.isSealed(empty) === true); // The same is not true of a non-empty object, unless its properties are all non-configurable. var hasProp = { fee: "fie foe fum" }; Object.preventExtensions(hasProp); assert(Object.isSealed(hasProp) === false); // But make them all non-configurable and the object becomes sealed. Object.defineProperty(hasProp, "fee", { configurable: false }); assert(Object.isSealed(hasProp) === true); // The easiest way to seal an object, of course, is Object.seal. var sealed = {}; Object.seal(sealed); assert(Object.isSealed(sealed) === true); // A sealed object is, by definition, non-extensible. 
assert(Object.isExtensible(sealed) === false); // A sealed object might be frozen, but it doesn't have to be. assert(Object.isFrozen(sealed) === true); // all properties also non-writable var s2 = Object.seal({ p: 3 }); assert(Object.isFrozen(s2) === false); // "p" is still writable var s3 = Object.seal({ get p() { return 0; } }); assert(Object.isFrozen(s3) === true); // only configurability matters for accessor properties NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object which should be checked. True if the object is sealed, otherwise false. Returns an array of a given object's own enumerable properties, in the same order as that provided by a for-in loop (the difference being that a for-in loop enumerates properties in the prototype chain as well). Returns an array whose elements are strings corresponding to the enumerable properties found directly upon object. The ordering of the properties is the same as that given by looping over the properties of the object manually. var arr = ["a", "b", "c"]; alert(Object.keys(arr)); // will alert "0,1,2" // array-like object var obj = { 0 : "a", 1 : "b", 2 : "c"}; alert(Object.keys(obj)); // will alert "0,1,2" // getFoo is a property which isn't enumerable var my_obj = Object.create({}, { getFoo : { value : function () { return this.foo } } }); my_obj.foo = 1; alert(Object.keys(my_obj)); // will alert only foo If you want all properties, even the non-enumerable ones, see Object#getOwnPropertyNames. NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object whose enumerable own properties are to be returned. Array of property names. Prevents new properties from ever being added to an object (i.e. prevents future extensions to the object). An object is extensible if new properties can be added to it. It only prevents addition of own properties. Properties can still be added to the object prototype.
While there is a way to turn an extensible object into a non-extensible one, ECMAScript 5 provides no way to do the opposite. NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object which should be made non-extensible. Seals an object, preventing new properties from being added to it and marking all existing properties as non-configurable. Values of present properties can still be changed as long as they are writable. NOTE: This method is part of the ECMAScript 5 standard. obj : Object The object which should be sealed.
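The three locking levels covered in this reference (preventExtensions, seal, and freeze) can be contrasted side by side; a minimal sketch:

```javascript
var extensible = Object.preventExtensions({ a: 1 });
var sealed = Object.seal({ a: 1 });
var frozen = Object.freeze({ a: 1 });

// preventExtensions only blocks new properties; "a" stays configurable,
// so the object is not yet sealed.
var preventedIsSealed = Object.isSealed(extensible); // false

// seal additionally makes every property non-configurable,
// but values may still change as long as they are writable.
sealed.a = 2;
var sealedIsFrozen = Object.isFrozen(sealed); // false, "a" is writable

// freeze additionally makes data properties non-writable;
// a frozen object is by definition sealed and non-extensible.
var frozenIsSealed = Object.isSealed(frozen);         // true
var frozenIsExtensible = Object.isExtensible(frozen); // false
```

Each level strictly includes the previous one: freeze implies seal, and seal implies preventExtensions.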
https://docs.sencha.com/extjs/6.5.1/classic/Object.html
2021-06-12T21:01:14
CC-MAIN-2021-25
1623487586390.4
[]
docs.sencha.com
PasswordBox Class Definition Represents a control designed for entering and handling passwords. public ref class PasswordBox sealed : System::Windows::Controls::Control [System.Windows.TemplatePart(Name="PART_ContentHost", Type=typeof(System.Windows.FrameworkElement))] public sealed class PasswordBox : System.Windows.Controls.Control type PasswordBox = class inherit Control Public NotInheritable Class PasswordBox Inherits Control - Inheritance - PasswordBox - Attributes - Examples The following example shows how to specify the handler method for the PasswordChanged event. <PasswordBox Name="pwdBox" MaxLength="64" PasswordChar="#" PasswordChanged="PasswordChangedHandler" /> The following example shows the corresponding event handler. In this case, the event handler simply increments a counter. private int pwChanges = 0; void PasswordChangedHandler(Object sender, RoutedEventArgs args) { // Increment a counter each time the event fires. ++pwChanges; } Private pwChanges As Integer = 0 Private Sub PasswordChangedHandler(ByVal sender As Object, ByVal args As RoutedEventArgs) ' Increment a counter each time the event fires. pwChanges += 1 End Sub Remarks Important PasswordBox has built-in handling for the bubbling MouseUp and MouseDown events. Consequently, custom event handlers that listen for MouseUp or MouseDown events from a PasswordBox will never be called; they are preempted by the PasswordBox's native handling of these events. Be aware that this has notable effects on the control's UI.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.controls.passwordbox?view=netframework-4.7.2
2019-07-16T03:01:58
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
2. The requirements are strictly from a standpoint of what the VNF or PNF developer needs to know to operate and be compliant with ONAP. - Requirements that are not applicable to VNFs or PNFs are not included. Glossary¶ - ACL - Access Control List - ACME - Automated Certificate Management Environment - API - Application Programming Interface - BGP - Border Gateway Protocol - CA - Certificate Authority - CCL - Commerce Control List - CLLI - Common Language Location Identification - CMOS - Complementary metal-oxide-semiconductor - CMP - Certificate Management Protocol - CRL - Certificate Revocation List - CSAR - Cloud Service Archive - DBaaS - Database as a Service - DDOS - Distributed Denial-of-Service - DNS - Domain Name System - DPDK - Data Plane Development Kit - DPI - Deep Packet Inspection - DPM - Data Position Measurement - DSS - Digital Signature Services - ECCN - Export Control Classification Number - EMS - Element Management Systems - EVC - Ethernet Virtual Connection - FIPS - Federal Information Processing Standards - FQDN - Fully Qualified Domain Name - FTPES - File Transfer Protocol Secure - GPB - Google Protocol Buffers - GUI - Graphical User Interface - GVNFM - Generic Virtualized Network Function Manager - HSM - Hardware Security Module - IDAM - Identity and Access Management - IPSec - IP Security - JMS - Java Message Service - JSON - JavaScript Object Notation - KPI - Key Performance Indicator - LCM - Life Cycle Management - LCP - Link Control Protocol - LDAP - Lightweight Directory Access Protocol - LTE - Long-Term Evolution - MD5 - Message-Digest Algorithm - MIME - Multipurpose Internet Mail Extensions - MTTI - Mean Time to Identify - MTTR - Mean Time to Repair - NCSP - Network Cloud Service Providers - NFS - Network File System - NFV - Network Functions Virtualization - NIC - Network Interface Controller - NIST - National Institute of Standards and Technology - NTP - Network Time Protocol - OA&M - Operations, administration and management - OAuth - Open Authorization - OID
- Object Identifier - OPNFV - Open Platform for Network Functions Virtualization - OWASP - Open Web Application Security Project - PCEF - Policy and Charging Enforcement Function - PCRF - Policy and Charging Rules Function - PKI - Public Key Infrastructure - PM - Performance Monitoring - PNF - Physical Network Function - PnP - Plug and Play - QoS - Quality of Service - RAN - Radio Access Network - RBAC - Role-Based Access Control - RTPM - Real Time Performance Monitoring - RFC - Remote Function Call - RFP - Request For Proposal - RPC - Remote Procedure Call - SAML - Security Assertion Markup Language - SCEP - Simple Certificate Enrollment Protocol - SDN - Software-Defined Networking - SFTP - SSH File Transfer Protocol - SHA - Secure Hash Algorithm - SLA - Service Level Agreement - SNMP - Simple Network Management Protocol - SP - Service Provider - SPI - Sensitive Personal Information - SR-IOV - Single-Root Input/Output Virtualization - SSL - Secure Sockets Layer - SSH - Secure Shell - TACACS - Terminal Access Controller Access Control System - TCA - Threshold Crossing Alert - TLS - Transport Layer Security - TOSCA - Topology and Orchestration Specification for Cloud Applications - TPM - Trusted Platform Module - UUID - Universally Unique Identifier - VDU - Virtualization Deployment Unit - VES - VNF Event Streaming - VLAN - Virtual LAN - VM - Virtual Machine - VNF - Virtual Network Function - VNFC - Virtual Network Function Component - VNF-D - Virtual Network Function Descriptor - VPN - Virtual Private Network - XML - eXtensible Markup Language - YAML - YAML Ain’t Markup Language - YANG - Yet Another Next Generation - NFVI - Network Function Virtualization Infrastructure - VNFC - Virtualized Network Function Components - MANO - Management And Network Orchestration - VNFM - Virtualized Network Function Manager - BUM - Broadcast, Unknown-Unicast and Multicast traffic 2.3.
Submitting Feedback¶ Please refer to the VNF Requirements - How to Contribute guide for instructions on how to create issues or contribute changes to the VNF Requirements project.
https://docs.onap.org/en/dublin/submodules/vnfrqts/requirements.git/docs/Chapter2/index.html
2019-07-16T01:59:14
CC-MAIN-2019-30
1563195524475.48
[]
docs.onap.org
Uninstall Whenever a Jiva PVC is deleted, a job will be created and its status is seen as completed. Installation Installation failed because of insufficient user rights OpenEBS installation can fail in some cloud platforms. On your Nodes, you need to check if the service iscsid.service is running. If it is not running, you have to enable and start the service. You can refer to the prerequisites section and choose your platform to get the steps for enabling it. Make sure the following prerequisites are done. - Verify the iSCSI initiator is installed on nodes and services are running. - Add extra_binds under the kubelet service in the cluster YAML. More details are mentioned.
https://docs.openebs.io/v082/docs/next/troubleshooting.html
2019-07-16T03:15:05
CC-MAIN-2019-30
1563195524475.48
[]
docs.openebs.io
Comparing datasets with LMM¶ Consider the following datasets. A treatment is applied three times at different time points. For each treatment, a control measurement is performed. For each measurement day, a reservoir measurement is performed additionally for treatment and control. - Day1: - one sample, called “Treatment I”, measured at flow rates of 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir - one control, called “Control I”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir - Day2: - two samples, called “Treatment II” and “Treatment III”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir - two controls, called “Control II” and “Control III”, measured at flow rates 0.04, 0.08 and 0.12 µl/s and one measurement in the reservoir Linear mixed models (LMM) allow to assign a significance to a treatment (fixed effect) while considering the systematic bias in-between the measurement repetitions (random effect). We will assume that the datasets are loaded into Shape-Out and that invalid events have been filtered (see e.g. Excluding invalid events). The Analyze configuration tab enables the comparison of an experiment (control and treatment) and repetitions of the experiment using LMM [HKP+17], [HMMO18]. Basic analysis: Assign which measurement is a control and which is a treatment by choosing the option in the dropdown lists under Interpretation. Group the pairs of control and treatment done in one experiment, by choosing an index number, called Repetition. Here, Treatment I and Control I are one experiment – called Repetition 1, Treatment II and Control II are a repetition of the experiment – called Repetition 2, Treatment III and Control III are another repetition of the experiment – called Repetition 3. Press Apply to start the calculations. A text file will open to show the results. 
The most important numbers are: Fixed effects: - (Intercept)-Estimate The mean of the parameter chosen for all controls. - treatment-Estimate The effect size of the parameter chosen between the mean of all controls and the mean of all treatments. Full coefficient table: Shows the effect size of the parameter chosen between control and treatment for every single experiment. Model-Pr(>Chisq): Shows the p-value and the significance of the test. Differential feature analysis: The LMM analysis is only applicable if the respective measurements show little difference in the reservoir for the feature chosen. For instance, if a treatment results in non-spherical cells in the reservoir, then the deformation recorded for the treatment might be biased towards higher values. In this case, the information of the reservoir measurement has to be included by means of the differential deformation [HMMO18]. This can be achieved by selecting the respective reservoir measurements in the dropdown menu.
https://shapeout.readthedocs.io/en/develop/sec_qg_mixed_effects.html
2019-07-16T03:03:34
CC-MAIN-2019-30
1563195524475.48
[]
shapeout.readthedocs.io
You can configure Marathon’s actions on unreachable tasks. The unreachableStrategy parameter of your app or pod definition allows you to configure this in two ways: by defining when a new task instance should be launched, and by defining when a task instance should be expunged. "unreachableStrategy": { "inactiveAfterSeconds": "<integer>", "expungeAfterSeconds": "<integer>" } Configuration Options inactiveAfterSeconds: If a task instance is unreachable for longer than this value, it is marked inactive and a new instance will launch. At this point, the unreachable task is not yet expunged. The default value is 0 seconds. expungeAfterSeconds: If an instance is unreachable for longer than this value, it will be expunged. An expunged task will be killed if it ever comes back. Instances are usually marked as unreachable before they are expunged, but that is not required. This value is required to be greater than inactiveAfterSeconds unless both are zero. If the instance has any persistent volumes associated with it, then they will be destroyed and associated data will be deleted. The default value is 0 seconds. You can use inactiveAfterSeconds and expungeAfterSeconds in conjunction with one another. For example, if you configure inactiveAfterSeconds = 60 and expungeAfterSeconds = 120, a task instance will be expunged if it has been unreachable for more than 120 seconds and a second instance will be started if it has been unreachable for more than 60 seconds. Kill Selection You can also define a kill selection to declare whether Marathon kills the youngest or oldest tasks first when rescaling or otherwise killing multiple tasks. The default value for this parameter is YoungestFirst. You can also specify OldestFirst. Add the killSelection parameter to your app definition, or to the PodSchedulingPolicy parameter of your pod definition.
{ "killSelection": "YoungestFirst" } Persistent Volumes The default unreachableStrategy for apps with persistent volumes will create new instances with new volumes and delete existing volumes (if possible) after an instance has been unreachable for longer than 7 days and has been expunged by Marathon.
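The constraint stated above, that expungeAfterSeconds must be greater than inactiveAfterSeconds unless both are zero, can be sketched as a small validation helper (this helper is illustrative, not part of Marathon):

```javascript
// Checks an unreachableStrategy object against the documented constraint:
// expungeAfterSeconds must be greater than inactiveAfterSeconds,
// unless both are zero (the default).
function isValidUnreachableStrategy(strategy) {
  var inactive = strategy.inactiveAfterSeconds;
  var expunge = strategy.expungeAfterSeconds;
  if (inactive === 0 && expunge === 0) {
    return true;
  }
  return expunge > inactive;
}

// Matches the worked example in the text: replace after 60s, expunge after 120s.
var ok = isValidUnreachableStrategy({ inactiveAfterSeconds: 60, expungeAfterSeconds: 120 });
// Reversed values violate the constraint.
var bad = isValidUnreachableStrategy({ inactiveAfterSeconds: 120, expungeAfterSeconds: 60 });
```

Running such a check before submitting an app definition catches the misconfiguration client-side instead of at deployment time.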
http://docs-staging.mesosphere.com/1.11/deploying-services/task-handling/configure-task-handling/
2019-07-16T02:07:22
CC-MAIN-2019-30
1563195524475.48
[]
docs-staging.mesosphere.com
Third-party tools If you are already using a third-party monitoring solution and wish to monitor your nio deployment using that same tool, you have the ability to have nio report metrics to be visualized in your existing dashboards. nio has blocks that integrate with Google Stackdriver and Datadog. To access guides on how to monitor nio with these third-party tools, visit the following sections:
http://docs.n.io/monitoring/third-party/
2019-07-16T02:14:24
CC-MAIN-2019-30
1563195524475.48
[]
docs.n.io
Getting started with Read the Docs¶ Learn more about configuring your automated documentation builds and some of the core features of Read the Docs. - Overview of core features: Read the Docs features - Configure your documentation: Configuration reference | Webhooks | Badges | Custom domains - Connecting with GitHub, BitBucket, or GitLab: Connecting your account - Read the Docs build and versioning process: Build process | Handling multiple docs versions - Troubleshooting: Support | Frequently asked questions Advanced features of Read the Docs¶ Read the Docs offers many advanced features and options. Learn more about these integrations and how you can get the most out of your documentation and Read the Docs. - Advanced project configuration: Subprojects | Single version docs | Privacy levels - Multi-language documentation: Translations and localization - Redirects: User defined redirects | Automatic redirects - Topic specific guides: How-to guides - Extending Read the Docs: REST API The Read the Docs project and organization¶ Learn about Read the Docs, the project and the company, and find out how you can get involved and contribute to the development and success of Read the Docs and the larger software documentation ecosystem. - Getting involved with Read the Docs: Contributing | Roadmap | Google Summer of Code | Code of conduct - Policies & Process: Security | Privacy policy | DMCA takedown policy | Policy for abandoned projects | Release notes & changelog - The people and philosophy behind Read the Docs: Team | Open source philosophy | Our story - Financial and material support: Advertising | Sponsors - Read the Docs for business: Support and additional features - Running your own version of Read the Docs: Private installations
https://docs.readthedocs.io/en/latest/?badge=latest
2019-07-16T02:07:37
CC-MAIN-2019-30
1563195524475.48
[]
docs.readthedocs.io
App settings reference for Azure Functions App settings in a function app contain global configuration options that affect all functions for that function app. When you run locally, these settings are accessed as local environment variables. This article lists the app settings that are available in function apps. There are several ways that you can add, update, and delete function app settings: There are other global configuration options in the host.json file and in the local.settings.json file. APPINSIGHTS_INSTRUMENTATIONKEY The Application Insights instrumentation key if you're using Application Insights. See Monitor Azure Functions. AZURE_FUNCTIONS_ENVIRONMENT In version 2.x of the Functions runtime, configures app behavior based on the runtime environment. This value is read during initialization. You can set AZURE_FUNCTIONS_ENVIRONMENT to any value, but three values are supported: Development, Staging, and Production. When AZURE_FUNCTIONS_ENVIRONMENT isn't set, it defaults to Production. This setting should be used instead of ASPNETCORE_ENVIRONMENT to set the runtime environment. AzureWebJobsDashboard Optional storage account connection string for storing logs and displaying them in the Monitor tab in the portal. The storage account must be a general-purpose one that supports blobs, queues, and tables. See Storage account and Storage account requirements. Tip For performance and experience, it is recommended to use APPINSIGHTS_INSTRUMENTATIONKEY and App Insights for monitoring instead of AzureWebJobsDashboard. AzureWebJobsSecretStorageType Specifies the repository or provider to use for key storage. Currently, the supported repositories are blob storage ("Blob") and the local file system ("Files"). The default is blob in version 2 and file system in version 1. FUNCTION_APP_EDIT_MODE Dictates whether editing in the Azure portal is enabled. Valid values are "readwrite" and "readonly". FUNCTIONS_EXTENSION_VERSION The version of the Functions runtime to use in this function app.
A tilde with major version means use the latest version of that major version (for example, "~2"). When new versions for the same major version are available, they are automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, "2.0.12345"). Default is "~2". A value of ~1 pins your app to version 1.x of the runtime.

FUNCTIONS_WORKER_RUNTIME
The language worker runtime to load in the function app. This corresponds to the language being used in your application (for example, "dotnet"). For functions in multiple languages you will need to publish them to multiple apps, each with a corresponding worker runtime value. Valid values are dotnet (C#/F#), node (JavaScript/TypeScript), java (Java), powershell (PowerShell), and python (Python).

A preview feature, and only reliable if set to a value <= 5.

WEBSITE_NODE_DEFAULT_VERSION
Default is "8.11.1".

WEBSITE_RUN_FROM_PACKAGE
Enables your function app to run from a mounted package file. Valid values are either a URL that resolves to the location of a deployment package file, or 1. When set to 1, the package must be in the d:\home\data\SitePackages folder. When using zip deployment with this setting, the package is automatically uploaded to this location. In preview, this setting was named WEBSITE_RUN_FROM_ZIP. For more information, see Run your functions from a package file.

AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL
By default, Functions proxies use a shortcut to send API calls from proxies directly to functions in the same function app, rather than creating a new HTTP request. This setting allows you to disable that behavior.

AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
This setting controls whether %2F is decoded as slashes in route parameters when they are inserted into the backend URL.
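Since these settings surface locally through local.settings.json, here is a minimal sketch of that file showing several of the settings above as local environment variables. The storage connection string and instrumentation-key values are placeholders, and AzureWebJobsStorage is included only because local development commonly requires it:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "AZURE_FUNCTIONS_ENVIRONMENT": "Development",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "00000000-0000-0000-0000-000000000000"
  }
}
```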
Example

Here is an example proxies.json in a function app at the URL myfunction.com:

{
    "$schema": "",
    "proxies": {
        "root": {
            "matchCondition": {
                "route": "/{*all}"
            },
            "backendUri": "example.com/{all}"
        }
    }
}

Next steps
Learn how to update app settings
See global settings in the host.json file
See other app settings for App Service apps
https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings
2019-07-16T02:07:18
CC-MAIN-2019-30
1563195524475.48
[array(['media/functions-app-settings/function-app-landing-page.png', 'Function app landing page'], dtype=object) ]
docs.microsoft.com
Recipes for PDSF users: specific Shifter tasks

(See the NERSC generic Shifter tutorial for an explanation of basic concepts.)

Table of contents:

How to transfer the payload of a live STAR DB and convert it into a static directory, allowing a Shifter-based local DB to serve the DB tables to root4star jobs on Cori.

Example of interactive use of Shifter at PDSF and Cori; uses the CHOS-SL6.4 image and CVMFS, features an Atlas simulation job, and runs for any PDSF user.
https://docs.nersc.gov/pdsf/shifter/
2019-07-16T02:40:43
CC-MAIN-2019-30
1563195524475.48
[]
docs.nersc.gov
Architecture

Introduction

NBI stands for NorthBound Interface. It brings to ONAP a set of APIs that can be used by external systems, such as BSS, for example. These APIs are based on TMF APIs.

Global NBI architecture for Dublin release

The following illustration provides a global view of the NBI architecture, its integration with other ONAP components, and the API resources/operations provided.

Developer Guide

Technical information about NBI (dependencies, configuration, running & testing) can be found here: NBI_Developer_Guide

API flow illustration (with example messages) is described in this document: nbicallflow.pdf
https://docs.onap.org/en/dublin/submodules/externalapi/nbi.git/docs/architecture/architecture.html
2019-07-16T02:07:24
CC-MAIN-2019-30
1563195524475.48
[array(['../../../../../_images/onap_nbi_dublin.jpg', '../../../../../_images/onap_nbi_dublin.jpg'], dtype=object)]
docs.onap.org
Calculator widget

The calculator widget does simple calculations. The calculator widget does not have instance options. You can use it as an example of how to pass data between the client and server.

Figure 1. Calculator widget
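Since the widget's stated purpose is demonstrating client/server data passing, here is a plain-JavaScript sketch of that round trip. The function and property names are assumptions for illustration: in a real Service Portal widget the "server script" runs on the instance against a shared data object, and the client controller triggers it with c.server.update() rather than our simulated transport.

```javascript
// "Server script": receives input, returns computed data (simulated).
function serverScript(input) {
  const data = {};
  if (input && input.op === 'add') {
    data.result = input.a + input.b;
  }
  return data;
}

// "Client controller": sends input to the server and reads back data.
function clientController(server) {
  const c = { data: {} };
  c.calculate = function (a, b) {
    // In a real widget: c.data.a = a; c.data.b = b; c.server.update();
    c.data = server({ op: 'add', a: a, b: b });
  };
  return c;
}

const c = clientController(serverScript);
c.calculate(2, 3);
console.log(c.data.result); // 5
```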
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/build/service-portal/concept/calculator-widget.html
2019-07-16T02:43:15
CC-MAIN-2019-30
1563195524475.48
[]
docs.servicenow.com
Create a project time category

A time card admin or a project manager can create sub-categories to define specific activities in projects. Time card users can use these project sub-categories to report time for a specific activity in a project.

Before you begin
Role required: timecard_admin, it_project_manager

Procedure
Navigate to Time Sheets > Administration > Project Time Categories.
Click New.
Fill in the fields.

Table 1. Project time category form fields
Name - Unique name for the project time category.
Description - A description of the type of project activity.

Click Submit.
https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/task-table/task/create-project-time-category.html
2019-07-16T02:53:04
CC-MAIN-2019-30
1563195524475.48
[]
docs.servicenow.com
Given a Lookup object, it is possible to crack any direct method handle to recover a symbolic reference for the underlying method, constructor, or field. Cracking must be done via a Lookup object equivalent to that which created the target method handle, or which has enough access permissions to recreate an equivalent method handle. If the underlying method is caller sensitive, the direct method handle will have been "bound" to a particular caller class, the lookup class of the lookup object used to create it. Cracking this method handle with a different lookup class will fail even if the underlying method is public (like Class.forName). The requirement of lookup object matching provides a "fast fail" behavior for programs which may otherwise trust erroneous revelation of a method handle with symbolic information (or caller binding) from an unexpected scope. Use MethodHandles.reflectAs(java.lang.Class<T>, java.lang.invoke.MethodHandle) to override this limitation.

If the reference is to a constructor, the return type will be void. If it is to a non-static method, the method type will not mention the this parameter. If it is to a field and the requested access is to read the field, the method type will have no parameters and return the field type. If it is to a field and the requested access is to write the field, the method type will have one parameter of the field type and return void. Note that the original direct method handle may include a leading this parameter.

Returns the string representation of a MethodHandleInfo, given the four parts of its symbolic reference. This is defined to be of the form "RK C.N:MT", where RK is the reference kind string for kind, C is the name of defc, N is the name, and MT is the type. These four values may be obtained from the reference kind, declaring class, member name, and method type of a MethodHandleInfo object.
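The cracking workflow described above can be shown end to end. This sketch creates a direct method handle to the public method String.length() and cracks it with the same Lookup that created it; the class and method names here are chosen for illustration only:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandleInfo;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

class CrackDemo {
    // Cracks a direct method handle to String.length() back into its
    // symbolic-reference parts, using the same Lookup that created it.
    static MethodHandleInfo crackStringLength() throws ReflectiveOperationException {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle mh = lookup.findVirtual(
                String.class, "length", MethodType.methodType(int.class));
        return lookup.revealDirect(mh);
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        MethodHandleInfo info = crackStringLength();
        System.out.println(info.getDeclaringClass().getName()); // java.lang.String
        System.out.println(info.getName());                     // length
        System.out.println(info.getMethodType());               // ()int
        System.out.println(
                MethodHandleInfo.referenceKindToString(info.getReferenceKind())); // invokeVirtual
    }
}
```

Note that the recovered method type is ()int, with no mention of the leading this parameter, exactly as the getMethodType contract describes.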
http://docs.sumile.cn/java/api/java/lang/invoke/MethodHandleInfo.html
2019-07-16T02:48:24
CC-MAIN-2019-30
1563195524475.48
[]
docs.sumile.cn
Module - 8 Units

Get started with Azure by creating and configuring your first virtual machine in the cloud. Start learning for free.

Up your game with a module or learning path tailored to today's developer and technology masterminds and designed to prepare you for industry-recognized Microsoft certifications.

Explore more advanced Azure topics with free online courses, delivered in partnership with Pluralsight and LinkedIn Learning.
https://docs.microsoft.com/en-us/learn/?l=QqkzveF1_1804984382
2019-07-16T03:14:47
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
Temporary Projects

By working with temporary projects, you can create and experiment with a project without specifying a disk location. When you create a temporary project, you specify only a project type, a template, and a name in the New Project dialog box. You can save or discard a temporary project at any time while you are working with it.

Working with Temporary Projects

All Visual Basic and Visual C# projects can be created as temporary projects. If you want to work with temporary projects, open the Tools menu, choose Options, choose Projects and Solutions, and then, in the General dialog box, clear the Save new projects when created check box.

A solution can contain only one temporary project at a time. Therefore, if you want to add a temporary project to a solution that already contains one, you will be prompted to save the existing temporary project first. If you delete a file from a temporary project, the file is deleted permanently and cannot be saved if you later save the project.

See Also
Tasks
How to: Enable Temporary Projects
How to: Save Temporary Projects
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/6yx39k28%28v%3Dvs.100%29
2019-07-16T02:50:49
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
MF_MT_FRAME_RATE_RANGE_MIN attribute

The minimum frame rate that is supported by a video capture device, in frames per second.

Data type
UINT64

Get/set
To get this attribute, call MFGetAttributeRatio. To set this attribute, call MFSetAttributeRatio.

Remarks
The frame rate is expressed as a ratio. The upper 32 bits of the attribute value contain the numerator, and the lower 32 bits contain the denominator. For example, if the frame rate is 30 frames per second (fps), the ratio is 30/1. If the capture device reports a minimum frame rate, the media source sets this attribute on the media type. The maximum frame rate is given in the MF_MT_FRAME_RATE_RANGE_MAX attribute. The device is not guaranteed to support every increment within this range. To set the device's frame rate, first modify the value of the MF_MT_FRAME_RATE attribute on the media type. Then set the media type on the media source. The GUID constant for this attribute is exported from mfuuid.lib.

Requirements

See also
Alphabetical List of Media Foundation Attributes
How to Set the Video Capture Frame Rate
MF_MT_FRAME_RATE_RANGE_MAX
https://docs.microsoft.com/en-us/windows/win32/medfound/mf-mt-frame-rate-range-min
2019-07-16T02:56:39
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
Transform Meta Info

Description
This transform queries two historical DNS providers to determine if this IP address is also used by other domains as an NS record. This type of reverse NS lookup cannot be performed using standard DNS queries and is very useful for finding other domains associated with the IP number. In most cases one would work from the actual DNS name of the NS record, but if you only have the IP address available there is no standard way of knowing if the IP address is an NS for a domain or not. This transform gives you the ability to do this. Unlike the reverse MX lookup, the reverse NS lookup does not always imply that the domains found have a close relationship with the IP address, as many companies and organizations outsource their DNS service.

Typical Use Case
Domain --> DNS Server --> IP Address ==> Related Domains ==> To Domain [Sharing this NS] --> Related

Transform Example
Starting with the domain "google.com" we can get their nameservers. We can then resolve the nameservers to IP addresses. Using "To Domain [Sharing this NS]" we get other domains sharing the same NS record. This returns a long list of domains, which can be edited down to a list of domains owned/operated by Google and its services.
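The "shared NS" idea behind this transform is essentially an inversion of domain-to-NS-IP records. As a sketch, given an export of (domain, NS IP) pairs from some historical DNS source, indexing by IP recovers the co-hosted domains; the records below are hypothetical sample data, not output of the actual transform:

```python
from collections import defaultdict

# Hypothetical historical-DNS export: (domain, IP of one of its NS records)
records = [
    ("google.com", "216.239.32.10"),
    ("youtube.com", "216.239.32.10"),
    ("example.org", "199.43.135.53"),
]

def domains_sharing_ns(records):
    """Index domains by the IP address of their NS record."""
    by_ip = defaultdict(set)
    for domain, ns_ip in records:
        by_ip[ns_ip].add(domain)
    return by_ip

index = domains_sharing_ns(records)
print(sorted(index["216.239.32.10"]))  # ['google.com', 'youtube.com']
```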
https://docs.maltego.com/support/solutions/articles/15000019039-to-domain-sharing-this-ns-
2019-07-16T02:18:59
CC-MAIN-2019-30
1563195524475.48
[array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/1010005501092/original/CR5o5M5nXsKsnpHvDljyLJZUBsxc2FId4A.png?1545044232', None], dtype=object) ]
docs.maltego.com
Object permissions

Granting or revoking privileges on database resources.

Object permissions may be assigned using the authorization mechanism for the following objects:
- keyspace
- table
- function
- aggregate
- roles
- MBeans

Authenticated roles with passwords stored in the database are authorized selective access. The permissions are stored in tables.

Permission is configurable for the CQL commands CREATE, ALTER, DROP, MODIFY, and DESCRIBE, which are used to interact with the database. The EXECUTE command may be used to grant permission to a role for the INSERT and UPDATE commands. In addition, the AUTHORIZE command may be used to grant permission for a role to GRANT, REVOKE, or AUTHORIZE another role's permissions.

Read access to these system tables is implicitly given to every authenticated user or role because the tables are used by most tools:
- system_schema.keyspaces
- system_schema.columns
- system_schema.tables
- system.local
- system.peers
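As a sketch, object permissions like these are assigned with CQL GRANT statements. The keyspace, table, and role names below are hypothetical:

```sql
-- Hypothetical keyspace and role names.
GRANT MODIFY ON KEYSPACE cycling TO coach;         -- write access to a keyspace
GRANT SELECT ON TABLE cycling.events TO analyst;   -- read-only access to one table
GRANT AUTHORIZE ON KEYSPACE cycling TO team_lead;  -- lets team_lead GRANT/REVOKE on cycling
```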
https://docs.datastax.com/en/ddacsecurity/doc/ddacsecurity/secureObjectPerms.html
2019-07-16T02:30:46
CC-MAIN-2019-30
1563195524475.48
[]
docs.datastax.com
System Variables

Applies to: SQL Server (including on Linux), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse

System Variables for Packages
The following table describes the system variables that Integration Services provides for packages.

System Variables for Containers
The following table describes the system variables that Integration Services provides for the For Loop, Foreach Loop, and Sequence containers.

System Variables for Tasks
The following table describes the system variables that Integration Services provides for tasks.

System Variables for Event Handlers
The following table describes the system variables that Integration Services provides for event handlers. Not all variables are available to all event handlers.

System Variables in Parameter Bindings

Related Tasks
Map Query Parameters to Variables in an Execute SQL Task
https://docs.microsoft.com/en-us/sql/integration-services/system-variables?view=sql-server-2017
2019-07-16T02:45:54
CC-MAIN-2019-30
1563195524475.48
[]
docs.microsoft.com
High Level Steps for Using PDX Serialization

To use PDX serialization, you can configure and use GemFire's reflection-based autoserializer, or you can program the serialization of your objects by using the PDX interfaces and classes. Optionally, program your application code to deserialize individual fields out of PDX representations of your serialized objects. You may also need to persist your PDX metadata to disk for recovery on startup.

Procedure

Use one of these serialization options for each object type that you want to serialize using PDX serialization.

To ensure that your servers do not need to load the application classes, set the pdx read-serialized attribute to true. In gfsh, execute the following command before starting up your servers:

gfsh>configure pdx --read-serialized=true

By using gfsh, this configuration can be propagated across the cluster through the Cluster Configuration Service. Alternately, you would need to configure pdx read-serialized in each server's cache.xml file.

If you are storing any GemFire data on disk, then you must configure PDX serialization to use persistence. See Persisting PDX Metadata to Disk for more information.

(Optional) Wherever you run explicit application code to retrieve and manage your cached entries, you may want to manage your data objects without using full deserialization. To do this, see Programming Your Application to Use PdxInstances.

PDX and Multi-Site (WAN) Deployments

For multi-site (WAN) installations only: if you will use PDX serialization in any of your WAN-enabled regions, then for each distributed system you must choose a unique integer between 0 (zero) and 255 and set the distributed-system-id in every member's gemfire.properties file. See Configuring a Multi-site (WAN) System.
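For the cache.xml alternative mentioned in the procedure, a per-server fragment might look like the sketch below. The disk store name is an illustrative value; the persistent and disk-store-name attributes correspond to the PDX metadata persistence discussed above:

```xml
<!-- Sketch of a cache.xml fragment (disk store name is illustrative).
     read-serialized avoids loading application classes on the servers;
     persistent + disk-store-name persist PDX metadata for recovery. -->
<cache>
  <pdx read-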
http://gemfire.docs.pivotal.io/geode/developing/data_serialization/use_pdx_high_level_steps.html
2018-02-18T04:35:19
CC-MAIN-2018-09
1518891811655.65
[]
gemfire.docs.pivotal.io
Your email must be working to send messages to users. Please test all of the above settings by sending a test mail to your personal account. If the above settings are not correct, you may encounter warning messages.

HTML email header and footer sections will be taken from this template. To modify the email template, click the Template link, paste your Header and Footer information, and click the Save button.

When you send bulk emails, the mail server automatically puts them in a queue with other messages: delivery starts with the first ones and then continues with the others. To view these emails, click the Queue link. To start the delivery process, select messages and click the Process Now button. You can also test the queue by inserting a test email. To insert a test email, click the Insert Test Email button.

Subscribers are users who have subscribed to a newsletter on your website. With the help of this module, you can create and send HTML newsletters to subscribers. The functionality of this module is similar to the Send New Email module described above.
http://docs.goclixy.com/docs/application/settings/email-queue
2018-02-18T04:49:05
CC-MAIN-2018-09
1518891811655.65
[]
docs.goclixy.com
Acquia Lift data warehouse

Acquia Lift Omnichannel provides access to a cloud-based data warehouse that contains all of your Acquia Lift-hosted visitor data. You can directly connect to this data warehouse to use your own business intelligence tools to analyze your visitors and discover new insights. You can also extend your use of the visitor data warehouse by using the Acquia Lift APIs or the manual Profile Manager file upload feature to import external visitor information (such as from your CRM system) into Profile Manager, providing you with a single location for your aggregated visitor information.

The data warehouse updates either every 10,000 records or approximately every 15 minutes, depending upon which threshold is first met. However, it may take up to 24 hours for processed data to be available in all tables. After you purchase a license for Omnichannel, you will be provided with connection instructions to your visitor data warehouse.

Field layouts

The data warehouse structure uses the following field layouts for organizing and presenting its data:

- Person - This field layout describes visitors that have been uniquely identified with first-party cookies, their email addresses, or some other identifier.
- Touch - This field layout describes a series of contiguous events (such as content views) with a duration between events of no more than 30 minutes. Each touch has a Person record as its parent.
- Event - This field layout describes discrete visitor actions, such as a content view of an article or a click-through on a subscription offer. An event has a Touch record as its parent. Events also contain information about the specific content that a visitor is consuming.
- Segments - This field layout describes the segments that are associated with each event for a given person, and is made up of a segments table (for specific segment information) and the matching_segments table (links events with their associated segments).
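Once connected with a BI tool, the field layouts can be joined along the hierarchy they describe. The query below is only a sketch: the table and column names are assumptions inferred from the Person > Touch > Event hierarchy and the segments/matching_segments split described above, not the warehouse's documented schema:

```sql
-- Hypothetical table and column names.
SELECT p.person_id, s.segment_name, COUNT(*) AS event_count
FROM person p
JOIN touch t              ON t.person_id  = p.person_id
JOIN event e              ON e.touch_id   = t.touch_id
JOIN matching_segments ms ON ms.event_id  = e.event_id
JOIN segments s           ON s.segment_id = ms.segment_id
GROUP BY p.person_id, s.segment_name;
```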
Mapping taxonomy terms to Acquia Lift With Drupal, you can map taxonomy terms to the user-defined fields for Person, Touch, and Event types. These mapped terms will appear as meta tags when applicable for a page. Mapping Drupal-tagged content to Acquia Lift allows you to obtain a better understanding of the kinds of content that your visitors are consuming. To map your existing taxonomies into Acquia Lift, complete the following steps: - Sign in to your website as a user with the permission to administer the personalization settings. - In the admin menu, click Configuration. - In the Web Services section, click the Acquia Lift link. - In the Data collection settings section, click the Field Mappings left tab. - For the Content Section, Content Keywords, and Persona lists, click your website's taxonomy that most closely correlates with the item affected by that list. - Optional - If your website contains any additional taxonomy vocabularies that you want to map to Acquia Lift, complete the following steps: - In the left tab listing, click the tab with the vocabulary that you want to modify: - User Person Mappings - User Touch Mappings - User Event Mappings - Select the taxonomy that you want to map for each field. - Click Save configuration. For examples of content mapping for several industries, see Content tagging use case examples.
https://docs.acquia.com/lift/omni
2018-02-18T05:15:39
CC-MAIN-2018-09
1518891811655.65
[]
docs.acquia.com
Developers thrive on a continuous delivery process in which time is invested in creating, not waiting for tools and environments, or on delays due to bottlenecks in processing pull requests from multiple developers. Acquia Cloud CD provides developer and tester teams a continuous delivery cloud service to quickly create customer value by accelerating integration and delivery of production-ready features. Acquia Cloud CD is developer-ready, automating the process of building and testing versions of code and applications using pipeline orchestration and production-like environments. Acquia Cloud CD's continuous delivery is seamlessly integrated with the Acquia Cloud platform to speed adoption and ease operations.

Features

For developer teams, Acquia Cloud CD enables increased productivity, code quality, and performance for the development-to-staging process. To accelerate the delivery process, it provides access to a consistent set of continuous code building, testing, and integration tools. Quality is assured with continuous, self-service access to production-like environments. Automated process pipelines allow teams to execute with continuous delivery best practices.

- Seamless integration with Acquia Cloud - Provides a single environment for continuous integration and delivery orchestration with self-service production-like environments.
- Risk mitigation - Reduces the risk of meaningful differences between the development and test environments. By working in environments that closely match production, test results are predictable, mitigating the risk of bugs when completing deployments.
- Digital development velocity - Brings speed and velocity to developers while lowering risks in quality, stability, and intellectual property. The automation benefits of ready-to-use building and testing orchestration and self-service environments greatly reduce code and versioning inconsistencies.
- Automates release deliveries - Automates which tests are required against which target production environments ensuring quality deployments. - Delivers self-service, production-like environments - Eliminates the need to move code and testing between multiple cloud platforms saving time and reducing the risk of errors, by allowing you to provision and deprovision production-like environments in Acquia Cloud on your own. Why use Acquia Cloud CD? Code that is built frequently needs consistency to achieve continuous integration. However, it can be challenging for individual developers on a team to achieve consistency. In traditional deployment workflows, you need to do all of the work of assembling a website's deployment image in your code repository. For example, if a module is updated, you will have to copy the new version of that module into your repository. Also, for deployment workflows based on local builds, each developer needs to ensure that they have available all the correct prerequisites in their correct versions (such as PHP, Git, or Acquia Cloud repository access), and all developers must be able to consistently replicate this development environment locally. Using Acquia Cloud CD, your workspace can contain just your website's version-controlled build instructions and private resources (called pipelines). You can keep the canonical version of external resources outside of your repository, and pull in the latest or other required version as you build your website for deployment. Acquia Cloud CD also provides a consistent build and testing environment to all developers in a cloud-based container, which also has access to production-like environments you can create or remove as needed. About Acquia Cloud CD features To learn more about the available features of Acquia Cloud CD and how it can help you and your development team with continuous development and delivery, click one or more of the following links:
https://docs.acquia.com/node/27911
2018-02-18T05:15:29
CC-MAIN-2018-09
1518891811655.65
[]
docs.acquia.com
Example of highlighting extremely overlapping patterns.
https://docs.eyesopen.com/toolkits/python/depicttk/OEDepictConstants/OEHighlightOverlayStyle.html
2018-02-18T05:10:17
CC-MAIN-2018-09
1518891811655.65
[]
docs.eyesopen.com
Prepare your Source Server for Windows Server 2012 Essentials migration

You must install the latest updates and service packs on the Source Server prior to migration. If updates or service packs are missing, the Migration Preparation Tool will report the problem and ask you to install the necessary updates before proceeding.

To disable the VPN on the Source Server

Verify the settings for the DHCP Server role (Windows SBS 2008)

Note: Ensure that your Source Server is in a healthy state before you proceed by performing the procedures in the following section.

Check for the latest updates

From the Source Server, click Start, click All Programs, and then click Windows Update. Click Check online for updates from Microsoft Update. If updates are found, click Install updates.

Note: Alternately, you can update the Windows Server Update Services (WSUS) server to review, approve, and install available updates.

Check the network reports for critical errors

You can generate network reports from the Windows SBS 2008 Console.

To generate a network report

Run the Windows Small Business Server 2008 Best Practices Analyzer

You can run the Windows SBS 2008 BPA to analyze your Source Server. The Windows SBS 2008 BPA checks the following services and applications:

Exchange Server
Update Services
Network configuration
Windows SharePoint® Services
Microsoft SQL Server™

To use the Windows SBS 2008 BPA to analyze your Source Server

Download and install the Microsoft Windows Small Business Server 2008 Best Practices Analyzer from the Microsoft Download Center. After the download is complete, click Start, click All Programs, and then click SBS Best Practices Analyzer Tool.

Note: Check for updates before you scan the server.

Not all issues that the Windows SBS 2008 BPA finds affect migration, but you should solve as many of the issues as possible to ensure that the migration is successful.

Synchronize the Source Server time with an external time source

To synchronize the Source Server time with the NTP server
Important: During the Windows Server 2012 Essentials installation, you have an opportunity to verify the time on the Destination Server and change it, if necessary. Ensure that the time is within five minutes of the time that is set on the Source Server. When the installation finishes, the Destination Server synchronizes with the NTP server. All domain-joined computers, including the Source Server, synchronize to the Destination Server, which assumes the role of the primary domain controller (PDC) emulator master.

Run the Migration Preparation Tool on the Source Server

You cannot perform a migration mode install without first running the Migration Preparation Tool on your Source Server. This tool is designed to prepare your Source Server and domain to be migrated to Windows Server 2012 Essentials.

Important: Back up your Source Server before you run the Migration Preparation Tool. All changes that the Migration Preparation Tool makes to the schema are irreversible. If you experience issues during the migration, the only way to return the Source Server to the state it was in before you ran the tool is to restore the server from a system backup.

Note:
- You might receive a permission error if the Netlogon service is not started.
- You must log off and log back on the server for the changes to take effect.

Note:
- If the Migration Preparation Tool is already installed on the server, run the tool from the Start menu.
- To ensure that you are prepared for the best possible migration experience, it is recommended that you always choose to install the most recent update.

The wizard installs the Migration Preparation Tool on the Source Server. When the installation is complete, the Migration Preparation Tool runs automatically and installs the latest updates.

In the Migration Preparation Tool, select I have a backup and am ready to proceed, and then click Next.
Note: If you receive an error message relating to hotfix installation, follow the instructions in "Method 2: Rename the Catroot2 Folder" in the Microsoft Knowledge Base article "You cannot install some updates or programs".

Note: You must complete a successful run of the Migration Preparation Tool on the Source Server within two weeks of installing Windows Server 2012 Essentials on the Destination Server. Otherwise, installation of Windows Server 2012 Essentials on the Destination Server will be blocked. If this occurs, you must run the Migration Preparation Tool on the Source Server again.

Create a list of line-of-business applications

Note: If you used the Windows Small Business Server 2008 SDK to develop a customized system health or alert add-in, and you want to continue to use the add-in with Windows Server 2012 Essentials, you must also update the add-in and deploy it on the Destination Server.

You can fill in the following table as you collect LOB application information. A good place to start collecting information is to run Windows Control Panel, click Add/Remove Programs, and look in the Program Files (x86) and the Program Files folders.

Create a plan to migrate email that is hosted on Windows Small Business Server 2008

In Windows SBS 2008, email is provided through Exchange Server. However, Windows Server 2012 Essentials does not provide an inbox email service. If you are currently using Windows SBS 2008 to host your company's email, you will need to migrate to an alternate on-premises or hosted solution.

Note: After you update and prepare your Source Server for migration, we recommend that you create a backup of the updated server before you continue the migration process.

Migrate email to Microsoft Office 365

Note: The step to remove the on-premises Exchange Server on the Source Server is optional. Office 365 does not support the use of public folders.
For information about how to move messages from Exchange Public Folders to Office 365, see Migrate from Exchange Public Folders to Microsoft Office 365.

After you Install Windows Server 2012 Essentials in migration mode, you should turn on the Office 365 Integration feature on Windows Server 2012 Essentials.

Migrate email to another on-premises Exchange Server

For information about how to migrate email to another on-premises Exchange Server, see Integrate an On-Premises Exchange Server with Windows Server Essentials. We recommend that you set up the new on-premises Exchange Server after you Install Windows Server 2012 Essentials in migration mode, and then finish the email migration before demoting the Source Server.

Note: The Windows Small Business Server POP3 Connector is not included with Exchange Server. After you migrate email data to another Exchange Server, you can no longer use the POP3 Connector feature.

Note: After you update and prepare your Source Server for migration, you should create a backup of the updated server before you continue the migration process.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-essentials/jj200165(v=ws.11)
2018-02-18T05:57:25
CC-MAIN-2018-09
1518891811655.65
[array(['images/jj721738.e1a7262c-e38f-4c08-aeb2-255e6e996979%28ws.11%29.jpeg', None], dtype=object) ]
docs.microsoft.com