| text | meta |
|---|---|
Us critic types are always shaking our heads and telling everyone that movies aren't thrill rides, but I think "Back to the Future Part II" is one of the few exceptions.
If the original film had the spirit of the 1950s, then this one has the spirit of the '80s: full of fights, chase scenes, cliffhangers, special effects, some detective work, and of course a vision of the high-tech future.
The film picks up exactly where the first left off, with Doc (Lloyd), Marty (Fox), and Jennifer (Elisabeth Shue) traveling 30 years into the future, because something bad is going to happen to Marty and Jennifer's kids.
Doc tells Marty he must take his son's place at an incident that will set off a chain reaction if Marty McFly Jr. says yes to Griff (Wilson, in one of four different roles).
Whoever said history tends to repeat itself wasn't joking, especially when it comes to the movies.
Essentially the same chase scene Marty endured in the '50s plays out again in the year 2015, but it's not as authentically exciting this time because it is so obviously a parody of itself.
By the end of the encounter Marty has changed the course of history for the better, and it seems like everything's copacetic, right?
Wrong.
Since when do things go according to plan in the "Back to the Future" movies?
There are so many minor conflicts and details that affect the plot and the direction of the story that I won't even bother to list them all.
Basically we get to see Marty as an old man, his house, his family, and so on. We also visit an alternate 1985, and then go back again to 1955, with everything happening so fast the film never stops to catch its breath.
The film's best aspect is that it actually returns to the first movie and shows much of that action from another angle.
It's difficult to convey the sense of wild and zany fun without describing every little detail.
The only thing sacrificed in this film is suspense.
Instead of a grand finale, we get lots of little victories.
By the end everything is back to normal, but something happens that sets up yet another sequel, and it doesn't feel gratuitous.
"Back to the Future Part II" is a really great adventure movie.
It certainly has more originality than most films, but it lacks a certain charm that was dominant throughout its predecessor.
|
{
"pile_set_name": "Github"
}
|
SUBROUTINE CHPMV(UPLO,N,ALPHA,AP,X,INCX,BETA,Y,INCY)
* .. Scalar Arguments ..
COMPLEX ALPHA,BETA
INTEGER INCX,INCY,N
CHARACTER UPLO
* ..
* .. Array Arguments ..
COMPLEX AP(*),X(*),Y(*)
* ..
*
* Purpose
* =======
*
* CHPMV performs the matrix-vector operation
*
* y := alpha*A*x + beta*y,
*
* where alpha and beta are scalars, x and y are n element vectors and
* A is an n by n hermitian matrix, supplied in packed form.
*
* Arguments
* ==========
*
* UPLO - CHARACTER*1.
* On entry, UPLO specifies whether the upper or lower
* triangular part of the matrix A is supplied in the packed
* array AP as follows:
*
* UPLO = 'U' or 'u' The upper triangular part of A is
* supplied in AP.
*
* UPLO = 'L' or 'l' The lower triangular part of A is
* supplied in AP.
*
* Unchanged on exit.
*
* N - INTEGER.
* On entry, N specifies the order of the matrix A.
* N must be at least zero.
* Unchanged on exit.
*
* ALPHA - COMPLEX .
* On entry, ALPHA specifies the scalar alpha.
* Unchanged on exit.
*
* AP - COMPLEX array of DIMENSION at least
* ( ( n*( n + 1 ) )/2 ).
* Before entry with UPLO = 'U' or 'u', the array AP must
* contain the upper triangular part of the hermitian matrix
* packed sequentially, column by column, so that AP( 1 )
* contains a( 1, 1 ), AP( 2 ) and AP( 3 ) contain a( 1, 2 )
* and a( 2, 2 ) respectively, and so on.
* Before entry with UPLO = 'L' or 'l', the array AP must
* contain the lower triangular part of the hermitian matrix
* packed sequentially, column by column, so that AP( 1 )
* contains a( 1, 1 ), AP( 2 ) and AP( 3 ) contain a( 2, 1 )
* and a( 3, 1 ) respectively, and so on.
* Note that the imaginary parts of the diagonal elements need
* not be set and are assumed to be zero.
* Unchanged on exit.
*
* X - COMPLEX array of dimension at least
* ( 1 + ( n - 1 )*abs( INCX ) ).
* Before entry, the incremented array X must contain the n
* element vector x.
* Unchanged on exit.
*
* INCX - INTEGER.
* On entry, INCX specifies the increment for the elements of
* X. INCX must not be zero.
* Unchanged on exit.
*
* BETA - COMPLEX .
* On entry, BETA specifies the scalar beta. When BETA is
* supplied as zero then Y need not be set on input.
* Unchanged on exit.
*
* Y - COMPLEX array of dimension at least
* ( 1 + ( n - 1 )*abs( INCY ) ).
* Before entry, the incremented array Y must contain the n
* element vector y. On exit, Y is overwritten by the updated
* vector y.
*
* INCY - INTEGER.
* On entry, INCY specifies the increment for the elements of
* Y. INCY must not be zero.
* Unchanged on exit.
*
* Further Details
* ===============
*
* Level 2 Blas routine.
*
* -- Written on 22-October-1986.
* Jack Dongarra, Argonne National Lab.
* Jeremy Du Croz, Nag Central Office.
* Sven Hammarling, Nag Central Office.
* Richard Hanson, Sandia National Labs.
*
* =====================================================================
*
* .. Parameters ..
COMPLEX ONE
PARAMETER (ONE= (1.0E+0,0.0E+0))
COMPLEX ZERO
PARAMETER (ZERO= (0.0E+0,0.0E+0))
* ..
* .. Local Scalars ..
COMPLEX TEMP1,TEMP2
INTEGER I,INFO,IX,IY,J,JX,JY,K,KK,KX,KY
* ..
* .. External Functions ..
LOGICAL LSAME
EXTERNAL LSAME
* ..
* .. External Subroutines ..
EXTERNAL XERBLA
* ..
* .. Intrinsic Functions ..
INTRINSIC CONJG,REAL
* ..
*
* Test the input parameters.
*
INFO = 0
IF (.NOT.LSAME(UPLO,'U') .AND. .NOT.LSAME(UPLO,'L')) THEN
INFO = 1
ELSE IF (N.LT.0) THEN
INFO = 2
ELSE IF (INCX.EQ.0) THEN
INFO = 6
ELSE IF (INCY.EQ.0) THEN
INFO = 9
END IF
IF (INFO.NE.0) THEN
CALL XERBLA('CHPMV ',INFO)
RETURN
END IF
*
* Quick return if possible.
*
IF ((N.EQ.0) .OR. ((ALPHA.EQ.ZERO).AND. (BETA.EQ.ONE))) RETURN
*
* Set up the start points in X and Y.
*
IF (INCX.GT.0) THEN
KX = 1
ELSE
KX = 1 - (N-1)*INCX
END IF
IF (INCY.GT.0) THEN
KY = 1
ELSE
KY = 1 - (N-1)*INCY
END IF
*
* Start the operations. In this version the elements of the array AP
* are accessed sequentially with one pass through AP.
*
* First form y := beta*y.
*
IF (BETA.NE.ONE) THEN
IF (INCY.EQ.1) THEN
IF (BETA.EQ.ZERO) THEN
DO 10 I = 1,N
Y(I) = ZERO
10 CONTINUE
ELSE
DO 20 I = 1,N
Y(I) = BETA*Y(I)
20 CONTINUE
END IF
ELSE
IY = KY
IF (BETA.EQ.ZERO) THEN
DO 30 I = 1,N
Y(IY) = ZERO
IY = IY + INCY
30 CONTINUE
ELSE
DO 40 I = 1,N
Y(IY) = BETA*Y(IY)
IY = IY + INCY
40 CONTINUE
END IF
END IF
END IF
IF (ALPHA.EQ.ZERO) RETURN
KK = 1
IF (LSAME(UPLO,'U')) THEN
*
* Form y when AP contains the upper triangle.
*
IF ((INCX.EQ.1) .AND. (INCY.EQ.1)) THEN
DO 60 J = 1,N
TEMP1 = ALPHA*X(J)
TEMP2 = ZERO
K = KK
DO 50 I = 1,J - 1
Y(I) = Y(I) + TEMP1*AP(K)
TEMP2 = TEMP2 + CONJG(AP(K))*X(I)
K = K + 1
50 CONTINUE
Y(J) = Y(J) + TEMP1*REAL(AP(KK+J-1)) + ALPHA*TEMP2
KK = KK + J
60 CONTINUE
ELSE
JX = KX
JY = KY
DO 80 J = 1,N
TEMP1 = ALPHA*X(JX)
TEMP2 = ZERO
IX = KX
IY = KY
DO 70 K = KK,KK + J - 2
Y(IY) = Y(IY) + TEMP1*AP(K)
TEMP2 = TEMP2 + CONJG(AP(K))*X(IX)
IX = IX + INCX
IY = IY + INCY
70 CONTINUE
Y(JY) = Y(JY) + TEMP1*REAL(AP(KK+J-1)) + ALPHA*TEMP2
JX = JX + INCX
JY = JY + INCY
KK = KK + J
80 CONTINUE
END IF
ELSE
*
* Form y when AP contains the lower triangle.
*
IF ((INCX.EQ.1) .AND. (INCY.EQ.1)) THEN
DO 100 J = 1,N
TEMP1 = ALPHA*X(J)
TEMP2 = ZERO
Y(J) = Y(J) + TEMP1*REAL(AP(KK))
K = KK + 1
DO 90 I = J + 1,N
Y(I) = Y(I) + TEMP1*AP(K)
TEMP2 = TEMP2 + CONJG(AP(K))*X(I)
K = K + 1
90 CONTINUE
Y(J) = Y(J) + ALPHA*TEMP2
KK = KK + (N-J+1)
100 CONTINUE
ELSE
JX = KX
JY = KY
DO 120 J = 1,N
TEMP1 = ALPHA*X(JX)
TEMP2 = ZERO
Y(JY) = Y(JY) + TEMP1*REAL(AP(KK))
IX = JX
IY = JY
DO 110 K = KK + 1,KK + N - J
IX = IX + INCX
IY = IY + INCY
Y(IY) = Y(IY) + TEMP1*AP(K)
TEMP2 = TEMP2 + CONJG(AP(K))*X(IX)
110 CONTINUE
Y(JY) = Y(JY) + ALPHA*TEMP2
JX = JX + INCX
JY = JY + INCY
KK = KK + (N-J+1)
120 CONTINUE
END IF
END IF
*
RETURN
*
* End of CHPMV .
*
END
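The packed-storage layout documented in the header comments above (for UPLO = 'U', AP(1) holds a(1,1), AP(2) and AP(3) hold a(1,2) and a(2,2), and so on, column by column) can be sketched in NumPy. The helper name `chpmv_upper` is ours; this illustrates the storage scheme and the operation y := alpha*A*x + beta*y, not the BLAS routine's actual single-pass implementation.

```python
import numpy as np

def chpmv_upper(n, alpha, ap, x, beta, y):
    """y := alpha*A*x + beta*y for an n-by-n Hermitian matrix A whose
    upper triangle is packed column by column in ap (len n*(n+1)//2)."""
    a = np.zeros((n, n), dtype=complex)
    k = 0
    for j in range(n):            # column j contributes j+1 packed entries
        for i in range(j + 1):
            a[i, j] = ap[k]
            k += 1
    # Mirror the strict upper triangle; imaginary parts of the diagonal
    # are ignored, as the CHPMV documentation specifies.
    a = np.triu(a, 1) + np.triu(a, 1).conj().T + np.diag(np.diag(a).real)
    return alpha * (a @ x) + beta * y
```

For a 2-by-2 case, ap = [2, 1+1j, 3] unpacks to A = [[2, 1+1j], [1-1j, 3]], matching the column-by-column ordering described above.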
|
{
"pile_set_name": "Github"
}
|
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
)
go_library(
name = "go_default_library",
srcs = ["storage_apps.go"],
importpath = "k8s.io/kubernetes/pkg/registry/apps/rest",
deps = [
"//pkg/api/legacyscheme:go_default_library",
"//pkg/apis/apps:go_default_library",
"//pkg/registry/apps/controllerrevision/storage:go_default_library",
"//pkg/registry/apps/daemonset/storage:go_default_library",
"//pkg/registry/apps/deployment/storage:go_default_library",
"//pkg/registry/apps/replicaset/storage:go_default_library",
"//pkg/registry/apps/statefulset/storage:go_default_library",
"//staging/src/k8s.io/api/apps/v1:go_default_library",
"//staging/src/k8s.io/apiserver/pkg/registry/generic:go_default_library",
"//staging/src/k8s.io/apiserver/pkg/registry/rest:go_default_library",
"//staging/src/k8s.io/apiserver/pkg/server:go_default_library",
"//staging/src/k8s.io/apiserver/pkg/server/storage:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)
|
{
"pile_set_name": "Github"
}
|
# this file is part of refractiveindex.info database
# refractiveindex.info database is in the public domain
# copyright and related rights waived via CC0 1.0
REFERENCES: "N. Sultanova, S. Kasarova and I. Nikolov. Dispersion properties of optical polymers, <a href=\"http://przyrbwn.icm.edu.pl/APP/ABSTR/116/a116-4-42.html\"><i>Acta Physica Polonica A</i> <b>116</b>, 585-587 (2009)</a><br> (fit of the experimental data with the Sellmeier dispersion formula: refractiveindex.info)"
COMMENTS: "20 °C"
DATA:
- type: formula 2
range: 0.4368 1.052
coefficients: 0 1.124 0.011087
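The entry above specifies dispersion "formula 2" with coefficients 0 1.124 0.011087 over the range 0.4368–1.052 µm. A minimal sketch of evaluating it, assuming the database's documented mapping of formula 2 to a Sellmeier-type expression n² − 1 = C1 + C2·λ²/(λ² − C3) + … (the helper name `formula_2` is ours):

```python
import math

def formula_2(wavelength_um, coefficients):
    """n(lambda) for refractiveindex.info dispersion "formula 2":
    n^2 - 1 = C1 + C2*L^2/(L^2 - C3) + C4*L^2/(L^2 - C5) + ...
    where L is the wavelength in micrometers (assumed mapping)."""
    c = list(coefficients)
    n_sq = 1.0 + c[0]
    for c_num, c_den in zip(c[1::2], c[2::2]):
        n_sq += c_num * wavelength_um**2 / (wavelength_um**2 - c_den)
    return math.sqrt(n_sq)
```

With these coefficients the index decreases with wavelength across the stated range, as expected for normal dispersion in an optical polymer.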
|
{
"pile_set_name": "Github"
}
|
{
"$jason": {
"head": {
"title": "Common component styling",
"data": {
"items": [
{ "type": "Opacity", "url": "https://jasonette.github.io/Jasonpedia/view/component/style/opacity.json" }
]
},
"templates": {
"body": {
"sections": [{
"items": {
"{{#each items}}": {
"type": "label",
"style": {
"padding": "10",
"font": "HelveticaNeue-Bold",
"size": "15"
},
"href": {
"url": "{{url}}"
},
"text": "{{type}}"
}
}
}]
}
}
}
}
}
|
{
"pile_set_name": "Github"
}
|
%include "arm64/unused.S"
|
{
"pile_set_name": "Github"
}
|
// OpenTween - Client of Twitter
// Copyright (c) 2007-2011 kiri_feather (@kiri_feather) <kiri.feather@gmail.com>
// (c) 2008-2011 Moz (@syo68k)
// (c) 2008-2011 takeshik (@takeshik) <http://www.takeshik.org/>
// (c) 2010-2011 anis774 (@anis774) <http://d.hatena.ne.jp/anis774/>
// (c) 2010-2011 fantasticswallow (@f_swallow) <http://twitter.com/f_swallow>
// (c) 2011 Egtra (@egtra) <http://dev.activebasic.com/egtra/>
// (c) 2012 kim_upsilon (@kim_upsilon) <https://upsilo.net/~upsilon/>
// All rights reserved.
//
// This file is part of OpenTween.
//
// This program is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation; either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but
// WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>, or write to
// the Free Software Foundation, Inc., 51 Franklin Street - Fifth Floor,
// Boston, MA 02110-1301, USA.
#nullable enable
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using OpenTween.Setting;
namespace OpenTween.Models
{
public class FavoritesTabModel : TabModel
{
public override MyCommon.TabUsageType TabType
=> MyCommon.TabUsageType.Favorites;
public FavoritesTabModel() : this(MyCommon.DEFAULTTAB.FAV)
{
}
public FavoritesTabModel(string tabName) : base(tabName)
{
}
public override async Task RefreshAsync(Twitter tw, bool backward, bool startup, IProgress<string> progress)
{
bool read;
if (!SettingManager.Common.UnreadManage)
read = true;
else
read = startup && SettingManager.Common.Read;
progress.Report(Properties.Resources.GetTimelineWorker_RunWorkerCompletedText19);
await tw.GetFavoritesApi(read, this, backward)
.ConfigureAwait(false);
TabInformations.GetInstance().DistributePosts();
progress.Report(Properties.Resources.GetTimelineWorker_RunWorkerCompletedText20);
}
}
}
|
{
"pile_set_name": "Github"
}
|
#include "Python.h"
#include "ik/python/ik_module_log.h"
#include "ik/ik.h"
/* ------------------------------------------------------------------------- */
static PyObject*
log_message(PyObject* self, PyObject* args)
{
(void)self;
PyObject* uni;
PyObject* ascii;
/* Convert to string, might be necessary */
if ((uni = PyObject_Str(args)) == NULL)
goto str_call_failed;
if ((ascii = PyUnicode_AsASCIIString(uni)) == NULL)
goto ascii_conversion_failed;
IKAPI.log.message("%s", PyBytes_AS_STRING(ascii));
Py_DECREF(ascii);
Py_DECREF(uni);
Py_RETURN_NONE;
ascii_conversion_failed : Py_DECREF(uni);
str_call_failed : return NULL;
}
/* ------------------------------------------------------------------------- */
static void
module_free(void* x)
{
(void)x;
IKAPI.log.deinit();
}
/* ------------------------------------------------------------------------- */
static PyMethodDef log_functions[] = {
{"message", log_message, METH_O, "Log a message to the library."},
{NULL}
};
/* ------------------------------------------------------------------------- */
static PyModuleDef ik_module_log = {
PyModuleDef_HEAD_INIT,
"log", /* Module name */
NULL, /* docstring, may be NULL */
-1, /* size of per-interpreter state of the module, or -1 if the module keeps state in global variables */
log_functions, /* module methods */
NULL, /* m_reload */
NULL, /* m_traverse */
NULL, /* m_clear */
module_free /* m_free */
};
/* ------------------------------------------------------------------------- */
PyObject*
ik_module_log_create(void)
{
PyObject* m;
if (IKAPI.log.init() != IK_OK)
goto ik_log_init_failed;
m = PyModule_Create(&ik_module_log);
if (m == NULL)
goto module_alloc_failed;
return m;
module_alloc_failed : IKAPI.log.deinit();
ik_log_init_failed : return NULL;
}
|
{
"pile_set_name": "Github"
}
|
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package procfs
import (
"github.com/prometheus/procfs/internal/fs"
)
// FS represents the pseudo-filesystem proc, which provides an interface to
// kernel data structures.
type FS struct {
proc fs.FS
}
// DefaultMountPoint is the common mount point of the proc filesystem.
const DefaultMountPoint = fs.DefaultProcMountPoint
// NewDefaultFS returns a new proc FS mounted under the default proc mountPoint.
// It will error if the mount point directory can't be read or is a file.
func NewDefaultFS() (FS, error) {
return NewFS(DefaultMountPoint)
}
// NewFS returns a new proc FS mounted under the given proc mountPoint. It will error
// if the mount point directory can't be read or is a file.
func NewFS(mountPoint string) (FS, error) {
fs, err := fs.NewFS(mountPoint)
if err != nil {
return FS{}, err
}
return FS{fs}, nil
}
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright (c) 2018, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package java.io;
import java.lang.annotation.*;
/**
* Indicates that an annotated field or method is part of the {@linkplain
* Serializable serialization mechanism} defined by the
* <cite>Java Object Serialization Specification</cite>. This
* annotation type is intended to allow compile-time checking of
* serialization-related declarations, analogous to the checking
* enabled by the {@link java.lang.Override} annotation type to
* validate method overriding. {@code Serializable} classes are encouraged to
* use <code>@Serial</code> annotations to help a compiler catch
* mis-declared serialization-related fields and methods,
* mis-declarations that may otherwise be difficult to detect.
*
* <p>Specifically, annotations of this type should be
* applied to serialization-related methods and fields in classes
* declared to be {@code Serializable}. The five serialization-related
* methods are:
*
* <ul>
* <li>{@code private void writeObject(java.io.ObjectOutputStream stream) throws IOException}
* <li>{@code private void readObject(java.io.ObjectInputStream stream) throws IOException, ClassNotFoundException}
* <li>{@code private void readObjectNoData() throws ObjectStreamException}
* <li><i>ANY-ACCESS-MODIFIER</i> {@code Object writeReplace() throws ObjectStreamException}
* <li><i>ANY-ACCESS-MODIFIER</i> {@code Object readResolve() throws ObjectStreamException}
* </ul>
*
* The two serialization-related fields are:
*
* <ul>
* <li>{@code private static final ObjectStreamField[] serialPersistentFields}
* <li>{@code private static final long serialVersionUID}
* </ul>
*
* Compilers are encouraged to validate that a method or field marked with a
* <code>@Serial</code> annotation is one of the defined serialization-related
* methods or fields declared in a meaningful context and issue a warning
* if that is not the case.
*
* <p>It is a semantic error to apply this annotation to other fields or methods, including:
* <ul>
* <li>fields or methods in a class that is not {@code Serializable}
*
* <li>fields or methods of the proper structural declaration, but in
* a type where they are ineffectual. For example, {@code enum} types
* are defined to have a {@code serialVersionUID} of {@code 0L} so a
* {@code serialVersionUID} field declared in an {@code enum} type is
* ignored. The five serialization-related methods identified above
* are likewise ignored for an {@code enum} type.
*
* <li>in a class that is {@code Externalizable}:
* <ul>
* <li> method declarations of {@code writeObject}, {@code
* readObject}, and {@code readObjectNoData}
*
* <li>a field declaration for {@code serialPersistentFields}
* </ul>
*
* While the {@code Externalizable} interface extends {@code
* Serializable}, the three methods and one field above are
* <em>not</em> used for externalizable classes.
*
* </ul>
*
 * Note that the serialization mechanism accesses its designated fields
* and methods reflectively and those fields and methods may appear
* otherwise unused in a {@code Serializable} class.
*
* @see Serializable
* @see Externalizable
* @since 14
*/
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.SOURCE)
public @interface Serial {}
|
{
"pile_set_name": "Github"
}
|
---
category: api-reference
---
# GitHub Flavored Markdown support for CKEditor 5
[](https://www.npmjs.com/package/@ckeditor/ckeditor5-markdown-gfm)
This package implements the GitHub Flavored Markdown data processor for CKEditor 5.
## Demo
Check out the {@link features/markdown#demo demo in the Markdown output feature} guide.
## Documentation
See the {@link features/markdown Markdown output} guide and the {@link module:markdown-gfm/gfmdataprocessor~GFMDataProcessor} documentation.
## Installation
```bash
npm install --save @ckeditor/ckeditor5-markdown-gfm
```
## Contribute
The source code of this package is available on GitHub in https://github.com/ckeditor/ckeditor5/tree/master/packages/ckeditor5-markdown-gfm.
## External links
* [`@ckeditor/ckeditor5-markdown-gfm` on npm](https://www.npmjs.com/package/@ckeditor/ckeditor5-markdown-gfm)
* [`ckeditor/ckeditor5-markdown-gfm` on GitHub](https://github.com/ckeditor/ckeditor5/tree/master/packages/ckeditor5-markdown-gfm)
* [Issue tracker](https://github.com/ckeditor/ckeditor5/issues)
* [Changelog](https://github.com/ckeditor/ckeditor5/blob/master/CHANGELOG.md)
|
{
"pile_set_name": "Github"
}
|
/*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
* the License.
*/
/*
* This code was generated by https://github.com/googleapis/google-api-java-client-services/
* Modify at your own risk.
*/
package com.google.api.services.cloudsearch.v1.model;
/**
* Available metadata fields for the item.
*
* <p> This is the Java data model class that specifies how to parse/serialize into the JSON that is
* transmitted over HTTP when working with the Cloud Search API. For a detailed explanation see:
* <a href="https://developers.google.com/api-client-library/java/google-http-java-client/json">https://developers.google.com/api-client-library/java/google-http-java-client/json</a>
* </p>
*
* @author Google, Inc.
*/
@SuppressWarnings("javadoc")
public final class ItemMetadata extends com.google.api.client.json.GenericJson {
/**
* The name of the container for this item. Deletion of the container item leads to automatic
* deletion of this item. Note: ACLs are not inherited from a container item. To provide ACL
* inheritance for an item, use the inheritAclFrom field. The maximum length is 1536 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String containerName;
/**
* The BCP-47 language code for the item, such as "en-US" or "sr-Latn". For more information, see
* http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. The maximum length is 32
* characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String contentLanguage;
/**
* The time when the item was created in the source repository.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private String createTime;
/**
* Hashing value provided by the API caller. This can be used with the items.push method to
* calculate modified state. The maximum length is 2048 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String hash;
/**
* A list of interactions for the item. Interactions are used to improve Search quality, but are
* not exposed to end users. The maximum number of elements is 1000.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.util.List<Interaction> interactions;
static {
// hack to force ProGuard to consider Interaction used, since otherwise it would be stripped out
// see https://github.com/google/google-api-java-client/issues/543
com.google.api.client.util.Data.nullOf(Interaction.class);
}
/**
* Additional keywords or phrases that should match the item. Used internally for user generated
* content. The maximum number of elements is 100. The maximum length is 8192 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.util.List<java.lang.String> keywords;
/**
* The original mime-type of ItemContent.content in the source repository. The maximum length is
* 256 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String mimeType;
/**
* The type of the item. This should correspond to the name of an object definition in the schema
* registered for the data source. For example, if the schema for the data source contains an
* object definition with name 'document', then item indexing requests for objects of that type
* should set objectType to 'document'. The maximum length is 256 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String objectType;
/**
* Additional search quality metadata of the item
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private SearchQualityMetadata searchQualityMetadata;
/**
   * Link to the source repository serving the data. Search results apply this link to the title.
* The maximum length is 2048 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String sourceRepositoryUrl;
/**
* The title of the item. If given, this will be the displayed title of the Search result. The
* maximum length is 2048 characters.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private java.lang.String title;
/**
* The time when the item was last modified in the source repository.
* The value may be {@code null}.
*/
@com.google.api.client.util.Key
private String updateTime;
/**
* The name of the container for this item. Deletion of the container item leads to automatic
* deletion of this item. Note: ACLs are not inherited from a container item. To provide ACL
* inheritance for an item, use the inheritAclFrom field. The maximum length is 1536 characters.
* @return value or {@code null} for none
*/
public java.lang.String getContainerName() {
return containerName;
}
/**
* The name of the container for this item. Deletion of the container item leads to automatic
* deletion of this item. Note: ACLs are not inherited from a container item. To provide ACL
* inheritance for an item, use the inheritAclFrom field. The maximum length is 1536 characters.
* @param containerName containerName or {@code null} for none
*/
public ItemMetadata setContainerName(java.lang.String containerName) {
this.containerName = containerName;
return this;
}
/**
* The BCP-47 language code for the item, such as "en-US" or "sr-Latn". For more information, see
* http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. The maximum length is 32
* characters.
* @return value or {@code null} for none
*/
public java.lang.String getContentLanguage() {
return contentLanguage;
}
/**
* The BCP-47 language code for the item, such as "en-US" or "sr-Latn". For more information, see
* http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. The maximum length is 32
* characters.
* @param contentLanguage contentLanguage or {@code null} for none
*/
public ItemMetadata setContentLanguage(java.lang.String contentLanguage) {
this.contentLanguage = contentLanguage;
return this;
}
/**
* The time when the item was created in the source repository.
* @return value or {@code null} for none
*/
public String getCreateTime() {
return createTime;
}
/**
* The time when the item was created in the source repository.
* @param createTime createTime or {@code null} for none
*/
public ItemMetadata setCreateTime(String createTime) {
this.createTime = createTime;
return this;
}
/**
* Hashing value provided by the API caller. This can be used with the items.push method to
* calculate modified state. The maximum length is 2048 characters.
* @return value or {@code null} for none
*/
public java.lang.String getHash() {
return hash;
}
/**
* Hashing value provided by the API caller. This can be used with the items.push method to
* calculate modified state. The maximum length is 2048 characters.
* @param hash hash or {@code null} for none
*/
public ItemMetadata setHash(java.lang.String hash) {
this.hash = hash;
return this;
}
/**
* A list of interactions for the item. Interactions are used to improve Search quality, but are
* not exposed to end users. The maximum number of elements is 1000.
* @return value or {@code null} for none
*/
public java.util.List<Interaction> getInteractions() {
return interactions;
}
/**
* A list of interactions for the item. Interactions are used to improve Search quality, but are
* not exposed to end users. The maximum number of elements is 1000.
* @param interactions interactions or {@code null} for none
*/
public ItemMetadata setInteractions(java.util.List<Interaction> interactions) {
this.interactions = interactions;
return this;
}
/**
* Additional keywords or phrases that should match the item. Used internally for user generated
* content. The maximum number of elements is 100. The maximum length is 8192 characters.
* @return value or {@code null} for none
*/
public java.util.List<java.lang.String> getKeywords() {
return keywords;
}
/**
* Additional keywords or phrases that should match the item. Used internally for user generated
* content. The maximum number of elements is 100. The maximum length is 8192 characters.
* @param keywords keywords or {@code null} for none
*/
public ItemMetadata setKeywords(java.util.List<java.lang.String> keywords) {
this.keywords = keywords;
return this;
}
/**
* The original mime-type of ItemContent.content in the source repository. The maximum length is
* 256 characters.
* @return value or {@code null} for none
*/
public java.lang.String getMimeType() {
return mimeType;
}
/**
* The original mime-type of ItemContent.content in the source repository. The maximum length is
* 256 characters.
* @param mimeType mimeType or {@code null} for none
*/
public ItemMetadata setMimeType(java.lang.String mimeType) {
this.mimeType = mimeType;
return this;
}
/**
* The type of the item. This should correspond to the name of an object definition in the schema
* registered for the data source. For example, if the schema for the data source contains an
* object definition with name 'document', then item indexing requests for objects of that type
* should set objectType to 'document'. The maximum length is 256 characters.
* @return value or {@code null} for none
*/
public java.lang.String getObjectType() {
return objectType;
}
/**
* The type of the item. This should correspond to the name of an object definition in the schema
* registered for the data source. For example, if the schema for the data source contains an
* object definition with name 'document', then item indexing requests for objects of that type
* should set objectType to 'document'. The maximum length is 256 characters.
* @param objectType objectType or {@code null} for none
*/
public ItemMetadata setObjectType(java.lang.String objectType) {
this.objectType = objectType;
return this;
}
/**
* Additional search quality metadata of the item
* @return value or {@code null} for none
*/
public SearchQualityMetadata getSearchQualityMetadata() {
return searchQualityMetadata;
}
/**
* Additional search quality metadata of the item
* @param searchQualityMetadata searchQualityMetadata or {@code null} for none
*/
public ItemMetadata setSearchQualityMetadata(SearchQualityMetadata searchQualityMetadata) {
this.searchQualityMetadata = searchQualityMetadata;
return this;
}
/**
   * Link to the source repository serving the data. Search results apply this link to the title.
* The maximum length is 2048 characters.
* @return value or {@code null} for none
*/
public java.lang.String getSourceRepositoryUrl() {
return sourceRepositoryUrl;
}
/**
   * Link to the source repository serving the data. Search results apply this link to the title.
* The maximum length is 2048 characters.
* @param sourceRepositoryUrl sourceRepositoryUrl or {@code null} for none
*/
public ItemMetadata setSourceRepositoryUrl(java.lang.String sourceRepositoryUrl) {
this.sourceRepositoryUrl = sourceRepositoryUrl;
return this;
}
/**
* The title of the item. If given, this will be the displayed title of the Search result. The
* maximum length is 2048 characters.
* @return value or {@code null} for none
*/
public java.lang.String getTitle() {
return title;
}
/**
* The title of the item. If given, this will be the displayed title of the Search result. The
* maximum length is 2048 characters.
* @param title title or {@code null} for none
*/
public ItemMetadata setTitle(java.lang.String title) {
this.title = title;
return this;
}
/**
* The time when the item was last modified in the source repository.
* @return value or {@code null} for none
*/
public String getUpdateTime() {
return updateTime;
}
/**
* The time when the item was last modified in the source repository.
* @param updateTime updateTime or {@code null} for none
*/
public ItemMetadata setUpdateTime(String updateTime) {
this.updateTime = updateTime;
return this;
}
@Override
public ItemMetadata set(String fieldName, Object value) {
return (ItemMetadata) super.set(fieldName, value);
}
@Override
public ItemMetadata clone() {
return (ItemMetadata) super.clone();
}
}
|
/*
* eXist-db Open Source Native XML Database
* Copyright (C) 2001 The eXist-db Authors
*
* info@exist-db.org
* http://www.exist-db.org
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
package org.exist.storage.dom;
import org.exist.storage.dom.DOMFile.DOMPage;
public final class RecordPos {
private DOMPage page;
int offset;
private short tupleID;
private boolean isLink = false;
public RecordPos(int offset, DOMPage page, short tupleID) {
this.offset = offset;
this.page = page;
this.tupleID = tupleID;
}
public RecordPos(int offset, DOMPage page, short tupleID, boolean isLink) {
this.offset = offset;
this.page = page;
this.tupleID = tupleID;
this.isLink = isLink;
}
public DOMPage getPage() {
return page;
}
public void setPage(DOMPage page) {
this.page = page;
}
public short getTupleID() {
return tupleID;
}
// Note: this method has only a single caller
public void setTupleID(short tupleID) {
this.tupleID = tupleID;
}
public boolean isLink() {
return isLink;
}
}
|
<?php
/* Prototype : bool ctype_digit(mixed $c)
* Description: Checks for numeric character(s)
* Source code: ext/ctype/ctype.c
*/
/*
* Pass octal and hexadecimal values as $c argument to ctype_digit() to test behaviour
*/
echo "*** Testing ctype_digit() : usage variations ***\n";
$orig = setlocale(LC_CTYPE, "C");
$octal_values = array(061, 062, 063, 064);
$hex_values = array (0x31, 0x32, 0x33, 0x34);
echo "\n-- Octal Values --\n";
$iterator = 1;
foreach($octal_values as $c) {
echo "-- Iteration $iterator --\n";
var_dump(ctype_digit($c));
$iterator++;
}
echo "\n-- Hexadecimal Values --\n";
$iterator = 1;
foreach($hex_values as $c) {
echo "-- Iteration $iterator --\n";
var_dump(ctype_digit($c));
$iterator++;
}
setlocale(LC_CTYPE, $orig);
?>
===DONE===
|
depends=plan9port-^(base postscript)
desc='Plan 9 from User Space - postscript B&H fonts
This package contains PostScript fonts from Bigelow & Holmes
as found in the Plan 9 from Bell Labs distribution.
'
|
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Copyright (c) 2014-2017 Evolveum and contributors
~
~ This work is dual-licensed under the Apache License 2.0
~ and European Union Public License. See LICENSE file for details.
-->
<securityPolicy oid="2997a20a-0423-11e7-af65-a7ab7d19442c"
xmlns='http://midpoint.evolveum.com/xml/ns/public/common/common-3'>
<name>Security Policy: password storage none</name>
<credentials>
<password>
<storageMethod>
<storageType>none</storageType>
</storageMethod>
</password>
<securityQuestions>
<question>
<identifier>http://midpoint.evolveum.com/xml/ns/public/security/question-2#q001</identifier>
<enabled>true</enabled>
<questionText>How much wood would a woodchuck chuck if a woodchuck could chuck wood?</questionText>
</question>
<question>
<identifier>http://midpoint.evolveum.com/xml/ns/public/security/question-2#q002</identifier>
<questionText>What is your mother's best friend's uncle's granddaughter's dog's mother's maiden name?</questionText>
</question>
</securityQuestions>
</credentials>
</securityPolicy>
|
package net.symphonious.disrupter.dsl;
import com.lmax.disruptor.AbstractEntry;
import com.lmax.disruptor.BatchHandler;
import com.lmax.disruptor.Consumer;
import com.lmax.disruptor.ConsumerBarrier;
class ConsumerInfo<T extends AbstractEntry>
{
private final Consumer consumer;
private final BatchHandler<T> handler;
private final ConsumerBarrier<T> barrier;
private boolean endOfChain = true;
ConsumerInfo(final Consumer consumer, final BatchHandler<T> handler, final ConsumerBarrier<T> barrier)
{
this.consumer = consumer;
this.handler = handler;
this.barrier = barrier;
this.endOfChain = true;
}
public Consumer getConsumer()
{
return consumer;
}
public BatchHandler<T> getHandler()
{
return handler;
}
public ConsumerBarrier<T> getBarrier()
{
return barrier;
}
public boolean isEndOfChain()
{
return endOfChain;
}
public void usedInBarrier()
{
endOfChain = false;
}
}
|
import computeAutoPlacement from '../utils/computeAutoPlacement';
import getReferenceOffsets from '../utils/getReferenceOffsets';
import getPopperOffsets from '../utils/getPopperOffsets';
import runModifiers from '../utils/runModifiers';
/**
* Updates the position of the popper, computing the new offsets and applying
* the new style.<br />
* Prefer `scheduleUpdate` over `update` because of performance reasons.
* @method
* @memberof Popper
*/
export default function update() {
// if popper is destroyed, don't perform any further update
if (this.state.isDestroyed) {
return;
}
let data = {
instance: this,
styles: {},
arrowStyles: {},
attributes: {},
flipped: false,
offsets: {},
};
// compute reference element offsets
data.offsets.reference = getReferenceOffsets(
this.state,
this.popper,
this.reference,
this.options.positionFixed
);
// compute auto placement, store placement inside the data object,
// modifiers will be able to edit `placement` if needed
// and refer to originalPlacement to know the original value
data.placement = computeAutoPlacement(
this.options.placement,
data.offsets.reference,
this.popper,
this.reference,
this.options.modifiers.flip.boundariesElement,
this.options.modifiers.flip.padding
);
// store the computed placement inside `originalPlacement`
data.originalPlacement = data.placement;
data.positionFixed = this.options.positionFixed;
// compute the popper offsets
data.offsets.popper = getPopperOffsets(
this.popper,
data.offsets.reference,
data.placement
);
data.offsets.popper.position = this.options.positionFixed
? 'fixed'
: 'absolute';
// run the modifiers
data = runModifiers(this.modifiers, data);
// the first `update` will call `onCreate` callback
// the other ones will call `onUpdate` callback
if (!this.state.isCreated) {
this.state.isCreated = true;
this.options.onCreate(data);
} else {
this.options.onUpdate(data);
}
}
|
# Lines starting with '#' and sections without content
# are not displayed by a call to 'details'
#
[Website]
http://blog.zombiesrungame.com/post/21662042177/its-update-time-so-many-new-features-and-missions-in
[filters]
http://blog.zombiesrungame.com/tweets.js
[other]
# Any other details
[comments]
fanboy
|
// SPDX-License-Identifier: GPL-2.0+
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <video/mipi_display.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_encoder.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include "mcde_drm.h"
#include "mcde_dsi_regs.h"
#define DSI_DEFAULT_LP_FREQ_HZ 19200000
#define DSI_DEFAULT_HS_FREQ_HZ 420160000
/* PRCMU DSI reset registers */
#define PRCM_DSI_SW_RESET 0x324
#define PRCM_DSI_SW_RESET_DSI0_SW_RESETN BIT(0)
#define PRCM_DSI_SW_RESET_DSI1_SW_RESETN BIT(1)
#define PRCM_DSI_SW_RESET_DSI2_SW_RESETN BIT(2)
struct mcde_dsi {
struct device *dev;
struct mcde *mcde;
struct drm_bridge bridge;
struct drm_panel *panel;
struct drm_bridge *bridge_out;
struct mipi_dsi_host dsi_host;
struct mipi_dsi_device *mdsi;
struct clk *hs_clk;
struct clk *lp_clk;
unsigned long hs_freq;
unsigned long lp_freq;
bool unused;
void __iomem *regs;
struct regmap *prcmu;
};
static inline struct mcde_dsi *bridge_to_mcde_dsi(struct drm_bridge *bridge)
{
return container_of(bridge, struct mcde_dsi, bridge);
}
static inline struct mcde_dsi *host_to_mcde_dsi(struct mipi_dsi_host *h)
{
return container_of(h, struct mcde_dsi, dsi_host);
}
bool mcde_dsi_irq(struct mipi_dsi_device *mdsi)
{
struct mcde_dsi *d;
u32 val;
bool te_received = false;
d = host_to_mcde_dsi(mdsi->host);
dev_dbg(d->dev, "%s called\n", __func__);
val = readl(d->regs + DSI_DIRECT_CMD_STS_FLAG);
if (val)
dev_dbg(d->dev, "DSI_DIRECT_CMD_STS_FLAG = %08x\n", val);
if (val & DSI_DIRECT_CMD_STS_WRITE_COMPLETED)
dev_dbg(d->dev, "direct command write completed\n");
if (val & DSI_DIRECT_CMD_STS_TE_RECEIVED) {
te_received = true;
dev_dbg(d->dev, "direct command TE received\n");
}
if (val & DSI_DIRECT_CMD_STS_ACKNOWLEDGE_WITH_ERR_RECEIVED)
dev_err(d->dev, "direct command ACK ERR received\n");
if (val & DSI_DIRECT_CMD_STS_READ_COMPLETED_WITH_ERR)
dev_err(d->dev, "direct command read ERR received\n");
/* Mask off the ACK value and clear status */
writel(val, d->regs + DSI_DIRECT_CMD_STS_CLR);
val = readl(d->regs + DSI_CMD_MODE_STS_FLAG);
if (val)
dev_dbg(d->dev, "DSI_CMD_MODE_STS_FLAG = %08x\n", val);
if (val & DSI_CMD_MODE_STS_ERR_NO_TE)
/* This happens all the time (safe to ignore) */
dev_dbg(d->dev, "CMD mode no TE\n");
if (val & DSI_CMD_MODE_STS_ERR_TE_MISS)
/* This happens all the time (safe to ignore) */
dev_dbg(d->dev, "CMD mode TE miss\n");
if (val & DSI_CMD_MODE_STS_ERR_SDI1_UNDERRUN)
dev_err(d->dev, "CMD mode SD1 underrun\n");
if (val & DSI_CMD_MODE_STS_ERR_SDI2_UNDERRUN)
dev_err(d->dev, "CMD mode SD2 underrun\n");
if (val & DSI_CMD_MODE_STS_ERR_UNWANTED_RD)
dev_err(d->dev, "CMD mode unwanted RD\n");
writel(val, d->regs + DSI_CMD_MODE_STS_CLR);
val = readl(d->regs + DSI_DIRECT_CMD_RD_STS_FLAG);
if (val)
dev_dbg(d->dev, "DSI_DIRECT_CMD_RD_STS_FLAG = %08x\n", val);
writel(val, d->regs + DSI_DIRECT_CMD_RD_STS_CLR);
val = readl(d->regs + DSI_TG_STS_FLAG);
if (val)
dev_dbg(d->dev, "DSI_TG_STS_FLAG = %08x\n", val);
writel(val, d->regs + DSI_TG_STS_CLR);
val = readl(d->regs + DSI_VID_MODE_STS_FLAG);
if (val)
dev_dbg(d->dev, "DSI_VID_MODE_STS_FLAG = %08x\n", val);
if (val & DSI_VID_MODE_STS_VSG_RUNNING)
dev_dbg(d->dev, "VID mode VSG running\n");
if (val & DSI_VID_MODE_STS_ERR_MISSING_DATA)
dev_err(d->dev, "VID mode missing data\n");
if (val & DSI_VID_MODE_STS_ERR_MISSING_HSYNC)
dev_err(d->dev, "VID mode missing HSYNC\n");
if (val & DSI_VID_MODE_STS_ERR_MISSING_VSYNC)
dev_err(d->dev, "VID mode missing VSYNC\n");
if (val & DSI_VID_MODE_STS_REG_ERR_SMALL_LENGTH)
dev_err(d->dev, "VID mode less bytes than expected between two HSYNC\n");
if (val & DSI_VID_MODE_STS_REG_ERR_SMALL_HEIGHT)
dev_err(d->dev, "VID mode less lines than expected between two VSYNC\n");
if (val & (DSI_VID_MODE_STS_ERR_BURSTWRITE |
DSI_VID_MODE_STS_ERR_LINEWRITE |
DSI_VID_MODE_STS_ERR_LONGREAD))
dev_err(d->dev, "VID mode read/write error\n");
if (val & DSI_VID_MODE_STS_ERR_VRS_WRONG_LENGTH)
dev_err(d->dev, "VID mode received packets differ from expected size\n");
if (val & DSI_VID_MODE_STS_VSG_RECOVERY)
dev_err(d->dev, "VID mode VSG in recovery mode\n");
writel(val, d->regs + DSI_VID_MODE_STS_CLR);
return te_received;
}
static void mcde_dsi_attach_to_mcde(struct mcde_dsi *d)
{
d->mcde->mdsi = d->mdsi;
d->mcde->video_mode = !!(d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO);
/* Enable use of the TE signal for all command mode panels */
d->mcde->te_sync = !d->mcde->video_mode;
}
static int mcde_dsi_host_attach(struct mipi_dsi_host *host,
struct mipi_dsi_device *mdsi)
{
struct mcde_dsi *d = host_to_mcde_dsi(host);
if (mdsi->lanes < 1 || mdsi->lanes > 2) {
DRM_ERROR("dsi device params invalid, 1 or 2 lanes supported\n");
return -EINVAL;
}
dev_info(d->dev, "attached DSI device with %d lanes\n", mdsi->lanes);
/* MIPI_DSI_FMT_RGB888 etc */
dev_info(d->dev, "format %08x, %dbpp\n", mdsi->format,
mipi_dsi_pixel_format_to_bpp(mdsi->format));
dev_info(d->dev, "mode flags: %08lx\n", mdsi->mode_flags);
d->mdsi = mdsi;
if (d->mcde)
mcde_dsi_attach_to_mcde(d);
return 0;
}
static int mcde_dsi_host_detach(struct mipi_dsi_host *host,
struct mipi_dsi_device *mdsi)
{
struct mcde_dsi *d = host_to_mcde_dsi(host);
d->mdsi = NULL;
if (d->mcde)
d->mcde->mdsi = NULL;
return 0;
}
#define MCDE_DSI_HOST_IS_READ(type) \
((type == MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM) || \
(type == MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM) || \
(type == MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM) || \
(type == MIPI_DSI_DCS_READ))
static ssize_t mcde_dsi_host_transfer(struct mipi_dsi_host *host,
const struct mipi_dsi_msg *msg)
{
struct mcde_dsi *d = host_to_mcde_dsi(host);
const u32 loop_delay_us = 10; /* us */
const u8 *tx = msg->tx_buf;
u32 loop_counter;
size_t txlen = msg->tx_len;
size_t rxlen = msg->rx_len;
u32 val;
int ret;
int i;
if (txlen > 16) {
dev_err(d->dev,
"dunno how to write more than 16 bytes yet\n");
return -EIO;
}
if (rxlen > 4) {
dev_err(d->dev,
"dunno how to read more than 4 bytes yet\n");
return -EIO;
}
dev_dbg(d->dev,
"message to channel %d, write %zd bytes read %zd bytes\n",
msg->channel, txlen, rxlen);
/* Command "nature" */
if (MCDE_DSI_HOST_IS_READ(msg->type))
/* MCTL_MAIN_DATA_CTL already set up */
val = DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_NAT_READ;
else
val = DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_NAT_WRITE;
/*
* More than 2 bytes will not fit in a single packet, so it's
* time to set the "long not short" bit. One byte is used by
* the MIPI DCS command leaving just one byte for the payload
* in a short package.
*/
if (mipi_dsi_packet_format_is_long(msg->type))
val |= DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_LONGNOTSHORT;
val |= 0 << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_ID_SHIFT;
val |= txlen << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_SIZE_SHIFT;
val |= DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_LP_EN;
val |= msg->type << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_HEAD_SHIFT;
writel(val, d->regs + DSI_DIRECT_CMD_MAIN_SETTINGS);
/* MIPI DCS command is part of the data */
if (txlen > 0) {
val = 0;
for (i = 0; i < 4 && i < txlen; i++)
val |= tx[i] << (i * 8);
/* Only write WRDAT0 when there is payload, else val is stale */
writel(val, d->regs + DSI_DIRECT_CMD_WRDAT0);
}
if (txlen > 4) {
val = 0;
for (i = 0; i < 4 && (i + 4) < txlen; i++)
val |= tx[i + 4] << (i * 8);
writel(val, d->regs + DSI_DIRECT_CMD_WRDAT1);
}
if (txlen > 8) {
val = 0;
for (i = 0; i < 4 && (i + 8) < txlen; i++)
val |= tx[i + 8] << (i * 8);
writel(val, d->regs + DSI_DIRECT_CMD_WRDAT2);
}
if (txlen > 12) {
val = 0;
for (i = 0; i < 4 && (i + 12) < txlen; i++)
val |= tx[i + 12] << (i * 8);
writel(val, d->regs + DSI_DIRECT_CMD_WRDAT3);
}
writel(~0, d->regs + DSI_DIRECT_CMD_STS_CLR);
writel(~0, d->regs + DSI_CMD_MODE_STS_CLR);
/* Send command */
writel(1, d->regs + DSI_DIRECT_CMD_SEND);
loop_counter = 1000 * 1000 / loop_delay_us;
if (MCDE_DSI_HOST_IS_READ(msg->type)) {
/* Read command */
while (!(readl(d->regs + DSI_DIRECT_CMD_STS) &
(DSI_DIRECT_CMD_STS_READ_COMPLETED |
DSI_DIRECT_CMD_STS_READ_COMPLETED_WITH_ERR))
&& --loop_counter)
usleep_range(loop_delay_us, (loop_delay_us * 3) / 2);
if (!loop_counter) {
dev_err(d->dev, "DSI read timeout!\n");
return -ETIME;
}
} else {
/* Writing only */
while (!(readl(d->regs + DSI_DIRECT_CMD_STS) &
DSI_DIRECT_CMD_STS_WRITE_COMPLETED)
&& --loop_counter)
usleep_range(loop_delay_us, (loop_delay_us * 3) / 2);
if (!loop_counter) {
dev_err(d->dev, "DSI write timeout!\n");
return -ETIME;
}
}
val = readl(d->regs + DSI_DIRECT_CMD_STS);
if (val & DSI_DIRECT_CMD_STS_READ_COMPLETED_WITH_ERR) {
dev_err(d->dev, "read completed with error\n");
writel(1, d->regs + DSI_DIRECT_CMD_RD_INIT);
return -EIO;
}
if (val & DSI_DIRECT_CMD_STS_ACKNOWLEDGE_WITH_ERR_RECEIVED) {
val >>= DSI_DIRECT_CMD_STS_ACK_VAL_SHIFT;
dev_err(d->dev, "error during transmission: %04x\n",
val);
return -EIO;
}
if (!MCDE_DSI_HOST_IS_READ(msg->type)) {
/* Return number of bytes written */
ret = txlen;
} else {
/* OK this is a read command, get the response */
u32 rdsz;
u32 rddat;
u8 *rx = msg->rx_buf;
rdsz = readl(d->regs + DSI_DIRECT_CMD_RD_PROPERTY);
rdsz &= DSI_DIRECT_CMD_RD_PROPERTY_RD_SIZE_MASK;
rddat = readl(d->regs + DSI_DIRECT_CMD_RDDAT);
if (rdsz < rxlen) {
dev_err(d->dev, "read error, requested %zd got %d\n",
rxlen, rdsz);
return -EIO;
}
/* FIXME: read more than 4 bytes */
for (i = 0; i < 4 && i < rxlen; i++)
rx[i] = (rddat >> (i * 8)) & 0xff;
ret = rdsz;
}
writel(~0, d->regs + DSI_DIRECT_CMD_STS_CLR);
writel(~0, d->regs + DSI_CMD_MODE_STS_CLR);
return ret;
}
static const struct mipi_dsi_host_ops mcde_dsi_host_ops = {
.attach = mcde_dsi_host_attach,
.detach = mcde_dsi_host_detach,
.transfer = mcde_dsi_host_transfer,
};
/* This sends a direct (short) command to request TE */
void mcde_dsi_te_request(struct mipi_dsi_device *mdsi)
{
struct mcde_dsi *d;
u32 val;
d = host_to_mcde_dsi(mdsi->host);
/* Command "nature" TE request */
val = DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_NAT_TE_REQ;
val |= 0 << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_ID_SHIFT;
val |= 2 << DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_SIZE_SHIFT;
val |= DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_LP_EN;
val |= MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM <<
DSI_DIRECT_CMD_MAIN_SETTINGS_CMD_HEAD_SHIFT;
writel(val, d->regs + DSI_DIRECT_CMD_MAIN_SETTINGS);
/* Clear TE received and error status bits and enable them */
writel(DSI_DIRECT_CMD_STS_CLR_TE_RECEIVED_CLR |
DSI_DIRECT_CMD_STS_CLR_ACKNOWLEDGE_WITH_ERR_RECEIVED_CLR,
d->regs + DSI_DIRECT_CMD_STS_CLR);
val = readl(d->regs + DSI_DIRECT_CMD_STS_CTL);
val |= DSI_DIRECT_CMD_STS_CTL_TE_RECEIVED_EN;
val |= DSI_DIRECT_CMD_STS_CTL_ACKNOWLEDGE_WITH_ERR_EN;
writel(val, d->regs + DSI_DIRECT_CMD_STS_CTL);
/* Clear and enable no TE or TE missing status */
writel(DSI_CMD_MODE_STS_CLR_ERR_NO_TE_CLR |
DSI_CMD_MODE_STS_CLR_ERR_TE_MISS_CLR,
d->regs + DSI_CMD_MODE_STS_CLR);
val = readl(d->regs + DSI_CMD_MODE_STS_CTL);
val |= DSI_CMD_MODE_STS_CTL_ERR_NO_TE_EN;
val |= DSI_CMD_MODE_STS_CTL_ERR_TE_MISS_EN;
writel(val, d->regs + DSI_CMD_MODE_STS_CTL);
/* Send this TE request command */
writel(1, d->regs + DSI_DIRECT_CMD_SEND);
}
static void mcde_dsi_setup_video_mode(struct mcde_dsi *d,
const struct drm_display_mode *mode)
{
/* cpp, characters per pixel, number of bytes per pixel */
u8 cpp = mipi_dsi_pixel_format_to_bpp(d->mdsi->format) / 8;
u64 pclk;
u64 bpl;
int hfp;
int hbp;
int hsa;
u32 blkline_pck, line_duration;
u32 val;
val = 0;
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST)
val |= DSI_VID_MAIN_CTL_BURST_MODE;
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE) {
val |= DSI_VID_MAIN_CTL_SYNC_PULSE_ACTIVE;
val |= DSI_VID_MAIN_CTL_SYNC_PULSE_HORIZONTAL;
}
/* RGB header and pixel mode */
switch (d->mdsi->format) {
case MIPI_DSI_FMT_RGB565:
val |= MIPI_DSI_PACKED_PIXEL_STREAM_16 <<
DSI_VID_MAIN_CTL_HEADER_SHIFT;
val |= DSI_VID_MAIN_CTL_VID_PIXEL_MODE_16BITS;
break;
case MIPI_DSI_FMT_RGB666_PACKED:
val |= MIPI_DSI_PACKED_PIXEL_STREAM_18 <<
DSI_VID_MAIN_CTL_HEADER_SHIFT;
val |= DSI_VID_MAIN_CTL_VID_PIXEL_MODE_18BITS;
break;
case MIPI_DSI_FMT_RGB666:
val |= MIPI_DSI_PIXEL_STREAM_3BYTE_18
<< DSI_VID_MAIN_CTL_HEADER_SHIFT;
val |= DSI_VID_MAIN_CTL_VID_PIXEL_MODE_18BITS_LOOSE;
break;
case MIPI_DSI_FMT_RGB888:
val |= MIPI_DSI_PACKED_PIXEL_STREAM_24 <<
DSI_VID_MAIN_CTL_HEADER_SHIFT;
val |= DSI_VID_MAIN_CTL_VID_PIXEL_MODE_24BITS;
break;
default:
dev_err(d->dev, "unknown pixel mode\n");
return;
}
/* TODO: TVG (test video generator) could be enabled here */
/*
* During vertical blanking: go to LP mode.
* Like with the EOL setting, if this is not set, the vblank area
* will be filled with NULL or blanking packets instead.
* FIXME: some Samsung phones and display panels such as s6e63m0 use
* DSI_VID_MAIN_CTL_REG_BLKLINE_MODE_BLANKING here instead,
* figure out how to properly configure that from the panel.
*/
val |= DSI_VID_MAIN_CTL_REG_BLKLINE_MODE_LP_0;
/*
* During EOL: go to LP mode. If this is not set, the EOL area will be
* filled with NULL or blanking packets.
*/
val |= DSI_VID_MAIN_CTL_REG_BLKEOL_MODE_LP_0;
/* Recovery mode 1 */
val |= 1 << DSI_VID_MAIN_CTL_RECOVERY_MODE_SHIFT;
/* All other fields zero */
writel(val, d->regs + DSI_VID_MAIN_CTL);
/* Vertical frame parameters are pretty straightforward */
val = mode->vdisplay << DSI_VID_VSIZE_VACT_LENGTH_SHIFT;
/* vertical front porch */
val |= (mode->vsync_start - mode->vdisplay)
<< DSI_VID_VSIZE_VFP_LENGTH_SHIFT;
/* vertical sync active */
val |= (mode->vsync_end - mode->vsync_start)
<< DSI_VID_VSIZE_VSA_LENGTH_SHIFT;
/* vertical back porch */
val |= (mode->vtotal - mode->vsync_end)
<< DSI_VID_VSIZE_VBP_LENGTH_SHIFT;
writel(val, d->regs + DSI_VID_VSIZE);
/*
* Horizontal frame parameters:
* horizontal resolution is given in pixels but must be re-calculated
* into bytes since this is what the hardware expects, these registers
* define the payload size of the packet.
*
* hfp = horizontal front porch in bytes
* hbp = horizontal back porch in bytes
* hsa = horizontal sync active in bytes
*
* 6 + 2 is HFP header + checksum
*/
hfp = (mode->hsync_start - mode->hdisplay) * cpp - 6 - 2;
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE) {
/*
* Use sync pulse for sync: explicit HSA time
* 6 is HBP header + checksum
* 4 is RGB header + checksum
*/
hbp = (mode->htotal - mode->hsync_end) * cpp - 4 - 6;
/*
* 6 is HBP header + checksum
* 4 is HSW packet bytes
* 4 is RGB header + checksum
*/
hsa = (mode->hsync_end - mode->hsync_start) * cpp - 4 - 4 - 6;
} else {
/*
* Use event for sync: HBP includes both back porch and sync
* 6 is HBP header + checksum
* 4 is HSW packet bytes
* 4 is RGB header + checksum
*/
hbp = (mode->htotal - mode->hsync_start) * cpp - 4 - 4 - 6;
/* HSA is not present in this mode and set to 0 */
hsa = 0;
}
if (hfp < 0) {
dev_info(d->dev, "hfp negative, set to 0\n");
hfp = 0;
}
if (hbp < 0) {
dev_info(d->dev, "hbp negative, set to 0\n");
hbp = 0;
}
if (hsa < 0) {
dev_info(d->dev, "hsa negative, set to 0\n");
hsa = 0;
}
dev_dbg(d->dev, "hfp: %u, hbp: %u, hsa: %u bytes\n",
hfp, hbp, hsa);
/* Frame parameters: horizontal sync active */
val = hsa << DSI_VID_HSIZE1_HSA_LENGTH_SHIFT;
/* horizontal back porch */
val |= hbp << DSI_VID_HSIZE1_HBP_LENGTH_SHIFT;
/* horizontal front porch */
val |= hfp << DSI_VID_HSIZE1_HFP_LENGTH_SHIFT;
writel(val, d->regs + DSI_VID_HSIZE1);
/* RGB data length (visible bytes on one scanline) */
val = mode->hdisplay * cpp;
writel(val, d->regs + DSI_VID_HSIZE2);
dev_dbg(d->dev, "RGB length, visible area on a line: %u bytes\n", val);
/*
* Calculate the time between two pixels in picoseconds using
* the supplied refresh rate and total resolution including
* porches and sync.
*/
/* (ps/s) / (pixels/s) = ps/pixels */
pclk = DIV_ROUND_UP_ULL(1000000000000, mode->clock);
dev_dbg(d->dev, "picoseconds between two pixels: %llu\n",
pclk);
/*
* How many bytes per line will this update frequency yield?
*
* Calculate the number of picoseconds for one scanline (1), then
* divide by 1000000000000 (2) to get in pixels per second we
* want to output.
*
* Multiply with number of bytes per second at this video display
* frequency (3) to get number of bytes transferred during this
* time. Notice that we use the frequency the display wants,
* not what we actually get from the DSI PLL, which is hs_freq.
*
* These arithmetics are done in a different order to avoid
* overflow.
*/
bpl = pclk * mode->htotal; /* (1) picoseconds per line */
dev_dbg(d->dev, "picoseconds per line: %llu\n", bpl);
/* Multiply with bytes per second (3) */
bpl *= (d->mdsi->hs_rate / 8);
/* Pixels per second (2) */
bpl = DIV_ROUND_DOWN_ULL(bpl, 1000000); /* microseconds */
bpl = DIV_ROUND_DOWN_ULL(bpl, 1000000); /* seconds */
/* parallel transactions in all lanes */
bpl *= d->mdsi->lanes;
dev_dbg(d->dev,
"calculated bytes per line: %llu @ %d Hz with HS %lu Hz\n",
bpl, drm_mode_vrefresh(mode), d->mdsi->hs_rate);
/*
* 6 is header + checksum, header = 4 bytes, checksum = 2 bytes
* 4 is short packet for vsync/hsync
*/
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE) {
/* Set the event packet size to 0 (not used) */
writel(0, d->regs + DSI_VID_BLKSIZE1);
/*
* FIXME: isn't the hsync width in pixels? The porch and
* sync area size is in pixels here, but this -6
* seems to be for bytes. It looks like this in the vendor
* code though. Is it completely untested?
*/
blkline_pck = bpl - (mode->hsync_end - mode->hsync_start) - 6;
val = blkline_pck << DSI_VID_BLKSIZE2_BLKLINE_PULSE_PCK_SHIFT;
writel(val, d->regs + DSI_VID_BLKSIZE2);
} else {
/* Set the sync pulse packet size to 0 (not used) */
writel(0, d->regs + DSI_VID_BLKSIZE2);
/* Specifying payload size in bytes (-4-6 from manual) */
blkline_pck = bpl - 4 - 6;
if (blkline_pck > 0x1FFF)
dev_err(d->dev, "blkline_pck too big %d bytes\n",
blkline_pck);
val = blkline_pck << DSI_VID_BLKSIZE1_BLKLINE_EVENT_PCK_SHIFT;
val &= DSI_VID_BLKSIZE1_BLKLINE_EVENT_PCK_MASK;
writel(val, d->regs + DSI_VID_BLKSIZE1);
}
/*
* The line duration is used to scale back the frequency from
* the max frequency supported by the HS clock to the desired
* update frequency in vrefresh.
*/
line_duration = blkline_pck + 6;
/*
* The datasheet specifies a complex condition for decreasing
* the line duration by 1 under very specific circumstances.
* Here we also imply that LP is used during burst EOL.
*/
if (d->mdsi->lanes == 2 && (hsa & 0x01) && (hfp & 0x01)
&& (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST))
line_duration--;
line_duration = DIV_ROUND_CLOSEST(line_duration, d->mdsi->lanes);
dev_dbg(d->dev, "line duration %u bytes\n", line_duration);
val = line_duration << DSI_VID_DPHY_TIME_REG_LINE_DURATION_SHIFT;
/*
* This is the time to perform LP->HS on D-PHY
* FIXME: nowhere to get this from: DT property on the DSI?
* The manual says this is "system dependent".
* values like 48 and 72 seen in the vendor code.
*/
val |= 48 << DSI_VID_DPHY_TIME_REG_WAKEUP_TIME_SHIFT;
writel(val, d->regs + DSI_VID_DPHY_TIME);
/*
* See the manual figure 657 page 2203 for understanding the impact
* of the different burst mode settings.
*/
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) {
int blkeol_pck, blkeol_duration;
/*
* Packet size at EOL for burst mode, this is only used
* if DSI_VID_MAIN_CTL_REG_BLKEOL_MODE_LP_0 is NOT set,
* but we instead send NULL or blanking packets at EOL.
* This is given in number of bytes.
*
* See the manual page 2198 for the 13 reg_blkeol_pck bits.
*/
blkeol_pck = bpl - (mode->htotal * cpp) - 6;
if (blkeol_pck < 0) {
dev_err(d->dev, "video block does not fit on line!\n");
dev_err(d->dev,
"calculated bytes per line: %llu @ %d Hz\n",
bpl, drm_mode_vrefresh(mode));
dev_err(d->dev,
"bytes per line (blkline_pck) %u bytes\n",
blkline_pck);
dev_err(d->dev,
"blkeol_pck becomes %d bytes\n", blkeol_pck);
return;
}
dev_dbg(d->dev, "BLKEOL packet: %d bytes\n", blkeol_pck);
val = readl(d->regs + DSI_VID_BLKSIZE1);
val &= ~DSI_VID_BLKSIZE1_BLKEOL_PCK_MASK;
val |= blkeol_pck << DSI_VID_BLKSIZE1_BLKEOL_PCK_SHIFT;
writel(val, d->regs + DSI_VID_BLKSIZE1);
/* Use the same value for exact burst limit */
val = blkeol_pck <<
DSI_VID_VCA_SETTING2_EXACT_BURST_LIMIT_SHIFT;
val &= DSI_VID_VCA_SETTING2_EXACT_BURST_LIMIT_MASK;
writel(val, d->regs + DSI_VID_VCA_SETTING2);
/*
* This BLKEOL duration is claimed to be the duration in clock
* cycles of the BLLP end-of-line (EOL) period for each line if
* DSI_VID_MAIN_CTL_REG_BLKEOL_MODE_LP_0 is set.
*
* It is hard to trust the manuals' claim that this is in clock
* cycles as we mimic the behaviour of the vendor code, which
* appears to write a number of bytes that would have been
* transferred on a single lane.
*
* See the manual figure 657 page 2203 and page 2198 for the 13
* reg_blkeol_duration bits.
*
* FIXME: should this also be set up for non-burst mode
* according to figure 565 page 2202?
*/
blkeol_duration = DIV_ROUND_CLOSEST(blkeol_pck + 6,
d->mdsi->lanes);
dev_dbg(d->dev, "BLKEOL duration: %d clock cycles\n",
blkeol_duration);
val = readl(d->regs + DSI_VID_PCK_TIME);
val &= ~DSI_VID_PCK_TIME_BLKEOL_DURATION_MASK;
val |= blkeol_duration <<
DSI_VID_PCK_TIME_BLKEOL_DURATION_SHIFT;
writel(val, d->regs + DSI_VID_PCK_TIME);
/* Max burst limit, this is given in bytes */
val = readl(d->regs + DSI_VID_VCA_SETTING1);
val &= ~DSI_VID_VCA_SETTING1_MAX_BURST_LIMIT_MASK;
val |= (blkeol_pck - 6) <<
DSI_VID_VCA_SETTING1_MAX_BURST_LIMIT_SHIFT;
writel(val, d->regs + DSI_VID_VCA_SETTING1);
}
/* Maximum line limit */
val = readl(d->regs + DSI_VID_VCA_SETTING2);
val &= ~DSI_VID_VCA_SETTING2_MAX_LINE_LIMIT_MASK;
val |= (blkline_pck - 6) <<
DSI_VID_VCA_SETTING2_MAX_LINE_LIMIT_SHIFT;
writel(val, d->regs + DSI_VID_VCA_SETTING2);
dev_dbg(d->dev, "blkline pck: %d bytes\n", blkline_pck - 6);
}
static void mcde_dsi_start(struct mcde_dsi *d)
{
unsigned long hs_freq;
u32 val;
int i;
/* No integration mode */
writel(0, d->regs + DSI_MCTL_INTEGRATION_MODE);
/* Enable the DSI port, from drivers/video/mcde/dsilink_v2.c */
val = DSI_MCTL_MAIN_DATA_CTL_LINK_EN |
DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
DSI_MCTL_MAIN_DATA_CTL_READ_EN |
DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
/* Set a high command timeout, clear other fields */
val = 0x3ff << DSI_CMD_MODE_CTL_TE_TIMEOUT_SHIFT;
writel(val, d->regs + DSI_CMD_MODE_CTL);
/*
* UI_X4 is described as "unit interval times four"
* I guess since DSI packets are 4 bytes wide, one unit
* is one byte.
*/
hs_freq = clk_get_rate(d->hs_clk);
hs_freq /= 1000000; /* MHz */
val = 4000 / hs_freq;
dev_dbg(d->dev, "UI value: %d\n", val);
val <<= DSI_MCTL_DPHY_STATIC_UI_X4_SHIFT;
val &= DSI_MCTL_DPHY_STATIC_UI_X4_MASK;
writel(val, d->regs + DSI_MCTL_DPHY_STATIC);
/*
* Enable clocking: 0x0f (something?) between each burst,
* enable the second lane if needed, enable continuous clock if
* needed, enable switch into ULPM (ultra-low power mode) on
* all the lines.
*/
val = 0x0f << DSI_MCTL_MAIN_PHY_CTL_WAIT_BURST_TIME_SHIFT;
if (d->mdsi->lanes == 2)
val |= DSI_MCTL_MAIN_PHY_CTL_LANE2_EN;
if (!(d->mdsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
val |= DSI_MCTL_MAIN_PHY_CTL_CLK_CONTINUOUS;
val |= DSI_MCTL_MAIN_PHY_CTL_CLK_ULPM_EN |
DSI_MCTL_MAIN_PHY_CTL_DAT1_ULPM_EN |
DSI_MCTL_MAIN_PHY_CTL_DAT2_ULPM_EN;
writel(val, d->regs + DSI_MCTL_MAIN_PHY_CTL);
val = (1 << DSI_MCTL_ULPOUT_TIME_CKLANE_ULPOUT_TIME_SHIFT) |
(1 << DSI_MCTL_ULPOUT_TIME_DATA_ULPOUT_TIME_SHIFT);
writel(val, d->regs + DSI_MCTL_ULPOUT_TIME);
writel(DSI_DPHY_LANES_TRIM_DPHY_SPECS_90_81B_0_90,
d->regs + DSI_DPHY_LANES_TRIM);
/* High PHY timeout */
val = (0x0f << DSI_MCTL_DPHY_TIMEOUT_CLK_DIV_SHIFT) |
(0x3fff << DSI_MCTL_DPHY_TIMEOUT_HSTX_TO_VAL_SHIFT) |
(0x3fff << DSI_MCTL_DPHY_TIMEOUT_LPRX_TO_VAL_SHIFT);
writel(val, d->regs + DSI_MCTL_DPHY_TIMEOUT);
val = DSI_MCTL_MAIN_EN_PLL_START |
DSI_MCTL_MAIN_EN_CKLANE_EN |
DSI_MCTL_MAIN_EN_DAT1_EN |
DSI_MCTL_MAIN_EN_IF1_EN;
if (d->mdsi->lanes == 2)
val |= DSI_MCTL_MAIN_EN_DAT2_EN;
writel(val, d->regs + DSI_MCTL_MAIN_EN);
/* Wait for the PLL to lock and the clock and data lines to come up */
i = 0;
val = DSI_MCTL_MAIN_STS_PLL_LOCK |
DSI_MCTL_MAIN_STS_CLKLANE_READY |
DSI_MCTL_MAIN_STS_DAT1_READY;
if (d->mdsi->lanes == 2)
val |= DSI_MCTL_MAIN_STS_DAT2_READY;
while ((readl(d->regs + DSI_MCTL_MAIN_STS) & val) != val) {
/* Sleep for a millisecond */
usleep_range(1000, 1500);
if (i++ == 100) {
dev_warn(d->dev, "DSI lanes did not start up\n");
return;
}
}
/* TODO needed? */
/* Command mode, clear IF1 ID */
val = readl(d->regs + DSI_CMD_MODE_CTL);
/*
* If we enable low-power mode here, with
* val |= DSI_CMD_MODE_CTL_IF1_LP_EN
* then display updates become really slow.
*/
val &= ~DSI_CMD_MODE_CTL_IF1_ID_MASK;
writel(val, d->regs + DSI_CMD_MODE_CTL);
/* Wait for DSI PHY to initialize */
usleep_range(100, 200);
dev_info(d->dev, "DSI link enabled\n");
}
static void mcde_dsi_bridge_enable(struct drm_bridge *bridge)
{
struct mcde_dsi *d = bridge_to_mcde_dsi(bridge);
u32 val;
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO) {
/* Enable video mode */
val = readl(d->regs + DSI_MCTL_MAIN_DATA_CTL);
val |= DSI_MCTL_MAIN_DATA_CTL_VID_EN;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
}
dev_info(d->dev, "enable DSI master\n");
}
static void mcde_dsi_bridge_pre_enable(struct drm_bridge *bridge)
{
struct mcde_dsi *d = bridge_to_mcde_dsi(bridge);
unsigned long hs_freq, lp_freq;
u32 val;
int ret;
/* Copy maximum clock frequencies */
if (d->mdsi->lp_rate)
lp_freq = d->mdsi->lp_rate;
else
lp_freq = DSI_DEFAULT_LP_FREQ_HZ;
if (d->mdsi->hs_rate)
hs_freq = d->mdsi->hs_rate;
else
hs_freq = DSI_DEFAULT_HS_FREQ_HZ;
/* Enable LP (Low Power, Energy Save, ES) and HS (High Speed) clocks */
d->lp_freq = clk_round_rate(d->lp_clk, lp_freq);
ret = clk_set_rate(d->lp_clk, d->lp_freq);
if (ret)
dev_err(d->dev, "failed to set LP clock rate %lu Hz\n",
d->lp_freq);
d->hs_freq = clk_round_rate(d->hs_clk, hs_freq);
ret = clk_set_rate(d->hs_clk, d->hs_freq);
if (ret)
dev_err(d->dev, "failed to set HS clock rate %lu Hz\n",
d->hs_freq);
/* Start clocks */
ret = clk_prepare_enable(d->lp_clk);
if (ret)
dev_err(d->dev, "failed to enable LP clock\n");
else
dev_info(d->dev, "DSI LP clock rate %lu Hz\n",
d->lp_freq);
ret = clk_prepare_enable(d->hs_clk);
if (ret)
dev_err(d->dev, "failed to enable HS clock\n");
else
dev_info(d->dev, "DSI HS clock rate %lu Hz\n",
d->hs_freq);
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO) {
/* Put IF1 into video mode */
val = readl(d->regs + DSI_MCTL_MAIN_DATA_CTL);
val |= DSI_MCTL_MAIN_DATA_CTL_IF1_MODE;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
/* Disable command mode on IF1 */
val = readl(d->regs + DSI_CMD_MODE_CTL);
val &= ~DSI_CMD_MODE_CTL_IF1_LP_EN;
writel(val, d->regs + DSI_CMD_MODE_CTL);
/* Enable some error interrupts */
val = readl(d->regs + DSI_VID_MODE_STS_CTL);
val |= DSI_VID_MODE_STS_CTL_ERR_MISSING_VSYNC;
val |= DSI_VID_MODE_STS_CTL_ERR_MISSING_DATA;
writel(val, d->regs + DSI_VID_MODE_STS_CTL);
} else {
/* Command mode, clear IF1 ID */
val = readl(d->regs + DSI_CMD_MODE_CTL);
/*
* If we enable low-power mode here with
* val |= DSI_CMD_MODE_CTL_IF1_LP_EN
* the display updates become really slow.
*/
val &= ~DSI_CMD_MODE_CTL_IF1_ID_MASK;
writel(val, d->regs + DSI_CMD_MODE_CTL);
}
}
static void mcde_dsi_bridge_mode_set(struct drm_bridge *bridge,
const struct drm_display_mode *mode,
const struct drm_display_mode *adj)
{
struct mcde_dsi *d = bridge_to_mcde_dsi(bridge);
if (!d->mdsi) {
dev_err(d->dev, "no DSI device attached to encoder!\n");
return;
}
dev_info(d->dev, "set DSI master to %dx%d %u Hz %s mode\n",
mode->hdisplay, mode->vdisplay, mode->clock * 1000,
(d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO) ? "VIDEO" : "CMD"
);
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO)
mcde_dsi_setup_video_mode(d, mode);
}
static void mcde_dsi_wait_for_command_mode_stop(struct mcde_dsi *d)
{
u32 val;
int i;
/*
* Wait until we get out of command mode
* CSM = Command State Machine
*/
i = 0;
val = DSI_CMD_MODE_STS_CSM_RUNNING;
while ((readl(d->regs + DSI_CMD_MODE_STS) & val) == val) {
/* Sleep for a millisecond */
usleep_range(1000, 2000);
if (i++ == 100) {
dev_warn(d->dev,
"could not get out of command mode\n");
return;
}
}
}
static void mcde_dsi_wait_for_video_mode_stop(struct mcde_dsi *d)
{
u32 val;
int i;
/* Wait until we get out of video mode */
i = 0;
val = DSI_VID_MODE_STS_VSG_RUNNING;
while ((readl(d->regs + DSI_VID_MODE_STS) & val) == val) {
/* Sleep for a millisecond */
usleep_range(1000, 2000);
if (i++ == 100) {
dev_warn(d->dev,
"could not get out of video mode\n");
return;
}
}
}
static void mcde_dsi_bridge_disable(struct drm_bridge *bridge)
{
struct mcde_dsi *d = bridge_to_mcde_dsi(bridge);
u32 val;
/* Disable all error interrupts */
writel(0, d->regs + DSI_VID_MODE_STS_CTL);
if (d->mdsi->mode_flags & MIPI_DSI_MODE_VIDEO) {
/* Stop video mode */
val = readl(d->regs + DSI_MCTL_MAIN_DATA_CTL);
val &= ~DSI_MCTL_MAIN_DATA_CTL_VID_EN;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
mcde_dsi_wait_for_video_mode_stop(d);
} else {
/* Stop command mode */
mcde_dsi_wait_for_command_mode_stop(d);
}
/* Stop clocks */
clk_disable_unprepare(d->hs_clk);
clk_disable_unprepare(d->lp_clk);
}
static int mcde_dsi_bridge_attach(struct drm_bridge *bridge,
enum drm_bridge_attach_flags flags)
{
struct mcde_dsi *d = bridge_to_mcde_dsi(bridge);
struct drm_device *drm = bridge->dev;
int ret;
if (!drm_core_check_feature(drm, DRIVER_ATOMIC)) {
dev_err(d->dev, "we need atomic updates\n");
return -ENOTSUPP;
}
/* Attach the DSI bridge to the output (panel etc) bridge */
ret = drm_bridge_attach(bridge->encoder, d->bridge_out, bridge, flags);
if (ret) {
dev_err(d->dev, "failed to attach the DSI bridge\n");
return ret;
}
return 0;
}
static const struct drm_bridge_funcs mcde_dsi_bridge_funcs = {
.attach = mcde_dsi_bridge_attach,
.mode_set = mcde_dsi_bridge_mode_set,
.disable = mcde_dsi_bridge_disable,
.enable = mcde_dsi_bridge_enable,
.pre_enable = mcde_dsi_bridge_pre_enable,
};
static int mcde_dsi_bind(struct device *dev, struct device *master,
void *data)
{
struct drm_device *drm = data;
struct mcde *mcde = to_mcde(drm);
struct mcde_dsi *d = dev_get_drvdata(dev);
struct device_node *child;
struct drm_panel *panel = NULL;
struct drm_bridge *bridge = NULL;
if (!of_get_available_child_count(dev->of_node)) {
dev_info(dev, "unused DSI interface\n");
d->unused = true;
return 0;
}
d->mcde = mcde;
/* If the display attached before binding, set this up */
if (d->mdsi)
mcde_dsi_attach_to_mcde(d);
/* Obtain the clocks */
d->hs_clk = devm_clk_get(dev, "hs");
if (IS_ERR(d->hs_clk)) {
dev_err(dev, "unable to get HS clock\n");
return PTR_ERR(d->hs_clk);
}
d->lp_clk = devm_clk_get(dev, "lp");
if (IS_ERR(d->lp_clk)) {
dev_err(dev, "unable to get LP clock\n");
return PTR_ERR(d->lp_clk);
}
/* Assert RESET through the PRCMU, active low */
/* FIXME: which DSI block? */
regmap_update_bits(d->prcmu, PRCM_DSI_SW_RESET,
PRCM_DSI_SW_RESET_DSI0_SW_RESETN, 0);
usleep_range(100, 200);
/* De-assert RESET again */
regmap_update_bits(d->prcmu, PRCM_DSI_SW_RESET,
PRCM_DSI_SW_RESET_DSI0_SW_RESETN,
PRCM_DSI_SW_RESET_DSI0_SW_RESETN);
/* Start up the hardware */
mcde_dsi_start(d);
/* Look for a panel as a child to this node */
for_each_available_child_of_node(dev->of_node, child) {
panel = of_drm_find_panel(child);
if (IS_ERR(panel)) {
dev_err(dev, "failed to find panel, trying bridge (%ld)\n",
PTR_ERR(panel));
panel = NULL;
bridge = of_drm_find_bridge(child);
if (!bridge) {
dev_err(dev, "failed to find bridge\n");
return -EINVAL;
}
}
}
if (panel) {
bridge = drm_panel_bridge_add_typed(panel,
DRM_MODE_CONNECTOR_DSI);
if (IS_ERR(bridge)) {
dev_err(dev, "error adding panel bridge\n");
return PTR_ERR(bridge);
}
dev_info(dev, "connected to panel\n");
d->panel = panel;
} else if (bridge) {
/* TODO: AV8100 HDMI encoder goes here for example */
dev_info(dev, "connected to non-panel bridge (unsupported)\n");
return -ENODEV;
} else {
dev_err(dev, "no panel or bridge\n");
return -ENODEV;
}
d->bridge_out = bridge;
/* Create a bridge for this DSI channel */
d->bridge.funcs = &mcde_dsi_bridge_funcs;
d->bridge.of_node = dev->of_node;
drm_bridge_add(&d->bridge);
/* TODO: first come first serve, use a list */
mcde->bridge = &d->bridge;
dev_info(dev, "initialized MCDE DSI bridge\n");
return 0;
}
static void mcde_dsi_unbind(struct device *dev, struct device *master,
void *data)
{
struct mcde_dsi *d = dev_get_drvdata(dev);
if (d->panel)
drm_panel_bridge_remove(d->bridge_out);
regmap_update_bits(d->prcmu, PRCM_DSI_SW_RESET,
PRCM_DSI_SW_RESET_DSI0_SW_RESETN, 0);
}
static const struct component_ops mcde_dsi_component_ops = {
.bind = mcde_dsi_bind,
.unbind = mcde_dsi_unbind,
};
static int mcde_dsi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mcde_dsi *d;
struct mipi_dsi_host *host;
struct resource *res;
u32 dsi_id;
int ret;
d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
if (!d)
return -ENOMEM;
d->dev = dev;
platform_set_drvdata(pdev, d);
/* Get a handle on the PRCMU so we can do reset */
d->prcmu =
syscon_regmap_lookup_by_compatible("stericsson,db8500-prcmu");
if (IS_ERR(d->prcmu)) {
dev_err(dev, "no PRCMU regmap\n");
return PTR_ERR(d->prcmu);
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
d->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(d->regs)) {
dev_err(dev, "no DSI regs\n");
return PTR_ERR(d->regs);
}
dsi_id = readl(d->regs + DSI_ID_REG);
dev_info(dev, "HW revision 0x%08x\n", dsi_id);
host = &d->dsi_host;
host->dev = dev;
host->ops = &mcde_dsi_host_ops;
ret = mipi_dsi_host_register(host);
if (ret < 0) {
dev_err(dev, "failed to register DSI host: %d\n", ret);
return ret;
}
dev_info(dev, "registered DSI host\n");
return component_add(dev, &mcde_dsi_component_ops);
}
static int mcde_dsi_remove(struct platform_device *pdev)
{
struct mcde_dsi *d = platform_get_drvdata(pdev);
component_del(&pdev->dev, &mcde_dsi_component_ops);
mipi_dsi_host_unregister(&d->dsi_host);
return 0;
}
static const struct of_device_id mcde_dsi_of_match[] = {
{
.compatible = "ste,mcde-dsi",
},
{},
};
struct platform_driver mcde_dsi_driver = {
.driver = {
.name = "mcde-dsi",
.of_match_table = of_match_ptr(mcde_dsi_of_match),
},
.probe = mcde_dsi_probe,
.remove = mcde_dsi_remove,
};
-- source include/not_embedded.inc
-- source ../include/ps_truncate_all_tables.inc
DESC sys.x$ps_schema_table_statistics_io;
# Ensure structure changes don't slip in
DESC sys.x$ps_schema_table_statistics_io;
# Make sure view select does not error, but ignore results
--disable_result_log
SELECT * FROM sys.x$ps_schema_table_statistics_io;
--enable_result_log
# Ensure structure changes don't slip in
DESC sys.x$ps_schema_table_statistics_io;
# Make sure view select does not error, but ignore results
--disable_result_log
SELECT * FROM sys.x$ps_schema_table_statistics_io;
--enable_result_log
using System.Resources;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("ConsoleUtils")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("ConsoleUtils")]
[assembly: AssemblyCopyright("Copyright © 2017")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]
// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("1e44f5eb-ad5f-4def-a9fb-e754ab732390")]
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: NeutralResourcesLanguage("en")]
include AUTHORS.rst CHANGES.rst LICENSE.rst README.rst
/*
(C) Copyright IBM Corp. 2007, 2008
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of IBM nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
*/
#include <string.h>
#include <stddef.h>
#include "ea_internal.h"
#include <ea.h>
#include <spu_cache.h>
COMPAT_EA_ALIAS (memcmp_ea);
int
memcmp_ea (__ea void *s1, __ea const void *s2, size_ea_t n)
{
__ea void *curr_s1 = s1;
__ea void *curr_s2 = (__ea void *) s2;
void *l_s1;
void *l_s2;
size_ea_t local_n;
size_ea_t s2_n;
size_ea_t s1_n;
int ret;
ret = 0;
while (n)
{
l_s2 = __cache_fetch (curr_s2);
l_s1 = __cache_fetch (curr_s1);
/*
* Use the smaller of the size left to compare (n), the space left in
* s2 cacheline (s2_n), or the space left in the s1 cacheline (s1_n).
*/
s2_n = ROUND_UP_NEXT_128 ((size_ea_t) curr_s2) - (size_ea_t) curr_s2;
s1_n = ROUND_UP_NEXT_128 ((size_ea_t) curr_s1) - (size_ea_t) curr_s1;
local_n = three_way_min (s2_n, s1_n, n);
ret = memcmp (l_s1, l_s2, local_n);
if (ret)
return ret;
/* update values to take into account what we compared */
curr_s2 += local_n;
curr_s1 += local_n;
n -= local_n;
}
return ret;
}
// Copyright (c) 1994-2006 Sun Microsystems Inc.
// All Rights Reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions
// are met:
//
// - Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// - Redistribution in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the
// distribution.
//
// - Neither the name of Sun Microsystems or the names of contributors may
// be used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
// FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
// COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
// INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
// HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
// OF THE POSSIBILITY OF SUCH DAMAGE.
// The original source code covered by the above license above has been modified
// significantly by Google Inc.
// Copyright 2014 the V8 project authors. All rights reserved.
#ifndef V8_CODEGEN_PPC_ASSEMBLER_PPC_INL_H_
#define V8_CODEGEN_PPC_ASSEMBLER_PPC_INL_H_
#include "src/codegen/ppc/assembler-ppc.h"
#include "src/codegen/assembler.h"
#include "src/debug/debug.h"
#include "src/objects/objects-inl.h"
namespace v8 {
namespace internal {
bool CpuFeatures::SupportsOptimizer() { return true; }
bool CpuFeatures::SupportsWasmSimd128() { return false; }
void RelocInfo::apply(intptr_t delta) {
// absolute code pointer inside code object moves with the code object.
if (IsInternalReference(rmode_)) {
// Jump table entry
Address target = Memory<Address>(pc_);
Memory<Address>(pc_) = target + delta;
} else {
// mov sequence
DCHECK(IsInternalReferenceEncoded(rmode_));
Address target = Assembler::target_address_at(pc_, constant_pool_);
Assembler::set_target_address_at(pc_, constant_pool_, target + delta,
SKIP_ICACHE_FLUSH);
}
}
Address RelocInfo::target_internal_reference() {
if (IsInternalReference(rmode_)) {
// Jump table entry
return Memory<Address>(pc_);
} else {
// mov sequence
DCHECK(IsInternalReferenceEncoded(rmode_));
return Assembler::target_address_at(pc_, constant_pool_);
}
}
Address RelocInfo::target_internal_reference_address() {
DCHECK(IsInternalReference(rmode_) || IsInternalReferenceEncoded(rmode_));
return pc_;
}
Address RelocInfo::target_address() {
DCHECK(IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_) || IsWasmCall(rmode_));
return Assembler::target_address_at(pc_, constant_pool_);
}
Address RelocInfo::target_address_address() {
DCHECK(HasTargetAddressAddress());
if (FLAG_enable_embedded_constant_pool &&
Assembler::IsConstantPoolLoadStart(pc_)) {
// We return the PC for embedded constant pool since this function is used
// by the serializer and expects the address to reside within the code
// object.
return pc_;
}
// Read the address of the word containing the target_address in an
// instruction stream.
// The only architecture-independent user of this function is the serializer.
// The serializer uses it to find out how many raw bytes of instruction to
// output before the next target.
// For an instruction like LIS/ORI where the target bits are mixed into the
// instruction bits, the size of the target will be zero, indicating that the
// serializer should not step forward in memory after a target is resolved
// and written.
return pc_;
}
Address RelocInfo::constant_pool_entry_address() {
if (FLAG_enable_embedded_constant_pool) {
DCHECK(constant_pool_);
ConstantPoolEntry::Access access;
if (Assembler::IsConstantPoolLoadStart(pc_, &access))
return Assembler::target_constant_pool_address_at(
pc_, constant_pool_, access, ConstantPoolEntry::INTPTR);
}
UNREACHABLE();
}
int RelocInfo::target_address_size() { return Assembler::kSpecialTargetSize; }
HeapObject RelocInfo::target_object() {
DCHECK(IsCodeTarget(rmode_) || rmode_ == FULL_EMBEDDED_OBJECT);
return HeapObject::cast(
Object(Assembler::target_address_at(pc_, constant_pool_)));
}
HeapObject RelocInfo::target_object_no_host(Isolate* isolate) {
return target_object();
}
Handle<HeapObject> RelocInfo::target_object_handle(Assembler* origin) {
DCHECK(IsCodeTarget(rmode_) || rmode_ == FULL_EMBEDDED_OBJECT);
return Handle<HeapObject>(reinterpret_cast<Address*>(
Assembler::target_address_at(pc_, constant_pool_)));
}
void RelocInfo::set_target_object(Heap* heap, HeapObject target,
WriteBarrierMode write_barrier_mode,
ICacheFlushMode icache_flush_mode) {
DCHECK(IsCodeTarget(rmode_) || rmode_ == FULL_EMBEDDED_OBJECT);
Assembler::set_target_address_at(pc_, constant_pool_, target.ptr(),
icache_flush_mode);
if (write_barrier_mode == UPDATE_WRITE_BARRIER && !host().is_null()) {
WriteBarrierForCode(host(), this, target);
}
}
Address RelocInfo::target_external_reference() {
DCHECK(rmode_ == EXTERNAL_REFERENCE);
return Assembler::target_address_at(pc_, constant_pool_);
}
void RelocInfo::set_target_external_reference(
Address target, ICacheFlushMode icache_flush_mode) {
DCHECK(rmode_ == RelocInfo::EXTERNAL_REFERENCE);
Assembler::set_target_address_at(pc_, constant_pool_, target,
icache_flush_mode);
}
Address RelocInfo::target_runtime_entry(Assembler* origin) {
DCHECK(IsRuntimeEntry(rmode_));
return target_address();
}
void RelocInfo::set_target_runtime_entry(Address target,
WriteBarrierMode write_barrier_mode,
ICacheFlushMode icache_flush_mode) {
DCHECK(IsRuntimeEntry(rmode_));
if (target_address() != target)
set_target_address(target, write_barrier_mode, icache_flush_mode);
}
Address RelocInfo::target_off_heap_target() {
DCHECK(IsOffHeapTarget(rmode_));
return Assembler::target_address_at(pc_, constant_pool_);
}
void RelocInfo::WipeOut() {
DCHECK(IsFullEmbeddedObject(rmode_) || IsCodeTarget(rmode_) ||
IsRuntimeEntry(rmode_) || IsExternalReference(rmode_) ||
IsInternalReference(rmode_) || IsInternalReferenceEncoded(rmode_) ||
IsOffHeapTarget(rmode_));
if (IsInternalReference(rmode_)) {
// Jump table entry
Memory<Address>(pc_) = kNullAddress;
} else if (IsInternalReferenceEncoded(rmode_) || IsOffHeapTarget(rmode_)) {
// mov sequence
// Currently used only by deserializer, no need to flush.
Assembler::set_target_address_at(pc_, constant_pool_, kNullAddress,
SKIP_ICACHE_FLUSH);
} else {
Assembler::set_target_address_at(pc_, constant_pool_, kNullAddress);
}
}
Operand::Operand(Register rm) : rm_(rm), rmode_(RelocInfo::NONE) {}
void Assembler::UntrackBranch() {
DCHECK(!trampoline_emitted_);
DCHECK_GT(tracked_branch_count_, 0);
int count = --tracked_branch_count_;
if (count == 0) {
// Reset
next_trampoline_check_ = kMaxInt;
} else {
next_trampoline_check_ += kTrampolineSlotsSize;
}
}
// Fetch the 32bit value from the FIXED_SEQUENCE lis/ori
Address Assembler::target_address_at(Address pc, Address constant_pool) {
if (FLAG_enable_embedded_constant_pool && constant_pool) {
ConstantPoolEntry::Access access;
if (IsConstantPoolLoadStart(pc, &access))
return Memory<Address>(target_constant_pool_address_at(
pc, constant_pool, access, ConstantPoolEntry::INTPTR));
}
Instr instr1 = instr_at(pc);
Instr instr2 = instr_at(pc + kInstrSize);
// Interpret 2 instructions generated by lis/ori
if (IsLis(instr1) && IsOri(instr2)) {
#if V8_TARGET_ARCH_PPC64
Instr instr4 = instr_at(pc + (3 * kInstrSize));
Instr instr5 = instr_at(pc + (4 * kInstrSize));
// Assemble the 64 bit value.
uint64_t hi = (static_cast<uint32_t>((instr1 & kImm16Mask) << 16) |
static_cast<uint32_t>(instr2 & kImm16Mask));
uint64_t lo = (static_cast<uint32_t>((instr4 & kImm16Mask) << 16) |
static_cast<uint32_t>(instr5 & kImm16Mask));
return static_cast<Address>((hi << 32) | lo);
#else
// Assemble the 32 bit value.
return static_cast<Address>(((instr1 & kImm16Mask) << 16) |
(instr2 & kImm16Mask));
#endif
}
UNREACHABLE();
}
#if V8_TARGET_ARCH_PPC64
const uint32_t kLoadIntptrOpcode = LD;
#else
const uint32_t kLoadIntptrOpcode = LWZ;
#endif
// Constant pool load sequence detection:
// 1) REGULAR access:
// load <dst>, kConstantPoolRegister + <offset>
//
// 2) OVERFLOWED access:
// addis <scratch>, kConstantPoolRegister, <offset_high>
// load <dst>, <scratch> + <offset_low>
bool Assembler::IsConstantPoolLoadStart(Address pc,
ConstantPoolEntry::Access* access) {
Instr instr = instr_at(pc);
uint32_t opcode = instr & kOpcodeMask;
if (GetRA(instr) != kConstantPoolRegister) return false;
bool overflowed = (opcode == ADDIS);
#ifdef DEBUG
if (overflowed) {
opcode = instr_at(pc + kInstrSize) & kOpcodeMask;
}
DCHECK(opcode == kLoadIntptrOpcode || opcode == LFD);
#endif
if (access) {
*access = (overflowed ? ConstantPoolEntry::OVERFLOWED
: ConstantPoolEntry::REGULAR);
}
return true;
}
bool Assembler::IsConstantPoolLoadEnd(Address pc,
ConstantPoolEntry::Access* access) {
Instr instr = instr_at(pc);
uint32_t opcode = instr & kOpcodeMask;
bool overflowed = false;
if (!(opcode == kLoadIntptrOpcode || opcode == LFD)) return false;
if (GetRA(instr) != kConstantPoolRegister) {
instr = instr_at(pc - kInstrSize);
opcode = instr & kOpcodeMask;
if ((opcode != ADDIS) || GetRA(instr) != kConstantPoolRegister) {
return false;
}
overflowed = true;
}
if (access) {
*access = (overflowed ? ConstantPoolEntry::OVERFLOWED
: ConstantPoolEntry::REGULAR);
}
return true;
}
int Assembler::GetConstantPoolOffset(Address pc,
ConstantPoolEntry::Access access,
ConstantPoolEntry::Type type) {
bool overflowed = (access == ConstantPoolEntry::OVERFLOWED);
#ifdef DEBUG
ConstantPoolEntry::Access access_check =
static_cast<ConstantPoolEntry::Access>(-1);
DCHECK(IsConstantPoolLoadStart(pc, &access_check));
DCHECK(access_check == access);
#endif
int offset;
if (overflowed) {
offset = (instr_at(pc) & kImm16Mask) << 16;
offset += SIGN_EXT_IMM16(instr_at(pc + kInstrSize) & kImm16Mask);
DCHECK(!is_int16(offset));
} else {
offset = SIGN_EXT_IMM16((instr_at(pc) & kImm16Mask));
}
return offset;
}
void Assembler::PatchConstantPoolAccessInstruction(
int pc_offset, int offset, ConstantPoolEntry::Access access,
ConstantPoolEntry::Type type) {
Address pc = reinterpret_cast<Address>(buffer_start_) + pc_offset;
bool overflowed = (access == ConstantPoolEntry::OVERFLOWED);
CHECK(overflowed != is_int16(offset));
#ifdef DEBUG
ConstantPoolEntry::Access access_check =
static_cast<ConstantPoolEntry::Access>(-1);
DCHECK(IsConstantPoolLoadStart(pc, &access_check));
DCHECK(access_check == access);
#endif
if (overflowed) {
int hi_word = static_cast<int>(offset >> 16);
int lo_word = static_cast<int>(offset & 0xffff);
if (lo_word & 0x8000) hi_word++;
Instr instr1 = instr_at(pc);
Instr instr2 = instr_at(pc + kInstrSize);
instr1 &= ~kImm16Mask;
instr1 |= (hi_word & kImm16Mask);
instr2 &= ~kImm16Mask;
instr2 |= (lo_word & kImm16Mask);
instr_at_put(pc, instr1);
instr_at_put(pc + kInstrSize, instr2);
} else {
Instr instr = instr_at(pc);
instr &= ~kImm16Mask;
instr |= (offset & kImm16Mask);
instr_at_put(pc, instr);
}
}
Address Assembler::target_constant_pool_address_at(
Address pc, Address constant_pool, ConstantPoolEntry::Access access,
ConstantPoolEntry::Type type) {
Address addr = constant_pool;
DCHECK(addr);
addr += GetConstantPoolOffset(pc, access, type);
return addr;
}
// This sets the branch destination (which gets loaded at the call address).
// This is for calls and branches within generated code. The serializer
// has already deserialized the mov instructions etc.
// There is a FIXED_SEQUENCE assumption here
void Assembler::deserialization_set_special_target_at(
Address instruction_payload, Code code, Address target) {
set_target_address_at(instruction_payload,
!code.is_null() ? code.constant_pool() : kNullAddress,
target);
}
int Assembler::deserialization_special_target_size(
Address instruction_payload) {
return kSpecialTargetSize;
}
void Assembler::deserialization_set_target_internal_reference_at(
Address pc, Address target, RelocInfo::Mode mode) {
if (RelocInfo::IsInternalReferenceEncoded(mode)) {
set_target_address_at(pc, kNullAddress, target, SKIP_ICACHE_FLUSH);
} else {
Memory<Address>(pc) = target;
}
}
// This code assumes the FIXED_SEQUENCE of lis/ori
void Assembler::set_target_address_at(Address pc, Address constant_pool,
Address target,
ICacheFlushMode icache_flush_mode) {
if (FLAG_enable_embedded_constant_pool && constant_pool) {
ConstantPoolEntry::Access access;
if (IsConstantPoolLoadStart(pc, &access)) {
Memory<Address>(target_constant_pool_address_at(
pc, constant_pool, access, ConstantPoolEntry::INTPTR)) = target;
return;
}
}
Instr instr1 = instr_at(pc);
Instr instr2 = instr_at(pc + kInstrSize);
// Interpret 2 instructions generated by lis/ori
if (IsLis(instr1) && IsOri(instr2)) {
#if V8_TARGET_ARCH_PPC64
Instr instr4 = instr_at(pc + (3 * kInstrSize));
Instr instr5 = instr_at(pc + (4 * kInstrSize));
// Needs to be fixed up when mov changes to handle 64-bit values.
uint32_t* p = reinterpret_cast<uint32_t*>(pc);
uintptr_t itarget = static_cast<uintptr_t>(target);
instr5 &= ~kImm16Mask;
instr5 |= itarget & kImm16Mask;
itarget = itarget >> 16;
instr4 &= ~kImm16Mask;
instr4 |= itarget & kImm16Mask;
itarget = itarget >> 16;
instr2 &= ~kImm16Mask;
instr2 |= itarget & kImm16Mask;
itarget = itarget >> 16;
instr1 &= ~kImm16Mask;
instr1 |= itarget & kImm16Mask;
itarget = itarget >> 16;
*p = instr1;
*(p + 1) = instr2;
*(p + 3) = instr4;
*(p + 4) = instr5;
if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
FlushInstructionCache(p, 5 * kInstrSize);
}
#else
uint32_t* p = reinterpret_cast<uint32_t*>(pc);
uint32_t itarget = static_cast<uint32_t>(target);
int lo_word = itarget & kImm16Mask;
int hi_word = itarget >> 16;
instr1 &= ~kImm16Mask;
instr1 |= hi_word;
instr2 &= ~kImm16Mask;
instr2 |= lo_word;
*p = instr1;
*(p + 1) = instr2;
if (icache_flush_mode != SKIP_ICACHE_FLUSH) {
FlushInstructionCache(p, 2 * kInstrSize);
}
#endif
return;
}
UNREACHABLE();
}
} // namespace internal
} // namespace v8
#endif // V8_CODEGEN_PPC_ASSEMBLER_PPC_INL_H_
/* $NoKeywords:$ */
/**
* @file
*
* Config Fch HwAcpi controller
*
* Init HwAcpi Controller features.
*
* @xrefitem bom "File Content Label" "Release Content"
* @e project: AGESA
* @e sub-project: FCH
* @e \$Revision: 63425 $ @e \$Date: 2011-12-22 11:24:10 -0600 (Thu, 22 Dec 2011) $
*
*/
/*
*****************************************************************************
*
* Copyright (c) 2008 - 2012, Advanced Micro Devices, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Advanced Micro Devices, Inc. nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL ADVANCED MICRO DEVICES, INC. BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
****************************************************************************
*/
#include "FchPlatform.h"
#include "amdlib.h"
#include "cpuServices.h"
#include "Filecode.h"
#define FILECODE PROC_FCH_HWACPI_FAMILY_HUDSON2_HUDSON2HWACPIMIDSERVICE_FILECODE
/*
* Copyright 2019 NAVER Corp.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.navercorp.pinpoint.common.trace;
import java.util.List;
/**
* @author HyunGil Jeong
*/
public class ServiceTypeProvider {
private static final ServiceTypeLocator UNREGISTERED = new ServiceTypeLocator() {
@Override
public ServiceType findServiceType(short code) {
throw new IllegalStateException("ServiceTypeRegistry not registered");
}
@Override
public ServiceType findServiceTypeByName(String name) {
throw new IllegalStateException("ServiceTypeRegistry not registered");
}
@Override
public List<ServiceType> findDesc(String name) {
throw new IllegalStateException("ServiceTypeRegistry not registered");
}
};
// must be non final : TraceMetadataRegistrar
private static ServiceTypeLocator registry = UNREGISTERED;
private ServiceTypeProvider() {
throw new AssertionError();
}
public static ServiceType getByCode(int serviceTypeCode) {
Short code = (short) serviceTypeCode;
ServiceType serviceType = registry.findServiceType(code);
if (ServiceType.UNDEFINED == serviceType) {
throw new IllegalStateException("Unknown ServiceType code: " + serviceTypeCode);
}
return serviceType;
}
public static ServiceType getByName(String serviceTypeName) {
ServiceType serviceType = registry.findServiceTypeByName(serviceTypeName);
if (ServiceType.UNDEFINED == serviceType) {
throw new IllegalStateException("Unknown ServiceType name: " + serviceTypeName);
}
return serviceType;
}
}
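The lookup pattern above — a locator that throws until a real registry is installed, then resolves service types by numeric code or by name, rejecting `UNDEFINED` — can be sketched in Python. All names below are illustrative, not part of Pinpoint's API:

```python
class ServiceTypeRegistry:
    """Minimal registry resolving service types by numeric code or name."""

    UNDEFINED = ("UNDEFINED", -1)  # sentinel, like ServiceType.UNDEFINED

    def __init__(self):
        self._by_code = {}
        self._by_name = {}

    def register(self, name, code):
        entry = (name, code)
        self._by_code[code] = entry
        self._by_name[name] = entry
        return entry

    def get_by_code(self, code):
        entry = self._by_code.get(code, self.UNDEFINED)
        if entry is self.UNDEFINED:
            raise LookupError("Unknown ServiceType code: %d" % code)
        return entry

    def get_by_name(self, name):
        entry = self._by_name.get(name, self.UNDEFINED)
        if entry is self.UNDEFINED:
            raise LookupError("Unknown ServiceType name: %s" % name)
        return entry
```

As in the Java class, unknown codes and names fail loudly rather than returning the sentinel to callers.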
/*
* RELIC is an Efficient LIbrary for Cryptography
* Copyright (C) 2007-2020 RELIC Authors
*
* This file is part of RELIC. RELIC is legal property of its developers,
* whose names are not listed here. Please refer to the COPYRIGHT file
* for contact information.
*
* RELIC is free software; you can redistribute it and/or modify it under the
* terms of the version 2.1 (or later) of the GNU Lesser General Public License
* as published by the Free Software Foundation; or version 2.0 of the Apache
* License as published by the Apache Software Foundation. See the LICENSE files
* for more details.
*
* RELIC is distributed in the hope that it will be useful, but WITHOUT ANY
* WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
* A PARTICULAR PURPOSE. See the LICENSE files for more details.
*
* You should have received a copy of the GNU Lesser General Public or the
* Apache License along with RELIC. If not, see <https://www.gnu.org/licenses/>
* or <https://www.apache.org/licenses/>.
*/
/**
* @file
*
* Implementation of the low-level binary field addition and subtraction
* functions.
*
* @ingroup fb
*/
#include <gmp.h>
#include "relic_fb.h"
#include "relic_fb_low.h"
/*============================================================================*/
/* Public definitions */
/*============================================================================*/
void fb_add1_low(dig_t *c, const dig_t *a, dig_t digit) {
int i;
(*c) = (*a) ^ digit;
c++;
a++;
for (i = 0; i < RLC_FB_DIGS - 1; i++, a++, c++)
(*c) = (*a);
}
void fb_addn_low(dig_t *c, const dig_t *a, const dig_t *b) {
mpn_xor_n(c, a, b, RLC_FB_DIGS);
}
void fb_addd_low(dig_t *c, const dig_t *a, const dig_t *b, int size) {
mpn_xor_n(c, a, b, size);
}
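Addition in a binary field GF(2^m) is carry-free, which is why `fb_addn_low` reduces to a word-wise XOR and why addition and subtraction are the same operation. A Python sketch of the same digit-vector operations (digit width and names are illustrative):

```python
def fb_add(c_digits, a_digits, b_digits):
    """XOR two equal-length digit vectors, storing the result in c_digits.

    In GF(2^m), addition and subtraction are both XOR, so no carries
    propagate between digits -- each word is independent.
    """
    for i in range(len(a_digits)):
        c_digits[i] = a_digits[i] ^ b_digits[i]

def fb_add1(c_digits, a_digits, digit):
    """Add a single digit into the lowest word, copying the rest (cf. fb_add1_low)."""
    c_digits[0] = a_digits[0] ^ digit
    c_digits[1:] = a_digits[1:]
```

A useful consequence: adding the same operand twice cancels, since `x ^ b ^ b == x`.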
// This file is part of MLDB. Copyright 2015 mldb.ai inc. All rights reserved.
/** maybe.h -*- C++ -*-
Jeremy Barnes, 3 July 2014
Copyright (C) 2014 mldb.ai inc. All rights reserved.
*/
#pragma once
#include <memory>
#include <utility>
#include "mldb/types/value_description_fwd.h"
namespace MLDB {
struct Dummy {
void dummy() {};
};
template<typename Val, typename None = void>
struct MaybeT {
MaybeT()
{
}
MaybeT(const Val & val)
: val_(new Val(val))
{
}
MaybeT(Val && val)
: val_(new Val(std::move(val)))
{
}
MaybeT(const None & err)
: err_(new None(err))
{
}
MaybeT(None && err)
: err_(new None(std::move(err)))
{
}
/** A maybe is null if there is neither a value nor an error, ie
it was created using the default constructor.
*/
bool isNull() const
{
return !val_ && !err_;
}
/** Operator bool support to know if there is a value there. */
typedef void (Dummy::* UnnamedBool) ();
operator UnnamedBool() const
{
return val_ ? &Dummy::dummy : nullptr;
}
const Val & val() const
{
ExcAssert(val_);
return *val_;
}
const None & err() const
{
ExcAssert(err_);
return *err_;
}
Val & val()
{
ExcAssert(val_);
return *val_;
}
None & err()
{
ExcAssert(err_);
return *err_;
}
std::unique_ptr<Val> val_;
std::unique_ptr<None> err_;
};
template<typename Val, typename None>
ValueDescriptionT<MaybeT<Val, None> > *
getDefaultDescription(MaybeT<Val, None> * = 0);
} // namespace MLDB
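`MaybeT` holds either a value, an error, or neither (null), and is truthy only when a value is present. A Python sketch of the same three-state holder (names illustrative; this sketch uses `None` as the empty marker, so falsy payloads like `0` are out of scope):

```python
class Maybe:
    """Holds a value, an error, or neither (null), mirroring MaybeT's states."""

    def __init__(self, val=None, err=None):
        assert val is None or err is None, "cannot hold both a value and an error"
        self._val = val
        self._err = err

    def is_null(self):
        # Null means neither a value nor an error was ever set.
        return self._val is None and self._err is None

    def __bool__(self):
        # Truthy only when a value is present, like MaybeT's UnnamedBool operator.
        return self._val is not None

    def val(self):
        assert self._val is not None
        return self._val

    def err(self):
        assert self._err is not None
        return self._err
```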
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// WARNING: This file was auto-generated, any change will be overridden in next release. Please use configs/es6.conf.js then run "npm run convert". //
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
* @author alteredq / http://alteredqualia.com/
* @author mr.doob / http://mrdoob.com/
*/
var WebGL = {
isWebGLAvailable: function () {
try {
var canvas = document.createElement( 'canvas' );
return !! ( window.WebGLRenderingContext && ( canvas.getContext( 'webgl' ) || canvas.getContext( 'experimental-webgl' ) ) );
} catch ( e ) {
return false;
}
},
isWebGL2Available: function () {
try {
var canvas = document.createElement( 'canvas' );
return !! ( window.WebGL2RenderingContext && canvas.getContext( 'webgl2' ) );
} catch ( e ) {
return false;
}
},
getWebGLErrorMessage: function () {
return this.getErrorMessage( 1 );
},
getWebGL2ErrorMessage: function () {
return this.getErrorMessage( 2 );
},
getErrorMessage: function ( version ) {
var names = {
1: 'WebGL',
2: 'WebGL 2'
};
var contexts = {
1: window.WebGLRenderingContext,
2: window.WebGL2RenderingContext
};
var message = 'Your $0 does not seem to support <a href="http://khronos.org/webgl/wiki/Getting_a_WebGL_Implementation" style="color:#000">$1</a>';
var element = document.createElement( 'div' );
element.id = 'webglmessage';
element.style.fontFamily = 'monospace';
element.style.fontSize = '13px';
element.style.fontWeight = 'normal';
element.style.textAlign = 'center';
element.style.background = '#fff';
element.style.color = '#000';
element.style.padding = '1.5em';
element.style.width = '400px';
element.style.margin = '5em auto 0';
if ( contexts[ version ] ) {
message = message.replace( '$0', 'graphics card' );
} else {
message = message.replace( '$0', 'browser' );
}
message = message.replace( '$1', names[ version ] );
element.innerHTML = message;
return element;
}
};
export { WebGL }
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef V8_INTERPRETER_INTERPRETER_GENERATOR_H_
#define V8_INTERPRETER_INTERPRETER_GENERATOR_H_
#include "src/interpreter/bytecode-operands.h"
#include "src/interpreter/bytecodes.h"
namespace v8 {
namespace internal {
struct AssemblerOptions;
namespace interpreter {
extern Handle<Code> GenerateBytecodeHandler(Isolate* isolate,
const char* debug_name,
Bytecode bytecode,
OperandScale operand_scale,
int builtin_index,
const AssemblerOptions& options);
extern Handle<Code> GenerateDeserializeLazyHandler(
Isolate* isolate, OperandScale operand_scale, int builtin_index,
const AssemblerOptions& options);
} // namespace interpreter
} // namespace internal
} // namespace v8
#endif // V8_INTERPRETER_INTERPRETER_GENERATOR_H_
<?js
var data = obj;
var self = this;
var defaultObjectClass = '';
// Check if the default value is an object, if so, apply code highlighting
if (data.defaultvalue && data.defaultvaluetype === 'object') {
data.defaultvalue = "<pre class=\"prettyprint\"><code>" + data.defaultvalue + "</code></pre>";
defaultObjectClass = ' class="object-value"';
}
?>
<dl class="details">
<?js
var properties = data.properties;
if (properties && properties.length && properties.forEach) {
?>
<h5 class="subsection-title">Properties:</h5>
<dl><?js= this.partial('properties.tmpl', properties) ?></dl>
<?js } ?>
<?js if (data.version) {?>
<dt class="tag-version">Version:</dt>
<dd class="tag-version"><ul class="dummy"><li><?js= version ?></li></ul></dd>
<?js } ?>
<?js if (data.since) {?>
<dt class="tag-since">Since:</dt>
<dd class="tag-since"><ul class="dummy"><li><?js= since ?></li></ul></dd>
<?js } ?>
<?js if (data.inherited && data.inherits) { ?>
<dt class="inherited-from">Inherited From:</dt>
<dd class="inherited-from"><ul class="dummy"><li>
<?js= this.linkto(data.inherits, this.htmlsafe(data.inherits)) ?>
</li></ul></dd>
<?js } ?>
<?js if (data.deprecated) { ?>
<dt class="important tag-deprecated">Deprecated:</dt><?js
if (data.deprecated === true) { ?><dd class="yes-def tag-deprecated"><ul class="dummy"><li>Yes</li></ul></dd><?js }
else { ?><dd><ul class="dummy"><li><?js= data.deprecated ?></li></ul></dd><?js }
?>
<?js } ?>
<?js if (data.author && author.length) {?>
<dt class="tag-author">Author:</dt>
<dd class="tag-author">
<ul><?js author.forEach(function(a) { ?>
<li><?js= self.resolveAuthorLinks(a) ?></li>
<?js }); ?></ul>
</dd>
<?js } ?>
<?js if (data.copyright) {?>
<dt class="tag-copyright">Copyright:</dt>
<dd class="tag-copyright"><ul class="dummy"><li><?js= copyright ?></li></ul></dd>
<?js } ?>
<?js if (data.license) {?>
<dt class="tag-license">License:</dt>
<dd class="tag-license"><ul class="dummy"><li><?js= license ?></li></ul></dd>
<?js } ?>
<?js if (data.defaultvalue) {?>
<dt class="tag-default">Default Value:</dt>
<dd class="tag-default"><ul class="dummy">
<li<?js= defaultObjectClass ?>><?js= data.defaultvalue ?></li>
</ul></dd>
<?js } ?>
<?js if (data.meta) {?>
<dt class="tag-source">Source:</dt>
<dd class="tag-source"><ul class="dummy"><li>
<?js= self.linkto(meta.filename) ?>, <?js= self.linkto(meta.filename, 'line ' + meta.lineno, null, 'line' + meta.lineno) ?>
</li></ul></dd>
<?js } ?>
<?js if (data.tutorials && tutorials.length) {?>
<dt class="tag-tutorial">Tutorials:</dt>
<dd class="tag-tutorial">
<ul><?js tutorials.forEach(function(t) { ?>
<li><?js= self.tutoriallink(t) ?></li>
<?js }); ?></ul>
</dd>
<?js } ?>
<?js if (data.see && see.length) {?>
<dt class="tag-see">See:</dt>
<dd class="tag-see">
<ul><?js see.forEach(function(s) { ?>
<li><?js= self.linkto(s) ?></li>
<?js }); ?></ul>
</dd>
<?js } ?>
<?js if (data.todo && todo.length) {?>
<dt class="tag-todo">To Do:</dt>
<dd class="tag-todo">
<ul><?js todo.forEach(function(t) { ?>
<li><?js= t ?></li>
<?js }); ?></ul>
</dd>
<?js } ?>
</dl>
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache
import (
"fmt"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/util/sets"
)
// Indexer extends Store with multiple indices and restricts each
// accumulator to simply hold the current object (and be empty after
// Delete).
//
// There are three kinds of strings here:
// 1. a storage key, as defined in the Store interface,
// 2. a name of an index, and
// 3. an "indexed value", which is produced by an IndexFunc and
// can be a field value or any other string computed from the object.
type Indexer interface {
Store
// Index returns the stored objects whose set of indexed values
// intersects the set of indexed values of the given object, for
// the named index
Index(indexName string, obj interface{}) ([]interface{}, error)
// IndexKeys returns the storage keys of the stored objects whose
// set of indexed values for the named index includes the given
// indexed value
IndexKeys(indexName, indexedValue string) ([]string, error)
// ListIndexFuncValues returns all the indexed values of the given index
ListIndexFuncValues(indexName string) []string
// ByIndex returns the stored objects whose set of indexed values
// for the named index includes the given indexed value
ByIndex(indexName, indexedValue string) ([]interface{}, error)
// GetIndexers returns the indexers
GetIndexers() Indexers
// AddIndexers adds more indexers to this store. If you call this after you already have data
// in the store, the results are undefined.
AddIndexers(newIndexers Indexers) error
}
// IndexFunc knows how to compute the set of indexed values for an object.
type IndexFunc func(obj interface{}) ([]string, error)
// IndexFuncToKeyFuncAdapter adapts an indexFunc to a keyFunc. This is only useful if your index function returns
// unique values for every object. This conversion can create errors when more than one key is found. You
// should prefer to make proper key and index functions.
func IndexFuncToKeyFuncAdapter(indexFunc IndexFunc) KeyFunc {
return func(obj interface{}) (string, error) {
indexKeys, err := indexFunc(obj)
if err != nil {
return "", err
}
if len(indexKeys) > 1 {
return "", fmt.Errorf("too many keys: %v", indexKeys)
}
if len(indexKeys) == 0 {
return "", fmt.Errorf("unexpected empty indexKeys")
}
return indexKeys[0], nil
}
}
const (
// NamespaceIndex is the lookup name for the most common index function, which is to index by the namespace field.
NamespaceIndex string = "namespace"
)
// MetaNamespaceIndexFunc is a default index function that indexes based on an object's namespace
func MetaNamespaceIndexFunc(obj interface{}) ([]string, error) {
meta, err := meta.Accessor(obj)
if err != nil {
return []string{""}, fmt.Errorf("object has no meta: %v", err)
}
return []string{meta.GetNamespace()}, nil
}
// Index maps the indexed value to a set of keys in the store that match on that value
type Index map[string]sets.String
// Indexers maps a name to a IndexFunc
type Indexers map[string]IndexFunc
// Indices maps a name to an Index
type Indices map[string]Index
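The three maps above combine into a simple inverted index: an `IndexFunc` computes indexed values for an object, and each `Index` maps an indexed value to the set of storage keys that produced it — which is exactly what `ByIndex` walks. A Python sketch of a `MetaNamespaceIndexFunc`-style lookup (all names here are illustrative, not client-go API):

```python
def namespace_index_func(obj):
    """Index an object by its 'namespace' field (cf. MetaNamespaceIndexFunc)."""
    return [obj.get("namespace", "")]

class SimpleIndexer:
    def __init__(self, index_funcs):
        self.index_funcs = index_funcs                      # Indexers: name -> IndexFunc
        self.indices = {name: {} for name in index_funcs}   # Indices:  name -> Index
        self.store = {}                                     # storage key -> object

    def add(self, key, obj):
        self.store[key] = obj
        for name, fn in self.index_funcs.items():
            for value in fn(obj):
                # Index: indexed value -> set of storage keys
                self.indices[name].setdefault(value, set()).add(key)

    def by_index(self, index_name, indexed_value):
        """Return stored objects whose indexed values include indexed_value."""
        keys = self.indices[index_name].get(indexed_value, set())
        return [self.store[k] for k in sorted(keys)]
```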
/*
* Copyright (C) 2010 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#define LOG_TAG "MtpUtils"
#include <stdio.h>
#include <time.h>
// #include <cutils/tztime.h>
#include "MtpUtils.h"
namespace android {
/*
DateTime strings follow a compatible subset of the definition found in ISO 8601, and
take the form of a Unicode string formatted as: "YYYYMMDDThhmmss.s". In this
representation, YYYY shall be replaced by the year, MM replaced by the month (01-12),
DD replaced by the day (01-31), T is a constant character 'T' delimiting time from date,
hh is replaced by the hour (00-23), mm is replaced by the minute (00-59), and ss by the
second (00-59). The ".s" is optional, and represents tenths of a second.
*/
bool parseDateTime(const char* dateTime, time_t& outSeconds) {
int year, month, day, hour, minute, second;
struct tm tm;
if (sscanf(dateTime, "%04d%02d%02dT%02d%02d%02d",
&year, &month, &day, &hour, &minute, &second) != 6)
return false;
const char* tail = dateTime + 15;
// skip optional tenth of second
if (tail[0] == '.' && tail[1])
tail += 2;
//FIXME - support +/-hhmm
bool useUTC = (tail[0] == 'Z');
// hack to compute timezone
time_t dummy = time(NULL);
tzset();
localtime_r(&dummy, &tm);
tm.tm_sec = second;
tm.tm_min = minute;
tm.tm_hour = hour;
tm.tm_mday = day;
tm.tm_mon = month - 1; // mktime uses months in 0 - 11 range
tm.tm_year = year - 1900;
tm.tm_wday = 0;
tm.tm_isdst = -1;
outSeconds = mktime(&tm);
/*if (useUTC)
outSeconds = mktime(&tm);
else
outSeconds = mktime_tz(&tm, tm.tm_zone);*/
return true;
}
void formatDateTime(time_t seconds, char* buffer, int bufferLength) {
struct tm tm;
localtime_r(&seconds, &tm);
snprintf(buffer, bufferLength, "%04d%02d%02dT%02d%02d%02d",
tm.tm_year + 1900,
tm.tm_mon + 1, // localtime_r uses months in 0 - 11 range
tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
}
} // namespace android
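The `"YYYYMMDDThhmmss.s"` round-trip above maps directly onto `strptime`/`strftime`. A Python sketch of the same parse/format pair, stripping the optional tenths-of-a-second suffix and trailing `Z` before parsing, just as the C code skips them (function names are illustrative):

```python
import re
from datetime import datetime

MTP_FORMAT = "%Y%m%dT%H%M%S"

def parse_mtp_datetime(s):
    """Parse an MTP DateTime string, ignoring optional '.s' tenths and 'Z'."""
    m = re.match(r"^(\d{8}T\d{6})(?:\.\d)?Z?$", s)
    if not m:
        return None
    return datetime.strptime(m.group(1), MTP_FORMAT)

def format_mtp_datetime(dt):
    """Format a datetime back into the MTP 'YYYYMMDDThhmmss' shape."""
    return dt.strftime(MTP_FORMAT)
```

Like the C version, this treats the string as local time; proper `Z`/offset handling would need the timezone logic the original marks as FIXME.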
<?php
/*
* Junos.php
*
* -Description-
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* @package LibreNMS
* @link http://librenms.org
* @copyright 2020 Tony Murray
* @author Tony Murray <murraytony@gmail.com>
*/
namespace LibreNMS\OS;
use App\Models\Device;
use LibreNMS\Interfaces\Polling\OSPolling;
use LibreNMS\RRD\RrdDefinition;
class Junos extends \LibreNMS\OS implements OSPolling
{
public function discoverOS(Device $device): void
{
$data = snmp_get_multi($this->getDeviceArray(), [
'JUNIPER-MIB::jnxBoxDescr.0',
'JUNIPER-MIB::jnxBoxSerialNo.0',
'JUNIPER-VIRTUALCHASSIS-MIB::jnxVirtualChassisMemberSWVersion.0',
'HOST-RESOURCES-MIB::hrSWInstalledName.2',
], '-OQUs');
preg_match('/Juniper Networks, Inc. (?<hardware>\S+) .* kernel JUNOS (?<version>[^, ]+)[, ]/', $device->sysDescr, $parsed);
if (isset($data[2]['hrSWInstalledName'])) {
preg_match('/\[(.+)]/', $data[2]['hrSWInstalledName'], $parsedVersion);
}
$device->hardware = $data[0]['jnxBoxDescr'] ?? (isset($parsed['hardware']) ? 'Juniper ' . strtoupper($parsed['hardware']) : null);
$device->serial = $data[0]['jnxBoxSerialNo'] ?? null;
$device->version = $data[0]['jnxVirtualChassisMemberSWVersion'] ?? $parsedVersion[1] ?? $parsed['version'] ?? null;
}
public function pollOS()
{
$data = snmp_get_multi($this->getDeviceArray(), 'jnxJsSPUMonitoringCurrentFlowSession.0', '-OUQs', 'JUNIPER-SRX5000-SPU-MONITORING-MIB');
if (is_numeric($data[0]['jnxJsSPUMonitoringCurrentFlowSession'])) {
data_update($this->getDeviceArray(), 'junos_jsrx_spu_sessions', [
'rrd_def' => RrdDefinition::make()->addDataset('spu_flow_sessions', 'GAUGE', 0),
], [
'spu_flow_sessions' => $data[0]['jnxJsSPUMonitoringCurrentFlowSession'],
]);
$this->enableGraph('junos_jsrx_spu_sessions');
}
}
}
#!/usr/bin/env python
# oio-meta2-indexer
# Copyright (C) 2018 OpenIO SAS, as part of OpenIO SDS
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from oio.directory.indexer import Meta2Indexer
from oio.common.daemon import run_daemon
import argparse
def make_arg_parser():
log_parser = argparse.ArgumentParser(add_help=False)
levels = ['DEBUG', 'INFO', 'WARN', 'ERROR']
log_parser.add_argument('--log-level', choices=levels,
help="Log level")
log_parser.add_argument('--log-syslog-prefix',
help="Syslog prefix")
log_parser.add_argument('--log-facility',
help="Log facility")
log_parser.add_argument('--log-address',
help="Log address")
descr = """
Periodically scan through volumes to index all meta2 databases that are
present there.
"""
main_parser = argparse.ArgumentParser(description=descr,
parents=[log_parser])
main_parser.add_argument(
'config_file',
help="""
The path to an oio-meta2-indexer configuration file.
Any arguments passed alongside a configuration file will be ignored.
Alternatively, this can be a writable file, to which you want to
write the configuration you will pass through the parameters by using
the --generate-config flag.
"""
)
main_parser.add_argument(
'--generate-config',
action='store_true',
help="""
Generate configuration file with given arguments.
If the file already exists, it will be overwritten.
"""
)
main_parser.add_argument(
'--user',
help="The name of the OS user this process will run as"
)
main_parser.add_argument(
'--namespace',
help="Namespace of the volumes"
)
main_parser.add_argument(
'--volume-list',
action='append',
help="List of paths pointing to meta2 volumes to index",
nargs="+"
)
main_parser.add_argument(
'--interval',
type=int,
help="Time between two full scans for each volume"
)
main_parser.add_argument(
'--report-interval',
type=int,
help="Time between progress reports for each volume"
)
main_parser.add_argument(
'--scanned-per-second',
type=int,
help="Maximum of scanned databases per second per volume, beyond which"
" the scanning process is throttled for said volume."
)
main_parser.add_argument(
'--try-removing-faulty-indexes',
action='store_true',
help="""
If true, in the event where an indexing worker detects that
a volume it's trying to index does not manage a database it stumbled
upon, the indexer will attempt to remove any existing index for this
database from the volume's rdir index. USE AT YOUR OWN RISK.
Inconsistencies in the proxy cache can for example help induce this
effect even when unwarranted.
"""
)
return main_parser
def gen_configuration(options, path):
file_content = "[meta2-indexer]\n"
for k, v in options.items():
if v is not None:
if k == "volume_list":
v = ",".join(v[0])
file_content += k + " = " + str(v) + "\n"
with open(path, "w") as f:
f.write(file_content)
if __name__ == '__main__':
parser = make_arg_parser()
options = vars(parser.parse_args())
path = options.pop('config_file')
if options.get('generate_config'):
options.pop('generate_config')
gen_configuration(options, path)
run_daemon(Meta2Indexer, conf_file=path, section_name="meta2-indexer",
**options)
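`gen_configuration` flattens the parsed argparse options into a single INI-style `[meta2-indexer]` section, joining the nested `volume_list` into a comma-separated value and skipping unset options. A self-contained sketch of that rendering step (returning the string instead of writing a file; option names are examples):

```python
def render_config(options, section="meta2-indexer"):
    """Render non-None options as an INI section, joining nested volume lists."""
    lines = ["[%s]" % section]
    for k, v in options.items():
        if v is None:
            continue  # unset CLI options are omitted from the generated file
        if k == "volume_list":
            v = ",".join(v[0])  # argparse nests append+nargs values one level deep
        lines.append("%s = %s" % (k, v))
    return "\n".join(lines) + "\n"
```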
/*
* lws-minimal-dbus-ws-proxy
*
* Written in 2010-2019 by Andy Green <andy@warmcat.com>
*
* This file is made available under the Creative Commons CC0 1.0
* Universal Public Domain Dedication.
*
* This demonstrates a minimal session dbus server that uses the lws event loop,
* and allows proxying ws client connections via DBUS.
*/
#include <stdbool.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <libwebsockets.h>
#include <libwebsockets/lws-dbus.h>
#define LWS_PLUGIN_STATIC
#include "protocol_lws_minimal_dbus_ws_proxy.c"
static int interrupted;
static struct lws_protocols protocols[] = {
LWS_PLUGIN_PROTOCOL_MINIMAL_DBUS_WSPROXY,
{ NULL, NULL, 0, 0 } /* terminator */
};
/*
* we pass the dbus address to connect to proxy with from outside the
* protocol plugin... eg if built as a plugin for lwsws, you would instead
* set this pvo in the lwsws JSON config.
*/
static const struct lws_protocol_vhost_options pvo_ads = {
NULL,
NULL,
"ads", /* pvo name */
(void *)"unix:abstract=org.libwebsockets.wsclientproxy" /* pvo value */
};
static const struct lws_protocol_vhost_options pvo = {
NULL, /* "next" pvo linked-list */
&pvo_ads, /* "child" pvo linked-list */
"lws-minimal-dbus-wsproxy", /* protocol name we belong to on this vhost */
"" /* ignored */
};
void sigint_handler(int sig)
{
interrupted = 1;
}
int main(int argc, const char **argv)
{
static struct lws_context *context;
struct lws_context_creation_info info;
const char *p;
int n = 0, logs = LLL_USER | LLL_ERR | LLL_WARN | LLL_NOTICE
/* for LLL_ verbosity above NOTICE to be built into lws,
* lws must have been configured and built with
* -DCMAKE_BUILD_TYPE=DEBUG instead of =RELEASE */
/* | LLL_INFO */ /* | LLL_PARSER */ /* | LLL_HEADER */
/* | LLL_EXT */ /* | LLL_CLIENT */ /* | LLL_LATENCY */
/* | LLL_DEBUG */ /* | LLL_THREAD */;
signal(SIGINT, sigint_handler);
if ((p = lws_cmdline_option(argc, argv, "-d")))
logs = atoi(p);
lws_set_log_level(logs, NULL);
lwsl_user("LWS DBUS ws client proxy\n");
memset(&info, 0, sizeof info); /* otherwise uninitialized garbage */
info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT |
LWS_SERVER_OPTION_HTTP_HEADERS_SECURITY_BEST_PRACTICES_ENFORCE;
info.port = CONTEXT_PORT_NO_LISTEN;
info.protocols = protocols;
info.pvo = &pvo;
context = lws_create_context(&info);
if (!context) {
lwsl_err("lws init failed\n");
return 1;
}
/* lws event loop (default poll one) */
while (n >= 0 && !interrupted)
n = lws_service(context, 0);
lws_context_destroy(context);
lwsl_notice("Exiting cleanly\n");
return 0;
}
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.ignite.internal.processors.cache.persistence.snapshot;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.Serializable;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.IgniteException;
import org.apache.ignite.IgniteSnapshot;
import org.apache.ignite.binary.BinaryType;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.internal.GridKernalContext;
import org.apache.ignite.internal.IgniteClientDisconnectedCheckedException;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.internal.IgniteFeatures;
import org.apache.ignite.internal.IgniteFutureCancelledCheckedException;
import org.apache.ignite.internal.IgniteInternalFuture;
import org.apache.ignite.internal.NodeStoppingException;
import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException;
import org.apache.ignite.internal.events.DiscoveryCustomEvent;
import org.apache.ignite.internal.managers.eventstorage.DiscoveryEventListener;
import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion;
import org.apache.ignite.internal.processors.cache.CacheGroupDescriptor;
import org.apache.ignite.internal.processors.cache.CacheType;
import org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter;
import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture;
import org.apache.ignite.internal.processors.cache.distributed.dht.preloader.PartitionsExchangeAware;
import org.apache.ignite.internal.processors.cache.persistence.file.FileIO;
import org.apache.ignite.internal.processors.cache.persistence.file.FileIOFactory;
import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore;
import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreFactory;
import org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager;
import org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIOFactory;
import org.apache.ignite.internal.processors.cache.persistence.filename.PdsFolderSettings;
import org.apache.ignite.internal.processors.cache.persistence.metastorage.MetastorageLifecycleListener;
import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadOnlyMetastorage;
import org.apache.ignite.internal.processors.cache.persistence.metastorage.ReadWriteMetastorage;
import org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId;
import org.apache.ignite.internal.processors.cache.persistence.tree.io.PageIO;
import org.apache.ignite.internal.processors.cache.persistence.wal.crc.FastCrc;
import org.apache.ignite.internal.processors.cluster.DiscoveryDataClusterState;
import org.apache.ignite.internal.processors.marshaller.MappedName;
import org.apache.ignite.internal.processors.metric.MetricRegistry;
import org.apache.ignite.internal.processors.metric.impl.LongAdderMetric;
import org.apache.ignite.internal.processors.task.GridInternal;
import org.apache.ignite.internal.util.GridBusyLock;
import org.apache.ignite.internal.util.distributed.DistributedProcess;
import org.apache.ignite.internal.util.distributed.InitMessage;
import org.apache.ignite.internal.util.future.GridFinishedFuture;
import org.apache.ignite.internal.util.future.GridFutureAdapter;
import org.apache.ignite.internal.util.future.IgniteFinishedFutureImpl;
import org.apache.ignite.internal.util.future.IgniteFutureImpl;
import org.apache.ignite.internal.util.lang.GridClosureException;
import org.apache.ignite.internal.util.tostring.GridToStringInclude;
import org.apache.ignite.internal.util.typedef.CX1;
import org.apache.ignite.internal.util.typedef.F;
import org.apache.ignite.internal.util.typedef.internal.A;
import org.apache.ignite.internal.util.typedef.internal.CU;
import org.apache.ignite.internal.util.typedef.internal.S;
import org.apache.ignite.internal.util.typedef.internal.U;
import org.apache.ignite.lang.IgniteClosure;
import org.apache.ignite.lang.IgniteFuture;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.thread.IgniteThreadPoolExecutor;
import org.apache.ignite.thread.OomExceptionHandler;
import org.jetbrains.annotations.Nullable;
import static java.nio.file.StandardOpenOption.READ;
import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_BINARY_METADATA_PATH;
import static org.apache.ignite.configuration.DataStorageConfiguration.DFLT_MARSHALLER_PATH;
import static org.apache.ignite.events.EventType.EVT_NODE_FAILED;
import static org.apache.ignite.events.EventType.EVT_NODE_LEFT;
import static org.apache.ignite.internal.IgniteFeatures.PERSISTENCE_CACHE_SNAPSHOT;
import static org.apache.ignite.internal.MarshallerContextImpl.mappingFileStoreWorkDir;
import static org.apache.ignite.internal.MarshallerContextImpl.saveMappings;
import static org.apache.ignite.internal.events.DiscoveryCustomEvent.EVT_DISCOVERY_CUSTOM_EVT;
import static org.apache.ignite.internal.managers.communication.GridIoPolicy.SYSTEM_POOL;
import static org.apache.ignite.internal.pagemem.PageIdAllocator.INDEX_PARTITION;
import static org.apache.ignite.internal.pagemem.PageIdAllocator.MAX_PARTITION_ID;
import static org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.binaryWorkDir;
import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.INDEX_FILE_NAME;
import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.PART_FILE_TEMPLATE;
import static org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.getPartitionFile;
import static org.apache.ignite.internal.processors.cache.persistence.filename.PdsConsistentIdProcessor.DB_DEFAULT_FOLDER;
import static org.apache.ignite.internal.processors.cache.persistence.partstate.GroupPartitionId.getFlagByPartId;
import static org.apache.ignite.internal.util.IgniteUtils.isLocalNodeCoordinator;
import static org.apache.ignite.internal.util.distributed.DistributedProcess.DistributedProcessType.END_SNAPSHOT;
import static org.apache.ignite.internal.util.distributed.DistributedProcess.DistributedProcessType.START_SNAPSHOT;
/**
* Internal implementation of snapshot operations over persistence caches.
* <p>
 * The following major actions are available:
* <ul>
* <li>Create snapshot of the whole cluster cache groups by triggering PME to achieve consistency.</li>
* </ul>
*/
public class IgniteSnapshotManager extends GridCacheSharedManagerAdapter
implements IgniteSnapshot, PartitionsExchangeAware, MetastorageLifecycleListener {
/** File with delta pages suffix. */
public static final String DELTA_SUFFIX = ".delta";
/** File name template for partition delta pages. */
public static final String PART_DELTA_TEMPLATE = PART_FILE_TEMPLATE + DELTA_SUFFIX;
/** File name template for index delta pages. */
public static final String INDEX_DELTA_NAME = INDEX_FILE_NAME + DELTA_SUFFIX;
/** Text reason for a checkpoint to start the snapshot operation. */
public static final String CP_SNAPSHOT_REASON = "Checkpoint started to enforce snapshot operation: %s";
/** Default snapshot directory for loading remote snapshots. */
public static final String DFLT_SNAPSHOT_TMP_DIR = "snp";
/** Snapshot in progress error message. */
public static final String SNP_IN_PROGRESS_ERR_MSG = "Operation rejected due to the snapshot operation in progress.";
/** Error message to finalize snapshot tasks. */
public static final String SNP_NODE_STOPPING_ERR_MSG = "Snapshot has been cancelled because the local node " +
"is stopping";
/** Metastorage key to save currently running snapshot. */
public static final String SNP_RUNNING_KEY = "snapshot-running";
/** Snapshot metrics prefix. */
public static final String SNAPSHOT_METRICS = "snapshot";
/** Prefix for snapshot threads. */
private static final String SNAPSHOT_RUNNER_THREAD_PREFIX = "snapshot-runner";
/** Total number of threads used to perform local snapshot operations. */
private static final int SNAPSHOT_THREAD_POOL_SIZE = 4;
/**
 * Local buffer to perform copy-on-write operations with pages for {@code SnapshotFutureTask.PageStoreSerialWriter}s.
 * It is important to have only one buffer per thread (instead of creating a buffer per
 * each {@code SnapshotFutureTask.PageStoreSerialWriter}) because that is redundant and can lead to OOM errors. A direct
 * buffer is deallocated only when its ByteBuffer is garbage collected, but off-heap memory can be exhausted before that.
 */
private final ThreadLocal<ByteBuffer> locBuff;
/** Map of registered cache snapshot processes and their corresponding contexts. */
private final ConcurrentMap<String, SnapshotFutureTask> locSnpTasks = new ConcurrentHashMap<>();
/** Lock to protect the resources in use. */
private final GridBusyLock busyLock = new GridBusyLock();
/** Mutex used to order cluster snapshot operation progress. */
private final Object snpOpMux = new Object();
/** Take snapshot operation procedure. */
private final DistributedProcess<SnapshotOperationRequest, SnapshotOperationResponse> startSnpProc;
/** Checks the previously performed snapshot operation and deletes uncompleted files if needed. */
private final DistributedProcess<SnapshotOperationRequest, SnapshotOperationResponse> endSnpProc;
/** Resolved persistent data storage settings. */
private volatile PdsFolderSettings pdsSettings;
/** Fully initialized metastorage. */
private volatile ReadWriteMetastorage metaStorage;
/** Local snapshot sender factory. */
private Function<String, SnapshotSender> locSndrFactory = LocalSnapshotSender::new;
/** Main snapshot directory to save created snapshots. */
private volatile File locSnpDir;
/**
 * Working directory for snapshots loaded from remote nodes and for storing
 * temporary partition delta files of a locally started snapshot process.
 */
private File tmpWorkDir;
/** Factory for working with delta files as file storage. */
private volatile FileIOFactory ioFactory = new RandomAccessFileIOFactory();
/** Factory to create page store for restore. */
private volatile BiFunction<Integer, Boolean, FilePageStoreFactory> storeFactory;
/** Snapshot thread pool to perform local partition snapshots. */
private ExecutorService snpRunner;
/** System discovery message listener. */
private DiscoveryEventListener discoLsnr;
/** Cluster snapshot operation requested by user. */
private ClusterSnapshotFuture clusterSnpFut;
/** Current snapshot operation on local node. */
private volatile SnapshotOperationRequest clusterSnpReq;
/** {@code true} if the recovery process occurred for a snapshot. */
private volatile boolean recovered;
/** Last seen cluster snapshot operation. */
private volatile ClusterSnapshotFuture lastSeenSnpFut = new ClusterSnapshotFuture();
/**
* @param ctx Kernal context.
*/
public IgniteSnapshotManager(GridKernalContext ctx) {
locBuff = ThreadLocal.withInitial(() ->
ByteBuffer.allocateDirect(ctx.config().getDataStorageConfiguration().getPageSize())
.order(ByteOrder.nativeOrder()));
startSnpProc = new DistributedProcess<>(ctx, START_SNAPSHOT, this::initLocalSnapshotStartStage,
this::processLocalSnapshotStartStageResult, SnapshotStartDiscoveryMessage::new);
endSnpProc = new DistributedProcess<>(ctx, END_SNAPSHOT, this::initLocalSnapshotEndStage,
this::processLocalSnapshotEndStageResult);
}
/**
* @param snapshotCacheDir Snapshot directory to store files.
* @param partId Cache partition identifier.
* @return A file representation.
*/
public static File partDeltaFile(File snapshotCacheDir, int partId) {
return new File(snapshotCacheDir, partDeltaFileName(partId));
}
/**
* @param partId Partition id.
* @return File name of delta partition pages.
*/
public static String partDeltaFileName(int partId) {
assert partId <= MAX_PARTITION_ID || partId == INDEX_PARTITION;
return partId == INDEX_PARTITION ? INDEX_DELTA_NAME : String.format(PART_DELTA_TEMPLATE, partId);
}
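A minimal standalone sketch of the naming scheme above. The constant values below are assumptions copied out of `FilePageStoreManager`/`PageIdAllocator` so the demo is self-contained; only the naming logic mirrors the method verbatim.

```java
// Hedged sketch: constant values are assumptions mirroring the Ignite classes
// referenced above; only the naming logic is reproduced.
public class PartDeltaNameSketch {
    static final int INDEX_PARTITION = 0xFFFF;  // assumed PageIdAllocator value
    static final int MAX_PARTITION_ID = 65500;  // assumed PageIdAllocator value
    static final String DELTA_SUFFIX = ".delta";
    static final String PART_DELTA_TEMPLATE = "part-%d.bin" + DELTA_SUFFIX;
    static final String INDEX_DELTA_NAME = "index.bin" + DELTA_SUFFIX;

    /** Same branching as partDeltaFileName above. */
    public static String partDeltaFileName(int partId) {
        assert partId <= MAX_PARTITION_ID || partId == INDEX_PARTITION;

        return partId == INDEX_PARTITION ? INDEX_DELTA_NAME : String.format(PART_DELTA_TEMPLATE, partId);
    }

    public static void main(String[] args) {
        System.out.println(partDeltaFileName(12));              // part-12.bin.delta
        System.out.println(partDeltaFileName(INDEX_PARTITION)); // index.bin.delta
    }
}
```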
/** {@inheritDoc} */
@Override protected void start0() throws IgniteCheckedException {
super.start0();
GridKernalContext ctx = cctx.kernalContext();
if (ctx.clientNode())
return;
if (!CU.isPersistenceEnabled(ctx.config()))
return;
snpRunner = new IgniteThreadPoolExecutor(SNAPSHOT_RUNNER_THREAD_PREFIX,
cctx.igniteInstanceName(),
SNAPSHOT_THREAD_POOL_SIZE,
SNAPSHOT_THREAD_POOL_SIZE,
IgniteConfiguration.DFLT_THREAD_KEEP_ALIVE_TIME,
new LinkedBlockingQueue<>(),
SYSTEM_POOL,
new OomExceptionHandler(ctx));
assert cctx.pageStore() instanceof FilePageStoreManager;
FilePageStoreManager storeMgr = (FilePageStoreManager)cctx.pageStore();
pdsSettings = cctx.kernalContext().pdsFolderResolver().resolveFolders();
locSnpDir = resolveSnapshotWorkDirectory(ctx.config());
tmpWorkDir = U.resolveWorkDirectory(storeMgr.workDir().getAbsolutePath(), DFLT_SNAPSHOT_TMP_DIR, true);
U.ensureDirectory(locSnpDir, "snapshot work directory", log);
U.ensureDirectory(tmpWorkDir, "temp directory for snapshot creation", log);
MetricRegistry mreg = cctx.kernalContext().metric().registry(SNAPSHOT_METRICS);
mreg.register("LastSnapshotStartTime", () -> lastSeenSnpFut.startTime,
"The system time of the last cluster snapshot request start time on this node.");
mreg.register("LastSnapshotEndTime", () -> lastSeenSnpFut.endTime,
"The system time of the last cluster snapshot request end time on this node.");
mreg.register("LastSnapshotName", () -> lastSeenSnpFut.name, String.class,
"The name of the last started cluster snapshot request on this node.");
mreg.register("LastSnapshotErrorMessage",
() -> lastSeenSnpFut.error() == null ? "" : lastSeenSnpFut.error().getMessage(),
String.class,
"The error message of the last started cluster snapshot request which failed with an error. " +
"This value will be empty if the last snapshot request completed successfully.");
mreg.register("LocalSnapshotNames", this::localSnapshotNames, List.class,
"The list of names of all snapshots currently saved on the local node, with respect to " +
"the snapshot working path configured via IgniteConfiguration.");
storeFactory = storeMgr::getPageStoreFactory;
cctx.exchange().registerExchangeAwareComponent(this);
ctx.internalSubscriptionProcessor().registerMetastorageListener(this);
cctx.gridEvents().addDiscoveryEventListener(discoLsnr = (evt, discoCache) -> {
if (!busyLock.enterBusy())
return;
try {
UUID leftNodeId = evt.eventNode().id();
if (evt.type() == EVT_NODE_LEFT || evt.type() == EVT_NODE_FAILED) {
SnapshotOperationRequest snpReq = clusterSnpReq;
for (SnapshotFutureTask sctx : locSnpTasks.values()) {
if (sctx.sourceNodeId().equals(leftNodeId) ||
(snpReq != null &&
snpReq.snpName.equals(sctx.snapshotName()) &&
snpReq.bltNodes.contains(leftNodeId))) {
sctx.acceptException(new ClusterTopologyCheckedException("Snapshot operation interrupted. " +
"One of baseline nodes left the cluster: " + leftNodeId));
}
}
}
}
finally {
busyLock.leaveBusy();
}
}, EVT_NODE_LEFT, EVT_NODE_FAILED);
}
/** {@inheritDoc} */
@Override protected void stop0(boolean cancel) {
busyLock.block();
try {
// Try to stop all snapshot processing if not stopped yet.
for (SnapshotFutureTask sctx : locSnpTasks.values())
sctx.acceptException(new NodeStoppingException(SNP_NODE_STOPPING_ERR_MSG));
locSnpTasks.clear();
synchronized (snpOpMux) {
if (clusterSnpFut != null) {
clusterSnpFut.onDone(new NodeStoppingException(SNP_NODE_STOPPING_ERR_MSG));
clusterSnpFut = null;
}
}
if (snpRunner != null)
snpRunner.shutdownNow();
if (discoLsnr != null)
cctx.kernalContext().event().removeDiscoveryEventListener(discoLsnr);
cctx.exchange().unregisterExchangeAwareComponent(this);
}
finally {
busyLock.unblock();
}
}
/**
* @param snpDir Snapshot dir.
* @param folderName Local node folder name (see {@link U#maskForFileName} with consistent id).
*/
public void deleteSnapshot(File snpDir, String folderName) {
if (!snpDir.exists())
return;
assert snpDir.isDirectory() : snpDir;
try {
File binDir = binaryWorkDir(snpDir.getAbsolutePath(), folderName);
File nodeDbDir = new File(snpDir.getAbsolutePath(), databaseRelativePath(folderName));
U.delete(binDir);
U.delete(nodeDbDir);
File marshDir = mappingFileStoreWorkDir(snpDir.getAbsolutePath());
// Concurrently traverse the snapshot marshaller directory and delete all files.
Files.walkFileTree(marshDir.toPath(), new SimpleFileVisitor<Path>() {
@Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
U.delete(file);
return FileVisitResult.CONTINUE;
}
@Override public FileVisitResult visitFileFailed(Path file, IOException exc) {
// Skip files which can be concurrently removed from FileTree.
return FileVisitResult.CONTINUE;
}
@Override public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
dir.toFile().delete();
if (log.isInfoEnabled() && exc != null)
log.info("Marshaller directory cleaned with an exception: " + exc.getMessage());
return FileVisitResult.CONTINUE;
}
});
File binMetadataDfltDir = new File(snpDir, DFLT_BINARY_METADATA_PATH);
File marshallerDfltDir = new File(snpDir, DFLT_MARSHALLER_PATH);
U.delete(binMetadataDfltDir);
U.delete(marshallerDfltDir);
File db = new File(snpDir, DB_DEFAULT_FOLDER);
if (!db.exists() || F.isEmpty(db.list())) {
marshDir.delete();
db.delete();
U.delete(snpDir);
}
}
catch (IOException e) {
throw new IgniteException(e);
}
}
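The crash-tolerant recursive delete used by deleteSnapshot reduces to this self-contained walkFileTree sketch (class and method names here are illustrative, not part of Ignite):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class RecursiveDeleteSketch {
    /** Deletes a directory tree, tolerating files that disappear concurrently. */
    public static void deleteTree(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.deleteIfExists(file);
                return FileVisitResult.CONTINUE;
            }
            @Override public FileVisitResult visitFileFailed(Path file, IOException exc) {
                // Skip files removed concurrently by another thread/process.
                return FileVisitResult.CONTINUE;
            }
            @Override public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                // Directory is empty by now; remove it last.
                Files.deleteIfExists(dir);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("snap");
        Files.createFile(root.resolve("a.bin"));

        deleteTree(root);

        System.out.println(Files.exists(root)); // false
    }
}
```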
/**
* @param snpName Snapshot name.
* @return Local snapshot directory for snapshot with given name.
*/
public File snapshotLocalDir(String snpName) {
assert locSnpDir != null;
assert U.alphanumericUnderscore(snpName) : snpName;
return new File(locSnpDir, snpName);
}
/**
* @return Node snapshot working directory.
*/
public File snapshotTmpDir() {
assert tmpWorkDir != null;
return tmpWorkDir;
}
/**
* @param req Request on snapshot creation.
* @return Future which will be completed when a snapshot has been started.
*/
private IgniteInternalFuture<SnapshotOperationResponse> initLocalSnapshotStartStage(SnapshotOperationRequest req) {
if (cctx.kernalContext().clientNode() ||
!CU.baselineNode(cctx.localNode(), cctx.kernalContext().state().clusterState()))
return new GridFinishedFuture<>();
// Executed inside discovery notifier thread, prior to firing discovery custom event,
// so it is safe to set new snapshot task inside this method without synchronization.
if (clusterSnpReq != null) {
return new GridFinishedFuture<>(new IgniteCheckedException("Snapshot operation has been rejected. " +
"Another snapshot operation in progress [req=" + req + ", curr=" + clusterSnpReq + ']'));
}
Set<UUID> leftNodes = new HashSet<>(req.bltNodes);
leftNodes.removeAll(F.viewReadOnly(cctx.discovery().serverNodes(AffinityTopologyVersion.NONE),
F.node2id()));
if (!leftNodes.isEmpty()) {
return new GridFinishedFuture<>(new IgniteCheckedException("Some of baseline nodes left the cluster " +
"prior to snapshot operation start: " + leftNodes));
}
Set<Integer> leftGrps = new HashSet<>(req.grpIds);
leftGrps.removeAll(cctx.cache().cacheGroupDescriptors().keySet());
if (!leftGrps.isEmpty()) {
return new GridFinishedFuture<>(new IgniteCheckedException("Some of the requested cache groups don't exist " +
"on the local node [missed=" + leftGrps + ", nodeId=" + cctx.localNodeId() + ']'));
}
Map<Integer, Set<Integer>> parts = new HashMap<>();
// Prepare a collection of (cache group, partitions) pairs to be snapshotted.
// Cache group context may be 'null' on some nodes, e.g. when a node filter is set.
for (Integer grpId : req.grpIds) {
if (cctx.cache().cacheGroup(grpId) == null)
continue;
parts.put(grpId, null);
}
if (parts.isEmpty())
return new GridFinishedFuture<>();
SnapshotFutureTask task0 = registerSnapshotTask(req.snpName,
req.srcNodeId,
parts,
locSndrFactory.apply(req.snpName));
clusterSnpReq = req;
return task0.chain(fut -> {
if (fut.error() == null)
return new SnapshotOperationResponse();
else
throw new GridClosureException(fut.error());
});
}
/**
* @param id Request id.
* @param res Results.
* @param err Errors.
*/
private void processLocalSnapshotStartStageResult(UUID id, Map<UUID, SnapshotOperationResponse> res, Map<UUID, Exception> err) {
if (cctx.kernalContext().clientNode())
return;
SnapshotOperationRequest snpReq = clusterSnpReq;
boolean cancelled = err.values().stream().anyMatch(e -> e instanceof IgniteFutureCancelledCheckedException);
if (snpReq == null || !snpReq.rqId.equals(id)) {
synchronized (snpOpMux) {
if (clusterSnpFut != null && clusterSnpFut.rqId.equals(id)) {
if (cancelled) {
clusterSnpFut.onDone(new IgniteFutureCancelledCheckedException("Execution of snapshot tasks " +
"has been cancelled by external process [err=" + err + ", snpReq=" + snpReq + ']'));
} else {
clusterSnpFut.onDone(new IgniteCheckedException("Snapshot operation has not been fully completed " +
"[err=" + err + ", snpReq=" + snpReq + ']'));
}
clusterSnpFut = null;
}
return;
}
}
if (isLocalNodeCoordinator(cctx.discovery())) {
Set<UUID> missed = new HashSet<>(snpReq.bltNodes);
missed.removeAll(res.keySet());
missed.removeAll(err.keySet());
if (cancelled) {
snpReq.err = new IgniteFutureCancelledCheckedException("Execution of snapshot tasks " +
"has been cancelled by external process [err=" + err + ", missed=" + missed + ']');
}
else if (!F.isEmpty(err) || !missed.isEmpty()) {
snpReq.err = new IgniteCheckedException("Execution of local snapshot tasks failed, or they have not been executed " +
"because some of the nodes left the cluster. The uncompleted snapshot will be deleted " +
"[err=" + err + ", missed=" + missed + ']');
}
endSnpProc.start(UUID.randomUUID(), snpReq);
}
}
/**
* @param req Request on snapshot creation.
 * @return Future which will be completed when the snapshot is finalized.
*/
private IgniteInternalFuture<SnapshotOperationResponse> initLocalSnapshotEndStage(SnapshotOperationRequest req) {
if (clusterSnpReq == null)
return new GridFinishedFuture<>(new SnapshotOperationResponse());
try {
if (req.err != null)
deleteSnapshot(snapshotLocalDir(req.snpName), pdsSettings.folderName());
removeLastMetaStorageKey();
}
catch (Exception e) {
return new GridFinishedFuture<>(e);
}
return new GridFinishedFuture<>(new SnapshotOperationResponse());
}
/**
* @param id Request id.
* @param res Results.
* @param err Errors.
*/
private void processLocalSnapshotEndStageResult(UUID id, Map<UUID, SnapshotOperationResponse> res, Map<UUID, Exception> err) {
SnapshotOperationRequest snpReq = clusterSnpReq;
if (snpReq == null)
return;
Set<UUID> endFail = new HashSet<>(snpReq.bltNodes);
endFail.removeAll(res.keySet());
clusterSnpReq = null;
synchronized (snpOpMux) {
if (clusterSnpFut != null) {
if (endFail.isEmpty() && snpReq.err == null) {
clusterSnpFut.onDone();
if (log.isInfoEnabled())
log.info("Cluster-wide snapshot operation finished successfully [req=" + snpReq + ']');
}
else if (snpReq.err == null) {
clusterSnpFut.onDone(new IgniteCheckedException("Snapshot creation has been finished with an error. " +
"Local snapshot tasks may not have finished completely, or finalizing the results failed " +
"[fail=" + endFail + ", err=" + err + ']'));
}
else
clusterSnpFut.onDone(snpReq.err);
clusterSnpFut = null;
}
}
}
/**
* @return {@code True} if snapshot operation is in progress.
*/
public boolean isSnapshotCreating() {
if (clusterSnpReq != null)
return true;
synchronized (snpOpMux) {
return clusterSnpReq != null || clusterSnpFut != null;
}
}
/**
* @return List of all known snapshots on the local node.
*/
public List<String> localSnapshotNames() {
if (cctx.kernalContext().clientNode())
throw new UnsupportedOperationException("Client and daemon nodes cannot perform this operation.");
if (locSnpDir == null)
return Collections.emptyList();
synchronized (snpOpMux) {
return Arrays.stream(locSnpDir.listFiles(File::isDirectory))
.map(File::getName)
.collect(Collectors.toList());
}
}
/** {@inheritDoc} */
@Override public IgniteFuture<Void> cancelSnapshot(String name) {
A.notNullOrEmpty(name, "Snapshot name cannot be null or empty.");
IgniteInternalFuture<Void> fut0 = cctx.kernalContext().closure()
.broadcast(new CancelSnapshotClosure(),
name,
cctx.discovery().aliveServerNodes(),
null)
.chain(new CX1<IgniteInternalFuture<Collection<Void>>, Void>() {
@Override public Void applyx(IgniteInternalFuture<Collection<Void>> f) throws IgniteCheckedException {
f.get();
return null;
}
});
return new IgniteFutureImpl<>(fut0);
}
/**
* @param name Snapshot name to cancel operation on local node.
*/
public void cancelLocalSnapshotTask(String name) {
A.notNullOrEmpty(name, "Snapshot name cannot be null or empty.");
ClusterSnapshotFuture fut0 = null;
busyLock.enterBusy();
try {
for (SnapshotFutureTask sctx : locSnpTasks.values()) {
if (sctx.snapshotName().equals(name))
sctx.cancel();
}
synchronized (snpOpMux) {
if (clusterSnpFut != null)
fut0 = clusterSnpFut;
}
}
finally {
busyLock.leaveBusy();
}
// Future may be completed with cancelled exception, which is expected.
try {
if (fut0 != null)
fut0.get();
}
catch (IgniteCheckedException e) {
if (e instanceof IgniteFutureCancelledCheckedException) {
if (log.isInfoEnabled())
log.info("Expected cancelled exception: " + e.getMessage());
}
else
throw new IgniteException(e);
}
}
/** {@inheritDoc} */
@Override public IgniteFuture<Void> createSnapshot(String name) {
A.notNullOrEmpty(name, "Snapshot name cannot be null or empty.");
A.ensure(U.alphanumericUnderscore(name), "Snapshot name must satisfy the following name pattern: a-zA-Z0-9_");
try {
if (!IgniteFeatures.allNodesSupports(cctx.discovery().aliveServerNodes(), PERSISTENCE_CACHE_SNAPSHOT))
throw new IgniteException("Not all nodes in the cluster support a snapshot operation.");
if (!CU.isPersistenceEnabled(cctx.gridConfig())) {
throw new IgniteException("Create snapshot request has been rejected. Snapshots of in-memory " +
"clusters are not allowed.");
}
if (!cctx.kernalContext().state().clusterState().state().active())
throw new IgniteException("Snapshot operation has been rejected. The cluster is inactive.");
DiscoveryDataClusterState clusterState = cctx.kernalContext().state().clusterState();
if (!clusterState.hasBaselineTopology())
throw new IgniteException("Snapshot operation has been rejected. The baseline topology is not configured for the cluster.");
if (cctx.kernalContext().clientNode()) {
ClusterNode crd = U.oldest(cctx.kernalContext().discovery().aliveServerNodes(), null);
if (crd == null)
throw new IgniteException("There are no alive server nodes in the cluster");
return new IgniteSnapshotFutureImpl(cctx.kernalContext().closure()
.callAsync(new CreateSnapshotClosure(),
name,
Collections.singletonList(crd),
null));
}
ClusterSnapshotFuture snpFut0;
synchronized (snpOpMux) {
if (clusterSnpFut != null && !clusterSnpFut.isDone())
throw new IgniteException("Create snapshot request has been rejected. The previous snapshot operation was not completed.");
if (clusterSnpReq != null)
throw new IgniteException("Create snapshot request has been rejected. Parallel snapshot processes are not allowed.");
if (localSnapshotNames().contains(name))
throw new IgniteException("Create snapshot request has been rejected. Snapshot with given name already exists on local node.");
snpFut0 = new ClusterSnapshotFuture(UUID.randomUUID(), name);
clusterSnpFut = snpFut0;
lastSeenSnpFut = snpFut0;
}
List<Integer> grps = cctx.cache().persistentGroups().stream()
.filter(g -> cctx.cache().cacheType(g.cacheOrGroupName()) == CacheType.USER)
.filter(g -> !g.config().isEncryptionEnabled())
.map(CacheGroupDescriptor::groupId)
.collect(Collectors.toList());
List<ClusterNode> srvNodes = cctx.discovery().serverNodes(AffinityTopologyVersion.NONE);
startSnpProc.start(snpFut0.rqId, new SnapshotOperationRequest(snpFut0.rqId,
cctx.localNodeId(),
name,
grps,
new HashSet<>(F.viewReadOnly(srvNodes,
F.node2id(),
(node) -> CU.baselineNode(node, clusterState)))));
if (log.isInfoEnabled())
log.info("Cluster-wide snapshot operation started [snpName=" + name + ", grps=" + grps + ']');
return new IgniteFutureImpl<>(snpFut0);
}
catch (Exception e) {
U.error(log, "Start snapshot operation failed", e);
lastSeenSnpFut = new ClusterSnapshotFuture(name, e);
return new IgniteFinishedFutureImpl<>(e);
}
}
/** {@inheritDoc} */
@Override public void onReadyForReadWrite(ReadWriteMetastorage metaStorage) throws IgniteCheckedException {
synchronized (snpOpMux) {
this.metaStorage = metaStorage;
if (recovered)
removeLastMetaStorageKey();
recovered = false;
}
}
/** {@inheritDoc} */
@Override public void onReadyForRead(ReadOnlyMetastorage metaStorage) throws IgniteCheckedException {
// A snapshot which was not completed because the local node crashed must be deleted.
String snpName = (String)metaStorage.read(SNP_RUNNING_KEY);
if (snpName == null)
return;
recovered = true;
for (File tmp : snapshotTmpDir().listFiles())
U.delete(tmp);
deleteSnapshot(snapshotLocalDir(snpName), pdsSettings.folderName());
if (log.isInfoEnabled()) {
log.info("Previous attempt to create a snapshot failed due to a local node crash. All resources " +
"related to the snapshot operation have been deleted: " + snpName);
}
}
/**
* @param evt Discovery event to check.
* @return {@code true} if exchange started by snapshot operation.
*/
public static boolean isSnapshotOperation(DiscoveryEvent evt) {
return !evt.eventNode().isClient() &&
evt.type() == EVT_DISCOVERY_CUSTOM_EVT &&
((DiscoveryCustomEvent)evt).customMessage() instanceof SnapshotStartDiscoveryMessage;
}
/** {@inheritDoc} */
@Override public void onDoneBeforeTopologyUnlock(GridDhtPartitionsExchangeFuture fut) {
if (clusterSnpReq == null || cctx.kernalContext().clientNode())
return;
SnapshotOperationRequest snpReq = clusterSnpReq;
SnapshotFutureTask task = locSnpTasks.get(snpReq.snpName);
if (task == null)
return;
if (task.start()) {
cctx.database().forceCheckpoint(String.format("Start snapshot operation: %s", snpReq.snpName));
// Schedule the task on a checkpoint and wait until it starts.
try {
task.awaitStarted();
}
catch (IgniteCheckedException e) {
U.error(log, "Failed to wait until the cluster-wide snapshot operation started", e);
}
}
}
/**
* @param grps List of cache groups which will be destroyed.
*/
public void onCacheGroupsStopped(List<Integer> grps) {
for (SnapshotFutureTask sctx : locSnpTasks.values()) {
Set<Integer> retain = new HashSet<>(grps);
retain.retainAll(sctx.affectedCacheGroups());
if (!retain.isEmpty()) {
sctx.acceptException(new IgniteCheckedException("Snapshot has been interrupted because some of the required " +
"cache groups were stopped: " + retain));
}
}
}
/**
* @param snpName Unique snapshot name.
 * @param srcNodeId Node id which caused the snapshot operation.
 * @param parts Collection of (cache group, partitions) pairs to be snapshotted.
 * @param snpSndr Snapshot sender which processes the data produced by the task.
* @return Snapshot operation task which should be registered on checkpoint to run.
*/
SnapshotFutureTask registerSnapshotTask(
String snpName,
UUID srcNodeId,
Map<Integer, Set<Integer>> parts,
SnapshotSender snpSndr
) {
if (!busyLock.enterBusy())
return new SnapshotFutureTask(new IgniteCheckedException("Snapshot manager is stopping [locNodeId=" + cctx.localNodeId() + ']'));
try {
if (locSnpTasks.containsKey(snpName))
return new SnapshotFutureTask(new IgniteCheckedException("Snapshot with requested name is already scheduled: " + snpName));
SnapshotFutureTask snpFutTask;
SnapshotFutureTask prev = locSnpTasks.putIfAbsent(snpName,
snpFutTask = new SnapshotFutureTask(cctx,
srcNodeId,
snpName,
tmpWorkDir,
ioFactory,
snpSndr,
parts,
locBuff));
if (prev != null)
return new SnapshotFutureTask(new IgniteCheckedException("Snapshot with requested name is already scheduled: " + snpName));
if (log.isInfoEnabled()) {
log.info("Snapshot task has been registered on local node [sctx=" + this +
", topVer=" + cctx.discovery().topologyVersionEx() + ']');
}
snpFutTask.listen(f -> locSnpTasks.remove(snpName));
return snpFutTask;
}
finally {
busyLock.leaveBusy();
}
}
/**
* @param factory Factory which produces {@link LocalSnapshotSender} implementation.
*/
void localSnapshotSenderFactory(Function<String, SnapshotSender> factory) {
locSndrFactory = factory;
}
/**
* @return Factory which produces {@link LocalSnapshotSender} implementation.
*/
Function<String, SnapshotSender> localSnapshotSenderFactory() {
return locSndrFactory;
}
/** The snapshot finished successfully or was already restored; the key can be removed. */
private void removeLastMetaStorageKey() throws IgniteCheckedException {
cctx.database().checkpointReadLock();
try {
metaStorage.remove(SNP_RUNNING_KEY);
}
finally {
cctx.database().checkpointReadUnlock();
}
}
/**
* @return The executor used to run snapshot tasks.
*/
Executor snapshotExecutorService() {
assert snpRunner != null;
return snpRunner;
}
/**
 * @param ioFactory Factory to create an IO interface over page stores.
*/
void ioFactory(FileIOFactory ioFactory) {
this.ioFactory = ioFactory;
}
/**
* @return Relative configured path of persistence data storage directory for the local node.
* Example: {@code snapshotWorkDir/db/IgniteNodeName0}
*/
static String databaseRelativePath(String folderName) {
return Paths.get(DB_DEFAULT_FOLDER, folderName).toString();
}
/**
* @param cfg Ignite configuration.
* @return Snapshot directory resolved through given configuration.
*/
public static File resolveSnapshotWorkDirectory(IgniteConfiguration cfg) {
try {
return U.resolveWorkDirectory(cfg.getWorkDirectory() == null ? U.defaultWorkDirectory() : cfg.getWorkDirectory(),
cfg.getSnapshotPath(), false);
}
catch (IgniteCheckedException e) {
throw new IgniteException(e);
}
}
/**
* @param factory Factory to produce FileIO access.
* @param from Copy from file.
* @param to Copy data to file.
* @param length Number of bytes to copy from beginning.
*/
static void copy(FileIOFactory factory, File from, File to, long length) {
try (FileIO src = factory.create(from, READ);
FileChannel dest = new FileOutputStream(to).getChannel()) {
if (src.size() < length) {
throw new IgniteException("The source file to copy is not long enough " +
"[expected=" + length + ", actual=" + src.size() + ']');
}
src.position(0);
long written = 0;
while (written < length)
written += src.transferTo(written, length - written, dest);
}
catch (IOException e) {
throw new IgniteException(e);
}
}
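The bounded copy loop above relies on FileChannel.transferTo, which may transfer fewer bytes than requested in a single call, hence the while loop. A self-contained sketch of the same pattern (class and method names are illustrative, not the Ignite API):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

public class BoundedCopySketch {
    /** Copies exactly {@code length} bytes from the beginning of {@code from} into {@code to}. */
    public static void copy(File from, File to, long length) throws IOException {
        try (FileChannel src = FileChannel.open(from.toPath(), StandardOpenOption.READ);
             FileChannel dest = new FileOutputStream(to).getChannel()) {
            if (src.size() < length)
                throw new IOException("Source file is shorter than the requested length: " + src.size());

            long written = 0;

            // transferTo may copy fewer bytes than requested, so loop until done.
            while (written < length)
                written += src.transferTo(written, length - written, dest);
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("src", ".bin");
        Files.write(src.toPath(), new byte[] {1, 2, 3, 4, 5});
        File dst = File.createTempFile("dst", ".bin");

        copy(src, dst, 3);

        System.out.println(dst.length()); // 3
    }
}
```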
/**
* Snapshot sender which writes all data to local directory.
*/
private class LocalSnapshotSender extends SnapshotSender {
/** Snapshot name. */
private final String snpName;
/** Local snapshot directory. */
private final File snpLocDir;
/** Local node snapshot directory calculated on snapshot directory. */
private File dbDir;
/** Size of page. */
private final int pageSize;
/**
* @param snpName Snapshot name.
*/
public LocalSnapshotSender(String snpName) {
super(IgniteSnapshotManager.this.log, snpRunner);
this.snpName = snpName;
snpLocDir = snapshotLocalDir(snpName);
pageSize = cctx.kernalContext().config().getDataStorageConfiguration().getPageSize();
}
/** {@inheritDoc} */
@Override protected void init(int partsCnt) {
dbDir = new File(snpLocDir, databaseRelativePath(pdsSettings.folderName()));
if (dbDir.exists()) {
throw new IgniteException("Snapshot with given name already exists " +
"[snpName=" + snpName + ", absPath=" + dbDir.getAbsolutePath() + ']');
}
cctx.database().checkpointReadLock();
try {
assert metaStorage != null && metaStorage.read(SNP_RUNNING_KEY) == null :
"The previous snapshot hasn't been completed correctly";
metaStorage.write(SNP_RUNNING_KEY, snpName);
U.ensureDirectory(dbDir, "snapshot work directory", log);
}
catch (IgniteCheckedException e) {
throw new IgniteException(e);
}
finally {
cctx.database().checkpointReadUnlock();
}
}
/** {@inheritDoc} */
@Override public void sendCacheConfig0(File ccfg, String cacheDirName) {
assert dbDir != null;
try {
File cacheDir = U.resolveWorkDirectory(dbDir.getAbsolutePath(), cacheDirName, false);
copy(ioFactory, ccfg, new File(cacheDir, ccfg.getName()), ccfg.length());
}
catch (IgniteCheckedException e) {
throw new IgniteException(e);
}
}
/** {@inheritDoc} */
@Override public void sendMarshallerMeta0(List<Map<Integer, MappedName>> mappings) {
if (mappings == null)
return;
try {
saveMappings(cctx.kernalContext(), mappings, snpLocDir);
}
catch (IgniteCheckedException e) {
throw new IgniteException(e);
}
}
/** {@inheritDoc} */
@Override public void sendBinaryMeta0(Collection<BinaryType> types) {
if (types == null)
return;
cctx.kernalContext().cacheObjects().saveMetadata(types, snpLocDir);
}
/** {@inheritDoc} */
@Override public void sendPart0(File part, String cacheDirName, GroupPartitionId pair, Long len) {
try {
if (len == 0)
return;
File cacheDir = U.resolveWorkDirectory(dbDir.getAbsolutePath(), cacheDirName, false);
File snpPart = new File(cacheDir, part.getName());
if (!snpPart.exists() || snpPart.delete())
snpPart.createNewFile();
copy(ioFactory, part, snpPart, len);
if (log.isInfoEnabled()) {
log.info("Partition has been snapshotted [snapshotDir=" + dbDir.getAbsolutePath() +
", cacheDirName=" + cacheDirName + ", part=" + part.getName() +
", length=" + part.length() + ", snapshot=" + snpPart.getName() + ']');
}
}
catch (IOException | IgniteCheckedException ex) {
throw new IgniteException(ex);
}
}
/** {@inheritDoc} */
@Override public void sendDelta0(File delta, String cacheDirName, GroupPartitionId pair) {
File snpPart = getPartitionFile(dbDir, cacheDirName, pair.getPartitionId());
if (log.isInfoEnabled()) {
log.info("Start partition snapshot recovery with the given delta page file [part=" + snpPart +
", delta=" + delta + ']');
}
try (FileIO fileIo = ioFactory.create(delta, READ);
FilePageStore pageStore = (FilePageStore)storeFactory
.apply(pair.getGroupId(), false)
.createPageStore(getFlagByPartId(pair.getPartitionId()),
snpPart::toPath,
new LongAdderMetric("NO_OP", null))
) {
ByteBuffer pageBuf = ByteBuffer.allocate(pageSize)
.order(ByteOrder.nativeOrder());
long totalBytes = fileIo.size();
assert totalBytes % pageSize == 0 : "The given delta page file has an incorrect size: " + fileIo.size();
pageStore.beginRecover();
for (long pos = 0; pos < totalBytes; pos += pageSize) {
long read = fileIo.readFully(pageBuf, pos);
assert read == pageBuf.capacity();
pageBuf.flip();
if (log.isDebugEnabled()) {
log.debug("Read page from the given delta file [path=" + delta.getName() +
", pageId=" + PageIO.getPageId(pageBuf) + ", pos=" + pos + ", pages=" + (totalBytes / pageSize) +
", crcBuff=" + FastCrc.calcCrc(pageBuf, pageBuf.limit()) + ", crcPage=" + PageIO.getCrc(pageBuf) + ']');
pageBuf.rewind();
}
pageStore.write(PageIO.getPageId(pageBuf), pageBuf, 0, false);
pageBuf.flip();
}
pageStore.finishRecover();
}
catch (IOException | IgniteCheckedException e) {
throw new IgniteException(e);
}
}
/** {@inheritDoc} */
@Override protected void close0(@Nullable Throwable th) {
if (th == null) {
if (log.isInfoEnabled())
log.info("Local snapshot sender closed, resources released [dbNodeSnpDir=" + dbDir + ']');
}
else {
deleteSnapshot(snpLocDir, pdsSettings.folderName());
if (log.isDebugEnabled())
log.debug("Local snapshot sender closed due to an error: " + th.getMessage());
}
}
}
/** Snapshot start request for {@link DistributedProcess} initiate message. */
private static class SnapshotOperationRequest implements Serializable {
/** Serial version uid. */
private static final long serialVersionUID = 0L;
/** Unique snapshot request id. */
private final UUID rqId;
/** Id of the source node which triggered the request. */
private final UUID srcNodeId;
/** Snapshot name. */
private final String snpName;
/** The list of cache groups to include into snapshot. */
@GridToStringInclude
private final List<Integer> grpIds;
/** The set of baseline nodes affected by the snapshot operation. */
@GridToStringInclude
private final Set<UUID> bltNodes;
/** Exception occurred during snapshot operation processing. */
private volatile IgniteCheckedException err;
/**
* @param rqId Unique snapshot request id.
* @param srcNodeId Id of the source node which triggered the request.
* @param snpName Snapshot name.
* @param grpIds Cache groups to include into snapshot.
* @param bltNodes Baseline nodes affected by the snapshot operation.
*/
public SnapshotOperationRequest(UUID rqId, UUID srcNodeId, String snpName, List<Integer> grpIds, Set<UUID> bltNodes) {
this.rqId = rqId;
this.srcNodeId = srcNodeId;
this.snpName = snpName;
this.grpIds = grpIds;
this.bltNodes = bltNodes;
}
/** {@inheritDoc} */
@Override public String toString() {
return S.toString(SnapshotOperationRequest.class, this);
}
}
/** */
private static class SnapshotOperationResponse implements Serializable {
/** Serial version uid. */
private static final long serialVersionUID = 0L;
}
/** Snapshot operation start message. */
private static class SnapshotStartDiscoveryMessage extends InitMessage<SnapshotOperationRequest>
implements SnapshotDiscoveryMessage {
/** Serial version UID. */
private static final long serialVersionUID = 0L;
/**
* @param processId Unique process id.
* @param req Snapshot initial request.
*/
public SnapshotStartDiscoveryMessage(
UUID processId,
SnapshotOperationRequest req
) {
super(processId, START_SNAPSHOT, req);
}
/** {@inheritDoc} */
@Override public boolean needExchange() {
return true;
}
/** {@inheritDoc} */
@Override public boolean needAssignPartitions() {
return false;
}
/** {@inheritDoc} */
@Override public String toString() {
return S.toString(SnapshotStartDiscoveryMessage.class, this, super.toString());
}
}
/** */
private static class ClusterSnapshotFuture extends GridFutureAdapter<Void> {
/** Unique snapshot request id. */
private final UUID rqId;
/** Snapshot name. */
private final String name;
/** Snapshot start time. */
private final long startTime;
/** Snapshot finish time. */
private volatile long endTime;
/**
* Default constructor.
*/
public ClusterSnapshotFuture() {
onDone();
rqId = null;
name = "";
startTime = 0;
endTime = 0;
}
/**
* @param name Snapshot name.
* @param err Error starting snapshot operation.
*/
public ClusterSnapshotFuture(String name, Exception err) {
onDone(err);
this.name = name;
startTime = U.currentTimeMillis();
endTime = 0;
rqId = null;
}
/**
* @param rqId Unique snapshot request id.
* @param name Snapshot name.
*/
public ClusterSnapshotFuture(UUID rqId, String name) {
this.rqId = rqId;
this.name = name;
startTime = U.currentTimeMillis();
}
/** {@inheritDoc} */
@Override protected boolean onDone(@Nullable Void res, @Nullable Throwable err, boolean cancel) {
endTime = U.currentTimeMillis();
return super.onDone(res, err, cancel);
}
}
/** Start creation of cluster snapshot closure. */
@GridInternal
private static class CreateSnapshotClosure implements IgniteClosure<String, Void> {
/** Serial version UID. */
private static final long serialVersionUID = 0L;
/** Auto-injected grid instance. */
@IgniteInstanceResource
private transient IgniteEx ignite;
/** {@inheritDoc} */
@Override public Void apply(String name) {
ignite.snapshot().createSnapshot(name).get();
return null;
}
}
/** Cancel snapshot operation closure. */
@GridInternal
private static class CancelSnapshotClosure implements IgniteClosure<String, Void> {
/** Serial version uid. */
private static final long serialVersionUID = 0L;
/** Auto-injected grid instance. */
@IgniteInstanceResource
private transient IgniteEx ignite;
/** {@inheritDoc} */
@Override public Void apply(String snpName) {
ignite.context().cache().context().snapshotMgr().cancelLocalSnapshotTask(snpName);
return null;
}
}
/** Snapshot future which converts internal checked exceptions to public {@link IgniteException}s. */
private static class IgniteSnapshotFutureImpl extends IgniteFutureImpl<Void> {
/** @param fut Internal future. */
public IgniteSnapshotFutureImpl(IgniteInternalFuture<Void> fut) {
super(fut);
}
/** {@inheritDoc} */
@Override protected IgniteException convertException(IgniteCheckedException e) {
if (e instanceof IgniteClientDisconnectedCheckedException)
return new IgniteException("Client disconnected. Snapshot result is unknown", U.convertException(e));
else
return new IgniteException("Snapshot has not been created", U.convertException(e));
}
}
}
// Code generated by thriftrw-plugin-yarpc
// @generated
// Copyright (c) 2020 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package thrifttestclient
import (
context "context"
wire "go.uber.org/thriftrw/wire"
yarpc "go.uber.org/yarpc"
transport "go.uber.org/yarpc/api/transport"
thrift "go.uber.org/yarpc/encoding/thrift"
gauntlet "go.uber.org/yarpc/internal/crossdock/thrift/gauntlet"
reflect "reflect"
)
// Interface is a client for the ThriftTest service.
type Interface interface {
TestBinary(
ctx context.Context,
Thing []byte,
opts ...yarpc.CallOption,
) ([]byte, error)
TestByte(
ctx context.Context,
Thing *int8,
opts ...yarpc.CallOption,
) (int8, error)
TestDouble(
ctx context.Context,
Thing *float64,
opts ...yarpc.CallOption,
) (float64, error)
TestEnum(
ctx context.Context,
Thing *gauntlet.Numberz,
opts ...yarpc.CallOption,
) (gauntlet.Numberz, error)
TestException(
ctx context.Context,
Arg *string,
opts ...yarpc.CallOption,
) error
TestI32(
ctx context.Context,
Thing *int32,
opts ...yarpc.CallOption,
) (int32, error)
TestI64(
ctx context.Context,
Thing *int64,
opts ...yarpc.CallOption,
) (int64, error)
TestInsanity(
ctx context.Context,
Argument *gauntlet.Insanity,
opts ...yarpc.CallOption,
) (map[gauntlet.UserId]map[gauntlet.Numberz]*gauntlet.Insanity, error)
TestList(
ctx context.Context,
Thing []int32,
opts ...yarpc.CallOption,
) ([]int32, error)
TestMap(
ctx context.Context,
Thing map[int32]int32,
opts ...yarpc.CallOption,
) (map[int32]int32, error)
TestMapMap(
ctx context.Context,
Hello *int32,
opts ...yarpc.CallOption,
) (map[int32]map[int32]int32, error)
TestMulti(
ctx context.Context,
Arg0 *int8,
Arg1 *int32,
Arg2 *int64,
Arg3 map[int16]string,
Arg4 *gauntlet.Numberz,
Arg5 *gauntlet.UserId,
opts ...yarpc.CallOption,
) (*gauntlet.Xtruct, error)
TestMultiException(
ctx context.Context,
Arg0 *string,
Arg1 *string,
opts ...yarpc.CallOption,
) (*gauntlet.Xtruct, error)
TestNest(
ctx context.Context,
Thing *gauntlet.Xtruct2,
opts ...yarpc.CallOption,
) (*gauntlet.Xtruct2, error)
TestOneway(
ctx context.Context,
SecondsToSleep *int32,
opts ...yarpc.CallOption,
) (yarpc.Ack, error)
TestSet(
ctx context.Context,
Thing map[int32]struct{},
opts ...yarpc.CallOption,
) (map[int32]struct{}, error)
TestString(
ctx context.Context,
Thing *string,
opts ...yarpc.CallOption,
) (string, error)
TestStringMap(
ctx context.Context,
Thing map[string]string,
opts ...yarpc.CallOption,
) (map[string]string, error)
TestStruct(
ctx context.Context,
Thing *gauntlet.Xtruct,
opts ...yarpc.CallOption,
) (*gauntlet.Xtruct, error)
TestTypedef(
ctx context.Context,
Thing *gauntlet.UserId,
opts ...yarpc.CallOption,
) (gauntlet.UserId, error)
TestVoid(
ctx context.Context,
opts ...yarpc.CallOption,
) error
}
// New builds a new client for the ThriftTest service.
//
// client := thrifttestclient.New(dispatcher.ClientConfig("thrifttest"))
func New(c transport.ClientConfig, opts ...thrift.ClientOption) Interface {
return client{
c: thrift.New(thrift.Config{
Service: "ThriftTest",
ClientConfig: c,
}, opts...),
}
}
func init() {
yarpc.RegisterClientBuilder(
func(c transport.ClientConfig, f reflect.StructField) Interface {
return New(c, thrift.ClientBuilderOptions(c, f)...)
},
)
}
type client struct {
c thrift.Client
}
func (c client) TestBinary(
ctx context.Context,
_Thing []byte,
opts ...yarpc.CallOption,
) (success []byte, err error) {
args := gauntlet.ThriftTest_TestBinary_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestBinary_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestBinary_Helper.UnwrapResponse(&result)
return
}
func (c client) TestByte(
ctx context.Context,
_Thing *int8,
opts ...yarpc.CallOption,
) (success int8, err error) {
args := gauntlet.ThriftTest_TestByte_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestByte_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestByte_Helper.UnwrapResponse(&result)
return
}
func (c client) TestDouble(
ctx context.Context,
_Thing *float64,
opts ...yarpc.CallOption,
) (success float64, err error) {
args := gauntlet.ThriftTest_TestDouble_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestDouble_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestDouble_Helper.UnwrapResponse(&result)
return
}
func (c client) TestEnum(
ctx context.Context,
_Thing *gauntlet.Numberz,
opts ...yarpc.CallOption,
) (success gauntlet.Numberz, err error) {
args := gauntlet.ThriftTest_TestEnum_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestEnum_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestEnum_Helper.UnwrapResponse(&result)
return
}
func (c client) TestException(
ctx context.Context,
_Arg *string,
opts ...yarpc.CallOption,
) (err error) {
args := gauntlet.ThriftTest_TestException_Helper.Args(_Arg)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestException_Result
if err = result.FromWire(body); err != nil {
return
}
err = gauntlet.ThriftTest_TestException_Helper.UnwrapResponse(&result)
return
}
func (c client) TestI32(
ctx context.Context,
_Thing *int32,
opts ...yarpc.CallOption,
) (success int32, err error) {
args := gauntlet.ThriftTest_TestI32_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestI32_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestI32_Helper.UnwrapResponse(&result)
return
}
func (c client) TestI64(
ctx context.Context,
_Thing *int64,
opts ...yarpc.CallOption,
) (success int64, err error) {
args := gauntlet.ThriftTest_TestI64_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestI64_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestI64_Helper.UnwrapResponse(&result)
return
}
func (c client) TestInsanity(
ctx context.Context,
_Argument *gauntlet.Insanity,
opts ...yarpc.CallOption,
) (success map[gauntlet.UserId]map[gauntlet.Numberz]*gauntlet.Insanity, err error) {
args := gauntlet.ThriftTest_TestInsanity_Helper.Args(_Argument)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestInsanity_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestInsanity_Helper.UnwrapResponse(&result)
return
}
func (c client) TestList(
ctx context.Context,
_Thing []int32,
opts ...yarpc.CallOption,
) (success []int32, err error) {
args := gauntlet.ThriftTest_TestList_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestList_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestList_Helper.UnwrapResponse(&result)
return
}
func (c client) TestMap(
ctx context.Context,
_Thing map[int32]int32,
opts ...yarpc.CallOption,
) (success map[int32]int32, err error) {
args := gauntlet.ThriftTest_TestMap_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestMap_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestMap_Helper.UnwrapResponse(&result)
return
}
func (c client) TestMapMap(
ctx context.Context,
_Hello *int32,
opts ...yarpc.CallOption,
) (success map[int32]map[int32]int32, err error) {
args := gauntlet.ThriftTest_TestMapMap_Helper.Args(_Hello)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestMapMap_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestMapMap_Helper.UnwrapResponse(&result)
return
}
func (c client) TestMulti(
ctx context.Context,
_Arg0 *int8,
_Arg1 *int32,
_Arg2 *int64,
_Arg3 map[int16]string,
_Arg4 *gauntlet.Numberz,
_Arg5 *gauntlet.UserId,
opts ...yarpc.CallOption,
) (success *gauntlet.Xtruct, err error) {
args := gauntlet.ThriftTest_TestMulti_Helper.Args(_Arg0, _Arg1, _Arg2, _Arg3, _Arg4, _Arg5)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestMulti_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestMulti_Helper.UnwrapResponse(&result)
return
}
func (c client) TestMultiException(
ctx context.Context,
_Arg0 *string,
_Arg1 *string,
opts ...yarpc.CallOption,
) (success *gauntlet.Xtruct, err error) {
args := gauntlet.ThriftTest_TestMultiException_Helper.Args(_Arg0, _Arg1)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestMultiException_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestMultiException_Helper.UnwrapResponse(&result)
return
}
func (c client) TestNest(
ctx context.Context,
_Thing *gauntlet.Xtruct2,
opts ...yarpc.CallOption,
) (success *gauntlet.Xtruct2, err error) {
args := gauntlet.ThriftTest_TestNest_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestNest_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestNest_Helper.UnwrapResponse(&result)
return
}
func (c client) TestOneway(
ctx context.Context,
_SecondsToSleep *int32,
opts ...yarpc.CallOption,
) (yarpc.Ack, error) {
args := gauntlet.ThriftTest_TestOneway_Helper.Args(_SecondsToSleep)
return c.c.CallOneway(ctx, args, opts...)
}
func (c client) TestSet(
ctx context.Context,
_Thing map[int32]struct{},
opts ...yarpc.CallOption,
) (success map[int32]struct{}, err error) {
args := gauntlet.ThriftTest_TestSet_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestSet_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestSet_Helper.UnwrapResponse(&result)
return
}
func (c client) TestString(
ctx context.Context,
_Thing *string,
opts ...yarpc.CallOption,
) (success string, err error) {
args := gauntlet.ThriftTest_TestString_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestString_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestString_Helper.UnwrapResponse(&result)
return
}
func (c client) TestStringMap(
ctx context.Context,
_Thing map[string]string,
opts ...yarpc.CallOption,
) (success map[string]string, err error) {
args := gauntlet.ThriftTest_TestStringMap_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestStringMap_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestStringMap_Helper.UnwrapResponse(&result)
return
}
func (c client) TestStruct(
ctx context.Context,
_Thing *gauntlet.Xtruct,
opts ...yarpc.CallOption,
) (success *gauntlet.Xtruct, err error) {
args := gauntlet.ThriftTest_TestStruct_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestStruct_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestStruct_Helper.UnwrapResponse(&result)
return
}
func (c client) TestTypedef(
ctx context.Context,
_Thing *gauntlet.UserId,
opts ...yarpc.CallOption,
) (success gauntlet.UserId, err error) {
args := gauntlet.ThriftTest_TestTypedef_Helper.Args(_Thing)
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestTypedef_Result
if err = result.FromWire(body); err != nil {
return
}
success, err = gauntlet.ThriftTest_TestTypedef_Helper.UnwrapResponse(&result)
return
}
func (c client) TestVoid(
ctx context.Context,
opts ...yarpc.CallOption,
) (err error) {
args := gauntlet.ThriftTest_TestVoid_Helper.Args()
var body wire.Value
body, err = c.c.Call(ctx, args, opts...)
if err != nil {
return
}
var result gauntlet.ThriftTest_TestVoid_Result
if err = result.FromWire(body); err != nil {
return
}
err = gauntlet.ThriftTest_TestVoid_Helper.UnwrapResponse(&result)
return
}
// !$*UTF8*$!
{
archiveVersion = 1;
classes = {
};
objectVersion = 46;
objects = {
/* Begin PBXBuildFile section */
95BD8F7D1F03EC410041E2B7 /* main.m in Sources */ = {isa = PBXBuildFile; fileRef = 95BD8F7C1F03EC410041E2B7 /* main.m */; };
95BD8F801F03EC410041E2B7 /* AppDelegate.m in Sources */ = {isa = PBXBuildFile; fileRef = 95BD8F7F1F03EC410041E2B7 /* AppDelegate.m */; };
95BD8F831F03EC410041E2B7 /* ViewController.m in Sources */ = {isa = PBXBuildFile; fileRef = 95BD8F821F03EC410041E2B7 /* ViewController.m */; };
95BD8F881F03EC410041E2B7 /* Assets.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = 95BD8F871F03EC410041E2B7 /* Assets.xcassets */; };
95BD8F8B1F03EC410041E2B7 /* LaunchScreen.storyboard in Resources */ = {isa = PBXBuildFile; fileRef = 95BD8F891F03EC410041E2B7 /* LaunchScreen.storyboard */; };
95BD8F931F03EC680041E2B7 /* Main.storyboard in Resources */ = {isa = PBXBuildFile; fileRef = 95BD8F921F03EC680041E2B7 /* Main.storyboard */; };
95BD8F991F03ECBA0041E2B7 /* CircleIndicatorView.m in Sources */ = {isa = PBXBuildFile; fileRef = 95BD8F961F03ECBA0041E2B7 /* CircleIndicatorView.m */; };
95BD8F9A1F03ECBA0041E2B7 /* RectangleIndicatorView.m in Sources */ = {isa = PBXBuildFile; fileRef = 95BD8F981F03ECBA0041E2B7 /* RectangleIndicatorView.m */; };
/* End PBXBuildFile section */
/* Begin PBXFileReference section */
95BD8F781F03EC410041E2B7 /* Indicator.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = Indicator.app; sourceTree = BUILT_PRODUCTS_DIR; };
95BD8F7C1F03EC410041E2B7 /* main.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = main.m; sourceTree = "<group>"; };
95BD8F7E1F03EC410041E2B7 /* AppDelegate.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = AppDelegate.h; sourceTree = "<group>"; };
95BD8F7F1F03EC410041E2B7 /* AppDelegate.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = AppDelegate.m; sourceTree = "<group>"; };
95BD8F811F03EC410041E2B7 /* ViewController.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ViewController.h; sourceTree = "<group>"; };
95BD8F821F03EC410041E2B7 /* ViewController.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = ViewController.m; sourceTree = "<group>"; };
95BD8F871F03EC410041E2B7 /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = "<group>"; };
95BD8F8A1F03EC410041E2B7 /* Base */ = {isa = PBXFileReference; lastKnownFileType = file.storyboard; name = Base; path = Base.lproj/LaunchScreen.storyboard; sourceTree = "<group>"; };
95BD8F8C1F03EC410041E2B7 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = "<group>"; };
95BD8F921F03EC680041E2B7 /* Main.storyboard */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = file.storyboard; path = Main.storyboard; sourceTree = "<group>"; };
95BD8F951F03ECBA0041E2B7 /* CircleIndicatorView.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CircleIndicatorView.h; sourceTree = "<group>"; };
95BD8F961F03ECBA0041E2B7 /* CircleIndicatorView.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = CircleIndicatorView.m; sourceTree = "<group>"; };
95BD8F971F03ECBA0041E2B7 /* RectangleIndicatorView.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RectangleIndicatorView.h; sourceTree = "<group>"; };
95BD8F981F03ECBA0041E2B7 /* RectangleIndicatorView.m */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.objc; path = RectangleIndicatorView.m; sourceTree = "<group>"; };
/* End PBXFileReference section */
/* Begin PBXFrameworksBuildPhase section */
95BD8F751F03EC410041E2B7 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXFrameworksBuildPhase section */
/* Begin PBXGroup section */
95BD8F6F1F03EC410041E2B7 = {
isa = PBXGroup;
children = (
95BD8F7A1F03EC410041E2B7 /* Indicator */,
95BD8F791F03EC410041E2B7 /* Products */,
);
sourceTree = "<group>";
};
95BD8F791F03EC410041E2B7 /* Products */ = {
isa = PBXGroup;
children = (
95BD8F781F03EC410041E2B7 /* Indicator.app */,
);
name = Products;
sourceTree = "<group>";
};
95BD8F7A1F03EC410041E2B7 /* Indicator */ = {
isa = PBXGroup;
children = (
95BD8F941F03ECBA0041E2B7 /* IndicatorView */,
95BD8F7E1F03EC410041E2B7 /* AppDelegate.h */,
95BD8F7F1F03EC410041E2B7 /* AppDelegate.m */,
95BD8F811F03EC410041E2B7 /* ViewController.h */,
95BD8F821F03EC410041E2B7 /* ViewController.m */,
95BD8F921F03EC680041E2B7 /* Main.storyboard */,
95BD8F871F03EC410041E2B7 /* Assets.xcassets */,
95BD8F891F03EC410041E2B7 /* LaunchScreen.storyboard */,
95BD8F8C1F03EC410041E2B7 /* Info.plist */,
95BD8F7B1F03EC410041E2B7 /* Supporting Files */,
);
path = Indicator;
sourceTree = "<group>";
};
95BD8F7B1F03EC410041E2B7 /* Supporting Files */ = {
isa = PBXGroup;
children = (
95BD8F7C1F03EC410041E2B7 /* main.m */,
);
name = "Supporting Files";
sourceTree = "<group>";
};
95BD8F941F03ECBA0041E2B7 /* IndicatorView */ = {
isa = PBXGroup;
children = (
95BD8F951F03ECBA0041E2B7 /* CircleIndicatorView.h */,
95BD8F961F03ECBA0041E2B7 /* CircleIndicatorView.m */,
95BD8F971F03ECBA0041E2B7 /* RectangleIndicatorView.h */,
95BD8F981F03ECBA0041E2B7 /* RectangleIndicatorView.m */,
);
path = IndicatorView;
sourceTree = "<group>";
};
/* End PBXGroup section */
/* Begin PBXNativeTarget section */
95BD8F771F03EC410041E2B7 /* Indicator */ = {
isa = PBXNativeTarget;
buildConfigurationList = 95BD8F8F1F03EC410041E2B7 /* Build configuration list for PBXNativeTarget "Indicator" */;
buildPhases = (
95BD8F741F03EC410041E2B7 /* Sources */,
95BD8F751F03EC410041E2B7 /* Frameworks */,
95BD8F761F03EC410041E2B7 /* Resources */,
);
buildRules = (
);
dependencies = (
);
name = Indicator;
productName = Indicator;
productReference = 95BD8F781F03EC410041E2B7 /* Indicator.app */;
productType = "com.apple.product-type.application";
};
/* End PBXNativeTarget section */
/* Begin PBXProject section */
95BD8F701F03EC410041E2B7 /* Project object */ = {
isa = PBXProject;
attributes = {
LastUpgradeCheck = 0830;
ORGANIZATIONNAME = MyCompany;
TargetAttributes = {
95BD8F771F03EC410041E2B7 = {
CreatedOnToolsVersion = 8.3.3;
ProvisioningStyle = Automatic;
};
};
};
buildConfigurationList = 95BD8F731F03EC410041E2B7 /* Build configuration list for PBXProject "Indicator" */;
compatibilityVersion = "Xcode 3.2";
developmentRegion = English;
hasScannedForEncodings = 0;
knownRegions = (
en,
Base,
);
mainGroup = 95BD8F6F1F03EC410041E2B7;
productRefGroup = 95BD8F791F03EC410041E2B7 /* Products */;
projectDirPath = "";
projectRoot = "";
targets = (
95BD8F771F03EC410041E2B7 /* Indicator */,
);
};
/* End PBXProject section */
/* Begin PBXResourcesBuildPhase section */
95BD8F761F03EC410041E2B7 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
files = (
95BD8F931F03EC680041E2B7 /* Main.storyboard in Resources */,
95BD8F8B1F03EC410041E2B7 /* LaunchScreen.storyboard in Resources */,
95BD8F881F03EC410041E2B7 /* Assets.xcassets in Resources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXResourcesBuildPhase section */
/* Begin PBXSourcesBuildPhase section */
95BD8F741F03EC410041E2B7 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
95BD8F831F03EC410041E2B7 /* ViewController.m in Sources */,
95BD8F801F03EC410041E2B7 /* AppDelegate.m in Sources */,
95BD8F991F03ECBA0041E2B7 /* CircleIndicatorView.m in Sources */,
95BD8F7D1F03EC410041E2B7 /* main.m in Sources */,
95BD8F9A1F03ECBA0041E2B7 /* RectangleIndicatorView.m in Sources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXSourcesBuildPhase section */
/* Begin PBXVariantGroup section */
95BD8F891F03EC410041E2B7 /* LaunchScreen.storyboard */ = {
isa = PBXVariantGroup;
children = (
95BD8F8A1F03EC410041E2B7 /* Base */,
);
name = LaunchScreen.storyboard;
sourceTree = "<group>";
};
/* End PBXVariantGroup section */
/* Begin XCBuildConfiguration section */
95BD8F8D1F03EC410041E2B7 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
"CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer";
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = dwarf;
ENABLE_STRICT_OBJC_MSGSEND = YES;
ENABLE_TESTABILITY = YES;
GCC_C_LANGUAGE_STANDARD = gnu99;
GCC_DYNAMIC_NO_PIC = NO;
GCC_NO_COMMON_BLOCKS = YES;
GCC_OPTIMIZATION_LEVEL = 0;
GCC_PREPROCESSOR_DEFINITIONS = (
"DEBUG=1",
"$(inherited)",
);
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 10.3;
MTL_ENABLE_DEBUG_INFO = YES;
ONLY_ACTIVE_ARCH = YES;
SDKROOT = iphoneos;
TARGETED_DEVICE_FAMILY = "1,2";
};
name = Debug;
};
95BD8F8E1F03EC410041E2B7 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++0x";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
"CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer";
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
ENABLE_NS_ASSERTIONS = NO;
ENABLE_STRICT_OBJC_MSGSEND = YES;
GCC_C_LANGUAGE_STANDARD = gnu99;
GCC_NO_COMMON_BLOCKS = YES;
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 10.3;
MTL_ENABLE_DEBUG_INFO = NO;
SDKROOT = iphoneos;
TARGETED_DEVICE_FAMILY = "1,2";
VALIDATE_PRODUCT = YES;
};
name = Release;
};
95BD8F901F03EC410041E2B7 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
INFOPLIST_FILE = Indicator/Info.plist;
LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks";
PRODUCT_BUNDLE_IDENTIFIER = cn.xxx.indicator.Indicator;
PRODUCT_NAME = "$(TARGET_NAME)";
};
name = Debug;
};
95BD8F911F03EC410041E2B7 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
INFOPLIST_FILE = Indicator/Info.plist;
LD_RUNPATH_SEARCH_PATHS = "$(inherited) @executable_path/Frameworks";
PRODUCT_BUNDLE_IDENTIFIER = cn.xxx.indicator.Indicator;
PRODUCT_NAME = "$(TARGET_NAME)";
};
name = Release;
};
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
95BD8F731F03EC410041E2B7 /* Build configuration list for PBXProject "Indicator" */ = {
isa = XCConfigurationList;
buildConfigurations = (
95BD8F8D1F03EC410041E2B7 /* Debug */,
95BD8F8E1F03EC410041E2B7 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
95BD8F8F1F03EC410041E2B7 /* Build configuration list for PBXNativeTarget "Indicator" */ = {
isa = XCConfigurationList;
buildConfigurations = (
95BD8F901F03EC410041E2B7 /* Debug */,
95BD8F911F03EC410041E2B7 /* Release */,
);
defaultConfigurationIsVisible = 0;
};
/* End XCConfigurationList section */
};
rootObject = 95BD8F701F03EC410041E2B7 /* Project object */;
}
<?xml version="1.0" encoding="utf-8"?>
<resources>
<item name="activity_modify_user" type="id" />
</resources>
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright 2015-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
* the License. A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
* and limitations under the License.
*/
package com.amazonaws.services.ec2.model.transform;
import javax.xml.stream.events.XMLEvent;
import javax.annotation.Generated;
import com.amazonaws.services.ec2.model.*;
import com.amazonaws.transform.Unmarshaller;
import com.amazonaws.transform.StaxUnmarshallerContext;
import com.amazonaws.transform.SimpleTypeStaxUnmarshallers.*;
/**
* AssociateTransitGatewayMulticastDomainResult StAX Unmarshaller
*/
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public class AssociateTransitGatewayMulticastDomainResultStaxUnmarshaller implements
Unmarshaller<AssociateTransitGatewayMulticastDomainResult, StaxUnmarshallerContext> {
public AssociateTransitGatewayMulticastDomainResult unmarshall(StaxUnmarshallerContext context) throws Exception {
AssociateTransitGatewayMulticastDomainResult associateTransitGatewayMulticastDomainResult = new AssociateTransitGatewayMulticastDomainResult();
int originalDepth = context.getCurrentDepth();
int targetDepth = originalDepth + 1;
if (context.isStartOfDocument())
targetDepth += 1;
while (true) {
XMLEvent xmlEvent = context.nextEvent();
if (xmlEvent.isEndDocument())
return associateTransitGatewayMulticastDomainResult;
if (xmlEvent.isAttribute() || xmlEvent.isStartElement()) {
if (context.testExpression("associations", targetDepth)) {
associateTransitGatewayMulticastDomainResult.setAssociations(TransitGatewayMulticastDomainAssociationsStaxUnmarshaller.getInstance()
.unmarshall(context));
continue;
}
} else if (xmlEvent.isEndElement()) {
if (context.getCurrentDepth() < originalDepth) {
return associateTransitGatewayMulticastDomainResult;
}
}
}
}
private static AssociateTransitGatewayMulticastDomainResultStaxUnmarshaller instance;
public static AssociateTransitGatewayMulticastDomainResultStaxUnmarshaller getInstance() {
if (instance == null)
instance = new AssociateTransitGatewayMulticastDomainResultStaxUnmarshaller();
return instance;
}
}
//
// ip/impl/address_v4.hpp
// ~~~~~~~~~~~~~~~~~~~~~~
//
// Copyright (c) 2003-2019 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#ifndef ASIO_IP_IMPL_ADDRESS_V4_HPP
#define ASIO_IP_IMPL_ADDRESS_V4_HPP
#if defined(_MSC_VER) && (_MSC_VER >= 1200)
# pragma once
#endif // defined(_MSC_VER) && (_MSC_VER >= 1200)
#if !defined(ASIO_NO_IOSTREAM)
#include "asio/detail/throw_error.hpp"
#include "asio/detail/push_options.hpp"
namespace asio {
namespace ip {
#if !defined(ASIO_NO_DEPRECATED)
inline address_v4 address_v4::from_string(const char* str)
{
return asio::ip::make_address_v4(str);
}
inline address_v4 address_v4::from_string(
const char* str, asio::error_code& ec)
{
return asio::ip::make_address_v4(str, ec);
}
inline address_v4 address_v4::from_string(const std::string& str)
{
return asio::ip::make_address_v4(str);
}
inline address_v4 address_v4::from_string(
const std::string& str, asio::error_code& ec)
{
return asio::ip::make_address_v4(str, ec);
}
#endif // !defined(ASIO_NO_DEPRECATED)
template <typename Elem, typename Traits>
std::basic_ostream<Elem, Traits>& operator<<(
std::basic_ostream<Elem, Traits>& os, const address_v4& addr)
{
return os << addr.to_string().c_str();
}
} // namespace ip
} // namespace asio
#include "asio/detail/pop_options.hpp"
#endif // !defined(ASIO_NO_IOSTREAM)
#endif // ASIO_IP_IMPL_ADDRESS_V4_HPP
## Side Nav
##### Components
* toggle
* close
* dropdown_toggle
##### Modifiers
* visible
### Quick Look
> You can populate the Side-Nav from an existing menu (see the module's [options](#options)) - by default the [Navigation](https://github.com/esr360/One-Nexus/tree/master/src/modules/objects/navigation) module is used to populate the Side-Nav
```html
<div class="sideNav">
<nav>
...
</nav>
</div>
```
### Options
For default values view the [`side-nav.json`](side-nav.json) file. Standard CSS properties for modules, components and modifiers are not documented below - [learn more](https://github.com/esr360/Synergy/wiki/Configuring-a-Module#pass-custom-css-to-modules).
<table class="table">
<thead>
<tr>
<th>Option</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>name</td>
<td>The name used when generating the CSS selector</td>
</tr>
<tr>
<td>collapsible.icon</td>
<td>The Font Awesome class name for the open/close icon</td>
</tr>
<tr>
<td>navigation</td>
<td>Synergy selector for existing navigation module to clone into the side-nav</td>
</tr>
<tr>
<td>overlay</td>
<td>Synergy selector for overlay module</td>
</tr>
</tbody>
</table>
Pass custom options to the `side-nav` object in your theme's config file (e.g. [themes/One-Nexus/config.json](../../../themes/One-Nexus/config.json)):
```json
{
"app": {
"side-nav": {
"parent": {
"hover": {
"background": "color('brand', 'brand-1')"
}
},
"collapsible": {
"open-by-default": false
}
}
}
}
```
### Sass
Load the side-nav styles in your theme's main `scss` file (e.g. [themes/One-Nexus/One-Nexus.scss](../../../themes/One-Nexus/One-Nexus.scss)) by including the `side-nav()` mixin:
```scss
@import '../../app';
@import './config.json';
@include side-nav();
```
### JavaScript
Call the `sideNav()` function in your theme's main `js` file (e.g. [themes/One-Nexus/One-Nexus.js](../../../themes/One-Nexus/One-Nexus.js)):
```js
import * as app from '../../app';
import config from './config.json';
app.theme = config.app;
app.sideNav();
```
#### API
##### Show
```js
app.sideNav().show();
```
##### Hide
```js
app.sideNav().hide();
```
##### Toggle
```js
app.sideNav().toggle();
```
### Examples
#### Custom Trigger Elements
> The `toggle` component is used to toggle the side-nav depending on its current state, whereas the `close` component will only attempt to close the Side-Nav
```html
<div class="sideNav_toggle">☰</div>
<div class="sideNav">
<div class="sideNav_close"></div>
<nav>
...
</nav>
</div>
```
_l = require 'lodash'
React = require 'react'
{CheckboxControl, propValueLinkTransformer} = require './editor/sidebar-controls'
{Model} = require './model'
{collisions, assert, capitalize_first_char} = require './util'
{ObjectPropControl} = require './props'
{isExternalComponent} = require './libraries'
exports.ComponentSpec = Model.register 'component-spec', class ComponentSpec extends Model
properties:
componentRef: String # the unique identifier used by instances to reference this component
propControl: ObjectPropControl
# This name is slightly wrong. Now we use this to mean "shouldSync" for the CLI
shouldCompile: Boolean
# Where this component's compiled code should be placed relative to the toplevel of the user's project
filePath: String
cssPath: String
# In case the user wants to add some code at the top of the file corresponding to this component
codePrefix: String
flexWidth: Boolean
flexHeight: Boolean
regenerateKey: ->
super()
@componentRef = String(Math.random()).slice(2)
constructor: (json) ->
super(json)
@propControl ?= new ObjectPropControl()
@shouldCompile ?= true
@codePrefix ?= ''
@filePath ?= ''
@cssPath ?= ''
@flexWidth ?= false
@flexHeight ?= false
# The way docs get componentRef usually is through model.coffee's regenerateKey()
# but some old docs never got a componentRef. To ensure consistency we add it here if it doesn't exist at this
# point (even though it should exist)
@componentRef ?= String(Math.random()).slice(2)
addSpec: (propSpec) -> @propControl.attrTypes.push(propSpec)
removeSpec: (propSpec) -> @propControl.attrTypes.splice(@propControl.attrTypes.indexOf(propSpec), 1)
without_invalid_identifier_chars = (str) -> str.replace(/[^\w-_]+/g, '_')
identifierify = (str) -> without_invalid_identifier_chars(str).toLowerCase()
defined_if_nonempty = (val) -> if _l.isEmpty(val) then undefined else val
# Fuck object oriented programming. These are out of ComponentSpec so we can have access to the component itself
exports.sidebarControlsOfComponent = sidebarControlsOfComponent = (component, specLinkAttr, onChange) ->
assert -> component.isComponent and component.componentSpec?
[
<hr />
CheckboxControl("instances have resizable width", specLinkAttr('flexWidth'))
CheckboxControl("instances have resizable height", specLinkAttr('flexHeight'))
]
exports.filePathOfComponent = filePathOfComponent = (component) ->
assert -> component.isComponent and component.componentSpec?
return component.componentSpec.importPath if isExternalComponent(component)
return component.componentSpec.filePath.replace(/^\//, '') if not _l.isEmpty(component.componentSpec.filePath)
# utils
componentNameAsFilePathSegment = identifierify(component.getLabel())
use_extension = (ext) -> "#{component.doc.filepath_prefix}/#{componentNameAsFilePathSegment}.#{ext}"
# depend on the language
return switch component.doc.export_lang
when 'JSX' then use_extension 'js'
when 'React' then use_extension 'js'
when 'CJSX' then use_extension 'cjsx'
when 'TSX' then use_extension 'tsx'
when 'html' then use_extension 'html'
when 'html-email' then use_extension 'html'
when 'Angular2' then "#{component.doc.filepath_prefix}/#{componentNameAsFilePathSegment}/#{componentNameAsFilePathSegment}.component.ts"
# unused
when 'debug' then use_extension 'debug'
when 'PHP' then use_extension 'php'
when 'ERB' then use_extension 'html.erb'
when 'Handlebars' then use_extension 'handlebars'
when 'Jade' then use_extension 'jade'
when 'Jinja2' then use_extension 'html'
# if we missed a case
else
assert -> false # Never get here
# If we do get here, try to do something reasonable
use_extension component.doc.export_lang.toLowerCase()
exports.cssPathOfComponent = cssPathOfComponent = (component) ->
assert -> component.isComponent and component.componentSpec?
assert -> not isExternalComponent(component) # not supported for now
return component.componentSpec.cssPath.replace(/^\//, '') if not _l.isEmpty(component.componentSpec.cssPath)
componentNameAsFilePathSegment = identifierify(component.getLabel())
return switch component.doc.export_lang
when 'Angular2' then "#{component.doc.filepath_prefix}/#{componentNameAsFilePathSegment}/#{componentNameAsFilePathSegment}.component.css"
else "#{component.doc.filepath_prefix}/#{componentNameAsFilePathSegment}.css"
# dash is allowed in filepaths but not allowed in JS symbols
without_invalid_symbol_chars = (str) -> str.replace(/[^\w_]+/g, '_')
symbol_identifierify = (str) -> without_invalid_symbol_chars(str).toLowerCase()
exports.reactJSNameForLibrary = reactJSNameForLibrary = (library) ->
# FIXME these should be globally unique, even if component.componentSymbol isn't
# FIXME this allows dashes in component names, even if it's in Javascript
_l.capitalize(defined_if_nonempty(symbol_identifierify(library.library_name ? "")) ? "pd#{library.uniqueKey}")
exports.reactJSNameForComponent = reactJSNameForComponent = (component, doc) ->
assert -> component.isComponent and component.componentSpec?
reactSymbolForComponent = (component) ->
# FIXME these should be globally unique, even if component.componentSymbol isn't
# FIXME this allows dashes in component names, even if it's in Javascript
_l.capitalize(defined_if_nonempty(symbol_identifierify(component.componentSymbol ? "")) ? "pd#{component.uniqueKey}")
# NOTE this is here for old ExternalComponents (code wrappers)
return component.importSymbol if component.importSymbol?
if isExternalComponent(component)
library = _l.find(doc.libraries, (l) -> _l.find(l.getCachedExternalCodeSpecs(), {ref: component.componentSpec.ref})?)
throw new Error("External Component w/ ref #{component.componentSpec.ref} without a library") if not library?
return "#{reactJSNameForLibrary(library)}.#{component.componentSpec.name}"
else
return reactSymbolForComponent(component)
# only used for Angular
exports.templatePathOfComponent = templatePathOfComponent = (component) ->
assert -> component.isComponent and component.componentSpec?
assert -> not isExternalComponent(component) # not supported for now
# HACK we don't let users override this, so let's go next to the .ts file
ts_path = filePathOfComponent(component)
strip_extension = (path) -> path.replace(/\.[^//]*$/, '')
strip_extension(ts_path) + ".component.html"
# only used for Angular
exports.angularTagNameForComponent = angularTagNameForComponent = (component) ->
assert -> component.isComponent and component.componentSpec?
assert -> not isExternalComponent(component)
without_invalid_identifier_chars = (str) -> str.replace(/[^\w-_]+/g, '_')
identifierify = (str) -> without_invalid_identifier_chars(str).toLowerCase()
defined_if_nonempty = (val) -> if _l.isEmpty(val) then undefined else val
# FIXME these should be globally unique, even if component.componentSymbol isn't
# FIXME this allows dashes in component names, even if it's in Javascript
symbol = defined_if_nonempty(identifierify(component.componentSymbol ? "")) ? "pd#{component.uniqueKey}"
return symbol.replace("_", "-").toLowerCase()
exports.angularJsNameForComponent = angularJsNameForComponent = (component) ->
assert -> component.isComponent and component.componentSpec?
assert -> not isExternalComponent(component)
without_invalid_identifier_chars = (str) -> str.replace(/[^\w-_]+/g, '_')
identifierify = (str) -> without_invalid_identifier_chars(str).toLowerCase()
defined_if_nonempty = (val) -> if _l.isEmpty(val) then undefined else val
# FIXME these should be globally unique, even if component.componentSymbol isn't
# FIXME this allows dashes in component names, even if it's in Javascript
symbol = defined_if_nonempty(identifierify(component.componentSymbol ? "")) ? "pd#{component.uniqueKey}"
return symbol.split("_").map(capitalize_first_char).join('')
exports.errorsOfComponent = (component) ->
MultistateBlock = require './blocks/multistate-block'
ArtboardBlock = require './blocks/artboard-block'
ScreenSizeBlock = require './blocks/screen-size-block'
assert -> component.isComponent and component.componentSpec?
assert -> not isExternalComponent(component)
blocks = component.andChildren()
hasEmptyOverrideCode = blocks.some (block) -> block.hasCustomCode and _l.isEmpty(block.customCode)
hasEmptyEventHandler = blocks.some (block) -> block.eventHandlers.some ({code}) -> _l.isEmpty(code)
hasEmptyPropName = not component.componentSpec.propControl.attrTypes?.every (el) => el.name
nameCollisions = _l.uniq _l.compact collisions(component.componentSpec.propControl.attrTypes, ((attr) -> attr.name))
containsScreenSizeBlock = component.doc.getChildren(component).some (block) -> block.getSourceComponent?() instanceof ScreenSizeBlock
isMultistate = component instanceof MultistateBlock
stateNameCollisions = (blockTree) ->
childrenCollisions = _l.flatten(blockTree.children.filter(({block}) -> block instanceof MultistateBlock)
.map(stateNameCollisions))
return childrenCollisions.concat _l.uniq _l.compact collisions(blockTree.children.filter(({block}) ->
block instanceof ArtboardBlock or block instanceof MultistateBlock
), ({block}) -> block.name)
return _l.compact [
(_l.flatten(_l.map blocks, (block) -> block.getDynamicsForUI()).filter ([_0, _1, dynamicable]) ->
dynamicable.isDynamic and dynamicable.code == ''
).map(([uniqueKey, label, dynamicable]) -> {errorCode: 'EMPTY_DYNAMICABLE', message: "Empty data binding for #{label}"})...
{errorCode: 'EMPTY_COMPONENT_NAME', message: 'Empty component name'} if component.componentSpec.name == '' # currently not possible to leave empty
{errorCode: 'EMPTY_OVERRIDE_CODE', message: 'Empty override code'} if hasEmptyOverrideCode
{errorCode: 'EMPTY_EVENT_HANDLER', message: 'Empty event handler'} if hasEmptyEventHandler
{errorCode: 'EMPTY_PROP_NAME', message: 'Empty component argument name'} if hasEmptyPropName
{errorCode: 'SCREEN_SIZE_BLOCK_NOT_TOPLEVEL', message: 'Screen Size Group instance inside another component'} if containsScreenSizeBlock
(nameCollisions.map (name) -> {errorCode: 'PROP_NAME_COLLISION', message: "Found multiple component arguments with name: #{name}"})...
(if isMultistate then stateNameCollisions(component.blockTree).map (name) ->
{errorCode: 'MULTISTATE_NAME_COLLISION', message: "Found name collision in multistate group: #{name}"}
else [])...
# TODO: warn on nested artboards
]
/* PatternSyntaxException - Indicates illegal pattern for regular expression.
Copyright (C) 2002 Free Software Foundation, Inc.
This file is part of GNU Classpath.
GNU Classpath is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU Classpath is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with GNU Classpath; see the file COPYING. If not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301 USA.
Linking this library statically or dynamically with other modules is
making a combined work based on this library. Thus, the terms and
conditions of the GNU General Public License cover the whole
combination.
As a special exception, the copyright holders of this library give you
permission to link this library with independent modules to produce an
executable, regardless of the license terms of these independent
modules, and to copy and distribute the resulting executable under
terms of your choice, provided that you also meet, for each linked
independent module, the terms and conditions of the license of that
module. An independent module is a module which is not derived from
or based on this library. If you modify this library, you may extend
this exception to your version of the library, but you are not
obligated to do so. If you do not wish to do so, delete this
exception statement from your version. */
package java.util.regex;
import gnu.java.lang.CPStringBuilder;
/**
* Indicates illegal pattern for regular expression.
* Includes state to inspect the pattern and what and where the expression
* was not valid regular expression.
* @since 1.4
*/
public class PatternSyntaxException extends IllegalArgumentException
{
private static final long serialVersionUID = -3864639126226059218L;
/**
 * Human-readable description of the syntax error.
*/
private final String desc;
/**
* The original pattern that contained the syntax error.
*/
private final String pattern;
/**
* Index of the first character in the String that was probably invalid,
* or -1 when unknown.
*/
private final int index;
/**
* Creates a new PatternSyntaxException.
*
 * @param description Human-readable description of the syntax error.
* @param pattern The original pattern that contained the syntax error.
* @param index Index of the first character in the String that was
* probably invalid, or -1 when unknown.
*/
public PatternSyntaxException(String description,
String pattern,
int index)
{
super(description);
this.desc = description;
this.pattern = pattern;
this.index = index;
}
/**
 * Returns a human-readable description of the syntax error.
*/
public String getDescription()
{
return desc;
}
/**
* Returns the original pattern that contained the syntax error.
*/
public String getPattern()
{
return pattern;
}
/**
* Returns the index of the first character in the String that was probably
* invalid, or -1 when unknown.
*/
public int getIndex()
{
return index;
}
/**
* Returns a string containing a line with the description, a line with
* the original pattern and a line indicating with a ^ which character is
* probably the first invalid character in the pattern if the index is not
* negative.
*/
public String getMessage()
{
String lineSep = System.getProperty("line.separator");
CPStringBuilder sb = new CPStringBuilder(desc);
sb.append(lineSep);
sb.append('\t');
sb.append(pattern);
if (index != -1)
{
sb.append(lineSep);
sb.append('\t');
for (int i=0; i<index; i++)
sb.append(' ');
sb.append('^');
}
return sb.toString();
}
}
fileFormatVersion: 2
guid: be0903cd8e1546f498710afdc59db5eb
AssemblyDefinitionImporter:
externalObjects: {}
userData:
assetBundleName:
assetBundleVariant:
---
title: Changing the vCenter access policy
slug: changer-la-politique-d-acces-au-vcenter
excerpt: Find out how to change the access policy for vCenter
legacy_guide_number: '1442246'
space_key: VS
space_name: vSphere as a Service
section: OVH features
---
**Last updated 07/07/2020**
## Objective
To improve the security of your Hosted Private Cloud infrastructure, you can restrict and manage access to vCenter.
**Find out how to change the vCenter access policy in the OVHcloud Control Panel.**
## Requirements
- A [Hosted Private Cloud](https://www.ovhcloud.com/fr-ca/enterprise/products/hosted-private-cloud/){.external} solution.
- Access to the [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager).
## Instructions
Log in to your [OVHcloud Control Panel](https://ca.ovh.com/auth/?action=gotomanager), go to the `Server`{.action} section, then select your service under `Private Cloud`{.action} in the left-hand navigation bar.
From the service's main page, click the `Security`{.action} tab, then click `Change vCenter access policy`{.action}.
In the window that appears, choose "Open" or "Restricted" from the drop-down menu and click `Confirm`{.action} to apply your selection.
> [!primary]
>
> If you have set the access policy to "Restricted", see the guide ["Allowing IPs to connect to vCenter"](../autoriser-des-ip-a-se-connecter-au-vcenter/).
>
## Go further
[Allowing IPs to connect to vCenter](../autoriser-des-ip-a-se-connecter-au-vcenter/)
Join our community of users on [https://community.ovh.com/](https://community.ovh.com/){.external}.
---
layout: default
title: User guide
---
<div>
<style scoped>
.cli {
background-color: black;
border-radius: 3px;
color: white;
margin: 0.3cm;
padding: 4px;
max-width: 65em;
font-family: monospace;
white-space: pre-wrap;
line-height: normal;
}
</style>
<h3>User Guide (CLI)</h3>
<p>This documentation is for the command-line interface to Pond. You can start the CLI by passing the <tt>--cli</tt> option to Pond or, if your Pond binary doesn't have the GUI compiled in, it'll always start in CLI mode.</p>
<p>Before running Pond, you need to have <a href="https://torproject.org">Tor</a> running. Pond makes all connections over Tor. Simply having the <a href="https://www.torproject.org/projects/torbrowser.html.en">browser bundle</a> running should be sufficient. There's no danger to having Pond running without Tor, it'll simply not work.</p>
<p>When first starting Pond, you'll be prompted to set a passphrase for Pond's <i>state file</i>, which should look like this:</p>
<div class="cli"><span style="color: #af00ff">></span><span style="color: #af5fff">></span><span style="color: #af87ff">></span> Pond...
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Pond keeps private keys, messages etc on disk for a limited amount of time and that information can be encrypted with a passphrase. If you are comfortable with the security of your home directory, this passphrase can be empty and you won't be prompted for it again. If you set a passphrase and forget it, it cannot be recovered. You will have to start afresh.
passphrase></div>
<p>The state file contains all of Pond's persistent state and may be encrypted with a passphrase if you wish. If you set a passphrase and then forget it, there is no recovery mechanism. If you believe that the security of your home directory is sufficiently good then you may omit the passphrase completely.</p>
<p><b>The state file should not be copied.</b> Pond depends on the ability to delete past information and making copies of the state file may allow information that should have been deleted, to be recovered. Additionally, Pond is not designed to operate concurrently on multiple computers.</p>
<p>After setting the passphrase (or not), you may be prompted to setup TPM storage if your computer has a TPM chip. Pond depends on being able to erase old information but it is not clear how well modern computers, using SSDs or log-structured filesystems, are able to erase anything. Without some form of special storage, such as a TPM chip, it may be possible to recover “deleted” messages given the passphrase.</p>
<div class="cli"><span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> It's very difficult to erase information on modern computers so Pond tries to use the TPM chip if possible.
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Your computer appears to have a TPM chip. You'll need tcsd (the TPM daemon) running in order to use it.
Try to configure TPM (y/n)></div>
<p> </p>
<p>Once the passphrase and TPM have been configured, you'll be prompted to create an account on a Pond server.</p>
<div class="cli"><span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> In order to use Pond you have to have an account on a server. Servers may set their own account policies, but the default server allows anyone to create an account. Just hit enter to use the default server [pondserver://ICYUHSAYGIXTKYKXSAHIBWEAQCTEF26WUWEPOVC764WYELCJMUPA@jb644zapje5dvgk3.onion]
server></div>
<p>A Pond server accepts and stores messages for you since most computers are not constantly connected to the Internet. Anyone can run a Pond server, and you can even run your own if you wish, but there is a default Pond server that is already filled out should you wish to use that one.</p>
<p>Note that one doesn't provide a name, email or any other identifying information when creating a Pond account. A Pond server knows almost nothing about you. For details, see the threat model document.</p>
<p>In order to prevent abuse, the Pond server may ask your computer to perform a certain amount of work before allowing an account to be created. Please be patient, especially on slower computers.</p>
<p>Creating the account will be the first time that Pond tries to connect through Tor and so any errors at this point are likely caused by a problem with network. Ensure that Tor is running and listening for SOCKS5 connections on port 9050 or 9150.</p>
<p> </p>
<p>Once your account has been created, and every time you subsequently start Pond, you'll see a summary of Pond's state. At the moment, there are no messages nor contacts so the summary will be very brief:</p>
<div class="cli"><span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> There are no messages waiting to be transmitted.</div>
<p>At any point you can run the <tt>help</tt> command to list the other commands that are currently available to you:</p>
<div class="cli"><span style="color: #af00ff">></span> help
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> clear Clear terminal
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> contacts Show all known contacts
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> drafts Show drafts
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> help [--all] List known commands
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> identity Show identity
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> inbox Show the Inbox
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> log Show recent log entries
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> new-contact <name> Start a key exchange with a new contact
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> outbox Show the Outbox
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> queue Show the queue
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> quit Exit Pond
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> status Show overall Pond status
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> transact-now Perform a network transaction now</div>
<p><b>Unlike regular email, Pond is a closed system.</b> This means that you cannot send messages to another Pond user without establishing a relationship with them first. (This also means that nobody unwelcome can send messages to you - i.e. there is no spam.)</p>
<p>So the first order of business is to add a contact by running the <tt>new-contact</tt> command. (If you don't know anyone else using Pond then I'm afraid that you're rather stuck - such is the nature of network effects.)</p>
<p>Feel free to choose any name for the contact. The name must be locally unique, but is only used by you to refer to the contact; it will not be shared with anyone else.</p>
<p>There are two methods of establishing a new contact - manual keying and shared secrets. Manual keying is suitable if you already have an existing secure channel to the contact, e.g. you have each other's PGP keys or have OTR set up. However, most people should use shared secrets and that's the default in the CLI.</p>
<p>Shared secret keying allows one to bootstrap secure communication from some shared secret, which may come from a random string generated by one of the parties, a physical meeting, or perhaps a secure, but low-capacity, channel.</p>
<p>Shared secret keying involves contacting a central server (using Tor) and performing a key exchange based on the shared secret. This means that, so long as a MITM attack isn't performed against the shared secret in real time, it's secure for the future. Once the key exchange is complete, the shared secret doesn't need to be strongly protected: possession of it might disclose that a key exchange was performed, but it doesn't allow decryption, impersonation, etc.</p>
<p>In the interests of practicality, it's pretty secure to exchange a shared secret over IM or email. Pond will suggest a randomly generated secret if you don't already have one:</p>
<div class="cli"><span style="color: #af00ff">></span> new-contact Alice
Enter shared secret with contact, or hit enter to generate, print and use a random one
secret:
<span style="color: #af00ff">></span><span style="color: #af5fff">></span><span style="color: #af87ff">></span> Shared secret: ff947f1bad0945d763247aafa6255dcc
<span style="color: #af00ff">></span><span style="color: #af5fff">></span><span style="color: #af87ff">></span> Key exchange running in background.</div>
<p>Now that the key exchange is pending, the contact exists. The current state of Pond is shown whenever an empty command is entered. So just hit enter and you'll see something like:</p>
<div class="cli"><span style="color: #af00ff">></span>
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Contacts
Alice | pending (<span style="color: #00d7ff">qu6</span>)
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> There are no messages waiting to be transmitted</div>
<p>The Pond CLI has the concept of a ‘current’ object and whenever you see three letters written like <tt>qu6</tt> in the example above, that's an object that can be selected by entering those three letters. It's called a tag. Go ahead and enter the tag for your pending contact now - it'll be different from the example.</p>
<p>The prompt will change to indicate the current object and the type of the object, which is <tt>contact</tt> in this case. Some commands apply to the current object, for example <tt>show</tt>:</p>
<div class="cli"><span style="color: #af00ff">></span> qu6
<span style="color: #bcbcbc">contact</span>/<span style="color: #00d7ff">qu6</span><span style="color: #af00ff">></span> show
<span style="color: #ff0000">></span><span style="color: #ff5f00">></span><span style="color: #ff8700">></span> This contact is pending
<span style="color: #0087ff">-</span> Name | Alice
<span style="color: #0087ff">-</span> Server
<span style="color: #0087ff">-</span> Generation | 0
<span style="color: #0087ff">-</span> Public key | 0000000000000000000000000000000000000000000000000000000000000000
<span style="color: #0087ff">-</span> Identity key | 0000000000000000000000000000000000000000000000000000000000000000
<span style="color: #0087ff">-</span> Client version | 0</div>
<p>Once the key exchange has been completed, messages can be exchanged. <b>Messages are ephemeral.</b> Pond is only software and cannot force the recipient of a message not to retain it, but <b>it is the social norm, and the default in the software, that messages are permanently erased a week from receipt</b>.</p>
<p>In order to send a message, select a contact as the current object and run the <tt>compose</tt> command. Vim will start so that you can enter the message. When you save and quit Vim (by pressing escape and then <tt>ZZ</tt>), the prompt will change because the draft message is now the current object.</p>
<p>Since the type of the current object has changed, so will the set of applicable commands. Running help will show the commands that can now be used.</p>
<div class="cli"><span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Created new draft: <span style="color: #00d7ff">u3u</span>
<span style="color: #af00ff">></span><span style="color: #af5fff">></span><span style="color: #af87ff">></span> Message using 67 of 15758 bytes
<span style="color: #bcbcbc">draft</span>/<span style="color: #00d7ff">u3u</span><span style="color: #af00ff">></span> help
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> These commands operate on the current object:
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> attach <filename> Attach a file to the current draft
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> close Close currently opened object
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> delete Delete a message or contact
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> edit Edit the draft message
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> remove <number> Remove an attachment or detachment from a draft message
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> send Send the current draft
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> show Show the current object
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> upload <filename> Upload a file to home server and include key in current draft</div>
<p>When typing and attaching files, you'll see that the size counter changes. Pond messages are always a fixed size in order to make them indistinguishable. But while a short message can always be padded out to the correct size, a long message cannot always be compressed down to fit. While it's unlikely that you'll hit the limit while typing, attachments can quickly balloon the size of a message.</p>
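<p>The fixed size can be seen in the byte counter earlier ("67 of 15758 bytes"). A minimal sketch of the padding idea, using that figure from the transcript (it is not a constant taken from Pond's source):</p>

```go
package main

import (
	"errors"
	"fmt"
)

// maxBody mirrors the limit shown by the CLI byte counter.
const maxBody = 15758

// pad returns msg zero-padded to exactly maxBody bytes, so every
// transmitted message is the same size and indistinguishable by length.
func pad(msg []byte) ([]byte, error) {
	if len(msg) > maxBody {
		return nil, errors.New("message too large to pad")
	}
	out := make([]byte, maxBody)
	copy(out, msg)
	return out, nil
}

func main() {
	p, err := pad([]byte("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(p)) // prints 15758
}
```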
<p>In order to attach a file to a message, use the <tt>attach</tt> command. If the file is small enough then it'll be included in the message directly. Otherwise you'll be prompted either to save an encrypted version of the file, or to upload the file. These are both methods where encryption is used to separate the bulk transfer of the file from the security of that transfer. By encrypting the file, the problem of securing the transfer is reduced to the problem of securing the encryption key, and the encryption key is small enough to fit in the Pond message. The bulk transfer is called a “detachment”.</p>
<p>By opting to save an encrypted version of the file, you are taking on the job of getting the encrypted file to the recipient yourself. For huge files, this may be the only method. Perhaps you'll put the file on a USB stick and physically hand it to them.</p>
<p>For modest-sized detachments (up to a few megabytes - i.e. something that you might attach to an email), you can upload them to your Pond server. The advantage of this is that it's convenient and the upload occurs over Tor. However, your Pond server will have a limit on the amount that it'll store for you, and be aware that the upload is visible to anyone watching your network connection. The contents of the upload are hidden, of course, but the rough size of the file can be observed during the transfer. Likewise, the rough size of the file can be observed when the recipient downloads it.</p>
<p>When you send a message, it'll appear in your outbox with a red dot. Hit enter to see Pond's state, including the outbox:</p>
<div class="cli"><span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Created new outbox entry <span style="color: #00d7ff">r7p</span>
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> There is one message waiting to be transmitted
<span style="color: #bcbcbc">outbox</span>/<span style="color: #00d7ff">r7p</span><span style="color: #af00ff">></span>
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Outbox
<span style="color: red">*</span> Alice | Dec 26 17:18 (<span style="color: #00d7ff">r7p</span>)
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> Contacts
Alice (<span style="color: #00d7ff">qu6</span>)
<span style="color: #0000ff">></span><span style="color: #005fff">></span><span style="color: #0087ff">></span> There is one message waiting to be transmitted</div>
<p>The red dot means that the message hasn't been transmitted yet. Pond doesn't transmit messages as needed because that would disclose when messages were being sent. Instead it transmits messages at random, whether there's anything to be sent or not. When there's a real message pending, it has to wait until the next randomly timed slot, which could be many minutes.</p>
<p>Once the message has been transmitted, the dot will turn yellow. The dot will turn green when the message has been “acknowledged”. An acknowledgment occurs either when a reply to the message is received, or a special acknowledgment message is received (which is actually just an empty reply). Acknowledgments exist because it's often difficult to know whether a message has been read and replying to every one can be awkward. An acknowledgment is never sent automatically, but you should expect to acknowledge (or reply to) every message that you receive.</p>
</div>
/*
MIT License
Copyright 2016 Comcast Cable Communications Management, LLC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
package scte35
import (
"github.com/Comcast/gots"
"strings"
)
const receivedRingLen = 10
type receivedElem struct {
pts gots.PTS
descs []SegmentationDescriptor
}
type state struct {
open []SegmentationDescriptor
received []*receivedElem
receivedHead int
blackoutIdx int
inBlackout bool
}
// NewState returns an initialized state object
func NewState() State {
return &state{received: make([]*receivedElem, receivedRingLen)}
}
func (s *state) Open() []SegmentationDescriptor {
open := make([]SegmentationDescriptor, len(s.open))
copy(open, s.open)
if s.inBlackout {
return append(open[0:s.blackoutIdx], open[s.blackoutIdx+1:]...)
} else {
return open
}
}
func (s *state) ProcessDescriptor(desc SegmentationDescriptor) ([]SegmentationDescriptor, error) {
var err error
var closed []SegmentationDescriptor
// check if desc has a pts because we can't handle if it doesn't
if !desc.SCTE35().HasPTS() {
return nil, gots.ErrSCTE35UnsupportedSpliceCommand
}
// check if this is a duplicate - if not, add it to the received list and
// drop the old received if we're over the length limit
descAdded := false
pts := desc.SCTE35().PTS()
for _, e := range s.received {
if e != nil {
for _, d := range e.descs {
if e.pts == pts {
if desc.Equal(d) {
// Duplicate desc found
return nil, gots.ErrSCTE35DuplicateDescriptor
}
e.descs = append(e.descs, desc)
descAdded = true
}
// check if we have seen a VSS signal with the same signalId and
// same eventId before.
if desc.EventID() == d.EventID() &&
d.TypeID() == SegDescUnscheduledEventStart && desc.TypeID() == SegDescUnscheduledEventStart {
descStreamSwitchSignalId, err := desc.StreamSwitchSignalId()
if err != nil {
return nil, err
}
dStreamSwitchSignalId, err := d.StreamSwitchSignalId()
if err != nil {
return nil, err
}
if strings.Compare(descStreamSwitchSignalId, dStreamSwitchSignalId) == 0 &&
(d.EventID() == desc.EventID()) {
// desc and d contain same signalId and same eventID
// we should not be processing this desc.
return nil, gots.ErrSCTE35DuplicateDescriptor
}
descAdded = true
}
}
}
}
if !descAdded {
s.received[s.receivedHead] = &receivedElem{pts: pts, descs: []SegmentationDescriptor{desc}}
s.receivedHead = (s.receivedHead + 1) % receivedRingLen
}
// first close signals until one returns false, then handle the breakaway
for i := len(s.open) - 1; i >= 0; i-- {
d := s.open[i]
if desc.CanClose(d) {
closed = append(closed, d)
} else {
break
}
}
// remove all closed descriptors
s.open = s.open[0 : len(s.open)-len(closed)]
// validation logic
switch desc.TypeID() {
// breakaway handling
case SegDescProgramBreakaway:
s.inBlackout = true
s.blackoutIdx = len(s.open)
// append breakaway to match against resumption even though it's an in
s.open = append(s.open, desc)
case SegDescProgramResumption:
if s.inBlackout {
s.inBlackout = false
s.open = s.open[0:s.blackoutIdx]
// TODO: verify that there is a program start that has a matching event id
} else {
// ProgramResumption can only come after a breakaway
err = gots.ErrSCTE35InvalidDescriptor
}
fallthrough
// out signals
case SegDescProgramStart,
SegDescChapterStart,
SegDescProviderAdvertisementStart,
SegDescDistributorAdvertisementStart,
SegDescProviderPOStart,
SegDescDistributorPOStart,
SegDescUnscheduledEventStart,
SegDescNetworkStart,
SegDescProgramOverlapStart,
SegDescProgramStartInProgress:
s.open = append(s.open, desc)
// in signals
// SegDescProgramEnd treated individually since it is expected to normally
// close program resumption AND program start
case SegDescProgramEnd:
if len(closed) == 0 {
err = gots.ErrSCTE35MissingOut
break
}
for _, d := range closed {
if d.TypeID() != SegDescProgramStart &&
d.TypeID() != SegDescProgramResumption {
err = gots.ErrSCTE35MissingOut
break
}
}
case SegDescChapterEnd,
SegDescProviderAdvertisementEnd,
SegDescProviderPOEnd,
SegDescDistributorAdvertisementEnd,
SegDescDistributorPOEnd,
SegDescUnscheduledEventEnd,
SegDescNetworkEnd:
var openDesc SegmentationDescriptor
// We already closed a descriptor
// and have no other open descriptors
// so break and return closed descriptors
if len(closed) != 0 && len(s.open) == 0 {
break
}
// descriptor matches out, but doesn't close it. Check event id against open
if len(closed) == 0 || closed[len(closed)-1].TypeID() != desc.TypeID()-1 {
if len(s.open) == 0 {
err = gots.ErrSCTE35MissingOut
break
} else {
openDesc = s.open[len(s.open)-1]
}
} else {
openDesc = closed[len(closed)-1]
}
if openDesc.EventID() != desc.EventID() {
err = gots.ErrSCTE35MissingOut
}
default:
// no validating
}
return closed, err
}
func (s *state) Close(desc SegmentationDescriptor) ([]SegmentationDescriptor, error) {
// back off list until we reach the descriptor we are closing. If we don't
// find it, return error
var closed []SegmentationDescriptor
for i := len(s.open) - 1; i >= 0; i-- {
d := s.open[i]
if desc.Equal(d) {
// found our descriptor at index i, remove it
// Shift s.open left by one index.
copy(s.open[i:], s.open[i+1:])
// Delete last element
s.open[len(s.open)-1] = nil
// Truncate slice
s.open = s.open[:len(s.open)-1]
closed = append(closed, d)
return closed, nil
}
}
return nil, gots.ErrSCTE35DescriptorNotFound
}
.. code-block:: bash

   $ mkdir pmm-data-backup; cd pmm-data-backup
/**
* This file is part of veraPDF Library core, a module of the veraPDF project.
* Copyright (c) 2015, veraPDF Consortium <info@verapdf.org>
* All rights reserved.
*
* veraPDF Library core is free software: you can redistribute it and/or modify
* it under the terms of either:
*
* The GNU General public license GPLv3+.
* You should have received a copy of the GNU General Public License
* along with veraPDF Library core as the LICENSE.GPL file in the root of the source
* tree. If not, see http://www.gnu.org/licenses/ or
* https://www.gnu.org/licenses/gpl-3.0.en.html.
*
* The Mozilla Public License MPLv2+.
* You should have received a copy of the Mozilla Public License along with
* veraPDF Library core as the LICENSE.MPL file in the root of the source tree.
* If a copy of the MPL was not distributed with this file, you can obtain one at
* http://mozilla.org/MPL/2.0/.
*/
package org.verapdf.pdfa.validation.profiles;
import java.util.Set;
import org.verapdf.pdfa.flavours.PDFAFlavour;
/**
* A ProfileDirectory provides access to a set of {@link ValidationProfile}s
* that can be retrieved by String id or {@link PDFAFlavour}.
* <p>
* This interface provides a simple directory of {@link ValidationProfile}s that is intentionally restricted by the enum type {@link PDFAFlavour}.
* </p>
*
* @author <a href="mailto:carl@openpreservation.org">Carl Wilson</a>
*/
public interface ProfileDirectory {
/**
* @return the Set of ValidationProfile String identifiers for the profiles
* held in the directory.
*/
public Set<String> getValidationProfileIds();
/**
* @return the Set of {@link PDFAFlavour} enum instances that identify the
* profiles held in the directory.
*/
public Set<PDFAFlavour> getPDFAFlavours();
/**
* @param profileID
* a two character String that uniquely identifies a particular
* {@link PDFAFlavour}, e.g. 1a, 1b, 2a, etc.
* @return the {@link ValidationProfile} associated with the profileId
	 * @throws NoSuchElementException
	 *             when there is no profile associated with the profileID string
* @throws IllegalArgumentException
* if profileID is null
*/
public ValidationProfile getValidationProfileById(String profileID);
/**
* @param flavour
* a {@link PDFAFlavour} instance that identifies a
* {@link ValidationProfile}
* @return the {@link ValidationProfile} associated with the flavour
	 * @throws NoSuchElementException
	 *             when there is no profile associated with the flavour
* @throws IllegalArgumentException
* if flavour is null
*/
public ValidationProfile getValidationProfileByFlavour(PDFAFlavour flavour);
/**
* @return the full set of {@link ValidationProfile}s held in the directory.
*/
public Set<ValidationProfile> getValidationProfiles();
}
<?xml version="1.0" ?>
<annotation>
<folder>widerface</folder>
<filename>6--Funeral_6_Funeral_Funeral_6_77.jpg</filename>
<source>
<database>wider face Database</database>
<annotation>PASCAL VOC2007</annotation>
<image>flickr</image>
<flickrid>-1</flickrid>
</source>
<owner>
<flickrid>yanyu</flickrid>
<name>yanyu</name>
</owner>
<size>
<width>1024</width>
<height>683</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>856</xmin>
<ymin>362</ymin>
<xmax>866</xmax>
<ymax>378</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>798</xmin>
<ymin>377</ymin>
<xmax>809</xmax>
<ymax>390</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>765</xmin>
<ymin>357</ymin>
<xmax>777</xmax>
<ymax>370</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>926</xmin>
<ymin>499</ymin>
<xmax>940</xmax>
<ymax>522</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>902</xmin>
<ymin>516</ymin>
<xmax>923</xmax>
<ymax>530</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>925</xmin>
<ymin>537</ymin>
<xmax>938</xmax>
<ymax>557</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>944</xmin>
<ymin>506</ymin>
<xmax>959</xmax>
<ymax>536</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>973</xmin>
<ymin>547</ymin>
<xmax>989</xmax>
<ymax>566</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>1005</xmin>
<ymin>568</ymin>
<xmax>1022</xmax>
<ymax>599</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>993</xmin>
<ymin>580</ymin>
<xmax>1014</xmax>
<ymax>602</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>740</xmin>
<ymin>380</ymin>
<xmax>750</xmax>
<ymax>390</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>669</xmin>
<ymin>366</ymin>
<xmax>679</xmax>
<ymax>379</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>604</xmin>
<ymin>354</ymin>
<xmax>624</xmax>
<ymax>376</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>584</xmin>
<ymin>363</ymin>
<xmax>605</xmax>
<ymax>386</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>550</xmin>
<ymin>361</ymin>
<xmax>567</xmax>
<ymax>381</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>444</xmin>
<ymin>355</ymin>
<xmax>458</xmax>
<ymax>374</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>427</xmin>
<ymin>365</ymin>
<xmax>442</xmax>
<ymax>387</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>390</xmin>
<ymin>364</ymin>
<xmax>404</xmax>
<ymax>378</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>404</xmin>
<ymin>354</ymin>
<xmax>416</xmax>
<ymax>366</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>362</xmin>
<ymin>351</ymin>
<xmax>379</xmax>
<ymax>373</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>334</xmin>
<ymin>342</ymin>
<xmax>357</xmax>
<ymax>363</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>493</xmin>
<ymin>355</ymin>
<xmax>510</xmax>
<ymax>378</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>299</xmin>
<ymin>373</ymin>
<xmax>316</xmax>
<ymax>394</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>280</xmin>
<ymin>376</ymin>
<xmax>298</xmax>
<ymax>397</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>219</xmin>
<ymin>368</ymin>
<xmax>234</xmax>
<ymax>387</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>248</xmin>
<ymin>379</ymin>
<xmax>269</xmax>
<ymax>405</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>198</xmin>
<ymin>370</ymin>
<xmax>214</xmax>
<ymax>390</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>179</xmin>
<ymin>369</ymin>
<xmax>197</xmax>
<ymax>390</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>111</xmin>
<ymin>369</ymin>
<xmax>129</xmax>
<ymax>391</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>85</xmin>
<ymin>355</ymin>
<xmax>103</xmax>
<ymax>377</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>34</xmin>
<ymin>375</ymin>
<xmax>54</xmax>
<ymax>403</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>9</xmin>
<ymin>375</ymin>
<xmax>27</xmax>
<ymax>394</ymax>
</bndbox>
</object>
<object>
<name>face</name>
<pose>Unspecified</pose>
<truncated>1</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>976</xmin>
<ymin>520</ymin>
<xmax>990</xmax>
<ymax>551</ymax>
</bndbox>
</object>
</annotation>
int plus(int a, int b);
int minus(int a, int b);
// Code generated by cmd/cgo -godefs; DO NOT EDIT.
// cgo -godefs defs_openbsd.go
package socket
type iovec struct {
Base *byte
Len uint32
}
type msghdr struct {
Name *byte
Namelen uint32
Iov *iovec
Iovlen uint32
Control *byte
Controllen uint32
Flags int32
}
type cmsghdr struct {
Len uint32
Level int32
Type int32
}
type sockaddrInet struct {
Len uint8
Family uint8
Port uint16
Addr [4]byte /* in_addr */
Zero [8]int8
}
type sockaddrInet6 struct {
Len uint8
Family uint8
Port uint16
Flowinfo uint32
Addr [16]byte /* in6_addr */
Scope_id uint32
}
const (
sizeofIovec = 0x8
sizeofMsghdr = 0x1c
sizeofCmsghdr = 0xc
sizeofSockaddrInet = 0x10
sizeofSockaddrInet6 = 0x1c
)
'use strict';
/*global it */
var assert = require('assert');
require('../../lib/js-yaml');
it('Timestamps are incorrectly parsed in local time', function () {
var data = require('./data/issue-46.yml'), date1, date2;
date1 = data.date1; // date1: 2010-10-20T20:45:00Z
assert.equal(date1.getUTCFullYear(), 2010, 'year');
assert.equal(date1.getUTCMonth(), 9, 'month');
assert.equal(date1.getUTCDate(), 20, 'date');
assert.equal(date1.getUTCHours(), 20);
assert.equal(date1.getUTCMinutes(), 45);
assert.equal(date1.getUTCSeconds(), 0);
date2 = data.date2; // date2: 2010-10-20T20:45:00+0100
assert.equal(date2.getUTCFullYear(), 2010, 'year');
assert.equal(date2.getUTCMonth(), 9, 'month');
assert.equal(date2.getUTCDate(), 20, 'date');
assert.equal(date2.getUTCHours(), 19);
assert.equal(date2.getUTCMinutes(), 45);
assert.equal(date2.getUTCSeconds(), 0);
});
// Copyright 2017 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef XRTL_BASE_TRACING_H_
#define XRTL_BASE_TRACING_H_
#include <string>
#if !defined(WTF_ENABLE)
#define WTF_ENABLE 0
#endif // !WTF_ENABLE
#if WTF_ENABLE
#include <wtf/event.h>
#include <wtf/macros.h>
#include <wtf/platform.h>
#include <wtf/runtime.h>
#endif // WTF_ENABLE
namespace xrtl {
namespace tracing {
#if WTF_ENABLE
// Marks a frame start/end event with a monotonically increasing frame number.
void EmitFrameStart();
void EmitFrameEnd();
// Saves the current trace buffer to the given file path, if enabled.
void SaveToFile(std::string file_path);
#else
// Empty functions to prevent compilation errors.
inline void EmitFrameStart() {}
inline void EmitFrameEnd() {}
inline void SaveToFile(std::string file_path) {}
// No-op macros.
#define __WTF_IGNORED(...)
#define WTF_EVENT(...) __WTF_IGNORED
#define WTF_EVENT0(...)
#define WTF_SCOPE(...) __WTF_IGNORED
#define WTF_SCOPE0(...)
#endif // WTF_ENABLE
} // namespace tracing
} // namespace xrtl
#endif // XRTL_BASE_TRACING_H_
/*
* Copyright (C) 2011 ~ 2018 Deepin Technology Co., Ltd.
*
* Author: sbw <sbw@sbw.so>
*
* Maintainer: sbw <sbw@sbw.so>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "xwindowtraywidget.h"
#include <QWindow>
#include <QPainter>
#include <QX11Info>
#include <QDebug>
#include <QMouseEvent>
#include <QProcess>
#include <QThread>
#include <QApplication>
#include <QScreen>
#include <X11/extensions/shape.h>
#include <X11/extensions/XTest.h>
#include <X11/Xregion.h>
#include <xcb/composite.h>
#include <xcb/xcb_image.h>
static const qreal iconSize = 16;
#define DRAG_THRESHOLD 20
const QPoint rawXPosition(const QPoint &scaledPos)
{
QRect g = qApp->primaryScreen()->geometry();
for (auto *screen : qApp->screens())
{
const QRect &sg = screen->geometry();
if (sg.contains(scaledPos))
{
g = sg;
break;
}
}
return g.topLeft() + (scaledPos - g.topLeft()) * qApp->devicePixelRatio();
}
void sni_cleanup_xcb_image(void *data)
{
xcb_image_destroy(static_cast<xcb_image_t*>(data));
}
XWindowTrayWidget::XWindowTrayWidget(quint32 winId, QWidget *parent)
: AbstractTrayWidget(parent),
m_windowId(winId)
{
wrapWindow();
m_updateTimer = new QTimer(this);
m_updateTimer->setInterval(100);
m_updateTimer->setSingleShot(true);
m_sendHoverEvent = new QTimer(this);
m_sendHoverEvent->setInterval(100);
m_sendHoverEvent->setSingleShot(true);
connect(m_updateTimer, &QTimer::timeout, this, &XWindowTrayWidget::refershIconImage);
#ifdef DOCK_TRAY_USE_NATIVE_POPUP
connect(m_sendHoverEvent, &QTimer::timeout, this, &XWindowTrayWidget::sendHoverEvent);
#endif
setMouseTracking(true);
m_updateTimer->start();
}
XWindowTrayWidget::~XWindowTrayWidget()
{
}
const QImage XWindowTrayWidget::trayImage()
{
return m_image;
}
QSize XWindowTrayWidget::sizeHint() const
{
return QSize(26, 26);
}
void XWindowTrayWidget::showEvent(QShowEvent *e)
{
QWidget::showEvent(e);
m_updateTimer->start();
}
void XWindowTrayWidget::paintEvent(QPaintEvent *e)
{
Q_UNUSED(e);
if (m_image.isNull())
return m_updateTimer->start();
QPainter painter;
painter.begin(this);
painter.setRenderHint(QPainter::Antialiasing);
#ifdef QT_DEBUG
// painter.fillRect(rect(), Qt::red);
#endif
const QPoint p = rect().center() - m_image.rect().center() / m_image.devicePixelRatioF();
painter.drawImage(p, m_image);
painter.end();
}
void XWindowTrayWidget::mousePressEvent(QMouseEvent *e)
{
e->accept();
const QPoint point(e->pos() - rect().center());
if (point.manhattanLength() > 24)
e->ignore();
QWidget::mousePressEvent(e);
}
void XWindowTrayWidget::mouseMoveEvent(QMouseEvent *e)
{
QWidget::mouseMoveEvent(e);
m_sendHoverEvent->start();
}
void XWindowTrayWidget::configContainerPosition()
{
auto c = QX11Info::connection();
const QPoint p(rawXPosition(QCursor::pos()));
const uint32_t containerVals[4] = {uint32_t(p.x()), uint32_t(p.y()), 1, 1};
xcb_configure_window(c, m_containerWid,
XCB_CONFIG_WINDOW_X | XCB_CONFIG_WINDOW_Y | XCB_CONFIG_WINDOW_WIDTH | XCB_CONFIG_WINDOW_HEIGHT,
containerVals);
xcb_flush(c);
}
void XWindowTrayWidget::wrapWindow()
{
auto c = QX11Info::connection();
auto cookie = xcb_get_geometry(c, m_windowId);
QScopedPointer<xcb_get_geometry_reply_t> clientGeom(xcb_get_geometry_reply(c, cookie, Q_NULLPTR));
if (clientGeom.isNull())
return;
//create a container window
const auto ratio = devicePixelRatioF();
auto screen = xcb_setup_roots_iterator (xcb_get_setup (c)).data;
m_containerWid = xcb_generate_id(c);
uint32_t values[2];
auto mask = XCB_CW_BACK_PIXEL | XCB_CW_OVERRIDE_REDIRECT;
values[0] = ParentRelative; //draw a solid background so the embedded icon doesn't get garbage in it
values[1] = true; //bypass WM
xcb_create_window (c, /* connection */
XCB_COPY_FROM_PARENT, /* depth */
m_containerWid, /* window Id */
screen->root, /* parent window */
0, 0, /* x, y */
iconSize * ratio, iconSize * ratio, /* width, height */
0, /* border_width */
XCB_WINDOW_CLASS_INPUT_OUTPUT,/* class */
screen->root_visual, /* visual */
mask, values); /* masks */
/*
We need the window to exist and be mapped, otherwise the child won't render its contents.
We also need it to exist in the right place to get clicks working, as GTK will check send-event locations to see if our window is in the right place. So even though our contents are drawn via compositing, we still put this window in the right place.
We can't composite away anything parented to/owned by the root window (apparently).
Stacking below works in the non-composited case, but doesn't seem to work in kwin's composited case (we would probably need to set the relevant NETWM hint).
As a last resort, set opacity to 0 just to make sure this container never appears.
*/
// const uint32_t stackBelowData[] = {XCB_STACK_MODE_BELOW};
// xcb_configure_window(c, m_containerWid, XCB_CONFIG_WINDOW_STACK_MODE, stackBelowData);
QWindow * win = QWindow::fromWinId(m_containerWid);
win->setOpacity(0);
// setX11PassMouseEvent(true);
xcb_flush(c);
xcb_map_window(c, m_containerWid);
xcb_reparent_window(c, m_windowId,
m_containerWid,
0, 0);
/*
* Render the embedded window offscreen
*/
xcb_composite_redirect_window(c, m_windowId, XCB_COMPOSITE_REDIRECT_MANUAL);
/* we grab the window, but also make sure it's automatically reparented back
* to the root window if we should die.
*/
xcb_change_save_set(c, XCB_SET_MODE_INSERT, m_windowId);
//tell client we're embedding it
// xembed_message_send(m_windowId, XEMBED_EMBEDDED_NOTIFY, m_containerWid, 0, 0);
//move window we're embedding
/*
const uint32_t windowMoveConfigVals[2] = { 0, 0 };
xcb_configure_window(c, m_windowId,
XCB_CONFIG_WINDOW_X | XCB_CONFIG_WINDOW_Y,
windowMoveConfigVals);
*/
//if the window is a clearly stupid size, resize it to something sensible
//this is needed as chromium and such, when resized, just fill the icon with transparent space and only draw in the middle
//however spotify does need this, as by default its window is 900px wide
//use an arbitrary heuristic to make sure icons are always sensible
// if (clientGeom->width > iconSize || clientGeom->height > iconSize )
{
const uint32_t windowMoveConfigVals[2] = { uint32_t(iconSize * ratio), uint32_t(iconSize * ratio) };
xcb_configure_window(c, m_windowId,
XCB_CONFIG_WINDOW_WIDTH | XCB_CONFIG_WINDOW_HEIGHT,
windowMoveConfigVals);
}
//show the embedded window otherwise nothing happens
xcb_map_window(c, m_windowId);
// xcb_clear_area(c, 0, m_windowId, 0, 0, qMin(clientGeom->width, iconSize), qMin(clientGeom->height, iconSize));
xcb_flush(c);
// setWindowOnTop(false);
setWindowOnTop(true);
setX11PassMouseEvent(true);
}
void XWindowTrayWidget::sendHoverEvent()
{
// fake enter event
const QPoint p(rawXPosition(QCursor::pos()));
configContainerPosition();
setX11PassMouseEvent(false);
setWindowOnTop(true);
XTestFakeMotionEvent(QX11Info::display(), 0, p.x(), p.y(), CurrentTime);
XFlush(QX11Info::display());
QTimer::singleShot(100, this, [=] { setX11PassMouseEvent(true); });
}
void XWindowTrayWidget::updateIcon()
{
if (!isVisible() && !m_active)
return;
m_updateTimer->start();
}
//void TrayWidget::hideIcon()
//{
// auto c = QX11Info::connection();
// const uint32_t stackAboveData[] = {XCB_STACK_MODE_BELOW};
// xcb_configure_window(c, m_containerWid, XCB_CONFIG_WINDOW_STACK_MODE, stackAboveData);
// const uint32_t windowMoveConfigVals[2] = {0, 0};
// xcb_configure_window(c, m_containerWid,
// XCB_CONFIG_WINDOW_X | XCB_CONFIG_WINDOW_Y,
// windowMoveConfigVals);
// hide();
//}
void XWindowTrayWidget::sendClick(uint8_t mouseButton, int x, int y)
{
if (isBadWindow())
return;
m_sendHoverEvent->stop();
const QPoint p(rawXPosition(QPoint(x, y)));
configContainerPosition();
setX11PassMouseEvent(false);
setWindowOnTop(true);
XTestFakeMotionEvent(QX11Info::display(), 0, p.x(), p.y(), CurrentTime);
XFlush(QX11Info::display());
XTestFakeButtonEvent(QX11Info::display(), mouseButton, true, CurrentTime);
XFlush(QX11Info::display());
XTestFakeButtonEvent(QX11Info::display(), mouseButton, false, CurrentTime);
XFlush(QX11Info::display());
QTimer::singleShot(100, this, [=] { setX11PassMouseEvent(true); });
}
void XWindowTrayWidget::setActive(const bool active)
{
m_active = active;
m_updateTimer->start();
}
void XWindowTrayWidget::refershIconImage()
{
const auto ratio = devicePixelRatioF();
auto c = QX11Info::connection();
auto cookie = xcb_get_geometry(c, m_windowId);
QScopedPointer<xcb_get_geometry_reply_t> geom(xcb_get_geometry_reply(c, cookie, Q_NULLPTR));
if (geom.isNull())
return;
xcb_expose_event_t expose;
expose.response_type = XCB_EXPOSE;
expose.window = m_containerWid;
expose.x = 0;
expose.y = 0;
expose.width = iconSize * ratio;
expose.height = iconSize * ratio;
xcb_send_event_checked(c, false, m_containerWid, XCB_EVENT_MASK_VISIBILITY_CHANGE, reinterpret_cast<char *>(&expose));
xcb_flush(c);
xcb_image_t *image = xcb_image_get(c, m_windowId, 0, 0, geom->width, geom->height, ~0, XCB_IMAGE_FORMAT_Z_PIXMAP);
if (!image)
return;
QImage qimage(image->data, image->width, image->height, image->stride, QImage::Format_ARGB32, sni_cleanup_xcb_image, image);
if (qimage.isNull())
return;
m_image = qimage.scaled(16 * ratio, 16 * ratio, Qt::KeepAspectRatio, Qt::SmoothTransformation);
m_image.setDevicePixelRatio(ratio);
update();
emit iconChanged();
}
void XWindowTrayWidget::setX11PassMouseEvent(const bool pass)
{
if (pass)
{
XShapeCombineRectangles(QX11Info::display(), m_containerWid, ShapeBounding, 0, 0, nullptr, 0, ShapeSet, YXBanded);
XShapeCombineRectangles(QX11Info::display(), m_containerWid, ShapeInput, 0, 0, nullptr, 0, ShapeSet, YXBanded);
}
else
{
XRectangle rectangle;
rectangle.x = 0;
rectangle.y = 0;
rectangle.width = 1;
rectangle.height = 1;
XShapeCombineRectangles(QX11Info::display(), m_containerWid, ShapeBounding, 0, 0, &rectangle, 1, ShapeSet, YXBanded);
XShapeCombineRectangles(QX11Info::display(), m_containerWid, ShapeInput, 0, 0, &rectangle, 1, ShapeSet, YXBanded);
}
XFlush(QX11Info::display());
}
void XWindowTrayWidget::setWindowOnTop(const bool top)
{
auto c = QX11Info::connection();
const uint32_t stackAboveData[] = {top ? XCB_STACK_MODE_ABOVE : XCB_STACK_MODE_BELOW};
xcb_configure_window(c, m_containerWid, XCB_CONFIG_WINDOW_STACK_MODE, stackAboveData);
xcb_flush(c);
}
bool XWindowTrayWidget::isBadWindow()
{
auto c = QX11Info::connection();
auto cookie = xcb_get_geometry(c, m_windowId);
QScopedPointer<xcb_get_geometry_reply_t> clientGeom(xcb_get_geometry_reply(c, cookie, Q_NULLPTR));
return clientGeom.isNull();
}
# ------------------------------------------------------------------------
# Gunrock: Sub-Project k-Nearest Neighbor (& Shared Nearest Neighbor)
# ------------------------------------------------------------------------
project(knn)
message("-- Project Added: ${PROJECT_NAME}")
include(${CMAKE_SOURCE_DIR}/cmake/SetSubProject.cmake)
add_test(NAME TEST_KNN COMMAND knn market --labels-file
${gunrock_INCLUDE_DIRS}/dataset/small/stars_2total_separate --k=2)
set_tests_properties(TEST_KNN PROPERTIES PASS_REGULAR_EXPRESSION "PASSED KNN")
"use strict";
exports.__esModule = true;
var _getIterator2 = require("../core-js/get-iterator");
var _getIterator3 = _interopRequireDefault(_getIterator2);
var _isIterable2 = require("../core-js/is-iterable");
var _isIterable3 = _interopRequireDefault(_isIterable2);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
exports.default = function (arr, i) {
if (Array.isArray(arr)) {
return arr;
} else if ((0, _isIterable3.default)(Object(arr))) {
var _arr = [];
for (var _iterator = (0, _getIterator3.default)(arr), _step; !(_step = _iterator.next()).done;) {
_arr.push(_step.value);
if (i && _arr.length === i) break;
}
return _arr;
} else {
throw new TypeError("Invalid attempt to destructure non-iterable instance");
}
};
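The helper above is what Babel emits for array destructuring of an iterable: it pulls values through the iterator one at a time and stops once `i` elements have been taken (a falsy `i` means "take everything"). A minimal runnable sketch of the same slicing logic — the names here are illustrative, not Babel's:

```javascript
// Sketch of the sliced-to-array logic: consume an iterable lazily,
// stopping after n elements (n = 0/undefined means consume all).
function sliceIterable(iterable, n) {
  const result = [];
  for (const value of iterable) {
    result.push(value);
    if (n && result.length === n) break; // mirrors `if (i && _arr.length === i) break;`
  }
  return result;
}

// A generator stands in for any iterable source.
function* numbers() { yield 1; yield 2; yield 3; yield 4; }

const [first, second] = sliceIterable(numbers(), 2);
console.log(first, second); // 1 2
```

Because the iterator is consumed lazily, destructuring two elements from an infinite generator would also terminate.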
fileFormatVersion: 2
guid: 2b162f5d72fd0054691f48f60a21d53c
folderAsset: yes
DefaultImporter:
userData:
<?php
namespace spec\Prophecy\Call;
use PhpSpec\ObjectBehavior;
use Prophecy\Prophecy\ObjectProphecy;
use Prophecy\Argument\ArgumentsWildcard;
class CallCenterSpec extends ObjectBehavior
{
/**
* @param \Prophecy\Prophecy\ObjectProphecy $objectProphecy
*/
function let($objectProphecy)
{
}
/**
* @param \Prophecy\Prophecy\ObjectProphecy $objectProphecy
* @param \Prophecy\Argument\ArgumentsWildcard $wildcard
*/
function it_records_calls_made_through_makeCall_method($objectProphecy, $wildcard)
{
$wildcard->scoreArguments(array(5, 2, 3))->willReturn(10);
$objectProphecy->getMethodProphecies()->willReturn(array());
$this->makeCall($objectProphecy, 'setValues', array(5, 2, 3));
$calls = $this->findCalls('setValues', $wildcard);
$calls->shouldHaveCount(1);
$calls[0]->shouldBeAnInstanceOf('Prophecy\Call\Call');
$calls[0]->getMethodName()->shouldReturn('setValues');
$calls[0]->getArguments()->shouldReturn(array(5, 2, 3));
$calls[0]->getReturnValue()->shouldReturn(null);
}
function it_returns_null_for_any_call_through_makeCall_if_no_method_prophecies_added(
$objectProphecy
)
{
$objectProphecy->getMethodProphecies()->willReturn(array());
$this->makeCall($objectProphecy, 'setValues', array(5, 2, 3))->shouldReturn(null);
}
/**
* @param \Prophecy\Prophecy\MethodProphecy $method1
* @param \Prophecy\Prophecy\MethodProphecy $method2
* @param \Prophecy\Prophecy\MethodProphecy $method3
* @param \Prophecy\Argument\ArgumentsWildcard $arguments1
* @param \Prophecy\Argument\ArgumentsWildcard $arguments2
* @param \Prophecy\Argument\ArgumentsWildcard $arguments3
* @param \Prophecy\Promise\PromiseInterface $promise
*/
function it_executes_promise_of_method_prophecy_that_matches_signature_passed_to_makeCall(
$objectProphecy, $method1, $method2, $method3, $arguments1, $arguments2, $arguments3,
$promise
)
{
$method1->getMethodName()->willReturn('getName');
$method1->getArgumentsWildcard()->willReturn($arguments1);
$arguments1->scoreArguments(array('world', 'everything'))->willReturn(false);
$method2->getMethodName()->willReturn('setTitle');
$method2->getArgumentsWildcard()->willReturn($arguments2);
$arguments2->scoreArguments(array('world', 'everything'))->willReturn(false);
$method3->getMethodName()->willReturn('getName');
$method3->getArgumentsWildcard()->willReturn($arguments3);
$method3->getPromise()->willReturn($promise);
$arguments3->scoreArguments(array('world', 'everything'))->willReturn(200);
$objectProphecy->getMethodProphecies()->willReturn(array(
'method1' => array($method1),
'method2' => array($method2, $method3)
));
$objectProphecy->getMethodProphecies('getName')->willReturn(array($method1, $method3));
$objectProphecy->reveal()->willReturn(new \stdClass());
$promise->execute(array('world', 'everything'), $objectProphecy->getWrappedObject(), $method3)->willReturn(42);
$this->makeCall($objectProphecy, 'getName', array('world', 'everything'))->shouldReturn(42);
$calls = $this->findCalls('getName', $arguments3);
$calls->shouldHaveCount(1);
$calls[0]->getReturnValue()->shouldReturn(42);
}
/**
* @param \Prophecy\Prophecy\MethodProphecy $method1
* @param \Prophecy\Prophecy\MethodProphecy $method2
* @param \Prophecy\Prophecy\MethodProphecy $method3
* @param \Prophecy\Argument\ArgumentsWildcard $arguments1
* @param \Prophecy\Argument\ArgumentsWildcard $arguments2
* @param \Prophecy\Argument\ArgumentsWildcard $arguments3
* @param \Prophecy\Promise\PromiseInterface $promise
*/
function it_executes_promise_of_method_prophecy_that_matches_with_highest_score_to_makeCall(
$objectProphecy, $method1, $method2, $method3, $arguments1, $arguments2, $arguments3,
$promise
)
{
$method1->getMethodName()->willReturn('getName');
$method1->getArgumentsWildcard()->willReturn($arguments1);
$arguments1->scoreArguments(array('world', 'everything'))->willReturn(50);
$method2->getMethodName()->willReturn('getName');
$method2->getArgumentsWildcard()->willReturn($arguments2);
$method2->getPromise()->willReturn($promise);
$arguments2->scoreArguments(array('world', 'everything'))->willReturn(300);
$method3->getMethodName()->willReturn('getName');
$method3->getArgumentsWildcard()->willReturn($arguments3);
$arguments3->scoreArguments(array('world', 'everything'))->willReturn(200);
$objectProphecy->getMethodProphecies()->willReturn(array(
'method1' => array($method1),
'method2' => array($method2, $method3)
));
$objectProphecy->getMethodProphecies('getName')->willReturn(array(
$method1, $method2, $method3
));
$objectProphecy->reveal()->willReturn(new \stdClass());
$promise->execute(array('world', 'everything'), $objectProphecy->getWrappedObject(), $method2)
->willReturn('second');
$this->makeCall($objectProphecy, 'getName', array('world', 'everything'))
->shouldReturn('second');
}
/**
* @param \Prophecy\Prophecy\MethodProphecy $method
* @param \Prophecy\Argument\ArgumentsWildcard $arguments
*/
function it_throws_exception_if_call_does_not_match_any_of_defined_method_prophecies(
$objectProphecy, $method, $arguments
)
{
$method->getMethodName()->willReturn('getName');
$method->getArgumentsWildcard()->willReturn($arguments);
$arguments->scoreArguments(array('world', 'everything'))->willReturn(false);
$arguments->__toString()->willReturn('arg1, arg2');
$objectProphecy->getMethodProphecies()->willReturn(array('method1' => array($method)));
$objectProphecy->getMethodProphecies('getName')->willReturn(array($method));
$this->shouldThrow('Prophecy\Exception\Call\UnexpectedCallException')
->duringMakeCall($objectProphecy, 'getName', array('world', 'everything'));
}
/**
* @param \Prophecy\Prophecy\MethodProphecy $method
* @param \Prophecy\Argument\ArgumentsWildcard $arguments
*/
function it_returns_null_if_method_prophecy_that_matches_makeCall_arguments_has_no_promise(
$objectProphecy, $method, $arguments
)
{
$method->getMethodName()->willReturn('getName');
$method->getArgumentsWildcard()->willReturn($arguments);
$method->getPromise()->willReturn(null);
$arguments->scoreArguments(array('world', 'everything'))->willReturn(100);
$objectProphecy->getMethodProphecies()->willReturn(array($method));
$objectProphecy->getMethodProphecies('getName')->willReturn(array($method));
$this->makeCall($objectProphecy, 'getName', array('world', 'everything'))
->shouldReturn(null);
}
/**
* @param \Prophecy\Argument\ArgumentsWildcard $wildcard
*/
function it_finds_recorded_calls_by_a_method_name_and_arguments_wildcard(
$objectProphecy, $wildcard
)
{
$objectProphecy->getMethodProphecies()->willReturn(array());
$this->makeCall($objectProphecy, 'getName', array('world'));
$this->makeCall($objectProphecy, 'getName', array('everything'));
$this->makeCall($objectProphecy, 'setName', array(42));
$wildcard->scoreArguments(array('world'))->willReturn(false);
$wildcard->scoreArguments(array('everything'))->willReturn(10);
$calls = $this->findCalls('getName', $wildcard);
$calls->shouldHaveCount(1);
$calls[0]->getMethodName()->shouldReturn('getName');
$calls[0]->getArguments()->shouldReturn(array('everything'));
}
}
#!/bin/bash
set -e
extract_latest_version_tgz() {
# Takes in a path to a chart's directory, finds the latest version's tgz and extracts it
# All crd tgz are ignored
# Max depth is set to prevent extracting a tgz contained within another tgz, which is the case for charts containing a helm repo
LATEST_VERSION_TGZ_PATH=$(find "$1" -maxdepth 1 -name "*.tgz" ! -name "*crd*.tgz" -print | sort -Vr | head -1)
if [[ $LATEST_VERSION_TGZ_PATH ]]; then
tar -xvf "$LATEST_VERSION_TGZ_PATH" -C "$(dirname "$LATEST_VERSION_TGZ_PATH")"
fi
}
export -f extract_latest_version_tgz
./k3s-images.sh
source $(dirname $0)/version
ARCH=${ARCH:-"amd64"}
SYSTEM_CHART_DEFAULT_BRANCH=${SYSTEM_CHART_DEFAULT_BRANCH:-"dev-v2.4"}
CHART_DEFAULT_BRANCH=${CHART_DEFAULT_BRANCH:-"dev-v2.5"}
cd $(dirname $0)/../package
cp ../bin/rancher.yaml ../bin/rancher ../bin/agent ../bin/data.json ../bin/k3s-airgap-images.tar .
IMAGE=${REPO}/rancher:${TAG}
AGENT_IMAGE=${REPO}/rancher-agent:${AGENT_TAG}
RUNTIME_IMAGE=${REPO}/rancher-runtime:${TAG}
if [ ${ARCH} == arm64 ]; then
sed -i -e '$a\' -e 'ENV ETCD_UNSUPPORTED_ARCH=arm64' Dockerfile
fi
docker build --build-arg VERSION=${TAG} --build-arg ARCH=${ARCH} --build-arg IMAGE_REPO=${REPO} --build-arg SYSTEM_CHART_DEFAULT_BRANCH=${SYSTEM_CHART_DEFAULT_BRANCH} --build-arg CHART_DEFAULT_BRANCH=${CHART_DEFAULT_BRANCH} -t ${IMAGE} .
docker build --build-arg VERSION=${TAG} --build-arg ARCH=${ARCH} --build-arg RANCHER_TAG=${TAG} --build-arg RANCHER_REPO=${REPO} -t ${AGENT_IMAGE} -f Dockerfile.agent .
if [ "${ARCH}" == amd64 ]; then
docker build -t ${RUNTIME_IMAGE} -f Dockerfile.runtime .
fi
mkdir -p ../dist
echo ${IMAGE} > ../dist/images
echo ${AGENT_IMAGE} >> ../dist/images
echo Built ${IMAGE} #${AGENT_IMAGE}
echo
cd ../bin
if [ ! -d build/system-charts ]; then
mkdir -p build
git clone --branch $SYSTEM_CHART_DEFAULT_BRANCH https://github.com/rancher/system-charts build/system-charts
fi
if [ ! -d build/charts ]; then
git clone --branch $CHART_DEFAULT_BRANCH https://github.com/rancher/charts build/charts
# Iterate through chart directories and execute callback to extract latest version tgz
find build/charts/assets -maxdepth 1 -type d -exec bash -c 'extract_latest_version_tgz {}' \;
# Remove index to force building a virtual index like system charts
rm -f build/charts/index.yaml build/charts/assets/index.yaml
fi
TAG=$TAG REPO=${REPO} go run ../pkg/image/export/main.go build/system-charts build/charts $IMAGE $AGENT_IMAGE
if [ ${ARCH} == amd64 ]; then
# rancherd tarball
rm -rf build/rancherd/bundle
mkdir -p build/rancherd/bundle
tar c -C ../cmd/rancherd/bundle . | tar x -C build/rancherd/bundle
cp -vf rancherd build/rancherd/bundle/bin
tar czf rancherd-${ARCH}.tar.gz -C build/rancherd/bundle .
fi
-- Loot Template Cleanup. Removed obsolete loot entries.
-- Kobold Vermin, Harvest Golem , Two-Bit Thug, Diseased Timber Wolf, Kobold Laborer
-- Defias Smuggler, Garrick Padfoot, Defias Pathstalker, Defias Highwayman, Riverpaw Overseer
-- Blue Dragonspawn, Starving Dire Wolf, Defias Night Runner, Farmer Ray, Kobold Worker
-- Eliza <Bride of the Embalmer>, Singe, Timber Wolf, Winter Wolf, Porcine Entourage, Stitches <Gift from the Embalmer>
DELETE FROM creature_loot_template WHERE entry IN
(6, 36, 38, 69, 80, 95, 103, 121, 122, 125, 193, 213, 215, 232, 257, 314, 335, 358, 359, 390, 412);
DELETE FROM creature WHERE id IN
(6, 36, 38, 69, 80, 95, 103, 121, 122, 125, 193, 213, 215, 232, 257, 314, 335, 358, 359, 390, 412);
Section Titles are interactive titles that open and close sections, typically on a form.
package json
import (
"reflect"
)
// Extension holds a set of additional rules to be used when unmarshaling
// strict JSON or JSON-like content.
type Extension struct {
funcs map[string]funcExt
consts map[string]interface{}
keyed map[string]func([]byte) (interface{}, error)
encode map[reflect.Type]func(v interface{}) ([]byte, error)
unquotedKeys bool
trailingCommas bool
}
type funcExt struct {
key string
args []string
}
// Extend changes the decoder behavior to consider the provided extension.
func (dec *Decoder) Extend(ext *Extension) { dec.d.ext = *ext }
// Extend changes the encoder behavior to consider the provided extension.
func (enc *Encoder) Extend(ext *Extension) { enc.ext = *ext }
// Extend includes in e the extensions defined in ext.
func (e *Extension) Extend(ext *Extension) {
for name, fext := range ext.funcs {
e.DecodeFunc(name, fext.key, fext.args...)
}
for name, value := range ext.consts {
e.DecodeConst(name, value)
}
for key, decode := range ext.keyed {
e.DecodeKeyed(key, decode)
}
for typ, encode := range ext.encode {
if e.encode == nil {
e.encode = make(map[reflect.Type]func(v interface{}) ([]byte, error))
}
e.encode[typ] = encode
}
}
// DecodeFunc defines a function call that may be observed inside JSON content.
// A function with the provided name will be unmarshaled as the document
// {key: {args[0]: ..., args[N]: ...}}.
func (e *Extension) DecodeFunc(name string, key string, args ...string) {
if e.funcs == nil {
e.funcs = make(map[string]funcExt)
}
e.funcs[name] = funcExt{key, args}
}
// DecodeConst defines a constant name that may be observed inside JSON content
// and will be decoded with the provided value.
func (e *Extension) DecodeConst(name string, value interface{}) {
if e.consts == nil {
e.consts = make(map[string]interface{})
}
e.consts[name] = value
}
// DecodeKeyed defines a key that when observed as the first element inside a
// JSON document triggers the decoding of that document via the provided
// decode function.
func (e *Extension) DecodeKeyed(key string, decode func(data []byte) (interface{}, error)) {
if e.keyed == nil {
e.keyed = make(map[string]func([]byte) (interface{}, error))
}
e.keyed[key] = decode
}
// DecodeUnquotedKeys defines whether to accept map keys that are unquoted strings.
func (e *Extension) DecodeUnquotedKeys(accept bool) {
e.unquotedKeys = accept
}
// DecodeTrailingCommas defines whether to accept trailing commas in maps and arrays.
func (e *Extension) DecodeTrailingCommas(accept bool) {
e.trailingCommas = accept
}
// EncodeType registers a function to encode values with the same type of the
// provided sample.
func (e *Extension) EncodeType(sample interface{}, encode func(v interface{}) ([]byte, error)) {
if e.encode == nil {
e.encode = make(map[reflect.Type]func(v interface{}) ([]byte, error))
}
e.encode[reflect.TypeOf(sample)] = encode
}
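All the registration methods above share one idiom: the backing map is nil until the first registration, then allocated on demand, so a zero-value `Extension` is immediately usable. A standalone sketch of that pattern (not this package's API — just the shape of `DecodeConst`, with illustrative names):

```go
package main

import "fmt"

// registry mimics the lazily-allocated map fields of Extension.
type registry struct {
	consts map[string]interface{} // nil until the first registration
}

// DecodeConst allocates the map on first use, as Extension.DecodeConst does.
func (r *registry) DecodeConst(name string, value interface{}) {
	if r.consts == nil {
		r.consts = make(map[string]interface{})
	}
	r.consts[name] = value
}

// registeredConsts registers a few constants and reports how many are stored.
func registeredConsts() int {
	var r registry // zero value works: no constructor needed
	r.DecodeConst("undefined", nil)
	r.DecodeConst("MaxKey", 127)
	r.DecodeConst("MaxKey", 128) // re-registering overwrites, not appends
	return len(r.consts)
}

func main() {
	fmt.Println(registeredConsts()) // 2
}
```

The payoff of the idiom is that callers can declare `var ext Extension` and start calling `DecodeConst` without any initialization step.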
fileFormatVersion: 2
guid: f9286865c652f7c4796676f1a56152e5
timeCreated: 1521803997
licenseType: Pro
TextureImporter:
fileIDToRecycleName: {}
externalObjects: {}
serializedVersion: 4
mipmaps:
mipMapMode: 0
enableMipMap: 1
sRGBTexture: 1
linearTexture: 0
fadeOut: 0
borderMipMap: 0
mipMapsPreserveCoverage: 0
alphaTestReferenceValue: 0.5
mipMapFadeDistanceStart: 1
mipMapFadeDistanceEnd: 3
bumpmap:
convertToNormalMap: 0
externalNormalMap: 0
heightScale: 0.25
normalMapFilter: 0
isReadable: 0
grayScaleToAlpha: 0
generateCubemap: 6
cubemapConvolution: 0
seamlessCubemap: 0
textureFormat: 1
maxTextureSize: 2048
textureSettings:
serializedVersion: 2
filterMode: -1
aniso: -1
mipBias: -1
wrapU: -1
wrapV: -1
wrapW: -1
nPOTScale: 1
lightmap: 0
compressionQuality: 50
spriteMode: 0
spriteExtrude: 1
spriteMeshType: 1
alignment: 0
spritePivot: {x: 0.5, y: 0.5}
spriteBorder: {x: 0, y: 0, z: 0, w: 0}
spritePixelsToUnits: 100
alphaUsage: 1
alphaIsTransparency: 0
spriteTessellationDetail: -1
textureType: 0
textureShape: 1
maxTextureSizeSet: 0
compressionQualitySet: 0
textureFormatSet: 0
platformSettings:
- buildTarget: DefaultTexturePlatform
maxTextureSize: 2048
resizeAlgorithm: 0
textureFormat: -1
textureCompression: 1
compressionQuality: 50
crunchedCompression: 0
allowsAlphaSplitting: 0
overridden: 0
spriteSheet:
serializedVersion: 2
sprites: []
outline: []
physicsShape: []
spritePackingTag:
userData:
assetBundleName:
assetBundleVariant:
/**
* Copyright (c) 2005-2013 by Appcelerator, Inc. All Rights Reserved.
* Copyright (c) 2013 by Syapse, Inc. All Rights Reserved.
* Licensed under the terms of the Eclipse Public License (EPL).
* Please see the license.txt included with this distribution for details.
* Any modifications to this file must keep this entire header intact.
*/
package org.python.pydev.ui.actions.container;
import org.eclipse.jface.dialogs.MessageDialog;
import org.python.pydev.editor.actions.PyOrganizeImports;
import org.python.pydev.ui.importsconf.ImportsPreferencesPage;
/**
* Action used to organize imports to all the available python files.
*
* @author Jeremy J. Carroll
*/
public class PyOrganizeImportsAction extends PyContainerFormatterAction {
public PyOrganizeImportsAction() {
super("organize imports", "organize imports in", "organized");
}
@Override
PyOrganizeImports createFormatter() {
return new PyOrganizeImports();
}
@Override
protected boolean confirmRun() {
return
super.confirmRun()
&& ( (!ImportsPreferencesPage.getDeleteUnusedImports())
||
MessageDialog
.openConfirm(
null,
"Confirm Deletion of Unused Imports",
"Your preferences are set to delete unused imports (PyDev > Editor > Code Style > Imports)\n"
+ "\n"
+ "This requires that you have run the PyDev Code Analysis recently for correct behavior.") );
}
}
/*
* pcie_host.h
*
* Copyright (c) 2009 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef PCIE_HOST_H
#define PCIE_HOST_H
#include "hw/pci/pci_host.h"
#include "exec/memory.h"
#define TYPE_PCIE_HOST_BRIDGE "pcie-host-bridge"
#define PCIE_HOST_BRIDGE(obj) \
OBJECT_CHECK(PCIExpressHost, (obj), TYPE_PCIE_HOST_BRIDGE)
#define PCIE_HOST_MCFG_BASE "MCFG"
#define PCIE_HOST_MCFG_SIZE "mcfg_size"
/* pcie_host::base_addr == PCIE_BASE_ADDR_UNMAPPED when it isn't mapped. */
#define PCIE_BASE_ADDR_UNMAPPED ((hwaddr)-1ULL)
struct PCIExpressHost {
PCIHostState pci;
/* express part */
/* base address where MMCONFIG area is mapped. */
hwaddr base_addr;
/* the size of MMCONFIG area. It's host bridge dependent */
hwaddr size;
/* MMCONFIG mmio area */
MemoryRegion mmio;
};
void pcie_host_mmcfg_unmap(PCIExpressHost *e);
void pcie_host_mmcfg_init(PCIExpressHost *e, uint32_t size);
void pcie_host_mmcfg_map(PCIExpressHost *e, hwaddr addr, uint32_t size);
void pcie_host_mmcfg_update(PCIExpressHost *e,
int enable,
hwaddr addr,
uint32_t size);
/*
* PCI express ECAM (Enhanced Configuration Address Mapping) format.
* AKA mmcfg address
* bit 20 - 28: bus number
* bit 15 - 19: device number
* bit 12 - 14: function number
* bit 0 - 11: offset in configuration space of a given device
*/
#define PCIE_MMCFG_SIZE_MAX (1ULL << 28)
#define PCIE_MMCFG_SIZE_MIN (1ULL << 20)
#define PCIE_MMCFG_BUS_BIT 20
#define PCIE_MMCFG_BUS_MASK 0x1ff
#define PCIE_MMCFG_DEVFN_BIT 12
#define PCIE_MMCFG_DEVFN_MASK 0xff
#define PCIE_MMCFG_CONFOFFSET_MASK 0xfff
#define PCIE_MMCFG_BUS(addr) (((addr) >> PCIE_MMCFG_BUS_BIT) & \
PCIE_MMCFG_BUS_MASK)
#define PCIE_MMCFG_DEVFN(addr) (((addr) >> PCIE_MMCFG_DEVFN_BIT) & \
PCIE_MMCFG_DEVFN_MASK)
#define PCIE_MMCFG_CONFOFFSET(addr) ((addr) & PCIE_MMCFG_CONFOFFSET_MASK)
#endif /* PCIE_HOST_H */
L HWCRHK e_chil_err.h e_chil_err.c
fileFormatVersion: 2
guid: 190efca65a78647b7a60e0e4b2288b27
timeCreated: 1510509514
licenseType: Free
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
using System;
using CoreWCF.Channels;
using CoreWCF.Diagnostics;
namespace CoreWCF.Dispatcher
{
internal sealed class MessageOperationFormatter : IClientMessageFormatter, IDispatchMessageFormatter
{
static MessageOperationFormatter instance;
internal static MessageOperationFormatter Instance
{
get
{
if (instance == null)
instance = new MessageOperationFormatter();
return instance;
}
}
public object DeserializeReply(Message message, object[] parameters)
{
if (message == null)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull(nameof(message));
if (parameters != null && parameters.Length > 0)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgument(SR.SFxParametersMustBeEmpty);
return message;
}
public void DeserializeRequest(Message message, object[] parameters)
{
if (message == null)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull(nameof(message));
if (parameters == null)
throw TraceUtility.ThrowHelperError(new ArgumentNullException(nameof(parameters)), message);
if (parameters.Length != 1)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgument(SR.SFxParameterMustBeArrayOfOneElement);
parameters[0] = message;
}
public bool IsFault(string operation, Exception error)
{
return false;
}
public MessageFault SerializeFault(Exception error)
{
throw DiagnosticUtility.ExceptionUtility.ThrowHelperError(new InvalidOperationException(SR.SFxMessageOperationFormatterCannotSerializeFault));
}
public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
{
if (!(result is Message))
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgument(SR.SFxResultMustBeMessage);
if (parameters != null && parameters.Length > 0)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgument(SR.SFxParametersMustBeEmpty);
return (Message)result;
}
public Message SerializeRequest(MessageVersion messageVersion, object[] parameters)
{
if (parameters == null)
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull(nameof(parameters));
if (parameters.Length != 1 || !(parameters[0] is Message))
throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgument(SR.SFxParameterMustBeMessage);
return (Message)parameters[0];
}
}
}
|
# The Content Format
Coleslaw expects content to have a file extension matching the class
of the content (i.e. `.post` for blog posts, `.page` for static pages, etc.).
Every file should also begin with a metadata header delimited by the
config-specified `:separator`, which is ";;;;;" by
default. Example:
```
;;;;;
title: foo
tags: bar, baz
date: yyyy-mm-dd hh:mm:ss
format: html (for raw html) or md (for markdown)
excerpt: Can also be extracted from content (see :excerpt-sep config param)
;;;;;
your post
```
Posts require the `title:` and `format:` fields.
Pages require the `title:` and `url:` fields.
To omit a field, simply leave the line out; empty lines and empty
fields (e.g. "tags:" followed by whitespace) will be ignored.
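As a rough illustration (not part of coleslaw itself — the function name and exact behavior here are assumptions for demonstration), the header format above can be parsed in a few lines of Python:

```python
def parse_content(text, separator=";;;;;"):
    """Split a coleslaw-style file into a metadata dict and a body string.

    Lines between the first two separator lines are parsed as `key: value`
    pairs; empty lines and empty fields are skipped, mirroring the rules above.
    """
    lines = text.splitlines()
    # Locate the two separator lines that delimit the metadata header.
    first = lines.index(separator)
    second = lines.index(separator, first + 1)
    meta = {}
    for line in lines[first + 1:second]:
        if ":" not in line:
            continue  # skip empty lines and stray text
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key and value:  # ignore empty fields like "tags:   "
            meta[key] = value
    body = "\n".join(lines[second + 1:])
    return meta, body
```

Parsing the example file above would yield a dict with `title`, `tags`, `date`, `format`, and `excerpt` keys, plus the post body as a string.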
|
#!/usr/bin/env bash
# Keyboard Shortcut
sxhkd -c "${HOME}/.config/bspwm/configuration/sxhkd/sxhkdrc" &
# Restore cursor theme
xsetroot -cursor_name left_ptr
# Restore wallpaper
feh --bg-fill "${HOME}/Pictures/Wallpapers/no-mans-sky-8k-ultrawide-i3.jpg"
# Music is layf
mpd &>/dev/null
# Compositor
picom --experimental-backends --dbus --config ~/.config/bspwm/configuration/picom/picom.conf &
# Load Xresources
xrdb "${HOME}/.Xresources"
# nm-applet
nm-applet &>/dev/null &
# blueman applet
blueman-applet &>/dev/null &
# Equalizer
pulseeffects --gapplication-service &>/dev/null &
# Polkit
/usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &>/dev/null &
# Keyring
eval $(gnome-keyring-daemon -s --components=pkcs11,secrets,ssh,gpg) &>/dev/null
|
syntax = "proto3";
package POGOProtos.Data.Gym;
import "POGOProtos/Map/Fort/FortData.proto";
import "POGOProtos/Data/Gym/GymMembership.proto";
message GymState {
.POGOProtos.Map.Fort.FortData fort_data = 1;
repeated .POGOProtos.Data.Gym.GymMembership memberships = 2;
bool deploy_lockout = 3;
}
|
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="Category" module="erp5.portal_type"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_Add_portal_content_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Add_portal_folders_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Copy_or_Move_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Delete_objects_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Modify_portal_content_Permission</string> </key>
<value>
<tuple>
<string>Assignee</string>
<string>Assignor</string>
<string>Manager</string>
<string>Owner</string>
</tuple>
</value>
</item>
<item>
<key> <string>categories</string> </key>
<value>
<tuple>
<string>base_amount/payroll/l10n/fr/ctp/450D</string>
</tuple>
</value>
</item>
<item>
<key> <string>codification</string> </key>
<value> <string>450D</string> </value>
</item>
<item>
<key> <string>description</string> </key>
<value>
<none/>
</value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>450D</string> </value>
</item>
<item>
<key> <string>portal_type</string> </key>
<value> <string>Category</string> </value>
</item>
<item>
<key> <string>title</string> </key>
<value> <string>450D</string> </value>
</item>
</dictionary>
</pickle>
</record>
</ZopeData>
|
#!/usr/bin/env bash
# coding=utf-8
# Author: zhaigy@ucweb.com
# Date: 2013-01
#
ZK_PORT_PREFIX=$PORT_PREFIX
# ZooKeeper servers. Newer versions are configured for automatic hot-standby
# failover, so ZK is required. This script installs ZK based on this setting.
# If left empty, at most 5 of the nodes in $NODES will be used.
ZK_NODES=""
# Supporting logic below; not a config item, do not modify.
#-------------------------------------
if [ -z "$ZK_NODES" ]; then
NS=($NODES)
if ((${#NS[@]} <= 5)); then
ZK_NODES=$NODES
else
ZK_NODES=${NS[@]:0:5}
fi
unset NS
fi
#-------------------------------------
|
gcr.io/google_containers/cloud-controller-manager-arm:v1.13.0-alpha.3
|
/// <reference path="../test-types.ts"/>
import * as _ from 'lodash';
import assert = require('assert');
import server = require('../utils/server');
import utils = require('../utils/utils');
import { buildSite } from '../utils/site-builder';
import { TyE2eTestBrowser } from '../utils/pages-for';
import settings = require('../utils/settings');
import logAndDie = require('../utils/log-and-die');
import c = require('../test-constants');
let everyonesBrowsers;
let richBrowserA;
let richBrowserB;
let owen: Member;
let owensBrowser: TyE2eTestBrowser;
let maria: Member;
let mariasBrowser: TyE2eTestBrowser;
let strangersBrowser: TyE2eTestBrowser;
let siteIdAddress: IdAddress;
let siteId;
let forum: TwoPagesTestForum;
let discussionPageUrl: string;
const DummyGroupUsername = "dummy_ignore_group";
const DummyGroupFullName = "Dummy Ignore Group";
const GroupsFirstFullName = 'GroupsFirstFullName';
const GroupsFirstUsername = 'groups_1st_username';
const GroupsSecondFullName = 'GroupsSecondFullName';
const GroupsSecondUsername = 'groups_2nd_username';
const DummyGroupNames = { username: DummyGroupUsername, fullName: DummyGroupFullName };
const GroupsFirstNames = { username: GroupsFirstUsername, fullName: GroupsFirstFullName };
const GroupsSecondNames = { username: GroupsSecondUsername, fullName: GroupsSecondFullName };
describe("group-profile-change-things TyT5MS5TWV0", () => {
it("import a site", () => {
const builder = buildSite();
forum = builder.addTwoPagesForum({
title: "Group Profile Change Things",
members: undefined, // default = everyone
});
assert(builder.getSite() === forum.siteData);
siteIdAddress = server.importSiteData(forum.siteData);
siteId = siteIdAddress.id;
});
it("initialize people", () => {
everyonesBrowsers = new TyE2eTestBrowser(wdioBrowser);
richBrowserA = new TyE2eTestBrowser(browserA);
richBrowserB = new TyE2eTestBrowser(browserB);
owen = forum.members.owen;
owensBrowser = richBrowserA;
maria = forum.members.maria;
mariasBrowser = richBrowserB;
strangersBrowser = richBrowserB;
});
it("Owen logs in to the groups page", () => {
owensBrowser.groupListPage.goHere(siteIdAddress.origin);
owensBrowser.complex.loginWithPasswordViaTopbar(owen);
});
it("... creates a dummy won't-be-used group", () => {
// Just so can verify the server won't edit the wrong custom group.
owensBrowser.groupListPage.createGroup(DummyGroupNames);
});
it("... navigates back to the groups list page", () => {
owensBrowser.userProfilePage.navBackToGroups();
});
it("... creates a group to edit", () => {
owensBrowser.groupListPage.createGroup(GroupsFirstNames);
});
it("... adds Maria", () => {
owensBrowser.userProfilePage.groupMembers.addOneMember(maria.username);
});
it("Maria logs in", () => {
mariasBrowser.go(siteIdAddress.origin + '/' + forum.topics.byMichaelCategoryA.slug);
mariasBrowser.complex.loginWithPasswordViaTopbar(maria);
});
it("... goes to the groups page, via her username menu", () => {
mariasBrowser.topbar.navigateToGroups();
});
it("There're two custom groups", () => {
assert.equal(mariasBrowser.groupListPage.countCustomGroups(), 2);
});
it("... with the correct names", () => {
mariasBrowser.groupListPage.waitUntilGroupPresent(DummyGroupNames);
mariasBrowser.groupListPage.waitUntilGroupPresent(GroupsFirstNames);
});
it("Owen goes to the group's prefs | about page", () => {
owensBrowser.userProfilePage.goToPreferences();
});
it("... the group's name is in the about box", () => {
owensBrowser.userProfilePage.waitUntilUsernameIs(GroupsFirstUsername);
});
it("He renames the group: changes the username", () => {
owensBrowser.userProfilePage.preferences.startChangingUsername();
owensBrowser.userProfilePage.preferences.setUsername(GroupsSecondUsername);
});
it("... and the full name", () => {
owensBrowser.userProfilePage.preferences.setFullName(GroupsSecondFullName);
});
it("... saves", () => {
owensBrowser.userProfilePage.preferences.save();
});
it("The group's new username is now in the about box", () => {
owensBrowser.userProfilePage.waitUntilUsernameIs(GroupsSecondUsername);
});
it("Maria refreshes the page, and there're still two custom groups", () => {
mariasBrowser.refresh();
mariasBrowser.groupListPage.waitUntilLoaded();
assert.equal(mariasBrowser.groupListPage.countCustomGroups(), 2);
});
it("... with the correct names", () => {
mariasBrowser.groupListPage.waitUntilGroupPresent(DummyGroupNames);
mariasBrowser.groupListPage.waitUntilGroupPresent(GroupsSecondNames);
});
// Later: edit title, verify member's title (displayed next to hens username,
// at hens posts) gets refreshed.
});
|
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/cloud/ml/v1/operation_metadata.proto
package ml
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "google.golang.org/genproto/googleapis/api/annotations"
import google_protobuf2 "github.com/golang/protobuf/ptypes/timestamp"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// The operation type.
type OperationMetadata_OperationType int32
const (
// Unspecified operation type.
OperationMetadata_OPERATION_TYPE_UNSPECIFIED OperationMetadata_OperationType = 0
// An operation to create a new version.
OperationMetadata_CREATE_VERSION OperationMetadata_OperationType = 1
// An operation to delete an existing version.
OperationMetadata_DELETE_VERSION OperationMetadata_OperationType = 2
// An operation to delete an existing model.
OperationMetadata_DELETE_MODEL OperationMetadata_OperationType = 3
)
var OperationMetadata_OperationType_name = map[int32]string{
0: "OPERATION_TYPE_UNSPECIFIED",
1: "CREATE_VERSION",
2: "DELETE_VERSION",
3: "DELETE_MODEL",
}
var OperationMetadata_OperationType_value = map[string]int32{
"OPERATION_TYPE_UNSPECIFIED": 0,
"CREATE_VERSION": 1,
"DELETE_VERSION": 2,
"DELETE_MODEL": 3,
}
func (x OperationMetadata_OperationType) String() string {
return proto.EnumName(OperationMetadata_OperationType_name, int32(x))
}
func (OperationMetadata_OperationType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor2, []int{0, 0}
}
// Represents the metadata of the long-running operation.
type OperationMetadata struct {
// The time the operation was submitted.
CreateTime *google_protobuf2.Timestamp `protobuf:"bytes,1,opt,name=create_time,json=createTime" json:"create_time,omitempty"`
// The time operation processing started.
StartTime *google_protobuf2.Timestamp `protobuf:"bytes,2,opt,name=start_time,json=startTime" json:"start_time,omitempty"`
// The time operation processing completed.
EndTime *google_protobuf2.Timestamp `protobuf:"bytes,3,opt,name=end_time,json=endTime" json:"end_time,omitempty"`
// Indicates whether a request to cancel this operation has been made.
IsCancellationRequested bool `protobuf:"varint,4,opt,name=is_cancellation_requested,json=isCancellationRequested" json:"is_cancellation_requested,omitempty"`
// The operation type.
OperationType OperationMetadata_OperationType `protobuf:"varint,5,opt,name=operation_type,json=operationType,enum=google.cloud.ml.v1.OperationMetadata_OperationType" json:"operation_type,omitempty"`
// Contains the name of the model associated with the operation.
ModelName string `protobuf:"bytes,6,opt,name=model_name,json=modelName" json:"model_name,omitempty"`
// Contains the version associated with the operation.
Version *Version `protobuf:"bytes,7,opt,name=version" json:"version,omitempty"`
}
func (m *OperationMetadata) Reset() { *m = OperationMetadata{} }
func (m *OperationMetadata) String() string { return proto.CompactTextString(m) }
func (*OperationMetadata) ProtoMessage() {}
func (*OperationMetadata) Descriptor() ([]byte, []int) { return fileDescriptor2, []int{0} }
func (m *OperationMetadata) GetCreateTime() *google_protobuf2.Timestamp {
if m != nil {
return m.CreateTime
}
return nil
}
func (m *OperationMetadata) GetStartTime() *google_protobuf2.Timestamp {
if m != nil {
return m.StartTime
}
return nil
}
func (m *OperationMetadata) GetEndTime() *google_protobuf2.Timestamp {
if m != nil {
return m.EndTime
}
return nil
}
func (m *OperationMetadata) GetIsCancellationRequested() bool {
if m != nil {
return m.IsCancellationRequested
}
return false
}
func (m *OperationMetadata) GetOperationType() OperationMetadata_OperationType {
if m != nil {
return m.OperationType
}
return OperationMetadata_OPERATION_TYPE_UNSPECIFIED
}
func (m *OperationMetadata) GetModelName() string {
if m != nil {
return m.ModelName
}
return ""
}
func (m *OperationMetadata) GetVersion() *Version {
if m != nil {
return m.Version
}
return nil
}
func init() {
proto.RegisterType((*OperationMetadata)(nil), "google.cloud.ml.v1.OperationMetadata")
proto.RegisterEnum("google.cloud.ml.v1.OperationMetadata_OperationType", OperationMetadata_OperationType_name, OperationMetadata_OperationType_value)
}
func init() { proto.RegisterFile("google/cloud/ml/v1/operation_metadata.proto", fileDescriptor2) }
var fileDescriptor2 = []byte{
// 454 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x92, 0x5f, 0x6b, 0xdb, 0x30,
0x14, 0xc5, 0xe7, 0xb6, 0x6b, 0x1a, 0x75, 0x0d, 0x99, 0x1e, 0xb6, 0xcc, 0xfb, 0x17, 0xfa, 0x30,
0x02, 0x03, 0x99, 0xb4, 0xdb, 0xc3, 0xd6, 0xa7, 0x36, 0xd1, 0x20, 0xd0, 0xc6, 0xc6, 0xf5, 0x0a,
0xeb, 0x8b, 0x51, 0xed, 0x3b, 0x23, 0x90, 0x25, 0xcf, 0x52, 0x0c, 0xfd, 0x2c, 0xfb, 0xb2, 0x23,
0x92, 0x4d, 0x33, 0x52, 0xe8, 0xa3, 0xce, 0xfd, 0x9d, 0xab, 0xab, 0x7b, 0x84, 0x3e, 0x17, 0x4a,
0x15, 0x02, 0x82, 0x4c, 0xa8, 0x55, 0x1e, 0x94, 0x22, 0x68, 0xa6, 0x81, 0xaa, 0xa0, 0x66, 0x86,
0x2b, 0x99, 0x96, 0x60, 0x58, 0xce, 0x0c, 0x23, 0x55, 0xad, 0x8c, 0xc2, 0xd8, 0xc1, 0xc4, 0xc2,
0xa4, 0x14, 0xa4, 0x99, 0xfa, 0xef, 0xda, 0x06, 0xac, 0xe2, 0x01, 0x93, 0x52, 0x19, 0xeb, 0xd4,
0xce, 0xe1, 0x7f, 0x7a, 0xa4, 0x7d, 0xa9, 0x72, 0x10, 0xa9, 0x86, 0xba, 0xe1, 0x19, 0xb4, 0xdc,
0xc7, 0x96, 0xb3, 0xa7, 0xbb, 0xd5, 0xef, 0xc0, 0xf0, 0x12, 0xb4, 0x61, 0x65, 0xe5, 0x80, 0xe3,
0xbf, 0x7b, 0xe8, 0x65, 0xd8, 0xcd, 0x75, 0xd5, 0x8e, 0x85, 0xcf, 0xd0, 0x61, 0x56, 0x03, 0x33,
0x90, 0xae, 0xf9, 0x91, 0x37, 0xf6, 0x26, 0x87, 0x27, 0x3e, 0x69, 0xc7, 0xec, 0x9a, 0x91, 0xa4,
0x6b, 0x16, 0x23, 0x87, 0xaf, 0x05, 0xfc, 0x0d, 0x21, 0x6d, 0x58, 0x6d, 0x9c, 0x77, 0xe7, 0x49,
0x6f, 0xdf, 0xd2, 0xd6, 0xfa, 0x15, 0x1d, 0x80, 0xcc, 0x9d, 0x71, 0xf7, 0x49, 0x63, 0x0f, 0x64,
0x6e, 0x6d, 0xdf, 0xd1, 0x1b, 0xae, 0xd3, 0x8c, 0xc9, 0x0c, 0x84, 0x70, 0x1b, 0xae, 0xe1, 0xcf,
0x0a, 0xb4, 0x81, 0x7c, 0xb4, 0x37, 0xf6, 0x26, 0x07, 0xf1, 0x6b, 0xae, 0x67, 0x1b, 0xf5, 0xb8,
0x2b, 0xe3, 0x5b, 0x34, 0x78, 0xc8, 0xc5, 0xdc, 0x57, 0x30, 0x7a, 0x3e, 0xf6, 0x26, 0x83, 0x93,
0x53, 0xb2, 0x1d, 0x0a, 0xd9, 0xda, 0xd4, 0x83, 0x92, 0xdc, 0x57, 0x10, 0x1f, 0xa9, 0xcd, 0x23,
0x7e, 0x8f, 0x90, 0x0b, 0x45, 0xb2, 0x12, 0x46, 0xfb, 0x63, 0x6f, 0xd2, 0x8f, 0xfb, 0x56, 0x59,
0x32, 0xfb, 0xda, 0x5e, 0x03, 0xb5, 0xe6, 0x4a, 0x8e, 0x7a, 0xf6, 0xb1, 0x6f, 0x1f, 0xbb, 0xf3,
0xc6, 0x21, 0x71, 0xc7, 0x1e, 0x73, 0x74, 0xf4, 0xdf, 0xad, 0xf8, 0x03, 0xf2, 0xc3, 0x88, 0xc6,
0xe7, 0xc9, 0x22, 0x5c, 0xa6, 0xc9, 0xaf, 0x88, 0xa6, 0x3f, 0x97, 0xd7, 0x11, 0x9d, 0x2d, 0x7e,
0x2c, 0xe8, 0x7c, 0xf8, 0x0c, 0x63, 0x34, 0x98, 0xc5, 0xf4, 0x3c, 0xa1, 0xe9, 0x0d, 0x8d, 0xaf,
0x17, 0xe1, 0x72, 0xe8, 0xad, 0xb5, 0x39, 0xbd, 0xa4, 0x1b, 0xda, 0x0e, 0x1e, 0xa2, 0x17, 0xad,
0x76, 0x15, 0xce, 0xe9, 0xe5, 0x70, 0xf7, 0x42, 0x20, 0x3f, 0x53, 0xe5, 0xd6, 0x54, 0xac, 0xe2,
0xa4, 0x99, 0x5e, 0xbc, 0xda, 0x5a, 0x47, 0xb4, 0x0e, 0x29, 0xf2, 0x6e, 0xbf, 0xb4, 0x8e, 0x42,
0x09, 0x26, 0x0b, 0xa2, 0xea, 0x22, 0x28, 0x40, 0xda, 0x08, 0x03, 0x57, 0x62, 0x15, 0xd7, 0x9b,
0xbf, 0xf7, 0xac, 0x14, 0x77, 0xfb, 0x16, 0x38, 0xfd, 0x17, 0x00, 0x00, 0xff, 0xff, 0x03, 0xf9,
0xcc, 0xf1, 0x3c, 0x03, 0x00, 0x00,
}
|
fileFormatVersion: 2
guid: 71e01454e4f05d0408bf7e467df7be00
timeCreated: 1456376760
licenseType: Free
MonoImporter:
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
|
#if !defined NO_PING && !defined NO_UDP
// ping/udp structure.
typedef struct PINGFLOOD
{
SOCKET sock;
char chan[128];
char host[128];
int num;
int size;
int delay;
int port;
int threadnum;
BOOL notice;
BOOL silent;
BOOL gotinfo;
} PINGFLOOD;
#endif
#ifndef NO_PING
DWORD WINAPI ping(LPVOID param);
#endif
#ifndef NO_UDP
DWORD WINAPI udp(LPVOID param);
#endif
|
//
// OutlineTableViewController.h
// Outline
//
// Created by Tim Moose on 5/27/13.
// Copyright (c) 2013 Tractable Labs. All rights reserved.
//
#import <TLIndexPathTools/TLTreeTableViewController.h>
@interface OutlineTableViewController : TLTreeTableViewController <TLTreeTableViewControllerDelegate>
@end
|
Joe Groff
|
# The Elmish Book
The Elmish Book is a practical guide to building modern and reliable web applications in F# from first principles. We will be using the [Fable](https://fable.io/) compiler, which will take our F# code and turn it into JavaScript. This allows our code to run anywhere JavaScript runs, whether it is the browser, [Node.js][nodejs], or other runtimes. Fable is designed with interoperability in mind, which makes it simple to re-use and integrate with the vast ecosystem of JavaScript libraries, as we will see later on in the book.
Our primary focus will be building client applications for the browser. We will start by learning the development workflow around client applications, slowly understanding the tooling and the hybrid nature of Fable projects since we will be both using [.NET][dotnet] and Node.js tools for development.
Using the Elmish library, we will build and design our applications following The Elm Architecture: A beautiful pattern for making genuinely modular user interfaces as popularized by the [Elm][elm] programming language. We will spend a significant portion of the book talking about, understanding, and building applications that follow this architecture starting from scratch until it becomes second nature to the reader, hence the name of this book.
The premise of The Elm Architecture is the ability to build robust and reliable applications: applications that don't fail or break easily. Building a stable structure requires identifying the failure points of that structure and accounting for them. When it comes to web applications, many problems come down to the correct handling of data and syncing it with the user interface. Data can have many failure points, whether it is a failure when being retrieved, a failure when being processed from one form to another, or failure when assuming the data to be available and using it when in fact, it is not. To account for these problems, we will spend a lot of time discussing **data modeling** and ways to encode the data using types with the help of F#'s powerful type-system while having the compiler at our backs.
The pacing of the book is *deliberately* slow because learning front-end development can often be overwhelming. That is why each chapter is divided into bite-sized sections that are hopefully easy to understand on their own. These sections include working small samples to demonstrate the various concepts. As you progress through the book, the concepts start to become more apparent as we keep expanding upon the things we learn along the way.
Some parts of the book are *opinionated* and do not necessarily follow the tutorials and guidelines you may have read before. However, this is not to say that you should follow my advice and forget what you already know; quite the opposite: my goal is that you learn a lot and gain enough experience to draw your own conclusions and understand why one approach is better than another. That is why I will try my best to explain my *train of thought* when going through the examples and the way they are implemented.
[elm]:https://elm-lang.org/
[nodejs]:https://nodejs.org/en/
[dotnet]:https://dotnet.microsoft.com/
|
/* =========================================================
* bootstrap-slider.js v2.0.0
* http://www.eyecon.ro/bootstrap-slider
* =========================================================
* Copyright 2012 Stefan Petre
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* ========================================================= */
!function( $ ) {
var Slider = function(element, options) {
this.dragLocked = false;
this.limit = 100000;
this.element = $(element).hide();
this.picker = $('<div class="slider">'+
'<div class="slider-track">'+
'<div class="slider-selection"></div>'+
'<div class="slider-handle"></div>'+
'<div class="slider-handle"></div>'+
'</div>'+
'<div class="tooltip"><div class="tooltip-arrow"></div><div class="tooltip-inner"></div></div>'+
'</div>')
.insertBefore(this.element)
.append(this.element);
this.id = this.element.data('slider-id')||options.id;
if (this.id) {
this.picker[0].id = this.id;
}
if (typeof Modernizr !== 'undefined' && Modernizr.touch) {
this.touchCapable = true;
}
var tooltip = this.element.data('slider-tooltip')||options.tooltip;
this.tooltip = this.picker.find('.tooltip');
this.tooltipInner = this.tooltip.find('div.tooltip-inner');
this.orientation = this.element.data('slider-orientation')||options.orientation;
switch(this.orientation) {
case 'vertical':
this.picker.addClass('slider-vertical');
this.stylePos = 'top';
this.mousePos = 'pageY';
this.sizePos = 'offsetHeight';
this.tooltip.addClass('right')[0].style.left = '100%';
break;
default:
this.picker
.addClass('slider-horizontal')
.css('width', this.element.outerWidth());
this.orientation = 'horizontal';
this.stylePos = 'left';
this.mousePos = 'pageX';
this.sizePos = 'offsetWidth';
this.tooltip.addClass('top')[0].style.top = -this.tooltip.outerHeight() - 14 + 'px';
break;
}
this.min = this.element.data('slider-min')||options.min;
this.max = this.element.data('slider-max')||options.max;
this.step = this.element.data('slider-step')||options.step;
this.value = this.element.data('slider-value')||options.value;
if (this.value[1]) {
this.range = true;
}
this.selection = this.element.data('slider-selection')||options.selection;
this.selectionEl = this.picker.find('.slider-selection');
if (this.selection === 'none') {
this.selectionEl.addClass('hide');
}
this.selectionElStyle = this.selectionEl[0].style;
this.handle1 = this.picker.find('.slider-handle:first');
this.handle1Stype = this.handle1[0].style;
this.handle2 = this.picker.find('.slider-handle:last');
this.handle2Stype = this.handle2[0].style;
var handle = this.element.data('slider-handle')||options.handle;
switch(handle) {
case 'round':
this.handle1.addClass('round');
this.handle2.addClass('round');
break;
case 'triangle':
this.handle1.addClass('triangle');
this.handle2.addClass('triangle');
break;
}
if (this.range) {
this.value[0] = Math.max(this.min, Math.min(this.max, this.value[0]));
this.value[1] = Math.max(this.min, Math.min(this.max, this.value[1]));
} else {
this.value = [ Math.max(this.min, Math.min(this.max, this.value))];
this.handle2.addClass('hide');
if (this.selection == 'after') {
this.value[1] = this.max;
} else {
this.value[1] = this.min;
}
}
this.diff = this.max - this.min;
this.percentage = [
(this.value[0]-this.min)*100/this.diff,
(this.value[1]-this.min)*100/this.diff,
this.step*100/this.diff
];
this.offset = this.picker.offset();
this.size = this.picker[0][this.sizePos];
this.formater = options.formater;
this.reversed = this.element.data('slider-reversed')||options.reversed;
this.layout();
if (this.touchCapable) {
// Touch: Bind touch events:
this.picker.on({
touchstart: $.proxy(this.mousedown, this)
});
} else {
this.picker.on({
mousedown: $.proxy(this.mousedown, this)
});
}
if (tooltip === 'show') {
this.picker.on({
mouseenter: $.proxy(this.showTooltip, this),
mouseleave: $.proxy(this.hideTooltip, this)
});
} else {
this.tooltip.addClass('hide');
}
};
Slider.prototype = {
constructor: Slider,
over: false,
inDrag: false,
showTooltip: function(){
this.tooltip.addClass('in');
//var left = Math.round(this.percent*this.width);
//this.tooltip.css('left', left - this.tooltip.outerWidth()/2);
this.over = true;
},
hideTooltip: function(){
if (this.inDrag === false) {
this.tooltip.removeClass('in');
}
this.over = false;
},
layout: function(){
var positionPercentages;
if(this.reversed) {
positionPercentages = [ this.percentage[1] - this.percentage[0], this.percentage[1] ];
} else {
positionPercentages = [ this.percentage[0], this.percentage[1] ];
}
this.handle1Stype[this.stylePos] = positionPercentages[0]+'%';
this.handle2Stype[this.stylePos] = positionPercentages[1]+'%';
if (this.orientation == 'vertical') {
this.selectionElStyle.top = Math.min(positionPercentages[0], positionPercentages[1]) +'%';
this.selectionElStyle.height = Math.abs(positionPercentages[0] - positionPercentages[1]) +'%';
} else {
this.selectionElStyle.left = Math.min(positionPercentages[0], positionPercentages[1]) +'%';
this.selectionElStyle.width = Math.abs(positionPercentages[0] - positionPercentages[1]) +'%';
}
if (this.range) {
this.tooltipInner.text(
this.formater(this.value[0]) +
' : ' +
this.formater(this.value[1])
);
this.tooltip[0].style[this.stylePos] = this.size * (positionPercentages[0] + (positionPercentages[1] - positionPercentages[0])/2)/100 - (this.orientation === 'vertical' ? this.tooltip.outerHeight()/2 : this.tooltip.outerWidth()/2) +'px';
} else {
this.tooltipInner.text(
this.formater(this.value[0])
);
this.tooltip[0].style[this.stylePos] = this.size * positionPercentages[0]/100 - (this.orientation === 'vertical' ? this.tooltip.outerHeight()/2 : this.tooltip.outerWidth()/2) +'px';
}
},
mousedown: function(ev) {
if (!this.dragLocked){
// Touch: Get the original event:
if (this.touchCapable && ev.type === 'touchstart') {
ev = ev.originalEvent;
}
this.offset = this.picker.offset();
this.size = this.picker[0][this.sizePos];
var percentage = this.getPercentage(ev);
if (this.range) {
var diff1 = Math.abs(this.percentage[0] - percentage);
var diff2 = Math.abs(this.percentage[1] - percentage);
this.dragged = (diff1 < diff2) ? 0 : 1;
} else {
this.dragged = 0;
}
this.percentage[this.dragged] = this.reversed ? this.percentage[1] - percentage : percentage;
this.layout();
if (this.touchCapable) {
// Touch: Bind touch events:
$(document).on({
touchmove: $.proxy(this.mousemove, this),
touchend: $.proxy(this.mouseup, this)
});
} else {
$(document).on({
mousemove: $.proxy(this.mousemove, this),
mouseup: $.proxy(this.mouseup, this)
});
}
this.inDrag = true;
var val = this.calculateValue();
this.setValue(val);
this.element.trigger({
type: 'slideStart',
value: val
}).trigger({
type: 'slide',
value: val
});
return false;
}
},
mousemove: function(ev) {
// Touch: Get the original event:
if (!this.dragLocked){
if (this.touchCapable && ev.type === 'touchmove') {
ev = ev.originalEvent;
}
var percentage = this.getPercentage(ev);
if (this.range) {
if (this.dragged === 0 && this.percentage[1] < percentage) {
this.percentage[0] = this.percentage[1];
this.dragged = 1;
} else if (this.dragged === 1 && this.percentage[0] > percentage) {
this.percentage[1] = this.percentage[0];
this.dragged = 0;
}
}
var x = this.reversed ? this.percentage[1] - percentage : percentage;
if (x > this.limit) {
return ;
}
this.percentage[this.dragged] = x;
this.layout();
var val = this.calculateValue();
this.setValue(val);
this.element
.trigger({
type: 'slide',
value: val
})
.data('value', val)
.prop('value', val);
return false;
}
},
mouseup: function(ev) {
if (this.touchCapable) {
// Touch: Bind touch events:
$(document).off({
touchmove: this.mousemove,
touchend: this.mouseup
});
} else {
$(document).off({
mousemove: this.mousemove,
mouseup: this.mouseup
});
}
this.inDrag = false;
if (this.over == false) {
this.hideTooltip();
}
var val = this.calculateValue();
this.layout();
this.element
.trigger({
type: 'slideStop',
value: val
})
.data('value', val)
.prop('value', val);
return false;
},
calculateValue: function() {
var val;
if (this.range) {
val = [
(this.min + Math.round((this.diff * this.percentage[0]/100)/this.step)*this.step),
(this.min + Math.round((this.diff * this.percentage[1]/100)/this.step)*this.step)
];
this.value = val;
} else {
val = (this.min + Math.round((this.diff * this.percentage[0]/100)/this.step)*this.step);
this.value = [val, this.value[1]];
}
return val;
},
getPercentage: function(ev) {
if (this.touchCapable) {
ev = ev.touches[0];
}
var percentage = (ev[this.mousePos] - this.offset[this.stylePos])*100/this.size;
percentage = Math.round(percentage/this.percentage[2])*this.percentage[2];
return Math.max(0, Math.min(100, percentage));
},
getValue: function() {
if (this.range) {
return this.value;
}
return this.value[0];
},
setLimit: function(val) {
this.limit = val;
},
setDragLocked: function(val) {
this.dragLocked = val;
},
getDragLocked: function(val) {
return this.dragLocked;
},
setValue: function(val) {
this.value = val;
if (this.range) {
this.value[0] = Math.max(this.min, Math.min(this.max, this.value[0]));
this.value[1] = Math.max(this.min, Math.min(this.max, this.value[1]));
} else {
this.value = [ Math.max(this.min, Math.min(this.max, this.value))];
this.handle2.addClass('hide');
if (this.selection == 'after') {
this.value[1] = this.max;
} else {
this.value[1] = this.min;
}
}
this.diff = this.max - this.min;
this.percentage = [
(this.value[0]-this.min)*100/this.diff,
(this.value[1]-this.min)*100/this.diff,
this.step*100/this.diff
];
this.layout();
},
destroy: function(){
this.element.show().insertBefore(this.picker);
this.picker.remove();
}
};
$.fn.slider = function ( option, val ) {
return this.each(function () {
var $this = $(this),
data = $this.data('slider'),
options = typeof option === 'object' && option;
if (!data) {
$this.data('slider', (data = new Slider(this, $.extend({}, $.fn.slider.defaults,options))));
}
if (typeof option == 'string') {
data[option](val);
}
})
};
$.fn.slider.defaults = {
min: 0,
max: 10,
step: 1,
orientation: 'horizontal',
value: 5,
selection: 'before',
tooltip: 'show',
handle: 'round',
reversed : false,
limit: 100000,
dragLocked: false,
formater: function(value) {
return value;
}
};
$.fn.slider.Constructor = Slider;
}( window.jQuery );
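For reference, the value computation in `calculateValue` above boils down to snapping a percentage of the track to the nearest step. A minimal Python sketch of that arithmetic (the function name is mine, not part of the plugin; note that Python's `round` uses banker's rounding at exact halves, whereas JavaScript's `Math.round` rounds halves up):

```python
def snap_value(minimum, maximum, step, percentage):
    """Map a 0-100 track percentage to a slider value snapped to `step`,
    mirroring bootstrap-slider's calculateValue arithmetic."""
    diff = maximum - minimum
    # Convert the percentage to a raw value, then round to the nearest
    # whole number of steps above the minimum.
    return minimum + round((diff * percentage / 100) / step) * step
```

For example, on a 0-10 slider with step 1, a drag to 37% of the track snaps to 4; on a 0-100 slider with step 5, a drag to 12% snaps to 10.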
|
interactions:
- request:
body: !!python/unicode '{"query": {"tool": "sumsrcip", "type": "event", "startOffset":
0, "filters": [], "endOffset": 5}, "type": "event", "sourceType": "lce"}'
headers:
Accept: ['*/*']
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['135']
Content-Type: [application/json]
Cookie: [TNS_SESSIONID=9e4b4e83ec7251dfcccfd36636d5e788]
User-Agent: [pyTenable/0.3.5 (pyTenable/0.3.5; Python/2.7.14)]
X-SecurityCenter: ['2023559547']
method: POST
uri: https://securitycenter.home.cugnet.net/rest/analysis
response:
body: {string: !!python/unicode '{"type":"regular","response":{"totalRecords":268,"returnedRecords":5,"startOffset":0,"endOffset":5,"matchingDataElementCount":48234,"results":[{"address":"192.168.101.179","count":"21757","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.101.1","count":"6059","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.106.147","count":"5588","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.106.100","count":"5058","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"0.0.0.0","count":"4788","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}}],"startTime":1544993872,"endTime":1545080272},"error_code":0,"error_msg":"","warnings":[],"timestamp":1545080272}
'}
headers:
cache-control: ['no-store, no-cache, must-revalidate']
connection: [Keep-Alive]
content-length: ['818']
content-type: [application/json]
date: ['Mon, 17 Dec 2018 20:57:52 GMT']
expires: ['Thu, 19 Nov 1981 08:52:00 GMT']
keep-alive: ['timeout=15, max=100']
pragma: [no-cache]
securitycenter: [5.8.0]
server: [Apache]
strict-transport-security: [max-age=31536000; includeSubDomains]
x-content-type-options: [nosniff]
x-frame-options: [DENY]
x-xss-protection: [1; mode=block]
status: {code: 200, message: OK}
- request:
body: !!python/unicode '{"query": {"tool": "sumsrcip", "type": "event", "startOffset":
5, "filters": [], "endOffset": 10}, "type": "event", "sourceType": "lce"}'
headers:
Accept: ['*/*']
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['136']
Content-Type: [application/json]
Cookie: [TNS_SESSIONID=9e4b4e83ec7251dfcccfd36636d5e788]
User-Agent: [pyTenable/0.3.5 (pyTenable/0.3.5; Python/2.7.14)]
X-SecurityCenter: ['2023559547']
method: POST
uri: https://securitycenter.home.cugnet.net/rest/analysis
response:
body: {string: !!python/unicode '{"type":"regular","response":{"totalRecords":268,"returnedRecords":5,"startOffset":5,"endOffset":10,"matchingDataElementCount":48234,"results":[{"address":"192.168.104.171","count":"921","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.104.189","count":"855","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.104.188","count":"854","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.101.115","count":"573","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}},{"address":"192.168.101.140","count":"399","lce":{"id":"1","name":"Internal
LCE","description":"","status":"1"}}],"startTime":1544993873,"endTime":1545080273},"error_code":0,"error_msg":"","warnings":[],"timestamp":1545080273}
'}
headers:
cache-control: ['no-store, no-cache, must-revalidate']
connection: [Keep-Alive]
content-length: ['823']
content-type: [application/json]
date: ['Mon, 17 Dec 2018 20:57:53 GMT']
expires: ['Thu, 19 Nov 1981 08:52:00 GMT']
keep-alive: ['timeout=15, max=99']
pragma: [no-cache]
securitycenter: [5.8.0]
server: [Apache]
strict-transport-security: [max-age=31536000; includeSubDomains]
x-content-type-options: [nosniff]
x-frame-options: [DENY]
x-xss-protection: [1; mode=block]
status: {code: 200, message: OK}
version: 1
import errorMessage from './errorMessage';
import bestreviews from './bestreviews';
import lastestvideo from './lastestvideo';
import mostwanted from './mostwanted';
import bestrated from './bestrated';
import listByTag from './listByTag';
import listByStar from './listByStar';
import detail from './detail';
import user from './user';
import me from './me';
import stars from './stars';
import search from './search';
import comments from './comments';
import videoPlayer from './videoPlayer';
export default {
bestreviews,
lastestvideo,
mostwanted,
bestrated,
listByTag,
listByStar,
detail,
user,
me,
stars,
search,
comments,
errorMessage,
videoPlayer,
};
// CodeMirror, copyright (c) by Marijn Haverbeke and others
// Distributed under an MIT license: https://codemirror.net/LICENSE
(function(mod) {
if (typeof exports == "object" && typeof module == "object") { // CommonJS
mod(require("../../lib/codemirror"));
} else if (typeof define == "function" && define.amd) { // AMD
define(["../../lib/codemirror"], mod);
} else { // Plain browser env
mod(CodeMirror);
}
})(function(CodeMirror) {
"use strict";
var TOKEN_STYLES = {
addition: "positive",
attributes: "attribute",
bold: "strong",
cite: "keyword",
code: "atom",
definitionList: "number",
deletion: "negative",
div: "punctuation",
em: "em",
footnote: "variable",
footCite: "qualifier",
header: "header",
html: "comment",
image: "string",
italic: "em",
link: "link",
linkDefinition: "link",
list1: "variable-2",
list2: "variable-3",
list3: "keyword",
notextile: "string-2",
pre: "operator",
p: "property",
quote: "bracket",
span: "quote",
specialChar: "tag",
strong: "strong",
sub: "builtin",
sup: "builtin",
table: "variable-3",
tableHeading: "operator"
};
function startNewLine(stream, state) {
state.mode = Modes.newLayout;
state.tableHeading = false;
if (state.layoutType === "definitionList" && state.spanningLayout &&
stream.match(RE("definitionListEnd"), false))
state.spanningLayout = false;
}
function handlePhraseModifier(stream, state, ch) {
if (ch === "_") {
if (stream.eat("_"))
return togglePhraseModifier(stream, state, "italic", /__/, 2);
else
return togglePhraseModifier(stream, state, "em", /_/, 1);
}
if (ch === "*") {
if (stream.eat("*")) {
return togglePhraseModifier(stream, state, "bold", /\*\*/, 2);
}
return togglePhraseModifier(stream, state, "strong", /\*/, 1);
}
if (ch === "[") {
if (stream.match(/\d+\]/)) state.footCite = true;
return tokenStyles(state);
}
if (ch === "(") {
var spec = stream.match(/^(r|tm|c)\)/);
if (spec)
return tokenStylesWith(state, TOKEN_STYLES.specialChar);
}
if (ch === "<" && stream.match(/(\w+)[^>]+>[^<]+<\/\1>/))
return tokenStylesWith(state, TOKEN_STYLES.html);
if (ch === "?" && stream.eat("?"))
return togglePhraseModifier(stream, state, "cite", /\?\?/, 2);
if (ch === "=" && stream.eat("="))
return togglePhraseModifier(stream, state, "notextile", /==/, 2);
if (ch === "-" && !stream.eat("-"))
return togglePhraseModifier(stream, state, "deletion", /-/, 1);
if (ch === "+")
return togglePhraseModifier(stream, state, "addition", /\+/, 1);
if (ch === "~")
return togglePhraseModifier(stream, state, "sub", /~/, 1);
if (ch === "^")
return togglePhraseModifier(stream, state, "sup", /\^/, 1);
if (ch === "%")
return togglePhraseModifier(stream, state, "span", /%/, 1);
if (ch === "@")
return togglePhraseModifier(stream, state, "code", /@/, 1);
if (ch === "!") {
var type = togglePhraseModifier(stream, state, "image", /(?:\([^\)]+\))?!/, 1);
stream.match(/^:\S+/); // optional Url portion
return type;
}
return tokenStyles(state);
}
function togglePhraseModifier(stream, state, phraseModifier, closeRE, openSize) {
var charBefore = stream.pos > openSize ? stream.string.charAt(stream.pos - openSize - 1) : null;
var charAfter = stream.peek();
if (state[phraseModifier]) {
if ((!charAfter || /\W/.test(charAfter)) && charBefore && /\S/.test(charBefore)) {
var type = tokenStyles(state);
state[phraseModifier] = false;
return type;
}
} else if ((!charBefore || /\W/.test(charBefore)) && charAfter && /\S/.test(charAfter) &&
stream.match(new RegExp("^.*\\S" + closeRE.source + "(?:\\W|$)"), false)) {
state[phraseModifier] = true;
state.mode = Modes.attributes;
}
return tokenStyles(state);
}
function tokenStyles(state) {
var disabled = textileDisabled(state);
if (disabled) return disabled;
var styles = [];
if (state.layoutType) styles.push(TOKEN_STYLES[state.layoutType]);
styles = styles.concat(activeStyles(
state, "addition", "bold", "cite", "code", "deletion", "em", "footCite",
"image", "italic", "link", "span", "strong", "sub", "sup", "table", "tableHeading"));
if (state.layoutType === "header")
styles.push(TOKEN_STYLES.header + "-" + state.header);
return styles.length ? styles.join(" ") : null;
}
function textileDisabled(state) {
var type = state.layoutType;
switch(type) {
case "notextile":
case "code":
case "pre":
return TOKEN_STYLES[type];
default:
if (state.notextile)
return TOKEN_STYLES.notextile + (type ? (" " + TOKEN_STYLES[type]) : "");
return null;
}
}
function tokenStylesWith(state, extraStyles) {
var disabled = textileDisabled(state);
if (disabled) return disabled;
var type = tokenStyles(state);
if (extraStyles)
return type ? (type + " " + extraStyles) : extraStyles;
else
return type;
}
function activeStyles(state) {
var styles = [];
for (var i = 1; i < arguments.length; ++i) {
if (state[arguments[i]])
styles.push(TOKEN_STYLES[arguments[i]]);
}
return styles;
}
function blankLine(state) {
var spanningLayout = state.spanningLayout, type = state.layoutType;
for (var key in state) if (state.hasOwnProperty(key))
delete state[key];
state.mode = Modes.newLayout;
if (spanningLayout) {
state.layoutType = type;
state.spanningLayout = true;
}
}
var REs = {
cache: {},
single: {
bc: "bc",
bq: "bq",
definitionList: /- .*?:=+/,
definitionListEnd: /.*=:\s*$/,
div: "div",
drawTable: /\|.*\|/,
foot: /fn\d+/,
header: /h[1-6]/,
html: /\s*<(?:\/)?(\w+)(?:[^>]+)?>(?:[^<]+<\/\1>)?/,
link: /[^"]+":\S/,
linkDefinition: /\[[^\s\]]+\]\S+/,
list: /(?:#+|\*+)/,
notextile: "notextile",
para: "p",
pre: "pre",
table: "table",
tableCellAttributes: /[\/\\]\d+/,
tableHeading: /\|_\./,
tableText: /[^"_\*\[\(\?\+~\^%@|-]+/,
text: /[^!"_=\*\[\(<\?\+~\^%@-]+/
},
attributes: {
align: /(?:<>|<|>|=)/,
selector: /\([^\(][^\)]+\)/,
lang: /\[[^\[\]]+\]/,
pad: /(?:\(+|\)+){1,2}/,
css: /\{[^\}]+\}/
},
createRe: function(name) {
switch (name) {
case "drawTable":
return REs.makeRe("^", REs.single.drawTable, "$");
case "html":
return REs.makeRe("^", REs.single.html, "(?:", REs.single.html, ")*", "$");
case "linkDefinition":
return REs.makeRe("^", REs.single.linkDefinition, "$");
case "listLayout":
return REs.makeRe("^", REs.single.list, RE("allAttributes"), "*\\s+");
case "tableCellAttributes":
return REs.makeRe("^", REs.choiceRe(REs.single.tableCellAttributes,
RE("allAttributes")), "+\\.");
case "type":
return REs.makeRe("^", RE("allTypes"));
case "typeLayout":
return REs.makeRe("^", RE("allTypes"), RE("allAttributes"),
"*\\.\\.?", "(\\s+|$)");
case "attributes":
return REs.makeRe("^", RE("allAttributes"), "+");
case "allTypes":
return REs.choiceRe(REs.single.div, REs.single.foot,
REs.single.header, REs.single.bc, REs.single.bq,
REs.single.notextile, REs.single.pre, REs.single.table,
REs.single.para);
case "allAttributes":
return REs.choiceRe(REs.attributes.selector, REs.attributes.css,
REs.attributes.lang, REs.attributes.align, REs.attributes.pad);
default:
return REs.makeRe("^", REs.single[name]);
}
},
makeRe: function() {
var pattern = "";
for (var i = 0; i < arguments.length; ++i) {
var arg = arguments[i];
pattern += (typeof arg === "string") ? arg : arg.source;
}
return new RegExp(pattern);
},
choiceRe: function() {
var parts = [arguments[0]];
for (var i = 1; i < arguments.length; ++i) {
parts[i * 2 - 1] = "|";
parts[i * 2] = arguments[i];
}
parts.unshift("(?:");
parts.push(")");
return REs.makeRe.apply(null, parts);
}
};
function RE(name) {
return (REs.cache[name] || (REs.cache[name] = REs.createRe(name)));
}
var Modes = {
newLayout: function(stream, state) {
if (stream.match(RE("typeLayout"), false)) {
state.spanningLayout = false;
return (state.mode = Modes.blockType)(stream, state);
}
var newMode;
if (!textileDisabled(state)) {
if (stream.match(RE("listLayout"), false))
newMode = Modes.list;
else if (stream.match(RE("drawTable"), false))
newMode = Modes.table;
else if (stream.match(RE("linkDefinition"), false))
newMode = Modes.linkDefinition;
else if (stream.match(RE("definitionList")))
newMode = Modes.definitionList;
else if (stream.match(RE("html"), false))
newMode = Modes.html;
}
return (state.mode = (newMode || Modes.text))(stream, state);
},
blockType: function(stream, state) {
var match, type;
state.layoutType = null;
if (match = stream.match(RE("type")))
type = match[0];
else
return (state.mode = Modes.text)(stream, state);
if (match = type.match(RE("header"))) {
state.layoutType = "header";
state.header = parseInt(match[0][1]);
} else if (type.match(RE("bq"))) {
state.layoutType = "quote";
} else if (type.match(RE("bc"))) {
state.layoutType = "code";
} else if (type.match(RE("foot"))) {
state.layoutType = "footnote";
} else if (type.match(RE("notextile"))) {
state.layoutType = "notextile";
} else if (type.match(RE("pre"))) {
state.layoutType = "pre";
} else if (type.match(RE("div"))) {
state.layoutType = "div";
} else if (type.match(RE("table"))) {
state.layoutType = "table";
}
state.mode = Modes.attributes;
return tokenStyles(state);
},
text: function(stream, state) {
if (stream.match(RE("text"))) return tokenStyles(state);
var ch = stream.next();
if (ch === '"')
return (state.mode = Modes.link)(stream, state);
return handlePhraseModifier(stream, state, ch);
},
attributes: function(stream, state) {
state.mode = Modes.layoutLength;
if (stream.match(RE("attributes")))
return tokenStylesWith(state, TOKEN_STYLES.attributes);
else
return tokenStyles(state);
},
layoutLength: function(stream, state) {
if (stream.eat(".") && stream.eat("."))
state.spanningLayout = true;
state.mode = Modes.text;
return tokenStyles(state);
},
list: function(stream, state) {
var match = stream.match(RE("list"));
state.listDepth = match[0].length;
var listMod = (state.listDepth - 1) % 3;
if (!listMod)
state.layoutType = "list1";
else if (listMod === 1)
state.layoutType = "list2";
else
state.layoutType = "list3";
state.mode = Modes.attributes;
return tokenStyles(state);
},
link: function(stream, state) {
state.mode = Modes.text;
if (stream.match(RE("link"))) {
stream.match(/\S+/);
return tokenStylesWith(state, TOKEN_STYLES.link);
}
return tokenStyles(state);
},
linkDefinition: function(stream, state) {
stream.skipToEnd();
return tokenStylesWith(state, TOKEN_STYLES.linkDefinition);
},
definitionList: function(stream, state) {
stream.match(RE("definitionList"));
state.layoutType = "definitionList";
if (stream.match(/\s*$/))
state.spanningLayout = true;
else
state.mode = Modes.attributes;
return tokenStyles(state);
},
html: function(stream, state) {
stream.skipToEnd();
return tokenStylesWith(state, TOKEN_STYLES.html);
},
table: function(stream, state) {
state.layoutType = "table";
return (state.mode = Modes.tableCell)(stream, state);
},
tableCell: function(stream, state) {
if (stream.match(RE("tableHeading")))
state.tableHeading = true;
else
stream.eat("|");
state.mode = Modes.tableCellAttributes;
return tokenStyles(state);
},
tableCellAttributes: function(stream, state) {
state.mode = Modes.tableText;
if (stream.match(RE("tableCellAttributes")))
return tokenStylesWith(state, TOKEN_STYLES.attributes);
else
return tokenStyles(state);
},
tableText: function(stream, state) {
if (stream.match(RE("tableText")))
return tokenStyles(state);
if (stream.peek() === "|") { // end of cell
state.mode = Modes.tableCell;
return tokenStyles(state);
}
return handlePhraseModifier(stream, state, stream.next());
}
};
CodeMirror.defineMode("textile", function() {
return {
startState: function() {
return { mode: Modes.newLayout };
},
token: function(stream, state) {
if (stream.sol()) startNewLine(stream, state);
return state.mode(stream, state);
},
blankLine: blankLine
};
});
CodeMirror.defineMIME("text/x-textile", "textile");
});
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PayloadIdentifier</key>
<string>com.github.gregneagle.suppress_icloud_asst</string>
<key>PayloadRemovalDisallowed</key>
<false/>
<key>PayloadScope</key>
<string>System</string>
<key>PayloadType</key>
<string>Configuration</string>
<key>PayloadUUID</key>
<string>c8d04bb6-91ca-4bc9-a5d7-c636fe132846</string>
<key>PayloadOrganization</key>
<string></string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadDescription</key>
<string>Disables iCloud Setup Assistant</string>
<key>PayloadDisplayName</key>
<string>Suppress iCloud Setup Assistant -- El Capitan</string>
<key>PayloadContent</key>
<array>
<dict>
<key>PayloadType</key>
<string>com.apple.SetupAssistant.managed</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadIdentifier</key>
<string>com.github.gregneagle.suppress_icloud_asst.SetupAssistant.managed</string>
<key>PayloadEnabled</key>
<true/>
<key>PayloadUUID</key>
<string>fb3fa053-eb04-623f-6cf1-05a9cbe0b3ff</string>
<key>PayloadDisplayName</key>
<string>Setup Assistant configuration</string>
<key>SkipCloudSetup</key>
<true/>
</dict>
</array>
</dict>
</plist>
OptimalStart
Validation
BaseClasses
/**
* A specialized version of `_.lastIndexOf` which performs strict equality
* comparisons of values, i.e. `===`.
*
* @private
* @param {Array} array The array to inspect.
* @param {*} value The value to search for.
* @param {number} fromIndex The index to search from.
* @returns {number} Returns the index of the matched value, else `-1`.
*/
function strictLastIndexOf(array, value, fromIndex) {
var index = fromIndex + 1;
while (index--) {
if (array[index] === value) {
return index;
}
}
return index;
}
module.exports = strictLastIndexOf;
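A quick usage sketch of `strictLastIndexOf` (the helper is repeated so the snippet is self-contained; the inputs are illustrative):

```javascript
function strictLastIndexOf(array, value, fromIndex) {
  var index = fromIndex + 1;
  while (index--) {
    if (array[index] === value) {
      return index;
    }
  }
  return index;
}

// Searches backwards from fromIndex using strict (===) comparison,
// so no type coercion occurs and NaN never matches.
strictLastIndexOf([1, 2, 3, 2], 2, 3); // 3
strictLastIndexOf([1, 2, 3, 2], 2, 2); // 1
strictLastIndexOf(['1', 2], '2', 1);   // -1 ('2' !== 2)
```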
/******************************************************************************
Copyright (c) 2007-2011, Intel Corp.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
#include "bid_internal.h"
#define SIZE_MASK 0xffff8000
#define INVALID_RESULT 0x8000
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_rnint, BID_UINT128, x,
bid128_to_int32_rnint, int, SIZE_MASK,
INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_xrnint, BID_UINT128,
x, bid128_to_int32_xrnint, int,
SIZE_MASK, INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_rninta, BID_UINT128,
x, bid128_to_int32_rninta, int,
SIZE_MASK, INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_xrninta, BID_UINT128,
x, bid128_to_int32_xrninta, int,
SIZE_MASK, INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_int, BID_UINT128, x,
bid128_to_int32_int, int, SIZE_MASK,
INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_xint, BID_UINT128, x,
bid128_to_int32_xint, int, SIZE_MASK,
INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_floor, BID_UINT128, x,
bid128_to_int32_floor, int, SIZE_MASK,
INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_ceil, BID_UINT128, x,
bid128_to_int32_ceil, int, SIZE_MASK,
INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_xfloor, BID_UINT128,
x, bid128_to_int32_xfloor, int,
SIZE_MASK, INVALID_RESULT)
BID_TO_SMALL_INT_CVT_FUNCTION (short, bid128_to_int16_xceil, BID_UINT128, x,
bid128_to_int32_xceil, int, SIZE_MASK,
INVALID_RESULT)
/*
Copyright 2019 The Tekton Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package pod
import (
"context"
"fmt"
"github.com/tektoncd/pipeline/pkg/apis/config"
"github.com/tektoncd/pipeline/pkg/apis/pipeline"
"github.com/tektoncd/pipeline/pkg/credentials"
"github.com/tektoncd/pipeline/pkg/credentials/dockercreds"
"github.com/tektoncd/pipeline/pkg/credentials/gitcreds"
"github.com/tektoncd/pipeline/pkg/names"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
)
const (
credsInitHomeMountPrefix = "tekton-creds-init-home"
sshKnownHosts = "known_hosts"
)
// credsInit reads secrets available to the given service account and
// searches for annotations matching a specific format (documented in
// docs/auth.md). Matching secrets are turned into Volumes for the Pod
// and VolumeMounts to be given to each Step. Additionally, a list of
// entrypointer arguments is returned, each with a meaning specific to
// the credential type it describes: git credentials expect one set of
// args while docker credentials expect another.
//
// Any errors encountered during this process are returned to the
// caller. If no matching annotated secrets are found, nil lists with a
// nil error are returned.
func credsInit(ctx context.Context, serviceAccountName, namespace string, kubeclient kubernetes.Interface) ([]string, []corev1.Volume, []corev1.VolumeMount, error) {
// If the service account is not specified in the pipeline/task spec, read it from the
// ConfigMap, defaulting to `default` if it is missing from the ConfigMap as well.
if serviceAccountName == "" {
serviceAccountName = config.DefaultServiceAccountValue
}
sa, err := kubeclient.CoreV1().ServiceAccounts(namespace).Get(serviceAccountName, metav1.GetOptions{})
if err != nil {
return nil, nil, nil, err
}
builders := []credentials.Builder{dockercreds.NewBuilder(), gitcreds.NewBuilder()}
var volumeMounts []corev1.VolumeMount
var volumes []corev1.Volume
args := []string{}
for _, secretEntry := range sa.Secrets {
secret, err := kubeclient.CoreV1().Secrets(namespace).Get(secretEntry.Name, metav1.GetOptions{})
if err != nil {
return nil, nil, nil, err
}
if err := checkGitSSHSecret(ctx, secret); err != nil {
return nil, nil, nil, err
}
matched := false
for _, b := range builders {
if sa := b.MatchingAnnotations(secret); len(sa) > 0 {
matched = true
args = append(args, sa...)
}
}
if matched {
name := names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(fmt.Sprintf("tekton-internal-secret-volume-%s", secret.Name))
volumeMounts = append(volumeMounts, corev1.VolumeMount{
Name: name,
MountPath: credentials.VolumeName(secret.Name),
})
volumes = append(volumes, corev1.Volume{
Name: name,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secret.Name,
},
},
})
}
}
if len(args) == 0 {
// There are no creds to initialize.
return nil, nil, nil, nil
}
return args, volumes, volumeMounts, nil
}
// getCredsInitVolume returns a Volume and VolumeMount for /tekton/creds. Each call
// will return a new volume and volume mount with randomized name.
func getCredsInitVolume() (corev1.Volume, corev1.VolumeMount) {
name := names.SimpleNameGenerator.RestrictLengthWithRandomSuffix(credsInitHomeMountPrefix)
v := corev1.Volume{
Name: name,
VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
}},
}
vm := corev1.VolumeMount{
Name: name,
MountPath: pipeline.CredsDir,
}
return v, vm
}
// checkGitSSHSecret requires the `known_hosts` field to be included in a Git SSH Secret
// when the feature flag `require-git-ssh-secret-known-hosts` is true.
func checkGitSSHSecret(ctx context.Context, secret *corev1.Secret) error {
cfg := config.FromContextOrDefaults(ctx)
if secret.Type == corev1.SecretTypeSSHAuth && cfg.FeatureFlags.RequireGitSSHSecretKnownHosts {
if _, ok := secret.Data[sshKnownHosts]; !ok {
return fmt.Errorf("TaskRun validation failed. Git SSH Secret must have \"known_hosts\" included " +
"when feature flag \"require-git-ssh-secret-known-hosts\" is set to true")
}
}
return nil
}
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdarg.h>
#include "tron.h"
#ifndef min
template <class T> static inline T min(T x,T y) { return (x<y)?x:y; }
#endif
#ifndef max
template <class T> static inline T max(T x,T y) { return (x>y)?x:y; }
#endif
#ifdef __cplusplus
extern "C" {
#endif
extern double dnrm2_(int *, double *, int *);
extern double ddot_(int *, double *, int *, double *, int *);
extern int daxpy_(int *, double *, double *, int *, double *, int *);
extern int dscal_(int *, double *, double *, int *);
#ifdef __cplusplus
}
#endif
static void default_print(const char *buf)
{
fputs(buf,stdout);
fflush(stdout);
}
void TRON::info(const char *fmt,...)
{
char buf[BUFSIZ];
va_list ap;
va_start(ap,fmt);
vsnprintf(buf,BUFSIZ,fmt,ap);
va_end(ap);
(*tron_print_string)(buf);
}
TRON::TRON(const function *fun_obj, double eps, int max_iter)
{
this->fun_obj=const_cast<function *>(fun_obj);
this->eps=eps;
this->max_iter=max_iter;
tron_print_string = default_print;
}
TRON::~TRON()
{
}
void TRON::tron(double *w)
{
// Parameters for updating the iterates.
double eta0 = 1e-4, eta1 = 0.25, eta2 = 0.75;
// Parameters for updating the trust region size delta.
double sigma1 = 0.25, sigma2 = 0.5, sigma3 = 4;
int n = fun_obj->get_nr_variable();
int i, cg_iter;
double delta, snorm, one=1.0;
double alpha, f, fnew, prered, actred, gs;
int search = 1, iter = 1, inc = 1;
double *s = new double[n];
double *r = new double[n];
double *w_new = new double[n];
double *g = new double[n];
for (i=0; i<n; i++)
w[i] = 0;
f = fun_obj->fun(w);
fun_obj->grad(w, g);
delta = dnrm2_(&n, g, &inc);
double gnorm1 = delta;
double gnorm = gnorm1;
if (gnorm <= eps*gnorm1)
search = 0;
iter = 1;
while (iter <= max_iter && search)
{
cg_iter = trcg(delta, g, s, r);
memcpy(w_new, w, sizeof(double)*n);
daxpy_(&n, &one, s, &inc, w_new, &inc);
gs = ddot_(&n, g, &inc, s, &inc);
prered = -0.5*(gs-ddot_(&n, s, &inc, r, &inc));
fnew = fun_obj->fun(w_new);
// Compute the actual reduction.
actred = f - fnew;
// On the first iteration, adjust the initial step bound.
snorm = dnrm2_(&n, s, &inc);
if (iter == 1)
delta = min(delta, snorm);
// Compute prediction alpha*snorm of the step.
if (fnew - f - gs <= 0)
alpha = sigma3;
else
alpha = max(sigma1, -0.5*(gs/(fnew - f - gs)));
// Update the trust region bound according to the ratio of actual to predicted reduction.
if (actred < eta0*prered)
delta = min(max(alpha, sigma1)*snorm, sigma2*delta);
else if (actred < eta1*prered)
delta = max(sigma1*delta, min(alpha*snorm, sigma2*delta));
else if (actred < eta2*prered)
delta = max(sigma1*delta, min(alpha*snorm, sigma3*delta));
else
delta = max(delta, min(alpha*snorm, sigma3*delta));
info("iter %2d act %5.3e pre %5.3e delta %5.3e f %5.3e |g| %5.3e CG %3d\n", iter, actred, prered, delta, f, gnorm, cg_iter);
if (actred > eta0*prered)
{
iter++;
memcpy(w, w_new, sizeof(double)*n);
f = fnew;
fun_obj->grad(w, g);
gnorm = dnrm2_(&n, g, &inc);
if (gnorm <= eps*gnorm1)
break;
}
if (f < -1.0e+32)
{
info("WARNING: f < -1.0e+32\n");
break;
}
if (fabs(actred) <= 0 && prered <= 0)
{
info("WARNING: actred and prered <= 0\n");
break;
}
if (fabs(actred) <= 1.0e-12*fabs(f) &&
fabs(prered) <= 1.0e-12*fabs(f))
{
info("WARNING: actred and prered too small\n");
break;
}
}
delete[] g;
delete[] r;
delete[] w_new;
delete[] s;
}
int TRON::trcg(double delta, double *g, double *s, double *r)
{
int i, inc = 1;
int n = fun_obj->get_nr_variable();
double one = 1;
double *d = new double[n];
double *Hd = new double[n];
double rTr, rnewTrnew, alpha, beta, cgtol;
for (i=0; i<n; i++)
{
s[i] = 0;
r[i] = -g[i];
d[i] = r[i];
}
cgtol = 0.1*dnrm2_(&n, g, &inc);
int cg_iter = 0;
rTr = ddot_(&n, r, &inc, r, &inc);
while (1)
{
if (dnrm2_(&n, r, &inc) <= cgtol)
break;
cg_iter++;
fun_obj->Hv(d, Hd);
alpha = rTr/ddot_(&n, d, &inc, Hd, &inc);
daxpy_(&n, &alpha, d, &inc, s, &inc);
if (dnrm2_(&n, s, &inc) > delta)
{
info("cg reaches trust region boundary\n");
alpha = -alpha;
daxpy_(&n, &alpha, d, &inc, s, &inc);
double std = ddot_(&n, s, &inc, d, &inc);
double sts = ddot_(&n, s, &inc, s, &inc);
double dtd = ddot_(&n, d, &inc, d, &inc);
double dsq = delta*delta;
double rad = sqrt(std*std + dtd*(dsq-sts));
if (std >= 0)
alpha = (dsq - sts)/(std + rad);
else
alpha = (rad - std)/dtd;
daxpy_(&n, &alpha, d, &inc, s, &inc);
alpha = -alpha;
daxpy_(&n, &alpha, Hd, &inc, r, &inc);
break;
}
alpha = -alpha;
daxpy_(&n, &alpha, Hd, &inc, r, &inc);
rnewTrnew = ddot_(&n, r, &inc, r, &inc);
beta = rnewTrnew/rTr;
dscal_(&n, &beta, d, &inc);
daxpy_(&n, &one, r, &inc, d, &inc);
rTr = rnewTrnew;
}
delete[] d;
delete[] Hd;
return(cg_iter);
}
double TRON::norm_inf(int n, double *x)
{
double dmax = fabs(x[0]);
for (int i=1; i<n; i++)
if (fabs(x[i]) >= dmax)
dmax = fabs(x[i]);
return(dmax);
}
void TRON::set_print_string(void (*print_string) (const char *buf))
{
tron_print_string = print_string;
}
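The trust-region radius update inside `TRON::tron` above can be isolated as a pure function (JavaScript used here purely for illustration; the function name is hypothetical, while the constants and branch structure are copied from the C++ code):

```javascript
// Update the trust-region radius delta from the ratio of actual (actred)
// to predicted (prered) reduction, mirroring the branch in TRON::tron:
// a poor step shrinks delta sharply, a good step keeps or grows it.
function updateDelta(delta, actred, prered, alpha, snorm) {
  var eta0 = 1e-4, eta1 = 0.25, eta2 = 0.75;   // acceptance thresholds
  var sigma1 = 0.25, sigma2 = 0.5, sigma3 = 4; // shrink/grow factors
  if (actred < eta0 * prered)
    return Math.min(Math.max(alpha, sigma1) * snorm, sigma2 * delta);
  if (actred < eta1 * prered)
    return Math.max(sigma1 * delta, Math.min(alpha * snorm, sigma2 * delta));
  if (actred < eta2 * prered)
    return Math.max(sigma1 * delta, Math.min(alpha * snorm, sigma3 * delta));
  return Math.max(delta, Math.min(alpha * snorm, sigma3 * delta));
}
```

Note that only when `actred > eta0 * prered` does the C++ code actually accept the step (`w = w_new`); the radius update above happens on every iteration regardless.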
<div id="sidebar" class="sidebar responsive sidebar-fixed" style="margin-top: -25px; width: 200px">
<ul class="nav nav-list">
<li ng-repeat="migrationCluster in eventDetails">
<a ui-sref=".details({migrationCluster: migrationCluster})" class="dropdown-toggle">
<i class="menu-icon fa fa-caret-right"></i> {{ migrationCluster.migrationCluster.clusterName }}
<div>
<span class="label label-sm" ng-class="{
'label-default':migrationCluster.migrationCluster.statusType == 'Init',
'label-info':migrationCluster.migrationCluster.statusType == 'Processing',
'label-success':migrationCluster.migrationCluster.statusType == 'Success',
'label-danger':migrationCluster.migrationCluster.statusType == 'Fail'
}">{{migrationCluster.migrationCluster.status}}</span>
</div>
</a>
</li>
</ul>
</div>
<div ui-view style="margin-left:200px"></div>
<a href="#" id="btn-scroll-up"
class="btn-scroll-up btn btn-sm btn-inverse"> <i
class="icon-double-angle-up icon-only bigger-110"></i>
</a>
# Copyright 2019 The Kubeflow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install --no-install-recommends -y -q ca-certificates python-dev python-setuptools wget && \
easy_install pip && \
pip install pyyaml==3.12 kubernetes
ADD build /ml
ENTRYPOINT ["python", "/ml/launch_tfjob.py"]
// Copyright 2015-2018 Benjamin Fry <benjaminfry@me.com>
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//! All authority related types
use std::ops::{Deref, DerefMut};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::Arc;
use futures::future::Future;
use log::{error, info, warn};
use crate::client::op::LowerQuery;
use crate::client::rr::dnssec::{DnsSecResult, Signer, SupportedAlgorithms};
use crate::client::rr::{LowerName, RrKey};
use crate::proto::op::ResponseCode;
use crate::proto::rr::dnssec::rdata::key::KEY;
use crate::proto::rr::{DNSClass, Name, RData, Record, RecordSet, RecordType};
#[cfg(feature = "dnssec")]
use crate::authority::UpdateRequest;
use crate::authority::{Authority, LookupError, MessageRequest, UpdateResult, ZoneType};
use crate::error::{PersistenceErrorKind, PersistenceResult};
use crate::store::in_memory::InMemoryAuthority;
use crate::store::sqlite::{Journal, SqliteConfig};
/// SqliteAuthority is responsible for storing the resource records for a particular zone.
///
/// Authorities default to DNSClass IN. The ZoneType specifies if this should be treated as the
/// start of authority for the zone, is a Secondary, or a cached zone.
pub struct SqliteAuthority {
in_memory: InMemoryAuthority,
journal: Option<Journal>,
allow_update: bool,
is_dnssec_enabled: bool,
}
impl SqliteAuthority {
/// Creates a new Authority.
///
/// # Arguments
///
/// * `in_memory` - InMemoryAuthority for all records.
/// * `allow_update` - If true, then this zone accepts dynamic updates.
/// * `is_dnssec_enabled` - If true, then the zone will sign the zone with all registered keys,
/// (see `add_zone_signing_key()`)
///
/// # Return value
///
/// The new `Authority`.
pub fn new(in_memory: InMemoryAuthority, allow_update: bool, is_dnssec_enabled: bool) -> Self {
Self {
in_memory,
journal: None,
allow_update,
is_dnssec_enabled,
}
}
/// load the authority from the configuration
pub fn try_from_config(
origin: Name,
zone_type: ZoneType,
allow_axfr: bool,
enable_dnssec: bool,
root_dir: Option<&Path>,
config: &SqliteConfig,
) -> Result<Self, String> {
use crate::store::file::{FileAuthority, FileConfig};
let zone_name: Name = origin;
let root_zone_dir = root_dir.map(PathBuf::from).unwrap_or_else(PathBuf::new);
// to be compatible with previous versions, the extension might be zone, not jrnl
let journal_path: PathBuf = root_zone_dir.join(&config.journal_file_path);
let zone_path: PathBuf = root_zone_dir.join(&config.zone_file_path);
// load the zone
if journal_path.exists() {
info!("recovering zone from journal: {:?}", journal_path);
let journal = Journal::from_file(&journal_path)
.map_err(|e| format!("error opening journal: {:?}: {}", journal_path, e))?;
let in_memory = InMemoryAuthority::empty(zone_name.clone(), zone_type, allow_axfr);
let mut authority = SqliteAuthority::new(in_memory, config.allow_update, enable_dnssec);
authority
.recover_with_journal(&journal)
.map_err(|e| format!("error recovering from journal: {}", e))?;
authority.set_journal(journal);
info!("recovered zone: {}", zone_name);
Ok(authority)
} else if zone_path.exists() {
// TODO: deprecate this portion of loading, instantiate the journal through a separate tool
info!("loading zone file: {:?}", zone_path);
let file_config = FileConfig {
zone_file_path: config.zone_file_path.clone(),
};
let in_memory = FileAuthority::try_from_config(
zone_name.clone(),
zone_type,
allow_axfr,
root_dir,
&file_config,
)?
.unwrap();
let mut authority = SqliteAuthority::new(in_memory, config.allow_update, enable_dnssec);
// if dynamic update is enabled, enable the journal
info!("creating new journal: {:?}", journal_path);
let journal = Journal::from_file(&journal_path)
.map_err(|e| format!("error creating journal {:?}: {}", journal_path, e))?;
authority.set_journal(journal);
// preserve to the new journal, i.e. we just loaded the zone from disk, start the journal
authority
.persist_to_journal()
.map_err(|e| format!("error persisting to journal {:?}: {}", journal_path, e))?;
info!("zone file loaded: {}", zone_name);
Ok(authority)
} else {
Err(format!(
"no zone file or journal defined at: {:?}",
zone_path
))
}
}
/// Recovers the zone from a Journal, returns an error on failure to recover the zone.
///
/// # Arguments
///
/// * `journal` - the journal from which to load the persisted zone.
pub fn recover_with_journal(&mut self, journal: &Journal) -> PersistenceResult<()> {
assert!(
self.in_memory.records().is_empty(),
"records should be empty during a recovery"
);
info!("recovering from journal");
for record in journal.iter() {
// AXFR is special, it is used to mark the dump of a full zone.
// when recovering, if an AXFR is encountered, we should remove all the records in the
// authority.
if record.rr_type() == RecordType::AXFR {
self.in_memory.clear();
} else if let Err(error) = self.update_records(&[record], false) {
return Err(PersistenceErrorKind::Recovery(error.to_str()).into());
}
}
Ok(())
}
/// Persist the state of the current zone to the journal, does nothing if there is no associated
/// Journal.
///
/// Returns an error if there was an issue writing to the persistence layer.
pub fn persist_to_journal(&self) -> PersistenceResult<()> {
if let Some(journal) = self.journal.as_ref() {
let serial = self.serial();
info!("persisting zone to journal at SOA.serial: {}", serial);
// TODO: THIS NEEDS TO BE IN A TRANSACTION!!!
journal.insert_record(serial, Record::new().set_rr_type(RecordType::AXFR))?;
for rr_set in self.in_memory.records().values() {
// TODO: should we preserve rr_sets or not?
for record in rr_set.records_without_rrsigs() {
journal.insert_record(serial, record)?;
}
}
// TODO: COMMIT THE TRANSACTION!!!
}
Ok(())
}
/// Associate a backing Journal with this Authority for Updatable zones
pub fn set_journal(&mut self, journal: Journal) {
self.journal = Some(journal);
}
/// Returns the associated Journal
pub fn journal(&self) -> Option<&Journal> {
self.journal.as_ref()
}
/// Enables the zone for dynamic DNS updates
pub fn set_allow_update(&mut self, allow_update: bool) {
self.allow_update = allow_update;
}
/// [RFC 2136](https://tools.ietf.org/html/rfc2136), DNS Update, April 1997
///
/// ```text
///
/// 3.2 - Process Prerequisite Section
///
/// Next, the Prerequisite Section is checked to see that all
/// prerequisites are satisfied by the current state of the zone. Using
/// the definitions expressed in Section 1.2, if any RR's NAME is not
/// within the zone specified in the Zone Section, signal NOTZONE to the
/// requestor.
///
/// 3.2.1. For RRs in this section whose CLASS is ANY, test to see that
/// TTL and RDLENGTH are both zero (0), else signal FORMERR to the
/// requestor. If TYPE is ANY, test to see that there is at least one RR
/// in the zone whose NAME is the same as that of the Prerequisite RR,
/// else signal NXDOMAIN to the requestor. If TYPE is not ANY, test to
/// see that there is at least one RR in the zone whose NAME and TYPE are
/// the same as that of the Prerequisite RR, else signal NXRRSET to the
/// requestor.
///
/// 3.2.2. For RRs in this section whose CLASS is NONE, test to see that
/// the TTL and RDLENGTH are both zero (0), else signal FORMERR to the
/// requestor. If the TYPE is ANY, test to see that there are no RRs in
/// the zone whose NAME is the same as that of the Prerequisite RR, else
/// signal YXDOMAIN to the requestor. If the TYPE is not ANY, test to
/// see that there are no RRs in the zone whose NAME and TYPE are the
/// same as that of the Prerequisite RR, else signal YXRRSET to the
/// requestor.
///
/// 3.2.3. For RRs in this section whose CLASS is the same as the ZCLASS,
/// test to see that the TTL is zero (0), else signal FORMERR to the
/// requestor. Then, build an RRset for each unique <NAME,TYPE> and
/// compare each resulting RRset for set equality (same members, no more,
/// no less) with RRsets in the zone. If any Prerequisite RRset is not
/// entirely and exactly matched by a zone RRset, signal NXRRSET to the
/// requestor. If any RR in this section has a CLASS other than ZCLASS
/// or NONE or ANY, signal FORMERR to the requestor.
///
/// 3.2.4 - Table Of Metavalues Used In Prerequisite Section
///
/// CLASS TYPE RDATA Meaning
/// ------------------------------------------------------------
/// ANY ANY empty Name is in use
/// ANY rrset empty RRset exists (value independent)
/// NONE ANY empty Name is not in use
/// NONE rrset empty RRset does not exist
/// zone rrset rr RRset exists (value dependent)
/// ```
pub fn verify_prerequisites(&self, pre_requisites: &[Record]) -> UpdateResult<()> {
use futures::executor::block_on;
// 3.2.5 - Pseudocode for Prerequisite Section Processing
//
// for rr in prerequisites
// if (rr.ttl != 0)
// return (FORMERR)
// if (zone_of(rr.name) != ZNAME)
// return (NOTZONE);
// if (rr.class == ANY)
// if (rr.rdlength != 0)
// return (FORMERR)
// if (rr.type == ANY)
// if (!zone_name<rr.name>)
// return (NXDOMAIN)
// else
// if (!zone_rrset<rr.name, rr.type>)
// return (NXRRSET)
// if (rr.class == NONE)
// if (rr.rdlength != 0)
// return (FORMERR)
// if (rr.type == ANY)
// if (zone_name<rr.name>)
// return (YXDOMAIN)
// else
// if (zone_rrset<rr.name, rr.type>)
// return (YXRRSET)
// if (rr.class == zclass)
// temp<rr.name, rr.type> += rr
// else
// return (FORMERR)
//
// for rrset in temp
// if (zone_rrset<rrset.name, rrset.type> != rrset)
// return (NXRRSET)
for require in pre_requisites {
let required_name = LowerName::from(require.name());
if require.ttl() != 0 {
warn!("ttl must be 0 for: {:?}", require);
return Err(ResponseCode::FormErr);
}
if !self.origin().zone_of(&require.name().into()) {
warn!("{} is not a zone_of {}", require.name(), self.origin());
return Err(ResponseCode::NotZone);
}
match require.dns_class() {
DNSClass::ANY => {
if let RData::NULL(..) = *require.rdata() {
match require.rr_type() {
// ANY ANY empty Name is in use
RecordType::ANY => {
/*TODO: this works because the future here is always complete*/
if block_on(self.lookup(
&required_name,
RecordType::ANY,
false,
SupportedAlgorithms::new(),
))
.unwrap_or_default()
.was_empty()
{
return Err(ResponseCode::NXDomain);
} else {
continue;
}
}
// ANY rrset empty RRset exists (value independent)
rrset => {
/*TODO: this works because the future here is always complete*/
if block_on(self.lookup(
&required_name,
rrset,
false,
SupportedAlgorithms::new(),
))
.unwrap_or_default()
.was_empty()
{
return Err(ResponseCode::NXRRSet);
} else {
continue;
}
}
}
} else {
return Err(ResponseCode::FormErr);
}
}
DNSClass::NONE => {
if let RData::NULL(..) = *require.rdata() {
match require.rr_type() {
// NONE ANY empty Name is not in use
RecordType::ANY => {
/*TODO: this works because the future here is always complete*/
if !block_on(self.lookup(
&required_name,
RecordType::ANY,
false,
SupportedAlgorithms::new(),
))
.unwrap_or_default()
.was_empty()
{
return Err(ResponseCode::YXDomain);
} else {
continue;
}
}
// NONE rrset empty RRset does not exist
rrset => {
/*TODO: this works because the future here is always complete*/
if !block_on(self.lookup(
&required_name,
rrset,
false,
SupportedAlgorithms::new(),
))
.unwrap_or_default()
.was_empty()
{
return Err(ResponseCode::YXRRSet);
} else {
continue;
}
}
}
} else {
return Err(ResponseCode::FormErr);
}
}
class if class == self.class() =>
// zone rrset rr RRset exists (value dependent)
{
/*TODO: this works because the future here is always complete*/
if block_on(self.lookup(
&required_name,
require.rr_type(),
false,
SupportedAlgorithms::new(),
))
.unwrap_or_default()
.iter()
.find(|rr| *rr == require)
.is_none()
{
return Err(ResponseCode::NXRRSet);
} else {
continue;
}
}
_ => return Err(ResponseCode::FormErr),
}
}
// if we didn't bail everything checked out...
Ok(())
}
/// [RFC 2136](https://tools.ietf.org/html/rfc2136), DNS Update, April 1997
///
/// ```text
///
/// 3.3 - Check Requestor's Permissions
///
/// 3.3.1. Next, the requestor's permission to update the RRs named in
/// the Update Section may be tested in an implementation dependent
/// fashion or using mechanisms specified in a subsequent Secure DNS
/// Update protocol. If the requestor does not have permission to
/// perform these updates, the server may write a warning message in its
/// operations log, and may either signal REFUSED to the requestor, or
/// ignore the permission problem and proceed with the update.
///
/// 3.3.2. While the exact processing is implementation defined, if these
/// verification activities are to be performed, this is the point in the
/// server's processing where such performance should take place, since
/// if a REFUSED condition is encountered after an update has been
/// partially applied, it will be necessary to undo the partial update
/// and restore the zone to its original state before answering the
/// requestor.
/// ```
///
#[cfg(feature = "dnssec")]
#[allow(clippy::blocks_in_if_conditions)]
pub fn authorize(&self, update_message: &MessageRequest) -> UpdateResult<()> {
use futures::executor::block_on;
use log::debug;
use crate::client::rr::rdata::{DNSSECRData, DNSSECRecordType};
use crate::proto::rr::dnssec::Verifier;
// 3.3.3 - Pseudocode for Permission Checking
//
// if (security policy exists)
// if (this update is not permitted)
// if (local option)
// log a message about permission problem
// if (local option)
// return (REFUSED)
// does this authority allow_updates?
if !self.allow_update {
warn!(
"update attempted on non-updatable Authority: {}",
self.origin()
);
return Err(ResponseCode::Refused);
}
// verify sig0, currently the only authorization that is accepted.
let sig0s: &[Record] = update_message.sig0();
debug!("authorizing with: {:?}", sig0s);
if !sig0s.is_empty()
&& sig0s
.iter()
.filter_map(|sig0| {
if let RData::DNSSEC(DNSSECRData::SIG(ref sig)) = *sig0.rdata() {
Some(sig)
} else {
None
}
})
.any(|sig| {
let name = LowerName::from(sig.signer_name());
// TODO: updates should be async as well.
let keys = block_on(self.lookup(
&name,
RecordType::DNSSEC(DNSSECRecordType::KEY),
false,
SupportedAlgorithms::new(),
));
let keys = match keys {
Ok(keys) => keys,
Err(_) => return false,
};
debug!("found keys {:?}", keys);
// TODO: check key usage flags and restrictions
keys.iter()
.filter_map(|rr_set| {
if let RData::DNSSEC(DNSSECRData::KEY(ref key)) = *rr_set.rdata() {
Some(key)
} else {
None
}
})
.any(|key| {
key.verify_message(update_message, sig.sig(), sig)
.map(|_| {
info!("verified sig: {:?} with key: {:?}", sig, key);
true
})
.unwrap_or_else(|_| {
debug!("did not verify sig: {:?} with key: {:?}", sig, key);
false
})
})
})
{
return Ok(());
} else {
warn!(
"no sig0 matched registered records: id {}",
update_message.id()
);
}
// getting here, we will always default to rejecting the request
// the code will only ever explicitly return authorized actions.
Err(ResponseCode::Refused)
}
/// [RFC 2136](https://tools.ietf.org/html/rfc2136), DNS Update, April 1997
///
/// ```text
///
/// 3.4 - Process Update Section
///
/// Next, the Update Section is processed as follows.
///
/// 3.4.1 - Prescan
///
/// The Update Section is parsed into RRs and each RR's CLASS is checked
/// to see if it is ANY, NONE, or the same as the Zone Class, else signal
/// a FORMERR to the requestor. Using the definitions in Section 1.2,
/// each RR's NAME must be in the zone specified by the Zone Section,
/// else signal NOTZONE to the requestor.
///
/// 3.4.1.2. For RRs whose CLASS is not ANY, check the TYPE and if it is
/// ANY, AXFR, MAILA, MAILB, or any other QUERY metatype, or any
/// unrecognized type, then signal FORMERR to the requestor. For RRs
/// whose CLASS is ANY or NONE, check the TTL to see that it is zero (0),
/// else signal a FORMERR to the requestor. For any RR whose CLASS is
/// ANY, check the RDLENGTH to make sure that it is zero (0) (that is,
/// the RDATA field is empty), and that the TYPE is not AXFR, MAILA,
/// MAILB, or any other QUERY metatype besides ANY, or any unrecognized
/// type, else signal FORMERR to the requestor.
/// ```
#[allow(clippy::unused_unit)]
pub fn pre_scan(&self, records: &[Record]) -> UpdateResult<()> {
// 3.4.1.3 - Pseudocode For Update Section Prescan
//
// [rr] for rr in updates
// if (zone_of(rr.name) != ZNAME)
// return (NOTZONE);
// if (rr.class == zclass)
// if (rr.type & ANY|AXFR|MAILA|MAILB)
// return (FORMERR)
// elsif (rr.class == ANY)
// if (rr.ttl != 0 || rr.rdlength != 0
// || rr.type & AXFR|MAILA|MAILB)
// return (FORMERR)
// elsif (rr.class == NONE)
// if (rr.ttl != 0 || rr.type & ANY|AXFR|MAILA|MAILB)
// return (FORMERR)
// else
// return (FORMERR)
for rr in records {
if !self.origin().zone_of(&rr.name().into()) {
return Err(ResponseCode::NotZone);
}
let class: DNSClass = rr.dns_class();
if class == self.class() {
match rr.rr_type() {
RecordType::ANY | RecordType::AXFR | RecordType::IXFR => {
return Err(ResponseCode::FormErr);
}
_ => (),
}
} else {
match class {
DNSClass::ANY => {
if rr.ttl() != 0 {
return Err(ResponseCode::FormErr);
}
if let RData::NULL(..) = *rr.rdata() {
()
} else {
return Err(ResponseCode::FormErr);
}
match rr.rr_type() {
RecordType::AXFR | RecordType::IXFR => {
return Err(ResponseCode::FormErr);
}
_ => (),
}
}
DNSClass::NONE => {
if rr.ttl() != 0 {
return Err(ResponseCode::FormErr);
}
match rr.rr_type() {
RecordType::ANY | RecordType::AXFR | RecordType::IXFR => {
return Err(ResponseCode::FormErr);
}
_ => (),
}
}
_ => return Err(ResponseCode::FormErr),
}
}
}
Ok(())
}
/// Updates the specified records according to the update section.
///
/// [RFC 2136](https://tools.ietf.org/html/rfc2136), DNS Update, April 1997
///
/// ```text
///
/// 3.4.2.6 - Table Of Metavalues Used In Update Section
///
/// CLASS TYPE RDATA Meaning
/// ---------------------------------------------------------
/// ANY ANY empty Delete all RRsets from a name
/// ANY rrset empty Delete an RRset
/// NONE rrset rr Delete an RR from an RRset
/// zone rrset rr Add to an RRset
/// ```
///
/// # Arguments
///
/// * `records` - set of record instructions for update following above rules
/// * `auto_signing_and_increment` - if true, the zone will sign and increment the SOA, this
/// should be disabled during recovery.
pub fn update_records(
&mut self,
records: &[Record],
auto_signing_and_increment: bool,
) -> UpdateResult<bool> {
let mut updated = false;
let serial: u32 = self.serial();
// the persistence layer acts as a write-ahead log. The WAL will also be used to recover a zone
// after a server failure.
if let Some(ref journal) = self.journal {
if let Err(error) = journal.insert_records(serial, records) {
error!("could not persist update records: {}", error);
return Err(ResponseCode::ServFail);
}
}
// 3.4.2.7 - Pseudocode For Update Section Processing
//
// [rr] for rr in updates
// if (rr.class == zclass)
// if (rr.type == CNAME)
// if (zone_rrset<rr.name, ~CNAME>)
// next [rr]
// elsif (zone_rrset<rr.name, CNAME>)
// next [rr]
// if (rr.type == SOA)
// if (!zone_rrset<rr.name, SOA> ||
// zone_rr<rr.name, SOA>.serial > rr.soa.serial)
// next [rr]
// for zrr in zone_rrset<rr.name, rr.type>
// if (rr.type == CNAME || rr.type == SOA ||
// (rr.type == WKS && rr.proto == zrr.proto &&
// rr.address == zrr.address) ||
// rr.rdata == zrr.rdata)
// zrr = rr
// next [rr]
// zone_rrset<rr.name, rr.type> += rr
// elsif (rr.class == ANY)
// if (rr.type == ANY)
// if (rr.name == zname)
// zone_rrset<rr.name, ~(SOA|NS)> = Nil
// else
// zone_rrset<rr.name, *> = Nil
// elsif (rr.name == zname &&
// (rr.type == SOA || rr.type == NS))
// next [rr]
// else
// zone_rrset<rr.name, rr.type> = Nil
// elsif (rr.class == NONE)
// if (rr.type == SOA)
// next [rr]
// if (rr.type == NS && zone_rrset<rr.name, NS> == rr)
// next [rr]
// zone_rr<rr.name, rr.type, rr.data> = Nil
// return (NOERROR)
for rr in records {
let rr_name = LowerName::from(rr.name());
let rr_key = RrKey::new(rr_name.clone(), rr.rr_type());
match rr.dns_class() {
class if class == self.class() => {
// RFC 2136 - 3.4.2.2. Any Update RR whose CLASS is the same as ZCLASS is added to
// the zone. In case of duplicate RDATAs (which for SOA RRs is always
// the case, and for WKS RRs is the case if the ADDRESS and PROTOCOL
// fields both match), the Zone RR is replaced by Update RR. If the
// TYPE is SOA and there is no Zone SOA RR, or the new SOA.SERIAL is
// lower (according to [RFC1982]) than or equal to the current Zone SOA
// RR's SOA.SERIAL, the Update RR is ignored. In the case of a CNAME
// Update RR and a non-CNAME Zone RRset or vice versa, ignore the CNAME
// Update RR, otherwise replace the CNAME Zone RR with the CNAME Update
// RR.
// zone rrset rr Add to an RRset
info!("upserting record: {:?}", rr);
updated = self.upsert(rr.clone(), serial) || updated;
}
DNSClass::ANY => {
// This is a delete of entire RRSETs, either many or one. In either case, the spec is clear:
match rr.rr_type() {
t @ RecordType::SOA | t @ RecordType::NS if rr_name == *self.origin() => {
// SOA and NS records are not to be deleted if they are the origin records
info!("skipping delete of {:?} see RFC 2136 - 3.4.2.3", t);
continue;
}
RecordType::ANY => {
// RFC 2136 - 3.4.2.3. For any Update RR whose CLASS is ANY and whose TYPE is ANY,
// all Zone RRs with the same NAME are deleted, unless the NAME is the
// same as ZNAME in which case only those RRs whose TYPE is other than
// SOA or NS are deleted.
// ANY ANY empty Delete all RRsets from a name
info!(
"deleting all records at name (not SOA or NS at origin): {:?}",
rr_name
);
let to_delete = self
.records()
.keys()
.filter(|k| {
!((k.record_type == RecordType::SOA
|| k.record_type == RecordType::NS)
&& k.name != *self.origin())
})
.filter(|k| k.name == rr_name)
.cloned()
.collect::<Vec<RrKey>>();
for delete in to_delete {
self.records_mut().remove(&delete);
updated = true;
}
}
_ => {
// RFC 2136 - 3.4.2.3. For any Update RR whose CLASS is ANY and
// whose TYPE is not ANY all Zone RRs with the same NAME and TYPE are
// deleted, unless the NAME is the same as ZNAME in which case neither
// SOA or NS RRs will be deleted.
// ANY rrset empty Delete an RRset
if let RData::NULL(..) = *rr.rdata() {
let deleted = self.records_mut().remove(&rr_key);
info!("deleted rrset: {:?}", deleted);
updated = updated || deleted.is_some();
} else {
info!("expected empty rdata: {:?}", rr);
return Err(ResponseCode::FormErr);
}
}
}
}
DNSClass::NONE => {
info!("deleting specific record: {:?}", rr);
// NONE rrset rr Delete an RR from an RRset
if let Some(rrset) = self.records_mut().get_mut(&rr_key) {
// b/c this is an Arc, we need to clone, then remove, and replace the node.
let mut rrset_clone: RecordSet = RecordSet::clone(&*rrset);
let deleted = rrset_clone.remove(rr, serial);
info!("deleted ({}) specific record: {:?}", deleted, rr);
updated = updated || deleted;
if deleted {
*rrset = Arc::new(rrset_clone);
}
}
}
class => {
info!("unexpected DNS Class: {:?}", class);
return Err(ResponseCode::FormErr);
}
}
}
// update the serial...
if updated && auto_signing_and_increment {
if self.is_dnssec_enabled {
self.secure_zone().map_err(|e| {
error!("failure securing zone: {}", e);
ResponseCode::ServFail
})?
} else {
// the secure_zone() function increments the SOA during its operation; if we're not
// doing DNSSEC, then we need to do it here...
self.increment_soa_serial();
}
}
Ok(updated)
}
}
impl Deref for SqliteAuthority {
type Target = InMemoryAuthority;
fn deref(&self) -> &Self::Target {
&self.in_memory
}
}
impl DerefMut for SqliteAuthority {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.in_memory
}
}
impl Authority for SqliteAuthority {
type Lookup = <InMemoryAuthority as Authority>::Lookup;
type LookupFuture = <InMemoryAuthority as Authority>::LookupFuture;
/// What type is this zone
fn zone_type(&self) -> ZoneType {
self.in_memory.zone_type()
}
/// Return true if AXFR is allowed
fn is_axfr_allowed(&self) -> bool {
self.in_memory.is_axfr_allowed()
}
/// Takes the UpdateMessage, extracts the Records, and applies the changes to the record set.
///
/// [RFC 2136](https://tools.ietf.org/html/rfc2136), DNS Update, April 1997
///
/// ```text
///
/// 3.4 - Process Update Section
///
/// Next, the Update Section is processed as follows.
///
/// 3.4.2 - Update
///
/// The Update Section is parsed into RRs and these RRs are processed in
/// order.
///
/// 3.4.2.1. If any system failure (such as an out of memory condition,
/// or a hardware error in persistent storage) occurs during the
/// processing of this section, signal SERVFAIL to the requestor and undo
/// all updates applied to the zone during this transaction.
///
/// 3.4.2.2. Any Update RR whose CLASS is the same as ZCLASS is added to
/// the zone. In case of duplicate RDATAs (which for SOA RRs is always
/// the case, and for WKS RRs is the case if the ADDRESS and PROTOCOL
/// fields both match), the Zone RR is replaced by Update RR. If the
/// TYPE is SOA and there is no Zone SOA RR, or the new SOA.SERIAL is
/// lower (according to [RFC1982]) than or equal to the current Zone SOA
/// RR's SOA.SERIAL, the Update RR is ignored. In the case of a CNAME
/// Update RR and a non-CNAME Zone RRset or vice versa, ignore the CNAME
/// Update RR, otherwise replace the CNAME Zone RR with the CNAME Update
/// RR.
///
/// 3.4.2.3. For any Update RR whose CLASS is ANY and whose TYPE is ANY,
/// all Zone RRs with the same NAME are deleted, unless the NAME is the
/// same as ZNAME in which case only those RRs whose TYPE is other than
/// SOA or NS are deleted. For any Update RR whose CLASS is ANY and
/// whose TYPE is not ANY all Zone RRs with the same NAME and TYPE are
/// deleted, unless the NAME is the same as ZNAME in which case neither
/// SOA or NS RRs will be deleted.
///
/// 3.4.2.4. For any Update RR whose class is NONE, any Zone RR whose
/// NAME, TYPE, RDATA and RDLENGTH are equal to the Update RR is deleted,
/// unless the NAME is the same as ZNAME and either the TYPE is SOA or
/// the TYPE is NS and the matching Zone RR is the only NS remaining in
/// the RRset, in which case this Update RR is ignored.
///
/// 3.4.2.5. Signal NOERROR to the requestor.
/// ```
///
/// # Arguments
///
/// * `update` - The `UpdateMessage` records will be extracted and used to perform the update
/// actions as specified in the above RFC.
///
/// # Return value
///
/// true if any of additions, updates or deletes were made to the zone, false otherwise. Err is
/// returned in the case of bad data, etc.
#[cfg(feature = "dnssec")]
fn update(&mut self, update: &MessageRequest) -> UpdateResult<bool> {
// the spec says to authorize after prereqs, seems better to auth first.
self.authorize(update)?;
self.verify_prerequisites(update.prerequisites())?;
self.pre_scan(update.updates())?;
self.update_records(update.updates(), true)
}
/// Always fail when DNSSEC is disabled.
#[cfg(not(feature = "dnssec"))]
fn update(&mut self, _update: &MessageRequest) -> UpdateResult<bool> {
Err(ResponseCode::NotImp)
}
/// Get the origin of this zone, i.e. example.com is the origin for www.example.com
fn origin(&self) -> &LowerName {
self.in_memory.origin()
}
/// Looks up all Resource Records matching the giving `Name` and `RecordType`.
///
/// # Arguments
///
/// * `name` - The `Name`, label, to lookup.
/// * `rtype` - The `RecordType`, to lookup. `RecordType::ANY` will return all records matching
/// `name`. `RecordType::AXFR` will return all record types except `RecordType::SOA`
/// due to the requirements that on zone transfers the `RecordType::SOA` must both
/// precede and follow all other records.
/// * `is_secure` - If the DO bit is set on the EDNS OPT record, then return RRSIGs as well.
///
/// # Return value
///
/// None if there are no matching records, otherwise a `Vec` containing the found records.
fn lookup(
&self,
name: &LowerName,
rtype: RecordType,
is_secure: bool,
supported_algorithms: SupportedAlgorithms,
) -> Pin<Box<dyn Future<Output = Result<Self::Lookup, LookupError>> + Send>> {
self.in_memory
.lookup(name, rtype, is_secure, supported_algorithms)
}
fn search(
&self,
query: &LowerQuery,
is_secure: bool,
supported_algorithms: SupportedAlgorithms,
) -> Pin<Box<dyn Future<Output = Result<Self::Lookup, LookupError>> + Send>> {
self.in_memory
.search(query, is_secure, supported_algorithms)
}
/// Return the NSEC records based on the given name
///
/// # Arguments
///
/// * `name` - given this name (i.e. the lookup name), return the NSEC record that is less than
/// this
/// * `is_secure` - if true then it will return RRSIG records as well
fn get_nsec_records(
&self,
name: &LowerName,
is_secure: bool,
supported_algorithms: SupportedAlgorithms,
) -> Pin<Box<dyn Future<Output = Result<Self::Lookup, LookupError>> + Send>> {
self.in_memory
.get_nsec_records(name, is_secure, supported_algorithms)
}
fn add_update_auth_key(&mut self, name: Name, key: KEY) -> DnsSecResult<()> {
self.in_memory.add_update_auth_key(name, key)
}
/// By adding a secure key, this will implicitly enable dnssec for the zone.
///
/// # Arguments
///
/// * `signer` - Signer with associated private key
fn add_zone_signing_key(&mut self, signer: Signer) -> DnsSecResult<()> {
self.in_memory.add_zone_signing_key(signer)
}
/// (Re)generates the nsec records, increments the serial number and signs the zone
fn secure_zone(&mut self) -> DnsSecResult<()> {
Authority::secure_zone(&mut self.in_memory)
}
}
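The `pre_scan` method above implements the RFC 2136 section 3.4.1.3 pseudocode for screening update records by class and type. A self-contained sketch of that screening logic, detached from the trust-dns types — the `Class` and `RType` enums here are simplified stand-ins, not the crate's real `DNSClass`/`RecordType`:

```rust
// Simplified stand-ins for the crate's DNSClass and RecordType.
#[derive(PartialEq, Clone, Copy)]
enum Class { In, Any, None }
#[derive(PartialEq, Clone, Copy)]
enum RType { A, Any, Axfr, Ixfr }

// Returns Err("FORMERR") for the combinations RFC 2136 section 3.4.1.2
// rejects, mirroring the pre_scan match above (zone class assumed IN).
fn prescan(class: Class, rtype: RType, ttl: u32, rdata_empty: bool) -> Result<(), &'static str> {
    match class {
        // zone-class records may not use query metatypes
        Class::In if matches!(rtype, RType::Any | RType::Axfr | RType::Ixfr) => Err("FORMERR"),
        Class::In => Ok(()),
        // CLASS ANY: TTL must be 0, RDATA empty, no transfer metatypes
        Class::Any if ttl != 0 || !rdata_empty => Err("FORMERR"),
        Class::Any if matches!(rtype, RType::Axfr | RType::Ixfr) => Err("FORMERR"),
        Class::Any => Ok(()),
        // CLASS NONE: TTL must be 0, no ANY/transfer metatypes
        Class::None if ttl != 0 => Err("FORMERR"),
        Class::None if matches!(rtype, RType::Any | RType::Axfr | RType::Ixfr) => Err("FORMERR"),
        Class::None => Ok(()),
    }
}

fn main() {
    // A zone-class AXFR in the update section is a FORMERR.
    assert!(prescan(Class::In, RType::Axfr, 0, true).is_err());
    // A class-ANY delete-RRset record with TTL 0 and empty RDATA passes.
    assert!(prescan(Class::Any, RType::A, 0, true).is_ok());
}
```

The real method additionally checks `zone_of` against the authority's origin (NOTZONE), which is omitted here since it depends on the zone name.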
package com.etiennelawlor.moviehub.domain.usecases;
import com.etiennelawlor.moviehub.domain.models.MoviesDomainModel;
import io.reactivex.Single;
/**
* Created by etiennelawlor on 6/26/17.
*/
public interface MoviesDomainContract {
interface UseCase {
Single<MoviesDomainModel> getPopularMovies(int currentPage);
}
}
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/*!
* \file nms.cu
* \brief NMS Operator
* \author Yanghao Li
*/
#include <dmlc/logging.h>
#include <dmlc/parameter.h>
#include <mxnet/operator.h>
#include <mshadow/tensor.h>
#include <mshadow/cuda/reduce.cuh>
#include <thrust/sort.h>
#include <thrust/execution_policy.h>
#include <thrust/functional.h>
#include "../tensor/sort_op.h"
#include <map>
#include <vector>
#include <string>
#include <utility>
#include <ctime>
#include <iterator>
#include "../operator_common.h"
#include "../mshadow_op.h"
#include "./nms-inl.h"
#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0))
#define FRCNN_CUDA_CHECK(condition) \
/* Code block avoids redefinition of cudaError_t error */ \
do { \
cudaError_t error = condition; \
CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \
} while (0)
namespace mshadow {
namespace cuda {
namespace {
// copy score and init order
// dets (n, 5); score (n, ); order (n, )
// count should be n (total anchors or proposals)
template<typename Dtype>
__global__ void CopyScoreKernel(const int count,
const Dtype* dets,
Dtype* score,
int* order) {
for (int index = blockIdx.x * blockDim.x + threadIdx.x;
index < count;
index += blockDim.x * gridDim.x) {
score[index] = dets[index * 5 + 4];
order[index] = index;
}
}
// reorder proposals according to order and keep the top_n proposals
// prev_dets (n, 5); order (n, ); dets (n, 5)
// count should be output anchor numbers (top_n)
template<typename Dtype>
__global__ void ReorderProposalsKernel(const int count,
const Dtype* prev_dets,
const int* order,
Dtype* dets) {
for (int index = blockIdx.x * blockDim.x + threadIdx.x;
index < count;
index += blockDim.x * gridDim.x) {
const int order_i = order[index];
    for (int j = 0; j < 5; j++) {
dets[index * 5 + j] = prev_dets[order_i * 5 + j];
}
}
}
__device__ inline float devIoU(float const * const a, float const * const b) {
float left = max(a[0], b[0]), right = min(a[2], b[2]);
float top = max(a[1], b[1]), bottom = min(a[3], b[3]);
float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);
float interS = width * height;
float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);
float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);
return interS / (Sa + Sb - interS);
}
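// Worked example (illustrative, using the inclusive +1 pixel convention above):
// a = {0, 0, 9, 9}, b = {5, 5, 14, 14} -> intersection is 5 x 5 = 25,
// Sa = Sb = 10 x 10 = 100, so IoU = 25 / (100 + 100 - 25) ~= 0.143.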
__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,
const float *dev_boxes, uint64_t *dev_mask) {
const int threadsPerBlock = sizeof(uint64_t) * 8;
const int row_start = blockIdx.y;
const int col_start = blockIdx.x;
// if (row_start > col_start) return;
const int row_size =
min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);
const int col_size =
min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);
__shared__ float block_boxes[threadsPerBlock * 5];
if (threadIdx.x < col_size) {
block_boxes[threadIdx.x * 5 + 0] =
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];
block_boxes[threadIdx.x * 5 + 1] =
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];
block_boxes[threadIdx.x * 5 + 2] =
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];
block_boxes[threadIdx.x * 5 + 3] =
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];
block_boxes[threadIdx.x * 5 + 4] =
dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];
}
__syncthreads();
if (threadIdx.x < row_size) {
const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;
const float *cur_box = dev_boxes + cur_box_idx * 5;
int i = 0;
uint64_t t = 0;
int start = 0;
if (row_start == col_start) {
start = threadIdx.x + 1;
}
for (i = start; i < col_size; i++) {
if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {
t |= 1ULL << i;
}
}
const int col_blocks = DIVUP(n_boxes, threadsPerBlock);
dev_mask[cur_box_idx * col_blocks + col_start] = t;
}
}
void _nms(const mshadow::Tensor<gpu, 2>& boxes,
const float nms_overlap_thresh,
int *keep,
int *num_out,
uint64_t *mask_dev,
uint64_t *mask_host) {
/*
@input boxes: (pre_nms_top_n, 5)
@return keep
@return num_out
@tmp mask_dev
@tmp mask_host
*/
const int threadsPerBlock = sizeof(uint64_t) * 8;
const int boxes_num = boxes.size(0);
const int boxes_dim = boxes.size(1);
float* boxes_dev = boxes.dptr_;
const int col_blocks = DIVUP(boxes_num, threadsPerBlock);
dim3 blocks(DIVUP(boxes_num, threadsPerBlock),
DIVUP(boxes_num, threadsPerBlock));
dim3 threads(threadsPerBlock);
nms_kernel<<<blocks, threads>>>(boxes_num,
nms_overlap_thresh,
boxes_dev,
mask_dev);
FRCNN_CUDA_CHECK(cudaPeekAtLastError());
  // TODO: needs to be rewritten
FRCNN_CUDA_CHECK(cudaMemcpy(mask_host,
mask_dev,
sizeof(uint64_t) * boxes_num * col_blocks,
cudaMemcpyDeviceToHost));
std::vector<uint64_t> remv(col_blocks);
memset(&remv[0], 0, sizeof(uint64_t) * col_blocks);
int num_to_keep = 0;
for (int i = 0; i < boxes_num; i++) {
int nblock = i / threadsPerBlock;
int inblock = i % threadsPerBlock;
if (!(remv[nblock] & (1ULL << inblock))) {
keep[num_to_keep++] = i;
uint64_t *p = mask_host + i * col_blocks;
for (int j = nblock; j < col_blocks; j++) {
remv[j] |= p[j];
}
}
}
*num_out = num_to_keep;
}
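// Sketch of the mask layout consumed above (threadsPerBlock == 64 here, since
// the mask words are uint64_t): dev_mask[i * col_blocks + j] is a 64-bit word
// whose bit k is set iff box i overlaps box (j * 64 + k) beyond
// nms_overlap_thresh. The serial loop keeps box i only if no previously kept
// box has suppressed it, then ORs box i's mask row into remv so that the boxes
// it suppresses are skipped in later iterations.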
// copy proposals to output
// dets (top_n, 5); keep (top_n, ); out (top_n, )
// count should be top_n (total anchors or proposals)
template<typename Dtype>
__global__ void PrepareOutput(const int count,
const Dtype* dets,
const int* keep,
const int out_size,
const int batchIdx,
Dtype* out,
Dtype* score) {
for (int index = blockIdx.x * blockDim.x + threadIdx.x;
index < count;
index += blockDim.x * gridDim.x) {
// out[index * 5] = batchIdx;
if (index < out_size) {
int keep_i = keep[index];
for (int j = 0; j < 4; ++j) {
out[index * 4 + j] = dets[keep_i * 5 + j];
}
score[index] = dets[keep_i * 5 + 4];
} else {
      // int keep_i = keep[index % out_size];
for (int j = 0; j < 4; ++j) {
out[index * 4 + j] = 0.0f;
}
score[index] = 0;
}
}
}
} // namespace
} // namespace cuda
} // namespace mshadow
namespace mxnet {
namespace op {
template<typename xpu>
class NMSGPUOp : public Operator{
public:
explicit NMSGPUOp(NMSParam param) {
this->param_ = param;
}
virtual void Forward(const OpContext &ctx,
const std::vector<TBlob> &in_data,
const std::vector<OpReqType> &req,
const std::vector<TBlob> &out_data,
const std::vector<TBlob> &aux_states) {
using namespace mshadow;
using namespace mshadow::expr;
using namespace mshadow::cuda;
CHECK_EQ(in_data.size(), 1);
CHECK_EQ(out_data.size(), 2);
CHECK_GT(req.size(), 1);
// CHECK_EQ(req[proposal::kOut], kWriteTo);
Stream<xpu> *s = ctx.get_stream<xpu>();
Tensor<xpu, 3> proposals = in_data[nms::kBBox].get<xpu, 3, float>(s); // batch_idx, rois_idx, 5(x1, y1, x2, y2, score)
Tensor<xpu, 3> out = out_data[nms::kOut].get<xpu, 3, float>(s); // batch_idx, rois_idx, 4(x1, y1, x2, y2)
Tensor<xpu, 3> out_score = out_data[nms::kScore].get<xpu, 3, float>(s); // batch_idx, rois_idx, 1(score)
uint64_t WORKSPACE_LIMIT = 1024 * 1024 * param_.workspace; // 256 MB should be sufficient
Tensor<xpu, 1, uint8_t> workspace = ctx.requested[nms::kTempSpace].get_space_typed<xpu, 1, uint8_t>(Shape1(WORKSPACE_LIMIT), s);
uint64_t allocated_bytes = 0ULL;
uint64_t allocated_bytes_outside_loop = 0ULL;
int nbatch = proposals.size(0);
int count = proposals.size(1);
// set to -1 for max
int rpn_pre_nms_top_n = (param_.rpn_pre_nms_top_n > 0) ? param_.rpn_pre_nms_top_n : count;
rpn_pre_nms_top_n = std::min(rpn_pre_nms_top_n, count);
int rpn_post_nms_top_n = std::min(param_.rpn_post_nms_top_n, rpn_pre_nms_top_n);
/* copy anchors for all images in batch */
for (int i = 0; i < nbatch; i++) {
float* batch_proposals = proposals.dptr_ + i * 5 * count;
/* copy score to a continuous memory */
dim3 dimGrid((count + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock);
dim3 dimBlock(kMaxThreadsPerBlock);
Tensor<xpu, 1> score(reinterpret_cast<float *>(workspace.dptr_ + allocated_bytes), Shape1(count));
allocated_bytes += count * sizeof(float);
CHECK_LT(allocated_bytes, WORKSPACE_LIMIT) << "Allocating more memory than workspace limit";
Tensor<xpu, 1, int> order(reinterpret_cast<int *>(workspace.dptr_ + allocated_bytes), Shape1(count));
allocated_bytes += count * sizeof(int);
CHECK_LT(allocated_bytes, WORKSPACE_LIMIT) << "Allocating more memory than workspace limit";
CheckLaunchParam(dimGrid, dimBlock, "CopyScore");
CopyScoreKernel<<<dimGrid, dimBlock>>>(
count, batch_proposals, score.dptr_, order.dptr_);
FRCNN_CUDA_CHECK(cudaPeekAtLastError());
if (!param_.already_sorted) {
/* argsort score, save order */
thrust::stable_sort_by_key(thrust::device,
score.dptr_,
score.dptr_ + score.size(0),
order.dptr_,
thrust::greater<float>());
FRCNN_CUDA_CHECK(cudaPeekAtLastError());
}
/* Reorder proposals according to order */
Tensor<xpu, 2> ordered_proposals(reinterpret_cast<float *>(workspace.dptr_ + allocated_bytes), Shape2(rpn_pre_nms_top_n, 5));
allocated_bytes += rpn_pre_nms_top_n * 5 * sizeof(float);
CHECK_LT(allocated_bytes, WORKSPACE_LIMIT) << "Allocating more memory than workspace limit";
dimGrid.x = (rpn_pre_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
CheckLaunchParam(dimGrid, dimBlock, "ReorderProposals");
ReorderProposalsKernel<<<dimGrid, dimBlock>>>(
rpn_pre_nms_top_n, batch_proposals, order.dptr_, ordered_proposals.dptr_);
FRCNN_CUDA_CHECK(cudaPeekAtLastError());
/* perform nms */
std::vector<int> _keep(rpn_pre_nms_top_n);
int out_size = 0;
const int boxes_num = rpn_pre_nms_top_n;
const int col_blocks = DIVUP(boxes_num, sizeof(uint64_t) * 8);
      // align the workspace offset to 8 bytes before carving out uint64_t storage;
      // the offset is always a multiple of 4 here, so adding the 4-byte remainder suffices.
      allocated_bytes += allocated_bytes % sizeof(uint64_t);
Tensor<xpu, 1, uint64_t> mask_tensor(reinterpret_cast<uint64_t *>(workspace.dptr_ + allocated_bytes), Shape1(boxes_num * col_blocks));
allocated_bytes += boxes_num * col_blocks * sizeof(uint64_t);
CHECK_LT(allocated_bytes, WORKSPACE_LIMIT) << "Allocating more memory than workspace limit";
      // the following line does not need to change, since it is the only place that requires host workspace
Tensor<cpu, 1, uint64_t> mask_host_tensor = ctx.requested[nms::kTempSpace].get_host_space_typed<1, uint64_t>(Shape1(boxes_num * col_blocks));
uint64_t *mask_dev = mask_tensor.dptr_;
uint64_t *mask_host = mask_host_tensor.dptr_;
_nms(ordered_proposals,
param_.threshold,
&_keep[0],
&out_size,
mask_dev,
mask_host);
/* copy nms result to gpu */
Tensor<xpu, 1, int> keep(reinterpret_cast<int *>(workspace.dptr_ + allocated_bytes), Shape1(_keep.size()));
allocated_bytes += _keep.size() * sizeof(int);
CHECK_LT(allocated_bytes, WORKSPACE_LIMIT) << "Allocating more memory than workspace limit";
FRCNN_CUDA_CHECK(cudaMemcpy(keep.dptr_,
&_keep[0],
sizeof(int) * _keep.size(),
cudaMemcpyHostToDevice)); // less than 64K
/* copy results after nms */
dimGrid.x = (rpn_post_nms_top_n + kMaxThreadsPerBlock - 1) / kMaxThreadsPerBlock;
CheckLaunchParam(dimGrid, dimBlock, "PrepareOutput");
PrepareOutput<<<dimGrid, dimBlock>>>(
rpn_post_nms_top_n, ordered_proposals.dptr_, keep.dptr_, out_size, i,
out.dptr_ + i * 4 * rpn_post_nms_top_n,
out_score.dptr_ + i * rpn_post_nms_top_n);
FRCNN_CUDA_CHECK(cudaPeekAtLastError());
// recycle all bytes allocated within loop
allocated_bytes = allocated_bytes_outside_loop;
}
}
virtual void Backward(const OpContext &ctx,
const std::vector<TBlob> &out_grad,
const std::vector<TBlob> &in_data,
const std::vector<TBlob> &out_data,
const std::vector<OpReqType> &req,
const std::vector<TBlob> &in_grad,
const std::vector<TBlob> &aux_states) {
using namespace mshadow;
using namespace mshadow::expr;
CHECK_EQ(in_grad.size(), 1);
Stream<xpu> *s = ctx.get_stream<xpu>();
Tensor<xpu, 3> gbbox = in_grad[nms::kBBox].get<xpu, 3, real_t>(s);
Assign(gbbox, req[nms::kBBox], 0);
}
private:
NMSParam param_;
}; // class NMSGPUOp
template<>
Operator* CreateOp<gpu>(NMSParam param) {
return new NMSGPUOp<gpu>(param);
}
} // namespace op
} // namespace mxnet