import * as React from "react";
import { parseFileRenameFormat } from "@/lib/context";
import { GameStartType } from "@vinceau/slp-realtime";
export const TemplatePreview: React.FC<{
template: string;
settings?: GameStartType,
metadata?: any,
}> = (props) => {
const parsedTemplate = parseFileRenameFormat(props.template, props.settings, props.metadata);
return (
<span>{parsedTemplate}</span>
);
};
|
# Xous Text to Speech Backend
API for text-to-speech backends on Xous. Third-party TTS executables
that comply with this API can be used by Xous to perform text-to-speech
operations.
|
package akkeeper.common
private[akkeeper] object CliArguments {
val AkkeeperJarArg = "akkeeperJar"
val InstanceIdArg = "instanceId"
val AppIdArg = "appId"
val ConfigArg = "config"
val MasterAddressArg = "masterAddress"
val ActorLaunchContextsArg = "actorLaunchContexts"
val PrincipalArg = "principal"
}
|
public class HelloWorld{
HelloWorld(){
System.out.println("constructor added");
}
public static void main(String[] args){
System.out.println("Hello, Git");
}
public int subtract(int a, int b){
return a-b;
}
}
|
# Deep Gradient Compression [[arXiv]](https://arxiv.org/pdf/1712.01887.pdf)
```
@inproceedings{lin2018dgc,
title={{Deep Gradient Compression: Reducing the communication bandwidth for distributed training}},
author={Lin, Yujun and Han, Song and Mao, Huizi and Wang, Yu and Dally, William J},
booktitle={The International Conference on Learning Representations},
year={2018}
}
```
## Overview
We release the PyTorch implementation of [Deep Gradient Compression](https://arxiv.org/pdf/1712.01887.pdf).
<p align="left">
<img src="data/docs/teaser.png" width="1080"><br/>
Figure 1. Deep Gradient Compression (DGC) can reduce the communication bandwidth (transmit less gradients by pruning away small gradients), improve the scalability, and speed up distributed training.<br/><br/>
<img src="data/docs/cifar-10.png" width="1080">
<img src="data/docs/resnet.png" width="1080"><br/>
Figure 2. DGC maintains accuracy: learning curves of ResNet (the gradient sparsity is 99.9%).<br/><br/>
<img src="data/docs/speedup.png" width="640"><br/>
Figure 3. DGC improves the scalability: speedup measured on NVIDIA TITAN RTX 2080Ti GPU cluster with 25 Gbps Ethernet.<br/><br/>
</p>
## Content
- [Prerequisites](#prerequisites)
- [Code](#code)
- [Training](#training)
- [Known Issues and TODOs](#known-issues-and-todos)
## Prerequisites
The code is built with the following libraries (see [requirements.txt](requirements.txt)):
- Python >= 3.7
- [PyTorch](https://github.com/pytorch/pytorch) >= 1.5
- [Horovod](https://github.com/horovod/horovod) >= 0.19.4
- [numpy](https://github.com/numpy/numpy)
- [tensorboardX](https://github.com/lanpa/tensorboardX) >= 1.2
- [tqdm](https://github.com/tqdm/tqdm)
- [openmpi](https://www.open-mpi.org/software/ompi/) >= 4.0
## Code
The core code to implement DGC is in [dgc/compression.py](dgc/compression.py) and [dgc/memory.py](dgc/memory.py).
- Gradient Accumulation and Momentum Correction
```python
mmt = self.momentums[name]
vec = self.velocities[name]
if self.nesterov:
mmt.add_(grad).mul_(self.momentum)
vec.add_(mmt).add_(grad)
else:
mmt.mul_(self.momentum).add_(grad)
vec.add_(mmt)
return vec
```
- Sparsification
```python
importance = tensor.abs()
# sampling
sample_start = random.randint(0, sample_stride - 1)
samples = importance[sample_start::sample_stride]
# thresholding
threshold = torch.min(torch.topk(samples, top_k_samples, 0, largest=True, sorted=False)[0])
mask = torch.ge(importance, threshold)
indices = mask.nonzero().view(-1)
```
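Putting the two pieces together, here is a standalone NumPy sketch of one DGC step: momentum correction followed by sampled top-k selection, with the transmitted entries cleared locally (approximating what `dgc/memory.py` does). This mirrors the PyTorch excerpts above but is not the repository's API; the `momentum`, `compress_ratio`, and `sample_stride` values are illustrative, and the sample start is fixed at 0 for determinism:

```python
import numpy as np

def dgc_step(grad, mmt, vec, momentum=0.9, nesterov=False,
             compress_ratio=0.01, sample_stride=4):
    # --- gradient accumulation with momentum correction ---
    if nesterov:
        mmt += grad
        mmt *= momentum
        vec += mmt + grad
    else:
        mmt *= momentum
        mmt += grad
        vec += mmt
    # --- sparsification: sampled top-k thresholding ---
    importance = np.abs(vec)
    samples = importance[0::sample_stride]          # deterministic sample start
    k = max(1, int(samples.size * compress_ratio))  # top-k within the sample
    threshold = np.sort(samples)[-k]                # k-th largest sampled value
    indices = np.nonzero(importance >= threshold)[0]
    values = vec[indices]
    # clear what was sent; unsent gradients stay accumulated locally
    mmt[indices] = 0.0
    vec[indices] = 0.0
    return indices, values

n = 64
mmt, vec = np.zeros(n), np.zeros(n)
grad = np.linspace(-1.0, 1.0, n)
idx, vals = dgc_step(grad, mmt, vec)  # only the largest-magnitude entries survive
```

After one step only the endpoints of the ramp (the largest-magnitude gradients) are selected for transmission; everything else remains in the local accumulation buffers for later rounds.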
## Training
We use [Horovod](https://github.com/horovod/horovod) to run distributed training:
- run on a machine with *N* GPUs,
```bash
horovodrun -np N python train.py --configs [config files]
```
e.g., ResNet-20 on the CIFAR-10 dataset with 8 GPUs:
```bash
# fp16 values, int32 indices
# warmup coeff: [0.25, 0.063, 0.015, 0.004, 0.001] -> 0.001
horovodrun -np 8 python train.py --configs configs/cifar/resnet20.py \
configs/dgc/wm5.py configs/dgc/fp16.py configs/dgc/int32.py
```
- run on *K* machines with *N* GPUs each,
```bash
mpirun -np [K*N] -H server0:N,server1:N,...,serverK:N \
-bind-to none -map-by slot -x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH -x PATH -mca pml ob1 \
-mca btl ^openib -mca btl_tcp_if_exclude docker0,lo \
python train.py --configs [config files]
```
e.g., ResNet-50 on the ImageNet dataset with 4 machines of 8 GPUs each:
```bash
# fp32 values, int64 indices, no warmup
mpirun -np 32 -H server0:8,server1:8,server2:8,server3:8 \
-bind-to none -map-by slot -x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH -x PATH -mca pml ob1 \
-mca btl ^openib -mca btl_tcp_if_exclude docker0,lo \
python train.py --configs configs/imagenet/resnet50.py \
configs/dgc/wm0.py
```
For more information on `horovodrun`, please read the Horovod documentation.
You can modify/add config files under [configs](configs) to change training settings. You can also override individual config values on the command line:
```bash
python train.py --configs [config files] --[config name] [config value] --suffix [suffix of experiment directory]
```
e.g.,
```bash
horovodrun -np 8 python train.py --configs configs/cifar/resnet20.py \
configs/dgc/wm5.py --configs.train.num_epochs 500 --suffix .e500
```
Here are some reproduced results using a **0.1%** compression ratio (*i.e.*, `configs.train.compression.compress_ratio = 0.001`):
| #GPUs | Batch Size | #Sparsified Nodes | ResNet-50 | VGG-16-BN | LR Scheduler |
|:-----:|:----------:|:-----------------:|:---------:|:---------:|:------------:|
| - | - | - | [76.2](https://pytorch.org/docs/stable/torchvision/models.html) | [73.4](https://pytorch.org/docs/stable/torchvision/models.html) | - |
| 8 | 256 | 8 | 76.6 | 74.1 | MultiStep |
| 16 | 512 | 16 | 76.5 | 73.8 | MultiStep |
| 32 | 1024 | 32 | 76.3 | 73.3 | MultiStep |
| 32 | 1024 | 32 | 76.7 | 74.4 | Cosine |
| 64 | 2048 | 64 | 76.8 | 74.2 | Cosine |
| 64 | 2048 | 8 | 76.6 | 73.8 | Cosine |
| 128 | 4096 | 16 | 76.4 | 73.1 | Cosine |
| 256 | 8192 | 32 | 75.9 | 71.7 | Cosine |
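As a rough sense of scale for the 0.1% ratio (the parameter count below is torchvision's ResNet-50 figure, stated here as an assumption, not taken from this repository):

```python
# Back-of-the-envelope gradient volume per step at compress_ratio = 0.001.
# 25_557_032 is torchvision's ResNet-50 parameter count (assumed here).
params = 25_557_032
sent_values = int(params * 0.001)  # gradient values actually transmitted
print(sent_values)                 # ~25.5k of ~25.6M gradients
```

Note that each transmitted value also ships its index (fp16/int32 or fp32/int64 in the configs above), so the wire format roughly doubles this count.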
## Known Issues and TODOs
- **Backend**: We currently only support the OpenMPI backend. We encountered some errors when calling `allgather` with the NCCL2 backend: the `allgather`ed data are occasionally corrupted; if we set `CUDA_LAUNCH_BLOCKING=1` for debugging, everything works well.
- **#Sparsified Nodes**: We currently treat each GPU as an independent node. However, communication is rarely a bottleneck within one machine. A better strategy would be to `allreduce` dense gradients intra-machine and `allgather` sparse gradients across machines.
  - For accuracy/convergence verification, we can simulate this by setting `configs.train.num_batches_per_step` to the desired number of GPUs per machine (see the accuracy table for batch size = 4096/8192).
- **Sparsification Granularity**: We naively perform fine-grained (*i.e.*, element-wise) top-k to select gradients, and thus communication suffers from increased `allgather` data volume as the number of nodes grows.
  - [Sun *et al.*](https://arxiv.org/pdf/1902.06855.pdf) modified the process with coarse-grained sparsification: gradients are partitioned into chunks, and the chunks selected by their `allreduce`d L1-norms are then `allreduce`d, which eliminates the `allgather` and solves the problem.
- **Data Encoding**: We did not perform any data quantization/encoding before transmission. Data encoding could further reduce the data volume.
- **Overhead**: Performing sparsification (especially adaptive thresholding) in C/C++ may further reduce the DGC overhead.
## License
This repository is released under the Apache license. See [LICENSE](LICENSE) for additional details.
## Acknowledgement
- Our implementation is modified from [grace](https://github.com/sands-lab/grace), a unified framework for compressed distributed training algorithms.
|
package cmd
import "testing"
func Test_getS3Cse(t *testing.T) {
type args struct {
s3bucket string
s3object string
filedest string
}
tests := []struct {
name string
args args
wantErr bool
}{
{"Fail on garbage input", args{"nosuchbucket", "nosuchobject", "notafile"}, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := getS3Cse(tt.args.s3bucket, tt.args.s3object, tt.args.filedest); (err != nil) != tt.wantErr {
t.Errorf("getS3Cse() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
|
# frozen_string_literal: true
module ClickHouse
module Middleware
autoload :Logging, 'click_house/middleware/logging'
autoload :ParseCsv, 'click_house/middleware/parse_csv'
autoload :RaiseError, 'click_house/middleware/raise_error'
end
end
|
TQUEUE
==========
Time-delay circular queue service, based on the Tornado web framework
Reference
------------
* Based on https://github.com/bufferx/tornado_webserver_demo
Usage
------------
* cd ./src/www/
* python service.py -env=debug
Test
------------
* http://localhost:8000/sayhi?name=bufferx
Requirements
------------
The following libraries are required:
* [tornado](http://github.com/facebook/tornado)
Issues
------
Please report any issues via [github issues](https://github.com/bufferx/tqueue/issues)
|
# ansible-files
Repo to place all ansible works
## How to run?
```
ansible-playbook simplify/hjson2json.yml -e "input_file='/tmp/sample.hjson'" -e "output_file='/tmp/sample.json'"
```
|
const assert = require("assert")
const geography = require("../../../sources/_lib/geography/index.js")
const maintainers = require("../../../sources/_lib/maintainers.js")
const parse = require("../../../sources/_lib/parse.js")
const transform = require("../../../sources/_lib/transform.js")
const _counties = [
"Adair County",
"Alfalfa County",
"Atoka County",
"Beaver County",
"Beckham County",
"Blaine County",
"Bryan County",
"Caddo County",
"Canadian County",
"Carter County",
"Cherokee County",
"Choctaw County",
"Cimarron County",
"Cleveland County",
"Coal County",
"Comanche County",
"Cotton County",
"Craig County",
"Creek County",
"Custer County",
"Delaware County",
"Dewey County",
"Ellis County",
"Garfield County",
"Garvin County",
"Grady County",
"Grant County",
"Greer County",
"Harmon County",
"Harper County",
"Haskell County",
"Hughes County",
"Jackson County",
"Jefferson County",
"Johnston County",
"Kay County",
"Kingfisher County",
"Kiowa County",
"Latimer County",
"Le Flore County",
"Lincoln County",
"Logan County",
"Love County",
"Major County",
"Marshall County",
"Mayes County",
"McClain County",
"McCurtain County",
"McIntosh County",
"Murray County",
"Muskogee County",
"Noble County",
"Nowata County",
"Okfuskee County",
"Oklahoma County",
"Okmulgee County",
"Osage County",
"Ottawa County",
"Pawnee County",
"Payne County",
"Pittsburg County",
"Pontotoc County",
"Pottawatomie County",
"Pushmataha County",
"Roger Mills County",
"Rogers County",
"Seminole County",
"Sequoyah County",
"Stephens County",
"Texas County",
"Tillman County",
"Tulsa County",
"Wagoner County",
"Washington County",
"Washita County",
"Woods County",
"Woodward County",
]
const _titleCase = (s) =>
s
.toLowerCase()
.split(" ")
.map((part) => part[0].toUpperCase() + part.slice(1))
.join(" ")
module.exports = {
aggregate: "county",
country: "iso1:US",
maintainers: [ maintainers.paulboal, maintainers.camjc ],
priority: 1,
state: "iso2:US-OK",
friendly: {
name: "Oklahoma State Department of Health",
},
scrapers: [
{
startDate: "2020-03-21",
crawl: [
{
data: "table",
type: "page",
url: "https://coronavirus.health.ok.gov/",
},
],
scrape ($) {
let counties = []
const $table = $("table[summary='COVID-19 Cases by County']").first()
const $trs = $table.find("tbody").find("tr")
$trs.each((index, tr) => {
const $tr = $(tr)
const countyName = parse.string($tr.find("td:nth-child(1)").text())
const countyObj = {
county: geography.addCounty(parse.string(countyName)),
cases: parse.number($tr.find("td:nth-child(2)").text() || 0),
deaths: parse.number($tr.find("td:nth-child(3)").text() || 0),
}
if (countyObj.county !== "Total County") {
counties.push(countyObj)
}
})
counties = geography.addEmptyRegions(counties, _counties, "county")
counties.push(transform.sumData(counties))
return counties
},
},
{
startDate: "2020-04-30",
crawl: [
{
type: "csv",
url:
"https://storage.googleapis.com/ok-covid-gcs-public-download/oklahoma_cases_county.csv",
},
],
scrape (data) {
let counties = []
data.forEach((item) => {
const county = `${_titleCase(item.County)} County`
const cases = parse.number(item.Cases)
const deaths = parse.number(item.Deaths)
const recovered = parse.number(item.Recovered)
const countyObj = { county, cases, deaths, recovered }
if (countyObj.county === "Total County") {
console.warn(`rejecting ${JSON.stringify(countyObj)}`)
} else {
counties.push(countyObj)
}
})
counties = geography.addEmptyRegions(counties, _counties, "county")
const summedData = transform.sumData(counties)
assert(summedData.cases > 0, "Cases are not reasonable")
counties.push(summedData)
return counties
},
},
],
}
|
package skuber.apiextensions
import play.api.libs.functional.syntax.unlift
import play.api.libs.json.{JsPath, JsResult, JsSuccess, JsValue}
import skuber.ResourceSpecification.StatusSubresource
import skuber.{NonCoreResourceSpecification, ObjectEditor, ObjectMeta, ObjectResource, ResourceDefinition, ResourceSpecification, TypeMeta}
/**
* @author David O'Riordan
*/
case class CustomResourceDefinition(
val kind: String = "CustomResourceDefinition",
override val apiVersion: String = "apiextensions.k8s.io/v1beta1",
val metadata: ObjectMeta,
spec: CustomResourceDefinition.Spec
) extends ObjectResource
object CustomResourceDefinition {
type Spec=NonCoreResourceSpecification
val Spec=NonCoreResourceSpecification
val Scope=ResourceSpecification.Scope
type Names=ResourceSpecification.Names
val Names=ResourceSpecification.Names
type Version = ResourceSpecification.Version
type Subresources = ResourceSpecification.Subresources
type ScaleSubresource = ResourceSpecification.ScaleSubresource
type StatusSubresource = ResourceSpecification.StatusSubresource
val crdNames = Names(
"customresourcedefinitions",
"customresourcedefinition",
"CustomResourceDefinition",
List("crd"))
val specification = NonCoreResourceSpecification(
apiGroup = "apiextensions.k8s.io",
version = "v1beta1",
scope = Scope.Cluster,
names = crdNames)
def apply(
name:String,
kind: String): CustomResourceDefinition = CustomResourceDefinition(name,kind, "v1", Scope.Namespaced, None, Nil)
def apply(
name: String,
kind: String,
scope: Scope.Value): CustomResourceDefinition = CustomResourceDefinition(name, kind, "v1", scope, None, Nil)
def apply(
name: String,
kind: String,
shortNames: List[String]): CustomResourceDefinition = CustomResourceDefinition(name, kind, "v1", Scope.Namespaced, None, shortNames)
def apply(
name: String,
kind: String,
scope: Scope.Value,
shortNames: List[String]): CustomResourceDefinition = CustomResourceDefinition(name, kind, "v1", scope, None, shortNames)
def apply(
name: String,
kind: String,
version: String,
scope: Scope.Value,
singular: Option[String],
shortNames: List[String]): CustomResourceDefinition =
{
val nameParts = name.split('.')
if (nameParts.length < 2)
throw new Exception("name must be of format <plural>.<group>")
val plural=nameParts.head
val group=nameParts.tail.mkString(".")
val names=ResourceSpecification.Names(plural=plural,kind=kind,singular=singular.getOrElse(""),shortNames=shortNames)
val spec=Spec(apiGroup=group,version=version,names=names, scope=scope)
CustomResourceDefinition(metadata=ObjectMeta(name=name), spec=spec)
}
def apply[T <: TypeMeta : ResourceDefinition]: CustomResourceDefinition = {
val crdSpec: Spec = try {
implicitly[ResourceDefinition[T]].spec.asInstanceOf[Spec]
} catch {
case ex: ClassCastException =>
val msg = "Requires an implicit resource definition that has a NonCoreResourceSpecification"
throw new skuber.K8SException(skuber.api.client.Status(message = Some(msg)))
}
val name=s"${crdSpec.names.plural}.${crdSpec.group.get}"
new CustomResourceDefinition(metadata=ObjectMeta(name=name), spec=crdSpec)
}
implicit val crdDef = new ResourceDefinition[CustomResourceDefinition] { def spec=specification }
implicit val crdListDef = new ResourceDefinition[CustomResourceDefinitionList] { def spec=specification }
implicit val crdEditor = new ObjectEditor[CustomResourceDefinition] {
override def updateMetadata(obj: CustomResourceDefinition, newMetadata: ObjectMeta) = obj.copy(metadata = newMetadata)
}
// json formatters for sending/receiving CRD resources
import play.api.libs.json.{Json, Format}
import play.api.libs.functional.syntax._
import skuber.json.format.{enumFormat,enumFormatMethods, maybeEmptyFormatMethods, objectMetaFormat}
implicit val scopeFormat = enumFormat(Scope)
implicit val namesFormat = (
(JsPath \ "plural").format[String] and
(JsPath \ "singular").format[String] and
(JsPath \ "kind").format[String] and
(JsPath \ "shortNames").formatMaybeEmptyList[String] and
(JsPath \ "listKind").formatNullable[String] and
(JsPath \ "categories").formatMaybeEmptyList[String]
)(Names.apply _, unlift(Names.unapply))
implicit val versionFormat: Format[ResourceSpecification.Version] = (
(JsPath \ "name").format[String] and
(JsPath \ "served").formatMaybeEmptyBoolean() and
(JsPath \ "storage").formatMaybeEmptyBoolean()
)(ResourceSpecification.Version.apply _, unlift(ResourceSpecification.Version.unapply))
implicit val scaleSubresourceFmt: Format[ScaleSubresource] = Json.format[ScaleSubresource]
implicit val statusSubResourceFmt: Format[StatusSubresource] = new Format[StatusSubresource] {
override def writes(o: StatusSubresource): JsValue = Json.obj()
override def reads(json: JsValue): JsResult[StatusSubresource] = JsSuccess(StatusSubresource())
}
implicit val subresourcesFmt: Format[Subresources] = Json.format[Subresources]
implicit val crdSpecFmt: Format[Spec] = (
(JsPath \ "group").format[String] and
(JsPath \ "version").formatNullable[String] and
(JsPath \ "versions").formatMaybeEmptyList[Version] and
(JsPath \ "scope").formatEnum(Scope) and
(JsPath \ "names").format[Names] and
(JsPath \ "subresources").formatNullable[Subresources]
)(Spec.apply _, unlift(Spec.unapply))
implicit val crdFmt: Format[CustomResourceDefinition] = (
(JsPath \ "kind").format[String] and
(JsPath \ "apiVersion").format[String] and
(JsPath \ "metadata").format[ObjectMeta] and
(JsPath \ "spec").format[Spec]
)(CustomResourceDefinition.apply _,unlift(CustomResourceDefinition.unapply))
}
|
//------------------------------------------------------------------------------
// <auto-generated>
// This code was generated by a tool.
//
// Changes to this file may cause incorrect behavior and will be lost
// if the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
namespace Studio.JZY.Doc {
public partial class DaEdit {
/// <summary>
/// Head1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.HtmlControls.HtmlHead Head1;
/// <summary>
/// form1 control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.HtmlControls.HtmlForm form1;
/// <summary>
/// txtWH control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtWH;
/// <summary>
/// txtBT control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtBT;
/// <summary>
/// txtFBT control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtFBT;
/// <summary>
/// txtND control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtND;
/// <summary>
/// txtJGH control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtJGH;
/// <summary>
/// txtZRR control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtZRR;
/// <summary>
/// txtBMMC control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtBMMC;
/// <summary>
/// txtGSMC control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtGSMC;
/// <summary>
/// txtPos control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtPos;
/// <summary>
/// hidPos control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidPos;
/// <summary>
/// txtCtl control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtCtl;
/// <summary>
/// hidCtl control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidCtl;
/// <summary>
/// txtGB control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtGB;
/// <summary>
/// hidGB control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidGB;
/// <summary>
/// txtZD control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtZD;
/// <summary>
/// hidZD control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidZD;
/// <summary>
/// txtWBS control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtWBS;
/// <summary>
/// hidWBS control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidWBS;
/// <summary>
/// txtAQ control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtAQ;
/// <summary>
/// hidAQ control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidAQ;
/// <summary>
/// txtZL control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtZL;
/// <summary>
/// hidZL control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidZL;
/// <summary>
/// txtJS control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtJS;
/// <summary>
/// hidJS control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidJS;
/// <summary>
/// txtProjectName control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtProjectName;
/// <summary>
/// hidProjectID control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidProjectID;
/// <summary>
/// hidProjectCode control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidProjectCode;
/// <summary>
/// txtKeyWord control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtKeyWord;
/// <summary>
/// txtReMark control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.TextBox txtReMark;
/// <summary>
/// hidTableName control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidTableName;
/// <summary>
/// hidAppID control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidAppID;
/// <summary>
/// hidInstanceID control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidInstanceID;
/// <summary>
/// hidWorkflowID control.
/// </summary>
/// <remarks>
/// Auto-generated field.
/// To modify, move the field declaration from the designer file to the code-behind file.
/// </remarks>
protected global::System.Web.UI.WebControls.HiddenField hidWorkflowID;
}
}
|
import React from 'react';
import IconBase from '@suitejs/icon-base';
function MdBookmark(props) {
return (
<IconBase viewBox="0 0 48 48" {...props}>
<path d="M34 6H14c-2.21 0-3.98 1.79-3.98 4L10 42l14-6 14 6V10c0-2.21-1.79-4-4-4z" />
</IconBase>
);
}
export default MdBookmark;
|
// Simple compiler test.
public final class Where implements support.Waldo
{
int WhereIsWaldo ()
{
// The compiler should find 'here' in support.Waldo.
return here;
}
}
|
import java.util.{Arrays, Properties}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import util.{EmbeddedKafkaServer, PartitionMapAnalyzer, SimpleKafkaClient}
/**
* A single stream subscribing to the two topics receives data from both of them.
* The partitioning behavior here is quite interesting, as the topics have three and six partitions respectively,
* each RDD has nine partitions, and each RDD partition receives data from exactly one partition of one topic.
*
* Partitioning is analyzed using the PartitionMapAnalyzer.
*/
object MultipleTopics {
def main (args: Array[String]) {
val topic1 = "foo"
val topic2 = "bar"
// topics are partitioned differently
val kafkaServer = new EmbeddedKafkaServer()
kafkaServer.start()
kafkaServer.createTopic(topic1, 3)
kafkaServer.createTopic(topic2, 6)
val conf = new SparkConf().setAppName("MultipleTopics").setMaster("local[10]")
val sc = new SparkContext(conf)
// streams will produce data every second
val ssc = new StreamingContext(sc, Seconds(1))
// this many messages
val max = 100
// Create the stream.
val props: Properties = SimpleKafkaClient.getBasicStringStringConsumer(kafkaServer)
val kafkaStream =
KafkaUtils.createDirectStream(
ssc,
LocationStrategies.PreferConsistent,
ConsumerStrategies.Subscribe[String, String](
Arrays.asList(topic1, topic2),
props.asInstanceOf[java.util.Map[String, Object]]
)
)
// now, whenever this Kafka stream produces data the resulting RDD will be printed
kafkaStream.foreachRDD(r => {
println("*** got an RDD, size = " + r.count())
PartitionMapAnalyzer.analyze(r)
})
ssc.start()
println("*** started streaming context")
// streams seem to need some time to get going
Thread.sleep(5000)
val producerThreadTopic1 = new Thread("Producer thread 1") {
override def run() {
val client = new SimpleKafkaClient(kafkaServer)
val numbers = 1 to max
val producer = new KafkaProducer[String, String](client.basicStringStringProducer)
numbers.foreach { n =>
// NOTE:
// 1) the keys and values are strings, which is important when receiving them
// 2) We don't specify which Kafka partition to send to, so a hash of the key
// is used to determine this
producer.send(new ProducerRecord(topic1, "key_1_" + n, "string_1_" + n))
}
}
}
val producerThreadTopic2 = new Thread("Producer thread 2; controlling termination") {
override def run() {
val client = new SimpleKafkaClient(kafkaServer)
val numbers = 1 to max
val producer = new KafkaProducer[String, String](client.basicStringStringProducer)
numbers.foreach { n =>
// NOTE:
// 1) the keys and values are strings, which is important when receiving them
// 2) We don't specify which Kafka partition to send to, so a hash of the key
// is used to determine this
producer.send(new ProducerRecord(topic2, "key_2_" + n, "string_2_" + n))
}
Thread.sleep(10000)
println("*** requesting streaming termination")
ssc.stop(stopSparkContext = false, stopGracefully = true)
}
}
producerThreadTopic1.start()
producerThreadTopic2.start()
try {
ssc.awaitTermination()
println("*** streaming terminated")
} catch {
case e: Exception => {
println("*** streaming exception caught in monitor thread")
}
}
// stop Spark
sc.stop()
// stop Kafka
kafkaServer.stop()
println("*** done")
}
}
|
#![no_std]
/**
Copyright (c) 2020, Todd Stellanova
All rights reserved.
License: See LICENSE file
*/
// use chrono::{DateTime, FixedOffset, TimeZone};
mod fetcher;
// use fetcher::FileFetcher;
pub mod metadata;
use metadata::SignatureMethod;
use crate::metadata::{KeyContainer, SignatureContainer, MetadataFormat};
/// Unless otherwise noted, this implementation is intended to comply
/// with the "Uptane Standard for Design and Implementation" version 1.0.1
/// originally obtained [here](https://uptane.github.io/uptane-standard/uptane-standard.html).
/// Comments will frequently refer to section numbers (eg "5.4.4") in that standard.
///
/// From Uptane standard: 5.4.4. Metadata verification procedures:
/// - A Primary ECU MUST perform full verification of metadata.
/// - A Secondary ECU SHOULD perform full verification of metadata.
/// If a Secondary cannot perform full verification, it SHALL perform partial verification instead.
/// If a step in the following workflows does not succeed
/// (e.g., the update is aborted because a new metadata file was not signed),
/// an ECU SHOULD still be able to update again in the future.
/// Errors raised during the update process SHOULD NOT leave ECUs in an unrecoverable state.
/// This library supports full verification of metadata.
/// Full verification of metadata means that we check that the Targets metadata
/// about images from the Director repository matches the Targets metadata about the
/// same images from the Image repository. This provides resilience to a single key compromise.
/// Errors in this crate
#[derive(Debug)]
pub enum Error {
/// Invalid key format
InvalidKeyFormat,
/// Unsupported signing method
UnsupportedSigningMethod,
/// Unsupported metadata format (e.g., JSON)
UnsupportedMetadataFormat,
/// Could not verify metadata
MetadataVerification,
/// Invalid signature
SignatureInvalid,
/// Invalid Hash
HashInvalid,
}
pub struct Verifier {}
impl Verifier {
/// Checks whether the signer (whose keys are provided) signed the given object
/// to produce the given signature.
///
/// - `key` The signer's key
/// - `sig` The signature to be verified
/// - `data` Data object used by create_signature() to generate the signature.
/// - `metadata` the metadata to be verified
/// - `format` the format of the metadata
///
/// Returns Ok if verified, errors if not
pub fn verify_signature_over_metadata(
key: &KeyContainer,
sig: &SignatureContainer,
data: &[u8],
metadata: &[u8],
format: MetadataFormat,
) -> Result<(), crate::Error> {
//TODO call verify_signature
Ok(())
}
/// Verify signature for data
fn verify_signature(public_key: &[u8],
method: SignatureMethod,
signature: &[u8],
data: &[u8]
) -> Result<(), crate::Error> {
// DateTime::parse_from_rfc3339()
match method {
SignatureMethod::RsaSsaPss => {
}
SignatureMethod::Ed25519 => {
}
SignatureMethod::NaCl => {
}
}
Ok(())
}
/// Verify that the Targets metadata about an image, from the Director repository,
/// matches the Targets metadata about the same image from the Image repository.
/// This fulfills Uptane "Full Verification" requirements
pub fn full_image_verification(director_targets: u32, image_targets: u32) -> Result<(), crate::Error> {
// Verify that Targets metadata from the Director and Image repositories match.
// A Primary ECU MUST perform this check on metadata for all images listed in
// the Targets metadata file from the Director repository downloaded in step 6.
//
// A Secondary ECU MAY elect to perform this check only on the metadata for the
// image it will install. (That is, the target metadata from the Director that
// contains the ECU identifier of the current ECU.)
// To check that the metadata for an image matches, complete the following procedure:
// - Locate and download a Targets metadata file from the Image repository that contains
// an image with exactly the same file name listed in the Director metadata, following
// the procedure in Section 5.4.4.7.
// - Check that the Targets metadata from the Image repository matches the Targets metadata
// from the Director repository:
// - Check that the non-custom metadata (i.e., length and hashes) of the unencrypted or
// encrypted image are the same in both sets of metadata. Note: the Primary is responsible
// for validating encrypted images and associated metadata. The target
// ECU (Primary or Secondary) is responsible for validating the unencrypted image and
// associated metadata.
// - Check that all “MUST match” custom metadata (e.g., hardware identifier and
// release counter) are the same in both sets of metadata.
// - Check that the release counter in the previous Targets metadata file is less than
// or equal to the release counter in this Targets metadata file.
Ok(())
}
}
|
package in.parsel.pvr.model;
import java.util.List;
/**
* Created by patch on 22/03/15.
*/
public class BaseOutput {
public static final String SUC_RESP_CODE = "1000";
public static final String FAL_RESP_CODE = "1001";
public static final String LOGIN_REQUIRED = "1002";
private String responseCode;
private String responseMessage;
private List<Error> errors;
private boolean isValid;
public String getResponseMessage() {
return responseMessage;
}
public void setResponseMessage(String responseMessage) {
this.responseMessage = responseMessage;
}
public void setResponseCode(String responseCode) { this.responseCode = responseCode; }
public void setErrors(List<Error> errors) { this.errors = errors; }
public void setIsValid(boolean isValid) {this.isValid = isValid;}
public String getResponseCode() { return responseCode; }
public List<Error> getErrors() { return errors; }
public boolean isValid() {return isValid;}
}
|
function enviarDatos(document) {
var nombre = document.getElementById("nombre").value;
$.ajaxSetup({
headers:{
'X-CSRF-TOKEN':$('meta[name="csrf-token"]').attr('content')
}
});
$.ajax({
method: "POST",
url: "{{route('agregarorigen.index')}}",
data: {nombre:nombre},
success: function (respuesta) {
alert(respuesta);
},
error: function () {
alert("ERROR: ELLA NO TE AMA");
}
});
document.getElementById("nombre").value = "";
}
|
export class Errors {
public static ErrorMessages = {
INVALID_PASSWORD: 'Password is invalid',
INVALID_EMAIL: 'Provided email is invalid',
INVALID_ACCESS: 'Invalid Access',
INVALID_TOKEN: 'Invalid Token',
USER_NOT_FOUND: 'User not found',
USER_EXISTS: 'User already exists',
};
}
|
package config
import (
"github.com/akmamun/gin-boilerplate-examples/pkg/logger"
"github.com/spf13/viper"
)
type Configuration struct {
Server ServerConfiguration
Database DatabaseConfiguration
}
// SetupConfig configuration
func SetupConfig() error {
var configuration *Configuration
viper.SetConfigFile(".env")
if err := viper.ReadInConfig(); err != nil {
		logger.Errorf("Error reading config file, %s", err)
return err
}
err := viper.Unmarshal(&configuration)
if err != nil {
		logger.Errorf("error decoding config, %v", err)
return err
}
return nil
}
|
package com.example.internal.dummyRealization
import com.example.*
import com.example.models.ChatMessage
import com.example.models.SendMessageRequest
import java.util.*
class InMemoryMessageStorage(private val userStorage: UserStorage) : MessageStorage {
private val storage = hashMapOf<Participants, MutableList<ChatMessage>>()
var nextId = 0
override fun getChatsById(id: Int): List<Int> {
// FIXME?
userStorage.getChatIds(id).map { Participants(id, it) }.forEach {
storage.putIfAbsent(it, arrayListOf())
}
val result = mutableListOf<Int>()
for ((participants, _) in storage) {
if (id == participants.first) {
result.add(participants.second)
} else if (id == participants.second) {
result.add(participants.first)
}
}
return result
}
override fun getMessagesForChat(chatId: ChatId): List<ChatMessage>? {
return storage[chatId]?.toList()
}
override fun sendMessage(senderId: Int, message: SendMessageRequest) {
val receiverId = userStorage.getUserId(message.receiver) ?: error("invalid receiver username")
val chatId = ChatId(senderId, receiverId)
storage.putIfAbsent(chatId, mutableListOf())
storage[chatId]?.add(
ChatMessage(
nextId,
message.text,
FORMATTER.format(Date(System.currentTimeMillis())),
senderId,
receiverId
)
)
nextId++
}
}
|
################################################################################
## UTILITIES
################################################################################
function exec($cmd) {
& $cmd
if ($lastexitcode -ne 0) {
exit 1
}
}
################################################################################
## SCRIPT
################################################################################
$Configuration = "Debug"
# Cleaning...
Remove-Item "artifacts" -Force -Recurse -ErrorAction Ignore
Get-ChildItem -Path . -Filter "*.sln" | ForEach-Object {
$project = $_.FullName
Write-Host "Cleaning $project..." -ForegroundColor Blue
exec {
& dotnet clean "$project" `
--configuration $Configuration
}
Write-Host
}
# Restoring...
Get-ChildItem -Path . -Filter "*.sln" | ForEach-Object {
$project = $_.FullName
Write-Host "Restoring $project..." -ForegroundColor Blue
exec {
& dotnet restore "$project"
}
Write-Host
}
# Building...
Get-ChildItem -Path . -Filter "*.sln" | ForEach-Object {
$project = $_.FullName
Write-Host "Building $project..." -ForegroundColor Blue
exec {
& dotnet build "$project" `
--nologo `
--no-restore `
--configuration $Configuration
}
Write-Host
}
# Publishing App...
Get-ChildItem -Path "src" -Filter "*.App.csproj" -Recurse | ForEach-Object {
$project = $_.FullName
$output = "artifacts/app"
Write-Host "Publishing $project..." -ForegroundColor Blue
exec {
& dotnet publish "$project" `
--nologo `
--no-build `
--configuration $Configuration `
--output $output
}
Write-Host
}
# Publishing Tests...
Get-ChildItem -Path "tests" -Filter "*.Tests.csproj" -Recurse | ForEach-Object {
$project = $_.FullName
$output = "artifacts/tests"
Write-Host "Publishing $project..." -ForegroundColor Blue
exec {
& dotnet publish "$project" `
--nologo `
--no-build `
--configuration $Configuration `
--output $output
}
Write-Host
}
# Building images
Get-ChildItem -Path "dockerfiles" -Filter "*.Dockerfile" | ForEach-Object {
$file = $_.FullName
$tag = "sandbox-$($_.BaseName.ToLower())"
Write-Host "Building image $tag from $file..." -ForegroundColor Blue
exec {
& docker build . `
--file $file `
--tag $tag
}
# Write-Host
# Write-Host "Pushing image $tag..." -ForegroundColor Blue
# exec {
# & docker push $tag
# }
Write-Host
}
|
using System.Text;
namespace Zektor.Protocol {
public abstract class ZektorControlCommand : ZektorCommand {
protected abstract string Command { get; }
public bool IsQueryResponse { get; protected set; }
public bool IsQueryRequest { protected get; set; }
protected override bool Parse(string data) {
if (data.Length < Command.Length) return false;
int idx = 0;
            // lines either start with '=' followed by the command,
if (data[0] == '=') {
IsQueryResponse = true;
idx++;
}
// or lines start directly with command
if (!data.Substring(idx).StartsWith(Command)) return false;
idx += Command.Length; // skip command name
// after command, either a space or '.ch' should follow
char nextCh = data[idx];
if (nextCh != '.' && nextCh != ' ') return false;
// most commands have a space directly after the name, if so, skip it
if (data[idx] == ' ') idx++;
if (idx > 0)
data = data.Remove(0, idx);
return ParseCommand(data);
}
protected override string Format() {
StringBuilder sb = new StringBuilder();
if (IsQueryResponse) sb.Append('=');
sb.Append(Command);
if (this is IHasChannel ich) {
if (ich.Channels != ChannelBitmap.All)
sb.AppendFormat(".{0}", (int)ich.Channels);
}
sb.Append(' ');
FormatCommand(sb);
return sb.ToString();
}
/// <summary>
/// Format command to StringBuilder.
/// Do not append '=' for responses.
/// Do not append Command name or initial space
/// </summary>
/// <param name="sb"></param>
protected abstract void FormatCommand(StringBuilder sb);
/// <summary>
/// Parse command from string.
/// Prefix and (optional) '=' prefix are already stripped.
/// </summary>
/// <param name="cmd"></param>
protected abstract bool ParseCommand(string cmd);
protected static ChannelBitmap ConsumeChannel(ref string cmd) {
ChannelBitmap ret = ChannelBitmap.All;
// if first character is a '.' then the channel follows next as 3-digit number
if (cmd[0] == '.') {
ret = (ChannelBitmap)int.Parse(cmd.Substring(1, 3));
cmd = cmd.Substring(5);
}
return ret;
}
}
public interface IHasChannel {
ChannelBitmap Channels { get; set; }
}
}
|
#include <iostream>
#include <algorithm>
#include "spell.hpp"
namespace GitGud
{
bool SpellChecker::initSpellChecker(const std::string &aff, const std::string &dic)
{
if (m_initialised)
return false;
m_spell = new Hunspell(aff.c_str(), dic.c_str());
// std::cout << "Initialised spell checker" << std::endl;
m_initialised = true;
return true;
}
//////////////////////////////////////////////////////////////////////////////
bool SpellChecker::spellingError(const std::string &word)
{
if (word.find(".") != std::string::npos || word.find("()") != std::string::npos)
return false;
std::string raw;
std::remove_copy_if(word.begin(), word.end(),
std::back_inserter(raw),
                        [](unsigned char c) { return std::ispunct(c) != 0; }
);
if (std::all_of(raw.begin(), raw.end(), isupper))
return false;
return (m_spell->spell(raw) == 0);
}
//////////////////////////////////////////////////////////////////////////////
std::vector<std::string> SpellChecker::spellingSuggestion(const std::string &word)
{
std::string raw;
std::remove_copy_if(word.begin(), word.end(),
std::back_inserter(raw),
                        [](unsigned char c) { return std::ispunct(c) != 0; }
);
return m_spell->suggest(raw);
}
//////////////////////////////////////////////////////////////////////////////
SpellChecker::~SpellChecker()
{
if (m_initialised)
delete m_spell;
}
}
|
#!/bin/bash
[[ $(id -u) = "0" ]] || { echo "Please run the script as root user. sudo $0"; exit; }
# Install dependencies
apt-get -y install python3 python3-rpi.gpio python3-serial apache2 apache2-utils libapache2-mod-wsgi-py3
# Enable module wsgi
a2enmod wsgi
# Copy api folder to /var/www/ (don't put it under html folder, or the source code might be exposed)
cp -r rpiapi/ /var/www/
# Copy configuration file to apache2 directory conf-enabled
cp rpiapi/rpiapi.conf /etc/apache2/conf-enabled/
# Add user www-data to the gpio group (So that the API can access the GPIO pins)
adduser www-data gpio
# Restart apache2 in order to reload configurations
systemctl restart apache2
# Clear the screen
clear
echo -e "Enter the new password for the user admin \n\n\n"
# Change the password for the admin user (default: admin)
htpasswd /var/www/rpiapi/.htpasswd admin
ip=$(hostname -I | cut -d " " -f1)
echo "All set."
echo "You can check the API at http://$ip/rpiapi"
|
package org.community.scheduler.service.impl;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.community.scheduler.entity.TextEntity;
import org.community.scheduler.repository.api.ITextRepository;
import org.community.scheduler.service.api.ITextService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import lombok.extern.log4j.Log4j2;
/**
* @author tudor.codrea
*
*/
@Service
@Log4j2
public class TextService implements ITextService {
@Autowired
private ITextRepository textRepository;
public List<TextEntity> findAllBetweenTextDates(String startDate, String endDate) {
List<TextEntity> retList = new ArrayList<>();
SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
try {
Date startingDate = df.parse(startDate);
			Date endingDate = df.parse(endDate);
retList = textRepository.findAllBetweenTextDates(startingDate, endingDate);
} catch (Exception e) {
log.error("Failed to findAllBetweenTextDates:" + e);
}
return retList;
}
public List<TextEntity> findAllBetweenTextDates(Date startDate, Date endDate) {
return textRepository.findAllBetweenTextDates(startDate, endDate);
}
@Override
public TextEntity insert(TextEntity te) {
return textRepository.save(te);
}
@Override
public List<TextEntity> insertList(List<TextEntity> tes) {
return textRepository.saveAll(tes);
}
}
|
#!/usr/bin/env bash
function suman_trap_and_kill_child_jobs {
trap 'jobs -p | xargs kill -9' SIGINT SIGTERM EXIT
}
|
-- 2021-03-11T14:27:57.620Z
-- I forgot to set the DICTIONARY_ID_COMMENTS System Configurator
UPDATE AD_Column SET DefaultValue='', IsMandatory='N',Updated=TO_TIMESTAMP('2021-03-11 16:27:57','YYYY-MM-DD HH24:MI:SS'),UpdatedBy=100 WHERE AD_Column_ID=573001
;
-- 2021-03-11T14:28:00.859Z
-- I forgot to set the DICTIONARY_ID_COMMENTS System Configurator
INSERT INTO t_alter_column values('i_bankstatement','DebitorOrCreditorId','NUMERIC(10)',null,null)
;
-- 2021-03-11T14:28:00.864Z
-- I forgot to set the DICTIONARY_ID_COMMENTS System Configurator
INSERT INTO t_alter_column values('i_bankstatement','DebitorOrCreditorId',null,'NULL',null)
;
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace GalaxyGen.Model
{
public enum TypeEnum
{
Planet,
Ship,
Agent
}
}
|
/*
* keyring_overlay.go
*
* Copyright 2018-2021 Bill Zissimopoulos
*/
/*
* This file is part of golib.
*
* It is licensed under the MIT license. The full license text can be found
* in the License.txt file at the root of this project.
*/
package keyring
import (
"fmt"
"sync"
"github.com/billziss-gh/golib/errors"
)
// OverlayKeyring is a keyring that stores passwords in a hierarchy of keyrings.
type OverlayKeyring struct {
Keyrings []Keyring
mux sync.Mutex
}
func (self *OverlayKeyring) Get(service, user string) (string, error) {
self.mux.Lock()
defer self.mux.Unlock()
for _, k := range self.Keyrings {
v, err := k.Get(service, user)
if nil == err {
return v, nil
}
}
return "", errors.New(fmt.Sprintf("cannot get key %s/%s", service, user), nil, ErrKeyring)
}
func (self *OverlayKeyring) Set(service, user, pass string) error {
self.mux.Lock()
defer self.mux.Unlock()
for _, k := range self.Keyrings {
return k.Set(service, user, pass)
}
return errors.New(fmt.Sprintf("cannot set key %s/%s", service, user), nil, ErrKeyring)
}
func (self *OverlayKeyring) Delete(service, user string) error {
self.mux.Lock()
defer self.mux.Unlock()
for _, k := range self.Keyrings {
return k.Delete(service, user)
}
return errors.New(fmt.Sprintf("cannot delete key %s/%s", service, user), nil, ErrKeyring)
}
|
$scriptPath = Split-Path $MyInvocation.MyCommand.Path -Parent
Import-Module (Join-Path $scriptPath 'out\module\TfsCmdlets.psd1')
Get-Module TfsCmdlets
|
---
path: /carmodels
carimage: /img/bmw.jpg
title: BMW
text: >-
  Bayerische Motoren Werke AG (FWB: BMW) (BMW) is a German car, motorcycle and
  engine manufacturer founded in 1916. BMW is headquartered in Munich, Germany.
  The group also owns the Mini brand and is the parent company of Rolls-Royce Motor.
---
|
package hclu.hreg.dao
import java.util.UUID
import hclu.hreg.dao.sql.SqlDatabase
case class DocRecipient(docId: UUID, recipientId: UUID)
trait SqlDocRecipientSchema extends SqlContactSchema {
this: SqlDocSchema =>
protected val database: SqlDatabase
import database.driver.api._
protected val DocRecipientQuery = TableQuery[DocRecipientTable]
protected class DocRecipientTable(tag: Tag) extends Table[DocRecipient](tag, "DOCS_RECIPIENTS") {
def docId = column[UUID]("DOC_ID")
def recipientId = column[UUID]("RECIPIENT_ID")
def * = (docId, recipientId) <> (DocRecipient.tupled, DocRecipient.unapply)
def docFK = foreignKey("DOC_RECIPIENT_DOC_FK", docId, docs)(d => d.id)
def recipientFK = foreignKey("DOC_RECIPIENT_CONTACT_FK", recipientId, contacts)(r => r.id)
}
}
|
import Vue from 'vue'
import MyTestableComponent from 'src/components/MyTestableComponent'
function getRenderedText (Component, propsData) {
const Ctor = Vue.extend(Component)
const vm = new Ctor({ propsData }).$mount()
return vm.$el.textContent
}
const component = new Vue(MyTestableComponent)
describe('MyTestableComponent', () => {
it('has a created hook', () => {
expect(typeof MyTestableComponent.created)
.to.equal('function')
})
it('render correctly with no props', () => {
expect(getRenderedText(MyTestableComponent, {}))
.to.equal('Static text')
})
it('render correctly with different props', () => {
expect(getRenderedText(MyTestableComponent, { msg: 'Hello' }))
.to.equal('Static text, props text: Hello')
expect(getRenderedText(MyTestableComponent, { msg: 'Bye' }))
.to.equal('Static text, props text: Bye')
})
it('can call a function', () => {
expect(component.someMethod())
.to.equal('someValue from someMethod')
})
})
|
module Spree
module Admin
module OrdersControllerDecorator
def index
if params.dig(:q, :delivery_date_gt).present?
params[:q][:delivery_date_gt] = begin
parse_date_param(:delivery_date_gt).beginning_of_day
rescue StandardError
""
end
end
if params.dig(:q, :delivery_date_lt).present?
params[:q][:delivery_date_lt] = begin
parse_date_param(:delivery_date_lt).end_of_day
rescue StandardError
""
end
end
super
end
private
def parse_date_param(param)
Time.zone.parse(params[:q][param])
end
end
end
end
Spree::Admin::OrdersController.prepend Spree::Admin::OrdersControllerDecorator
Spree::Order.whitelisted_ransackable_attributes << 'delivery_date'
|
import 'package:interactivesso_backendcore/interactivesso_backendcore.dart';
void main() {
print('example!');
}
|
#include "CameraMap.hpp"
#include "Stonk/OpenGl.hpp"
using Shay::CameraMap;
/**
* @brief Displays a map of murdoch with a pointer showing the player position
* @param screenWidth The current width of the screen
* @param screenHeight The current height of the screen
* @param xPos The x position of the player
* @param zPos The z position of the player
* @param tempImage The enum number of the texture to display
*/
void CameraMap::DisplayMap(int screenWidth, int screenHeight, GLfloat xPos,
GLfloat zPos, GLuint tempImage) {
GLfloat tempX = xPos / 163.0f - 2096.0f / 163.0f;
GLfloat tempZ = zPos / 164.0f - 4688.0f / 164.0f;
glPushMatrix();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, screenWidth, 0, screenHeight);
glScalef(1, -1, 1);
    // move the origin from the bottom left corner
// to the upper left corner
glTranslatef(0, static_cast<GLfloat>(-screenHeight), 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// display the cursor of the camera position
glBegin(GL_QUADS);
glVertex3f(219.0f - tempX - 2.0f, 256.0f - tempZ - 2.0f, 0.0f);
glVertex3f(219.0f - tempX + 2.0f, 256.0f - tempZ - 2.0f, 0.0f);
glVertex3f(219.0f - tempX + 2.0f, 256.0f - tempZ + 2.0f, 0.0f);
glVertex3f(219.0f - tempX - 2.0f, 256.0f - tempZ + 2.0f, 0.0f);
glEnd();
// display map
glBindTexture(GL_TEXTURE_2D, tempImage);
glCallList(448);
// Reset Perspective Projection
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
}
/**
* @brief Displays the welcome or exit screen
* @param screenWidth The current width of the screen
* @param screenHeight The current height of the screen
* @param textureNum The enum value for the texture
*/
void CameraMap::DisplayWelcomeScreen(int screenWidth, int screenHeight,
GLuint textureNum) {
glPushMatrix();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, screenWidth, 0, screenHeight);
glScalef(1, -1, 1);
// move to centre of screen
glTranslatef(screenWidth / 2.0f - 256.0f, -screenHeight / 2.0f - 256.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// display exit screen or welcome screen
glBindTexture(GL_TEXTURE_2D, textureNum);
// display image
glCallList(449);
// Reset Perspective Projection
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
}
/**
* @brief Displays the no exit dialog
* @param screenWidth The current width of the screen
* @param screenHeight The current height of the screen
* @param textureNum The enum value for the texture
*/
void CameraMap::DisplayNoExit(int screenWidth, int screenHeight, GLuint textureNum) {
glPushMatrix();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, screenWidth, 0, screenHeight);
glScalef(1, -1, 1);
// move to centre of screen
glTranslatef(screenWidth / 2.0f - 128.0f, -screenHeight / 2.0f - 32.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// display sign
glBindTexture(GL_TEXTURE_2D, textureNum);
// display image
glCallList(454);
// Reset Perspective Projection
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
}
|
<?php
require_once "../setup.php";
init('../');
session_start();
if(!isset($_SESSION["rr_admin"]) || !$_SESSION["rr_admin"]){
header('Location: login.php');
exit();
}
echo '<html><head><meta name="viewport" content="width=device-width, initial-scale=1"></head><body>';
echo '<style>table {border-collapse: collapse;}table, th, td {border: 1px solid black;}</style>';
$query = $DB->query("SELECT * FROM r18_teams WHERE t_start_position<=99");
if($query->num_rows == 0){
exit("Inga lag");
}
$teamResArr = [];
while($row = $query->fetch_assoc()){
$teamResArr[] = new TeamResult($row);
}
function cmp($a, $b){
    // usort comparators must return an int; the spaceship operator does that
    return $a->result <=> $b->result;
    //return strcmp($a->getStartFinishDiffMinutes(), $b->getStartFinishDiffMinutes());
}
usort($teamResArr, "cmp");
echo '<p>P=Placering. Nr=Startnr. T=Start till mål-tid. S=Stål. H=Häftig. R=Resultat. Hj=Hjälprebusar/lösningar</p>';
echo '<table>';
echo '<th>P</th>';
echo '<th>Nr</th>';
//echo '<th></th>';
//echo '<th></th>';
echo '<th>T</th>';
echo '<th>S</th>';
echo '<th>H</th>';
echo '<th>L</th>';
echo '<th>Hj</th>';
echo '<th>R</th>';
foreach($teamResArr as $k => $teamRes){
echo "<tr>";
echo "<td>$k</td>";
echo "<td>".$teamRes->getStartPosition()."</td>";
//echo "<td>".$teamRes->getTsStart2()."</td>";
//echo "<td>".$teamRes->getTsFinish()."</td>";
echo "<td>".$teamRes->getStartFinishDiffMinutes()."</td>";
echo "<td>".$teamRes->getCorrStal()."</td>";
echo "<td>".$teamRes->getCorrHaftig()."</td>";
echo "<td>".$teamRes->getNrLocked()."</td>";
echo "<td>".$teamRes->getHelpBan()."</td>";
$res = $teamRes->computeResult();
$teamRes->setResult($res);
echo "<td>$res</td>";
echo "</tr>";
}
echo '</table>';
echo '</body></html>';
|
<?php namespace App\Commands;
use App\Image\Image;
use App\Commands\Command;
use App\Image\ImageRepositoryInterface;
use Intervention\Image\Facades\Image as IntervetionImage;
use Illuminate\Contracts\Bus\SelfHandling;
class UploadImageCommand extends Command implements SelfHandling {
/**
* @var
*/
private $file;
/**
* @var
*/
private $name;
/**
* Create a new command instance.
*
* @param $file
* @param $name
* @return \App\Commands\UploadImageCommand
*/
public function __construct($file, $name)
{
//
$this->file = $file;
$this->name = $name;
}
/**
* Execute the command.
*
* @param ImageRepositoryInterface $repo
* @return void
*/
public function handle(ImageRepositoryInterface $repo)
{
        // resize the image to a width of 200px (keeping aspect ratio) and encode it as a data URL
$data = (string) IntervetionImage::make($this->file->getRealPath())
->resize(200,null, function($constraint){
$constraint->aspectRatio();
})->encode('data-url');
//$content = file_get_contents($this->file);
//$base64 = base64_encode($content);
$image = new Image();
$image->image_name = $this->name;
$image->image_base_64 = $data;
$repo->createImage($image);
}
}
|
/*
Navicat MySQL Data Transfer
Source Server : 127.0.0.1 - MANTIS
Source Server Version : 50621
Source Host : 127.0.0.1:3306
Source Database : todoapp
Target Server Type : MYSQL
Target Server Version : 50621
File Encoding : 65001
Date: 2018-04-12 15:34:15
*/
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for migrations
-- ----------------------------
DROP TABLE IF EXISTS `migrations`;
CREATE TABLE `migrations` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`migration` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`batch` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
-- ----------------------------
-- Records of migrations
-- ----------------------------
INSERT INTO `migrations` VALUES ('1', '2018_04_06_144032_create_todos_table', '1');
-- ----------------------------
-- Table structure for todos
-- ----------------------------
DROP TABLE IF EXISTS `todos`;
CREATE TABLE `todos` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(500) COLLATE utf8_unicode_ci NOT NULL,
`priority` tinyint(3) unsigned NOT NULL,
`location` varchar(273) COLLATE utf8_unicode_ci NOT NULL,
`time_start` time NOT NULL,
`username` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`password` varchar(20) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
-- ----------------------------
-- Records of todos
-- ----------------------------
INSERT INTO `todos` VALUES ('1', 'Mohammad Iqbal', '5', 'bandung', '09:00:00', 'iqbal', '123456');
|
package muxes
import scala.math._
import chisel3._
import chisel3.Module
import chipsalliance.rocketchip.config._
import dandelion.config._
import util._
import dandelion.interfaces._
class TestMux(NReads: Int)(implicit val p: Parameters) extends Module with HasAccelParams {
val io = IO(new Bundle {
val ReadIn = Input(Vec(NReads, new ReadResp()))
val EN = Input(Bool())
val SEL = Input(UInt(max(1, log2Ceil(NReads)).W))
val ReadOut = Output(new ReadResp())
})
val RMux = Module(new Mux(new ReadResp(), NReads))
// Connect up Read ins with arbiters
for (i <- 0 until NReads) {
RMux.io.inputs(i) <> io.ReadIn(i)
}
io.ReadOut.data := 0.U
io.ReadOut.RouteID := 0.U
RMux.io.sel <> io.SEL
RMux.io.en := io.EN
io.ReadOut <> RMux.io.output
// val EN = RegInit(true.B)
// val SEL = RegInit(1.U(2.W))
//
//
// val x = io.SEL
// when(io.EN) {
// io.ReadOut := io.ReadIn(x)
//
// }.otherwise {
// io.ReadOut.valid := false.B
// }
}
|
# Tuning Component Settings
Helm Charts are a set of Kubernetes manifests that reflect best practices to deploy an application
or service. Helm is heavily influenced by [Homebrew](http://brew.sh/), including the
[formula model](https://github.com/Homebrew/homebrew-core). A Helm chart is to Helm as a Formula
is to Homebrew.
After fetching the chart with `helmc fetch deis/workflow-v2.1.0`, you can customize it with
`helmc edit workflow-v2.1.0`. To customize a particular component, edit
`manifests/deis-<component>-rc.yaml` and modify the `env` section of the component to tune these
settings.
For example, to allow only administrators to register new accounts in the controller,
edit `manifests/deis-controller-rc.yaml` and add the following under the `env` section:
```
env:
- name: REGISTRATION_MODE
value: "admin_only"
```
## Customizing the Controller
The following environment variables are tunable for the [Controller][] component:
Setting | Description
------------------- | ---------------------------------
REGISTRATION_MODE | set registration to "enabled", "disabled", or "admin_only" (default: "enabled")
GUNICORN_WORKERS | number of [gunicorn][] workers spawned to process requests (default: CPU cores * 4 + 1)
DEIS_RESERVED_NAMES | a comma-separated list of names which applications cannot reserve for routing (default: "deis")
## Customizing the Database
The following environment variables are tunable for the [Database][] component:
Setting | Description
----------------- | ---------------------------------
BACKUP_FREQUENCY  | how often the database should perform a base backup (default: "12h")
BACKUPS_TO_RETAIN | number of base backups the backing store should retain (default: 5)
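For example, to keep ten base backups instead of the default five, edit `manifests/deis-database-rc.yaml` (following the `manifests/deis-<component>-rc.yaml` pattern above) and add the following under the `env` section — the value shown here is illustrative:

```
env:
  - name: BACKUPS_TO_RETAIN
    value: "10"
```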
[controller]: ../understanding-workflow/components.md#controller
[database]: ../understanding-workflow/components.md#database
[gunicorn]: http://gunicorn.org/
|
//! Module for all bucket aggregations.
//!
//! BucketAggregations create buckets of documents
//! [BucketAggregation](super::agg_req::BucketAggregation).
//!
//! Results of final buckets are [BucketResult](super::agg_result::BucketResult).
//! Results of intermediate buckets are
//! [IntermediateBucketResult](super::intermediate_agg_result::IntermediateBucketResult)
mod histogram;
mod range;
mod term_agg;
use std::collections::HashMap;
pub(crate) use histogram::SegmentHistogramCollector;
pub use histogram::*;
pub(crate) use range::SegmentRangeCollector;
pub use range::*;
use serde::{de, Deserialize, Deserializer, Serialize, Serializer};
pub use term_agg::*;
/// Order for buckets in a bucket aggregation.
#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
pub enum Order {
/// Asc order
#[serde(rename = "asc")]
Asc,
/// Desc order
#[serde(rename = "desc")]
Desc,
}
impl Default for Order {
fn default() -> Self {
Order::Desc
}
}
#[derive(Clone, Debug, PartialEq)]
/// Order property by which to apply the order
pub enum OrderTarget {
/// The key of the bucket
Key,
/// The doc count of the bucket
Count,
/// Order by value of the sub aggregation metric with identified by given `String`.
///
/// Only single value metrics are supported currently
SubAggregation(String),
}
impl Default for OrderTarget {
fn default() -> Self {
OrderTarget::Count
}
}
impl From<&str> for OrderTarget {
fn from(val: &str) -> Self {
match val {
"_key" => OrderTarget::Key,
"_count" => OrderTarget::Count,
_ => OrderTarget::SubAggregation(val.to_string()),
}
}
}
impl ToString for OrderTarget {
fn to_string(&self) -> String {
match self {
OrderTarget::Key => "_key".to_string(),
OrderTarget::Count => "_count".to_string(),
OrderTarget::SubAggregation(agg) => agg.to_string(),
}
}
}
/// Set the order. target is either "_count", "_key", or the name of
/// a metric sub_aggregation.
///
/// De/Serializes to elasticsearch compatible JSON.
///
/// Examples in JSON format:
/// { "_count": "asc" }
/// { "_key": "asc" }
/// { "average_price": "asc" }
#[derive(Clone, Default, Debug, PartialEq)]
pub struct CustomOrder {
/// The target property by which to sort by
pub target: OrderTarget,
/// The order asc or desc
pub order: Order,
}
impl Serialize for CustomOrder {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where S: Serializer {
let map: HashMap<String, Order> =
std::iter::once((self.target.to_string(), self.order)).collect();
map.serialize(serializer)
}
}
impl<'de> Deserialize<'de> for CustomOrder {
fn deserialize<D>(deserializer: D) -> Result<CustomOrder, D::Error>
where D: Deserializer<'de> {
HashMap::<String, Order>::deserialize(deserializer).and_then(|map| {
if let Some((key, value)) = map.into_iter().next() {
Ok(CustomOrder {
target: key.as_str().into(),
order: value,
})
} else {
Err(de::Error::custom(
"unexpected empty map in order".to_string(),
))
}
})
}
}
#[test]
fn custom_order_serde_test() {
let order = CustomOrder {
target: OrderTarget::Key,
order: Order::Desc,
};
let order_str = serde_json::to_string(&order).unwrap();
assert_eq!(order_str, "{\"_key\":\"desc\"}");
let order_deser = serde_json::from_str(&order_str).unwrap();
assert_eq!(order, order_deser);
let order_deser: serde_json::Result<CustomOrder> = serde_json::from_str("{}");
assert!(order_deser.is_err());
let order_deser: serde_json::Result<CustomOrder> = serde_json::from_str("[]");
assert!(order_deser.is_err());
}
|
# Space Shooter #
## This is a simple 2D, console-based space shooter made in C++ ##
### NOTE: This is only Windows compatible ###
## ConsoleGameEngine ##
This code manages the console using windows.h.
It exposes three lifecycle functions:
* OnUserCreate() - called at the start of the program;
* OnUserUpdate() - called every cycle;
* OnUserDestroy() - called to clean up at the end.
It also has lots of utility functions that handle drawing.
The game loop runs on a separate thread.
It uses WriteConsoleOutput() to write the buffer (an array of every character in the game) to the screen.
## SpaceShooter ##
This code is where the game takes place.
Every 'game object' has a class derived from the class 'entity'.
All logic is implemented in the OnUserUpdate() loop (called every frame).
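The create/update/destroy lifecycle described above can be sketched as follows. This is an illustrative Python mock-up, not the actual C++ engine; only the three lifecycle names come from the description above:

```python
# Minimal mock-up of the engine lifecycle described above.
class MiniEngine:
    def __init__(self):
        self.running = True
        self.frames = 0

    def on_user_create(self):
        # Called once at the start of the program: set up initial state.
        self.frames = 0

    def on_user_update(self):
        # Called every cycle: game logic and drawing would happen here.
        self.frames += 1
        if self.frames >= 3:  # stop after a few frames for this demo
            self.running = False

    def on_user_destroy(self):
        # Called once at the end: clean up resources.
        pass

    def run(self):
        self.on_user_create()
        while self.running:
            self.on_user_update()
        self.on_user_destroy()
        return self.frames

print(MiniEngine().run())  # -> 3
```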

### Licensed under MIT ###
|
import Template from './template.vue'
import { Component, Prop, Vue, Watch } from 'vue-property-decorator'
import {
evalutateTypeField,
fieldIsDisplayed
} from '../../utils/DictionaryUtils'
import { PanelContextType } from '../../utils/DictionaryUtils/ContextMenuType'
import {
IFieldReferencesType
} from '../../utils/references'
import { Namespaces } from '../../utils/types'
import { isEmptyValue, recursiveTreeSearch } from '@/ADempiere/shared/utils/valueUtils'
import { DeviceType, IAppState } from '@/ADempiere/modules/app/AppType'
import FieldOptions from './FieldOptions'
@Component({
name: 'FieldDefinition',
components: {
FieldOptions
},
mixins: [Template]
})
export default class FieldDefinition extends Vue {
@Prop({ type: Object, default: () => { return {} } }) metadataField?: any
@Prop({
type: [Number, String, Boolean, Array, Object, Date],
default: undefined
}) recordDataFields?: any
@Prop({ type: Boolean, default: false }) inGroup?: boolean
@Prop({ type: Boolean, default: false }) inTable?: boolean
@Prop({ type: Boolean, default: false }) isAdvancedQuery?: boolean
public field: any = {}
// public visibleForDesktop = false
// public value: any
// public triggerMenu = 'click'
// public showPopoverPath = false
// public timeOut?: NodeJS.Timeout
// public optionColumnName?: string
// public visibleFields: boolean[] = []
// Computed properties
get isMobile(): boolean {
return (this.$store.state.app as IAppState).device === DeviceType.Mobile
}
// load the component indicated in the attributes of the received property
get componentRender() {
if (isEmptyValue(this.field.componentPath) || !this.field.isSupported) {
return () => import('@/ADempiere/shared/components/Field/FieldText')
}
if (this.isSelectCreated) {
return () => import('@/ADempiere/shared/components/Field/FieldSelectMultiple')
}
let field
switch (this.field.componentPath) {
case 'FieldAutocomplete':
field = () => import('@/ADempiere/shared/components/Field/FieldAutocomplete')
break
case 'FieldBinary':
field = () => import('@/ADempiere/shared/components/Field/FieldBinary')
break
case 'FieldButton':
field = () => import('@/ADempiere/shared/components/Field/FieldButton')
break
case 'FieldColor':
field = () => import('@/ADempiere/shared/components/Field/FieldColor')
break
case 'FieldDate':
field = () => import('@/ADempiere/shared/components/Field/FieldDate')
break
case 'FieldImage':
field = () => import('@/ADempiere/shared/components/Field/FieldImage')
break
case 'FieldLocation':
field = () => import('@/ADempiere/shared/components/Field/FieldLocation')
break
case 'FieldLocator':
field = () => import('@/ADempiere/shared/components/Field/FieldLocator')
break
case 'FieldNumber':
field = () => import('@/ADempiere/shared/components/Field/FieldNumber')
break
case 'FieldSelect':
field = () => import('@/ADempiere/shared/components/Field/FieldSelect')
break
case 'FieldText':
field = () => import('@/ADempiere/shared/components/Field/FieldText')
break
case 'FieldTextLong':
field = () => import('@/ADempiere/shared/components/Field/FieldTextLong')
break
case 'FieldTime':
field = () => import('@/ADempiere/shared/components/Field/FieldTime')
break
case 'FieldYesNo':
field = () => import('@/ADempiere/shared/components/Field/FieldYesNo')
break
}
return field
}
get isPanelWindow() {
return this.field.panelType === PanelContextType.Window
}
get preferenceClientId() {
if (this.isPanelWindow) {
return this.$store.getters[Namespaces.Preference + '/' + 'getPreferenceClientId']
}
return undefined
}
get fieldAttributes() {
return {
...this.field,
inTable: this.inTable,
isPanelWindow: this.isPanelWindow,
isAdvancedQuery: this.isAdvancedQuery,
// DOM properties
required: this.isMandatory,
readonly: this.isReadOnly,
displayed: this.isDisplayed,
disabled: !this.field.isActive,
isSelectCreated: this.isSelectCreated,
placeholder: this.field.help ? this.field.help.slice(0, 40) + '...' : ''
}
}
get isDisplayed(): boolean {
if (this.isAdvancedQuery) {
return this.field.isShowedFromUser
}
return (
fieldIsDisplayed(this.field) &&
(this.isMandatory || this.field.isShowedFromUser || this.inTable)
)
}
get isMandatory(): boolean {
if (this.isAdvancedQuery) {
return false
}
return this.field.isMandatory || this.field.isMandatoryFromLogic
}
get isReadOnly(): boolean {
if (this.isAdvancedQuery) {
if (['NULL', 'NOT_NULL'].includes(this.field.operator)) {
return true
}
return false
}
if (!this.field.isActive) {
return true
}
const isUpdateableAllFields: boolean =
this.field.isReadOnly || this.field.isReadOnlyFromLogic
if (this.isPanelWindow) {
let isWithRecord: boolean = this.field.recordUuid !== 'create-new'
if ((this.preferenceClientId !== this.metadataField.clientId) && isWithRecord) {
return true
}
if (this.field.isAlwaysUpdateable) {
return false
}
if (
this.field.isProcessingContext ||
this.field.isProcessedContext
) {
return true
}
// TODO: Evaluate record uuid without route.action
// edit mode is different from create new
if (this.inTable) {
isWithRecord = !isEmptyValue(this.field.recordUuid)
}
return (
(!this.field.isUpdateable && isWithRecord) ||
isUpdateableAllFields || this.field.isReadOnlyFromForm
)
} else if (this.field.panelType === PanelContextType.Browser) {
if (this.inTable) {
// browser result
return this.field.isReadOnly
}
// query criteria
return this.field.isReadOnlyFromLogic
}
// other type of panels (process/report)
return Boolean(isUpdateableAllFields)
}
get isFieldOnly(): any {
if (this.inTable || this.field.isFieldOnly) {
return undefined
}
return this.field.name
}
get isSelectCreated(): boolean {
return (
this.isAdvancedQuery! &&
['IN', 'NOT_IN'].includes(this.field.operator) &&
!['FieldBinary', 'FieldDate', 'FieldSelect', 'FieldYesNo'].includes(
this.field.componentPath
)
)
}
get getWidth(): number {
return this.$store.getters[Namespaces.Utils + '/' + 'getWidthLayout']
}
get classField(): string {
if (this.inTable) {
return 'in-table'
}
return ''
}
@Watch('metadataField')
handleMetadataFieldChange(value: any) {
this.field = value
}
// Methods
recursiveTreeSearch = recursiveTreeSearch
focusField() {
if (
this.field.handleRequestFocus ||
(this.field.displayed && !this.field.readonly)
) {
// eslint-disable-next-line
// @ts-ignore
this.$refs[this.field.columnName].requestFocus()
}
}
// Hooks
created() {
// assign field from prop
this.field = this.metadataField
if (this.field.isCustomField && !this.field.componentPath) {
let componentReference: Partial<IFieldReferencesType> = <IFieldReferencesType>evalutateTypeField(this.field.displayType)
if (isEmptyValue(componentReference)) {
componentReference = {
componentPath: 'FieldText'
}
}
this.field = {
...this.metadataField,
isActive: true,
isDisplayed: true,
isDisplayedFromLogic: true,
isShowedFromUser: true,
//
componentPath: componentReference.componentPath
}
}
}
}
|
<#
.SYNOPSIS
Add proxy addresses to contacts in Office 365 for a list of Exchange contacts.
.DESCRIPTION
Connect to Office 365.
cd to the working directory (the input CSV should be in this location; the output file is created here as well).
Map the input CSV columns to fields.
#>
$ContactListPath = "contacts.csv"
$OutputPath = "Import-Contacts-Output.csv"
Import-Csv -Path $ContactListPath | %{
Write-Host $_.EmailAddress -ForegroundColor Green
forEach ($ProxyAddress in $_.ProxyAddresses.split(',')) {
write-host " $ProxyAddress"
Set-MailContact -Identity $_.EmailAddress -EmailAddresses @{Add=$ProxyAddress}
}
}
|
jQuery.fn.outerHTML = function() {
return jQuery('<div />').append(this.eq(0).clone()).html();
};
var timestampToTimeString = function(timestamp) {
timestamp = Math.floor(timestamp);
var date = new Date(timestamp);
var hours = date.getHours();
var minutes = date.getMinutes();
minutes = minutes < 10 ? '0'+minutes : minutes;
var seconds = date.getSeconds();
seconds = seconds < 10 ? '0'+seconds : seconds;
var milliseconds = date.getMilliseconds();
milliseconds = milliseconds < 10 ? '00'+milliseconds : milliseconds < 100 ? '0'+milliseconds : milliseconds;
return ((hours > 0) ? hours + "h ": "") +
minutes + "m " + seconds + "s " +
((milliseconds > 0) ? milliseconds + "ms" : "");
};
var DirectedAcyclicGraphTooltip = function(gravity, propertiesToRead) {
var _reserved = propertiesToRead;
var tooltip = Tooltip(gravity).title(function(d) {
var datum = d.datum;
function appendRow(key, value, tooltip) {
var keyrow = $("<div>").attr("class", "key").append(key);
var valrow = $("<div>").attr("class", "value").append(value);
var clearrow = $("<div>").attr("class", "clear");
tooltip.append($("<div>").append(keyrow).append(valrow).append(clearrow));
}
function appendChart(key, values, count, tooltip){
var barchart = new BARCHART();
var svg = barchart.plot(values, count);
tooltip.append($("<div>").append(svg));
barchart.clearArea();
}
function appendChartWithDelay(key, values, count, tooltip){
// var barchart = new BARCHART();
// var svg = barchart.plot(values, count, true);
// tooltip.append($("<div>").append(svg));
// barchart.clearArea();
}
var tooltip = $("<div>").attr("class", "xtrace-tooltip");
var seen = {"Edge": true, "version": true};
// Do the reserved first
for (var i = 0; i < _reserved.length; i++) {
var key = _reserved[i];
if (datum.hasOwnProperty(key)) {
seen[key] = true;
if (key.toUpperCase()=="AVGTIME") {
appendRow(key, timestampToTimeString(datum[key]), tooltip);
} else if (key.toUpperCase()=="RESOURCES"){
appendChart(key, datum[key], datum['count'], tooltip);
appendChartWithDelay(key, datum[key], datum['count'], tooltip);
} else {
appendRow(key, datum[key], tooltip);
}
}
}
// Do the label
//appendRow("(hash)", hash_report(datum), tooltip);
document.getElementById("teste2").innerHTML = tooltip.outerHTML();
return "";
});
return tooltip;
};
var CompareTooltip = function() {
var tooltip = Tooltip().title(function(d) {
function appendRow(key, value, tooltip) {
var keyrow = $("<div>").attr("class", "key").append(key);
var valrow = $("<div>").attr("class", "value").append(value);
var clearrow = $("<div>").attr("class", "clear");
tooltip.append($("<div>").append(keyrow).append(valrow).append(clearrow));
}
var tooltip = $("<div>").attr("class", "xtrace-tooltip");
appendRow("ID", d.get_id(), tooltip);
appendRow("NumReports", d.get_node_ids().length, tooltip);
appendRow("NumLabels", Object.keys(d.get_labels()).length, tooltip);
return "";
});
return tooltip;
}
var Tooltip = function(gravity) {
if (gravity==null)
gravity = $.fn.tipsy.autoWE;
var tooltip = function(selection) {
selection.each(function(d) {
$(this).tipsy({
gravity: gravity,
html: true,
title: function() { return title(d); },
opacity: 1
});
});
}
var title = function(d) { return ""; };
tooltip.hide = function() { $(".tipsy").remove(); }
tooltip.title = function(_) { if (arguments.length==0) return title; title = _; return tooltip; }
return tooltip;
}
|
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
use Carbon\Carbon;
class Task extends Model
{
use SoftDeletes;
protected $fillable = [
'item',
'due_date',
'is_completed',
'owner_id'
];
public function owner()
{
return $this->belongsTo('App\User');
}
public function getId()
{
return $this->id;
}
public function getItem()
{
return $this->item;
}
public function getDueDate($format = false)
{
if ($format == true)
{
return $this->getFormattedDueDate();
}
return $this->due_date;
}
public function setDueDate($due_date = null)
{
if (!is_null($due_date))
{
$this->due_date = $due_date;
}
}
public function getFormattedDueDate()
{
if (!is_null($this->due_date))
{
$dt = new Carbon($this->due_date);
return $dt->toFormattedDateString();
}
return null;
}
public function isCompleted()
{
return ($this->is_completed == 1);
}
public static function createTask(
$owner_id,
$item,
$due_date = null,
$is_completed = null
)
{
$task = new static;
$task->item = $item;
$task->owner_id = $owner_id;
if (!is_null($due_date))
{
$task->due_date = $due_date;
}
if (!is_null($is_completed))
{
$task->is_completed = $is_completed;
}
if ($task->save() == true)
{
return static::find($task->id);
}
return false;
}
public static function updateTask(
$task_id,
$item,
$due_date = null,
$is_completed = null
)
{
$task = static::find($task_id);
if (!is_null($task))
{
$task->item = $item;
if (!is_null($due_date))
{
$task->due_date = $due_date;
}
if (!is_null($is_completed))
{
$task->is_completed = $is_completed;
}
if ($task->save() == true)
{
return $task;
}
}
return false;
}
public function setCompleted()
{
$this->is_completed = true;
return $this->save();
}
public function setNotCompleted()
{
$this->is_completed = false;
return $this->save();
}
public function updateItem(
$item,
$due_date = null,
$is_completed = null
)
{
$this->item = $item;
if (!is_null($due_date))
{
$this->due_date = $due_date;
}
if (!is_null($is_completed) and is_bool($is_completed))
{
$this->is_completed = $is_completed;
}
return $this->save();
}
}
|
package paperdoll.scalaz
import scalaz.Functor
import scalaz.OptionT
import paperdoll.core.effect.Effects
import scalaz.std.option._
import paperdoll.core.layer.Layer
import paperdoll.std.Option_
object OptionTLayer {
def sendOptionT[F[_]: Functor, A](ot: OptionT[F, A]): Effects.Two[Layer.Aux[F], Option_, A] =
Effects.sendTU[F[Option[A]], Option[A]](ot.run)
}
|
/**
* Copyright (C) 2017 Microbeans Software Jürgen Röder.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package services;
import java.util.List;
import javax.inject.Inject;
import javax.validation.constraints.NotNull;
import dao.TimberOriginDao;
import dto.TimberOriginDto;
import ninja.cache.NinjaCache;
/**
* Implementation of {@code TimberOriginService} interface.
* <p>
* Uses data access objects (DAO) to retrieve {@code TimberOrigin} data from
* underlying data store.
*
* @author mbsusr01
*
* ninja-superbowl 03.05.2017 mbsusr01
*/
public class TimberOriginServiceImpl extends AbstractService implements TimberOriginService {
/**
* The {@code NinjaCache} instance
*/
@Inject
private NinjaCache ninjaCache;
/**
* The {@code TimberOriginDao} instance
*/
@Inject
private TimberOriginDao timberOriginDao;
/**
* Constructor.
*/
public TimberOriginServiceImpl() {
super();
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#getTimberOriginById(java.lang.Long)
*/
@Override
public TimberOriginDto getTimberOriginById(Long id) {
return timberOriginDao.getTimberOriginById(id);
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#getTimberOriginById(java.lang.String)
*/
@Override
public TimberOriginDto getTimberOriginById(String id) {
return timberOriginDao.getTimberOriginById(id);
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#getTimberOriginMaxIndex()
*/
@Override
public TimberOriginDto getTimberOriginMaxIndex() {
return timberOriginDao.getTimberOriginMaxIndex();
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#listTimberOrigin()
*/
@Override
public List<TimberOriginDto> listTimberOrigin() {
return timberOriginDao.listTimberOrigin();
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#listTimberOriginByTimberId(java.lang.Long)
*/
@Override
public List<TimberOriginDto> listTimberOriginByTimberId(Long timberId) {
return timberOriginDao.listTimberOriginByTimberId(timberId);
}
/*
* (non-Javadoc)
*
* @see services.TimberOriginService#register(dto.TimberOriginDto)
*/
@Override
public void register(@NotNull TimberOriginDto timberOriginDto) {
timberOriginDao.register(timberOriginDto);
}
}
|
#!/usr/bin/env perl -w
use strict;
use XML::Simple;
use WWW::Curl::Easy;
use IO::Uncompress::Gunzip;
my %packages = (
'core' => 1,
'epel' => 1,
'ovirt' => 1,
);
my %baseurl = (
'core' => 'http://127.0.0.1/rpms/centos65/x86_64',
'epel' => 'http://127.0.0.1/rpms/epel/6/x86_64/',
'ovirt' => 'http://127.0.0.1/rpms/oVirt/RPMS/',
);
my $dest;
my $comps_file;
sub download_package {
my ($package, $url) = @_;
my $curl = WWW::Curl::Easy->new;
$curl->setopt(CURLOPT_HEADER, 0);
$curl->setopt(CURLOPT_URL, $url.$package);
$package =~ s{^[^/]+/}{};
print "$dest$package\n";
open(my $fh, ">", $dest.$package) or die;
$curl->setopt(CURLOPT_WRITEDATA, $fh);
my $retcode = $curl->perform;
close $fh;
die unless ($retcode == 0);
}
sub get_packages_list {
my $xml = XML::Simple->new('KeyAttr' => {'group' => 'id', 'packagereq' => 'content'});
my $data = $xml->XMLin($comps_file);
foreach (keys %packages) {
$packages{$_} = $data->{group}->{$_}->{packagelist}->{packagereq};
}
}
sub get_packages {
foreach (keys %baseurl) {
my $component = $_;
my $curl = WWW::Curl::Easy->new;
my $response_body;
$curl->setopt(CURLOPT_HEADER, 0);
$curl->setopt(CURLOPT_URL, $baseurl{$_}."repodata/primary.xml.gz");
open(my $fb, ">", \$response_body);
$curl->setopt(CURLOPT_WRITEDATA, $fb);
my $retcode = $curl->perform();
close $fb;
if ($retcode == 0) {
my $buffer;
IO::Uncompress::Gunzip::gunzip \$response_body => \$buffer;
my $xml = XML::Simple->new(KeyAttr => {'package' => 'name'});
my $data = $xml->XMLin($buffer);
my $tmphash = $packages{$component};
foreach (keys %$tmphash) {
my $package = $data->{package}->{$_}->{location}->{href};
if ($package) {
download_package($package, $baseurl{$component});
}
}
}
}
}
$comps_file = shift;
$dest = shift;
get_packages_list;
get_packages;
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Model;
using System.Data;
using System.Data.SqlClient;
namespace DAL
{
public class SectionRoomSonDAL
{
List<SectionRoomSo> sw = new List<SectionRoomSo>();
public string SonInsert(SectionRoomSo s)
{
SqlParameter Sname = new SqlParameter("@Sname", SqlDbType.VarChar);
SqlParameter SonName = new SqlParameter("@Sonname", SqlDbType.VarChar);
SqlParameter result = new SqlParameter("@result", SqlDbType.VarChar,20);
result.Direction = ParameterDirection.Output;
Sname.Value = s.Sname;
SonName.Value = s.SonSname;
SqlParameter[] ps = { Sname, SonName, result };
bool f= DBHelper.ExecuteNonQueryProc("p_SectionRooomSonInsert", ps);
if (f)
{
return result.Value + "";
}
else
return "An exception occurred";
}
public List<SectionRoomSo> SectionRoomSonCheck()
{
SqlDataReader reader= DBHelper.SectionRoomSelect("p_SectionRooomSon_select");
while (reader.Read())
{
SectionRoomSo sr = new SectionRoomSo();
sr.Sname = reader[1] + "";
sr.SonSname = reader[2] + "";
sw.Add(sr);
}
DBHelper.cmd.Dispose();
DBHelper.reader.Dispose();
DBHelper.con.Close();
return sw;
}
public string SonDelteDal(SectionRoomSo s)
{
SqlParameter SonName = new SqlParameter("@SonName",SqlDbType.VarChar);
SqlParameter result = new SqlParameter("@result", SqlDbType.VarChar, 20);
result.Direction = ParameterDirection.Output;
SonName.Value = s.SonSname;
SqlParameter[] ps = { SonName, result };
bool f= DBHelper.ExecuteNonQueryProc("p_SectionRooomSonDelete", ps);
if (f)
{
return result.Value + "";
}
else
{
return "An exception occurred";
}
}
}
}
|
@model MVCDemo.Models.CommonModel
<div class="easyui-panel" title="Edit Successful" style="width:500px;text-align:center;">
<dl>
<dt>@Model.Title edited successfully</dt>
<dd><a id="btn_close" href="#" class="easyui-linkbutton">Close</a></dd>
</dl>
</div>
<script type="text/javascript">
$("#btn_close").click(function () {
parent.$("#mainTabs").tabs('close','修改文章');
});
</script>
|
<#
.SYNOPSIS
The "Invoke-GitPushFromOriginToMain" function is a wrapper for the 'git push origin main' command.
.EXAMPLE
Invoke-GitPushFromOriginToMain

Pushes the local 'main' branch to the 'origin' remote.
.NOTES
Name: Invoke-GitPushFromOriginToMain.ps1
Author: Travis Logue
Version History: 1.1 | 2021-10-08 | Initial Version
Dependencies: git.exe
#>
function Invoke-GitPushFromOriginToMain {
[CmdletBinding()]
[Alias('GitPushFromOriginToMain')]
param ()
begin {}
process {
git push origin main
}
end {}
}
|
package uchiwa
import (
"encoding/json"
"fmt"
"net/http"
"net/url"
"github.com/palourde/auth"
"github.com/palourde/logger"
)
func deleteClientHandler(w http.ResponseWriter, r *http.Request) {
u, _ := url.Parse(r.URL.String())
i := u.Query().Get("id")
d := u.Query().Get("dc")
if i == "" || d == "" {
http.Error(w, "Parameters 'id' and 'dc' are required", http.StatusInternalServerError)
return
}
err := DeleteClient(i, d)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
}
}
func deleteStashHandler(w http.ResponseWriter, r *http.Request) {
decoder := json.NewDecoder(r.Body)
var data interface{}
err := decoder.Decode(&data)
if err != nil {
http.Error(w, "Could not decode body", http.StatusInternalServerError)
return
}
err = DeleteStash(data)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
}
}
func getAggregateHandler(w http.ResponseWriter, r *http.Request) {
u, _ := url.Parse(r.URL.String())
c := u.Query().Get("check")
d := u.Query().Get("dc")
if c == "" || d == "" {
http.Error(w, "Parameters 'check' and 'dc' are required", http.StatusInternalServerError)
return
}
a, err := GetAggregate(c, d)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
} else {
encoder := json.NewEncoder(w)
if err := encoder.Encode(a); err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
}
func getAggregateByIssuedHandler(w http.ResponseWriter, r *http.Request) {
u, _ := url.Parse(r.URL.String())
c := u.Query().Get("check")
i := u.Query().Get("issued")
d := u.Query().Get("dc")
if c == "" || i == "" || d == "" {
http.Error(w, "Parameters 'check', 'issued' and 'dc' are required", http.StatusInternalServerError)
return
}
a, err := GetAggregateByIssued(c, i, d)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
} else {
encoder := json.NewEncoder(w)
if err := encoder.Encode(a); err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
}
func getClientHandler(w http.ResponseWriter, r *http.Request) {
u, _ := url.Parse(r.URL.String())
i := u.Query().Get("id")
d := u.Query().Get("dc")
if i == "" || d == "" {
http.Error(w, "Parameters 'id' and 'dc' are required", http.StatusInternalServerError)
return
}
c, err := GetClient(i, d)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
} else {
encoder := json.NewEncoder(w)
if err := encoder.Encode(c); err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
}
func getConfigHandler(w http.ResponseWriter, r *http.Request) {
encoder := json.NewEncoder(w)
if err := encoder.Encode(PublicConfig); err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
func getSensuHandler(w http.ResponseWriter, r *http.Request) {
encoder := json.NewEncoder(w)
if err := encoder.Encode(Results.Get()); err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
func healthHandler(w http.ResponseWriter, r *http.Request) {
encoder := json.NewEncoder(w)
var err error
if r.URL.Path[1:] == "health/sensu" {
err = encoder.Encode(Health.Sensu)
} else if r.URL.Path[1:] == "health/uchiwa" {
err = encoder.Encode(Health.Uchiwa)
} else {
err = encoder.Encode(Health)
}
if err != nil {
http.Error(w, fmt.Sprintf("Cannot encode response data: %v", err), http.StatusInternalServerError)
}
}
func postEventHandler(w http.ResponseWriter, r *http.Request) {
decoder := json.NewDecoder(r.Body)
var data interface{}
err := decoder.Decode(&data)
if err != nil {
http.Error(w, "Could not decode body", http.StatusInternalServerError)
return
}
err = ResolveEvent(data)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
}
}
func postStashHandler(w http.ResponseWriter, r *http.Request) {
decoder := json.NewDecoder(r.Body)
var data interface{}
err := decoder.Decode(&data)
if err != nil {
http.Error(w, "Could not decode body", http.StatusInternalServerError)
return
}
err = CreateStash(data)
if err != nil {
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
}
}
// WebServer starts the web server and serves GET & POST requests
func WebServer(config *Config, publicPath *string, auth auth.Config) {
http.Handle("/delete_client", auth.Authenticate(http.HandlerFunc(deleteClientHandler)))
http.Handle("/delete_stash", auth.Authenticate(http.HandlerFunc(deleteStashHandler)))
http.Handle("/get_aggregate", auth.Authenticate(http.HandlerFunc(getAggregateHandler)))
http.Handle("/get_aggregate_by_issued", auth.Authenticate(http.HandlerFunc(getAggregateByIssuedHandler)))
http.Handle("/get_client", auth.Authenticate(http.HandlerFunc(getClientHandler)))
http.Handle("/get_config", auth.Authenticate(http.HandlerFunc(getConfigHandler)))
http.Handle("/get_sensu", auth.Authenticate(http.HandlerFunc(getSensuHandler)))
http.Handle("/post_event", auth.Authenticate(http.HandlerFunc(postEventHandler)))
http.Handle("/post_stash", auth.Authenticate(http.HandlerFunc(postStashHandler)))
http.Handle("/", http.FileServer(http.Dir(*publicPath)))
http.Handle("/health", http.HandlerFunc(healthHandler))
http.Handle("/health/", http.HandlerFunc(healthHandler))
http.Handle("/login", auth.GetIdentification())
listen := fmt.Sprintf("%s:%d", config.Uchiwa.Host, config.Uchiwa.Port)
logger.Infof("Uchiwa is now listening on %s", listen)
http.ListenAndServe(listen, nil)
}
|
# Salsa Verde
A Mexican sauce.
- Yields: 2 cups.
## Ingredients
- 1 pound (450g) tomatillos, husks removed.
- 1 clove garlic.
- 2 jalapeño or serrano chiles.
- 1/2 cup (26g) chopped onion.
- 3 Tbsp (40ml) oil.
- Cilantro.
## Instructions
1. Place tomatillos, garlic, and chilies in a medium saucepan and cover with
water. Simmer until tomatillos turn pale green, about 10 minutes.
2. Transfer tomatillos, garlic, and chilies into a blender or food processor. Add
the onion and cilantro. Puree until smooth, then salt to taste, usually a
teaspoon (6g) will suffice.
3. Heat oil in a medium saucepan until hot but not smoking. Pour puree into the
pan and cook, stirring occasionally, until thickened somewhat, about 6-8
minutes.
## Contribution
- Nathan.
;tags: mexican sauce
|
create function Metadata.fParameterDescription(@MajorId int, @MinorId int) returns table as return
select
d.*
from
Metadata.fParameterProperty(@MajorId, @MinorId) p inner join
Metadata.fDescription(@MajorId, @MinorId) d on p.MajorId = d.MajorId and p.MinorId = d.MinorId
|
import PropertyAnnotation from '../../model/PropertyAnnotation'
import Fragmenter from '../../model/Fragmenter'
class Strong extends PropertyAnnotation {}
Strong.type = "strong"
Strong.fragmentation = Fragmenter.ANY
export default Strong
|
package org.phoenix._03_BigInteger;
/*
* A prime number is a natural number greater than 1 whose only positive divisors are 1 and itself. For example, the first
six prime numbers are 2, 3, 5, 7, 11, and 13.
Given a large integer, n, use the Java BigInteger class' isProbablePrime method to determine and print whether it's
prime or not prime.
Input Format
A single line containing an integer, n(the number to be checked).
Constraints
n contains at most 100 digits.
Output Format
If n is a prime number, print prime; otherwise, print not prime.
Sample Input
13
Sample Output
prime
Explanation
The only positive divisors of 13 are 1 and 13, so we print prime.*/
import java.math.BigInteger;
import java.util.Scanner;
public class Problem03_JavaPrimality {
private static final Scanner scanner = new Scanner(System.in);
public static void main(String[] args) {
String n = scanner.nextLine();
BigInteger integer = new BigInteger(n);
// a higher certainty makes a false "prime" result vanishingly unlikely
System.out.println(integer.isProbablePrime(20) ? "prime" : "not prime");
scanner.close();
}
}
|
# frozen_string_literal: true
class Spinach::Features::PasswordReset < Spinach::FeatureSteps
step 'I am on the sign in page' do
visit new_session_path
end
step 'I click on forgot password link' do
click_link 'Forgot Password'
end
step 'I should be on the password reset page' do
current_path.must_equal new_password_path
end
step 'I have an Openhub account' do
GithubVerification.any_instance.stubs(:generate_access_token)
@account = FactoryBot.create(:account)
end
step 'I am on the password reset page' do
visit new_password_path
end
step 'I enter my email' do
fill_in 'Email address', with: @account.email
end
step 'submit the form' do
click_on 'Reset password'
end
step 'it should send me a password reset email' do
@account.reload
delivery = ActionMailer::Base.deliveries.last
delivery.to.first.must_equal @account.email
delivery.html_part.body.raw_source.must_include(
edit_user_password_path(@account, token: @account.confirmation_token)
)
end
step 'I have raised a password reset request' do
@account.update!(confirmation_token: Clearance::Token.new)
end
step 'I follow the email link to reset my password' do
visit edit_user_password_path(@account, token: @account.confirmation_token)
end
step 'I submit my new password' do
@new_password = Faker::Internet.password
fill_in 'Choose password', with: @new_password
click_on 'Save this password'
end
step 'it should reset my password' do
@account.reload
@account.authenticated?(@new_password).must_equal true
end
step 'it should sign me in' do
current_path.must_equal account_path(:me)
end
step 'double click submit button' do
ActionMailer::Base.deliveries = []
element = find_button('Reset password')
page.driver.browser.action.double_click(element.native).perform
end
step 'it should send only one password reset email' do
ActionMailer::Base.deliveries.length.must_equal 1
end
end
|
#!/usr/local/bin/perl
use AMOS::AmosLib;
while ($record = getRecord(\*STDIN)){
my ($rec, $fields, $recs) = parseRecord($record);
if ($rec eq "FRG"){
my $sq = $$fields{seq};
my $nm = $$fields{src};
my @lines = split('\n', $nm);
$nm = join('',@lines);
if ($nm =~ /^\s*$/){
$nm = $$fields{acc};
}
@lines = split('\n', $sq);
$sq = join('', @lines);
my ($l, $r) = split(',', $$fields{clr});
$sq = substr($sq, $l, $r - $l + 1);
printFastaSequence(\*STDOUT, $nm, $sq);
next;
}
}
|
<?php
namespace Transbank\Webpay\Oneclick\Exceptions;
use Transbank\Webpay\Exceptions\WebpayRequestException;
class InscriptionFinishException extends WebpayRequestException
{
}
|
# hello-world
I can reverse engineer anything, give me a challenge!
I love anime!
I'm a huge car enthusiast!
|
### Zones
Appliance Zones are used to isolate traffic. A Management System discovered by an EVM
Appliance in a specific zone gets monitored and managed in that zone. All jobs,
such as a SmartState Analysis or VM start, dispatched by an EVM Appliance in a
specific EVM Zone can get processed by any EVM Appliance assigned to that same
zone.
Zones can be created based on your own environment. You can make zones based on
geographic location, network location, or function. When first started, a new
EVM Server is put into the default zone.
```json
{
"id": "http://localhost:3000/api/zones/1",
"name": "west_coast",
"description" : "West Coast Zone",
"created_on" : "2012-08-02T18:20:07Z",
"updated_on" : "2012-08-02T18:20:07Z",
"settings": {
"proxy_server_ip" : "192.168.177.128",
"concurrent_vm_scans" : 10,
"ntp": {
"server" : [
"pool.ntp.org"
]
}
},
"servers" : {
"count" : "2",
"resources" : [
{ "href" : "http://localhost:3000/api/servers/1" },
{ "href" : "http://localhost:3000/api/servers/2" }
]
},
"actions" : [
{ "name" : "edit", "method" : "post", "href" : "http://localhost:3000/api/zones/1" },
{ "name" : "delete", "method" : "delete", "href" : "http://localhost:3000/api/zones/1" }
]
}
```
#### Attributes
`Required`
```
name, description
```
`Optional`
```
settings
```
#### Actions
| Name | Description |
|------|-------------|
| add | Add a new Zone |
| edit | Edit a Zone |
| delete | Delete one or more Zones |
##### Add
Zones can be created based on your own environment. You can make zones based on
geographic location, network location, or function. When first started, a new
EVM Server is put into the default zone.
`POST /api/zones`
```json
{
"action": "add",
"resources" : [
{
"name" : "east_coast",
"description" : "East Coast Zone",
"settings": {
"proxy_server_ip" : "192.168.187.128"
}
}
]
}
```
##### Edit
`POST /api/zones/1`
```json
{
"action": "edit",
"resource" : {
"settings" : {
"proxy_server_ip" : "192.168.197.128"
}
}
}
```
##### Delete
Delete zones from the Zones collection. Multiple zones can be deleted in a single request.
`POST /api/zones`
```json
{
"action": "delete",
"resources" : [
{ "href" : "http://localhost:3000/api/zones/1" },
{ "href" : "http://localhost:3000/api/zones/2" }
]
}
```
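Per the `delete` action advertised in the Actions list above, a single zone can also be removed by issuing an HTTP DELETE against its resource href:

`DELETE /api/zones/1`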
Back to [Features](./features.md)
Back to [Design Specification](../design.md)
|
use strict;
use warnings;
# [0, 2, 7, 0], each position is a memory bank with n blocks.
# Find the one with most blocks, if tie, then lower indexed
# position is the one with prevalence. At first, '7' (memory
# bank at pos 2) has the most blocks. Then the position is
# put to 0 and the 7 blocks will be spread one by one, 1 to
# 3rd, 1 to 0th, 1 to 1st. 1 to 2nd (original), 1 to 3rd,
# 1 to 0th, 1 to 2nd. Now its [2, 4, 1, 2].
# Now starting again the 2nd bank has the most blocks so its
# used for redistribution and so on. The problem is that this
# program can be caught in an infinite loop since from some
# states, the same state could be reached again after x steps.
my $memory = [
# Result: 5
[0, 2, 7, 0],
# Result: 6681.
[4, 1, 15, 12, 0, 9, 9, 5, 5, 8, 7, 3, 14, 5, 12, 3],
];
sub row_to_string {
# [0, 2, 7, 0] => "0 2 7 0" for hashing and storing states.
my ($row) = @_;
return join " ", @{ $row };
}
foreach my $memory_row (@{ $memory }) {
my $seen_before = {};
# Total of iterations it takes until reaching a repeated state.
my $iterations = 0;
# Now start the endless redistribution.
# Is this program guaranteed to halt? :-P
while (1) {
# Hash the current state.
my $current_state = row_to_string($memory_row);
# Exit infinite loop if state was seen before.
last if defined $seen_before->{$current_state};
# Now mark this state as seen.
$seen_before->{$current_state} = 1;
# Scan to find the highest value and start from there.
my $high_value = -1;
my $high_index = -1;
foreach my $pos (0 .. scalar @{ $memory_row} - 1) {
my $value = $memory_row->[$pos];
if ($value > $high_value) {
$high_value = $value;
$high_index = $pos;
}
}
# Get the value to redistribute.
my $memory_to_redistribute = $memory_row->[$high_index];
# Set the value in the memory as 0.
$memory_row->[$high_index] = 0;
# Set the position to iterate from.
my $index = $high_index;
# Start redistributing all these memory blocks.
while ($memory_to_redistribute) {
# Iterate over the memory banks and wrap around if needed.
$index = ($index + 1) % (scalar @{ $memory_row });
# Increase the memory of the current block.
$memory_row->[$index] += 1;
# Decrease the memory left to redistribute.
$memory_to_redistribute -= 1;
}
$iterations += 1;
}
print "$iterations\n";
}
|
# pylint:disable=unused-import
# pylint:disable=unused-argument
# pylint:disable=redefined-outer-name
import logging
import docker
import pytest
from pytest_docker import docker_ip, docker_services
log = logging.getLogger(__name__)
def is_responsive(url):
try:
docker_client = docker.from_env()
docker_client.login(registry=url, username="test")
except docker.errors.APIError:
log.exception("Error while logging into the registry")
return False
return True
@pytest.fixture(scope="session")
def docker_registry(docker_ip, docker_services):
host = docker_ip
port = docker_services.port_for('registry', 5000)
url = "{host}:{port}".format(host=host, port=port)
# Wait until we can connect
docker_services.wait_until_responsive(
check=lambda: is_responsive(url),
timeout=30.0,
pause=1.0,
)
# test the registry
try:
docker_client = docker.from_env()
# get the hello world example from docker hub
hello_world_image = docker_client.images.pull("hello-world", tag="latest")
# login to private registry
docker_client.login(registry=url, username="test")
# tag the image
repo = url + "/hello-world:dev"
assert hello_world_image.tag(repo)
# push the image to the private registry
docker_client.images.push(repo)
# wipe the images
docker_client.images.remove(image="hello-world:latest")
docker_client.images.remove(image=hello_world_image.id)
# pull the image from the private registry
private_image = docker_client.images.pull(repo)
docker_client.images.remove(image=private_image.id)
except docker.errors.APIError:
log.exception("Unexpected docker API error")
raise
yield url
print("teardown docker registry")
|
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class RespawnController : MonoBehaviour
{
public LeverLiftController[] LeverLiftControllers;
public float DropDelay = 3.0f;
public BallFactory BallFactory;
private void Start()
{
EventHub.Instance.OnResetRequested += Respawn;
}
public void Respawn()
{
foreach (var leverLiftController in LeverLiftControllers)
{
leverLiftController.Drop();
}
StartCoroutine(WaitThenResetLevers());
}
private IEnumerator WaitThenResetLevers()
{
yield return new WaitForSeconds(DropDelay);
BallFactory.Create();
foreach (var leverLiftController in LeverLiftControllers)
{
leverLiftController.Ready();
}
EventHub.Instance.BallSpawned();
}
}
|
# Validador de CPF / CPF validator
Site para validação de CPF seguindo regras de verificação para os últimos dois dígitos.
---
Site for CPF validation following verification rules for the last two digits.
# Construído com / Built with
- Bootstrap
- HTML
- CSS
- Javascript
*And dark theme of course :)*
## License
This site was built under the [MIT License](https://choosealicense.com/licenses/mit/).
|
/**
* @author clzhu
* @createtime 2018/5/22 15:40
* @description
*/
import {BaseLoader, LoaderStatus, LoaderErrors} from './loader.js';
import {RuntimeException} from '../utils/exception.js';
/**
 * Loader integrating QVBP2P.
 *
 * Note on rollback: implementing either qvbp2p.rollback or the _rollback
 * method invoked from onStateChange in this loader is enough, but be sure
 * to implement one of the two.
 *
 * Trigger order: if qvbp2p.rollback is bound, the SDK calls that method;
 * otherwise the onStateChange event is fired.
 */
class QVBP2PLoader extends BaseLoader {
static isSupported() {
let ql = window.qvbp2p;
if (ql) {
return ql.supportLoader;
} else {
if (window.QVBP2P) {
return window.QVBP2P.isSupported();
}
return false;
}
}
constructor(seekHandler, config) {
super('qvb-p2p-loader');
this.TAG = 'QVBP2PLoader';
this._seekHandler = seekHandler;
this._config = config;
this._needStash = false;
this._requestAbort = false;
this._contentLength = null;
this._receivedLength = 0;
}
destroy() {
super.destroy();
this._destroyQVBP2P();
}
abort() {
this._requestAbort = true;
this._status = LoaderStatus.kComplete;
}
_initQVBP2P() {
if (!window.qvbp2p) {
window.qvbp2p = new window.QVBP2P(this._config.p2pOptions);
// Bind the rollback handler
if (this._config.rollback) {
window.qvbp2p.rollback = this._config.rollback;
}
if (this.player) {
window.qvbp2p.player = this.player;
}
}
}
_destroyQVBP2P() {
if (window.qvbp2p) {
window.qvbp2p.destroy();
window.qvbp2p = null;
}
}
open(dataSource, range) {
// Recreate the P2P instance for each newly opened source.
this._destroyQVBP2P();
this._initQVBP2P();
let sourceURL = dataSource.url;
window.qvbp2p.loadSource({ videoId: this._config.videoId, src: sourceURL });
this._bindInterface();
}
_bindInterface() {
this._status = LoaderStatus.kBuffering;
let tl = window.qvbp2p,
TL = window.QVBP2P;
tl.listen(TL.ComEvents.STATE_CHANGE, this.onStateChange.bind(this));
}
onStateChange(event, data) {
let CODE = window.QVBP2P.ComCodes;
let code = data.code;
switch (code) {
// Receiving data
case CODE.RECEIVE_BUFFER:
this._receiveBuffer(data);
break;
// Rollback
case CODE.ROLLBACK:
this._rollback(data.player);
break;
default:
break;
}
}
_receiveBuffer(data) {
if (this._requestAbort) {
return;
}
if (data.payload instanceof ArrayBuffer) {
if (this._contentLength === null) {
if (data.payload !== null && data.payload !== 0) {
this._contentLength = data.payload.byteLength;
if (this._onContentLengthKnown) {
this._onContentLengthKnown(this._contentLength);
}
}
}
this._dispatchArrayBuffer(data.payload);
} else {
this._status = LoaderStatus.kBuffering;
let errInfo = {
code: -1,
msg: `${this.TAG} receive buffer is not instanceof ArrayBuffer`
};
if (this._onError) {
this._onError(LoaderErrors.EXCEPTION, errInfo);
} else {
throw new RuntimeException(errInfo.msg);
}
}
}
_bufferEof() {
this._status = LoaderStatus.kComplete;
}
_httpStatusCodeInvalid(data) {
this._status = LoaderStatus.kError;
if (this._onError) {
this._onError(LoaderErrors.HTTP_STATUS_CODE_INVALID, { code: data.statusCode, msg: data.statusText });
} else {
throw new RuntimeException(this.TAG + ': Http status code invalid, ' + data.statusCode + ' ' + data.statusText);
}
}
/**
 * P2P rollback
 * @param player reference to the player instance; this parameter is
 *               undefined unless you assigned qvbp2p.player = xxx
 * @private
 */
_rollback(player) {
// Implement the rollback behavior here yourself
}
_dispatchArrayBuffer(arraybuffer) {
let chunk = arraybuffer;
let byteStart = this._receivedLength;
this._receivedLength += chunk.byteLength;
if (this._onDataArrival) {
this._onDataArrival(chunk, byteStart, this._receivedLength);
}
}
}
export default QVBP2PLoader;
|
#![feature(test)]
extern crate test;
use std::time;
use hdrhistogram::serialization;
use hdrhistogram::serialization::interval_log;
use hdrhistogram::*;
use rand::SeedableRng;
use test::Bencher;
use self::rand_varint::*;
#[path = "../src/serialization/rand_varint.rs"]
mod rand_varint;
#[bench]
fn write_interval_log_1k_hist_10k_value(b: &mut Bencher) {
let mut log = Vec::new();
let mut histograms = Vec::new();
let mut rng = rand::rngs::SmallRng::from_entropy();
for _ in 0..1000 {
let mut h = Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap();
for v in RandomVarintEncodedLengthIter::new(&mut rng).take(10_000) {
h.record(v).unwrap();
}
histograms.push(h);
}
let mut serializer = serialization::V2Serializer::new();
b.iter(|| {
log.clear();
let mut writer = interval_log::IntervalLogWriterBuilder::new()
.begin_log_with(&mut log, &mut serializer)
.unwrap();
let dur = time::Duration::new(5, 678_000_000);
for h in histograms.iter() {
writer
.write_histogram(h, time::Duration::new(1, 234_000_000), dur, None)
.unwrap();
}
})
}
#[bench]
fn parse_interval_log_1k_hist_10k_value(b: &mut Bencher) {
let mut log = Vec::new();
let mut histograms = Vec::new();
let mut rng = rand::rngs::SmallRng::from_entropy();
for _ in 0..1000 {
let mut h = Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap();
for v in RandomVarintEncodedLengthIter::new(&mut rng).take(10_000) {
h.record(v).unwrap();
}
histograms.push(h);
}
{
let mut serializer = serialization::V2Serializer::new();
let mut writer = interval_log::IntervalLogWriterBuilder::new()
.begin_log_with(&mut log, &mut serializer)
.unwrap();
let dur = time::Duration::new(5, 678_000_000);
for h in histograms.iter() {
writer
.write_histogram(h, time::Duration::new(1, 234_000_000), dur, None)
.unwrap();
}
}
b.iter(|| {
let iter = interval_log::IntervalLogIterator::new(&log);
assert_eq!(1000, iter.count());
})
}
|
using System;
namespace MetafileDumper
{
public class MetafileReader
{
public MetafileReader(byte[] buffer, int index)
{
Buffer = buffer;
CurrentIndex = index;
}
public byte[] Buffer { get; }
public int CurrentIndex { get; set; }
public int ReadInt32()
{
int result = BitConverter.ToInt32(Buffer, CurrentIndex);
CurrentIndex += 4;
return result;
}
public int PeekInt32()
{
int result = BitConverter.ToInt32(Buffer, CurrentIndex);
return result;
}
public uint ReadUInt32()
{
uint result = BitConverter.ToUInt32(Buffer, CurrentIndex);
CurrentIndex += 4;
return result;
}
public uint PeekUInt32()
{
uint result = BitConverter.ToUInt32(Buffer, CurrentIndex);
return result;
}
public short ReadInt16()
{
short result = BitConverter.ToInt16(Buffer, CurrentIndex);
CurrentIndex += 2;
return result;
}
public short PeekInt16()
{
short result = BitConverter.ToInt16(Buffer, CurrentIndex);
return result;
}
public ushort ReadUInt16()
{
ushort result = BitConverter.ToUInt16(Buffer, CurrentIndex);
CurrentIndex += 2;
return result;
}
public byte ReadByte()
{
byte result = Buffer[CurrentIndex];
CurrentIndex += 1;
return result;
}
public float ReadSingle()
{
float result = BitConverter.ToSingle(Buffer, CurrentIndex);
CurrentIndex += 4;
return result;
}
public bool ReadBoolean()
{
bool result = BitConverter.ToBoolean(Buffer, CurrentIndex);
CurrentIndex += 4;
return result;
}
public Guid ReadGuid()
{
// Read the 16 bytes at the current position; passing the whole
// buffer to the Guid constructor would ignore CurrentIndex.
var bytes = new byte[16];
Array.Copy(Buffer, CurrentIndex, bytes, 0, 16);
var result = new Guid(bytes);
CurrentIndex += 16;
return result;
}
}
}
|
using System.ComponentModel;
namespace Domain.Common.Errors
{
public enum CommonErrors
{
[Description("{PropertyName} should not be empty")]
ShouldNotBeEmpty,
[Description("{PropertyName} should be unique")]
ShouldBeUnique,
[Description("{PropertyName} should be greater than 0")]
ShouldBeGreaterThan0,
}
}
|
/*
* Copyright 2020 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package dev.android.playground.nova.core.framework.styleables
import dev.android.playground.nova.core.base.*
// ****************************************************************
// Menu XML inflation.
// ****************************************************************
// Base attributes that are available to all Menu objects.
@UseAndroidNamespace
abstract class CoreMenuStyleable : BaseStyleable
// Base attributes that are available to all groups.
@UseAndroidNamespace
abstract class CoreMenuGroupStyleable : BaseStyleable {
// The ID of the group.
@Reuse(CoreViewStyleable.id::class)
class id
enum class MenuCategoryEnum(val value: Int) : StyleableEnum {
// Items are part of a container.
container(0x00010000),
// Items are provided by the system.
system(0x00020000),
// Items are user-supplied secondary (infrequently used).
secondary(0x00030000),
// Items are alternative actions.
alternative(0x00040000)
}
// The category applied to all items within this group.
// (This will be or'ed with the orderInCategory attribute.)
@EnumValue(MenuCategoryEnum::class)
@UseAndroidNamespace
class menuCategory
// The order within the category applied to all items within this group.
// (This will be or'ed with the category attribute.)
@IntegerValue
@UseAndroidNamespace
class orderInCategory
enum class CheckableBehaviorEnum(val value: Int) : StyleableEnum {
// The items are not checkable.
none(0),
// The items are all checkable.
all(1),
// The items are checkable and there will only be a single checked item in
// this group.
single(2)
}
// Whether the items are capable of displaying a check mark.
@EnumValue(CheckableBehaviorEnum::class)
@UseAndroidNamespace
class checkableBehavior
// Whether the items are shown/visible.
@Reuse(CoreDrawableStyleable.visible::class)
class visible
// Whether the items are enabled.
@Reuse(CoreTextViewStyleable.enabled::class)
class enabled
}
// Base attributes that are available to all Item objects.
@UseAndroidNamespace
abstract class CoreMenuItemStyleable : BaseStyleable {
// The ID of the item.
@Reuse(CoreViewStyleable.id::class)
class id
// The category applied to the item.
// (This will be or'ed with the orderInCategory attribute.)
@Reuse(CoreMenuGroupStyleable.menuCategory::class)
class menuCategory
// The order within the category applied to the item.
// (This will be or'ed with the category attribute.)
@Reuse(CoreMenuGroupStyleable.orderInCategory::class)
class orderInCategory
// The title associated with the item.
@StringValue
@UseAndroidNamespace
class title
// The condensed title associated with the item. This is used in situations where the
// normal title may be too long to be displayed.
@StringValue
@UseAndroidNamespace
class titleCondensed
// The icon associated with this item. This icon will not always be shown, so
// the title should be sufficient in describing this item.
@Reuse(ManifestStyleable.icon::class)
class icon
// Tint to apply to the icon.
@ColorValue
@UseAndroidNamespace
class iconTint
enum class IconTintModeEnum(val value: Int) : StyleableEnum {
// The tint is drawn on top of the icon.
// [Sa + (1 - Sa)*Da, Rc = Sc + (1 - Sa)*Dc]
src_over(3),
// The tint is masked by the alpha channel of the icon. The icon’s
// color channels are thrown out. [Sa * Da, Sc * Da]
src_in(5),
// The tint is drawn above the icon, but with the icon’s alpha
// channel masking the result. [Da, Sc * Da + (1 - Sa) * Dc]
src_atop(9),
// Multiplies the color and alpha channels of the icon with those of
// the tint. [Sa * Da, Sc * Dc]
multiply(14),
// [Sa + Da - Sa * Da, Sc + Dc - Sc * Dc]
screen(15),
// Combines the tint and icon color and alpha channels, clamping the
// result to valid color values. Saturate(S + D)
add(16)
}
// Blending mode used to apply the icon tint.
@EnumValue(IconTintModeEnum::class)
@UseAndroidNamespace
class iconTintMode
// The alphabetic shortcut key. This is the shortcut when using a keyboard
// with alphabetic keys.
@StringValue
@UseAndroidNamespace
class alphabeticShortcut
enum class AlphabeticModifiersFlag(val value: Long) : StyleableFlag {
META(0x10000),
CTRL(0x1000),
ALT(0x02),
SHIFT(0x1),
SYM(0x4),
FUNCTION(0x8)
}
// The alphabetic modifier key. This is the modifier when using a keyboard
// with alphabetic keys. The values should be kept in sync with KeyEvent
@FlagValue(AlphabeticModifiersFlag::class)
@UseAndroidNamespace
class alphabeticModifiers
// The numeric shortcut key. This is the shortcut when using a numeric (for example,
// 12-key) keyboard.
@StringValue
@UseAndroidNamespace
class numericShortcut
enum class NumericModifiersFlag(val value: Long) : StyleableFlag {
META(0x10000),
CTRL(0x1000),
ALT(0x02),
SHIFT(0x1),
SYM(0x4),
FUNCTION(0x8)
}
// The numeric modifier key. This is the modifier when using a numeric (for example,
// 12-key) keyboard. The values should be kept in sync with KeyEvent
@FlagValue(NumericModifiersFlag::class)
@UseAndroidNamespace
class numericModifiers
// Whether the item is capable of displaying a check mark.
@BooleanValue
@UseAndroidNamespace
class checkable
// Whether the item is checked. Note that you must first have enabled checking with
// the checkable attribute or else the check mark will not appear.
@Reuse(CoreCompoundButtonStyleable.checked::class)
class checked
// Whether the item is shown/visible.
@Reuse(CoreDrawableStyleable.visible::class)
class visible
// Whether the item is enabled.
@Reuse(CoreTextViewStyleable.enabled::class)
class enabled
// Name of a method on the Context used to inflate the menu that will be
// called when the item is clicked.
@Reuse(CoreViewStyleable.onClick::class)
class onClick
enum class ShowAsActionFlag(val value: Long) : StyleableFlag {
// Never show this item in an action bar, show it in the overflow menu instead.
// Mutually exclusive with "ifRoom" and "always".
never(0),
// Show this item in an action bar if there is room for it as determined
// by the system. Favor this option over "always" where possible.
// Mutually exclusive with "never" and "always".
ifRoom(1),
// Always show this item in an actionbar, even if it would override
// the system's limits of how much stuff to put there. This may make
// your action bar look bad on some screens. In most cases you should
// use "ifRoom" instead. Mutually exclusive with "ifRoom" and "never".
always(2),
// When this item is shown as an action in the action bar, show a text
// label with it even if it has an icon representation.
withText(4),
// This item's action view collapses to a normal menu
// item. When expanded, the action view takes over a
// larger segment of its container.
collapseActionView(8)
}
// How this item should display in the Action Bar, if present.
@FlagValue(ShowAsActionFlag::class)
@UseAndroidNamespace
class showAsAction
// An optional layout to be used as an action view.
// See {@link android.view.MenuItem#setActionView(android.view.View)}
// for more info.
@ReferenceValue
@UseAndroidNamespace
class actionLayout
// The name of an optional View class to instantiate and use as an
// action view. See {@link android.view.MenuItem#setActionView(android.view.View)}
// for more info.
@StringValue
@UseAndroidNamespace
class actionViewClass
// The name of an optional ActionProvider class to instantiate an action view
// and perform operations such as default action for that menu item.
// See {@link android.view.MenuItem#setActionProvider(android.view.ActionProvider)}
// for more info.
@StringValue
@UseAndroidNamespace
class actionProviderClass
// The content description associated with the item.
@StringValue
@UseAndroidNamespace
class contentDescription
// The tooltip text associated with the item.
@StringValue
@UseAndroidNamespace
class tooltipText
}
|
---
id: D1cWYKUE3fEH7l7H1X0J0
title: Admin
desc: ''
updated: 1643950336753
created: 1643922281465
---
|
# Joyent Authentication Library
Utility functions to sign http requests to Joyent Triton and Manta services.
This library is meant to be used internally by other libraries and tools as in
the [`triton`](https://github.com/joyent/node-triton) and
[Manta](https://github.com/joyent/node-manta) repositories.
If you only want to use one of these libraries to make requests to a Joyent
service, you should not need to use this library directly at all.
Its API can be used independently, though, to search for and list the available
SSH keys on the system (used by `triton profile create`, for example):
```js
var mod_sdcauth = require('smartdc-auth');
var keyRing = new mod_sdcauth.KeyRing();
keyRing.list(function (err, keyMap) {
if (err) {
/* ... handle err ... */
return;
}
/* The keyMap is an object that maps keyId => [keyPair] */
var keyIds = Object.keys(keyMap);
keyIds.forEach(function (keyId) {
var keys = keyMap[keyId];
console.log('%s:', keyId);
keys.forEach(function (keyPair) {
var key = keyPair.getPublicKey();
console.log(' %s (%d bit): %s',
key.type, key.size, key.comment);
if (keyPair.isLocked())
console.log(' !! password protected');
});
});
});
```
This might produce the output:
```
05:6c:c8:0c:83:6c:1e:9a:81:26:fb:52:8e:03:3c:33:
ecdsa (256 bit): foobar@my-mbp.local
!! password protected
2c:be:e8:b1:32:02:31:cd:10:89:f9:96:95:db:11:0c:
rsa (2048 bit): foobar@my-mbp.local
81:ad:d5:57:e5:6f:7d:a2:93:79:56:af:d7:c0:38:51:
ecdsa (256 bit): foobar@my-mbp.local
```
It can also be used to implement your own `http-signature` HTTPS client that
uses the same logic that the `triton` and `manta` tools do to locate SSH keys:
```js
var mod_sdcauth = require('smartdc-auth');
var mod_sshpk = require('sshpk');
var mod_https = require('https');
var fp = mod_sshpk.parseFingerprint(process.env.TRITON_KEY_ID);
var keyRing = new mod_sdcauth.KeyRing();
keyRing.findSigningKeyPair(fp, function (err, keyPair) {
var signer = keyPair.createRequestSigner({
user: process.env.TRITON_ACCOUNT
});
var opts = {
host: 'localhost',
port: 8443, path: '/', method: 'GET',
headers: {}
};
signer.writeTarget(opts.method, opts.path);
opts.headers.date = signer.writeDateHeader();
signer.sign(function (err, authz) {
opts.headers.authorization = authz;
var req = mod_https.request(opts);
/* ... */
req.end();
});
});
```
## Overview
Authentication to Triton CloudAPI and Manta is built on top of Joyent's
[http-signature](https://github.com/joyent/node-http-signature) specification.
All requests to the APIs require an HTTP Authorization header where the scheme is
`Signature`. Full details are available in the `http-signature` specification,
but a simple form is:
Authorization: Signature keyId="/:login/keys/:md5_fingerprint",algorithm="rsa-sha256" $base64_signature
The `keyId` field varies in structure when making requests with RBAC subusers,
particularly when doing so in requests made to Manta. In the API reference
below, the term `keyId` generally refers specifically to the MD5 fingerprint of
the key in hex format, as used in the field.
Note that this MD5 fingerprint is used only to choose the existing full key on
file at the server end out of the ones for the given user and is not used for
authentication itself (so the weak hash is not a serious problem).
This library handles the complete process of finding SSH keys based on user
preferences or input, all the way to generating the contents of the
`Authorization` header ready for you to use.
The general idea is to create a `KeyRing`, then search it for the particular key
pair you want to use. Then you can call methods on the `KeyPair` instance like
`createRequestSigner()` to sign an HTTP request. You can also access metadata
about the key pair.
## API: KeyRing
### `new mod_sdcauth.KeyRing([options])`
Create a new SDC keyring. KeyRing instances use a list of plugins in order to
locate keys on the local system - via the filesystem, via the SSH agent, or any
other mechanism.
Parameters
- `options`: an Object containing properties:
- `plugins`: an Array of Strings, names of plugins to enable
Any additional keys set in the `options` object will be passed through to
plugins as options for their processing.
Available plugins:
- `agent`: Gets keys from the OpenSSH agent. Options:
- `sshAgentOpts`: an Object, options to be passed to `mod_sshpk_agent.Client`
- `homedir`: Gets keys from a directory on the filesystem. Options:
- `keyDir`: a String, path to look in for keys, defaults to `$HOME/.ssh`
- `file`: Gets a key from a particular path on disk. Options:
- `keyPath`: a String, path to the private key file
### `KeyRing#addPlugin(pluginName[, options])`
Adds a plugin to the KeyRing after construction. This is particularly useful
with the `file` plugin.
Parameters
- `pluginName`: a String, name of the plugin to load. One of `agent`, `homedir`
or `file`
- `options`: an optional Object, options to pass to the plugin. See the
documentation above for the class constructor for details.
### `KeyRing#list(cb)`
Lists all available keys in all plugins, organised by their Key ID.
Parameters
- `cb`: a Function `(err, keyPairs)` with parameters:
- `err`: an Error or `null`
- `keyPairs`: an Object, keys: String key IDs, values: Array of instances of
`KeyPair`
### `KeyRing#find(fingerprint, cb)`
Searches active plugins for an SSH key matching the given fingerprint. Calls
`cb` with an array of `KeyPair` instances that match, ordered arbitrarily.
Parameters:
- `fingerprint`: an `sshpk.Fingerprint`
- `cb`: a Function `(err, keyPairs)`, with parameters:
- `err`: an Error or `null`
- `keyPairs`: an Array of `KeyPair` instances
### `KeyRing#findSigningKeyPair(fingerprint, cb)`
Searches active plugins for an SSH key matching the given fingerprint. Chooses
the best available signing key of those available (preferably unlocked) and
calls `cb` with this single `KeyPair` instance.
Parameters:
- `fingerprint`: an `sshpk.Fingerprint`
- `cb`: a Function `(err, keyPair)`, with parameters:
- `err`: an Error or `null`
- `keyPair`: a `KeyPair` instance
## API: KeyPair
### `KeyPair.fromPrivateKey(privKey)`
Constructs a KeyPair unrelated to any keychain, based directly on a given
private key. This is mostly useful for compatibility purposes.
Parameters:
- `privKey`: an `sshpk.PrivateKey`
### `KeyPair#plugin`
String, name of the plugin through which this KeyPair was found.
### `KeyPair#source`
String (may be `undefined`), human-readable name of the source that the KeyPair
came from when discovered (e.g. for a plugin that searches the filesystem, this
could be the path to the key file).
### `KeyPair#comment`
String, comment that was stored with the key, if any.
### `KeyPair#canSign()`
Returns Boolean `true` if this key pair is complete (has a private and public
key) and can be used for signing. Note that this returns `true` for locked
keys.
### `KeyPair#isLocked()`
Returns Boolean `true` if this key pair is locked and may be unlocked using
the `unlock()` method.
### `KeyPair#unlock(passphrase)`
Unlocks an encrypted key pair, allowing it to be used for signing and the
`getPrivateKey()` method to be called.
Parameters:
- `passphrase`: a String, passphrase for decryption
### `KeyPair#getKeyId()`
Returns the String key ID for this key pair. This is specifically the key ID
as used in HTTP signature auth for SDC and Manta. Currently this is a
hex-format MD5 fingerprint of the key, but this may change in future.
### `KeyPair#getPublicKey()`
Returns the `sshpk.Key` object representing this pair's public key.
### `KeyPair#getPrivateKey()`
Returns the `sshpk.PrivateKey` object representing this pair's private key. If
unavailable, this method will throw an `Error`.
### `KeyPair#createRequestSigner(options)`
Creates an `http-signature` `RequestSigner` object for signing an HTTP request
using this key pair's private key.
Parameters:
- `options`, an Object with keys:
- `user`, a String, the Triton or Manta account to authenticate as. Note that
this field is named `user` even though it normally refers
to an *account*, for historical reasons.
- `subuser`, an optional String, subuser of the account to authenticate as
- `mantaSubUser`, an optional Boolean, if `true` use Manta-style subuser
syntax
### `KeyPair#createSign(options)`
Creates a `sign()` function (matching the legacy `smartdc-auth` API) for
signing arbitrary data with this key pair's private key.
Parameters:
- `options`, an Object with keys:
- `user`, a String, the Triton or Manta account to authenticate as. Note that
this field is named `user` even though it normally refers
to an *account*, for historical reasons.
- `subuser`, an optional String, subuser of the account to authenticate as
- `mantaSubUser`, an optional Boolean, if `true` use Manta-style subuser
syntax
- `algorithm`, an optional String, the signing algorithm to use
## Legacy request signers
Older SDC and Manta client libraries expose a bit more of the innards of key
location and management, and require direct use of this library.
The legacy signer function API is provided for compatibility with users of these
older client libraries. Note that you don't need to use this API for new
software that still wants to be able to use an older client library (you can
just use the `createSign()` method on a `KeyPair`, above).
These functions take options and return a "signer function" which is provided as
the `sign` parameter to other libraries.
### `privateKeySigner(options);`
A basic signer which signs using a given PEM (PKCS#1) format private key only.
Ideal for simple use cases where the key is stored in a file on the filesystem
ready for use.
- `options`: an Object containing properties:
- `key`: a String, PEM-format (PKCS#1) private key, for any supported algorithm
- `user`: a String, SDC login name to be used in the full keyId, above
- `subuser`: an optional String, SDC subuser login name
- `keyId`: optional String, the fingerprint of the `key` (not the same as the
full keyId given to the server). Ignored unless it does not match
the given `key`, then an Error will be thrown.
### `sshAgentSigner(options);`
Signs requests using a key that is stored in the OpenSSH agent. Opens and manages
a connection to the current session's agent during operation.
- `options`: an Object containing properties:
- `keyId`: a String, fingerprint of the key to retrieve from the agent
- `user`: a String, SDC login name to be used
- `subuser`: an optional String, SDC subuser login name
- `sshAgentOpts`: an optional Object, any additional options to pass through
to the SSHAgent constructor (eg `timeout`)
### `cliSigner(options);`
Signs requests using a key located either in the OpenSSH agent, or found in
the filesystem under `$HOME/.ssh` (or its equivalent on your platform).
This is generally intended for use with CLI utilities (eg the `sdc-listmachines`
tool and family), hence the name.
- `options`: an Object containing properties:
- `keyId`: a String, fingerprint of the key to retrieve or find
- `user`: a String, SDC login name to be used
- `subuser`: an optional String, SDC subuser login name
- `sshAgentOpts`: an optional Object, any additional options to pass through
to the SSHAgent constructor (eg `timeout`)
- `algorithm`: DEPRECATED, an optional String, the signing algorithm to use.
If this does not match up with the algorithm of the key (once
it is located), an Error will be thrown.
(The `algorithm` option is deprecated as its backwards-compatible behaviour is
to apply only to keys that were found on disk, not in the SSH agent. If you have
a compelling use case for a replacement for this option in future, please open
an issue on this repo).
The `keyId` fingerprint does not necessarily need to be the exact format
(hex MD5) as sent to the server -- it can be in any fingerprint format supported
by the [`sshpk`](https://github.com/arekinath/node-sshpk) library.
As of version 2.0.0, an invalid fingerprint (one that can never match any key,
because, for example, it contains invalid characters) will produce an exception
immediately rather than returning a `sign` function.
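As a point of reference, the classic hex-MD5 format is simply the MD5 digest of the raw public-key blob rendered as colon-separated hex bytes — a sketch (the blob below is a stand-in, not a real key):

```python
import hashlib

def md5_fingerprint(key_blob: bytes) -> str:
    """Render the hex-MD5 fingerprint of a raw public-key blob."""
    digest = hashlib.md5(key_blob).digest()
    return ":".join(f"{b:02x}" for b in digest)

# Stand-in bytes; a real blob would be the base64-decoded key body.
print(md5_fingerprint(b"not-a-real-key-blob"))
```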
Note that the `cliSigner` and `sshAgentSigner` are not suitable for server
applications, or any other system where the performance degradation necessary
to interact with SSH is not acceptable; put another way, you should only use
them for interactive tooling, such as the CLI that ships with node-smartdc.
## License
MIT.
## Bugs
See <https://github.com/joyent/node-smartdc-auth/issues>.
|
#!/bin/bash
irissession $ISC_PACKAGE_INSTANCENAME -U USER <<EOF
sys
sys
Do \$System.OBJ.Load("/tmp/deps/src/community/fhirAnalytics/samples/Setup.cls", "ck")
Do ##class(community.fhirAnalytics.samples.Setup).Run()
halt
EOF
|
#!/bin/sh
TERMWIN="xfce4-terminal"
CMD="url"
# Use with url_externalapp firefox extension
# Idiotic parsing of data sent by Firefox (see
# https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging#App_side)
t=$(cat | dd bs=1 skip=4 | tr -d '"')
xdotool search --onlyvisible "Terminal - " windowactivate && \
sleep 0.2 && \
xdotool key Ctrl+Shift+t && \
sleep 0.2 && \
xdotool type --clearmodifiers --delay 15 "$CMD \"$t\""
|
# rhelsearch
See which packages are "officially" packaged in RHEL by using the Fedora
Infrastructure's
[JSON files](https://infrastructure.fedoraproject.org/repo/json/pkg_el7.json).
## How it works
We cache the JSON files in `~/.cache/rhelsearch/`. If the file is over a week
old, we pull the latest files before doing anything else. Otherwise, we use the
cached file. If we're unable to pull the latest files, or if `--offline` is
passed, we simply use the files in cache, if they exist. Otherwise, we give up.
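The freshness check boils down to comparing the cached file's mtime against a one-week cutoff. A sketch of that decision logic (function names here are illustrative, not taken from the tool):

```python
import os
import time

WEEK_SECONDS = 7 * 24 * 60 * 60

def cache_is_fresh(path: str, max_age: float = WEEK_SECONDS) -> bool:
    """True if the cache file exists and its mtime is within max_age seconds."""
    try:
        return (time.time() - os.path.getmtime(path)) < max_age
    except OSError:  # cache file missing
        return False

def pick_source(cache_path: str, offline: bool = False) -> str:
    """Decide where data should come from, per the policy above."""
    if not offline and not cache_is_fresh(cache_path):
        return "download"            # pull the latest files first
    if os.path.exists(cache_path):
        return "cache"               # use the cached file
    return "give-up"
```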
## License
BSD-2
|
# mip-toggle Show/Hide
Provides show/hide functionality for elements.
Title|Content
----|----
Type|General
Supported layouts|responsive, fixed-height, fill, container, fixed
Required script|https://c.mipcdn.com/static/v1/mip-toggle/mip-toggle.js
## Examples
### Basic usage
You can use the `toggle`, `show`, or `hide` events to control whether the `mip-toggle` component is shown or hidden.
```html
<button on="tap:my-mip-toggle-1.toggle">Show/Hide</button>
<button on="tap:my-mip-toggle-1.show">Show</button>
<button on="tap:my-mip-toggle-1.hide">Hide</button>
<mip-toggle id="my-mip-toggle-1">
<p>lorem ipsum</p>
</mip-toggle>
```
### Auto hide
You can set a custom hide delay; after being shown for the specified time, the element will be hidden automatically. Note that this only applies to `show`.
```html
<button on="tap:my-mip-toggle-2.toggle">Show/Hide</button>
<button on="tap:my-mip-toggle-2.show">Show</button>
<button on="tap:my-mip-toggle-2.hide">Hide</button>
<mip-toggle id="my-mip-toggle-2" hidetimeout="500" layout="nodisplay">
<p>lorem ipsum</p>
</mip-toggle>
```
### Custom display
You can customize the display value used when showing, to meet special needs.
```html
<button on="tap:my-mip-toggle-3.toggle">Show/Hide</button>
<p>
<mip-toggle id="my-mip-toggle-3" display="inline">lorem ipsum</mip-toggle> dolor sit amet
</p>
```
### Auto hide (event level)
You can pass a hide delay as an event argument to `show` to override the default.
```html
<button on="tap:my-mip-toggle-4.show(500)">Show for 500 ms</button>
<button on="tap:my-mip-toggle-4.show(1000)">Show for 1 s</button>
<button on="tap:my-mip-toggle-4.show(3000)">Show for 3 s</button>
<button on="tap:my-mip-toggle-4.show(Infinity)">No auto hide (overrides the default)</button>
<mip-toggle id="my-mip-toggle-4" layout="nodisplay" hidetimeout="2000">
<p>lorem ipsum</p>
</mip-toggle>
```
### Custom animation
You can declare `enterclass` on the component to decide which class is added or removed on show/hide. In this case, the element's `display` no longer changes automatically.
```html
<style>
.my-mip-toggle {
transition: 2s opacity;
opacity: 0;
}
.my-mip-toggle-enter {
opacity: 1;
}
</style>
<button on="tap:my-mip-toggle-5.toggle">Show/Hide</button>
<mip-toggle id="my-mip-toggle-5" enterclass="my-mip-toggle-enter" class="my-mip-toggle">
<p>lorem ipsum</p>
</mip-toggle>
```
## Attributes
### hidetimeout
Description: auto-hide delay in ms. When hidetimeout is set, the element is automatically hidden the specified time after each `show`. `toggle` is not affected by this attribute.
Required: no
Type: integer
Range: 0–Infinity
Default: Infinity
### display
Description: the display value to apply when the element is shown.
Required: no
Type: string
Range: CSS display property
Default: block
### enterclass
Description: the class to apply when the element is shown. Note that once this is set, the component no longer sets display.
Required: no
Type: string
Range: class
Default: none
|
---
title: Plots
parent: Display Widgets
has_children: true
nav_order: 10
---
## {{page.title}}
This section describes different widgets
that are available for the graphical display of data.
|
import express from "express";
import chalk from 'chalk';
import http from 'http';
import { middlewares } from 'middleware';
//apis
import swaggerJsDoc from 'swagger-jsdoc';
import swaggerUi from 'swagger-ui-express'
import { swaggerOptions } from 'docs/swagger';
//loaders y middleware
import useLoaders from "config/loaders";
import useSocket from 'config/socket';
import { config } from 'dotenv';
//import router
import createRouter from "routes";
config();
const app = express();
const server = http.createServer(app)
const port = parseInt(process.env.PORT!) || 3000
//settings
useLoaders(app, port);
app.use(express.static('public'));
const swaggerDocs = swaggerJsDoc(swaggerOptions(port));
//rutas
app.use('/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocs))
createRouter(app)
//err
app.use(middlewares.errorHandler);
app.use(middlewares.notFoundHandler);
server.listen(app.get('port'), () => {
console.log(chalk.blue('INFO: ') + chalk.green(`Server started at http://localhost:${app.get('port')}`))
});
useSocket(app,server)
|
# GitHub GraphQL Demo
> An example GitHub dashboard implemented using REST & GraphQL for comparison
[](https://gr2m.github.io/github-graphql-demo)
This static HTML dashboard is to showcase the benefits of GitHub’s [GraphQL API](https://developer.github.com/v4/)
over its [REST API](https://developer.github.com/v3/).
The dashboard shows
* the authenticated user (based on [private access token](https://github.com/settings/tokens/new))
* A list of repositories (4 by default)
* For each repository
* A list of recent issues (3 by default)
* For each issue: Name and company for user tooltip
* 5 most recent stargazers.
* For each stargazer: Name and company for user tooltip
Using the REST API, up to 42 requests are required to render the dashboard.
GraphQL requires a single request, which looks like this:
```graphql
query myGithubDashboard($numRepositories: Int = 4, $numIssues: Int = 3) {
me: viewer {
...userInfo
repositories(
first: $numRepositories
orderBy: { field: UPDATED_AT, direction: DESC }
affiliations: OWNER
) {
totalCount
mostStarred: nodes {
name
url
stargazers(last: 5) {
totalCount
mostRecent: nodes {
...userInfo
}
}
issues(
first: $numIssues
states: [OPEN, CLOSED]
orderBy: { field: UPDATED_AT, direction: DESC }
) {
mostRecent: nodes {
title
url
author {
...userInfo
}
}
}
}
}
}
}
fragment userInfo on User {
login
url
avatarUrl(size: 30)
name
company
location
}
```
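For the curious, the "up to 42" figure can be tallied from the defaults above (4 repositories, 3 issues and 5 stargazers each), assuming one REST call per list and per user profile — a back-of-the-envelope breakdown, not a count taken from the dashboard's code:

```python
repos, issues, stargazers = 4, 3, 5

total = (
    1                     # the authenticated user
    + 1                   # the list of repositories
    + repos               # issue list, one call per repository
    + repos               # stargazer list, one call per repository
    + repos * issues      # issue-author profiles for tooltips
    + repos * stargazers  # stargazer profiles for tooltips
)
print(total)
```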
You can run the query in the [GitHub GraphQL API](https://developer.github.com/v4/explorer/).
## License
[MIT](LICENSE)
|
<?php
namespace App\Rabbit\Validator\Traits;
use App\Rabbit\Validator\AbstractValidator;
trait ValidatorAwareTrait
{
/**
* @var AbstractValidator
*/
protected $validator;
protected function setValidator(AbstractValidator $validator): void
{
$this->validator = $validator;
}
protected function getValidator(): AbstractValidator
{
return $this->validator;
}
protected function validateOrFail($object): void
{
$validator = $this->getValidator()->validate($object);
if ($validator->fails()) {
throw new \InvalidArgumentException($validator->messages()->first());
}
}
}
|
using System;
using Meta.Numerics;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NetComplex = System.Numerics.Complex;
namespace Test
{
[TestClass]
public class ComplexInteropTest
{
[TestMethod]
public void ComplexNetToMy ()
{
NetComplex netComplex = new NetComplex(1.0, -2.0);
Complex myComplex = netComplex;
Assert.IsTrue(netComplex.Real == myComplex.Re);
Assert.IsTrue(netComplex.Imaginary == myComplex.Im);
Assert.IsTrue(ComplexMath.Sqrt(netComplex) == ComplexMath.Sqrt(myComplex));
}
[TestMethod]
public void ComplexMyToNet () {
Complex myComplex = new Complex(-1.0, 2.0);
NetComplex netComplex = myComplex;
Assert.IsTrue(netComplex.Real == myComplex.Re);
Assert.IsTrue(netComplex.Imaginary == myComplex.Im);
Assert.IsTrue(NetComplex.Sqrt(netComplex) == NetComplex.Sqrt(myComplex));
}
}
}
|
;; -*-lisp-*-
;;
;; p22.lisp
;; P22 (*) Create a list containing all integers within a given range.
;; If first argument is smaller (??? they must have meant "bigger")
;; than second, produce a list in decreasing order.
;; Example:
;; * (range 4 9)
;; (4 5 6 7 8 9)
(defun my-range (n k)
(cond ((eql n k) (list k))
((< n k) (append (list n)
(my-range (+ n 1) k)))
((> n k) (append (list n)
(my-range (- n 1) k)))))
|
/**
* @file sys.c
* @date Tue Jul 20 16:19:46 2010
*
* @brief Code to call kernel time routines and also display server statistics.
*
*
*/
#include "../ptpd.h"
int
isTimeInternalNegative(const TimeInternal * p)
{
return (p->seconds < 0) || (p->nanoseconds < 0);
}
int
snprint_TimeInternal(char *s, int max_len, const TimeInternal * p)
{
int len = 0;
if (isTimeInternalNegative(p))
len += snprintf(&s[len], max_len - len, "-");
len += snprintf(&s[len], max_len - len, "%d.%09d",
abs(p->seconds), abs(p->nanoseconds));
return len;
}
int
snprint_ClockIdentity(char *s, int max_len, const Octet uuid[PTP_UUID_LENGTH], const char *info)
{
int len = 0;
int i;
if (info)
len += snprintf(&s[len], max_len - len, "%s", info);
for (i = 0; ;) {
len += snprintf(&s[len], max_len - len, "%02x", (unsigned char) uuid[i]);
if (++i >= PTP_UUID_LENGTH)
break;
// uncomment the line below to print a separator after each byte except the last one
// len += snprintf(&s[len], max_len - len, "%s", "-");
}
return len;
}
int
snprint_PortIdentity(char *s, int max_len, const Octet uuid[PTP_UUID_LENGTH],
UInteger16 portId, const char *info)
{
int len = 0;
if (info)
len += snprintf(&s[len], max_len - len, "%s", info);
len += snprint_ClockIdentity(&s[len], max_len - len, uuid, NULL);
len += snprintf(&s[len], max_len - len, ":%02x", portId);
return len;
}
void
message(int priority, const char *format, ...)
{
extern RunTimeOpts rtOpts;
va_list ap;
va_start(ap, format);
if(rtOpts.useSysLog) {
static Boolean logOpened;
if(!logOpened) {
openlog("ptpd", 0, LOG_USER);
logOpened = TRUE;
}
vsyslog(priority, format, ap);
} else {
fprintf(stderr, "(ptpd %s) ",
priority == LOG_EMERG ? "emergency" :
priority == LOG_ALERT ? "alert" :
priority == LOG_CRIT ? "critical" :
priority == LOG_ERR ? "error" :
priority == LOG_WARNING ? "warning" :
priority == LOG_NOTICE ? "notice" :
priority == LOG_INFO ? "info" :
priority == LOG_DEBUG ? "debug" :
"???");
vfprintf(stderr, format, ap);
}
va_end(ap);
}
char *
translatePortState(PtpClock *ptpClock)
{
char *s;
switch(ptpClock->port_state) {
case PTP_INITIALIZING: s = "init"; break;
case PTP_FAULTY: s = "flt"; break;
case PTP_LISTENING: s = "lstn"; break;
case PTP_PASSIVE: s = "pass"; break;
case PTP_UNCALIBRATED: s = "uncl"; break;
case PTP_SLAVE: s = "slv"; break;
case PTP_PRE_MASTER: s = "pmst"; break;
case PTP_MASTER: s = "mst"; break;
case PTP_DISABLED: s = "dsbl"; break;
default: s = "?"; break;
}
return s;
}
void
displayStats(RunTimeOpts * rtOpts, PtpClock * ptpClock)
{
static int start = 1;
static char sbuf[SCREEN_BUFSZ];
int len = 0;
struct timeval now;
char time_str[MAXTIMESTR];
if (start && rtOpts->csvStats) {
start = 0;
printf("timestamp, state, clock ID, one way delay, "
"offset from master, master to slave, "
"slave to master, drift, variance");
fflush(stdout);
}
memset(sbuf, ' ', sizeof(sbuf));
gettimeofday(&now, 0);
strftime(time_str, MAXTIMESTR, "%Y-%m-%d %X", localtime(&now.tv_sec));
len += snprintf(sbuf + len, sizeof(sbuf) - len, "%s%s:%06d, %s",
rtOpts->csvStats ? "\n" : "\rstate: ",
time_str, (int)now.tv_usec,
translatePortState(ptpClock));
if (ptpClock->port_state == PTP_SLAVE) {
len += snprint_PortIdentity(sbuf + len, sizeof(sbuf) - len,
ptpClock->parent_uuid, ptpClock->parent_port_id, ", ");
/*
* if grandmaster ID differs from parent port ID then also
* print GM ID
*/
if (memcmp(ptpClock->grandmaster_uuid_field,
ptpClock->parent_uuid, PTP_UUID_LENGTH)) {
len += snprint_ClockIdentity(sbuf + len,
sizeof(sbuf) - len,
ptpClock->grandmaster_uuid_field,
" GM:");
}
len += snprintf(sbuf + len, sizeof(sbuf) - len, ", ");
if (!rtOpts->csvStats)
len += snprintf(sbuf + len,
sizeof(sbuf) - len, "owd: ");
len += snprint_TimeInternal(sbuf + len, sizeof(sbuf) - len,
&ptpClock->one_way_delay);
len += snprintf(sbuf + len, sizeof(sbuf) - len, ", ");
if (!rtOpts->csvStats)
len += snprintf(sbuf + len, sizeof(sbuf) - len,
"ofm: ");
len += snprint_TimeInternal(sbuf + len, sizeof(sbuf) - len,
&ptpClock->offset_from_master);
len += snprintf(sbuf + len, sizeof(sbuf) - len,
", %s%d.%09d" ", %s%d.%09d",
rtOpts->csvStats ? "" : "stm: ",
ptpClock->slave_to_master_delay.seconds,
abs(ptpClock->slave_to_master_delay.nanoseconds),
rtOpts->csvStats ? "" : "mts: ",
ptpClock->master_to_slave_delay.seconds,
abs(ptpClock->master_to_slave_delay.nanoseconds));
len += snprintf(sbuf + len, sizeof(sbuf) - len,
", %s%d",
rtOpts->csvStats ? "" : "drift: ",
ptpClock->observed_drift);
len += snprintf(sbuf + len, sizeof(sbuf) - len,
", %s%d",
rtOpts->csvStats ? "" : "var: ",
ptpClock->observed_variance);
}
else {
if (ptpClock->port_state == PTP_MASTER) {
len += snprint_ClockIdentity(sbuf + len, sizeof(sbuf) - len,
ptpClock->clock_uuid_field, " (ID:");
len += snprintf(sbuf + len, sizeof(sbuf) - len, ")");
}
}
write(1, sbuf, rtOpts->csvStats ? len : SCREEN_MAXSZ + 1);
}
Boolean
nanoSleep(TimeInternal * t)
{
struct timespec ts, tr;
ts.tv_sec = t->seconds;
ts.tv_nsec = t->nanoseconds;
if (nanosleep(&ts, &tr) < 0) {
t->seconds = tr.tv_sec;
t->nanoseconds = tr.tv_nsec;
return FALSE;
}
return TRUE;
}
void
getTime(TimeInternal * time)
{
#if defined(linux)
struct timeval tv;
gettimeofday(&tv, 0);
time->seconds = tv.tv_sec;
time->nanoseconds = tv.tv_usec * 1000;
#else /* FreeBSD */
struct timespec tp;
if (clock_gettime(CLOCK_REALTIME, &tp) < 0) {
PERROR("clock_gettime() failed, exiting.");
exit(0);
}
time->seconds = tp.tv_sec;
time->nanoseconds = tp.tv_nsec;
#endif /* FreeBSD or Linux */
}
void
setTime(TimeInternal * time)
{
struct timeval tv;
tv.tv_sec = time->seconds;
tv.tv_usec = time->nanoseconds / 1000;
settimeofday(&tv, 0);
NOTIFY("resetting system clock to %ds %dns\n",
time->seconds, time->nanoseconds);
}
UInteger16
getRand(UInteger32 * seed)
{
return rand_r((unsigned int *)seed);
}
Boolean
adjFreq(Integer32 adj)
{
struct timex t;
if (adj > ADJ_FREQ_MAX)
adj = ADJ_FREQ_MAX;
else if (adj < -ADJ_FREQ_MAX)
adj = -ADJ_FREQ_MAX;
t.modes = MOD_FREQUENCY;
t.freq = adj * ((1 << 16) / 1000);
return !adjtimex(&t);
}
|
using System.Diagnostics.CodeAnalysis;
namespace Grauenwolf.TravellerTools
{
[SuppressMessage("Microsoft.Design", "CA1008:EnumsShouldHaveZeroValue")]
public enum Edition
{
Classic = 1997,
MegaTraveller = 1986,
NewEra = 1992,
T4 = 1996,
Gurps = 1998,
T20 = 2006,
Hero = 2007,
Mongoose = 2008,
T5 = 2013,
Mongoose2 = 2016,
CT = 1997,
MT = 1986,
[SuppressMessage("Microsoft.Naming", "CA1709:IdentifiersShouldBeCasedCorrectly", MessageId = "TNE")]
TNE = 1992,
GT = 1998,
TH = 2007,
[SuppressMessage("Microsoft.Naming", "CA1709:IdentifiersShouldBeCasedCorrectly", MessageId = "MGT")]
MGT = 2008,
[SuppressMessage("Microsoft.Naming", "CA1709:IdentifiersShouldBeCasedCorrectly", MessageId = "MGT")]
MGT2 = 2016
}
}
|
package com.vivint.ceph.kvstore
import akka.Done
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.SourceQueueWithComplete
import java.io.File
import java.util.concurrent.Executors
import scala.collection.immutable.Seq
import scala.concurrent.{ ExecutionContext, Future, Promise }
import akka.stream.scaladsl.Source
/** For use in tests.
*/
class MemStore extends KVStore {
implicit private val ec = ExecutionContext.fromExecutor(
Executors.newSingleThreadExecutor())
private var state = Map.empty[File, Array[Byte]]
private var subscriptions = Set.empty[SourceQueueWithComplete[File]]
private def createFolders(path: File): Unit = {
val parent = path.getParentFile
if ((parent != null) && (!state.contains(parent))) {
state = state + (parent -> Array.empty)
createFolders(parent)
}
}
private [kvstore] def removeSubscription(queue: SourceQueueWithComplete[File]) = Future {
subscriptions = subscriptions - queue
}
private [kvstore] def addSubscription(queue: SourceQueueWithComplete[File]) = Future {
subscriptions = subscriptions + queue
}
def fileFor(path: String) =
new File("/", path)
def create(path: String, data: Array[Byte]): Future[Unit] = Future {
val output = fileFor(path)
if (state.contains(output))
throw new RuntimeException(s"path ${path} already exists")
if (! state.contains(output.getParentFile))
throw new RuntimeException(s"no such parent for ${output}: ${output.getParentFile}")
state = state.updated(output, data)
subscriptions.foreach(_.offer(output))
}
def set(path: String, data: Array[Byte]): Future[Unit] = Future {
val output = fileFor(path)
if (!state.contains(output))
throw new RuntimeException(s"path ${path} doesn't exist")
state = state.updated(output, data)
subscriptions.foreach(_.offer(output))
}
def createOrSet(path: String, data: Array[Byte]): Future[Unit] = Future {
val output = fileFor(path)
createFolders(output)
state = state.updated(output, data)
subscriptions.foreach(_.offer(output))
}
def delete(path: String): Future[Unit] = Future {
val deleteFile = fileFor(path)
if (!state.contains(deleteFile))
throw new RuntimeException(s"path ${path} doesn't exist")
state = state - deleteFile
subscriptions.foreach(_.offer(deleteFile))
}
def get(path: String): Future[Option[Array[Byte]]] = Future {
val input = fileFor(path)
state.get(input)
}
private [kvstore] def get(path: File): Future[Option[Array[Byte]]] = Future {
state.get(path)
}
def lock(path: String): Future[KVStore.CancellableWithResult] = Future {
val lockFile = fileFor(path)
if (state.contains(lockFile)) {
throw new RuntimeException("Couldn't acquire lock in memory lock")
}
state = state.updated(lockFile, Array.empty)
val p = Promise[Done]
new KVStore.CancellableWithResult {
def result = p.future
def cancel(): Boolean = {
delete(path)
p.trySuccess(Done); true
}
def isCancelled = p.isCompleted
}
}
def children(path: String): Future[Seq[String]] = Future {
val parent = fileFor(path)
state.keys.
filter { _.getParentFile == parent }.
map ( _.getName ).
toList
}
def watch(path: String, bufferSize: Int = 1): Source[Option[Array[Byte]], KVStore.CancellableWithResult] = {
val input = fileFor(path)
Source.queue[File](bufferSize, OverflowStrategy.dropHead).
mapMaterializedValue { queue =>
addSubscription(queue).onComplete { _ => queue.offer(input) }
var _isCancelled = false
new KVStore.CancellableWithResult {
def result = queue.watchCompletion()
def cancel(): Boolean = { queue.complete(); true }
def isCancelled = _isCancelled
}
}.
filter(_ == input).
mapAsync(1)(get)
}
}
|
package ank.service;
public class UserAssistorService {
private UserAssistorService () {
}
public static UserAssistorService init(String[] args){
        ArgValidator argValidator = ArgValidator.init(args).validate();
        return new UserAssistorService();
}
}
|
// "Migrate lambda syntax in whole project" "true"
val a = { <caret>(): Int ->
val b = { (): Int -> 5 }
b()
}
|
"""
Utilities for real-case harvesting scenario
"""
from collections.abc import Mapping
import os
import json
HARVEST_SOURCE_NAME = 'dummy-harvest-source'
class HarvestSource(Mapping):
"""
Provides dict-like access to harvest sources
"""
# Default harvest source..
source_name = HARVEST_SOURCE_NAME
def __init__(self, base_dir, day):
"""
:param day:
The day from which to get data.
Full name, like 'day-00', 'day-01', ..
"""
self.base_dir = base_dir
self.day = day
def __getitem__(self, name):
if name not in self.__iter__():
raise KeyError("No such object type: {0!r}".format(name))
return HarvestSourceCollection(self, name)
def __iter__(self):
"""List object types"""
folder = os.path.join(self.base_dir, self.day)
for name in os.listdir(folder):
# Skip hidden files
if name.startswith('.'):
continue
# Skip non-directories
path = os.path.join(folder, name)
if not os.path.isdir(path):
continue
yield name
def __len__(self):
return len(list(self.__iter__()))
class HarvestSourceCollection(Mapping):
"""
Wrapper around a "collection" of items in the "harvest source".
"""
def __init__(self, source, name):
self.source = source
self.name = name
def __getitem__(self, name):
if name not in self.__iter__():
raise KeyError("There is no object of type={0!r} id={1!r}"
.format(self.name, name))
folder = os.path.join(self.source.base_dir, self.source.day, self.name)
path = os.path.join(folder, name)
with open(path, 'r') as f:
data = json.load(f)
if 'id' in data:
if data['id'] != name:
raise ValueError("Mismatching dataset id -- bad data?")
data['id'] = name # make sure we pass it back
return data
def __iter__(self):
"""List object ids"""
folder = os.path.join(self.source.base_dir, self.source.day, self.name)
for name in os.listdir(folder):
# Skip hidden files
if name.startswith('.'):
continue
# Skip non-files
path = os.path.join(folder, name)
if not os.path.isfile(path):
continue
yield name
def __len__(self):
return len(list(self.__iter__()))
|
package in.clouthink.daas.sbb.account.rest.dto;
import in.clouthink.daas.sbb.account.domain.model.Gender;
import in.clouthink.daas.sbb.account.domain.model.User;
import in.clouthink.daas.sbb.shared.util.DateTimeUtils;
import io.swagger.annotations.ApiModel;
import org.springframework.beans.BeanUtils;
import java.util.Date;
@ApiModel("用户摘要信息")
public class UserSummary {
static void convert(User user, UserSummary result) {
BeanUtils.copyProperties(user, result, "expired", "locked");
if (user.getBirthday() != null) {
result.setAge(DateTimeUtils.howOldAreYou(user.getBirthday()));
}
}
public static UserSummary from(User user) {
if (user == null) {
return null;
}
UserSummary result = new UserSummary();
convert(user, result);
return result;
}
private String id;
private String displayName;
private String cellphone;
private String username;
private String avatarId;
private String avatarUrl;
private Gender gender;
private Date birthday;
private Integer age;
private String province;
private String city;
private String signature;
private boolean enabled;
private Date createdAt;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getDisplayName() {
return displayName;
}
public void setDisplayName(String displayName) {
this.displayName = displayName;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getCellphone() {
return cellphone;
}
public void setCellphone(String cellphone) {
this.cellphone = cellphone;
}
public String getAvatarId() {
return avatarId;
}
public void setAvatarId(String avatarId) {
this.avatarId = avatarId;
}
public String getAvatarUrl() {
return avatarUrl;
}
public void setAvatarUrl(String avatarUrl) {
this.avatarUrl = avatarUrl;
}
public Gender getGender() {
return gender;
}
public void setGender(Gender gender) {
this.gender = gender;
}
public Date getBirthday() {
return birthday;
}
public void setBirthday(Date birthday) {
this.birthday = birthday;
}
public Integer getAge() {
return age;
}
public void setAge(Integer age) {
this.age = age;
}
public String getProvince() {
return province;
}
public void setProvince(String province) {
this.province = province;
}
public String getCity() {
return city;
}
public void setCity(String city) {
this.city = city;
}
public boolean isEnabled() {
return enabled;
}
public void setEnabled(boolean enabled) {
this.enabled = enabled;
}
public String getSignature() {
return signature;
}
public void setSignature(String signature) {
this.signature = signature;
}
public Date getCreatedAt() {
return createdAt;
}
public void setCreatedAt(Date createdAt) {
this.createdAt = createdAt;
}
}
|
package tin.thurein.data.local.daos
import androidx.room.*
import io.reactivex.Flowable
import io.reactivex.Maybe
import io.reactivex.Single
import tin.thurein.data.local.DatabaseConstants
import tin.thurein.data.local.entities.JobEntity
@Dao
abstract class JobDao {
@Insert(onConflict = OnConflictStrategy.REPLACE)
abstract fun insertAll(jobEntities: List<JobEntity>): List<Long>
@Update
abstract fun update(jobEntity: JobEntity): Single<Int>
@Query("SELECT * FROM ${DatabaseConstants.JOB_TABLE_NAME};")
abstract fun getJobs(): Flowable<List<JobEntity>>
@Query("SELECT * FROM ${DatabaseConstants.JOB_TABLE_NAME} WHERE ${DatabaseConstants.IS_ACCEPTED} = :isAccepted;")
abstract fun getJobsByIsAcceptable(isAccepted: Boolean): Flowable<List<JobEntity>>
@Query("DELETE FROM ${DatabaseConstants.JOB_TABLE_NAME};")
abstract fun deleteAllJobs(): Int
@Transaction
open fun saveAllJobs(jobEntities: List<JobEntity>): List<Long> {
deleteAllJobs()
return insertAll(jobEntities)
}
}
|
#! /usr/bin/perl
#
# jquery_perl/create/xml_perl_create.pl
#
#
# May/20/2011
#
# ----------------------------------------------------------------------
use strict;
use warnings;
use utf8;
#
use Encode;
use XML::Simple;
#
use lib '/var/www/data_base/common/perl_common';
use file_io;
use text_manipulate;
use xml_manipulate;
#
# ----------------------------------------------------------------------
my $file_text = "/var/tmp/xml_file/cities.xml";
my %dict_aa = data_prepare_proc ();
#
my $xml_str = xml_manipulate::dict_to_xml_proc (%dict_aa);
#
file_io::string_write_proc ($file_text,encode ('utf-8',$xml_str));
#
print "Content-type: text/html\n\n";
#
print "*** OK ***\n";
# ----------------------------------------------------------------------
sub data_prepare_proc
{
my %dict_aa;
%dict_aa = text_manipulate::dict_append_proc
('t2261','静岡',51200,'2005-2-12',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2262','浜松',34700,'2005-8-15',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2263','沼津',58600,'2005-6-17',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2264','三島',43200,'2005-8-22',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2265','富士',27100,'2005-2-28',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2266','熱海',94700,'2005-5-9',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2267','富士宮',35100,'2005-4-10',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2268','藤枝',85600,'2005-10-8',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2269','御殿場',64700,'2005-5-21',%dict_aa);
%dict_aa = text_manipulate::dict_append_proc
('t2270','島田',82300,'2005-9-14',%dict_aa);
    return %dict_aa;
}
# ----------------------------------------------------------------------
|
// Copyright 2017-2018 the authors. See the 'Copyright and license' section of the
// README.md file at the top-level directory of this repository.
//
// Licensed under the Apache License, Version 2.0 (the LICENSE-APACHE file) or
// the MIT license (the LICENSE-MIT file) at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//! Allocator-safe formatting and assertion macros.
//!
//! [`alloc-fmt`] provides formatting and assertion macros similar to the standard library's
//! [`println`], [`eprintln`], [`panic`], [`assert`], [`debug_assert`], etc which are safe for use in a
//! global allocator. The standard library's formatting and assertion macros can allocate, meaning
//! that if they are used in the implementation of a global allocator, it can cause infinite
//! recursion. The macros in this crate avoid this problem by either not allocating (in the case of
//! formatting macros) or detecting recursion (in the case of panic and assertion macros).
//!
//! [`alloc-fmt`]: index.html
//! [`println`]: https://doc.rust-lang.org/std/macro.println.html
//! [`eprintln`]: https://doc.rust-lang.org/std/macro.eprint.html
//! [`panic`]: https://doc.rust-lang.org/std/macro.panic.html
//! [`assert`]: https://doc.rust-lang.org/std/macro.assert.html
//! [`debug_assert`]: https://doc.rust-lang.org/std/macro.debug_assert.html
//!
//! # Usage and Behavior
//! The macros in this crate are named `alloc_xxx`, where `xxx` is the name of the equivalent
//! standard library macro (e.g., [`alloc_println`], [`alloc_debug_assert`], etc).
//!
//! The behavior of the formatting macros is identical to the behavior of their standard library
//! counterparts. The behavior of the panic and assertion macros is somewhat different. When an
//! assertion fails or an explicit panic is invoked, a message is first unconditionally printed to
//! stderr (in case further processing causes a crash). A stack trace is then printed, and the
//! process aborts. If recursion is detected during the printing of the stack trace, the process
//! immediately aborts. Recursion can happen if, for example, the code that computes the stack trace
//! allocates, triggering further assertion failures or panics. This check is conservative - it may
//! sometimes detect recursion when there is none.
//!
//! Unlike the standard library assertion and panic macros, the stack is not unwound, and once an
//! assertion failure or panic triggers, it cannot be caught or aborted.
//!
//! [`alloc_println`]: macro.alloc_println.html
//! [`alloc_debug_assert`]: macro.alloc_debug_assert.html
#![no_std]
#![feature(core_intrinsics)]
#[cfg(feature = "print-backtrace")]
extern crate backtrace;
extern crate libc;
extern crate spin;
use core::fmt::{Arguments, Result as FmtResult, Write};
use core::sync::atomic::{AtomicBool};
use core::sync::atomic::Ordering::SeqCst;
// Import items that macros need to reference. They will reference them as $crate::foo instead of
// core::foo since core isn't guaranteed to be imported in the user's scope.
#[doc(hidden)]
pub use core::mem::drop;
#[doc(hidden)]
pub use core::fmt::write;
#[doc(hidden)]
pub static STDERR_MTX: spin::Mutex<()> = spin::Mutex::new(());
#[doc(hidden)]
pub static STDOUT_MTX: spin::Mutex<()> = spin::Mutex::new(());
#[doc(hidden)]
pub static STDOUT: libc::c_int = 1;
#[doc(hidden)]
pub static STDERR: libc::c_int = 2;
#[doc(hidden)]
pub struct FDWriter(pub libc::c_int);
impl Write for FDWriter {
#[inline]
fn write_str(&mut self, s: &str) -> FmtResult {
let mut buf = s.as_bytes();
while !buf.is_empty() {
unsafe {
#[cfg(not(windows))]
let written = libc::write(self.0, buf.as_ptr() as *const _, buf.len());
#[cfg(windows)]
let written =
libc::write(self.0, buf.as_ptr() as *const _, buf.len() as libc::c_uint);
if written < 1 {
core::intrinsics::abort();
}
buf = &buf[written as usize..];
}
}
Ok(())
}
}
// We can't simply 'pub use core::intrinsics::abort' because its use requires
// feature(core_intrinsics), which the user would have to enable.
#[doc(hidden)]
pub unsafe fn abort() -> ! {
core::intrinsics::abort();
}
#[doc(hidden)]
#[macro_export]
macro_rules! print_internal {
($file:expr, $mtx:expr, $fmt:expr) => {
// in case we're called from inside an unsafe block
#[allow(unused_unsafe)]
unsafe {
let guard = $mtx.lock();
let mut fd = $crate::FDWriter($file);
let _ = $crate::write(&mut fd, $fmt).map_err(|_| {
$crate::abort();
});
// explicitly drop the guard so we're guaranteed it lives at least this long;
// since we don't actually use the guard, the compiler could theoretically
// drop it sooner.
$crate::drop(guard);
}
}
}
#[macro_export]
macro_rules! alloc_print {
($($arg:tt)*) => (print_internal!($crate::STDOUT, $crate::STDOUT_MTX, format_args!($($arg)*)))
}
#[macro_export]
macro_rules! alloc_eprint {
($($arg:tt)*) => (print_internal!($crate::STDERR, $crate::STDERR_MTX, format_args!($($arg)*)))
}
#[macro_export]
macro_rules! alloc_println {
() => (alloc_print!("\n"));
($fmt:expr) => (alloc_print!(concat!($fmt, "\n")));
($fmt:expr, $($arg:tt)*) => (alloc_print!(concat!($fmt, "\n"), $($arg)*));
}
#[macro_export]
macro_rules! alloc_eprintln {
() => (alloc_eprint!("\n"));
($fmt:expr) => (alloc_eprint!(concat!($fmt, "\n")));
($fmt:expr, $($arg:tt)*) => (alloc_eprint!(concat!($fmt, "\n"), $($arg)*));
}
// Sometimes backtraces allocate. If that allocation causes an assertion failure, then we can get
// into an infinite recursion scenario. In order to detect this, we set this global bool to true,
// and load it before printing a backtrace. If we find that it is true, we immediately abort
// (essentially short-circuiting what would eventually happen if print_backtrace_and_abort
// successfully finished executing).
#[doc(hidden)]
pub static IS_PANICKING: AtomicBool = AtomicBool::new(false);
#[macro_export]
macro_rules! alloc_panic {
() => (alloc_panic!("explicit panic"));
($msg:expr) => ({
$crate::panic(&(format_args!($msg), file!(), line!(), column!()))
});
($fmt:expr, $($arg:tt)*) => ({
$crate::panic(&(format_args!($fmt, $($arg)*), file!(), line!(), column!()))
})
}
#[doc(hidden)]
#[inline(never)]
#[cold]
pub fn panic(fmt_file_line_col: &(Arguments, &'static str, u32, u32)) -> ! {
let (fmt, file, line, col) = *fmt_file_line_col;
alloc_eprint!("thread panicked at '");
print_internal!(STDERR, STDERR_MTX, fmt);
alloc_eprintln!("', {}:{}:{}", file, line, col);
unsafe {
if IS_PANICKING.compare_and_swap(false, true, SeqCst) {
// compare_and_swap returns the old value; true means somebody's already panicking.
alloc_eprintln!("thread panicked while panicking");
core::intrinsics::abort();
}
print_backtrace_and_abort()
}
}
#[macro_export]
macro_rules! alloc_assert {
($pred:expr) => ({
// Do this instead of alloc_assert!($pred, stringify!($pred)) in case $pred contains
// characters that would be interpreted as formatting directives.
alloc_assert!($pred, "{}", stringify!($pred));
});
($pred:expr, $msg:expr) => ({
if !($pred) {
alloc_panic!("assertion failed: {}", $msg);
}
});
($pred:expr, $fmt:expr, $($arg:tt)*) => ({
if !($pred) {
alloc_panic!(concat!("assertion failed: ", $fmt), $($arg)*);
}
})
}
#[macro_export]
macro_rules! alloc_debug_assert {
($($arg:tt)*) => {
if cfg!(debug_assertions) {
alloc_assert!($($arg)*);
}
}
}
#[macro_export]
macro_rules! alloc_assert_eq {
($a:expr, $b:expr) => {
{
let a = $a;
let b = $b;
let s = stringify!($a == $b);
alloc_assert!(a == b, "{} (evaluated to {:?} == {:?})", s, a, b);
}
};
($a:expr, $b:expr, $fmt:expr) => {
{
let a = $a;
let b = $b;
let s = stringify!($a == $b);
alloc_assert!(a == b, concat!("{} (evaluated to {:?} == {:?}): ", $fmt), s, a, b);
}
};
($a:expr, $b:expr, $fmt:expr, $($arg:tt)*) => {
{
let a = $a;
let b = $b;
let s = stringify!($a == $b);
alloc_assert!(a == b, concat!("{} (evaluated to {:?} == {:?}): ", $fmt), s, a, b, $($arg)*);
}
}
}
#[macro_export]
macro_rules! alloc_debug_assert_eq {
($($arg:tt)*) => {
if cfg!(debug_assertions) {
alloc_assert_eq!($($arg)*);
}
}
}
#[macro_export]
macro_rules! alloc_assert_ne {
($a:expr, $b:expr) => {
{
let a = $a;
let b = $b;
let s = stringify!($a != $b);
alloc_assert!(a != b, "{} (evaluated to {:?} != {:?})", s, a, b);
}
};
($a:expr, $b:expr, $fmt:expr) => {
{
let a = $a;
let b = $b;
let s = stringify!($a != $b);
alloc_assert!(a != b, concat!("{} (evaluated to {:?} != {:?}): ", $fmt), s, a, b);
}
};
($a:expr, $b:expr, $fmt:expr, $($arg:tt)*) => {
{
let a = $a;
let b = $b;
let s = stringify!($a != $b);
alloc_assert!(a != b, concat!("{} (evaluated to {:?} != {:?}): ", $fmt), s, a, b, $($arg)*);
}
}
}
#[macro_export]
macro_rules! alloc_debug_assert_ne {
($($arg:tt)*) => {
if cfg!(debug_assertions) {
alloc_assert_ne!($($arg)*);
}
}
}
/// Types that can be unwrapped in an allocation-safe manner.
///
/// [`AllocUnwrap`] provides the [`alloc_unwrap`] and [`alloc_expect`] methods, which are allocation-safe
/// equivalents of the [`unwrap`][Option::unwrap] and [`expect`][Option::expect] methods on [`Option`] and [`Result`]. [`AllocUnwrap`] is
/// implemented for [`Option`] and [`Result`].
///
/// [`AllocUnwrap`]: trait.AllocUnwrap.html
/// [`alloc_unwrap`]: trait.AllocUnwrap.html#tymethod.alloc_unwrap
/// [`alloc_expect`]: trait.AllocUnwrap.html#tymethod.alloc_expect
///
/// # Examples
///
/// ```rust
/// use alloc_fmt::AllocUnwrap;
/// println!("{}", Some(1).alloc_unwrap());
/// ```
pub trait AllocUnwrap {
type Item;
fn alloc_unwrap(self) -> Self::Item;
fn alloc_expect(self, msg: &str) -> Self::Item;
}
// The implementations for Option and Result are adapted from the Rust standard library.
impl<T> AllocUnwrap for Option<T> {
type Item = T;
#[inline]
fn alloc_unwrap(self) -> T {
match self {
Some(val) => val,
None => alloc_panic!("called `Option::alloc_unwrap()` on a `None` value"),
}
}
#[inline]
fn alloc_expect(self, msg: &str) -> T {
// This is a separate function to reduce the code size of alloc_expect itself
#[inline(never)]
#[cold]
fn failed(msg: &str) -> ! {
alloc_panic!("{}", msg);
}
match self {
Some(val) => val,
None => failed(msg),
}
}
}
impl<T, E: ::core::fmt::Debug> AllocUnwrap for Result<T, E> {
type Item = T;
#[inline]
fn alloc_unwrap(self) -> T {
match self {
Ok(val) => val,
Err(err) => {
result_unwrap_failed("called `Result::alloc_unwrap()` on an `Err` value", err)
}
}
}
#[inline]
fn alloc_expect(self, msg: &str) -> T {
match self {
Ok(val) => val,
Err(err) => result_unwrap_failed(msg, err),
}
}
}
// This is a separate function to reduce the code size of alloc_{expect,unwrap}
#[inline(never)]
#[cold]
fn result_unwrap_failed<E: ::core::fmt::Debug>(msg: &str, err: E) -> ! {
alloc_panic!("{}: {:?}", msg, err)
}
/// Print a backtrace and then abort the process.
///
/// `print_backtrace_and_abort` should be called after any relevant output has been flushed to
/// stderr so that even if this function crashes (since the `backtrace` crate does not guarantee
/// allocation-free backtraces), as much information as possible has already been output.
#[doc(hidden)]
pub unsafe fn print_backtrace_and_abort() -> ! {
// TODO(joshlf): Currently, this function prints itself and its callees in the trace. We should
// figure out a way to omit those and have the first printed frame be the caller's.
#[cfg(feature = "print-backtrace")]
{
backtrace::trace(|frame| {
let ip = frame.ip();
backtrace::resolve(ip, |symbol| {
if let Some(name) = symbol.name() {
alloc_eprintln!("{}", name);
} else {
alloc_eprintln!("<unknown function>");
}
if let Some(path) = symbol.filename() {
if let Some(s) = path.to_str() {
alloc_eprint!("\t{}", s);
} else {
alloc_eprint!("\t<unknown file>");
}
} else {
alloc_eprint!("\t<unknown file>");
}
if let Some(line) = symbol.lineno() {
alloc_eprintln!(":{}", line);
} else {
alloc_eprintln!();
}
});
true
});
}
core::intrinsics::abort();
}
// Test the macros by expanding them here and ensuring that they compile properly.
#[allow(unused)]
#[cfg_attr(feature = "cargo-clippy", allow(cyclomatic_complexity))]
fn never_called() {
alloc_print!("foo");
alloc_println!("foo");
alloc_eprint!("foo");
alloc_eprintln!("foo");
alloc_assert!(false && true);
alloc_assert!(false && true, "foo");
alloc_assert!(false && true, "foo: {}", "bar");
alloc_debug_assert!(false && true);
alloc_debug_assert!(false && true, "foo");
alloc_debug_assert!(false && true, "foo: {}", "bar");
alloc_assert_eq!(1 + 2, 1);
alloc_assert_eq!(1 + 2, 1, "foo");
alloc_assert_eq!(1 + 2, 1, "foo: {}", "bar");
alloc_debug_assert_eq!(1 + 2, 1);
alloc_debug_assert_eq!(1 + 2, 1, "foo");
alloc_debug_assert_eq!(1 + 2, 1, "foo: {}", "bar");
alloc_assert_eq!(1 + 2, 3);
alloc_assert_eq!(1 + 2, 3, "foo");
alloc_assert_eq!(1 + 2, 3, "foo: {}", "bar");
alloc_debug_assert_eq!(1 + 2, 3);
alloc_debug_assert_eq!(1 + 2, 3, "foo");
alloc_debug_assert_eq!(1 + 2, 3, "foo: {}", "bar");
Some(0).alloc_unwrap();
Some(0).alloc_expect("None");
let _: usize = None.alloc_unwrap();
let _: usize = None.alloc_expect("None");
(Ok(0) as Result<_, &'static str>).alloc_unwrap();
(Ok(0) as Result<_, &'static str>).alloc_expect("None");
(Err("") as Result<usize, _>).alloc_unwrap();
(Err("") as Result<usize, _>).alloc_expect("None");
}
|
package presenters
import (
"github.com/bf2fc6cc711aee1a0c2a/kas-fleet-manager/internal/connector/internal/api/dbapi"
"github.com/bf2fc6cc711aee1a0c2a/kas-fleet-manager/internal/connector/internal/api/public"
)
func ConvertConnectorClusterRequest(from public.ConnectorClusterRequest) dbapi.ConnectorCluster {
return dbapi.ConnectorCluster{
Name: from.Name,
}
}
|
#!/bin/bash
# Insert a sample for the current epoch second into Redis buckets via a Lua script.
host="$1"
key_base="$2"
epoch=$(date "+%s")
# Draw 4 random bytes as an unsigned integer and reduce it modulo 60.
value=$(( $(od -vAn -N4 -tu4 < /dev/random) % 60 ))
echo "$epoch"
echo "$value"
redis-cli -h "$host" --eval insert_into_buckets.redis.lua "$key_base" , "$epoch" "$value"
|
# Security Policy
## Supported Versions
| Version | Supported |
| ------- |:------------------:|
| Latest | :white_check_mark: |
| Anything below latest | :x: |
## Reporting a Vulnerability
Open an issue at https://github.com/LilyAsFlora/reddit-fetch/issues.
You can expect the vulnerability to be acted on within a week.
|
# Goggle-bots
Uploaded implants that help create persistence, as pre-compiled binaries.
# Functionality:
- TBD
|
fastlane documentation
================
# Installation
Make sure you have the latest version of the Xcode command line tools installed:
```
xcode-select --install
```
Install _fastlane_ using
```
[sudo] gem install fastlane -NV
```
or alternatively using `brew cask install fastlane`
# Available Actions
### install_dependencies
```
fastlane install_dependencies
```
Install or upgrade Flutter and Android SDK licenses
### generate
```
fastlane generate
```
Generate files for built_value and format all files
### lint
```
fastlane lint
```
Run static analysis on Flutter files
----
## Android
### android build
```
fastlane android build
```
Build a debug APK
### android publish
```
fastlane android publish
```
Build a release AAB and publish it (including Store artifacts).
Set "release" lane key to non-empty value to upload to "alpha" track.
----
## iOS
### ios build
```
fastlane ios build
```
Build a debug iOS package
### ios publish
```
fastlane ios publish
```
Build a release iOS package and publish it (including Store artifacts).
Set "release" lane key to non-empty value to upload metadata.
----
This README.md is auto-generated and will be re-generated every time [fastlane](https://fastlane.tools) is run.
More information about fastlane can be found on [fastlane.tools](https://fastlane.tools).
The documentation of fastlane can be found on [docs.fastlane.tools](https://docs.fastlane.tools).