| text (stringlengths 1–1.04M) | language (stringclasses 25 values) |
|---|---|
The first of the eight opening ceremonies was held on the opening day of IPL 2017 at the Rajiv Gandhi International Stadium in Hyderabad, just before the tournament opener between reigning champions Sunrisers Hyderabad (SRH) and last year’s runners-up Royal Challengers Bangalore (RCB). The second ceremony was held before the second match, between Rising Pune Supergiant (RPS) and Mumbai Indians (MI), and the third before the third match, between Gujarat Lions (GL) and Kolkata Knight Riders (KKR).
Actor Disha Patani will be seen performing in this opening ceremony.
Coming to the fourth match, the Steven Smith-led RPS finished second in the points table in the previous edition, which was also their first in the IPL, while KXIP finished last in the previous season.
Taking all these aspects into consideration, we are in for a cracker of a contest at Indore.
|
english
|
<reponame>BlogToolshed50/chinashipweb.com
---
id: 1028
title: Discount Nursing Scrubs and Uniforms
date: 2012-10-05T10:44:00+00:00
author: admin
layout: post
guid: http://chinashipweb.com/?p=1028
permalink: /2012/10/05/discount-nursing-scrubs-and-uniforms/
categories:
- General
---
If you work in the medical field and are looking for nursing scrubs or hospital uniforms, visit Marcus Uniforms. They carry a vast selection of discount nursing scrubs and medical uniforms in various styles and colour options for men and women, and you can [buy scrubs](http://www.marcusuniforms.com/) here at the cheapest prices. Quality school uniforms are also available for kids. Check out their inventory, choose the perfect fit for your needs, and stay comfortable.
|
markdown
|
package com.sinoyd.artifact.result;

import lombok.Getter;
import lombok.Setter;

import java.util.ArrayList;
import java.util.Collection;

/**
 * @Description A class that standardises the structure of returned data.
 *              code semantics: 0 = success; 1, 2 = user input errors; -1, -2 = server errors.
 * @author 李忠杰
 * @create 2019-01-15 14:30
 */
@Getter
@Setter
public class ResultBean<T> {

    private Integer code;
    private String message;
    private Collection<T> data;

    public ResultBean() {
    }

    public static <T> ResultBean<T> error(Integer code, String message) {
        ResultBean<T> resultBean = new ResultBean<>();
        resultBean.setCode(code);
        resultBean.setMessage(message);
        return resultBean;
    }

    public static <T> ResultBean<T> success() {
        ResultBean<T> resultBean = new ResultBean<>();
        resultBean.setCode(0);
        resultBean.setMessage("success");
        return resultBean;
    }

    public static <T> ResultBean<T> success(Collection<T> data) {
        ResultBean<T> resultBean = new ResultBean<>();
        resultBean.setCode(0);
        resultBean.setMessage("success");
        resultBean.setData(data);
        return resultBean;
    }

    public static <T> ResultBean<T> success(T data) {
        ResultBean<T> resultBean = new ResultBean<>();
        resultBean.setCode(0);
        resultBean.setMessage("success");
        if (data == null) {
            return resultBean;
        }
        // Wrap a single value in a collection so the response shape stays uniform.
        Collection<T> list = new ArrayList<>();
        list.add(data);
        resultBean.setData(list);
        return resultBean;
    }
}
|
java
|
<reponame>Fillr/libpostal<gh_stars>1000+
import csv
import six

from collections import defaultdict, Counter
from itertools import izip, islice

from geodata.text.tokenize import tokenize, token_types
from geodata.encoding import safe_encode


class FrequentPhraseExtractor(object):
    '''
    Extract common multi-word phrases from a file/iterator using the
    frequent itemsets method to keep memory usage low.
    '''

    WORD_TOKEN_TYPES = (token_types.WORD,
                        token_types.IDEOGRAPHIC_CHAR,
                        token_types.ABBREVIATION,
                        token_types.HANGUL_SYLLABLE,
                        token_types.ACRONYM)

    def __init__(self, min_count=5):
        self.min_count = min_count
        self.vocab = defaultdict(int)
        self.frequencies = defaultdict(int)
        self.train_words = 0

    def ngrams(self, words, n=2):
        for t in izip(*(islice(words, i, None) for i in xrange(n))):
            yield t

    def add_tokens(self, s):
        for t, c in tokenize(s):
            if c in self.WORD_TOKEN_TYPES:
                self.vocab[((t.lower(), c), )] += 1
                self.train_words += 1

    def create_vocab(self, f):
        for line in f:
            line = line.rstrip()
            if not line:
                continue
            self.add_tokens(line)
        self.prune_vocab()

    def prune_vocab(self):
        # Python 2: dict.keys() returns a list, so deleting during iteration is safe
        for k in self.vocab.keys():
            if self.vocab[k] < self.min_count:
                del self.vocab[k]

    def add_ngrams(self, s, n=2):
        # Split the token stream into contiguous runs of word tokens
        sequences = []
        seq = []
        for t, c in tokenize(s):
            if c in self.WORD_TOKEN_TYPES:
                seq.append((t, c))
            elif seq:
                sequences.append(seq)
                seq = []
        if seq:
            sequences.append(seq)

        # Only count an n-gram if its (n-1)-token prefix is already a frequent phrase
        for seq in sequences:
            for gram in self.ngrams(seq, n=n):
                prev_tokens = tuple([(t.lower(), c) for t, c in gram[:-1]])
                if prev_tokens in self.vocab:
                    t, c = gram[-1]
                    current_token = (t.lower(), c)
                    self.frequencies[(prev_tokens, current_token)] += 1

    def add_frequent_ngrams_to_vocab(self):
        for k, v in six.iteritems(self.frequencies):
            if v < self.min_count:
                continue
            prev, current = k
            self.vocab[prev + (current,)] = v

    def find_ngram_phrases(self, f, n=2):
        self.frequencies = defaultdict(int)
        for line in f:
            line = line.rstrip()
            if not line:
                continue
            self.add_ngrams(line, n=n)
        self.add_frequent_ngrams_to_vocab()
        self.frequencies = defaultdict(int)

    @classmethod
    def from_file(cls, f, max_phrase_len=5, min_count=5):
        phrases = cls(min_count=min_count)  # was cls(), which silently ignored min_count
        filename = getattr(f, 'name', '<stream>')  # the original referenced an undefined `filename`
        print('Doing frequent words for {}'.format(filename))
        f.seek(0)
        phrases.create_vocab(f)
        for n in xrange(2, max_phrase_len + 1):
            print('Doing frequent ngrams, n={} for {}'.format(n, filename))
            f.seek(0)
            phrases.find_ngram_phrases(f, n=n)
        print('Done with {}'.format(filename))
        return phrases

    def to_tsv(self, filename, mode='w', max_rows=None):
        f = open(filename, mode)
        try:
            writer = csv.writer(f, delimiter='\t')
            for i, (k, v) in enumerate(Counter(self.vocab).most_common()):
                if max_rows is not None and i == max_rows:
                    break
                gram = []
                for t, c in k:
                    gram.append(t)
                    if c != token_types.IDEOGRAPHIC_CHAR:
                        gram.append(six.text_type(' '))
                phrase = six.text_type('').join(gram)
                writer.writerow((safe_encode(phrase), safe_encode(len(k)), safe_encode(v)))
        finally:
            f.close()
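The core idea of the class above — count unigrams in one pass, prune rare ones, then on later passes only count n-grams whose (n-1)-word prefix survived pruning — can be sketched in a few lines. This is a minimal Python 3 illustration, not part of libpostal: plain whitespace splitting stands in for the real tokenizer, and `frequent_phrases` is a hypothetical helper name.

```python
from collections import defaultdict

def frequent_phrases(lines, min_count=2, max_n=3):
    # Pass 1: unigram counts, pruned to frequent words only.
    vocab = defaultdict(int)
    for line in lines:
        for w in line.lower().split():
            vocab[(w,)] += 1
    vocab = {k: v for k, v in vocab.items() if v >= min_count}
    # Passes 2..max_n: extend only phrases whose (n-1)-word prefix is frequent,
    # so memory stays proportional to the frequent itemsets, not all n-grams.
    for n in range(2, max_n + 1):
        freqs = defaultdict(int)
        for line in lines:
            words = line.lower().split()
            for i in range(len(words) - n + 1):
                gram = tuple(words[i:i + n])
                if gram[:-1] in vocab:
                    freqs[gram] += 1
        vocab.update({k: v for k, v in freqs.items() if v >= min_count})
    return vocab
```

The pruning between passes is what keeps memory bounded: an n-gram is never even counted unless its prefix was already known to be frequent.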
|
python
|
New Delhi, Feb 3 (IANS) Heavily inspired by the eyes, artist Dipak Kumar Ghosh has come up with a series of fascinating portraits of famous personalities, which he has rendered on canvas using pencil and charcoal.
Organised at the IGNCA, the exhibition, titled “Eyes Says It All”, showcases the piercing eyes of eminent personalities such as Rabindranath Tagore, Swami Vivekananda, Albert Einstein, Sri Ramakrishna, Mother Teresa and A.P.J. Abdul Kalam, eyes that invariably draw and hold the viewer’s attention.
The theme “Eyes” is explored through many pairs of eyes that reflect various facets of a person: from exuberance to a simple smile, from profound grief to mellowed agony, from exhaustion to boredom. His work as a whole spans a plethora of subjects.
“Portraits reflect the different faces and phases of life. People’s emotions and expressions have affected the artist to a great extent and have led his focus especially to their eyes, as they convey a certain intimacy, a history that nothing else can hold,” the artist said.
The exhibition was inaugurated on Saturday by former President Pranab Mukherjee, who was full of praise for it.
“An artist should have a creative mind; they feast on the mind of the living, and that is their study. I wish Mr. Ghosh and associates grand success for this exhibition,” Mukherjee said during the inauguration.
The exhibition will continue till February 8.
International online casino Casino Days has published a report sharing their internal data on what types and brands of devices are used to play on the platform by users from the South Asian region.
Such aggregate data analyses allow the operator to optimise their website for the brands and models of devices people are actually using.
The insights gained through the research also help Casino Days tailor their services based on a better understanding of their clients and their needs.
The primary data samples analysed by Casino Days reveal that mobile connections dominate the market in South Asia and are responsible for a whopping 96.6% of gaming sessions, while computers and tablets have negligible shares of 2.9% and 0.5% respectively.
The authors of the study point out that, historically, online casino play was done exclusively on computers, and attribute the major shift to mobile that has unfolded over time to the wide spread of cheaper smartphones and mobile data plans in South Asia.
“Some of the reasons behind this massive difference in device type are affordability, technical advantages, as well as cheaper and more obtainable internet plans for mobiles than those for computers,” the researchers comment.
Chinese brands Xiaomi and Vivo were used by 21.9% and 20.79% of Casino Days players from South Asia respectively; together with South Korean brand Samsung, in third place with an 18.1% share, they dominate the market among real money gamers in the region.
Cupertino, California-based Apple is way down in seventh with a user share of just 2.29%, overshadowed by Chinese brands Realme (11.43%), OPPO (11.23%), and OnePlus (4.07%).
Huawei is at the very bottom of the chart with a tiny share just below the single percent mark, trailing behind mobile devices by Motorola, Google, and Infinix.
The data on actual phone usage provided by Casino Days, even though limited to the gaming parts of the population of South Asia, paints a different picture from global statistics on smartphone shipments by vendors.
Apple and Samsung have been sharing the worldwide lead for over a decade, while current regional leader Xiaomi secured their third position globally just a couple of years ago.
The shifted market share patterns of the world’s top smartphone brands in South Asia observed by the Casino Days research paper reveal a striking dominance of Android devices at the expense of iOS-powered phones.
On the global level, Android enjoys a comfortable lead with a sizable 68.79% share which grows to nearly 79% when we look at the whole continent of Asia. The data on South Asian real money gaming communities suggests that Android’s dominance grows even higher and is north of the 90% mark.
Among the major factors behind these figures, the authors of the study point to the relative affordability of and greater availability of Android devices in the region, especially when manufactured locally in countries like India and Vietnam.
“And, with influencers and tech reviews putting emphasis on Android devices, the choice of mobile phone brand and OS becomes easy; Android has a much wider range of products and caters to the Asian online casino market in ways that Apple can’t due to technical limitations,” the researchers add.
The far better integration achieved by Google Pay compared to its counterpart Apple Pay has also played a crucial role in shaping the existing smartphone market trends.
|
english
|
<gh_stars>0
use super::Config;
use std::path::PathBuf;

/// Test the values in the config file
fn test_config_contents(config: Config) {
    assert_eq!(config.sites.len(), 1);
    assert_eq!(config.sites[0].structure.get("CONTAINER").unwrap().selector.as_ref().unwrap(), "asd");
}

#[test]
/// Tests loading of a yaml config file
fn test_load_yaml_config() {
    let mut path = PathBuf::new();
    path.push(env!("CARGO_MANIFEST_DIR"));
    path.push("testdata");
    path.push("html_config.yaml");
    let s = path.as_os_str().to_str().unwrap();
    match Config::new(s) {
        Ok(config) => test_config_contents(config),
        // `panic!(e)` with a non-string payload is rejected since Rust 2021
        Err(e) => panic!("failed to load config: {:?}", e),
    }
}
|
rust
|
<gh_stars>0
.container {
  margin-top: 8px;
}

.container > div {
  padding: 15px 40px 50px 12px;
}

.container > div header {
  display: flex;
}

.container > div header .avatar-skeleton {
  width: 48px;
  height: 48px;
  border-radius: 50%;
  flex-shrink: 0;
}

.container > div header .column {
  display: flex;
  flex-direction: column;
  /* justify-content: center; */
  padding-left: 15px;
  flex: 1;
}

.container > div header .column .row-skeleton {
  height: 12px;
}

.container > div header .column .row-skeleton:nth-child(1) {
  width: 30%;
}

.container > div header .column .row-skeleton:nth-child(2) {
  width: 55%;
  margin-top: 10px;
}

.container > div span {
  display: flex;
  flex-direction: column;
  margin-top: 30px;
}

.container > div span .row-skeleton {
  height: 12px;
}

.container > div span .row-skeleton:nth-child(1) {
  width: 100%;
}

.container > div span .row-skeleton:nth-child(2) {
  width: 90%;
  margin-top: 10px;
}

@media (min-width: 1180px) {
  .container {
    margin-top: 16px;
  }
}
|
css
|
#!/usr/bin/env python2
# The MIT License (MIT)
#
# Copyright (c) 2015 <NAME>, <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""\
The script kicks off the preminimization step of the benchmark run using the minimize_with_cst application from the
Rosetta suite. The command lines used herein are intended to reproduce the protocol from row 16 of the original paper by Kellogg et al.:
<NAME>. Role of conformational sampling in computing mutation-induced changes in protein
structure and stability. 2011. Proteins. 79(3):830-8. doi: 10.1002/prot.22921.
Usage:
    run_preminimization.py [options]...

Options:

    -d --dataset DATASET
        A filepath to the input dataset in JSON format. [default: ../../input/json/kellogg.json]

    -o --output_directory OUTPUT_DIR
        The path where output data will be created. Output will be created inside a time-stamped subfolder of this directory. [default: ./job_output]

    --run_identifier RUN_ID
        A suffix used to name the output directory.

    --test
        When this option is set, a shorter version of the benchmark will run with fewer input structures, fewer DDG experiments, and fewer generated structures. This should be used to test the scripts but not for analysis.

    --talaris2014
        When this option is set, the talaris2014 score function will be used rather than the default score function. Warning: This option may break when talaris2014 becomes the default Rosetta score function.

    --beta_july15
        When this option is set, the July 2015 beta score function will be used rather than the default score function. Warning: This option may break when this score function is removed.

    --beta_nov15
        When this option is set, the November 2015 beta score function will be used rather than the default score function. Warning: This option may break when this score function is removed.

    -p --parallel NUM_PROCESSORS
        If this argument is set then the job setup will use NUM_PROCESSORS processors, which will speed this step up. Otherwise, a single processor will be used. This should run on both Unix and Windows machines.

    --maxp
        This is a special case of --parallel. If this argument is set then the job setup will use as many processors as are available on the machine.

Authors:
    <NAME>
    <NAME>
"""
import sys
import os
import re
import shutil
import traceback
import time
import datetime
import inspect
multiprocessing_module_available = True
try:
    import multiprocessing
except ImportError:
    multiprocessing_module_available = False
import cPickle as pickle
import getpass
import rosetta.parse_settings
from rosetta.write_run_file import process as write_run_file
from analysis.libraries import docopt
from analysis.stats import read_file, write_file
try:
    import json
except ImportError:
    import simplejson as json
from rosetta.pdb import PDB, create_mutfile
from rosetta.basics import ChainMutation
from rosetta.input_files import Mutfile
task_subfolder = 'preminimization'
mutfiles_subfolder = 'mutfiles'
generated_scriptname = 'preminimization_step'
DEFAULT_NUMBER_OF_PROCESSORS_TO_USE = 1 # this can be overridden using the -p command
def create_input_files(job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases, skip_if_exists = False):
    '''Create the stripped PDB files and the mutfiles for the DDG step. Mutfiles are created at this point as we need the
    original PDB to generate the residue mapping.
    '''

    # Read PDB
    pdb_id = keypair[0]
    chain = keypair[1]
    pdb = PDB.from_filepath(pdb_dir_path)
    stripped_pdb_path = os.path.join(pdb_data_dir, '%s_%s.pdb' % (pdb_id, chain))

    # Strip the PDB to the list of chains. This also renumbers residues in the PDB for Rosetta.
    chains = [chain]
    pdb.strip_to_chains(chains)
    pdb.strip_HETATMs()
    stripped_pdb = PDB('\n'.join(pdb.lines))

    # Check to make sure that we haven't stripped all the ATOM lines
    if not [line for line in stripped_pdb.lines if line[0:4] == "ATOM"]:
        raise Exception("No ATOM lines remain in the stripped PDB file %s." % stripped_pdb_path)

    # Assert that there are no empty sequences
    assert(sorted(stripped_pdb.atom_sequences.keys()) == sorted(chains))
    for chain_id, sequence in stripped_pdb.atom_sequences.iteritems():
        assert(len(sequence) > 0)

    # Check for CSE and MSE
    try:
        if 'CSE' in stripped_pdb.residue_types:
            raise Exception('This case contains a CSE residue which may (or may not) cause an issue with Rosetta depending on the version.')
        elif 'MSE' in stripped_pdb.residue_types:
            raise Exception('This case contains an MSE residue which may (or may not) cause an issue with Rosetta depending on the version.')
        # It looks like MSE (and CSE?) may now be handled - https://www.rosettacommons.org/content/pdb-files-rosetta-format
    except Exception, e:
        print('%s: %s, chain %s' % (str(e), str(stripped_pdb.pdb_id), chain))

    # Turn the lines array back into a valid PDB file
    if not(skip_if_exists) or not(os.path.exists(stripped_pdb_path)):
        write_file(stripped_pdb_path, '\n'.join(stripped_pdb.lines))

    # Create the mapping between PDB and Rosetta residue numbering
    # Note: In many Rosetta protocols, '-ignore_unrecognized_res' and '-ignore_zero_occupancy false' are used to allow
    # Rosetta to work with structures with missing data and non-canonicals. In those cases, we should supply both flags
    # in the string below. Since protocol 16 only uses '-ignore_unrecognized_res', we only use that flag below as otherwise
    # we could break the mapping.
    rosetta_scripts_bin = os.path.join(settings['local_rosetta_bin'], 'rosetta_scripts%s' % settings['rosetta_binary_type'])
    rosetta_database_path = settings['local_rosetta_db_dir']
    if not os.path.exists(rosetta_scripts_bin):
        raise Exception('The Rosetta scripts executable "{0}" could not be found. Please check your configuration file.'.format(rosetta_scripts_bin))
    if not os.path.exists(rosetta_database_path):
        raise Exception('The path to the Rosetta database "{0}" could not be found. Please check your configuration file.'.format(rosetta_database_path))
    stripped_pdb.construct_pdb_to_rosetta_residue_map(rosetta_scripts_bin, rosetta_database_path, extra_command_flags = '-ignore_unrecognized_res')
    atom_to_rosetta_residue_map = stripped_pdb.get_atom_sequence_to_rosetta_json_map()
    rosetta_to_atom_residue_map = stripped_pdb.get_rosetta_sequence_to_atom_json_map()

    # Save the PDB <-> Rosetta residue mappings to disk
    write_file(os.path.join(pdb_data_dir, '%s_%s.rosetta2pdb.resmap.json' % (pdb_id, chain)), rosetta_to_atom_residue_map)
    write_file(os.path.join(pdb_data_dir, '%s_%s.pdb2rosetta.resmap.json' % (pdb_id, chain)), atom_to_rosetta_residue_map)

    # Assert that there are no empty sequences in the Rosetta-processed PDB file
    total_num_residues = 0
    d = json.loads(rosetta_to_atom_residue_map)
    for chain_id in chains:
        num_chain_residues = len([z for z in d.values() if z[0] == chain_id])
        total_num_residues += num_chain_residues
        assert(num_chain_residues > 0)

    # Check that the mutated positions exist and that the wild-type matches the PDB
    try:
        for dataset_case in dataset_cases:
            assert(dataset_case['PDBFileID'] == pdb_id)
            # Note: I removed a hack here for 1AJ3->1U5P mapping
            # The JSON file does not have the residue IDs in PDB format (5 characters including insertion code) so we need to repad them for the mapping to work
            pdb_mutations = [ChainMutation(mutation['WildTypeAA'], PDB.ResidueID2String(mutation['ResidueID']), mutation['MutantAA'], Chain = mutation['Chain']) for mutation in dataset_case['Mutations']]
            stripped_pdb.validate_mutations(pdb_mutations)

            # Map the PDB mutations to Rosetta numbering which is used by the mutfile format
            rosetta_mutations = stripped_pdb.map_pdb_residues_to_rosetta_residues(pdb_mutations)
            if (len(rosetta_mutations) != len(pdb_mutations)) or (None in set([m.ResidueID for m in rosetta_mutations])):
                raise Exception('An error occurred in the residue mapping code for DDG case: %s, %s' % (pdb_id, pdb_mutations))

            # Create the mutfile
            mutfile = Mutfile.from_mutagenesis(rosetta_mutations)
            mutfilename = os.path.join(mutfile_data_dir, '%d.mutfile' % (dataset_case['RecordID']))
            if os.path.exists(mutfilename):
                raise Exception('%s already exists. Check that the RecordIDs in the JSON file are all unique.' % mutfilename)
            write_file(mutfilename, str(mutfile))
    except Exception, e:
        print(str(e))
        print(traceback.format_exc())

    # Set up --in:file:l parameter
    pdb_relpath = os.path.relpath(stripped_pdb_path, settings['output_dir'])
    job_dict[os.path.join(task_subfolder, '_'.join(keypair))] = dict(input_file_list = [pdb_relpath])
    sys.stdout.write('.'); sys.stdout.flush()
def single_job_pack(args):
    print(args)
    return single_job(*args)


def use_multiple_processors(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain, num_processors):
    assert(multiprocessing_module_available)
    pool = multiprocessing.Pool(processes = num_processors)
    m = multiprocessing.Manager()
    job_dict = m.dict()
    pool_jobs = []
    for keypair in pdb_monomers:
        pdb_dir_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
        pool_jobs.append(pool.apply_async(create_input_files, (job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases_by_pdb_chain[keypair])))
    pool.close()
    pool.join()
    sys.stdout.write('\n')
    return job_dict._getvalue()


def use_single_processor(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain):
    job_dict = {}
    for keypair in pdb_monomers:
        pdb_dir_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
        create_input_files(job_dict, settings, pdb_dir_path, pdb_data_dir, mutfile_data_dir, keypair, dataset_cases_by_pdb_chain[keypair])
    sys.stdout.write('\n')
    return job_dict
if __name__ == '__main__':
    import pprint

    try:
        arguments = docopt.docopt(__doc__.format(**locals()))
    except Exception, e:
        print('Failed while parsing arguments: %s.' % str(e))
        sys.exit(1)

    # Set the PDB input path
    input_pdb_dir_path = '../../input/pdbs'

    # Read the settings file
    settings = rosetta.parse_settings.get_dict()

    # Read in the dataset file
    dataset_filepath = arguments['--dataset'][0]
    dataset_filename = os.path.splitext(os.path.split(dataset_filepath)[1])[0]
    if not os.path.exists(dataset_filepath):
        raise Exception('The dataset file %s does not exist.' % dataset_filepath)

    # Read in any parallel processing options
    num_system_processors = 1
    if multiprocessing_module_available:
        num_system_processors = multiprocessing.cpu_count()
    if arguments.get('--maxp'):
        num_processors = num_system_processors
    else:
        num_processors = min(DEFAULT_NUMBER_OF_PROCESSORS_TO_USE, num_system_processors)
        if arguments.get('--parallel'):
            valid_options = [int(x) for x in arguments['--parallel'] if x.isdigit()]
            if not valid_options:
                raise Exception('None of the arguments to --parallel are valid. The argument must be an integer between 1 and the number of processors (%d).' % num_system_processors)
            else:
                num_processors = max(valid_options)
        else:
            # If the user did not specify the number of processors, only one is selected, and more exist, let them know that this process may run faster
            if num_processors == 1 and num_system_processors > 1:
                print('The setup is configured to use one processor but this machine has %d processors. The --parallel or --maxp options may make this setup run faster.' % num_system_processors)
    if 1 > num_processors or num_processors > num_system_processors:
        raise Exception('The number of processors must be an integer between 1 and %d.' % num_system_processors)

    # Read the dataset from disk
    try:
        dataset = json.loads(read_file(dataset_filepath))
        dataset_cases = dataset['data']
    except Exception, e:
        raise Exception('An error occurred parsing the JSON file: %s.' % str(e))

    # Set the job directory name
    job_name = '%s_%s_ddg_monomer_16' % (time.strftime("%y-%m-%d-%H-%M"), getpass.getuser())
    if arguments.get('--run_identifier'):
        job_name += '_' + arguments['--run_identifier'][0]

    # Set the root output directory
    root_output_directory = 'job_output'
    if arguments.get('--output_directory'):
        root_output_directory = arguments['--output_directory'][0]
    if not os.path.exists(root_output_directory):
        print('Creating directory %s:' % root_output_directory)
        os.makedirs(root_output_directory)

    # Set the job output directory
    output_dir = os.path.join(root_output_directory, job_name) # The root directory for the protocol run
    settings['output_dir'] = output_dir

    try:
        task_dir = os.path.join(output_dir, task_subfolder) # The root directory for the preminimization section of the protocol
        output_data_dir = os.path.join(output_dir, 'data')
        pdb_data_dir = os.path.join(output_data_dir, 'input_pdbs')
        mutfile_data_dir = os.path.join(output_data_dir, mutfiles_subfolder)
        for jobdir in [output_dir, task_dir, output_data_dir, pdb_data_dir, mutfile_data_dir]:
            try:
                os.mkdir(jobdir)
            except OSError:
                pass

        # Make a copy of the dataset so that it can be automatically used by the following steps
        shutil.copy(dataset_filepath, os.path.join(output_dir, 'dataset.json'))

        # Count the number of datapoints per PDB chain
        count_by_pdb_chain = {}
        dataset_cases_by_pdb_chain = {}
        job_dict = {}
        for ddg_case in dataset_cases:
            chains = set([r['Chain'] for r in ddg_case['Mutations']])
            assert(len(chains) == 1)
            chain = chains.pop()
            pdb_id = ddg_case['PDBFileID']
            keypair = (pdb_id, chain)
            count_by_pdb_chain[keypair] = count_by_pdb_chain.get(keypair, 0)
            count_by_pdb_chain[keypair] += 1
            dataset_cases_by_pdb_chain[keypair] = dataset_cases_by_pdb_chain.get(keypair, [])
            dataset_cases_by_pdb_chain[keypair].append(ddg_case)

        # Create the list of PDB IDs and chains for the dataset
        print('')
        if arguments['--test']:
            pdb_monomers = []
            print('Creating test run input...')
            num_cases = 0
            for keypair, v in sorted(count_by_pdb_chain.iteritems(), key=lambda x: -x[1]):
                if v <= 10:
                    pdb_monomers.append(keypair)
                    num_cases += v
                    if num_cases >= 20:
                        break
        else:
            pdb_monomers = sorted(count_by_pdb_chain.keys())

        # Ensure all the input PDB files exist
        for keypair in pdb_monomers:
            pdb_path = os.path.join(input_pdb_dir_path, '%s.pdb' % keypair[0])
            if not os.path.exists(pdb_path):
                raise Exception('Error: The file %s is missing.' % pdb_path)

        # Write job dict and setup self-contained data directory
        extra_s = ''
        if arguments['--talaris2014']:
            extra_s = ' (using talaris2014)'
        if arguments['--beta_july15']:
            assert(not(extra_s))
            extra_s = ' (using beta_july15)'
        if arguments['--beta_nov15']:
            assert(not(extra_s))
            extra_s = ' (using beta_nov15)'
        print('Creating benchmark input:%s' % extra_s)
        if num_processors == 1:
            job_dict = use_single_processor(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain)
        else:
            print('Setting up the preminimization data using %d processors.' % num_processors)
            job_dict = use_multiple_processors(settings, pdb_monomers, input_pdb_dir_path, pdb_data_dir, mutfile_data_dir, dataset_cases_by_pdb_chain, num_processors)
        with open(os.path.join(output_data_dir, 'job_dict.pickle'), 'w') as f:
            pickle.dump(job_dict, f)

        settings['numjobs'] = '%d' % len(pdb_monomers)
        settings['mem_free'] = '3.0G'
        settings['scriptname'] = generated_scriptname
        settings['appname'] = 'minimize_with_cst'
        settings['rosetta_args_list'] = [
            '-in:file:fullatom', '-ignore_unrecognized_res',
            '-fa_max_dis', '9.0', '-ddg::harmonic_ca_tether', '0.5',
            '-ddg::constraint_weight', '1.0',
            '-ddg::out_pdb_prefix', 'min_cst_0.5',
            '-ddg::sc_min_only', 'false'
        ]
        if arguments['--talaris2014']:
            settings['rosetta_args_list'].extend(['-talaris2014', 'true'])
        elif arguments['--beta_july15']:
            settings['rosetta_args_list'].extend(['-beta_july15'])
        elif arguments['--beta_nov15']:
            settings['rosetta_args_list'].extend(['-beta_nov15'])

        write_run_file(settings)
        job_path = os.path.abspath(output_dir)
        print('''Job files written to directory: %s.\n\nTo launch this job:
cd %s
python %s.py\n''' % (job_path, job_path, generated_scriptname))
    except Exception, e:
        print('\nAn exception occurred setting up the preminimization step: "%s".' % str(e))
        sys.stdout.write('Removing the directory %s: ' % output_dir)
        try:
            shutil.rmtree(output_dir)
            print('done.\n')
        except Exception, e2:
            print('failed.\n')
            print(str(e2))
            print(traceback.format_exc())
|
python
|
Hobart Hurricanes became the 2nd team from Group B to make it to the last four stage.
Hobart Hurricanes became the second team from Group B to make it to the last four stage of Champion’s League t20 (CLT20) 2014 by virtue of beating the Barbados Tridents by six wickets. After early hiccups, some sensible batting by Aiden Blizzard, Shoaib Malik and Jonathan Wells saw them through.
Hobart started their chase of 114 slowly as they lost early wickets. Ben Dunk got out in the first over of Kyle Mayers. Later captain Tim Paine also lost his wicket. Then Blizzard and Malik took control of the innings and later a quick fire 23 from Wells helped the Hobart team to qualify form the semi-finals.
Earlier the team from Australia strengthen their claim of making to the semi-finals as they restricted the to 113 all out in 19. 4 overs, which meant Hurricanes needed only 114 runs for a place in the last four stage. Ben Hilfenhaus and Xavier Doherty broke the back of Tridents batting. Specially the Tridents batsmen couldn’t recover from Doherty’s spell of four for 27 in the middle overs. Later the left arm spinner was selected as the man of the match for his efforts.
The Barbados Tridents lost momentum in their batting from the start of the innings as they lost wickets at regular intervals. At the halfway stage of the innings they were reeling at 50 for four. From the beginning of the innings Tridents lost early wickets thanks to some excellent bowing by Doug Bollinger and Hifenhaus.
The batsmen too, never looked like getting settled and played some terrible shots. The West Indies franchise lost the in-form Dilshan Munaweera, who played some cracking shots earlier. After that Only Jonathon Crater, the centurion of last game showed some fight and made a valuable 42, which helped the Tridents score to get past 100 run mark.
Hobart won two of their previous three encounters, and barring an absolute disaster, they ought to breeze through to the semi-finals. On the other hand, Barbados are yet to open their account despite taking things down to the final over — and the Super Over on one occasion — both times. They will feel quite hard done by their luck after doing so well for much of those games.
Hobart won the toss and decided to bowl first. The Tridents brought in Kyle Mayers and Neil McKenzie in place of Shane Dowrich and Jeevan Mendis, who joined the Sri Lanka squad for the Asian Games, while the Hurricanes were unchanged.
Brief scores:
Barbados Tridents 113 all out in 19.4 overs (Jonathan Carter 42; Ben Hilfenhaus 2 for 14, Xavier Doherty 4 for 27) lost to Hobart Hurricanes 117 for 4 in 18.4 overs (Aiden Blizzard 21, Shoaib Malik 39; Akeal Hosein 2 for 25) by 6 wickets.
(Sandipan Banerjee is a reporter at CricketCountry. Cricket has been the biggest passion for him since his childhood. So, when it came to choosing his career, he chose to turn his passion into his profession. Apart from cricket he likes mountain trekking, river rafting and photography. His twitter handle is @im_sandipan)
|
english
|
[{"namaKab":"SUMBAWA BARAT","originalFilename":"IMG20180721145043.jpg","namaPartai":"Partai NasDem","id":31996,"noUrut":1,"nama":"MASADI,, SE","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"FOTO.jpg","namaPartai":"Partai NasDem","id":32010,"noUrut":2,"nama":"<NAME>","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"FOTO.jpg","namaPartai":"Partai NasDem","id":31898,"noUrut":3,"nama":"<NAME>, ST","stringJenisKelamin":"Perempuan"},{"namaKab":"SUMBAWA BARAT","originalFilename":"amiruddin sp.jpg","namaPartai":"Partai NasDem","id":31939,"noUrut":4,"nama":"AMIRUDDIN, SP","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"DSC_0663.jpg","namaPartai":"Partai NasDem","id":31865,"noUrut":5,"nama":"JAMILA","stringJenisKelamin":"Perempuan"},{"namaKab":"SUMBAWA BARAT","originalFilename":"IMG20180721131922.jpg","namaPartai":"Partai NasDem","id":31775,"noUrut":6,"nama":"<NAME>","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"foto.jpg","namaPartai":"Partai NasDem","id":31774,"noUrut":7,"nama":"<NAME>","stringJenisKelamin":"Perempuan"},{"namaKab":"SUMBAWA BARAT","originalFilename":"DSC_0147.jpg","namaPartai":"Partai NasDem","id":31952,"noUrut":8,"nama":"<NAME>, S.Pd","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"NANANG.jpg","namaPartai":"Partai NasDem","id":32027,"noUrut":9,"nama":"<NAME>","stringJenisKelamin":"Laki-Laki"},{"namaKab":"SUMBAWA BARAT","originalFilename":"IMG_0076.JPG","namaPartai":"Partai NasDem","id":31880,"noUrut":10,"nama":"MASYHUR, ST.,MT","stringJenisKelamin":"Laki-Laki"}]
|
json
|
<gh_stars>0
{"files" :[
{"name": "Epoxy primer clear solvent free.pdf"},
{"name": "Epoxy topcoat solvent free.pdf"}
]
}
|
json
|
An artificial mind! Yes, a machine brain, perhaps much more intelligent than your own may someday be a reality, if scientists are to be believed.
A Swiss team, led by Professor Henry Markram, claims to be working on the world’s first artificial conscious and intelligent mind, made of silicon, gold and copper, which they say will be ready by 2018 at the latest.
According to Prof Markram of the Brain Mind Institute at Ecole Polytechnique in Lausanne, the artificial mind would render vivisection obsolete, conquer insanity and even improve human intelligence and ability to learn.
What Markram’s ‘Blue Brain’ project amounts to is an audacious attempt to build a computerised copy of a brain — starting with a rat’s brain, then progressing to a human brain — inside one of the world’s most powerful computers, the British newspaper ‘Daily Mail’ reported.
“We will do it by 2018. We need a lot of money, but I am getting it. There are few scientists in the world with the resources I have at my disposal,” Prof Markram said.
The Swiss team is in fact building what it hopes will be a real person, or at least the most important and complex part of a real person — its mind. And so instead of trying to copy what a brain does, the scientists have started at the bottom, with the biological brain itself.
As human brains are full of nerve cells called neurons, which communicate with one another using minuscule electrical impulses, the project takes apart actual brains cell by cell, analyses the billions of connections between the cells, and then plots these connections into a computer.
The upshot is, in effect, a blueprint or carbon copy of a brain, rendered in software rather than flesh and blood.
The idea is that by building a model of a real brain, it might begin to behave like the real thing, the scientists say.
And, to demonstrate how they are achieving this, Prof Markram has already shown a machine that resembles an infernal torture engine; a wheel about 2ft across with a dozen ultra-fine glass ‘spokes’ aimed at the centre.
It is here that tiny slivers of rat brain are dissected, using tools finer than a human hair. Their interconnections are then mapped and turned into computer code.
So far, the team’s supercomputer — an IBM Blue Gene — is using the information gleaned from the slivers of real brain tissue to simulate the workings of about 10,000 neurones, amounting to a single rat’s “neocortical column” — the part of the brain believed to be the centre of conscious thought.
|
english
|
He’s only gone from the ODIs for now. This is not goodbye, but Sachin Tendulkar has left the building and is walking towards that gleaming red Ferrari 360 Modena, ready to drive off into the sunset of permanent retirement from all forms of the game. This is assuming that the current owner of the car, a Surat businessman, hands over the keys to his Ferrari. It’s sad, but inevitable that this day would come. The process has begun, the wheels are in motion as the great man calls time on his career and life. Because quitting cricket altogether will be the death of Sachin Tendulkar. Many before him have found it hard to get on with what we mortals call “the daily routine”, and although worshipped throughout the land, he is still skin and bones, just like those before him.
His retirement has been rightfully met with an outpouring of grief and an impending sense of doom. It’s a normal human reaction to a sudden, huge loss, because Tendulkar is, and always will be, more than a professional cricket player, at least to the people who lived and breathed in his time. It is safe to assume that at least a million people cried themselves to sleep last night, or needed to be consoled before the lights were turned out. There are hundreds of thousands more who will stop watching the game altogether. What’s the point now? There will be countless more who will be in denial, refusing to believe what the papers say or the radio blares out. Plenty more would have stuck their head in the sand like ostriches, the more extreme form of denial.
“Oh, he is just saving himself so he can play Tests for a few years.” Stop kidding yourselves. It’s happening. The curtains are falling, and judging by the way this announcement was made, there’s every chance that the final one will be just as drastic and as rude a shock to the system as any.
Sachin’s star is dying out, ready to collapse into oblivion. It’s been shining all this time, but it’s only now you are imagining how dark the sky will be in its absence, and how lost you will be without it guiding you. We dream of a utopia, of never-ending fantasies, but the reality of the situation has hit home, and hard. It’s closing time, but we are not full yet. We are not ready yet, not even close. Even if he is, we aren’t. If Tendulkar’s retirement from ODIs – a form of the game that he has stopped featuring in regularly for a while – can cause such mayhem, I shudder to think what will happen when Sachin gets up one fine day and declares his innings, for good. Anti-depression pills will fly off the shelves that day.
How many people can you name that have had such an effect on a mass population in the last 20-odd years? A handful, perhaps? He does not give speeches, does not perform life-saving surgery and doesn’t stand guard on the borders in the cruel cold of Kashmir. Yet he finds his place. Heroes, or icons, don’t have a long shelf-life. Somewhere along the line, they drop out and new heroes take their place. Every generation has one, quickly replaced by the incumbent generation. The cycle goes on, and anything that happened in the past is demoted to the history books, along with the people and their achievements, however monumental they may be. The present generation is always the golden generation, and so are the heroes during that time. That is what the history books tell us, and we would be foolish to think that the next generation will still revere Tendulkar the way he is now. We believe what we see, and if we haven’t seen it, we cannot judge it correctly.
Our hero will be gone sooner rather than later, so it would be prudent to lap up the last morsels of cricketing brilliance that the great man leaves us. The next Test series should draw record numbers, every run should be cheered like it’s manna falling from the heavens. Every flick off the legs should be met with a sudden realization – ‘I’ll never get to see this again’. Every bad shot or incorrect display of footwork of his should be forgotten in a jiffy. For once, let’s talk about all that is good in him, because there’s more than enough to fill libraries with books on cricketing techniques, professionalism and integrity.
All eyes will be on him, it’s always been that way, but hopefully this time we will be looking for the right things so that we can point and say, there’s the guy we grew up watching, and he did just fine. We could have chosen to watch something else, but we were happy watching him and he made us laugh, and sometimes, cry the most. Some of us hated him while he was here, but in reality, everyone was entertained at some point. Due to reasons unfathomable, we just can’t seem to come to terms with that. But then greatness has never been truly appreciated in its time. Why should Sachin’s story be any different?
It’s only a few more steps before Tendulkar reaches the car and zooms off. The little big guy is on his way out. But the question is, how do you want to remember him? Do you really want the last thing you say of him to be full of spite, ignorance and misplaced anger? Haven’t we evolved enough to stitch together even a half-decent goodbye to one of the most inspirational figures of our time?
|
english
|
<gh_stars>1-10
{
"author": "<NAME>",
"author_email": "<EMAIL>",
"classifiers": [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Chemistry",
"Framework :: AiiDA"
],
"description": "AiiDA Plugin for running VASP calculations.",
"license": "MIT License, see LICENSE.txt file.",
"name": "aiida-vasp",
"url": "https://github.com/aiida-vasp/aiida-vasp",
"version": "2.1.1",
"reentry_register": true,
"python_requires": ">=3.8",
"setup_requires": [
"pip>=20.3"
],
"entry_points": {
"aiida.calculations": [
"vasp.vasp = aiida_vasp.calcs.vasp:VaspCalculation",
"vasp.neb = aiida_vasp.calcs.neb:VaspNEBCalculation",
"vasp.vasp2w90 = aiida_vasp.calcs.vasp2w90:Vasp2w90Calculation",
"vasp.immigrant = aiida_vasp.calcs.immigrant:VaspImmigrant"
],
"aiida.cmdline.data": [
"vasp-potcar = aiida_vasp.commands.potcar:potcar"
],
"aiida.data": [
"vasp.archive = aiida_vasp.data.archive:ArchiveData",
"vasp.chargedensity = aiida_vasp.data.chargedensity:ChargedensityData",
"vasp.wavefun = aiida_vasp.data.wavefun:WavefunData",
"vasp.potcar = aiida_vasp.data.potcar:PotcarData",
"vasp.potcar_file = aiida_vasp.data.potcar:PotcarFileData"
],
"aiida.parsers": [
"vasp.vasp = aiida_vasp.parsers.vasp:VaspParser",
"vasp.neb = aiida_vasp.parsers.neb:VtstNebParser",
"vasp.vasp2w90 = aiida_vasp.parsers.vasp2w90:Vasp2w90Parser"
],
"aiida.workflows": [
"vasp.vasp = aiida_vasp.workchains.vasp:VaspWorkChain",
"vasp.verify = aiida_vasp.workchains.verify:VerifyWorkChain",
"vasp.converge = aiida_vasp.workchains.converge:ConvergeWorkChain",
"vasp.bands = aiida_vasp.workchains.bands:BandsWorkChain",
"vasp.master = aiida_vasp.workchains.master:MasterWorkChain",
"vasp.relax = aiida_vasp.workchains.relax:RelaxWorkChain",
"vasp.neb = aiida_vasp.workchains.neb:VaspNEBWorkChain",
"vasp.immigrant = aiida_vasp.workchains.immigrant:VaspImmigrantWorkChain"
],
"aiida.groups": [
"vasp.potcar = aiida_vasp.data.potcar:PotcarGroup"
],
"console_scripts": [
"mock-vasp = aiida_vasp.commands.mock_vasp:mock_vasp",
"mock-vasp-strict = aiida_vasp.commands.mock_vasp:mock_vasp_strict"
]
},
"extras_require": {
"pre-commit": [
"aiida-core[pre-commit]>=2.0.1,<3",
"tox>=3.23.0",
"virtualenv>20"
],
"tests": [
"aiida-core[tests]>=2.0.1,<3",
"tox>=3.23.0",
"virtualenv>20"
],
"graphs": [
"matplotlib"
]
},
"include_package_data": true,
"install_requires": [
"aiida-core[atomic_tools]>=2.0.1,<3",
"pymatgen>=2019.7.2,<=2022.02.03,!=2019.9.7",
"lxml",
"packaging",
"parsevasp~=3.0"
]
}
|
json
|
<filename>task3/ELE/train/train2165.json<gh_stars>1-10
{"article": "We would all love to learn how to be happy. And sometimes, the41 _ comes from a surprising place. There was an anthropologist who had been studying the habits and42 _ of a remote African tribe. He had been43 _ in the village for44 _ some time and the day before he was to return home, he put together a gift basket filled with delicious fruits from around the region and45 _ it in a piece of cloth. He placed the basket under a tree and then he46 gathered up the children in the village. The man drew a47 _ in the dirt, looked at the children, and said,\"When I tell you to48 _ , run to the tree and49 _ gets there first will win the basket of the fruits.\"When he told them to run, they all50 _ each other's hands and ran together to the tree. Then they51 _ together around the basket and enjoyed their52 _ as a group. The anthropologist was53 _ . He asked why they would all go together when one of them could have won all the fruits for himself or herself. A54 _ girl looked up at him and said55 _ ,\"How can one of us be happy if all the other ones are56 _ ?\" Years later, the well-known South African activist Desmond Tutu would describe the little girl's57 _ process by using the word ubuntu, which means\"I am because58 _ are.\"People in that tribe believe that a person is a person through other people. 
Happiness is the59 _ of combining what we love to do with something that is60 _ .", "options": [["determination", "solution", "design", "consideration"], ["culture", "difference", "origin", "diversity"], ["surviving", "searching", "working", "wandering"], ["quite", "still", "even", "just"], ["decorated", "fixed", "wrapped", "tied"], ["line", "picture", "symbol", "sign"], ["depart", "escape", "start", "prepare"], ["whatever", "whoever", "whichever", "whenever"], ["held", "shook", "dragged", "pushed"], ["lay", "sat", "played", "united"], ["service", "effort", "experience", "treat"], ["excited", "ashamed", "shocked", "annoyed"], ["troublesome", "crazy", "noisy", "young"], ["innocently", "confusedly", "pitifully", "hopefully"], ["curious", "sad", "silent", "greedy"], ["study", "growth", "analysis", "thought"], ["you", "they", "we", "all"], ["direction", "secret", "truth", "result"], ["meaningful", "important", "demanding", "practical"]], "answers": ["B", "A", "C", "A", "C", "A", "C", "B", "A", "B", "D", "C", "D", "A", "B", "D", "C", "D", "A"]}
|
json
|
{
"name": "<NAME>",
"number": "54529134",
"is_illegal": false,
"text": "Target 1 \"Salamangreat\" Link Monster you control; Special Summon 1 \"Salamangreat\" Link Monster with the same name from your Extra Deck, using that monster you control as the entire material. (This is treated as a Link Summon.) You can only activate 1 \"Salamangreat Transcendence\" per turn.",
"type": "Spell",
"is_monster": false,
"is_spell": true,
"is_trap": false,
"property": "Quick-Play"
}
|
json
|
Posted On:
A proposal has been received from the Chief Justice of India for setting up of National Judicial Infrastructure Authority of India (NJIAI) for arrangement of adequate infrastructure for courts, as per which there will be a Governing Body with Chief Justice of India as Patron-in-Chief. The other salient features in the proposal are that NJIAI will act as a Central body in laying down the road map for planning, creation, development, maintenance and management of functional infrastructure for the Indian Court System, besides, identical structures under all the High Courts. The proposal has been sent to the various State Government/UTs, as they constitute an important stakeholder, for their views on the contours of the proposal to enable taking a considered view on the matter.
The primary responsibility of development of infrastructure facilities for judiciary rests with the State Governments. To augment the resources of the State Governments, the Union Government has been implementing a Centrally Sponsored Scheme for Development of Infrastructure Facilities for Judiciary by providing financial assistance to State Governments / UTs in the prescribed fund sharing pattern between Centre and States. The Scheme is being implemented since 1993-94. It covers the construction of court buildings and residential accommodations for Judicial Officers of District and Subordinate Judiciary. A sum of Rs. 8758.71 crore has been released under the Scheme so far since its inception, out of which Rs. 5314.40 crore (60.68 %) has been released since 2014-15. The Scheme has been extended from 2021-22 to 2025-26 with a budgetary outlay of Rs. 9000 crore including Central share of Rs. 5307.00 crore. Besides the construction of Court Halls and Residential Quarters, the Scheme now also covers the construction of Lawyers’ Halls, Digital Computer Rooms and Toilet Complexes in the District and Subordinate Courts.
The status of sanctioned strength and working strength of judges in High Courts is as under:
Subsequent to the deliberations held in the Conference of the Chief Ministers and Chief Justices in 2013, it was inter alia resolved that the total sanctioned strength of each High Court could be increased. Subsequently, the judge strength of various High Courts was increased. At present, the sanctioned strength of judges of High Courts has increased from 906 in 2014 to 1104 in 2022.
This information was given by the Union Minister of Law and Justice, Shri Kiren Rijiju in a written reply in Rajya Sabha, today.
|
english
|
<reponame>yetsun/hue
{"body":"<div><div id=\"decimal_v2\"><div class=\"hue-doc-title\">DECIMAL_V2 Query Option</div><div><p>\n A query option that changes behavior related to the\n <span class=\"hue-doc-codeph\">DECIMAL</span> data type. Set this option to\n <span class=\"hue-doc-codeph\">FALSE</span> for backward compatibility to Impala 2.x.\n </p><p><b>Type:</b> Boolean</p><p><b>Default:</b><span class=\"hue-doc-codeph\">TRUE</span></p></div></div></div>","title":"DECIMAL_V2 Query Option"}
|
json
|
from .sql_io import SQLio
from sql_tools.utils import refetch_filter, listify, kwgs, bin2str
from .keymap import (
    ADD_USER,
    DROP_USER,
    LOCK_USER,
    UNLOCK_USER,
    GRANT_POWER,
    REVOKE_POWER,
    USER_GRANTS
)

try:
    SQ = SQLio('mysql')
except Exception:
    SQ = None


##@@ USERS
def create_user(user, host, password):
    """
    Add a user to the MySQL server configuration
    """
    SQ.execute_only(ADD_USER.format(user, host, password))


def remove_user(user, host):
    """
    Remove a user from the MySQL server configuration
    """
    SQ.execute_only(DROP_USER.format(user, host))


def users_list(filter_by=['User', 'Host', 'account_locked']):
    """
    Return the list of users, restricted to the columns in filter_by
    """
    from .queries import select_elements
    s = listify(filter_by)
    res = select_elements('mysql', 'user', selection=s)
    if isinstance(res[0][0], str):
        return res
    # Decode binary columns (e.g. account_locked) to strings.
    _res = []
    for elements in res:
        _t = ()
        for i in elements:
            if isinstance(i, str):
                _t += (i, )
                continue
            _t += (bin2str(i), )
        _res += [_t]
    return _res


##@@ LOCKS
def lock_user(user, host):
    """
    Lock a user account
    """
    SQ.execute_only(LOCK_USER.format(user, host))


def unlock_user(user, host):
    """
    Unlock a user account
    """
    SQ.execute_only(UNLOCK_USER.format(user, host))


##@@ GRANTS
def set_user_grants(user, host, grants=None, database=None, table=None):
    """
    Grant rights to a user
    """
    g, d, t = kwgs(grants, database, table)
    SQ.execute_only(GRANT_POWER.format(g, d, t, user, host))


def revoke_user_grants(user, host, grants=None, database=None, table=None):
    """
    Revoke rights from a user
    """
    g, d, t = kwgs(grants, database, table)
    SQ.execute_only(REVOKE_POWER.format(g, d, t, user, host))


@refetch_filter([0])
def user_grants(user, host):
    """
    Show a user's grants
    """
    return SQ.execute_and_fetch(USER_GRANTS.format(user, host))
|
python
|
<gh_stars>1000+
package base64Captcha
import (
"embed"
"github.com/golang/freetype"
"github.com/golang/freetype/truetype"
)
type EmbeddedFontsStorage struct {
fs embed.FS
}
func (s *EmbeddedFontsStorage) LoadFontByName(name string) *truetype.Font {
fontBytes, err := s.fs.ReadFile(name)
if err != nil {
panic(err)
}
//font file bytes to trueTypeFont
trueTypeFont, err := freetype.ParseFont(fontBytes)
if err != nil {
panic(err)
}
return trueTypeFont
}
// LoadFontsByNames import fonts from dir.
// make the simple-font(RitaSmith.ttf) the first font of trueTypeFonts.
func (s *EmbeddedFontsStorage) LoadFontsByNames(assetFontNames []string) []*truetype.Font {
fonts := make([]*truetype.Font, 0)
for _, assetName := range assetFontNames {
f := s.LoadFontByName(assetName)
fonts = append(fonts, f)
}
return fonts
}
func NewEmbeddedFontsStorage(fs embed.FS) *EmbeddedFontsStorage {
return &EmbeddedFontsStorage{
fs: fs,
}
}
|
go
|
<gh_stars>0
'use strict';
const cmd = {
command: 'edit',
description: 'Edit front matter of a set of files',
builder: (yargs) => (
yargs.option('f', {
alias: 'folders',
type: 'array',
description: 'set of folders to include',
}).option('x', {
alias: 'extensions',
type: 'array',
description: 'set of extensions to include',
default: ['md'],
defaultDescription: 'Only markdown included by default'
})
),
async handler(/* argv */) {
console.log('calling from: %s', process.cwd());
},
};
module.exports = cmd;
|
javascript
|
Actor Srikanth is also a self-made artiste, with no godfathers in the film industry. He entered the film industry as a character actor, started out playing negative roles and was gradually promoted to hero. He has had several hits to his credit, completed his 100th film in Tollywood some time ago and appears in films occasionally.
Srikanth’s last appearance on the Telugu screen was in ‘Sarrainodu’. He is presently doing a horror film titled ‘Raa Raa’. The first-look poster of the movie was launched at Mega Star Chiranjeevi’s residence. It may be mentioned here that Srikanth is also making his debut in Mollywood and is planning to share the screen with Mohanlal and Vishaal, and the Tamil film would be directed by Unnikrishnan.
|
english
|
{"title": "Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference.", "fields": ["bayesian hierarchical modeling", "bayesian statistics", "variable order bayesian network", "frequentist inference", "bayes factor"], "abstract": "A common challenge for Bayesian models of perception is the fact that the two fundamental Bayesian components, the prior distribution and the likelihood function, are formally unconstrained. Here we argue that a neural system that emulates Bayesian inference is naturally constrained by the way it represents sensory information in populations of neurons. More specifically, we show that an efficient coding principle creates a direct link between prior and likelihood based on the underlying stimulus distribution. The resulting Bayesian estimates can show biases away from the peaks of the prior distribution, a behavior seemingly at odds with the traditional view of Bayesian estimation, yet one that has been reported in human perception. We demonstrate that our framework correctly accounts for the repulsive biases previously reported for the perception of visual orientation, and show that the predicted tuning characteristics of the model neurons match the reported orientation tuning properties of neurons in primary visual cortex. Our results suggest that efficient coding is a promising hypothesis in constraining Bayesian models of perceptual inference.", "citation": "Citations (16)", "departments": ["University of Pennsylvania", "University of Pennsylvania"], "authors": ["<NAME>.....http://dblp.org/pers/hd/w/Wei:Xue=Xin", "<NAME>.....http://dblp.org/pers/hd/s/Stocker:Alan_A="], "conf": "nips", "year": "2012", "pages": 9}
|
json
|
{
"variants": {
"powered=false": { "model": "sap:block/linden_pressure_plate" },
"powered=true": { "model": "sap:block/linden_pressure_plate_down" }
}
}
|
json
|
<reponame>pokt-network/pocket-arcade
{
"name": "@pokt-network/pocket-arcade",
"version": "0.0.1",
"engine": {
"node": ">=10.19.0 <=12.15.0"
},
"description": "Pocket Network CLI that uses the Pocket-JS library",
"main": "src/cli.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Pocket Network",
"authors": [
{
"name": "<NAME>",
"email": "<EMAIL>",
"homepage": "https://github.com/pabelnl"
},
{
"name": "<NAME>",
"email": "<EMAIL>",
"homepage": "https://github.com/luyzdeleon"
}
],
"license": "MIT",
"bin": {
"pocket-arcade": "./src/cli.js"
},
"dependencies": {
"@pokt-network/pocket-js": "^0.6.4-rc",
"commander": "^6.2.0"
}
}
|
json
|
Everyone loves a good corpse flower blossom. These giant plants mostly remain dormant and only sprout flowers once every four to five years.
The one in the New York Botanical Garden (NYBG) is about to bloom for the first time since 1939. According to the NYBG, the flower has already started to open, and will bloom for around 24 to 36 hours. Friday, July 29, is the best time to see the flower, which will go dormant again for another few years.
Just be glad that when it does bloom, you won’t be there to smell it. After all, the name “corpse flower” is more of a description of what the plant smells like. It apparently smells like rotting flesh. This is to attract the appropriate pollinators.
Oh Mother Nature. You have a sick sense of humor.
Apparently, so do scientists. The scientific name for the plant is Amorphophallus titanum, which means misshapen giant phallus. Insert your own penis joke here.
|
english
|
The Baltimore Ravens were projected to repeat as the NFL's #1 rushing offense for the third straight season, led by former MVP QB Lamar Jackson, and RBs JK Dobbins and Gus Edwards.
The Baltimore Ravens faced the Washington Football Team in their preseason finale on Saturday, but it resulted in star RB JK Dobbins' season coming to an early end. Baltimore were giving their starters a few reps ahead of the season-opener when JK Dobbins went down with a serious-looking knee injury.
He was helped off the field, but the cart was needed to carry him to the back. According to NFL Network's Ian Rapoport, there was fear of it being a season-ending ACL tear, which is indeed the case.
JK Dobbins is still young, and should rebound well for next season. The Baltimore Ravens were expecting great things from him in 2021 after he took the lead in 2020 with 134 carries, 805 yards, and nine TDs.
Now, Gus Edwards will look to become the Week 1 starter after finishing third in rushing last year with 144 carries, 723 yards and six TDs. He was better than JK Dobbins as a receiver last year, but the Baltimore Ravens need a star runner so Lamar Jackson doesn't have to carry the load.
Justice Hill was in danger of being cut and could now possibly make the final roster, but the Baltimore Ravens should see what their options are. On that note, here's a look at three RBs the Baltimore Ravens should look at in the aftermath of JK Dobbins' season-ending injury:
While Gus Edwards and Ty'Son Williams may not look like the best options, Baltimore could still decide to run with their current roster and use Edwards' versatility.
He has been a strong and consistent runner, never averaging below five yards per carry, and is a decent receiver. The depth chart is deep with potential, with Ty'Son Williams and Nate McCrary having had great preseasons.
Justice Hill isn't the same RB he was in 2019, but that's not to say he can't step up when called upon. The Ravens likely won't three-peat as the top rushing attack, but maybe they believe their homegrown talent can perform well enough to keep the backfield relevant.
Todd Gurley wouldn't have been an option if he hadn't already met with the Baltimore Ravens earlier in the offseason and left without a contract. He remains unsigned after having a 'decent' season with the Atlanta Falcons, with 195 carries, 678 yards, nine TDs and a 3.5 average.
Gurley was once an elite RB with the LA Rams, leading the league in rushing TDs and total TDs twice. He has averaged under four yards per carry over the past two seasons, but he's still young enough to get another push somewhere.
With the Baltimore Ravens' run-heavy offense, it could be his best shot to show the NFL that he isn't washed up. It is difficult to think that Todd Gurley couldn't at least regain some of the success he had with the Rams.
38-year-old Frank Gore is a free agent after having 187 carries for 653 yards, two TDs and a 3.5 average with the New York Jets last year.
Gore has stated he will only go to a team that feels right for him, so he'll likely join a contender. Despite owning the record for the most games played by an RB (241), he has never won a Super Bowl, and has only been to one with the San Francisco 49ers, which they lost to the Baltimore Ravens.
The Ravens are indeed a contender, so Gore could be used heavily in the backfield. It's a win-win situation for both parties, as Gore could wrap up his career with a deep playoff run. It also adds some power running to the backfield, and Gore could be a leader of a young RB room.
|
english
|
<reponame>poulosar/nfl-database
{"player_id": 1697, "name": "<NAME>", "position": "G-T", "height": "6-0", "weight": "215", "current_team": null, "birth_date": "1903-04-01", "birth_place": "La Farge, WI", "death_date": null, "college": "none", "high_school": null, "draft_team": null, "draft_round": null, "draft_position": null, "draft_year": null, "current_salary": null, "hof_induction_year": null}
|
json
|
When I was preparing a list of waterfalls in Gujarat, I came to know about the Jodiya waterfalls at Bilpudi, Dharampur. At that time, the Bilpudi falls were unknown to most people. Little did I know they would turn out to be a major attraction in the monsoon of 2016.
I also upload a live stream via the Snapchat app; it helps me save the story of the day :)
The Bilpudi falls are the source of the Swargvahini River, which passes through Dharampur town.
I am updating this short article with better pictures and more details to help out travellers.
Driving directions: The roads are single-lane, unpaved and muddy. Two vehicles can't pass at the same time. Cars can go up to a certain point; after that you need to walk to reach the falls. That walk in the lap of nature will blow your mind with the smell of fresh air.
There are two waterfalls in that area, so they are called the Jodiya waterfalls.
One crashes down from 30 feet. The other falls from 20 feet.
However, that's a name given by tourists.
The falls are otherwise originally known as the Mavli Mata waterfalls.
The first fall is called Sonajal Dhodh.
The second one is called Roopajal Dhodh.
The Mavli Mata falls come under the boundary of Bilpudi village.
Bilpudi is a tiny village, around 10 km from Dharampur town.
To reach Bilpudi, take the route towards Wilson Hills (Barumal Temple). Drive for a kilometre and then ask any local for directions.
The Sona & Roopa falls are 2 km from the highway. Do note that the road leading to the Jodiya Falls from the main road is in pathetic condition.
Unless you have a four-wheel-drive vehicle, it is a good decision to leave your car behind and walk the 2 km.
|
english
|
/* Copyright 2001, 2019 IBM Corporation
*
* Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
* following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
* following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the
* following disclaimer in the documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef DATARECEIVERTIMESTAMPER_HPP
#define DATARECEIVERTIMESTAMPER_HPP
#include <iostream>
#include <iomanip>
#include <cmath>    // sqrt() in DumpInfoForTag
#include <cstdio>   // printf() in controlTimeStamp
#include <cstdlib>  // exit()
#include <BlueMatter/DataReceiver.hpp>
#include <BlueMatter/ExternalDatagram.hpp>
#include <BlueMatter/Tag.hpp>
#include <algorithm>
#include <vector>
class TimeValue
{
public:
unsigned long mSeconds;
unsigned long mNanoseconds;
static inline TimeValue MinValue()
{
TimeValue rc;
rc.mSeconds = 0;
rc.mNanoseconds = 0;
return ( rc );
}
static inline TimeValue MaxValue()
{
TimeValue rc;
rc.mSeconds = 0x7fffffff;
rc.mNanoseconds = 0x7fffffff;
return ( rc );
}
inline TimeValue operator+=( const TimeValue& aOther )
{
int nanoSum = mNanoseconds + aOther.mNanoseconds;
int difference = nanoSum - 1000000000;
// Adjust for carry if it occurred
if ( difference < 0 )
{
mSeconds += aOther.mSeconds;
mNanoseconds = nanoSum;
}
else
{
mSeconds += (aOther.mSeconds + 1);
mNanoseconds = difference;
}
return (*this);
}
inline TimeValue operator-( const TimeValue& aOther ) const
{
TimeValue rc;
rc.mSeconds = mSeconds - aOther.mSeconds;
// The fields are unsigned, so test for a borrow before subtracting
// rather than checking the result for a negative value
if( mNanoseconds < aOther.mNanoseconds )
{
rc.mSeconds--;
rc.mNanoseconds = mNanoseconds + 1000000000 - aOther.mNanoseconds;
}
else
{
rc.mNanoseconds = mNanoseconds - aOther.mNanoseconds;
}
return ( rc );
}
inline
int operator<( const TimeValue& aOther )
{
if(mSeconds < aOther.mSeconds)
return 1;
else if(mSeconds == aOther.mSeconds)
if(mNanoseconds < aOther.mNanoseconds)
return 1;
else
return 0;
else
return 0;
}
inline
int operator<=( const TimeValue& aOther )
{
if(mSeconds < aOther.mSeconds)
return 1;
else if (mSeconds == aOther.mSeconds)
if(mNanoseconds <= aOther.mNanoseconds)
return 1;
else
return 0;
else
return 0;
}
inline
int operator>=( const TimeValue& aOther )
{
if(mSeconds > aOther.mSeconds)
return 1;
else if (mSeconds == aOther.mSeconds)
if(mNanoseconds >= aOther.mNanoseconds)
return 1;
else
return 0;
else
return 0;
}
inline
int operator>( const TimeValue& aOther )
{
if(mSeconds > aOther.mSeconds)
return 1;
else if(mSeconds == aOther.mSeconds)
if(mNanoseconds > aOther.mNanoseconds)
return 1;
else
return 0;
else
return 0;
}
void Zero()
{
mSeconds = 0;
mNanoseconds = 0;
}
static double getDouble( TimeValue aTime )
{
char timeString[512];
double result;
// sprintf(timeString, "%d.%09d", aTime.mSeconds, aTime.mNanoseconds);
//sscanf(timeString, "%lf", &result);
double rc = aTime.mNanoseconds * 0.000000001 + aTime.mSeconds;
return(rc);
// return result;
}
};
struct RunInfoPerTimeStep
{
double timeInfo[ Tag::TAG_COUNT ];
int numberOfValues[ Tag::TAG_COUNT ];
};
#define MAX_TIMESTEP 100
static RunInfoPerTimeStep dataCollectionArray[ MAX_TIMESTEP ];
static RunInfoPerTimeStep sumX;
static RunInfoPerTimeStep sumXsqr;
static RunInfoPerTimeStep minX;
static RunInfoPerTimeStep maxX;
class DataReceiverTimeStamper : public DataReceiver
{
double mTotalTime;
int mTotalTimeSteps;
int mTotalPrintOutTimeSteps[ Tag::TAG_COUNT ];
int mFirstTimeStep;
int mDelayNsteps;
int mDumpSummaryOnly;
public:
DataReceiverTimeStamper( int delayNsteps = TIMESTEPS_TO_TUNE, int dumpSummaryOnly = 0) {
mDelayNsteps = delayNsteps;
mDumpSummaryOnly = dumpSummaryOnly;
}
virtual void init()
{
mTotalTime=0.0;
mTotalTimeSteps = 0;
mFirstTimeStep = -1;
for(int i=0; i<MAX_TIMESTEP; i++)
{
for( int j=0; j<Tag::TAG_COUNT; j++ ) {
dataCollectionArray[ i ].timeInfo[j]=0.0;
dataCollectionArray[ i ].numberOfValues[ j ] = 0;
}
}
for( int j=0; j<Tag::TAG_COUNT; j++ )
{
mTotalPrintOutTimeSteps[ j ] = 0;
sumX.timeInfo[j]=0.0;
sumX.numberOfValues[j] = 0;
sumXsqr.timeInfo[j]=0.0;
sumXsqr.numberOfValues[j] = 0;
minX.timeInfo[j]=99999999.0;
minX.numberOfValues[j] = 0;
maxX.timeInfo[j]=0.0;
maxX.numberOfValues[j] = 0;
}
}
virtual void controlTimeStamp(ED_ControlTimeStamp &ts)
{
TimeValue t;
t.mSeconds = ts.mSeconds;
t.mNanoseconds = ts.mNanoSeconds;
double timeValue = TimeValue::getDouble( t );
int timeStep = ts.mFullOuterTimeStep;
printf("GotPacket:: ts = %d, tag = %d, tv = %f\n", timeStep, ts.mTag1, timeValue );
if( mFirstTimeStep == -1 )
mFirstTimeStep = timeStep;
if( ts.mTag1 == Tag::TimeStep )
mTotalTime += timeValue;
if( (timeStep - mFirstTimeStep) >= mDelayNsteps )
{
// cout << "mDelayNsteps: " << mDelayNsteps << endl;
if( ts.mTag1 >= Tag::TAG_COUNT )
{
cerr << "Tag: " << ts.mTag1 << " not recognized." << endl;
}
sumX.timeInfo[ ts.mTag1 ] += timeValue;
sumX.numberOfValues[ ts.mTag1 ]++;
sumXsqr.timeInfo[ ts.mTag1 ] += (timeValue * timeValue);
sumXsqr.numberOfValues[ ts.mTag1 ]++;
if( timeValue < minX.timeInfo[ ts.mTag1 ] )
{
minX.timeInfo[ ts.mTag1 ] = timeValue;
}
if( timeValue > maxX.timeInfo[ ts.mTag1 ] )
{
maxX.timeInfo[ ts.mTag1 ] = timeValue;
}
if( timeStep < (MAX_TIMESTEP+mFirstTimeStep) )
{
int timeStepIndex = (timeStep - mFirstTimeStep) - (mDelayNsteps);
if( timeStepIndex < 0 )
{
cerr << "ERROR:: timeStepIndex < 0 " << endl;
exit(1);
}
// cout << "timeStepIndex: " << timeStepIndex << endl;
dataCollectionArray[ timeStepIndex ].timeInfo[ ts.mTag1 ] += timeValue;
if(dataCollectionArray[ timeStepIndex ].numberOfValues[ ts.mTag1 ] == 0)
{
dataCollectionArray[ timeStepIndex ].numberOfValues[ ts.mTag1 ] = 1;
mTotalPrintOutTimeSteps[ ts.mTag1 ]++;
}
}
}
}
void addTimeValue(TimeValue &aTs, TimeValue &aResult)
{
unsigned long Seconds = aTs.mSeconds;
unsigned long Nanoseconds = aTs.mNanoseconds;
unsigned long nanoSum = aResult.mNanoseconds + Nanoseconds;
if( nanoSum >= 1000000000 ) {
aResult.mSeconds += ( Seconds + 1 );
// Carry one second out of the nanosecond field
aResult.mNanoseconds = nanoSum - 1000000000;
}
else {
aResult.mSeconds += Seconds;
aResult.mNanoseconds += Nanoseconds;
}
}
virtual void final(int status=1)
{
// We don't count the last time step
// mTotalPrintOutTimeSteps[ Tag::TimeStep ]--;
if( !mDumpSummaryOnly ) {
cout << "TimeStep(after delay)\t";
// Output the information per time step
for( int i=0; i<Tag::TAG_COUNT; i++)
{
if( sumX.numberOfValues[ i ] == 0 )
continue;
char nameForTag[64];
Tag::GetNameForTag( i, nameForTag );
cout << nameForTag << "\t";
}
cout << endl;
for( int i=0; i<mTotalPrintOutTimeSteps[ Tag::TimeStep ]; i++ ) {
cout << i << "\t";
for( int j=0; j<Tag::TAG_COUNT; j++ ) {
if( sumX.numberOfValues[j] == 0 )
continue;
cout << setw(12) << setprecision(6) << dataCollectionArray[i].timeInfo[j];
}
cout << endl;
}
}
// Report results with some analysis.
// Calculate the Total Time
// for(int i=0; i<mTotalTimeSteps; i++) {
// mTotalTime += dataCollectionArray[i].timeInfo[ Tag::TimeStep ];
// }
// char nanoSecStr[16];
// sprintf(nanoSecStr, "%09d", mTotalTime.mNanoseconds);
cout << "Delay N steps: " << mDelayNsteps << endl;
cout << "Number of time steps to print out: " << mTotalPrintOutTimeSteps[ Tag::TimeStep ] << endl;
cout << "Number of time steps analyzed : " << sumX.numberOfValues[ Tag::TimeStep ] << endl;
cout << "Total time of run: " << mTotalTime << endl;
for( int j=0; j<Tag::TAG_COUNT; j++ )
{
DumpInfoForTag( j );
}
}
void DumpInfoForTag( int tag )
{
char tagName[ 32 ];
Tag::GetNameForTag( tag, tagName );
if( sumX.numberOfValues[ tag ] == 0 )
return;
double meanTime = sumX.timeInfo[ tag ] / sumX.numberOfValues[ tag ];
double variance = sumXsqr.timeInfo[ tag ] / sumXsqr.numberOfValues[ tag ] - meanTime*meanTime;
double std = sqrt(variance);
cout << "Mean time step (" << tagName << "): " << meanTime << endl;
cout << "Variance (" << tagName << "): " << variance << endl;
cout << "StdDev (" << tagName << "): " << std << endl;
cout << "Max (" << tagName << "): " << maxX.timeInfo[ tag ] << endl;
cout << "Min (" << tagName << "): " << minX.timeInfo[ tag ] << endl;
}
#if 0
void DumpInfoForTag( int tag )
{
char tagName[ 32 ];
Tag::GetNameForTag( tag, tagName );
double TotalSumAfterRebalancing = 0.0;
double minTime = 99999999.0;
double maxTime = 0.0;
for( int i = mDelayNsteps+1; i < mTotalTimeSteps; i++ )
{
double currentTimeValue = dataCollectionArray[i].timeInfo[ tag ];
TotalSumAfterRebalancing += currentTimeValue;
double curTime = currentTimeValue ;
// Look for Min
if( curTime < minTime ) {
minTime = curTime;
}
// Look for Max
if( curTime > maxTime ) {
maxTime = curTime;
}
}
double sumAR = TotalSumAfterRebalancing ;
int numberOfElements = mTotalTimeSteps - (mDelayNsteps+1);
double meanTime = ( sumAR/numberOfElements );
cout << "Mean time step (" << tagName << "): " << meanTime << endl;
double sum = 0.0;
for( int i = mDelayNsteps+1; i < mTotalTimeSteps; i++ )
{
double currentTime = dataCollectionArray[ i ].timeInfo[ tag ] ;
sum += ( (currentTime - meanTime) * (currentTime - meanTime) );
}
double variance = (sum/(numberOfElements));
cout << "Variance(" << tagName << "): " << variance << endl;
cout << "Max(" << tagName << "): " << maxTime << endl;
cout << "Min(" << tagName << "): " << minTime << endl;
cout << endl;
}
#endif
};
#endif
|
cpp
|
Ministry of Environment, Forest and Climate Change.
The Central Government has not decided to release all elephants into the wild for their safety and free movement. The government has decided to rehabilitate captive elephants only from zoos to elephant camps/rehabilitation camps/other facilities available with the Forest Departments at National Parks/Wildlife Sanctuaries/Tiger Reserves for departmental use.
This information was given by Shri Namo Narain Meena, Minister of State in the Ministry of Finance, currently also looking after the Ministry of Environment and Forests, in a written reply to a question by Shri Rama Chandra Khuntia in the Rajya Sabha today.
|
english
|
// server/app/routes/index.ts
const path = require('path');
const router = require('express').Router();
router.use(function (req, res, next) {
next();
});
// server routes ============================================================
var api = require('./api');
router.use('/api', api);
// frontend routes ==========================================================
// route to handle all angular requests
router.get('*', function (req, res) {
res.sendFile('index.html', {
root: path.join(__dirname, '../../../../client')
});
});
console.log(path.join(__dirname, '../../../../client'));
module.exports = router;
|
javascript
|
# devices/sdm230.py
import sdm_modbus
def device(
host=False, port=False,
device=False, stopbits=False, parity=False, baud=False,
timeout=False, retries=False, unit=False,
extended=False
):
if device:
return sdm_modbus.SDM230(
device=device,
stopbits=stopbits,
parity=parity,
baud=baud,
timeout=timeout,
unit=unit
)
else:
return sdm_modbus.SDM230(
host=host,
port=port,
timeout=timeout,
unit=unit
)
def values(device):
values = device.read_all()
return {
"energy_active": values.get("total_energy_active", 0),
"import_energy_active": values.get("import_energy_active", 0),
"power_active": values.get("power_active", 0),
"p1_power_active": values.get("power_active", 0),
# "p2_power_active"
# "p3_power_active"
"voltage_ln": values.get("voltage", 0),
"p1n_voltage": values.get("voltage", 0),
# "p2n_voltage"
# "p3n_voltage"
# "voltage_ll"
# "p12_voltage"
# "p23_voltage"
# "p31_voltage"
"frequency": values.get("frequency", 0),
"p1_energy_active": values.get("total_energy_active", 0),
# "p2_energy_active"
# "p3_energy_active"
"p1_import_energy_active": values.get("import_energy_active", 0),
# "p2_import_energy_active"
# "p3_import_energy_active"
"export_energy_active": values.get("export_energy_active", 0),
"p1_export_energy_active": values.get("export_energy_active", 0),
# "p2_export_energy_active"
# "p3_export_energy_active"
"energy_reactive": values.get("total_energy_reactive", 0),
"p1_energy_reactive": values.get("total_energy_reactive", 0),
# "p2_energy_reactive"
# "p3_energy_reactive"
# "energy_apparent"
# "p1_energy_apparent"
# "p2_energy_apparent"
# "p3_energy_apparent"
"power_factor": values.get("power_factor", 0),
"p1_power_factor": values.get("power_factor", 0),
# "p2_power_factor"
# "p3_power_factor"
"power_reactive": values.get("power_reactive", 0),
"p1_power_reactive": values.get("power_reactive", 0),
# "p2_power_reactive"
# "p3_power_reactive"
"power_apparent": values.get("power_apparent", 0),
"p1_power_apparent": values.get("power_apparent", 0),
# "p2_power_apparent"
# "p3_power_apparent"
"p1_current": values.get("current", 0),
# "p2_current"
# "p3_current"
"demand_power_active": values.get("total_demand_power_active", 0),
# "minimum_demand_power_active"
"maximum_demand_power_active": values.get("maximum_total_demand_power_active", 0),
# "demand_power_apparent"
"p1_demand_power_active": values.get("total_demand_power_active", 0),
# "p2_demand_power_active"
# "p3_demand_power_active"
}
|
python
|
[{"num_versions": 2, "url_citation": null, "title": "Facilitating dialog in the game-based learning classroom: Teacher challenges reconstructing professional identity", "url": "http://scholar.google.com/https://repository.nie.edu.sg/handle/10497/16390", "url_versions": "http://scholar.google.com/scholar?cluster=14724139203833061888&hl=en&as_sdt=1,5&sciodt=1,5&as_vis=1", "year": "2014", "excerpt": "Despite widespread interest in the use of digital games to engage students and enhance the quality of student learning, the teacher's perspective has been less extensively studied. The challenges that teachers face when enacting authentic game-based learning predicated ...", "url_pdf": null, "num_citations": 2, "cluster_id": "14724139203833061888", "authors": "<NAME>, <NAME>, <NAME> -", "url_citations": "http://scholar.google.com/scholar?cites=14724139203833061888&as_sdt=2005&sciodt=1,5&hl=en"}]
|
json
|
package com.mpitaskframework.TaskSystem;
import java.io.EOFException;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import com.mpitaskframework.TaskSystem.Messages.IntMessage;
import io.mappedbus.MappedBusReader;
import io.mappedbus.MappedBusWriter;
/**
 * System class is a central point where message queues are kept. Every method can be called from any task, so this
 * class needs to be lock-free and thread-safe. One mutex is used for incrementing the task id.
* @author <NAME> <bizzard4>
*
*/
public class TaskSystem implements Runnable {
/**
* Maximum number of task.
*/
public static final int MAX_TASK_COUNT = 1000;
/**
* Shared system location.
*/
public static final String SYSTEM_SHARED_PATH = "/tmp/TS_SYSTEM";
/**
* Prefix for task specific shared queue path.
*/
public static final String TASK_SHARED_PATH_PREFIX = "/tmp/TS_";
/**
* Reference to wait and signal thread. Will be null if this process is not
* the system creator.
*/
private Thread m_threadRef;
/**
* Link to data in shared space.
*/
private SharedSystemData m_sharedData;
/**
* Writers for each task. Writer are acquired when doing the first send.
*/
private MappedBusWriter[] writers = new MappedBusWriter[MAX_TASK_COUNT];
/**
* Readers for each task. Reader are created when the task is created.
*/
private MappedBusReader[] readers = new MappedBusReader[MAX_TASK_COUNT];
/**
* Wait and signal condition and lock.
*/
final Lock sleeper_lock = new ReentrantLock();
final Condition[] sleepers = new Condition[MAX_TASK_COUNT];
/**
* The system is unique process wide. But in the case of IPC, it will need to be
* acquired instead of created.
*/
private static TaskSystem instance = null;
/**
* Prepare and activate the system. Can be created or acquire depending on the context.
* @param create True to create the system.
*/
public static void activateSystem(boolean create) {
if (instance != null) {
System.err.println("Error, system already activated");
System.exit(-1);
}
// Create the object.
instance = new TaskSystem();
if (create) {
try {
instance.createSystem();
} catch (IOException | ClassNotFoundException e) {
System.err.println("Error, " + e.getMessage());
e.printStackTrace();
System.exit(-1);
}
} else {
try {
instance.acquireSystem();
} catch (IOException e) {
System.err.println("Error, " + e.getMessage());
e.printStackTrace();
System.exit(-1);
}
}
}
/**
* This is not the best solution, but the one I will use for now. The system need to be prepared
* before used. It is an unique instance across all process.
* @return The system is initialized.
*/
public static TaskSystem getInstance() {
if (instance==null) {
System.err.print("Error, system is not initialized");
System.exit(-1);
}
return instance;
}
/**
* Constructor. Put default values.
*/
private TaskSystem() {
m_sharedData = null;
for (int i = 0; i < MAX_TASK_COUNT; i++) {
sleepers[i] = sleeper_lock.newCondition();
}
}
/**
* Prepare a new instance of the system within a shared space.
* @return
* @throws IOException
* @throws ClassNotFoundException
*/
private void createSystem() throws IOException, ClassNotFoundException {
// Create and initialize shared data
m_sharedData = new SharedSystemData(SYSTEM_SHARED_PATH, true);
// Initialize all readers and writers to null
for (int i = 0; i < MAX_TASK_COUNT; i++) {
readers[i] = null;
writers[i] = null;
}
// Start the wait and signal loop
m_threadRef = new Thread(this);
m_threadRef.start();
}
/**
* Acquire an existing instance of the system from the shared space.
* @throws IOException
*/
private void acquireSystem() throws IOException {
// Create and initialize shared data
m_sharedData = new SharedSystemData(SYSTEM_SHARED_PATH, false);
// Initialize all readers and writers to null
for (int i = 0; i < MAX_TASK_COUNT; i++) {
readers[i] = null;
writers[i] = null;
}
}
/**
* Signal system for a clean exit.
*/
public void destroy() {
m_sharedData.setShutdownSignal(true);
try {
m_threadRef.join();
} catch (InterruptedException e) {
System.err.println("System failed to join wait and signal thread");
e.printStackTrace();
}
}
/**
* Add a message to a task queue. Communication is only between threads in the same process, but
* this system should be extensible to inter-process and network communication. This is part of the scaling.
* @param pMsg
* @param pTaskId
*/
public void send(Message pMsg, int pTaskId) {
try {
if (writers[pTaskId] == null) {
// Acquire the Q
writers[pTaskId] = new MappedBusWriter(getTaskQPath(pTaskId), 100000L, 8, true);
writers[pTaskId].open();
}
} catch (IOException ex) {
System.err.println("Failed to acquire Q : " + ex.getMessage());
return;
}
// Write message
try {
writers[pTaskId].write(pMsg);
} catch (EOFException e) {
System.err.println("Failed to write in Q : " + e.getMessage());
}
}
/**
* Get the message from a task queue.
* @param pTaskId
* @return
* @throws
*/
public Message receive(int pTaskId) {
try {
if (readers[pTaskId].next()) {
int type = readers[pTaskId].readType();
// Mapping
Message msg = null;
if (type == IntMessage.INTMESSAGE_TID) {
msg = new IntMessage();
readers[pTaskId].readMessage(msg);
}
return msg;
}
} catch (EOFException e) {
System.err.println("EOF error : " + e.getMessage());
}
return null;
}
/**
* Create a new shared queue for a task id.
* @param pTaskId
*/
public void createMessageQueue(int pTaskId) {
File f = new File(getTaskQPath(pTaskId));
if (f.exists()) {
f.delete();
}
try {
readers[pTaskId] = new MappedBusReader(getTaskQPath(pTaskId), 100000L, 8);
readers[pTaskId].open();
} catch (IOException e) {
System.err.println("Error creating the task Q : " + e.getMessage());
System.exit(-1);
}
}
/**
* Return next free task id.
* @return
*/
public int getNextTaskId() {
return m_sharedData.incrementNextTaskId();
}
/**
 * Return true if there are no messages in the Q.
* @param pTaskId
* @return
*/
public boolean message_immediate(int pTaskId) {
try {
return !readers[pTaskId].next();
} catch (EOFException e) {
System.err.println("message_immediate error : " + e.getMessage());
e.printStackTrace();
}
return false;
}
/**
* Look for a message in the Q, if no message put to sleep and the
* wait and signal loop will wake him up when a new message is present.
* @param pTaskId
*/
public void message_notify(int pTaskId) {
// Condition.await() requires that the caller hold the associated lock,
// otherwise it throws IllegalMonitorStateException. Checking for a
// message inside the lock also avoids a lost wakeup between the check
// and the await.
sleeper_lock.lock();
try {
if (message_immediate(pTaskId)) {
sleepers[pTaskId].await();
}
} catch (InterruptedException e) {
System.err.println("Condition variable failed to await");
e.printStackTrace();
} finally {
sleeper_lock.unlock();
}
}
/**
* Yield the current thread.
* @param pTaskId
*/
public void message_wait(int pTaskId) {
Thread.yield();
}
/**
* Build the path to a specific task queue.
* Format : /tmp/TS_<TASK_ID>
* @param pTaskId
* @return
*/
private String getTaskQPath(int pTaskId) {
return new String(TASK_SHARED_PATH_PREFIX + pTaskId);
}
/**
* Wait and signal loop.
*/
@Override
public void run() {
System.out.println("Wait and signal loop started");
while(!m_sharedData.getShutdownSignal()) {
// Signal each task with a message waiting
int current_max_id = m_sharedData.getNextTaskId();
for (int i = 0; i < current_max_id; i++) {
if ((readers[i] != null) && (!message_immediate(i))) {
sleeper_lock.lock();
try {
this.sleepers[i].signal();
} finally {
sleeper_lock.unlock();
}
}
}
try {
Thread.sleep(10); // 10ms sleep
} catch (InterruptedException e) {
System.err.println("Wait and signal loop failed to sleep");
e.printStackTrace();
}
}
System.out.println("Wait and signal loop shutdown");
}
}
|
java
|
sap.ui.define(['sap/ui/webc/common/thirdparty/base/asset-registries/Icons'], function (Icons) { 'use strict';
const name = "chevron-phase-2";
const pathData = "M507.625 254c5 7 5 20 0 28l-87 130c-15 21-38 33-64 33h-330c-10 0-18-5-23-12-5-8-3-18 3-26l92-138-92-138c-5-10-8-18-3-28 5-8 13-13 23-13h330c26 0 49 13 64 33z";
const ltr = false;
const collection = "SAP-icons-v5";
const packageName = "@ui5/webcomponents-icons";
Icons.registerIcon(name, { pathData, ltr, collection, packageName });
var pathDataV4 = { pathData };
return pathDataV4;
});
|
javascript
|
var { series, src, dest } = require("gulp");
var minify = require("gulp-minify");
var watch = require("gulp-watch");
var copy = require("gulp-copy");
function minifyJs() {
return src("dist/js/signature.js")
.pipe(
minify({
ext: {
min: ".min.js",
},
})
)
.pipe(dest("dist/js/"));
}
function watchFiles() {
// gulp 4 removed start(); re-run the minify task through series() instead.
// Note the function cannot be named `watch`, or it would shadow the
// gulp-watch import above and recurse into itself.
return watch("./lib/**/*.js", series(minifyJs));
}
function copyDocs() {
return src("./dist/js/signature.min.js").pipe(
copy("./docs/js", {
prefix: 2,
})
);
}
exports.watch = watchFiles;
exports.minify = minifyJs;
exports.build = minifyJs;
exports.copyDocs = copyDocs;
exports.default = series(minifyJs, copyDocs);
|
javascript
|
function renderMedal(context, id, x, y, h, alpha=.5) {
if (!this.medals || !this.medals.find(m => m.id == id)) return;
context.save();
context.fillStyle = '#fff';
context.strokeStyle = '#000';
context.shadowColor = '#000';
context.textBaseline = 'middle';
context.textAlign = 'left';
context.font = (h/2)+'px impact';
context.lineWidth = h/35;
context.shadowBlur = h/5;
context.globalAlpha = alpha;
const medal = this.medals.find(m => m.id == id);
context.drawImage(medal.image, x, y, h, h);
context.strokeRect(x, y, h, h);
const points = this.points[medal.difficulty - 1];
const text = this.getMedalText(medal);
context.lineWidth = Math.max(1,h/26);
context.strokeText(text, x + h*1.2, y+h/2);
context.fillText(text, x + h*1.2, y+h/2);
context.restore();
};
export function update(delta) {
if (this.displayMedalQueue?.length) {
const medal = this.displayMedalQueue[0];
medal.time += delta;
if(medal.time > this.medalDisplayTime) this.displayMedalQueue.shift();
};
};
export function render(context, size=50) {
if (this.displayMedalQueue?.length) {
const medal = this.displayMedalQueue[0];
const slideOnPercent = medal.time < 1 ? 1 - medal.time : 0;
const alpha = medal.time > this.medalDisplayTime - 1 ? this.medalDisplayTime - medal.time : 1;
const y = context.canvas.height + slideOnPercent * size * 1.5;
// renderMedal reads this.medals, so forward the caller's context explicitly
renderMedal.call(this, context, medal.index, 0, y - size, size, alpha);
};
};
|
javascript
|
// This file is part of Substrate.
// Copyright (C) 2017-2020 Parity Technologies (UK) Ltd.
// SPDX-License-Identifier: Apache-2.0
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// --- std ---
// use std::path::PathBuf;
use crate::cli::{Cli, Subcommand};
use sc_cli::{SubstrateCli, RuntimeVersion, Role, ChainSpec};
// use sc_service::PartialComponents;
use sp_core::crypto::Ss58AddressFormat;
// use uniarts_primitives::{OpaqueBlock as Block};
use uniarts_service::{pangu_runtime, fuxi_runtime, IdentifyVariant};
use log::info;
const UNI_ARTS_ADDRESS_FORMAT_ID: u8 = 45;
impl SubstrateCli for Cli {
fn impl_name() -> String {
"Uni-arts Node".into()
}
fn impl_version() -> String {
env!("SUBSTRATE_CLI_IMPL_VERSION").into()
}
fn executable_name() -> String { "uniarts".into() }
fn description() -> String {
env!("CARGO_PKG_DESCRIPTION").into()
}
fn author() -> String {
env!("CARGO_PKG_AUTHORS").into()
}
fn support_url() -> String {
"https://github.com/uni-arts-chain/uni-arts-network/issues".into()
}
fn copyright_start_year() -> i32 {
2020
}
fn load_spec(&self, id: &str) -> Result<Box<dyn sc_service::ChainSpec>, String> {
let id = if id.is_empty() {
let n = get_exec_name().unwrap_or_default();
["uart", "pangu", "fuxi"]
.iter()
.cloned()
.find(|&chain| n.starts_with(chain))
.unwrap_or("uart")
} else {
id
};
Ok(match id {
"dev" => Box::new(uniarts_service::chain_spec::fuxi_development_config()?),
"" | "local" => Box::new(uniarts_service::chain_spec::pangu_local_testnet_config()?),
"staging" => Box::new(uniarts_service::chain_spec::staging_config()?),
"uart" => Box::new(uniarts_service::chain_spec::pangu_config()?),
"pangu" => Box::new(uniarts_service::chain_spec::pangu_config()?),
"fuxi" => Box::new(uniarts_service::chain_spec::fuxi_config()?),
path => Box::new(uniarts_service::chain_spec::PanguChainSpec::from_json_file(
std::path::PathBuf::from(path),
)?),
})
}
fn native_runtime_version(spec: &Box<dyn ChainSpec>) -> &'static RuntimeVersion {
if spec.is_pangu_network() {
&uniarts_service::pangu_runtime::VERSION
} else if spec.is_fuxi_network() {
&uniarts_service::fuxi_runtime::VERSION
} else {
&uniarts_service::pangu_runtime::VERSION
}
}
}
fn get_exec_name() -> Option<String> {
std::env::current_exe()
.ok()
.and_then(|pb| pb.file_name().map(|s| s.to_os_string()))
.and_then(|s| s.into_string().ok())
}
fn set_default_ss58_version(spec: &Box<dyn uniarts_service::ChainSpec>) {
let ss58_version = if spec.is_pangu_network() {
Ss58AddressFormat::SubstrateAccount
} else if spec.is_fuxi_network() {
// todo
// Waiting for release: uniart address id
Ss58AddressFormat::Custom(UNI_ARTS_ADDRESS_FORMAT_ID)
} else {
Ss58AddressFormat::SubstrateAccount
};
sp_core::crypto::set_default_ss58_version(ss58_version);
}
/// Parse and run command line arguments
pub fn run() -> sc_cli::Result<()> {
let cli = Cli::from_args();
match &cli.subcommand {
Some(Subcommand::BuildSpec(cmd)) => {
let runner = cli.create_runner(cmd)?;
runner.sync_run(|config| cmd.run(config.chain_spec, config.network))
},
Some(Subcommand::CheckBlock(cmd)) => {
let runner = cli.create_runner(cmd)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
if chain_spec.is_pangu_network() {
runner.async_run(|mut config| {
let (client, _, import_queue, task_manager) = uniarts_service::new_chain_ops::<
pangu_runtime::RuntimeApi,
uniarts_service::PanguExecutor,
>(&mut config)?;
Ok((cmd.run(client, import_queue), task_manager))
})
} else if chain_spec.is_fuxi_network() {
runner.async_run(|mut config| {
let (client, _, import_queue, task_manager) = uniarts_service::new_chain_ops::<
fuxi_runtime::RuntimeApi,
uniarts_service::FuxiExecutor,
>(&mut config)?;
Ok((cmd.run(client, import_queue), task_manager))
})
} else {
unreachable!()
}
},
Some(Subcommand::ExportBlocks(cmd)) => {
let runner = cli.create_runner(cmd)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
if chain_spec.is_pangu_network() {
runner.async_run(|mut config| {
let (client, _, _, task_manager) = uniarts_service::new_chain_ops::<
pangu_runtime::RuntimeApi,
uniarts_service::PanguExecutor,
>(&mut config)?;
Ok((cmd.run(client, config.database), task_manager))
})
} else if chain_spec.is_fuxi_network() {
runner.async_run(|mut config| {
let (client, _, _, task_manager) = uniarts_service::new_chain_ops::<
fuxi_runtime::RuntimeApi,
uniarts_service::FuxiExecutor,
>(&mut config)?;
Ok((cmd.run(client, config.database), task_manager))
})
} else {
unreachable!()
}
},
Some(Subcommand::ExportState(cmd)) => {
let runner = cli.create_runner(cmd)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
if chain_spec.is_pangu_network() {
runner.async_run(|mut config| {
let (client, _, _, task_manager) = uniarts_service::new_chain_ops::<
pangu_runtime::RuntimeApi,
uniarts_service::PanguExecutor,
>(&mut config)?;
Ok((cmd.run(client, config.chain_spec), task_manager))
})
} else if chain_spec.is_fuxi_network() {
runner.async_run(|mut config| {
let (client, _, _, task_manager) = uniarts_service::new_chain_ops::<
fuxi_runtime::RuntimeApi,
uniarts_service::FuxiExecutor,
>(&mut config)?;
Ok((cmd.run(client, config.chain_spec), task_manager))
})
} else {
unreachable!()
}
},
Some(Subcommand::ImportBlocks(cmd)) => {
let runner = cli.create_runner(cmd)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
if chain_spec.is_pangu_network() {
runner.async_run(|mut config| {
let (client, _, import_queue, task_manager) = uniarts_service::new_chain_ops::<
pangu_runtime::RuntimeApi,
uniarts_service::PanguExecutor,
>(&mut config)?;
Ok((cmd.run(client, import_queue), task_manager))
})
} else if chain_spec.is_fuxi_network() {
runner.async_run(|mut config| {
let (client, _, import_queue, task_manager) = uniarts_service::new_chain_ops::<
fuxi_runtime::RuntimeApi,
uniarts_service::FuxiExecutor,
>(&mut config)?;
Ok((cmd.run(client, import_queue), task_manager))
})
} else {
unreachable!()
}
},
Some(Subcommand::PurgeChain(cmd)) => {
let runner = cli.create_runner(cmd)?;
runner.sync_run(|config| cmd.run(config.database))
},
Some(Subcommand::Revert(cmd)) => {
let runner = cli.create_runner(cmd)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
if chain_spec.is_pangu_network() {
runner.async_run(|mut config| {
let (client, backend, _, task_manager) = uniarts_service::new_chain_ops::<
pangu_runtime::RuntimeApi,
uniarts_service::PanguExecutor,
>(&mut config)?;
Ok((cmd.run(client, backend), task_manager))
})
} else if chain_spec.is_fuxi_network() {
runner.async_run(|mut config| {
let (client, backend, _, task_manager) = uniarts_service::new_chain_ops::<
fuxi_runtime::RuntimeApi,
uniarts_service::FuxiExecutor,
>(&mut config)?;
Ok((cmd.run(client, backend), task_manager))
})
} else {
unreachable!()
}
},
None => {
let runner = cli.create_runner(&cli.run)?;
let chain_spec = &runner.config().chain_spec;
set_default_ss58_version(chain_spec);
info!(" _ _ _ _ _____ _ _ ");
info!(" | | | | (_) /\\ | | / ____| | (_) ");
info!(" | | | |_ __ _ ______ / \\ _ __| |_ ___ | | | |__ __ _ _ _ __ ");
info!(" | | | | '_ \\| |______/ /\\ \\ | '__| __/ __| | | | '_ \\ / _` | | '_ \\ ");
info!(" | |__| | | | | | / ____ \\| | | |_\\__ \\ | |____| | | | (_| | | | | |");
info!(" \\____/|_| |_|_| /_/ \\_\\_| \\__|___/ \\_____|_| |_|\\__,_|_|_| |_|");
info!(" ");
info!(" ");
info!(" by Uni-Arts Network, 2018-2020");
if chain_spec.is_pangu_network() {
runner.run_node_until_exit(|config| match config.role {
Role::Light => uniarts_service::pangu_new_light(config),
_ => uniarts_service::pangu_new_full(config).map(|(components, _)| components),
})
} else if chain_spec.is_fuxi_network() {
runner.run_node_until_exit(|config| match config.role {
Role::Light => uniarts_service::fuxi_new_light(config),
_ => uniarts_service::fuxi_new_full(config).map(|(components, _)| components),
})
} else {
unreachable!()
}
}
}
}
|
rust
|
<gh_stars>0
{"skins/default.js":"sha256-iPVVXrmIuOP4T4CTIQUAZkJmqbgmBsGnjAXpLFPaNG8=","skins/lowkey.js":"sha256-cY+PRolriWXGpcw5FgCrf2p68/TNM+O6voSTORiqUUY=","skins/narrow.js":"sha256-AIV3fLRt4BxAmqtyYG1it4bGXX7uD9wq5bIsocBllMY=","wavedrom.js":"sha256-<KEY>,"wavedrom.min.js":"sha2<KEY>}
|
json
|
{
"schema_version": "1.2.0",
"id": "GHSA-c864-h25r-8gjj",
"modified": "2022-05-13T01:41:52Z",
"published": "2022-05-13T01:41:52Z",
"aliases": [
"CVE-2017-10398"
],
"details": "Vulnerability in the Oracle Hospitality Cruise Fleet Management component of Oracle Hospitality Applications (subcomponent: BaseMasterPage). The supported version that is affected is 9.0.2.0. Easily exploitable vulnerability allows low privileged attacker with logon to the infrastructure where Oracle Hospitality Cruise Fleet Management executes to compromise Oracle Hospitality Cruise Fleet Management. While the vulnerability is in Oracle Hospitality Cruise Fleet Management, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized creation, deletion or modification access to critical data or all Oracle Hospitality Cruise Fleet Management accessible data as well as unauthorized access to critical data or complete access to all Oracle Hospitality Cruise Fleet Management accessible data. CVSS 3.0 Base Score 8.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N).",
"severity": [
{
"type": "CVSS_V3",
"score": "CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N"
}
],
"affected": [
],
"references": [
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2017-10398"
},
{
"type": "WEB",
"url": "http://www.oracle.com/technetwork/security-advisory/cpuoct2017-3236626.html"
},
{
"type": "WEB",
"url": "http://www.securityfocus.com/bid/101452"
}
],
"database_specific": {
"cwe_ids": [
],
"severity": "HIGH",
"github_reviewed": false
}
}
|
json
|
<reponame>zwang695/alexa-skills-list
{
"accountLinkingWhitelistedDomains": null,
"asin": "B01N03YIKK",
"averageRating": 0,
"canDisable": true,
"capabilities": null,
"category": null,
"description": "CoWorking Night Facts is a skill designed to give you interesting facts on the CoWorking Night, a weekly networking event happening weekly in Huntsville, AL.",
"enablement": null,
"exampleInteractions": [
"Alexa, ask CoWorking Night Facts to tell me an fact about CoWorking Night.",
"Alexa, ask CoWorking Night Facts to give me a fact about CoWorking Night.",
"Alexa, tell CoWorking Night Facts to give me some information about CoWorking Night."
],
"firstReleaseDate": 1479107594.088,
"homepageLinkText": null,
"homepageLinkUrl": null,
"id": "amzn1.ask.skill.8a107b5e-3611-403c-830f-db15647be45a",
"imageAltText": "CoWorking Night Facts icon",
"imageUrl": "https://github.com/dale3h/alexa-skills-list/raw/master/skills/B01N03YIKK/skill_icon",
"inAppPurchasingSupported": false,
"launchPhrase": "coworking night facts",
"name": "CoWorking Night Facts",
"numberOfReviews": 0,
"pamsPartnerId": null,
"permissions": null,
"privacyPolicyUrl": null,
"shortDescription": "Learn interesting facts about CoWorking Night.",
"skillTypes": null,
"stage": "live",
"termsOfUseUrl": null,
"vendorId": "M2AWTQ35Z2LRQ8",
"vendorName": "<NAME>"
}
|
json
|
<reponame>sunzy/nacos<filename>core/src/main/java/com/alibaba/nacos/core/remote/ConnectionManager.java
/*
* Copyright 1999-2020 Alibaba Group Holding Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.alibaba.nacos.core.remote;
import com.alibaba.nacos.api.common.Constants;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.remote.RemoteConstants;
import com.alibaba.nacos.api.remote.RequestCallBack;
import com.alibaba.nacos.api.remote.RpcScheduledExecutor;
import com.alibaba.nacos.api.remote.request.ClientDetectionRequest;
import com.alibaba.nacos.api.remote.request.ConnectResetRequest;
import com.alibaba.nacos.api.remote.request.RequestMeta;
import com.alibaba.nacos.api.remote.response.Response;
import com.alibaba.nacos.api.utils.NetUtils;
import com.alibaba.nacos.common.notify.Event;
import com.alibaba.nacos.common.notify.NotifyCenter;
import com.alibaba.nacos.common.notify.listener.Subscriber;
import com.alibaba.nacos.common.remote.exception.ConnectionAlreadyClosedException;
import com.alibaba.nacos.common.utils.CollectionUtils;
import com.alibaba.nacos.common.utils.JacksonUtils;
import com.alibaba.nacos.common.utils.StringUtils;
import com.alibaba.nacos.common.utils.VersionUtils;
import com.alibaba.nacos.core.monitor.MetricsMonitor;
import com.alibaba.nacos.core.remote.event.ConnectionLimitRuleChangeEvent;
import com.alibaba.nacos.core.utils.Loggers;
import com.alibaba.nacos.sys.env.EnvUtil;
import com.alibaba.nacos.sys.file.FileChangeEvent;
import com.alibaba.nacos.sys.file.FileWatcher;
import com.alibaba.nacos.sys.file.WatchFileCenter;
import com.alibaba.nacos.sys.utils.DiskUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.io.File;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
/**
* connect manager.
*
* @author liuzunfei
* @version $Id: ConnectionManager.java, v 0.1 2020年07月13日 7:07 PM liuzunfei Exp $
*/
@Service
public class ConnectionManager extends Subscriber<ConnectionLimitRuleChangeEvent> {
public static final String RULE_FILE_NAME = "limitRule";
/**
 * Four times the client keep-alive interval.
*/
private static final long KEEP_ALIVE_TIME = 20000L;
/**
* connection limit rule.
*/
private ConnectionLimitRule connectionLimitRule = new ConnectionLimitRule();
/**
 * Current loader-adjusted count; only effective once; used to rebalance.
*/
private int loadClient = -1;
String redirectAddress = null;
private Map<String, AtomicInteger> connectionForClientIp = new ConcurrentHashMap<String, AtomicInteger>(16);
Map<String, Connection> connections = new ConcurrentHashMap<>();
@Autowired
private ClientConnectionEventListenerRegistry clientConnectionEventListenerRegistry;
public ConnectionManager() {
NotifyCenter.registerToPublisher(ConnectionLimitRuleChangeEvent.class, NotifyCenter.ringBufferSize);
NotifyCenter.registerSubscriber(this);
}
/**
 * Check whether detailed monitoring is enabled for the given client ip.
 *
 * @param clientIp client ip.
 * @return true if the client ip is in the monitor ip list.
*/
public boolean traced(String clientIp) {
return connectionLimitRule != null && connectionLimitRule.getMonitorIpList() != null && connectionLimitRule
.getMonitorIpList().contains(clientIp);
}
@PostConstruct
protected void initLimitRule() {
try {
loadRuleFromLocal();
registerFileWatch();
} catch (Exception e) {
Loggers.REMOTE.warn("Fail to init limit rue from local ,error= ", e);
}
}
/**
* check connection id is valid.
*
* @param connectionId connectionId to be check.
* @return is valid or not.
*/
public boolean checkValid(String connectionId) {
return connections.containsKey(connectionId);
}
/**
* register a new connect.
*
* @param connectionId connectionId
* @param connection connection
*/
public synchronized boolean register(String connectionId, Connection connection) {
if (connection.isConnected()) {
if (connections.containsKey(connectionId)) {
return true;
}
if (!checkLimit(connection)) {
return false;
}
if (traced(connection.getMetaInfo().clientIp)) {
connection.setTraced(true);
}
connections.put(connectionId, connection);
connectionForClientIp.get(connection.getMetaInfo().clientIp).getAndIncrement();
clientConnectionEventListenerRegistry.notifyClientConnected(connection);
Loggers.REMOTE_DIGEST
.info("new connection registered successfully, connectionId = {},connection={} ", connectionId,
connection);
return true;
}
return false;
}
private boolean checkLimit(Connection connection) {
String clientIp = connection.getMetaInfo().clientIp;
if (connection.getMetaInfo().isClusterSource()) {
if (!connectionForClientIp.containsKey(clientIp)) {
connectionForClientIp.putIfAbsent(clientIp, new AtomicInteger(0));
}
return true;
}
if (isOverLimit()) {
return false;
}
if (!connectionForClientIp.containsKey(clientIp)) {
connectionForClientIp.putIfAbsent(clientIp, new AtomicInteger(0));
}
AtomicInteger currentCount = connectionForClientIp.get(clientIp);
if (connectionLimitRule != null) {
// 1.check rule of specific client ip limit.
if (connectionLimitRule.getCountLimitPerClientIp().containsKey(clientIp)) {
Integer integer = connectionLimitRule.getCountLimitPerClientIp().get(clientIp);
if (integer != null && integer >= 0) {
return currentCount.get() < integer;
}
}
// 2.check rule of specific client app limit.
String appName = connection.getMetaInfo().getAppName();
if (StringUtils.isNotBlank(appName) && connectionLimitRule.getCountLimitPerClientApp()
.containsKey(appName)) {
Integer integerApp = connectionLimitRule.getCountLimitPerClientApp().get(appName);
if (integerApp != null && integerApp >= 0) {
return currentCount.get() < integerApp;
}
}
// 3.check rule of default client ip.
int countLimitPerClientIpDefault = connectionLimitRule.getCountLimitPerClientIpDefault();
return countLimitPerClientIpDefault <= 0 || currentCount.get() < countLimitPerClientIpDefault;
}
return true;
}
/**
 * Unregister a connection.
*
* @param connectionId connectionId.
*/
public synchronized void unregister(String connectionId) {
Connection remove = this.connections.remove(connectionId);
if (remove != null) {
String clientIp = remove.getMetaInfo().clientIp;
AtomicInteger atomicInteger = connectionForClientIp.get(clientIp);
if (atomicInteger != null) {
int count = atomicInteger.decrementAndGet();
if (count <= 0) {
connectionForClientIp.remove(clientIp);
}
}
remove.close();
Loggers.REMOTE_DIGEST.info("[{}]Connection unregistered successfully. ", connectionId);
clientConnectionEventListenerRegistry.notifyClientDisConnected(remove);
}
}
/**
* get by connection id.
*
* @param connectionId connection id.
* @return connection of the id.
*/
public Connection getConnection(String connectionId) {
return connections.get(connectionId);
}
/**
* get by client ip.
*
* @param clientIp client ip.
* @return connections of the client ip.
*/
public List<Connection> getConnectionByIp(String clientIp) {
Set<Map.Entry<String, Connection>> entries = connections.entrySet();
List<Connection> connections = new ArrayList<>();
for (Map.Entry<String, Connection> entry : entries) {
Connection value = entry.getValue();
if (clientIp.equals(value.getMetaInfo().clientIp)) {
connections.add(value);
}
}
return connections;
}
/**
* get current connections count.
*
* @return get all connection count
*/
public int getCurrentConnectionCount() {
return this.connections.size();
}
/**
 * Refresh connection active time.
*
* @param connectionId connectionId.
*/
public void refreshActiveTime(String connectionId) {
Connection connection = connections.get(connectionId);
if (connection != null) {
connection.freshActiveTime();
}
}
/**
 * Start task: expel connections whose active time has expired.
*/
@PostConstruct
public void start() {
// Start UnHealthy Connection Expel Task.
RpcScheduledExecutor.COMMON_SERVER_EXECUTOR.scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
try {
int totalCount = connections.size();
Loggers.REMOTE_DIGEST.info("Connection check task start");
MetricsMonitor.getLongConnectionMonitor().set(totalCount);
Set<Map.Entry<String, Connection>> entries = connections.entrySet();
int currentSdkClientCount = currentSdkClientCount();
boolean isLoaderClient = loadClient >= 0;
int currentMaxClient = isLoaderClient ? loadClient : connectionLimitRule.countLimit;
int expelCount = currentMaxClient < 0 ? 0 : Math.max(currentSdkClientCount - currentMaxClient, 0);
Loggers.REMOTE_DIGEST
.info("Total count ={}, sdkCount={},clusterCount={}, currentLimit={}, toExpelCount={}",
totalCount, currentSdkClientCount, (totalCount - currentSdkClientCount),
currentMaxClient + (isLoaderClient ? "(loaderCount)" : ""), expelCount);
List<String> expelClient = new LinkedList<>();
Map<String, AtomicInteger> expelForIp = new HashMap<>(16);
//1. calculate expel count of ip.
for (Map.Entry<String, Connection> entry : entries) {
Connection client = entry.getValue();
String appName = client.getMetaInfo().getAppName();
String clientIp = client.getMetaInfo().getClientIp();
if (client.getMetaInfo().isSdkSource() && !expelForIp.containsKey(clientIp)) {
//get limit for current ip.
int countLimitOfIp = connectionLimitRule.getCountLimitOfIp(clientIp);
if (countLimitOfIp < 0) {
int countLimitOfApp = connectionLimitRule.getCountLimitOfApp(appName);
countLimitOfIp = countLimitOfApp < 0 ? countLimitOfIp : countLimitOfApp;
}
if (countLimitOfIp < 0) {
countLimitOfIp = connectionLimitRule.getCountLimitPerClientIpDefault();
}
if (countLimitOfIp >= 0 && connectionForClientIp.containsKey(clientIp)) {
AtomicInteger currentCountIp = connectionForClientIp.get(clientIp);
if (currentCountIp != null && currentCountIp.get() > countLimitOfIp) {
expelForIp.put(clientIp, new AtomicInteger(currentCountIp.get() - countLimitOfIp));
}
}
}
}
Loggers.REMOTE_DIGEST
.info("Check over limit for ip limit rule, over limit ip count={}", expelForIp.size());
if (expelForIp.size() > 0) {
Loggers.REMOTE_DIGEST.info("Over limit ip expel info, {}", expelForIp);
}
Set<String> outDatedConnections = new HashSet<>();
long now = System.currentTimeMillis();
//2.get expel connection for ip limit.
for (Map.Entry<String, Connection> entry : entries) {
Connection client = entry.getValue();
String clientIp = client.getMetaInfo().getClientIp();
AtomicInteger integer = expelForIp.get(clientIp);
if (integer != null && integer.intValue() > 0) {
integer.decrementAndGet();
expelClient.add(client.getMetaInfo().getConnectionId());
expelCount--;
} else if (now - client.getMetaInfo().getLastActiveTime() >= KEEP_ALIVE_TIME) {
outDatedConnections.add(client.getMetaInfo().getConnectionId());
}
}
//3. if total count is still over limit.
if (expelCount > 0) {
for (Map.Entry<String, Connection> entry : entries) {
Connection client = entry.getValue();
if (!expelForIp.containsKey(client.getMetaInfo().clientIp) && client.getMetaInfo()
.isSdkSource() && expelCount > 0) {
expelClient.add(client.getMetaInfo().getConnectionId());
expelCount--;
outDatedConnections.remove(client.getMetaInfo().getConnectionId());
}
}
}
String serverIp = null;
String serverPort = null;
if (StringUtils.isNotBlank(redirectAddress) && redirectAddress.contains(Constants.COLON)) {
String[] split = redirectAddress.split(Constants.COLON);
serverIp = split[0];
serverPort = split[1];
}
for (String expelledClientId : expelClient) {
try {
Connection connection = getConnection(expelledClientId);
if (connection != null) {
ConnectResetRequest connectResetRequest = new ConnectResetRequest();
connectResetRequest.setServerIp(serverIp);
connectResetRequest.setServerPort(serverPort);
connection.asyncRequest(connectResetRequest, null);
Loggers.REMOTE_DIGEST
.info("Send connection reset request , connection id = {},recommendServerIp={}, recommendServerPort={}",
expelledClientId, connectResetRequest.getServerIp(),
connectResetRequest.getServerPort());
}
} catch (ConnectionAlreadyClosedException e) {
unregister(expelledClientId);
} catch (Exception e) {
Loggers.REMOTE_DIGEST.error("Error occurs when expel connection, expelledClientId:{}", expelledClientId, e);
}
}
//4.client active detection.
Loggers.REMOTE_DIGEST.info("Out dated connection ,size={}", outDatedConnections.size());
if (CollectionUtils.isNotEmpty(outDatedConnections)) {
Set<String> successConnections = new HashSet<>();
final CountDownLatch latch = new CountDownLatch(outDatedConnections.size());
for (String outDateConnectionId : outDatedConnections) {
try {
Connection connection = getConnection(outDateConnectionId);
if (connection != null) {
ClientDetectionRequest clientDetectionRequest = new ClientDetectionRequest();
connection.asyncRequest(clientDetectionRequest, new RequestCallBack() {
@Override
public Executor getExecutor() {
return null;
}
@Override
public long getTimeout() {
return 1000L;
}
@Override
public void onResponse(Response response) {
latch.countDown();
if (response != null && response.isSuccess()) {
connection.freshActiveTime();
successConnections.add(outDateConnectionId);
}
}
@Override
public void onException(Throwable e) {
latch.countDown();
}
});
Loggers.REMOTE_DIGEST
.info("[{}]send connection active request ", outDateConnectionId);
} else {
latch.countDown();
}
} catch (ConnectionAlreadyClosedException e) {
latch.countDown();
} catch (Exception e) {
Loggers.REMOTE_DIGEST
.error("[{}]Error occurs when check client active detection ,error={}",
outDateConnectionId, e);
latch.countDown();
}
}
latch.await(3000L, TimeUnit.MILLISECONDS);
Loggers.REMOTE_DIGEST
.info("Out dated connection check successCount={}", successConnections.size());
for (String outDateConnectionId : outDatedConnections) {
if (!successConnections.contains(outDateConnectionId)) {
Loggers.REMOTE_DIGEST
.info("[{}]Unregister Out dated connection....", outDateConnectionId);
unregister(outDateConnectionId);
}
}
}
//reset loader client
if (isLoaderClient) {
loadClient = -1;
redirectAddress = null;
}
Loggers.REMOTE_DIGEST.info("Connection check task end");
} catch (Throwable e) {
Loggers.REMOTE.error("Error occurs during connection check... ", e);
}
}
}, 1000L, 3000L, TimeUnit.MILLISECONDS);
}
private RequestMeta buildMeta() {
RequestMeta meta = new RequestMeta();
meta.setClientVersion(VersionUtils.getFullClientVersion());
meta.setClientIp(NetUtils.localIP());
return meta;
}
public void loadCount(int loadClient, String redirectAddress) {
this.loadClient = loadClient;
this.redirectAddress = redirectAddress;
}
/**
 * Send a connect-reset request to a specific connectionId.
*
* @param connectionId connection id of client.
* @param redirectAddress server address to redirect.
*/
public void loadSingle(String connectionId, String redirectAddress) {
Connection connection = getConnection(connectionId);
if (connection != null) {
if (connection.getMetaInfo().isSdkSource()) {
ConnectResetRequest connectResetRequest = new ConnectResetRequest();
if (StringUtils.isNotBlank(redirectAddress) && redirectAddress.contains(Constants.COLON)) {
String[] split = redirectAddress.split(Constants.COLON);
connectResetRequest.setServerIp(split[0]);
connectResetRequest.setServerPort(split[1]);
}
try {
connection.request(connectResetRequest, 3000L);
} catch (ConnectionAlreadyClosedException e) {
unregister(connectionId);
} catch (Exception e) {
Loggers.REMOTE.error("error occurs when expel connection, connectionId: {} ", connectionId, e);
}
}
}
}
/**
* get all client count.
*
* @return client count.
*/
public int currentClientsCount() {
return connections.size();
}
/**
* get client count with labels filter.
*
* @param filterLabels label to filter client count.
* @return count with the specific filter labels.
*/
public int currentClientsCount(Map<String, String> filterLabels) {
int count = 0;
for (Connection connection : connections.values()) {
Map<String, String> labels = connection.getMetaInfo().labels;
boolean disMatchFound = false;
for (Map.Entry<String, String> entry : filterLabels.entrySet()) {
if (!entry.getValue().equals(labels.get(entry.getKey()))) {
disMatchFound = true;
break;
}
}
if (!disMatchFound) {
count++;
}
}
return count;
}
/**
* get client count from sdk.
*
* @return sdk client count.
*/
public int currentSdkClientCount() {
Map<String, String> filter = new HashMap<String, String>(2);
filter.put(RemoteConstants.LABEL_SOURCE, RemoteConstants.LABEL_SOURCE_SDK);
return currentClientsCount(filter);
}
public Map<String, Connection> currentClients() {
return connections;
}
/**
* check if over limit.
*
* @return over limit or not.
*/
private boolean isOverLimit() {
return connectionLimitRule.countLimit > 0 && currentSdkClientCount() >= connectionLimitRule.getCountLimit();
}
@Override
public void onEvent(ConnectionLimitRuleChangeEvent event) {
String limitRule = event.getLimitRule();
Loggers.REMOTE.info("connection limit rule change event receive :{}", limitRule);
try {
ConnectionLimitRule connectionLimitRule = JacksonUtils.toObj(limitRule, ConnectionLimitRule.class);
if (connectionLimitRule != null) {
this.connectionLimitRule = connectionLimitRule;
try {
saveRuleToLocal(this.connectionLimitRule);
} catch (Exception e) {
Loggers.REMOTE.warn("Fail to save rule to local error is ", e);
}
} else {
Loggers.REMOTE.info("Parse rule is null,Ignore illegal rule :{}", limitRule);
}
} catch (Exception e) {
Loggers.REMOTE.error("Fail to parse connection limit rule :{}", limitRule, e);
}
}
@Override
public Class<? extends Event> subscribeType() {
return ConnectionLimitRuleChangeEvent.class;
}
static class ConnectionLimitRule {
private Set<String> monitorIpList = new HashSet<String>();
private int countLimit = -1;
private int countLimitPerClientIpDefault = -1;
private Map<String, Integer> countLimitPerClientIp = new HashMap<String, Integer>();
private Map<String, Integer> countLimitPerClientApp = new HashMap<String, Integer>();
public int getCountLimit() {
return countLimit;
}
public void setCountLimit(int countLimit) {
this.countLimit = countLimit;
}
public int getCountLimitPerClientIpDefault() {
return countLimitPerClientIpDefault;
}
public void setCountLimitPerClientIpDefault(int countLimitPerClientIpDefault) {
this.countLimitPerClientIpDefault = countLimitPerClientIpDefault;
}
public int getCountLimitOfIp(String clientIp) {
if (countLimitPerClientIp.containsKey(clientIp)) {
Integer integer = countLimitPerClientIp.get(clientIp);
if (integer != null && integer >= 0) {
return integer;
}
}
return -1;
}
public int getCountLimitOfApp(String appName) {
if (countLimitPerClientApp.containsKey(appName)) {
Integer integer = countLimitPerClientApp.get(appName);
if (integer != null && integer >= 0) {
return integer;
}
}
return -1;
}
public Map<String, Integer> getCountLimitPerClientIp() {
return countLimitPerClientIp;
}
public void setCountLimitPerClientIp(Map<String, Integer> countLimitPerClientIp) {
this.countLimitPerClientIp = countLimitPerClientIp;
}
public Map<String, Integer> getCountLimitPerClientApp() {
return countLimitPerClientApp;
}
public void setCountLimitPerClientApp(Map<String, Integer> countLimitPerClientApp) {
this.countLimitPerClientApp = countLimitPerClientApp;
}
public Set<String> getMonitorIpList() {
return monitorIpList;
}
public void setMonitorIpList(Set<String> monitorIpList) {
this.monitorIpList = monitorIpList;
}
}
public ConnectionLimitRule getConnectionLimitRule() {
return connectionLimitRule;
}
private synchronized void loadRuleFromLocal() throws Exception {
File limitFile = getRuleFile();
if (!limitFile.exists()) {
limitFile.createNewFile();
}
String ruleContent = DiskUtils.readFile(limitFile);
ConnectionLimitRule connectionLimitRule = StringUtils.isBlank(ruleContent) ? new ConnectionLimitRule()
: JacksonUtils.toObj(ruleContent, ConnectionLimitRule.class);
// apply rule.
if (connectionLimitRule != null) {
this.connectionLimitRule = connectionLimitRule;
Set<String> monitorIpList = connectionLimitRule.monitorIpList;
for (Connection connection : this.connections.values()) {
String clientIp = connection.getMetaInfo().getClientIp();
if (!CollectionUtils.isEmpty(monitorIpList) && monitorIpList.contains(clientIp)) {
connection.setTraced(true);
} else {
connection.setTraced(false);
}
}
}
Loggers.REMOTE.info("Init loader limit rule from local,rule={}", ruleContent);
}
private synchronized void saveRuleToLocal(ConnectionLimitRule limitRule) throws IOException {
File limitFile = getRuleFile();
if (!limitFile.exists()) {
limitFile.createNewFile();
}
DiskUtils.writeFile(limitFile, JacksonUtils.toJson(limitRule).getBytes(Constants.ENCODE), false);
}
private File getRuleFile() {
File baseDir = new File(EnvUtil.getNacosHome(), "data" + File.separator + "loader" + File.separator);
if (!baseDir.exists()) {
baseDir.mkdir();
}
return new File(baseDir, RULE_FILE_NAME);
}
private void registerFileWatch() {
try {
String tpsPath = Paths.get(EnvUtil.getNacosHome(), "data", "loader").toString();
WatchFileCenter.registerWatcher(tpsPath, new FileWatcher() {
@Override
public void onChange(FileChangeEvent event) {
try {
String fileName = event.getContext().toString();
if (RULE_FILE_NAME.equals(fileName)) {
loadRuleFromLocal();
}
} catch (Throwable throwable) {
Loggers.REMOTE.warn("Fail to load rule from local", throwable);
}
}
@Override
public boolean interest(String context) {
return RULE_FILE_NAME.equals(context);
}
});
} catch (NacosException e) {
Loggers.REMOTE.warn("Register connection rule fail ", e);
}
}
}
|
java
|
{
"version": "1.0.8",
"name": "generator-primer-module",
"description": "Use this to create a new Primer modules!",
"author": "GitHub, Inc.",
"license": "MIT",
"primer": {
"module_type": "tools"
},
"repository": "https://github.com/primer/primer/tree/master/modules/generator-primer-module",
"bugs": {
"url": "https://github.com/primer/primer/issues"
},
"scripts": {
"test": "../../node_modules/.bin/ava -v test/**/*.spec.js",
"watch": "npm run test -- --watch"
},
"dependencies": {
"chalk": "^2.1.0",
"primer-support": "4.6.0",
"yeoman-generator": "^1.1.1"
},
"keywords": [
"primer",
"css",
"github",
"design-system",
"yeoman-generator"
],
"devDependencies": {
"yeoman-assert": "^3.0.0",
"yeoman-test": "^1.7.0"
}
}
|
json
|
.que.multichoice .answer .specificfeedback {
padding: 0 0.7em;
background: #FFF3BF;
}
.que.multichoice .answer .specificfeedback * {
display: inline;
background: #FFF3BF;
}
.que.multichoice .answer .specificfeedback script {
display: none;
}
.que.multichoice .answer div.r0,
.que.multichoice .answer div.r1 {
padding: 0.3em;
}
.que.multichoice .feedback .rightanswer * {
display: inline;
}
|
css
|
import paramiko

# Open an SSH connection using a private key and run a command remotely.
ssh = paramiko.SSHClient()
# Automatically accept unknown host keys (convenient, but insecure outside a lab).
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('172.24.4.53', username='root', key_filename='keys/anella')
stdin, stdout, stderr = ssh.exec_command("uptime")
print(stdout.readlines())
ssh.close()
|
python
|
// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#if !defined(_WIN32)
#ifdef __linux__
// Linux
#include <freetype/ftoutln.h>
#include <ft2build.h>
#include FT_FREETYPE_H
#else
// Mac OS X
#include <ApplicationServices/ApplicationServices.h> // g++ -framework Cocoa
#endif // __linux__
#include <unistd.h>
#else
// Windows
#include <io.h>
#endif // !defined(_WIN32)
#include <fcntl.h>
#include <sys/stat.h>
/* C9: Needed to crash */
#include <assert.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include "opentype-sanitiser.h"
#include "ots-memory-stream.h"
extern "C" {
void klee_begin_checked(void) __attribute__((weak));
void klee_end_checked(void) __attribute__((weak));
bool klee_memcmp(void *mem1, void *mem2, size_t size) __attribute__((weak));
int64_t klee_get_value_i64(int64_t expr) __attribute__((weak));
}
namespace {
int Usage(const char *argv0) {
std::fprintf(stderr, "Usage: %s <ttf file>\n", argv0);
return 1;
}
bool ReadFile(const char *file_name, uint8_t **data, size_t *file_size);
bool DumpResults(const uint8_t *result1, const size_t len1,
const uint8_t *result2, const size_t len2);
#if defined(_WIN32)
#define ADDITIONAL_OPEN_FLAGS O_BINARY
#else
#define ADDITIONAL_OPEN_FLAGS 0
#endif
bool ReadFile(const char *file_name, uint8_t **data, size_t *file_size) {
const int fd = open(file_name, O_RDONLY | ADDITIONAL_OPEN_FLAGS);
if (fd < 0) {
return false;
}
struct stat st;
fstat(fd, &st);
*file_size = st.st_size;
*data = new uint8_t[st.st_size];
if (read(fd, *data, st.st_size) != st.st_size) {
close(fd);
return false;
}
close(fd);
return true;
}
bool DumpResults(const uint8_t *result1, const size_t len1,
const uint8_t *result2, const size_t len2) {
int fd1 = open("out1.ttf",
O_WRONLY | O_CREAT | O_TRUNC | ADDITIONAL_OPEN_FLAGS, 0600);
int fd2 = open("out2.ttf",
O_WRONLY | O_CREAT | O_TRUNC | ADDITIONAL_OPEN_FLAGS, 0600);
if (fd1 < 0 || fd2 < 0) {
perror("opening output file");
return false;
}
if ((write(fd1, result1, len1) < 0) ||
(write(fd2, result2, len2) < 0)) {
perror("writing output file");
close(fd1);
close(fd2);
return false;
}
close(fd1);
close(fd2);
return true;
}
// Platform specific implementations.
bool VerifyTranscodedFont(uint8_t *result, const size_t len);
#if defined(__linux__)
// Linux
bool VerifyTranscodedFont(uint8_t *result, const size_t len) {
FT_Library library;
FT_Error error = ::FT_Init_FreeType(&library);
if (error) {
return false;
}
FT_Face dummy;
error = ::FT_New_Memory_Face(library, result, len, 0, &dummy);
if (error) {
return false;
}
::FT_Done_Face(dummy);
return true;
}
#elif defined(__APPLE_CC__)
// Mac
bool VerifyTranscodedFont(uint8_t *result, const size_t len) {
ATSFontContainerRef container_ref = 0;
ATSFontActivateFromMemory(result, len, 3, kATSFontFormatUnspecified,
NULL, kATSOptionFlagsDefault, &container_ref);
if (!container_ref) {
return false;
}
ItemCount count;
ATSFontFindFromContainer(
container_ref, kATSOptionFlagsDefault, 0, NULL, &count);
if (!count) {
return false;
}
ATSFontRef ats_font_ref = 0;
ATSFontFindFromContainer(
container_ref, kATSOptionFlagsDefault, 1, &ats_font_ref, NULL);
if (!ats_font_ref) {
return false;
}
CGFontRef cg_font_ref = CGFontCreateWithPlatformFont(&ats_font_ref);
if (!CGFontGetNumberOfGlyphs(cg_font_ref)) {
return false;
}
return true;
}
#elif defined(_WIN32)
// Windows
bool VerifyTranscodedFont(uint8_t *result, const size_t len) {
DWORD num_fonts = 0;
HANDLE handle = AddFontMemResourceEx(result, len, 0, &num_fonts);
if (!handle) {
return false;
}
RemoveFontMemResourceEx(handle);
return true;
}
#else
bool VerifyTranscodedFont(uint8_t *result, const size_t len) {
std::fprintf(stderr, "Can't verify the transcoded font on this platform.\n");
return false;
}
#endif
} // namespace
int main(int argc, char **argv) {
if (argc != 2) return Usage(argv[0]);
size_t file_size = 0;
uint8_t *data = 0;
if (!ReadFile(argv[1], &data, &file_size)) {
std::fprintf(stderr, "Failed to read file!\n");
return 1;
}
// A transcoded font is usually smaller than an original font.
// However, it can be slightly bigger than the original one due to
// name table replacement and/or padding for glyf table.
//
// However, a WOFF font gets decompressed and so can be *much* larger than
// the original.
uint8_t *result = new uint8_t[file_size * 3/2];
ots::MemoryStream output(result, file_size * 3/2);
bool r = ots::Process(&output, data, file_size);
if (!r) {
std::fprintf(stderr, "Failed to sanitise file!\n");
return 1;
}
const size_t result_len = output.Tell();
delete[] data;
uint8_t *result2 = new uint8_t[result_len];
ots::MemoryStream output2(result2, result_len);
r = ots::Process(&output2, result, result_len);
if (!r) {
std::fprintf(stderr, "Failed to sanitise previous output!\n");
return 1;
}
const size_t result2_len = output2.Tell();
klee_begin_checked();
bool dump_results = false;
if (result2_len != result_len) {
std::fprintf(stderr, "Outputs differ in length\n");
dump_results = true;
assert(0 && "Files differ in size");
} else {
#if 0
// Do a shard comparison
const int shard_size = 32;
uint8_t **shard1 = new uint8_t*[(result_len-1)/shard_size + 1];
uint8_t **shard2 = new uint8_t*[(result_len-1)/shard_size + 1];
for (unsigned int i = 0; i < (result_len-1)/shard_size + 1; ++i) {
if (i == (result_len-1)/shard_size) {
shard1[i] = new uint8_t[result_len % shard_size];
shard2[i] = new uint8_t[result_len % shard_size];
std::memcpy(shard1[i], &result[i*shard_size], result_len % shard_size);
std::memcpy(shard2[i], &result2[i*shard_size], result_len % shard_size);
} else {
shard1[i] = new uint8_t[shard_size];
shard2[i] = new uint8_t[shard_size];
std::memcpy(shard1[i], &result[i*shard_size], shard_size);
std::memcpy(shard2[i], &result2[i*shard_size], shard_size);
}
}
klee_begin_checked();
for (unsigned int i = 0; i < (result_len-1)/shard_size + 1; ++i) {
if (i == (result_len - 1)/shard_size) {
if (std::memcmp(shard1[i], shard2[i], result_len % shard_size)) {
std::fprintf(stderr, "Outputs differ in content\n");
dump_results = true;
assert(0 && "Files differ in content");
}
} else {
if (std::memcmp(shard1[i], shard2[i], shard_size)) {
std::fprintf(stderr, "Outputs differ in content\n");
dump_results = true;
assert(0 && "Files differ in content");
}
}
}
std::fprintf(stderr, "No output difference\n");
klee_end_checked();
#else
if (!klee_memcmp(result, result2, klee_get_value_i64(result_len))) {
std::fprintf(stderr, "Outputs differ in content\n");
dump_results = true;
assert(0 && "Files differ in content");
} else {
std::fprintf(stderr, "No output difference\n");
}
#endif
}
klee_end_checked();
if (dump_results) {
std::fprintf(stderr, "Dumping results to out1.ttf and out2.ttf\n");
if (!DumpResults(result, result_len, result2, result2_len)) {
std::fprintf(stderr, "Failed to dump output files.\n");
return 1;
}
}
/* C9: Check only idempotence */
#if 0
// Verify that the transcoded font can be opened by the font renderer for
// Linux (FreeType2), Mac OS X, or Windows.
if (!VerifyTranscodedFont(result, result_len)) {
std::fprintf(stderr, "Failed to verify the transcoded font\n");
return 1;
}
#endif
return 0;
}
|
cpp
|
MP Board Results 2023 have been DECLARED and the result link will be available at 1 pm. The result will be announced by School Education Minister Inder Singh Parmar via a press conference. The Madhya Pradesh Board result link will be available on the official websites rskmp.org and mpresults.nic.in. Students and parents can log in using their User ID and password. Check the MP Board result link websites and the steps to access the result below.
MP Board 5th and 8th Results 2023 will be released for over 8.65 lakh students for the Class 5th exams and 7.70 lakh students for the Class 8th exams. Students can check their results, and in case they are not satisfied, they can apply for re-evaluation or improvement. The application form for the re-evaluation process will be released on May 16, and students will be able to submit the form till May 30.
Once the result is announced, students and parents can visit the following websites to check MP Board results:
- rskmp.in
MP Board Class 5th final exams were held between March 25 and April 3, while the Class 8th exams were conducted from March 23 to April 1.
|
english
|
<reponame>buildsi/spack-monitor-nlp-dev<filename>data/spack-issues/issue-14902.json
{
"body": "NCBI packaged versions >=2.9.0 with a config.guess from 2013-02-12. This caused spack install to fail. The message suggested running `wget -O config.guess 'https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD'` to obtain the latest config.guess script. Including the diff and patching it from within package.py resolved the issue.",
"user": "dnanto",
"url": "https://api.github.com/repos/spack/spack/issues/14902",
"updated_at": "2020-10-16 10:07:35",
"created_at": "2020-02-12 03:14:40",
"closed_at": "2020-10-16 10:07:35",
"state": "closed",
"title": "blast-plus: build fails on the IBM POWER8 unless the packaged config.guess is updated",
"number": 14902,
"milestone": null,
"labels": [
"build-error",
"autotools",
"power"
],
"id": 563716900,
"html_url": "https://github.com/spack/spack/pull/14902",
"assignees": [],
"comments": 5
}
|
json
|
body{
background-color:#FFFACD;
}
.from-masuk{
background:#6B8E23;
/* center the form */
margin: 40px auto;
padding: 30px 20px;
font-family:Latha;
}
.regis h1{
text-align: center;
font-family: Rich;
font-size: 60px;
color: white;
}
.tulisan_regis{
text-align: center;
margin-top: 30px;
font-family: Fish Grill;
color: white;
font-size: 20px;
}
.btn-custom{
color:#6B8E23;
background-color:#FFFACD;
}
|
css
|
<reponame>Scruffy753/DiplomaThesis
{"ast":null,"code":"/*\n This file is part of web3.js.\n\n web3.js is free software: you can redistribute it and/or modify\n it under the terms of the GNU Lesser General Public License as published by\n the Free Software Foundation, either version 3 of the License, or\n (at your option) any later version.\n\n web3.js is distributed in the hope that it will be useful,\n but WITHOUT ANY WARRANTY; without even the implied warranty of\n MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n GNU Lesser General Public License for more details.\n\n You should have received a copy of the GNU Lesser General Public License\n along with web3.js. If not, see <http://www.gnu.org/licenses/>.\n */\n\n/**\n * @file index.js\n * @author <NAME> <<EMAIL>>\n * @date 2016\n */\n\"use strict\";\n\nvar EventEmitter = require('eventemitter3');\n/**\n * This function generates a defer promise and adds eventEmitter functionality to it\n *\n * @method eventifiedPromise\n */\n\n\nvar PromiEvent = function PromiEvent(justPromise) {\n  var resolve,\n      reject,\n      eventEmitter = new Promise(function () {\n    resolve = arguments[0];\n    reject = arguments[1];\n  });\n\n  if (justPromise) {\n    return {\n      resolve: resolve,\n      reject: reject,\n      eventEmitter: eventEmitter\n    };\n  } // get eventEmitter\n\n\n  var emitter = new EventEmitter(); // add eventEmitter to the promise\n\n  eventEmitter._events = emitter._events;\n  eventEmitter.emit = emitter.emit;\n  eventEmitter.on = emitter.on;\n  eventEmitter.once = emitter.once;\n  eventEmitter.off = emitter.off;\n  eventEmitter.listeners = emitter.listeners;\n  eventEmitter.addListener = emitter.addListener;\n  eventEmitter.removeListener = emitter.removeListener;\n  eventEmitter.removeAllListeners = emitter.removeAllListeners;\n  return {\n    resolve: resolve,\n    reject: reject,\n    eventEmitter: eventEmitter\n  };\n};\n\nPromiEvent.resolve = function (value) {\n  var promise = PromiEvent(true);\n  promise.resolve(value);\n  return promise.eventEmitter;\n};\n\nmodule.exports = PromiEvent;","map":null,"metadata":{},"sourceType":"script"}
|
json
|
/*****************************************************************************************************************************
* Copyright 2020 <NAME>. All rights reserved.
* This code is licensed under the BSD 3-Clause "New" or "Revised" License
* License url: https://github.com/GabyForceQ/PolluxEngine/blob/master/LICENSE
*****************************************************************************************************************************/
#pragma once
#include <cstddef>
#include <map>
#include <string>
#include <type_traits>
namespace Pollux::BuildSystem
{
enum class BuildOptimization : type_t
{
None = 0x0000,
Debug = 0x0001,
Release = 0x0002,
Retail = 0x0004
};
std::underlying_type_t<BuildOptimization> operator+(BuildOptimization self);
BuildOptimization operator|(BuildOptimization lhs, BuildOptimization rhs);
BuildOptimization& operator|=(BuildOptimization& lhs, BuildOptimization rhs);
BuildOptimization operator&(BuildOptimization lhs, BuildOptimization rhs);
BuildOptimization& operator&=(BuildOptimization& lhs, BuildOptimization rhs);
extern const size_t g_BuildOptimizationCount;
extern const std::underlying_type_t<BuildOptimization> g_BuildOptimizationMin;
extern const std::underlying_type_t<BuildOptimization> g_BuildOptimizationMax;
extern const std::underlying_type_t<BuildOptimization> g_BuildOptimizationFlagArray[3];
extern const char* g_BuildOptimization_None;
extern const char* g_BuildOptimization_Debug;
extern const char* g_BuildOptimization_Release;
extern const char* g_BuildOptimization_Retail;
extern const std::map<const char*, BuildOptimization> g_BuildOptimizationMap;
BuildOptimization BuildOptimizationToEnum(const char* buildOptimization);
std::string ToString(const BuildOptimization buildOptimization);
}
|
cpp
|
Posted On:
Credibility of Research Publications is extremely important because it has a direct impact upon the individual, institutional and National image. With an aim to refine and strengthen the University Grants Commission (UGC) approved list of journals, the UGC has issued a Public Notice dated 28th November 2018, and decided to establish a Consortium for Academic and Research Ethics (CARE). The good quality Research Journals in disciplines under Social Sciences, Humanities, Languages, Arts, Culture, Indian Knowledge Systems etc., will be maintained by CARE and referred to as ‘CARE Reference List of Quality Journals’. This will be used for all academic purposes. The ‘CARE Reference List of Quality Journals’ will be regularly updated and published by the UGC and the Members of the Consortium at their respective websites.
UGC has constituted a Committee under the Chairmanship of Prof. P. Balram, to review its quality mandate relating to promotion of Research and to review the existing M.Phil/ Ph.D Regulations.
This information was given by the Minister of State (HRD), Dr. Satya Pal Singh today in a written reply to a Lok Sabha question.
|
english
|
package io.github.chinalhr.consumer1;
import io.github.chinalhr.serviceapi.IDataService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* @Author : ChinaLHR
* @Date : Create in 22:48 2018/5/25
* @Email : <EMAIL>
*/
@RestController
@RequestMapping("/TestInvoker")
public class ConsumerController {
@Autowired
private IDataService iDataService;
@GetMapping
public String TestInvoker(){
return "OK";
}
}
|
java
|
/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package software.amazon.awssdk.extensions.dynamodb.mappingclient;
import software.amazon.awssdk.annotations.SdkPublicApi;
import software.amazon.awssdk.core.pagination.sync.SdkIterable;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.CreateTableEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.DeleteItemEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.GetItemEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.PutItemEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.QueryEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.ScanEnhancedRequest;
import software.amazon.awssdk.extensions.dynamodb.mappingclient.model.UpdateItemEnhancedRequest;
/**
* Synchronous interface for running commands against an object that is linked to a specific DynamoDb table resource
* and therefore knows how to map records from that table into a modelled object.
*
* @param <T> The type of the modelled object.
*/
@SdkPublicApi
public interface DynamoDbTable<T> extends MappedTableResource<T> {
/**
* Returns a mapped index that can be used to execute commands against a secondary index belonging to the table
* being mapped by this object. Note that only a subset of the commands that work against a table will work
* against a secondary index.
*
* @param indexName The name of the secondary index to build the command interface for.
* @return A {@link DynamoDbIndex} object that can be used to execute database commands against.
*/
DynamoDbIndex<T> index(String indexName);
default Void createTable(CreateTableEnhancedRequest request) {
throw new UnsupportedOperationException();
}
default T deleteItem(DeleteItemEnhancedRequest<T> request) {
throw new UnsupportedOperationException();
}
default T getItem(GetItemEnhancedRequest request) {
throw new UnsupportedOperationException();
}
default SdkIterable<Page<T>> query(QueryEnhancedRequest request) {
throw new UnsupportedOperationException();
}
default Void putItem(PutItemEnhancedRequest<T> request) {
throw new UnsupportedOperationException();
}
default SdkIterable<Page<T>> scan(ScanEnhancedRequest request) {
throw new UnsupportedOperationException();
}
default T updateItem(UpdateItemEnhancedRequest<T> request) {
throw new UnsupportedOperationException();
}
}
|
java
|
package kr.co.mashup.feedgetapi.service;
import kr.co.mashup.feedgetapi.common.StorageProperties;
import kr.co.mashup.feedgetapi.common.util.CodeGenerator;
import kr.co.mashup.feedgetapi.common.util.FileUtil;
import kr.co.mashup.feedgetapi.exception.InvalidParameterException;
import kr.co.mashup.feedgetapi.exception.NotFoundException;
import kr.co.mashup.feedgetapi.web.dto.CreationDto;
import kr.co.mashup.feedgetapi.web.dto.FeedbackDto;
import kr.co.mashup.feedgetcommon.domain.Creation;
import kr.co.mashup.feedgetcommon.domain.CreationAttachedContent;
import kr.co.mashup.feedgetcommon.domain.Feedback;
import kr.co.mashup.feedgetcommon.domain.FeedbackAttachedContent;
import kr.co.mashup.feedgetcommon.domain.code.ContentType;
import kr.co.mashup.feedgetcommon.repository.CreationRepository;
import kr.co.mashup.feedgetcommon.repository.FeedbackRepository;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.multipart.MultipartFile;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.CopyOnWriteArrayList;
/**
* Handles business logic for attached contents.
* <p>
* Created by ethankim on 2017. 11. 5..
*/
@Service
@Slf4j
@RequiredArgsConstructor
public class ContentsService {
private final CreationRepository creationRepository;
private final FeedbackRepository feedbackRepository;
private final StorageProperties storageProperties;
/**
* Adds attached contents to a creation.
*
* @param creationId
* @param dto
*/
@Transactional
public void addCreationAttachedContents(long creationId, CreationDto.AttachedContent dto) {
ContentType contentType = ContentType.fromString(dto.getContentsType());
List<MultipartFile> files = dto.getFiles();
Optional<Creation> creationOp = creationRepository.findByCreationId(creationId);
Creation creation = creationOp.orElseThrow(() -> new NotFoundException("not found creation"));
if (contentType == ContentType.IMAGE) {
files.stream()
.filter(file -> !file.isEmpty())
.forEach(file -> {
// store the uploaded file
String fileName = CodeGenerator.generateFileName(file.getOriginalFilename());
FileUtil.upload(file, FileUtil.getImageUploadPath(storageProperties.getPath(), creationId), fileName);
CreationAttachedContent content = new CreationAttachedContent();
content.setFileName(fileName);
content.setOriginalFileName(file.getOriginalFilename());
content.setSize(file.getSize());
content.setType(contentType);
content.setCreation(creation);
content.setUrl(FileUtil.getImageUrl(storageProperties.getUri(), creation.getCreationId(), fileName));
creation.addAttachedContent(content);
});
} else if (contentType == ContentType.AUDIO) {
// Todo: add audio storage logic
}
}
/**
* Removes attached contents from a creation.
*
* @param creationId
* @param contentIds
*/
@Transactional
public void removeCreationAttachedContents(long creationId, List<Long> contentIds) {
Optional<Creation> creationOp = creationRepository.findByCreationId(creationId);
Creation creation = creationOp.orElseThrow(() -> new NotFoundException("not found creation"));
// Todo: switch to a fetch join?
List<CreationAttachedContent> contents = creation.getContents();
CopyOnWriteArrayList<CreationAttachedContent> removeContents = new CopyOnWriteArrayList<>(contents);
removeContents.stream()
.filter(content -> contentIds.contains(content.getCreationAttachedContentId()))
.forEach(content -> {
// delete the stored file
String filePath = FileUtil.getImageUploadPath(storageProperties.getPath(), creationId) + "/" + content.getFileName();
FileUtil.deleteFile(filePath);
creation.removeAttachedContent(content);
});
}
/**
* Adds attached contents to a feedback.
*
* @param creationId
* @param feedbackId
* @param dto
*/
@Transactional
/*
Todo: refactoring
content -> feedback -> image
-> audio
-> creation -> image
-> audio
Create something that holds a storage service for contents per enum,
and dispatch to that service's storage logic based on the enum value?
*/
public void addFeedbackAttachedContents(long creationId, long feedbackId, FeedbackDto.AttachedContent dto) {
ContentType contentType = ContentType.fromString(dto.getContentsType());
List<MultipartFile> files = dto.getFiles();
Optional<Feedback> feedbackOp = feedbackRepository.findByFeedbackId(feedbackId);
Feedback feedback = feedbackOp.orElseThrow(() -> new NotFoundException("not found feedback"));
if (!feedback.fromCreation(creationId)) {
// Todo: replace with a more appropriate exception
throw new InvalidParameterException("forbidden request");
}
if (contentType == ContentType.IMAGE) {
files.stream()
.filter(file -> !file.isEmpty())
.forEach(file -> {
// store the uploaded file
// Todo: move the storage path/url logic, e.g. storage/creations/3/feedback/img_name.jpg
String fileName = CodeGenerator.generateFileName(file.getOriginalFilename());
FileUtil.upload(file, String.format("%s/creations/%d/feedback/%d", storageProperties.getPath(), creationId, feedbackId), fileName);
FeedbackAttachedContent content = new FeedbackAttachedContent();
content.setFileName(fileName);
content.setOriginalFileName(file.getOriginalFilename());
content.setSize(file.getSize());
content.setType(contentType);
content.setFeedback(feedback);
content.setUrl(String.format("%s/creations/%d/feedback/%d/%s", storageProperties.getUri(), creationId, feedbackId, fileName));
feedback.addAttachedContent(content);
});
} else if (contentType == ContentType.AUDIO) {
// Todo: add audio storage logic
}
}
/**
* Removes attached contents from a feedback.
*
* @param creationId
* @param feedbackId
* @param contentIds
*/
@Transactional
public void removeFeedbackAttachedContents(long creationId, long feedbackId, List<Long> contentIds) {
Optional<Feedback> feedbackOp = feedbackRepository.findByFeedbackId(feedbackId);
Feedback feedback = feedbackOp.orElseThrow(() -> new NotFoundException("not found feedback"));
if (!feedback.fromCreation(creationId)) {
throw new InvalidParameterException("forbidden request");
}
// Todo: switch to a fetch join?
List<FeedbackAttachedContent> contents = feedback.getAttachedContents();
CopyOnWriteArrayList<FeedbackAttachedContent> removeContents = new CopyOnWriteArrayList<>(contents);
removeContents.stream()
.filter(content -> contentIds.contains(content.getFeedbackAttachedContentId()))
.forEach(content -> {
// delete the stored file
String filePath = String.format("%s/creations/%d/feedback/%d/%s", storageProperties.getPath(), creationId, feedbackId, content.getFileName());
FileUtil.deleteFile(filePath);
feedback.removeAttachedContent(content);
});
}
}
|
java
|
import React from 'react';
import PropTypes from 'prop-types';
import { ResponsiveContainer, BarChart, Bar, XAxis, YAxis, Tooltip } from 'recharts';
import './VerticalBarChart.scss';
import constants from '../../config/constants';
const VerticalBarChart = ({ data }) => (
<div className="VerticalBarChart">
<ResponsiveContainer width="100%" height={155}>
<BarChart
layout="vertical"
width={600}
height={155}
data={data}
margin={{ top: 0, right: 40, bottom: 0, left: -40 }}
>
<XAxis type="number" hide />
<YAxis dataKey="key" type="category" tick={<CustomizedAxisTick />} />
<Tooltip />
<Bar legendType="line" dataKey="value" barSize={15} label={{ fontSize: 20 }} fill={constants.chartColor.blue} />
</BarChart>
</ResponsiveContainer>
</div>
);
VerticalBarChart.propTypes = {
data: PropTypes.arrayOf(
PropTypes.shape({
key: PropTypes.oneOfType([PropTypes.string, PropTypes.number]).isRequired,
value: PropTypes.oneOfType([PropTypes.string, PropTypes.number]).isRequired,
}),
).isRequired,
};
const CustomizedAxisTick = ({ x, y, payload }) => (
<g transform={`translate(${x},${y})`}>
<text x="10" y="-10" dy="0" textAnchor="start" fill={constants.chartColor.tickColor}>
{payload.value}
</text>
</g>
);
CustomizedAxisTick.propTypes = {
x: PropTypes.number,
y: PropTypes.number,
payload: PropTypes.shape({
value: PropTypes.oneOfType([PropTypes.string, PropTypes.number]),
}),
};
CustomizedAxisTick.defaultProps = {
x: 0,
y: 0,
payload: {
value: '',
},
};
export default VerticalBarChart;
|
javascript
|
<reponame>MohamedGamil/my-stuff
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';
import { environment } from '../../environment';
@Injectable()
export class GoogleCloudVisionServiceProvider {
constructor(public http: Http) { }
getLabels(base64Image: string) {
const body = {
"requests": [
{
"image": {
"content": base64Image
},
"features": [
{
"type": "LABEL_DETECTION"
}
]
}
]
}
return this.http.post('https://vision.googleapis.com/v1/images:annotate?key=' + environment.googleCloudVisionAPIKey, body);
}
}
|
typescript
|
HYDERABAD: Telangana health authorities announced that they are suspending the distribution of Covid vaccination for individuals above 45 years in view of inadequate stock of Covaxin. Details regarding the resumption of the vaccination drive will be made available subsequently, Director of Public Health, Dr G Srinivasa Rao said on Sunday.
While the state has been focusing on the second dose for people above 45 years for over two weeks, the suspension order issued by the government came as a shocker for many who were waiting for the jab.
Due to lack of sufficient supplies from the Centre, the State government has stopped administering the first dose for beneficiaries and has also not taken up vaccination for those above 18 years.
The state has so far given 56,25,920 doses and a total of 11.37 lakh beneficiaries received the second dose. Telangana claims to have the capacity to give 10 lakh vaccines every day but is administering 30,000 to 40,000 vaccines daily due to the shortage.
The state has already conveyed to the Centre that to vaccinate people above 45 years of age, it requires 1.29 crore doses. Chief Minister K Chandrasekhara Rao requested the Centre to supply 2 to 2.5 lakh doses per day. There is an urgent need for 13 lakh vaccines till the end of May, the state told the Centre.
|
english
|
<filename>parent/hello/src/main/java/de/majchrowski/jdemo/Math.java
package de.majchrowski.jdemo;
public class Math {
public boolean isEven(int intArg) {
// Parity check: even numbers leave remainder 0 when divided by 2.
return (intArg % 2) == 0;
}
}
|
java
|
Quantum computers are real, but thanks to the fragility of quantum information, they can't yet do anything you couldn't do faster on a normal computer. Now, a team of researchers at the University of Sydney and Dartmouth College have found a way to make quantum information more reliable.
"In these superconducting systems, the quantum information only persists for about 100 microseconds -- a tiny fraction of a second," says Dr. Michael J. Biercuk, director of the Quantum Control Laboratory in the University of Sydney’s School of Physics and ARC Centre for Engineered Quantum Systems.
This information decay, called decoherence, is a problem even when information is idle. But Biercuk and his colleagues have found a way to make quantum information persist for several hours. Their research will be published on Wednesday in Nature Communications.
Quantum computing takes advantage of the unique properties of quantum particles, creating something called "qubits" in order to do calculations. Researchers believe that this new breed of computer could one day solve certain types of problems in a fraction of the time today's classical computers can, and major progress has been made towards that goal.
For example, Google and NASA recently bought a machine created by the Canadian company D-Wave, which the inventors claim is a working quantum computer. But many scientists remain unconvinced that the D-Wave machine can outperform traditional computers -- if it's even a quantum computer at all. Others, such as IBM, have built proof-of-concept quantum computers, but they are all held back by decoherence.
"Building a large-scale quantum computer requires the ability to store and manipulate quantum information with very low error probabilities," says Biercuk. In other words, you've got to have a reliable form of quantum memory.
Biercuk and company are solving the problem with what's called a quantum repeater, which can "boost" a signal representing a piece of quantum information. Others have built quantum repeaters in the past, but Biercuk says this new approach will be more reliable.
The Sydney-Dartmouth repeater is based on ytterbium ions and a process called dynamical decoupling, which uses interference to cancel out errors. The concept has been experimentally demonstrated before, but Biercuk explains these proofs-of-concept weren't practical because they limited how often the quantum information was actually retrievable. The new method makes it possible to reliably access information stored in memory at any time without damaging it.
This is the latest in a series of discoveries in quantum memory. For example, last March a group at Yale found a way to make quantum memory writable when desired, but read-only at all other times.
Biercuk says this new method will eventually be compatible with other quantum computing techniques, including that of the Yale team. "This will require some improvements in the superconducting technology first to address a rather nefarious limiting mechanism called energy relaxation," he says. "But if that can be overcome, there is great promise that quantum repeaters could in principle leverage both superconducting qubits and our techniques."
|
english
|
The company plans to preview Tiger, the next version of the Mac OS X operating system, at a developers conference in June.
CEO Steve Jobs plans to show off the new operating system software during a keynote address June 28 to kick off the company's Worldwide Developer Conference in San Francisco.
The Mac maker offered few details, saying only that Jobs will "offer a preview" of Tiger. Apple has thus far said little about the latest cat, which follows other Mac OS X releases that bore code names such as Puma, Jaguar and Panther.
Apple postponed last year's developers conference from May to June so it could preview Panther, the current version of Mac OS X, which went on sale in October.
If Tiger goes on sale this year, it would mark the company's fifth version of Mac OS X in five years. In the same period, Microsoft has released one major version of Windows--XP--along with various updates. Longhorn, the next major release of Windows, is not expected until the middle of 2006, at the earliest.
|
english
|
Alexander Volkanovski's next opponent is the talk of the UFC town at the moment. Yair Rodriguez vs. Brian Ortega failed to throw up a legit No. 1-contender. It seems that whoever will present the best argument to fight Volkanovski next, will get the title shot.
Rodriguez seems to be taking things into his own hands by calling out Volkanovski on Twitter today. Although much of the first round between Rodriguez and Ortega was edging towards 'El Pantera', it'd be unwise to assume he would have won this fight had it followed a more natural conclusion.
Rodriguez tweeted:
"You gonna try and skip me again like last time, or we doing it for real? You know I’m next, everyone does."
Rodriguez may not present the strongest case, but given the lack of options at featherweight right now, he could very well be next for Alexander Volkanovski.
Josh Emmett just does not seem to have gained the traction he needs to campaign for a title shot following his win over Calvin Kattar. Other than Emmett, the Mexican may be the only viable option for Volkanovski at featherweight.
Alexander Volkanovski has eyes on double-champ status, with many others calling for his head.
The title-clincher bout at 155 pounds was recently announced between Charles Oliveira and Islam Makhachev. The fight is set to take place in Abu Dhabi on October 22.
Alexander Volkanovski has thrown his hand up to face the winner. He wrote the following on Twitter:
"I got the winner!!"
With options in his own weight class running out, it is no wonder that the Australian is looking in other directions. Alexander Volkanovski derives his nickname from 'Alexander the Great', and he will be looking to conquer new lands in search of UFC gold.
The possibility of a fight with Aljamain Sterling, the current UFC bantamweight champion, could also interest Volkanovski. A fight against Henry Cejudo is also a distant probability. Both fighters have been vocal in their apparent confidence to defeat the best pound-for-pound featherweight in the world.
The world is currently Alexander Volkanovski's oyster, with champions and former champions alike popping out of the woodwork to see if they can have a crack at the formidable Aussie.
Volkanovski sustained a broken right hand during his latest fight against Holloway, and his return date is unclear right now. There is a lot of time for the champion to sit down and ponder upon what comes next.
|
english
|
i) total goods revenue net tonne kilometres.
ii) non-suburban passenger kilometres converted by a factor of 0.076.
iii) suburban passenger kilometres converted by a factor of 0.053.
b) The input is taken as the non-gazetted staff strength (excluding RPF/RPSF personnel), increased by the incremental increase/decrease in capital (over average of last three years) during the year. Incremental capital is confined to Rolling Stock utilised for movement of trains. The relative weights given are 0.50 for Tractive Effort, 0.20 for Wagon Capacity and 0.30 for Seating Capacity. The labour input i.e. non-gazetted staff strength is then increased to the extent of the percentage increase in the incremental capital.
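The output measure described in (a) can be sketched as a small calculation. Only the conversion factors (0.076 and 0.053) come from the text; the traffic figures and class name below are invented for illustration.

```java
// Illustrative sketch of the equated-output calculation described above.
// EquatedOutput and all traffic figures are hypothetical; the factors are from the text.
public class EquatedOutput {
    static double equatedOutput(double goodsNtkm,
                                double nonSuburbanPkm,
                                double suburbanPkm) {
        // i) goods revenue net tonne kilometres taken as-is
        // ii) non-suburban passenger kilometres converted by 0.076
        // iii) suburban passenger kilometres converted by 0.053
        return goodsNtkm + nonSuburbanPkm * 0.076 + suburbanPkm * 0.053;
    }

    public static void main(String[] args) {
        // Invented figures, purely for demonstration of the conversion.
        System.out.println(equatedOutput(600000.0, 1000000.0, 500000.0));
    }
}
```

With these invented inputs the goods component dominates, and the passenger components contribute only after conversion, mirroring the weighting intent of the formula.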
Highest PLB amount of 78 days' wages was paid for the financial years 2010-11, 2011-12, 2012-13 and 2013-14. This year also PLB equivalent to 78 days' wages will be paid considering the good financial performance which is expected to motivate employees for working towards improving the same in future.
The financial implication of the payment of 78 days' PLB to railway employees is estimated at Rs. 1030.02 crore. The wage calculation ceiling prescribed for payment of PLB to eligible non-gazetted railway employees is Rs. 3500/- p.m. The maximum amount payable per eligible railway employee is Rs. 8975 for 78 days.
About 12.58 lakh non-gazetted Railway employees are likely to benefit from the decision.
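As a rough back-of-the-envelope consistency check on the figures quoted above (an illustrative sketch, not an official Railway Board formula):

```python
# Figures from the text: Rs. 1030.02 crore total outgo and about
# 12.58 lakh eligible employees (1 crore = 10**7, 1 lakh = 10**5).
total_outgo_rupees = 1030.02 * 10**7
eligible_employees = 12.58 * 10**5

average_per_employee = total_outgo_rupees / eligible_employees
# The average works out to roughly Rs. 8188, plausibly below the stated
# per-employee maximum of Rs. 8975, since not every employee draws wages
# at the calculation ceiling for the full year.
print(round(average_per_employee))
```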
The Productivity Linked Bonus on Railway covers all non-gazetted railway employees (excluding RPF/RPSF personnel) who are spread over the entire country.
The Union Cabinet, in its meeting held in October 2015, accepted the proposal of the Ministry of Railways for payment of Productivity Linked Bonus (PLB) equivalent to 78 days' wages for the financial year 2014-15 to all eligible non-gazetted railway employees (excluding RPF/RPSF personnel).
|
english
|
{
"courseCode": "SW4201",
"courseCredit": "5",
"description": "This module involves the analysis of direct and indirect professional practice in Singapore and includes the study of cross-cultural variations and applications of social work theory. An examination of the process of theory building and the study of different theoretical models for indigenous practice will be made. Students are required to identify and develop a specific knowledge base for local social work practice.",
"faculty": "Arts and Social Science",
"title": "Theory Building in Social Work Practice"
}
|
json
|
<filename>motan-transport-netty/src/test/java/com/weibo/api/motan/transport/netty/NettyResponseFutureTest.java
/*
* Copyright 2009-2016 Weibo, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.weibo.api.motan.transport.netty;
import java.util.concurrent.atomic.AtomicBoolean;
import junit.framework.TestCase;
import org.junit.Assert;
import org.junit.Test;
import com.weibo.api.motan.rpc.DefaultRequest;
import com.weibo.api.motan.rpc.DefaultResponse;
import com.weibo.api.motan.rpc.Future;
import com.weibo.api.motan.rpc.FutureListener;
import com.weibo.api.motan.rpc.URL;
import com.weibo.api.motan.transport.Server;
/**
* @author maijunsheng
* @version 创建时间:2013-6-14
*
*/
public class NettyResponseFutureTest extends TestCase {
private static NettyClient client = new NettyClient(new URL("motan", "localhost", 18080, Server.class.getName()));
@Test
public void testNormal() {
DefaultRequest request = new DefaultRequest();
DefaultResponse defaultResponse = new DefaultResponse();
defaultResponse.setValue("success");
NettyResponseFuture response = new NettyResponseFuture(request, 100, client);
response.onSuccess(defaultResponse);
Object result = response.getValue();
Assert.assertEquals(result, defaultResponse.getValue());
Assert.assertTrue(response.isDone());
}
@Test
public void testException() {
DefaultRequest request = new DefaultRequest();
NettyResponseFuture response = new NettyResponseFuture(request, 100, client);
Exception exception = new Exception("hello");
DefaultResponse defaultResponse = new DefaultResponse();
defaultResponse.setException(exception);
response.onFailure(defaultResponse);
try {
response.getValue();
Assert.fail("expected exception from failed response");
} catch (Exception e) {
// expected: the stored exception is rethrown by getValue()
}
Assert.assertTrue(response.isDone());
}
@Test
public void testTimeout() {
DefaultRequest request = new DefaultRequest();
NettyResponseFuture response = new NettyResponseFuture(request, 10, client);
try {
response.getValue();
Assert.fail("expected timeout exception");
} catch (Exception e) {
// expected: getValue() times out after 10 ms
}
Assert.assertTrue(response.isCancelled());
}
@Test
public void testCancel() {
DefaultRequest request = new DefaultRequest();
NettyResponseFuture response = new NettyResponseFuture(request, 10, client);
response.cancel();
try {
response.getValue();
Assert.fail("expected exception from cancelled future");
} catch (Exception e) {
// expected: a cancelled future cannot supply a value
}
Assert.assertTrue(response.isCancelled());
}
@Test
public void testListener() {
DefaultRequest request = new DefaultRequest();
NettyResponseFuture response = new NettyResponseFuture(request, 100, client);
final AtomicBoolean result = new AtomicBoolean(false);
response.addListener(new FutureListener() {
@Override
public void operationComplete(Future future) throws Exception {
if (future.isSuccess()) {
result.set(true);
} else {
result.set(false);
}
}
});
DefaultResponse defaultResponse = new DefaultResponse();
defaultResponse.setValue(new Object());
response.onSuccess(defaultResponse);
Assert.assertTrue(result.get());
response = new NettyResponseFuture(request, 100, client);
response.addListener(new FutureListener() {
@Override
public void operationComplete(Future future) throws Exception {
if (future.isSuccess()) {
result.set(true);
} else {
result.set(false);
}
}
});
response.cancel();
result.set(true);
response.addListener(new FutureListener() {
@Override
public void operationComplete(Future future) throws Exception {
if (future.isSuccess()) {
result.set(true);
} else {
result.set(false);
}
}
});
Assert.assertFalse(result.get());
}
public static void main(String[] args) throws Exception {
final NettyResponseFuture future = new NettyResponseFuture(null, 1100, client);
new Thread() {
public void run() {
try {
System.out.println("start get value");
Object result = future.getValue();
System.out.println("finish get value: " + result);
} catch (Exception e) {
System.out.println("throwable get value: " + e.getMessage());
e.printStackTrace();
}
}
}.start();
Thread.sleep(1000);
System.out.println("onComplete:" + future.getState());
//
// future.onComplete("hello");
// System.out.println("onComplete:" + future.state);
DefaultResponse defaultResponse = new DefaultResponse();
defaultResponse.setException(new Exception("exception ~~~~"));
future.onFailure(defaultResponse);
System.out.println("onError:" + future.getState());
Thread.sleep(1000);
System.out.println("finish");
}
}
|
java
|
Bongaigaon: Around Rs 3 lakh was seized on Saturday by an SST flying squad and police from two citizens of Bhutan. The two men, who were carrying such a large amount of Indian currency at a time when the model code of conduct is in force, were identified as Zigme Dorzee and Rinsen Cresigeng.
They were travelling in a Mahindra Scorpio bearing registration no. AS 01 DV 1033, and its driver, Anowar Hussein, was also detained by police. A sticker reading 'On Election Duty' was pasted on the vehicle. All three were brought to Runikhata PS, and magistrate Ringkhang Muchahary is investigating the matter.
|
english
|
Motion of Confidence in the Council of Ministers
of Rs. 188 crore in the atomic energy sector. It shows how the Government treats the use of indigenous energy resources in the country.
The Government says that because of this deal electricity can be supplied to each and every village. That is really impracticable. The validity of the deal is 40 years, and the estimated share of electricity production over this period is 8 per cent. We now produce about 1.44 lakh MW of electricity, of which only 3 per cent is nuclear. Even if the deal materialises, it will take at least 7 to 8 years for the plant to start producing electricity. International experience shows that a coal-based plant takes only 3 to 4 years; a coal-based plant can be commissioned in half the time a nuclear plant needs, and a gas-based plant can be built even faster. So the Government's claim that it can supply electricity within a short time is baseless.
It is also untrue to say that nuclear energy is less expensive. Scientists and experts say the cost per unit from coal is Rs. 2.50, whereas the cost per unit of nuclear energy is Rs. 5.50. How can poor farmers and workers afford this huge amount? We have the experience of ENRON in Maharashtra, where electricity charges rose from Rs. 5.50 to Rs. 6.00, causing heavy losses to the Maharashtra Electricity Board.
We criticised and pointed out this issue at the very beginning.
The Government again claims that we can add 40,000 MW of nuclear power through imported reactors to ensure energy security. For this, the cost will be Rs. 3.624 lakh crore, which works out to a minimum cost of 2,000 dollars per kilowatt for a nuclear power plant. Experts say that with the same investment, 1 lakh MW of electricity, instead of 40,000 MW, can be generated through coal-based thermal plants; in other words, roughly three coal-based thermal plants can be constructed with the investment required for one nuclear plant. The money saved by opting for coal-based thermal plants could be used to wipe out illiteracy, provide free education and health for all, and ensure universal food security. So the nuclear deal
is not so much in India's interest as in the interest of the nuclear power industry.
When we look at the utilisation of nuclear energy by Western countries, the talk of a nuclear renaissance is not borne out. In the US, Western Europe and Japan, the total number of nuclear plants built in recent years is only three, against 20 in the 1980s. The US itself commissioned its last nuclear plant in 1996; during the last 20 years there has been no demand at all. Even the Western countries, in other words, are not keen to build nuclear plants.
The most important issues that our nation faces are the price rise, farmers' suicides, the setback to the PDS and so on. The Government says that we have a better growth rate, but what about the price rise that the common people face? We have been discussing these issues in every Session, but no concrete steps have been taken by the Government, because of the food policy that it follows.
The PDS has collapsed, and there is also inefficiency in the storage of foodgrains. We have pointed out a number of measures, but you have not listened. It is true that lakhs of farmers have committed suicide. Though the Government declared relief measures for farmers, many poor farmers are still not getting this relief.
The Government says that the Left parties have joined hands with the BJP and NDA to destabilise the Government. But it is not the BJP or the Left that has brought about this Trust Motion; it is you yourselves. For the last four years we gave support, though we had differences of view on many issues. Can you say that you utilised that support positively and made political gains in the elections? It is only the Left parties who have opposed the communal forces; that is why those forces are weak in West Bengal, Kerala and Tripura. Your mass base, however, has eroded not because of the CPI(M) but because of the policies that you are following.
Why is the Government in such a hurry to reach an agreement with the USA, rejecting the support of the Left parties and betraying them? The election in America is going to be
held in November. Every popular referendum conducted in America has proved the unpopularity of the Bush Administration; then why can you not wait for the next election, if the deal is so badly needed? This nuclear deal is not a single railway compartment that can be detached from a long train; there are a number of agreements and deals behind it. The strategic statements made by the two countries paved the way for the creation of the CEO Forum. In the industrial as well as the agricultural sector, the CEOs take the decisions, and the members of the Forum are directors of multinational companies in the two countries. Of the 30 suggestions they made, 26 are against India's interest. They argue for more freedom for foreign capital in all sectors, whether industry, agriculture or retail trade.
Our nation witnessed the joint military exercise in which India, America, France and Japan participated. The US also dictates the type of arms and ammunition that India's Navy and Air Force use, and the MNCs in the USA have succeeded in getting more orders for arms and ammunition from the Indian armed forces.
The large number of deals that the Government has made with America will help neither self-reliance nor the fight against communal forces. It is not that we are against nuclear energy; rather, we want to promote indigenous energy sources in our country, which protect our self-reliance in the energy sector.
Think about the pathetic condition of our people: 72 per cent do not have good drinking water, 75 per cent have no proper shelter to live in, and 57 per cent of women suffer from malnutrition. The proportion of uneducated people is higher among the minority sections. The Sachar Committee shows that in 90 districts of the country the living conditions of minorities are below those of SCs/STs, that the maximum dropouts are among Muslim women, and that in 330 minority-concentrated towns basic facilities are lacking. Since Independence, the victims of communal clashes have mostly been from minority communities. This is the real picture of the experience of the poor in our country. So the major issue before the nation is not the energy problem but the problems of the poor.
Instead of travelling by a speedy train to Washington to get nuclear energy, what we need is to promote interaction. We cannot allow the Indian market to be opened up to a uni-polar world. We prefer a multi-polar world, in which we can exchange our views and interact with others; the deal really prevents the emergence of a multi-polar world in which India would have a dominant role.
In the history of the Indian Parliament it is shameful to witness money power and muscle power being used to win a Confidence Motion. It is reported that Rs. 1 crore was given in advance to two BJP MPs, and they brought the money into the House. The status of Parliament and of its MPs has been degraded. The Speaker should take appropriate action, either by constituting a committee or by referring the matter for police investigation; the dignity of the House has to be maintained.
I conclude with the precious words of William Shakespeare:
"To be or not to be is the question, to be served with cakes and not to be served with cakes".
Four years back, the people of India gave a verdict preferring "to be" and to be provided with cakes, but the Government chose "not to be", giving the cakes instead to the people through the Bush Administration.
I am sure that this House would prefer to provide cakes and reject this Trust Motion, which has already become a trustless one.
*SHRI P.S. GADHAVI (Kutch): I rise to oppose the motion, moved by the hon. Prime Minister, Dr. Manmohan Singhji, that the House expresses its confidence in the Council of Ministers.
Much has been written and said by the supporters and opponents of the Indo-US nuclear deal during the last three years, and each one of us who has participated in this debate has endeavoured to reflect his understanding of the various aspects of the deal. It is
distressing, however, to note that the Government has chosen to create a political crisis in the country on this issue and is indulging in plain untruth to bolster its case. The Government claims that our nuclear weapons programme is entirely safe under this deal.
The Americans have been saying from day one that the whole purpose of this deal is to bring India within the global nuclear non-proliferation regime. Their avowed aim is to cap, reduce and ultimately eliminate India's nuclear weapons programme. The immediate aim is to trap India at the lower end of nuclear weapons technology by eliminating forever our option to hold further tests.
Secondly, I would like to submit what the objects of the Hyde Act are. Its objectives are to seek to halt the increase of nuclear-weapons arsenals in South Asia and to promote their reduction and eventual elimination, and to encourage India not to increase its production of fissile material at unsafeguarded nuclear facilities (Section 109).
Over and above this nuclear deal, this Government has no right to stay in power, because it has lost its majority in the House and has failed on almost all fronts in fulfilling the aspirations of the common man, the aam aadmi. I would like to bring to the notice of this august House that when Bhuj Airport in my constituency was reconstructed and inaugurated, the people of my constituency expressed the wish that it be named after 'Kranti Guru Pandit Shyamji Krishna Verma'. At the time of the inauguration, the then Deputy Prime Minister, hon. L.K. Advaniji, and the then Minister of State for Civil Aviation, hon. Rajiv Pratap Rudy, assured the people that the Government of India would certainly consider the feelings of the people of Kutch positively. But before the procedural formalities for renaming the airport could be completed, the UPA Government came to power, and it has neglected the feelings of the people of Kutch on the pretext that it is the general policy of the UPA Government to retain the names of airports after the cities in which they are located.
But I would like to submit that in this country almost 17 airports bear the names of leaders, and very recently the UPA Government decided to name Lucknow Airport after Chaudhari Charan Singhji, the former Prime Minister of India. How this decision was taken is very well known to the public. This Government has failed to control rising inflation, failed to maintain security, and so on.
I, therefore, oppose the motion moved by the hon. Prime Minister.
*SHRI B. VINOD KUMAR (Hanamkonda): The UPA Government headed by the Congress Party is seeking a vote of trust after indulging in a breach of trust. It is ironical that the betrayers of trust are themselves today's seekers of trust.
The Nuclear Deal, no doubt, is at the back of our minds, but it is not the only issue confronting the nation now. Escalation of inflation on an unprecedented scale has made the life of the common man miserable: his daily struggle is to get at least one square meal a day, and he is hardly in a position to comprehend the intricacies of a Nuclear Deal. The agrarian sector is on the verge of a crisis, in spite of good monsoons spanning the last four years. We boast of a very impressive rate of growth of our economy, but our eyes are completely closed to the distress caused by the absence of distributive justice and the alarming increase in disparities between the rich and the poor. The law and order scenario does not give us any comfort, nor can we rest assured about the security concerns of the nation. These are only a few facets of a situation which is dismal, to say the least.
Coming to the Common Minimum Programme, which the UPA Government is expected to abide by, it is followed or flouted at the whims and fancies of the ruling dispensation. Some of the issues that form an integral part of the Common Minimum Programme have been arbitrarily shelved, for instance the formation of Telangana State. And some issues of a very far-reaching nature, which do not find even an indirect
reference in the Common Minimum Programme, have been unilaterally foisted on the nation. The best, or the worst, example is the Nuclear Deal itself.
The Congress Party, which heads a coalition Government, has arrogated to itself powers which even a single-party Government with a clear majority in Parliament would hesitate to exercise. It is a clear case of unethical violation of coalition ethics.
Coming to the question of Telangana State, the UPA Government has badly let down the people of the region. Let me very briefly submit to this august House some vital facts to substantiate my charge against the Government.
In the Common Minimum Programme of the UPA it was categorically stated that the formation of Telangana State would be taken up after arriving at a consensus through consultations. This assurance was incorporated in the Hon'ble President's Address to the first Joint Session of Parliament, after the last elections. It was further reiterated by the Hon'ble Prime Minister in his first Press Conference. These assurances facilitated participation of my Party i.e. Telangana Rashtra Samiti, in the UPA and the Union Government. A Committee was constituted under the Chairmanship of Shri Pranab Mukherjee to ascertain the views of all political parties having representation in the Parliament. It is quite evident from the responses of political parties that the consensus in favour of formation of Telangana State is very wide and overwhelming. But an impression is sought to be created that a consensus on this score is yet to be arrived at. It is a travesty of truth, to say the least. The truth is that even if the CPI (M), besides TDP and a couple of other parties do not support the proposal, the total number of Members of Lok Sabha supporting the formation of Telangana State, if the Congress Party also supports, would come to a staggering figure of more than 425. If this is not consensus what else could it be? And what more are we searching for?
Even after getting such a clear and categorical
endorsement of almost all the segments of political spectrum of our country, the Congress Party continues to bluff, maintaining that the consensus has not yet been arrived at. To prove our charge against the Government, we submitted documentary evidence to the Hon'ble Speaker in this regard with a request to allow discussion in this august House. The ruling dispensation avoided an open discussion in the House, for obvious reasons.
On the contrary, it has tried to sidetrack the issue by raising the bogey of a Second SRC, quite contrary to what was agreed in the Common Minimum Programme. When this misadventure misfired, it started talking about the development of Telangana, which never took place in the past nor is likely to happen in the future. Vexed by these intriguing experiences with the UPA, we had no option but to leave the UPA as well as the Union and State Governments. We are thereby once again amidst the people, to expose the questionable credentials of the Congress Party and the resultant loss of the people's trust in that Party.
*SHRI FRANCIS FANTHOME (Nominated): Sir, I thank you for allowing me to speak on the Motion of Confidence in the Council of Ministers moved by our most revered and admired Prime Minister, Dr. Manmohan Singh.
Sir, this Motion has been necessitated by the withdrawal of support to the UPA Government by the Left parties, owing to their displeasure at not being shown the draft of the Indo-US Civil Nuclear Agreement before it was presented to the IAEA. Having supported the present Government for four years and two months, having participated in governance without position and responsibility, and having continuously dictated the direction of policy, it is extremely disappointing to find this position emerging prior to the elections to the 15th Lok Sabha.
Sir, the UPA Government, with Left support, was put in place on the principles of mutuality and interdependence as enshrined in the Common Minimum Programme (CMP) for governance. The UPA
* Speech was laid on the Table.
Government has, in the past four years, in accordance with the agreed programme, initiated several steps that were transforming the Nation in an unprecedented manner: be it the National Rural Employment Guarantee Scheme, the National Health Mission, the loan waiver to farmers, the Sarva Shiksha Abhiyan, or the programmes to enhance the effectiveness of women in the social order. The Government has done remarkably well. But a procedural issue related to the IAEA has been made a matter of such grave importance that the governance of the Nation has been set aside to address the ego of the Left parties' Members of the joint mechanism, which was put in place to ensure transparency regarding the nuclear agreement proposed with the US.
Sir, it is my view that never in its history has this great Nation, held in such esteem around the world, witnessed such ridicule over a matter that promotes energy security and enhances the Nation's effectiveness in all spheres of development. Consequent to the events unfolding in this House, its representatives have been denigrated and reduced to economic commodities, with a price tag on integrity and loyalty available to be traded. Someone remarked that "nuclear energy is being driven by horses".
Sir, the Indo-US Civil Nuclear Energy Treaty has been negotiated by the UPA Government under the able guidance of the hon. Prime Minister, safeguarding India's strategic needs while enabling the Nation to generate energy for peaceful purposes. The agreement facilitates the inclusion of India in the community of leading economically powerful Nations, and its participation on platforms for trade in nuclear energy, which has been denied to India as a non-signatory to the CTBT.
Sir, this treaty is to secure the future. Therefore, it addresses the aspiration and concerns of the youth of this great Nation, which constitutes more than 60 per cent of the Nation today. It is the solemn duty of the Government to ensure that the future is better secured than the present. I, therefore, commend the hon. Prime Minister and his Council of Ministers for the great work that they have done for this Nation.
I, therefore, support the Motion of Confidence.
*SHRIMATI SANGEETA KUMARI SINGH DEO (Bolangir): I thank you for the opportunity to express my views on the Confidence Motion.
I will not get into the details of the price-rise crisis or the problem of internal security, which has become so acute that terrorist attacks and bomb blasts have become a common occurrence: there have been serial blasts in Mumbai, Malegaon, Jammu, Hyderabad, Jaipur, the Samjhauta Express and elsewhere.
The naxalite menace has increased manifold; the recent occurrences in Orissa and Hyderabad bear testimony to that.
As far as the farmers' loan waiver goes, my hon. colleagues have repeatedly pointed out what an eyewash it is, benefiting only a fraction of the farmers. The farm crisis persists, and suicides are rampant all over the country. The Government has not fulfilled its promises or commitments to the people of this country and has failed miserably on all fronts.
Our party is against neither Indo-US relations nor the Nuclear Deal as such. After all, the credit for elevating India's status in the eyes of the world, and especially of the US, goes to Vajpayeeji's administration.
What we are opposed to is the manner in which the deal has been negotiated and the veil of secrecy surrounding it. In a parliamentary democracy the executive is responsible to Parliament; yet a deal of such importance is on the verge of finalisation while we are kept ignorant of its details under the garb of confidentiality. Even in the US, the Congress has discussed the issue threadbare, but in our country Parliament is treated in a shabby and redundant manner, and no importance is attributed to it.
I would like to mention here that in a country like ours, wind, solar and geothermal energy should be concentrated upon and encouraged. Even Senators
Barack Obama and John McCain have pledged to spend more money on developing these sources, should either of them come to power.
I wish to oppose the Confidence Motion.
*SHRIMATI KARUNA SHUKLA (Janjgir): Sir, I rise to oppose the motion of Confidence moved by hon. Prime Minister Dr. Manmohan Singh. I firmly oppose it.
Sir, when the vision of thought goes out, conduct becomes blind; the people of the country are witnessing it today. What has been the state of affairs in the country during the last four years and two months? The hon. Prime Minister is well aware of it.
For the first time, a two-day special session had to be called only to gratify his desires. He and his party are genuflecting before the Bush Government; the people of the country are aware of it, and the country is witnessing all this. India is the most dependable and strongest democracy. What example are they presenting before that democracy?
Sir, all kinds of issues will be raised when the Motion of Confidence is put to vote. His party ruled the country for several years after Independence and gave birth to corruption. When the former Prime Minister, the late Nehruji, was apprised of corruption in the construction of canals in Punjab during the Government of the first Chief Minister, Pratap Singh Kairon, he responded by saying that corruption is admissible in the proportion of salt in a meal, and that very corruption has ultimately resulted in a big mandi of parliamentarians today. Someone gets elected to the Rajya Sabha through corruption. The horse-trading by the former Prime Minister, the late Narasimha Rao, is no secret. Whatever they are doing is being witnessed not only by the country but by the world as well. The horse-trading that surfaced in the 14th Lok Sabha made the new Members hang their heads in shame.
Sir, the Nuclear Deal is not in the interest of the
country. Secondly, we should give emphasis to other sources of energy, whether wind energy, solar energy, thermal power or hydro power. Nuclear energy is not in the interest of the country.
Today, national interest has become secondary and personal interest reigns above all. They are thinking of American interests and sacrificing the country's interests; the coming generations will never forgive us. Since yesterday I have been listening to the speeches of senior parliamentarians. Salim Sir, you have taken too long to withdraw your support. If you had withdrawn support on the issues of price rise, naxalism, terrorism and so on, people would have given you more respect.
Prof. Ram Gopal Yadav is their new favourite; therefore everybody liked his story. They have made parliamentarians a commodity. If anyone is most worried by the price rise in the country, it is women, who are compelled to purchase costly gas, pulses, oil, milk and vegetables. In the coming elections they will cast their votes keeping the price rise in mind.
Sir, during the last four years and two months, our culture, traditions and beliefs have been hurt. The submission of an affidavit proposing to destroy the Ram Sethu at Rameshwaram and the taking back of land given to the Amarnath Shrine Board show that the faith of the people living in India is being hurt. Appeasement has become necessary for vote-bank politics.
Most parts of the country face the menace of terrorism and naxalism, yet the UPA Government has not taken this problem seriously. Had an action plan been formulated treating it as a national problem, a solution to terrorism would have been worked out.
Sir, one party and one family have ruled this country for 55 years, and poverty has increased manifold. The poor have become poorer while the rich have become richer. The gap between the rich and the poor has widened so much in these four years that it has now become impossible to bridge.
Sir, some people may survive by their own efforts, but others will not be able to save themselves because of our leaders.
*SHRI VIJAYENDRA PAL SINGH (Bhilwara): Sir, I stand to oppose the Motion moved by the Prime Minister. The Motion itself does not mention the Nuclear Deal with the US, but it is the Indo-US Nuclear Deal that triggered it and that is the reason for the withdrawal of support by the Left, so I will concentrate on it in the brief time available. My Leader has very categorically clarified that we are not against nuclear energy, but against the Deal.
Sir, you will recall that in the NDA Government, Vajpayeeji negotiated with the USSR the 2x500 MW Atomic Plant being built at Kudankulam, which is at an advanced stage. Now if the BJP is not against Nuclear Energy, why all this ruckus about the Indo-US Nuclear Deal? That is the question being asked of us. Sir, Pranab Da mentioned very lightly that in all these years we did not sign the NPT and CTBT - but what is 123? Is it not a back door entry of the NPT/CTBT, and even more? Let the Government explain this. Sir, may I also say that today the U.S., which has done over 900 nuclear tests, no longer needs to do physical tests on the ground; they are done on computer - called Computer Simulation Tests.
I need to draw the attention of the P.M. to whether the US has been approached and whether, in the negotiations on this deal, giving this technology to India has been talked about.
Under the deal, if we do a Nuclear Test, the NSG countries would also stop the supply of Nuclear Fuel. Has this point been clarified? If not, our strategic defence would be affected.
Lastly, I am surprised at the timing and the venue of your meeting with President Bush. The whole world knows you. Sir, the P.M. had gone to Japan to attend the G8 and G5 meet.
Is it not true that Japan is averse to any Nuclear Programme due to what it suffered in Nagasaki and Hiroshima?
Sir, do you not think it was wrong to go and
*Speech was laid on the Table.
discuss the Nuclear Deal in Japan? Was it not an act of sacrilege?
In the end I feel that the Nuclear deal must be renegotiated looking at our strategic need and requirement.
*SHRI K. FRANCIS GEORGE (Idukki): Sir, we have reached the fag end of this chain-reaction debate on the Confidence Motion moved by the Hon'ble Prime Minister.
Four years and two months back, when the UPA was formed with the help of the Left Parties, we, the Kerala Congress Party, also joined the alliance as part of the Left Front. The Government started functioning on the basis of the CMP, and off and on we used to have UPA Coordination Committee meetings, to which we, Madam Mehbooba, Mr. Owaisi, Mr. Athawale etc. were also invited. We used to have some kind of discussion on issues, but I should say it was just a formality.
I need not elaborate on what the Government has done or not done on various fronts. This morning the Hon'ble Finance Minister spoke in detail about the achievements of the Government, the rate of growth that was achieved, especially about the debt waiver and loan waiver that was implemented as per the Budget declaration.
I am not saying that it has come to nothing, but a large number of farmers who were equally debt-ridden, like the ones who had their loans outstanding as per the norms specified by the Government, but had chosen to be truthful and to meet their repayment obligations in whatever way was possible, have been left out. Now they are asking us, Members of Parliament: is this the price we have to pay for being truthful? I wish and hope the Government will pay due attention to this issue.
Sir, this Special Session was called due to the situation that arose in the country on account of the Indo-US Nuclear Deal.
The Left combine had raised certain issues - if
*Speech was laid on the Table.
|
english
|
/** @typedef {String} PathifiedSvgContent */
/** @typedef {String} FeatherIconsPathDefinition */
/** @typedef {{ prefix: 'fe', iconName: 'align-left', icon: Array<Number|String|PathifiedSvgContent|FeatherIconsPathDefinition|import('feather-icons').FeatherAttributes> }} FeatherAlignLeftIconData */
/** @type {FeatherAlignLeftIconData} */
const feAlignLeft = {
prefix: 'fe',
iconName: 'align-left',
icon: [
24,
24,
'<line x1="17" y1="10" x2="3" y2="10"></line><line x1="21" y1="6" x2="3" y2="6"></line><line x1="21" y1="14" x2="3" y2="14"></line><line x1="17" y1="18" x2="3" y2="18"></line>',
'<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-align-left"><path d="M17 10 L3 10"/><path d="M21 6 L3 6"/><path d="M21 14 L3 14"/><path d="M17 18 L3 18"/></svg>',
'M17 10 L3 10 M21 6 L3 6 M21 14 L3 14 M17 18 L3 18',
    'xmlns="http://www.w3.org/2000/svg" width="24" height="24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"',
{
xmlns: 'http://www.w3.org/2000/svg',
width: 24,
height: 24,
fill: 'none',
stroke: 'currentColor',
'stroke-width': 2,
'stroke-linecap': 'round',
'stroke-linejoin': 'round',
},
],
}
export default feAlignLeft
|
javascript
|
<reponame>Unknoob/buck<gh_stars>1000+
/*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.facebook.buck.json;
import com.facebook.buck.parser.syntax.ListWithSelects;
import com.facebook.buck.parser.syntax.SelectorValue;
import com.facebook.buck.util.hashing.StringHashing;
import com.facebook.buck.util.types.Unit;
import com.google.common.collect.ImmutableSortedMap;
import com.google.common.hash.Hasher;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import javax.annotation.Nullable;
/** Hashes parsed BUCK file objects. */
public class JsonObjectHashing {
private enum HashedObjectType {
BOOLEAN,
DOUBLE,
FLOAT,
INTEGER,
LIST,
LONG,
SHORT,
STRING,
MAP,
SELECTOR_LIST,
SELECTOR_VALUE,
NULL,
}
// Utility class; do not instantiate.
private JsonObjectHashing() {}
/**
* Given a {@link Hasher} and a parsed BUCK file object, updates the Hasher with the contents of
* the JSON object.
*/
public static void hashJsonObject(Hasher hasher, @Nullable Object obj) {
if (obj instanceof Map) {
Map<?, ?> map = (Map<?, ?>) obj;
ImmutableSortedMap.Builder<String, Optional<Object>> sortedMapBuilder =
ImmutableSortedMap.naturalOrder();
for (Map.Entry<?, ?> entry : map.entrySet()) {
Object key = entry.getKey();
if (!(key instanceof String)) {
throw new RuntimeException(
String.format(
"Keys of JSON maps are expected to be strings. Actual type: %s, contents: %s",
key.getClass().getName(), key));
}
Object value = entry.getValue();
if (value != null) {
sortedMapBuilder.put((String) key, Optional.of(value));
}
}
ImmutableSortedMap<String, Optional<Object>> sortedMap = sortedMapBuilder.build();
hasher.putInt(HashedObjectType.MAP.ordinal());
hasher.putInt(sortedMap.size());
for (Map.Entry<String, Optional<Object>> entry : sortedMap.entrySet()) {
hashJsonObject(hasher, entry.getKey());
if (entry.getValue().isPresent()) {
hashJsonObject(hasher, entry.getValue().get());
} else {
hashJsonObject(hasher, null);
}
}
} else if (obj instanceof Collection) {
Collection<?> collection = (Collection<?>) obj;
hasher.putInt(HashedObjectType.LIST.ordinal());
hasher.putInt(collection.size());
for (Object collectionEntry : collection) {
hashJsonObject(hasher, collectionEntry);
}
} else if (obj instanceof String) {
hasher.putInt(HashedObjectType.STRING.ordinal());
String s = (String) obj;
StringHashing.hashStringAndLength(hasher, s);
} else if (obj instanceof Boolean) {
hasher.putInt(HashedObjectType.BOOLEAN.ordinal());
hasher.putBoolean((boolean) obj);
} else if (obj instanceof Number) {
// This is gross, but it mimics the logic originally in RawParser.
Number number = (Number) obj;
if (number.longValue() == number.doubleValue()) {
hasher.putInt(HashedObjectType.LONG.ordinal());
hasher.putLong(number.longValue());
} else {
hasher.putInt(HashedObjectType.DOUBLE.ordinal());
hasher.putDouble(number.doubleValue());
}
} else if (obj instanceof Void || obj instanceof Unit || obj == null) {
hasher.putInt(HashedObjectType.NULL.ordinal());
} else if (obj instanceof ListWithSelects) {
ListWithSelects listWithSelects = (ListWithSelects) obj;
hasher.putInt(HashedObjectType.SELECTOR_LIST.ordinal());
List<Object> elements = listWithSelects.getElements();
hasher.putInt(elements.size());
for (Object collectionEntry : elements) {
hashJsonObject(hasher, collectionEntry);
}
} else if (obj instanceof SelectorValue) {
SelectorValue selectorValue = (SelectorValue) obj;
hasher.putInt(HashedObjectType.SELECTOR_VALUE.ordinal());
hashJsonObject(hasher, selectorValue.getDictionary());
hashJsonObject(hasher, selectorValue.getNoMatchError());
} else {
throw new RuntimeException(
String.format("Unsupported object %s (class %s)", obj, obj.getClass()));
}
}
}
|
java
|
<filename>js-nacl/0.0.4.json
{"nacl.js":"<KEY>","nacl.min.js":"<KEY>"}
|
json
|
/* global jpTracksAJAX */
( function ( $, jpTracksAJAX ) {
window.jpTracksAJAX = window.jpTracksAJAX || {};
var debugSet = localStorage.getItem( 'debug' ) === 'dops:analytics';
window.jpTracksAJAX.record_ajax_event = function ( eventName, eventType, eventProp ) {
var data = {
tracksNonce: jpTracksAJAX.jpTracksAJAX_nonce,
action: 'jetpack_tracks',
tracksEventType: eventType,
tracksEventName: eventName,
tracksEventProp: eventProp || false,
};
return $.ajax( {
type: 'POST',
url: jpTracksAJAX.ajaxurl,
data: data,
success: function ( response ) {
if ( debugSet ) {
// eslint-disable-next-line
console.log( 'AJAX tracks event recorded: ', data, response );
}
},
} );
};
$( document ).ready( function () {
$( 'body' ).on( 'click', '.jptracks a, a.jptracks', function ( event ) {
var $this = $( event.target );
// We know that the jptracks element is either this, or its ancestor
var $jptracks = $this.closest( '.jptracks' );
// We need an event name at least
var eventName = $jptracks.attr( 'data-jptracks-name' );
if ( undefined === eventName ) {
return;
}
var eventProp = $jptracks.attr( 'data-jptracks-prop' ) || false;
var url = $this.attr( 'href' );
var target = $this.get( 0 ).target;
if ( url && target && '_self' !== target ) {
var newTabWindow = window.open( '', target );
newTabWindow.opener = null;
}
event.preventDefault();
window.jpTracksAJAX.record_ajax_event( eventName, 'click', eventProp ).always( function () {
// Continue on to whatever url they were trying to get to.
if ( url && ! $this.hasClass( 'thickbox' ) ) {
if ( newTabWindow ) {
newTabWindow.location = url;
return;
}
window.location = url;
}
} );
} );
} );
} )( jQuery, jpTracksAJAX );
|
javascript
|
Shri Jaitley stated that “we are now seeing a growing interest in start-ups. Experimenting in cutting edge technologies, creating value out of ideas and initiatives and converting them into scalable enterprises and businesses is at the core of our strategy for engaging our youth and for inclusive and sustainable growth of the country.” He said concerns such as a more liberal system of raising global capital, incubation facilities in our Centres of Excellence, funding for seed capital and growth, and the ease of doing business need to be addressed to create lakhs of jobs and hundreds of billions of dollars in value. The Minister said that, with this objective in mind, SETU is being set up.
|
english
|
<gh_stars>0
{
"source" : "http:\/\/lema.rae.es\/drae\/srv\/search?val=racionalizar",
"word" : "racionalizar",
"infinitivo" : "racionalizar",
"participio" : [ "racionalizado" ],
"gerundio" : [ "racionalizando" ],
"tenses" : {
"3" : [
[ "racionalizo" ],
[ "racionalizas", "racionalizás" ],
[ "racionaliza" ],
[ "racionalizamos" ],
[ "racionalizáis", "racionalizan" ],
[ "racionalizan" ]
],
"4" : [
[ "racionalicé" ],
[ "racionalizaste" ],
[ "racionalizó" ],
[ "racionalizamos" ],
[ "racionalizasteis", "racionalizaron" ],
[ "racionalizaron" ]
],
"5" : [
[ "racionalizaba" ],
[ "racionalizabas" ],
[ "racionalizaba" ],
[ "racionalizábamos" ],
[ "racionalizabais", "racionalizaban" ],
[ "racionalizaban" ]
],
"6" : [
[ "racionalizaría" ],
[ "racionalizarías" ],
[ "racionalizaría" ],
[ "racionalizaríamos" ],
[ "racionalizaríais", "racionalizarían" ],
[ "racionalizarían" ]
],
"7" : [
[ "racionalizaré" ],
[ "racionalizarás" ],
[ "racionalizará" ],
[ "racionalizaremos" ],
[ "racionalizaréis", "racionalizarán" ],
[ "racionalizarán" ]
],
"8" : [
[ "racionalice" ],
[ "racionalices" ],
[ "racionalice" ],
[ "racionalicemos" ],
[ "racionalicéis", "racionalicen" ],
[ "racionalicen" ]
],
"9" : [
[ "racionalizara" ],
[ "racionalizaras" ],
[ "racionalizara" ],
[ "racionalizáramos" ],
[ "racionalizarais", "racionalizaran" ],
[ "racionalizaran" ]
],
"10" : [
[ "racionalizase" ],
[ "racionalizases" ],
[ "racionalizase" ],
[ "racionalizásemos" ],
[ "racionalizaseis", "racionalizasen" ],
[ "racionalizasen" ]
],
"11" : [
[ "racionalizare" ],
[ "racionalizares" ],
[ "racionalizare" ],
[ "racionalizáremos" ],
[ "racionalizareis", "racionalizaren" ],
[ "racionalizaren" ]
],
"12" : {
"1" : [ "racionaliza" ],
"2" : [ "racionalice" ],
"3" : [ "racionalicemos" ],
"4" : [ "racionalizad" ],
"5" : [ "racionalicen" ]
}
}
}
|
json
|
<reponame>P0rtk3y/auditi_frontend
{"ast":null,"code":"var _jsxFileName = \"/Users/skout/Desktop/Auditi/auditi_frontend/src/index.js\";\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport './index.css';\nimport App from './App'; // import 'semantic-ui-css/semantic.min.css'\n\nimport * as serviceWorker from './serviceWorker';\nimport { Provider } from 'react-redux';\nimport { createStore, applyMiddleware, compose, combineReducers } from 'redux'; // import logger from 'redux-logger'\n\nimport thunk from 'redux-thunk'; // import rootReducer from './rootReducer'\n\nconst users = () => [];\n\nconst reducer = combineReducers({\n users\n});\nconst composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose;\nconst store = createStore(reducer, composeEnancer(applyMiddleware(thunk)));\nconst middleware = [thunk]; // let store = createStore(composeEnhancers(applyMiddleware(...middleware)))\n\nReactDOM.render( /*#__PURE__*/React.createElement(Provider, {\n store: store,\n __self: this,\n __source: {\n fileName: _jsxFileName,\n lineNumber: 25,\n columnNumber: 3\n }\n}, /*#__PURE__*/React.createElement(App, {\n __self: this,\n __source: {\n fileName: _jsxFileName,\n lineNumber: 26,\n columnNumber: 5\n }\n})), document.getElementById('root')); // If you want your app to work offline and load faster, you can change\n// unregister() to register() below. 
Note this comes with some pitfalls.\n// Learn more about service workers: https://bit.ly/CRA-PWA\n\nserviceWorker.unregister();","map":{"version":3,"sources":["/Users/skout/Desktop/Auditi/auditi_frontend/src/index.js"],"names":["React","ReactDOM","App","serviceWorker","Provider","createStore","applyMiddleware","compose","combineReducers","thunk","users","reducer","composeEnhancers","window","__REDUX_DEVTOOLS_EXTENSION_COMPOSE__","store","composeEnancer","middleware","render","document","getElementById","unregister"],"mappings":";AAAA,OAAOA,KAAP,MAAkB,OAAlB;AACA,OAAOC,QAAP,MAAqB,WAArB;AACA,OAAO,aAAP;AACA,OAAOC,GAAP,MAAgB,OAAhB,C,CACA;;AACA,OAAO,KAAKC,aAAZ,MAA+B,iBAA/B;AACA,SAASC,QAAT,QAAyB,aAAzB;AACA,SAASC,WAAT,EAAsBC,eAAtB,EAAuCC,OAAvC,EAAgDC,eAAhD,QAAuE,OAAvE,C,CACA;;AACA,OAAOC,KAAP,MAAkB,aAAlB,C,CACA;;AAEA,MAAMC,KAAK,GAAG,MAAM,EAApB;;AACA,MAAMC,OAAO,GAAGH,eAAe,CAAC;AAC9BE,EAAAA;AAD8B,CAAD,CAA/B;AAIA,MAAME,gBAAgB,GAAGC,MAAM,CAACC,oCAAP,IAA+CP,OAAxE;AACA,MAAMQ,KAAK,GAAGV,WAAW,CAACM,OAAD,EAAUK,cAAc,CAACV,eAAe,CAACG,KAAD,CAAhB,CAAxB,CAAzB;AAEA,MAAMQ,UAAU,GAAG,CAACR,KAAD,CAAnB,C,CACA;;AAEAR,QAAQ,CAACiB,MAAT,eACE,oBAAC,QAAD;AAAU,EAAA,KAAK,EAAEH,KAAjB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,gBACE,oBAAC,GAAD;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EADF,CADF,EAIEI,QAAQ,CAACC,cAAT,CAAwB,MAAxB,CAJF,E,CAOA;AACA;AACA;;AACAjB,aAAa,CAACkB,UAAd","sourcesContent":["import React from 'react';\nimport ReactDOM from 'react-dom';\nimport './index.css';\nimport App from './App';\n// import 'semantic-ui-css/semantic.min.css'\nimport * as serviceWorker from './serviceWorker';\nimport { Provider } from 'react-redux'\nimport { createStore, applyMiddleware, compose, combineReducers } from 'redux'\n// import logger from 'redux-logger'\nimport thunk from 'redux-thunk'\n// import rootReducer from './rootReducer'\n\nconst users = () => []\nconst reducer = combineReducers({\n users\n})\n\nconst composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose\nconst store = createStore(reducer, 
composeEnancer(applyMiddleware(thunk)))\n\nconst middleware = [thunk]\n// let store = createStore(composeEnhancers(applyMiddleware(...middleware)))\n\nReactDOM.render(\n <Provider store={store}>\n <App />\n </Provider>,\n document.getElementById('root')\n);\n\n// If you want your app to work offline and load faster, you can change\n// unregister() to register() below. Note this comes with some pitfalls.\n// Learn more about service workers: https://bit.ly/CRA-PWA\nserviceWorker.unregister();\n"]},"metadata":{},"sourceType":"module"}
|
json
|
Former India all-rounder Irfan Pathan on Sunday said that national team vice-captain Rohit Sharma did not lack in hard work in his early years even though his body language suggested a “relaxed” attitude to his batting.
“A lot of people are mistaken when they see a guy who has a lot of time and he is slightly more relaxed than compared to Rohit. Then you say he needs to work hard,” Pathan was quoted as saying by Star Sports show ‘Cricket Connected’.
He said the same things were said about another former India opener, Wasim Jaffer.
“… when he used to run he used to run very relaxed, when he used to bat he had lot of time and we used to think why isn’t he working hard but actually, he was working really hard.
“Similarly with Rohit, from outside we used to think he might need to work harder, he might need to put more application,” said the 35-year-old Pathan who played 29 Tests between 2003 and 2008.
Pathan said Rohit “always talks about sensible things” and that is why he is successful as a batsman as well as captain of IPL side Mumbai Indians.
“He used to always talk about working hard and he used to always talk about the team first as well, that is why you see some of the results he got as the captain of the Mumbai Indians team.
|
english
|
Civil Aviation Minister Jyotiraditya Scindia has hit back at his old rival Digvijaya Singh for calling him a traitor. Scindia, who was formerly with the Congress in Madhya Pradesh, suggested that Digvijaya Singh's outburst against him was a result of his political rise after he left the Congress for the BJP. The minister said that it was for the public to decide who the traitor is, and added that Digvijaya Singh is the same person who referred to Osama Bin Laden as “Osama ji” and promised to reinstate Article 370 if a Congress government returns to power. Earlier, Digvijaya Singh had slammed Scindia for betraying the Congress and had called him a traitor. Watch this video for more.
|
english
|
http://data.doremus.org/artist/de2093f8-931b-3eb3-83cc-dd3220e936c5
http://data.doremus.org/artist/68584f59-8712-3dc2-ad70-afff81f9df3a
http://data.doremus.org/artist/0b48e256-c0aa-3f77-a3f8-497a9563ab20
http://data.doremus.org/artist/ffa93841-8d2a-380d-af3b-5883d2142bf1
http://data.doremus.org/artist/9ac45767-c34f-38f7-94a5-ea960a56dc53
http://data.doremus.org/artist/b24d00b1-ddfc-3d76-a8ef-782298a5a3f5
http://data.doremus.org/artist/0789a79b-031f-3dc6-a849-392ab5bc194b
http://data.doremus.org/artist/f198e75f-7481-31bd-bf19-3279efa0c76a
http://data.doremus.org/artist/44f13be5-3d67-3e69-bca1-bb96b6b7b9ca
http://data.doremus.org/artist/6056f157-3290-353d-9a7e-68b6d2b4ecac
http://data.doremus.org/artist/5024801e-4b23-3833-8843-5d2bad1576d9
http://data.doremus.org/artist/5f2bf9db-521f-31e6-8efc-d8a6e282bb7a
|
json
|
<gh_stars>0
use std::cell::RefCell;
use std::fmt::Display;
use std::rc::Rc;
type NodeRef<T> = Rc<RefCell<Node<T>>>;
struct LinkedList<T> {
head: Option<NodeRef<T>>,
}
struct Node<T> {
data: T,
next: Option<NodeRef<T>>,
}
struct Iter<T> {
next: Option<NodeRef<T>>,
}
impl<T> Node<T> {
fn tail(node: &NodeRef<T>) -> Option<NodeRef<T>> {
if let Some(cur) = node.borrow().next.as_ref().cloned() {
return Node::tail(&cur.clone());
}
Some(node.clone())
}
}
impl<T> LinkedList<T>
where
T: std::cmp::Eq,
T: std::hash::Hash,
T: std::clone::Clone,
T: std::cmp::PartialOrd,
T: std::cmp::PartialEq,
{
fn new() -> Self {
Self { head: None }
}
fn append(&mut self, new_value: T) {
if let Some(tail) = self.tail() {
tail.borrow_mut().next = Some(Rc::new(RefCell::new(Node {
data: new_value,
next: None,
})));
} else {
self.head = Some(Rc::new(RefCell::new(Node {
data: new_value,
next: None,
})));
}
}
fn tail(&self) -> Option<NodeRef<T>> {
if let Some(cur) = self.head.as_ref().cloned() {
if cur.borrow().next.is_none() {
return Some(cur.clone());
} else {
return Node::tail(&cur.clone());
}
}
None
}
fn iter(&self) -> Iter<T> {
Iter {
next: self.head.as_ref().cloned(),
}
}
fn has_cycle(&self) -> Option<NodeRef<T>> {
let mut tortoise_iter = self.iter();
let mut hare_iter = self.iter();
let mut tortoise = tortoise_iter.next();
hare_iter.next(); // start 1 iteration ahead of tortoise
let mut hare = hare_iter.next();
        // Lists with fewer than two nodes cannot contain a cycle; bail out
        // before unwrapping, so empty or single-element lists do not panic.
        if tortoise.is_none() || hare.is_none() {
            return None;
        }
        let mut prev_tortoise = tortoise.as_ref().unwrap().clone();
        let mut prev_hare = hare.as_ref().unwrap().clone();
while hare.is_some() && tortoise.is_some() {
if Rc::ptr_eq(hare.as_ref().unwrap(), tortoise.as_ref().unwrap()) {
if Rc::ptr_eq(&prev_tortoise, &prev_hare) {
return Some(prev_tortoise.clone());
} else {
return Some(hare.as_ref().unwrap().clone());
}
}
            hare = hare_iter.next();
            // Only record the previous hare when the hare is still on the
            // list; unwrapping a None here panics on acyclic even-length lists.
            if let Some(h) = hare.as_ref() {
                prev_hare = h.clone();
            }
if hare.is_some() {
hare = hare_iter.next();
}
prev_tortoise = tortoise.as_ref().unwrap().clone();
tortoise = tortoise_iter.next();
}
None
}
}
impl<T> Iterator for Iter<T> {
type Item = NodeRef<T>;
fn next(&mut self) -> Option<Self::Item> {
self.next.as_ref()?;
if let Some(cur) = self.next.as_ref().cloned() {
self.next = cur.borrow().next.clone();
return Some(cur.clone());
}
None
}
}
impl<T: Display> Display for LinkedList<T> {
fn fmt(&self, w: &mut std::fmt::Formatter) -> std::result::Result<(), std::fmt::Error> {
write!(w, "[")?;
let mut node = self.head.clone();
while let Some(n) = node {
write!(w, "{}", n.borrow().data)?;
node = n.borrow().next.clone();
if node.is_some() {
write!(w, ", ")?;
}
}
write!(w, "]")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_cycle() {
let datavec = vec!['A', 'B', 'C', 'D', 'E'];
let mut cycle_list = LinkedList::<char>::new();
for value in datavec.iter() {
cycle_list.append(*value);
}
let mut list_iter = cycle_list.iter();
list_iter.next();
list_iter.next();
let third_node = list_iter.next();
cycle_list.tail().unwrap().borrow_mut().next = Some(third_node.unwrap().clone());
let cycle_result = cycle_list.has_cycle();
assert_eq!(cycle_result.is_some(), true);
assert_eq!(cycle_result.as_ref().unwrap().borrow().data, 'C');
let mut nocycle_list = LinkedList::<char>::new();
for value in datavec.iter() {
nocycle_list.append(*value);
}
let nocycle_result = nocycle_list.has_cycle();
assert_eq!(nocycle_result.is_none(), true);
// Second case
let datavec2 = vec!['A', 'B', 'C', 'D', 'E', 'F'];
let mut cycle_list2 = LinkedList::<char>::new();
for value in datavec2.iter() {
cycle_list2.append(*value);
}
let mut list_iter2 = cycle_list2.iter();
list_iter2.next();
list_iter2.next();
let third_node2 = list_iter2.next();
cycle_list2.tail().unwrap().borrow_mut().next = Some(third_node2.unwrap().clone());
let cycle_result2 = cycle_list2.has_cycle();
assert_eq!(cycle_result2.is_some(), true);
assert_eq!(cycle_result2.as_ref().unwrap().borrow().data, 'C');
}
}
fn main() {
let mut left = LinkedList::<i32>::new();
left.append(6);
left.has_cycle();
}
|
rust
|
<gh_stars>10-100
{
// skill
"skill.name": "Cocina",
"skill.level-up-perk": "+{{bonus}}% comestibilidad en la comida casera",
// professions
"efficient.name": "Eficiencia",
"efficient.desc": "15% de posibilidades de no consumir ingredientes",
"gourmet.name": "Gourmet",
"gourmet.desc": "+20% del precio de venta",
"intense-flavors.name": "Sabores Intensos",
"intense-flavors.desc": "Los buffs de la comida son un nivel más fuerte una vez se comen\n(+20% para la energía máxima o el magnetismo)",
"professional-chef.name": "Chef Profesional",
"professional-chef.desc": "Las comidas caseras son siempre al menos de rango plata",
"satisfying.name": "Satisfactorio",
"satisfying.desc": "+25% de duración del buff una vez comido",
"secret-spices.name": "Especias Secretas",
"secret-spices.desc": "Proporciona unos cuantos buffs aleatorios cuando se come comida sin buffs"
}
|
json
|
{"body": "another option is vmware with a linux image.\n\n\nbut my vote, preference, soul (and pocket) goes the Mac OS X way...\nI would not be using radiance without a mac. \nlife is short and it is once, why suffering?\n\n\nG\n\n\nOn 16 Jun 2011, at 19:05, <NAME> wrote:\n___\n<sup>Automatically generated content from [radiance mailing-list](https://radiance-online.org/pipermail/radiance-general/2011-June/007919.html).</sup>", "attachments": [], "created_by_name": "Giugi", "created_at": "June 16, 2011 at 11:39AM", "created_by": "Giugi", "parent_id": "radiance-general_007914", "id": "radiance-general_007919"}
|
json
|
Realme launched the Realme 5 series smartphones a couple of months ago. While everyone thought that the next smartphones in the line-up would be the Realme 6 series handsets, there seems to be another Realme 5 phone in the pipeline – the Realme 5s.
The device, which we are hearing of for the first time, has been spotted on Flipkart, confirming its existence. Going by the name, the Realme 5s seems to be an advanced variant of the Realme 5. The dedicated page that Flipkart has created for the Realme 5s reveals that the smartphone will launch in the Indian market on November 20. That’s the same day when Realme is set to launch the X2 Pro in the country. So, there is a high possibility that the brand might announce the Realme 5s at the same event where it is supposed to launch the X2 Pro.
What Is the Difference Between the Realme 5 And the Realme 5s?
The teaser page for the Realme 5s on Flipkart reveals that the device has a 48MP primary camera at the rear in a quad-camera setup. In comparison, the Realme 5 has a 12MP primary camera. That’s one major difference we could spot between the two smartphones. According to the page, the device can capture “Ultra Detailed Pictures, that remain sharp even when zoomed in,” suggesting that the device might have a telephoto lens, but I am not quite sure about it at the moment.
The dedicated page also reveals that the color options of the Realme 5s include red. The Realme 5, on the other hand, comes in blue and purple color options. So, that’s another change between the two devices, if it matters to you. Unfortunately, that’s all we could gather from the teaser page. So far, Realme hasn’t made any announcement regarding the Realme 5s on its social media platforms. However, it might soon make an announcement, as the news is already out.
What About the Price?
You can expect the Realme 5s to be priced between the Realme 5 and the Realme 5 Pro. The base model could come with a price tag of INR 9,999 or INR 10,999 at most. At the expected price, the Realme 5s will compete directly with the recently launched Redmi Note 8.
|
english
|
<reponame>liuguanglei123/FasterRunnerNew<gh_stars>0
from rest_framework import pagination
class MyCursorPagination(pagination.CursorPagination):
"""
    Cursor pagination: high performance and safe.
"""
page_size = 9
ordering = '-update_time'
page_size_query_param = "pages"
max_page_size = 20
class MyPageNumberPagination(pagination.PageNumberPagination):
"""
    Standard page-number pagination; performance degrades as the data set grows.
"""
page_size = 10
page_size_query_param = 'size'
page_query_param = 'page'
max_page_size = 20
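# The docstrings above contrast the two styles: page-number pagination must
# skip past an ever-growing offset, while a cursor records the sort key of the
# last row served and seeks directly past it. A minimal, framework-free sketch
# of the cursor idea (the `cursor_page` helper and its field names are
# illustrative only, not part of FasterRunner or DRF):

```python
def cursor_page(items, key, after=None, page_size=3):
    """Return (page, next_cursor) for items ordered by `key`.

    The cursor is the key of the last row served; the next call filters
    past it instead of skipping an offset, which is why cursor pagination
    stays fast as the data set grows.
    """
    ordered = sorted(items, key=key)
    if after is not None:
        # Seek past the cursor rather than counting an offset.
        ordered = [item for item in ordered if key(item) > after]
    page = ordered[:page_size]
    next_cursor = key(page[-1]) if len(ordered) > page_size else None
    return page, next_cursor


rows = [{"id": i, "update_time": i} for i in range(1, 8)]
page1, cursor = cursor_page(rows, key=lambda r: r["update_time"])
page2, _ = cursor_page(rows, key=lambda r: r["update_time"], after=cursor)
```

# With seven rows and a page size of three, page1 holds rows 1-3, the cursor
# is 3, and page2 holds rows 4-6.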
|
python
|
What can we expect from Indian Grand Prix 4?
Hima Das and Dutee Chand to run in Indian Grand Prix 4 to secure qualification for the Tokyo Olympics.
Each member of the Tokyo Olympics Indian shooting contingent has their own story of hard work and perseverance.
|
english
|
Amazon's new FAB Phone Fest sale, which starts today, 5th March, and ends on 7th March, brings in plenty of amazing deals and discounts. Users can get up to 40% off on smartphones, which is much more than anyone can expect. Below you will find a list of devices which offer such lucrative deals.
Other offers provided by Amazon include 5% instant discounts on debit and credit card EMI, no-cost EMI, greater exchange and cashback offers, a 10-day replacement policy, and Accidental and Liquid Damage Protection insurance worth Rs. 2,000 free on opening a Kotak 811 account.
Consumers can also get instant cashback worth Rs. 5,400 and up to 3 TB of Jio 4G data, save up to Rs. 2,400 a year, and earn 2% cashback on every order with Amazon Pay balance. In addition, you also get a 100% purchase protection plan. There are some more enticing deals that you can find in detail by having a look at the listing below.
- Dual SIM (nano + nano)
- Dual SIM (nano + nano + microSD)
- Dual SIM (nano + nano + microSD)
- Hybrid Dual SIM (nano + nano / microSD)
- Android 7.1.1 (Nougat)
- Hybrid Dual SIM (nano + nano / microSD)
- Dual SIM (nano + nano + microSD)
- ColorOS 5.2 based on Android 8.1 (Oreo)
- Funtouch OS 4.0 based on Android 8.1 (Oreo)
|
english
|
{
"id": "d802-43",
  "text": "NAEB HEADQUARTERS\n14 Gregory Hall\nUrbana, Illinois\nJune 7, 1954\nMr. <NAME>\nRadio Station WtTQA\nUniversity of Alabama\nUniversity, Alabama\nDear Graydon:\nEnclosed is a proposal from <NAME> for an NAEB Network\nCommittee. Such a committee, as outlined here, should make for a\nmuch smoother and effective Network operation.\nI think the proposal and the notes which follow it give\nthe picture. The proposal could easily be adapted as an announcement\nif it were decided to activate this committee.\nWe'd be grateful for your reaction.\nSincerely,\n<NAME>\nExecutive Director\nHJSswc\nEnc.\ncc: Executive Committee"
}
|
json
|
{
"@metadata": {
"authors": [
"Mrkczr"
]
},
"ep_table_of_contents.toc": "Ipakita ang Talaan ng Nilalaman"
}
|
json
|
This came a day after the Anti Terrorism Squad arrested two clerics from Delhi for allegedly converting more than 1,000 people to Islam in the state.
The Uttar Pradesh government on Tuesday directed the police to trace those involved in religious conversions and invoke provisions of the stringent National Security Act against them, The Indian Express reported.
This came a day after the Anti Terrorism Squad arrested two clerics from Delhi for allegedly converting more than 1,000 people to Islam in the state. The police said that the two accused – Mufti Kazi Jahageer Alam and Mohammad Umar Gautam – ran an organisation named Islamic Dawah Centre with their associates. The organisation, the police alleged, had been involved in large-scale religious conversions for the past one-and-a-half years.
The National Security Act, which was enacted in 1980, empowers the Centre and state governments to detain those “acting in a manner prejudicial to the defence of India” or threatening public order. Detained persons need not be told the grounds of their detention for up to 10 days. Those arrested under the law may be detained without charge for up to a year. Critics have said that the law violates basic tenets of natural justice such as the presumption of innocence and the accused person’s right to legal counsel.
Last year, the Allahabad High Court had warned the state government to use the NSA with “extreme care”.
The Adityanath government has also instructed the police to invoke the Uttar Pradesh Gangsters and Anti-Social Activities (Prevention) Act against those accused of religious conversions.
Uttar Pradesh is among a host of Bharatiya Janata Party-ruled states that have enacted anti-conversion laws to penalise “love jihad” – a pejorative term coined by the right-wing groups to push the conspiracy theory that Muslim men charm Hindu women into marrying them with the sole purpose of converting their brides to Islam.
|
english
|
We always look up to our favourite celebrities for tips and tricks on leading a healthy life. While stars like Malaika Arora and Tiger Shroff often inspire us with their workout videos, celebs including Mira Rajput and Alia Bhatt never shy away from spilling their skincare secrets. Admiring them, we try to take note of their little hacks for perfect skin and body.
Recently, we caught up with Meezaan Jaffrey, who has been making headlines for his movie Hungama 2, and asked him to share some grooming tips that all men should follow.
"I think hygiene is the most important, regardless of anything. So, always be hygienic, brush your teeth, take a shower, put deodorant, smell good," Meezaan said.
The actor then went on to share that clipping his nails and grooming his beard are two things he does regularly. "In terms of grooming, I think, I cut my nails - toe and fingernails every week. I always take care of that and at the same time, I always shape my beard every single time. I do it every three days or something. I do it on my own at home because I can't afford to keep going to the salon every time, so I just do it on my own at home," he added.
Sharing another important grooming tip, the star kid went on to say that one should get a hairstyle that suits them. He shared, "I think a hairstyle makes a very big difference in terms of your appearance, so keep your hair groomed, clean and take care of it, oil it. Taking care of your hair is very important."
He also believes that one should stay fit. Meezaan said, "Fitness and workout are also very important because if your body looks good, it brings a kind of confidence in you and then you wear clothes that you want to wear as well and you pull it off as well, better. So, I think fitness is key."
Coming to Hungama 2, the movie, which also stars Paresh Rawal, Shilpa Shetty and Pranitha Subhash, was recently released on an OTT platform. It is a spiritual successor to the 2003 film Hungama and has been directed by Priyadarshan, with Ratan Jain, Ganesh Jain, Chetan Jain and Armaan Ventures jointly producing.
What do you think about these grooming tips by Meezaan? Let us know by tweeting to us @TimesNow.
|
english
|
import React from 'react';
import PropTypes from 'prop-types';
import { connect } from 'react-redux';
import { createStructuredSelector } from 'reselect';
import { makeSelectLoaderStatus } from 'containers/App/selectors';
import Loader from './loader';
// Stateless wrapper around the shared Loader, driven by the loader-status flag in the store.
class LoaderSpinner extends React.PureComponent { // eslint-disable-line react/prefer-stateless-function
render() {
const { visible } = this.props;
return (
<Loader visible={visible} />
);
}
}
LoaderSpinner.propTypes = {
// React.PropTypes was removed in React 16; the standalone prop-types package replaces it.
visible: PropTypes.bool,
};
export function mapDispatchToProps() {
return {
};
}
const mapStateToProps = createStructuredSelector({
visible: makeSelectLoaderStatus(),
});
export default connect(mapStateToProps, mapDispatchToProps)(LoaderSpinner);
|
javascript
|
use super::color::*;
/// Methods to blend two colors together
pub enum ColorBlend {
/// Set to new color, ignore old
Set,
/// Multiply the two colors channel-wise
Multiply,
/// Keep the channel-wise maximum of the two colors
Lighten,
/// Keep the channel-wise minimum of the two colors
Darken,
/// Invert both colors, multiply, and invert the result
Screen,
/// Brighten the old color to reflect the new one
ColorDodge,
/// Channel-wise sum of the two colors
Add,
/// Linear burn: old plus new, minus white
Burn,
/// Multiply or screen, depending on the blend channel
Overlay,
/// Add the old color, scaled by its alpha, to the new one
AddAlpha,
}
impl Default for ColorBlend {
fn default() -> Self {
ColorBlend::Set
}
}
fn overlay(a: u8, b: u8) -> u8 {
let a = a as i32;
let b = b as i32;
let a = if b <= 128 {
2 * b * a / 255
} else {
255 - 2 * (255 - b) * (255 - a) / 255
};
a as u8
}
fn add_alpha(a: u8, b: u8, alpha: u8) -> u8 {
let a = a as i32;
let b = b as i32;
let alpha = alpha as i32;
(b + alpha * a / 255) as u8
}
impl ColorBlend {
pub fn blend(&self, old: TileColor, new: TileColor) -> TileColor {
match self {
ColorBlend::Set => new,
ColorBlend::Multiply => old * new,
ColorBlend::Lighten => old.max(&new),
ColorBlend::Darken => old.min(&new),
ColorBlend::Screen => WHITE - (WHITE - new) * (WHITE - old),
ColorBlend::ColorDodge => {
if new == WHITE {
new
} else {
WHITE * new / (WHITE - new)
}
}
ColorBlend::Add => old + new,
ColorBlend::Burn => old + new - WHITE,
ColorBlend::Overlay => TileColor::rgb(
overlay(old.r, new.r),
overlay(old.g, new.g),
overlay(old.b, new.b),
),
ColorBlend::AddAlpha => {
let alpha = old.a;
TileColor::rgb(
add_alpha(old.r, new.r, alpha),
add_alpha(old.g, new.g, alpha),
add_alpha(old.b, new.b, alpha),
)
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn overlay_preserves_black_white_and_midgrey() {
assert_eq!(overlay(0, 0), 0);
assert_eq!(overlay(255, 255), 255);
assert_eq!(overlay(128, 128), 128);
}
#[test]
fn add_alpha_scales_old_channel_by_alpha() {
// Full alpha: a plain sum; zero alpha: only the new channel survives.
assert_eq!(add_alpha(10, 20, 255), 30);
assert_eq!(add_alpha(10, 20, 0), 20);
}
}
|
rust
|
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="description" content="" />
<meta name="author" content="" />
<title>COVID-19 Canada Tracker - Vaccination Data Sources</title>
<link href="css/styles.css?v=2.2" rel="stylesheet" />
<link href="https://cdn.datatables.net/1.10.20/css/dataTables.bootstrap4.min.css" rel="stylesheet" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/js/all.min.js" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.0/moment.min.js" crossorigin="anonymous"></script>
<script src="https://code.jquery.com/jquery-3.4.1.min.js" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.bundle.min.js" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.8.0/Chart.min.js" crossorigin="anonymous"></script>
<script src="https://cdn.datatables.net/1.10.20/js/jquery.dataTables.min.js" crossorigin="anonymous"></script>
<script src="https://cdn.datatables.net/1.10.20/js/dataTables.bootstrap4.min.js" crossorigin="anonymous"></script>
<script src="https://api.mapbox.com/mapbox-gl-js/v1.8.1/mapbox-gl.js"></script>
<link href="https://api.mapbox.com/mapbox-gl-js/v1.8.1/mapbox-gl.css" rel="stylesheet" />
<script src="https://api.mapbox.com/mapbox-gl-js/plugins/mapbox-gl-geocoder/v4.4.2/mapbox-gl-geocoder.min.js"></script>
<link rel="stylesheet" href="https://api.mapbox.com/mapbox-gl-js/plugins/mapbox-gl-geocoder/v4.4.2/mapbox-gl-geocoder.css" type="text/css" />
<!-- <script type="text/javascript" src="js/mapdata.js"></script> -->
<!-- <script type="text/javascript" src="js/canadamap.js"></script> -->
<!-- <script type="text/javascript" src="js/state.js"></script> -->
<script type="text/javascript" src="js/config.js?v=1.0.3"></script>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-160029240-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() {
dataLayer.push(arguments);
}
gtag('js', new Date());
gtag('config', 'UA-160029240-1');
</script>
</head>
<body class="sb-nav-fixed">
<nav class="sb-topnav navbar navbar-expand navbar-dark bg-dark">
<a class="navbar-brand" href="index.html">COVID-19 Tracker Canada</a>
<div class="d-none d-md-inline-block form-inline ml-auto mr-0 mr-md-3 my-2 my-md-0">
</div>
<ul class="navbar-nav ml-auto ml-md-0">
<li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" id="userDropdown" href="#" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"><b>Menu</b></a>
<div class="dropdown-menu dropdown-menu-right" aria-labelledby="userDropdown">
<a class="dropdown-item" href="about.html">About</a>
<a class="dropdown-item" href="sources.html">Sources</a>
<a class="dropdown-item" href="ontario.html">Ontario Data</a>
<a class="dropdown-item" href="notes.html">Data Notes</a>
<a class="dropdown-item" href="https://api.covid19tracker.ca">API Access</a>
<a class="dropdown-item" href="acknowledgements.html">Acknowledgments</a>
<a class="dropdown-item" href="licensing.html">Citation</a>
<div class="dropdown-divider"></div>
<a class="dropdown-item" href="mailto:<EMAIL>">Contact</a>
</div>
</li>
</ul>
</nav>
<div class="sb-sidenav-footer">
</div>
<div id="layoutSidenav_content">
<main>
<div class="container-fluid">
<br>
<br>
<br>
<div class="card mb-4">
<div class="card-header">Sources</div>
<div class="card-body">
<h1>How We Get Our Vaccination Data</h1>
<p>Our COVID-19 vaccination data is manually compiled exclusively from official government sources in near real-time, as new information is released.</p>
<p>For the moment, there's still some variability in where data is released day-to-day. Numbers may be shared in press conferences, interviews, press releases, online government dashboards, situation reports or even through direct correspondence with provincial governments. However, we only report official data that's been directly shared by provincial governments. For distribution data, we rely on both federal reporting by PHAC and data from provincial governments, which is often more up-to-date.</p>
<p>Though sources may change, these are our primary sources for each province:</p>
<ul>
<li>Vaccination data for <b>Quebec</b>: <a href="https://www.quebec.ca/sante/problemes-de-sante/a-z/coronavirus-2019/situation-coronavirus-quebec/donnees-sur-la-vaccination-covid-19/">Dashboard</a></li>
<li>Vaccination data for <b>Ontario</b>: <a href="https://covid-19.ontario.ca/covid-19-vaccines-ontario">Dashboard</a></li>
<li>Vaccination data for <b>Alberta</b>: <a href="https://www.alberta.ca/covid19-vaccine.aspx">Dashboard</a></li>
<li>Vaccination data for <b>British Columbia</b>: <a href="https://experience.arcgis.com/experience/a6f23959a8b14bfa989e3cda29297ded">Dashboard</a></li>
<li>Vaccination data for <b>Manitoba</b>: <a href="https://manitoba.ca/covid19/vaccine/index.html">Dashboard</a></li>
<li>Vaccination data for <b>Saskatchewan</b>: <a href="https://www.saskatchewan.ca/government/health-care-administration-and-provider-resources/treatment-procedures-and-guidelines/emerging-public-health-issues/2019-novel-coronavirus/covid-19-vaccine/vaccine-delivery-update">Dashboard</a></li>
<li>Vaccination data for <b>Nova Scotia</b>: <a href="https://experience.arcgis.com/experience/204d6ed723244dfbb763ca3f913c5cad">Dashboard</a></li>
<li>Vaccination data for <b>New Brunswick</b>: <a href="https://experience.arcgis.com/experience/8eeb9a2052d641c996dba5de8f25a8aa">Dashboard</a></li>
<li>Vaccination data for <b>Newfoundland and Labrador</b>: <a href="https://covid-19-newfoundland-and-labrador-gnl.hub.arcgis.com/">Dashboard</a></li>
<li>Vaccination data for <b>Prince Edward Island</b>: <a href="https://www.princeedwardisland.ca/en/information/health-and-wellness/covid-19-vaccination-data">Dashboard</a></li>
<li>Vaccination data for <b>Nunavut</b>: <a href="https://www.gov.nu.ca/health/information/covid-19-novel-coronavirus">Daily Updates</a></li>
<li>Vaccination data for <b>Yukon</b>: <a href="https://yukon.ca/en/health-and-wellness/covid-19-information/latest-updates-covid-19/current-covid-19-situation">Daily Updates</a></li>
<li>Vaccination data for <b>Northwest Territories</b>: <a href="https://www.gov.nt.ca/covid-19/">Dashboard</a></li>
<br>
<li>Population Data: <a href="https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=1710000901">StatsCan Q4 2020</a></li>
<li>Distribution Data: <a href="https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection/prevention-risks/covid-19-vaccine-treatment/vaccine-rollout.html#a4a">PHAC</a> and provincial reporting</li>
</ul>
<p>If you have any additional questions, please feel free to <a href="mailto:noah.<EMAIL>">reach out</a>.</p>
</div>
</div>
</div>
</main>
<footer class="py-4 bg-light mt-auto">
<div class="container-fluid">
<div class="d-flex align-items-center justify-content-between small">
<div class="text-muted">Copyright © COVID19Tracker.ca 2020 // <EMAIL></div>
</div>
</div>
</footer>
</div>
</body>
</html>
|
html
|
package INSTANCE
import (
"net/http"
"os"
"github.com/aws/aws-sdk-go/aws"
"time"
"strconv"
)
type AwsUserConfig struct {
AwsAccessKeyId string `json:"AwsAccessKeyId"`
AwsSecretAccessKey string `json:"AwsSecretAccessKey"`
Region *string `json:"Region"`
}
type Ec2Instances struct {
InstanceId string `json:"InstanceId"`
InstanceState string `json:"InstanceState"`
AvailabilityZone string `json:"AvailabilityZone"`
PublicIpAddress string `json:"PublicIpAddress"`
InstanceType string `json:"InstanceType"`
ImageId string `json:"ImageId"`
CoreCount int64 `json:"CoreCount"`
LaunchTime time.Time `json:"LaunchTime"`
}
type Ec2InstancesTime struct {
Timestamp int64 `json:"timestamp"`
Instances []Ec2Instances `json:"instances"`
}
type InstancesData struct {
Pending int64 `json:"Pending"`
Running int64 `json:"Running"`
SshLogin int64 `json:"SshLogin"`
ShuttingDown int64 `json:"ShuttingDown"`
Stopped int64 `json:"Stopped"`
Terminated int64 `json:"Terminated"`
Other int64 `json:"Other"`
}
type InstanceBootTime struct {
Avg float64 `json:"Avg"`
Min int64 `json:"Min"`
Max int64 `json:"Max"`
}
type InstanceTime struct {
Avg float64 `json:"Avg"`
Min int64 `json:"Min"`
Max int64 `json:"Max"`
}
type InstanceShutDownTime struct {
Avg float64 `json:"Avg"`
Min int64 `json:"Min"`
Max int64 `json:"Max"`
}
type InstanceBootShutdownRate struct {
NumInstances int `json:"NumInstances"`
NumExperiments int `json:"NumExperiments"`
BootTime InstanceTime `json:"BootTime"`
ShutdownTime InstanceTime `json:"ShutdownTime"`
}
type ExperimentSetting struct {
ExperimentNum string `json:"ExperimentNum"`
NumInstances int `json:"NumInstances"`
Instances []InstancesData `json:"Instances"`
InstancesBootTime InstanceTime `json:"InstancesBootTime"`
InstancesShutDownTime InstanceTime `json:"InstancesShutDownTime"`
TotalInstancesBootTime int64 `json:"TotalInstancesBootTime"`
TotalInstancesShutDownTime int64 `json:"TotalInstancesShutDownTime"`
}
type ExperimentsLoop struct {
Experiments []ExperimentSetting `json:"Experiments"`
}
type VmTemplateData struct {
InstanceType string `json:"InstanceType"`
Region string `json:"Region"`
AvailabilityZone string `json:"AvailabilityZone"`
ImageId string `json:"ImageId"`
CoreCount int64 `json:"CoreCount"`
ExperimentLoop []ExperimentsLoop `json:"ExperimentLoop"`
BootShutdownRate []InstanceBootShutdownRate `json:"BootShutdownRate"`
}
type PerExperiment struct {
Instances []InstancesData `json:"Instances"`
InstanceType string `json:"InstanceType"`
NumInstances int `json:"NumInstances"`
InstanceBootRate InstanceTime `json:"InstanceBootRate"`
ShutDownRate InstanceTime `json:"ShutDownRate"`
ExperimentNum string `json:"ExperimentNum"`
ExperimenLoopCount int `json:"ExperimenLoopCount"`
Region string `json:"Region"`
AvailabilityZone string `json:"AvailabilityZone"`
ImageId string `json:"ImageId"`
CoreCount int64 `json:"CoreCount"`
TotalStartTime int64 `json:"TotalStartTime"`
TotalShutDownTime int64 `json:"TotalShutDownTime"`
}
type InstanceValue struct {
NumInstances int `json:"NumInstances"`
BootTime float64 `json:"BootTime"`
ShutDownTime float64 `json:"ShutDownTime"`
}
type InstanceRegression struct {
InstanceType string `json:"InstanceType"`
Region string `json:"Region"`
InstanceValues []InstanceValue `json:"InstanceValues"`
}
type VMBootShutDownRatePerInstanceTypeAll struct {
InstanceValues []InstanceValue `json:"InstanceValues"`
}
type VMBootShutDownRatePerInstanceTypeOne struct {
BootTime float64 `json:"BootTime"`
ShutDownTime float64 `json:"ShutDownTime"`
}
var TotalExperimentsLoop = 5
var InstanceTypes = []string{"t2.micro"}
var AllInstanceTypes = []string{"t2.micro", "t2.nano", "t2.small", "t2.medium", "t2.large"}
var DefaultNumInstances = []int64{1, 2, 3, 4, 5}
//var DefaultRegion = []string{"us-east-2", "us-east-1", "us-west-1", "eu-central-1", "ap-south-1", "eu-west-1"}
var DefaultRegion = []string{"us-east-2"}
//var DefaultAMI = []string{"ami-5e8bb23b", "ami-759bc50a", "ami-4aa04129", "ami-de8fb135", "ami-188fba77", "ami-2a7d75c0"}
var DefaultAMI = []string{"ami-5e8bb23b"}
// Schedule runs `what` immediately and then again after every `delay`, until a
// value is sent on (or the caller closes) the returned stop channel.
func Schedule(what func(), delay time.Duration) chan bool {
stop := make(chan bool)
go func() {
for {
what()
select {
case <-time.After(delay):
case <-stop:
return
}
}
}()
return stop
}
func StringToFloat(stringVal string) float64 {
// convert a string to a float number, falling back to 0 if it does not parse
if s, err := strconv.ParseFloat(stringVal, 64); err == nil {
return s
}
return 0
}
func ValueAssignString(value *string, fallback string) string {
if value != nil {
return *value
}
return fallback
}
func ValueAssignInt64(value *int64, fallback int64) int64 {
if value != nil {
return *value
}
return fallback
}
func StringInSlice(a string, list []string) bool {
for _, b := range list {
if b == a {
return true
}
}
return false
}
func FloatToString(input_num float64) string {
// to convert a float number to a string
return strconv.FormatFloat(input_num, 'f', 6, 64)
}
func InitVMStartScriptBootRate(dummyURL string, templateName string) string {
var vmStartScript = "#!/bin/bash\n"+
"mkdir test_VM\n"+
// no space around "=" in a shell assignment, or bash treats "myip" as a command
"myip=\"$(hostname --ip-address)\"\n"+
"echo \"My WAN/Public IP address: $myip\" >test_VM/test\n"+
"curl \""+dummyURL+"/?vmip=$myip&templateName="+templateName+"\""
return vmStartScript
}
var UserConfig = AwsUserConfig{AwsAccessKeyId: "<KEY>", AwsSecretAccessKey: "<KEY>",
Region: aws.String("us-east-2")}
func InitUserConfigEnvVars(w http.ResponseWriter, r *http.Request){
UserConfig.AwsAccessKeyId = os.Getenv("AWS_ACCESS_KEY_ID")
UserConfig.AwsSecretAccessKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
UserConfig.Region = aws.String(os.Getenv("REGION"))
w.Header().Set("Content-Type", "application/json; charset=UTF-8")
w.WriteHeader(http.StatusOK)
}
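The small pure helpers above (StringToFloat, StringInSlice, FloatToString) are easy to sanity-check in isolation. Below is a minimal, self-contained sketch: the three helpers are duplicated verbatim so it runs without the rest of the package, and the `main` wrapper is an illustrative addition, not part of the original service.

```go
package main

import (
	"fmt"
	"strconv"
)

// StringToFloat parses a decimal string, falling back to 0 on failure
// (duplicated from the package above to keep this sketch self-contained).
func StringToFloat(stringVal string) float64 {
	if s, err := strconv.ParseFloat(stringVal, 64); err == nil {
		return s
	}
	return 0
}

// StringInSlice reports whether a appears in list (duplicated as above).
func StringInSlice(a string, list []string) bool {
	for _, b := range list {
		if b == a {
			return true
		}
	}
	return false
}

// FloatToString renders a float with six decimal places (duplicated as above).
func FloatToString(inputNum float64) string {
	return strconv.FormatFloat(inputNum, 'f', 6, 64)
}

func main() {
	fmt.Println(StringToFloat("3.5"))                              // 3.5
	fmt.Println(StringToFloat("oops"))                             // 0 (the silent fallback)
	fmt.Println(StringInSlice("t2.micro", []string{"t2.micro"}))   // true
	fmt.Println(FloatToString(1.5))                                // 1.500000
}
```

Note that StringToFloat silently maps unparseable input to 0, so callers cannot distinguish "0" from a parse error; that is the original design, reproduced here unchanged.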
|
go
|
#include "ogm/ast/parse.h"
#include <iostream>
#include <cstring>
payload_type_t ogm_ast_tree_get_payload_type(
const ogm_ast_t* tree
)
{
switch (tree->m_subtype)
{
case ogm_ast_st_imp_assignment:
case ogm_ast_st_exp_arithmetic:
case ogm_ast_st_exp_accessor:
case ogm_ast_st_imp_body:
case ogm_ast_st_imp_loop:
case ogm_ast_st_imp_control:
return ogm_ast_payload_t_spec;
case ogm_ast_st_exp_literal_primitive:
return ogm_ast_payload_t_literal_primitive;
case ogm_ast_st_imp_var:
return ogm_ast_payload_t_declaration;
case ogm_ast_st_imp_enum:
case ogm_ast_st_exp_literal_struct:
return ogm_ast_payload_t_declaration_enum;
case ogm_ast_st_exp_literal_function:
return ogm_ast_payload_t_literal_function;
case ogm_ast_st_exp_identifier:
return ogm_ast_payload_t_string;
case ogm_ast_st_imp_body_list:
return ogm_ast_payload_t_string_list;
default:
return ogm_ast_payload_t_none;
}
}
const char* ogm_ast_tree_get_payload_string(
const ogm_ast_t* tree
)
{
if (ogm_ast_tree_get_payload_type(tree) == ogm_ast_payload_t_string)
{
return (const char*) tree->m_payload;
}
else
{
return nullptr;
}
}
const char* ogm_ast_tree_get_payload_string_list(
const ogm_ast_t* tree,
size_t i
)
{
if (ogm_ast_tree_get_payload_type(tree) == ogm_ast_payload_t_string_list)
{
if (i >= tree->m_sub_count) return nullptr;
return ((const char**) tree->m_payload)[i];
}
else
{
return nullptr;
}
}
bool ogm_ast_tree_get_spec(
const ogm_ast_t* tree,
ogm_ast_spec_t* out_spec
)
{
if (ogm_ast_tree_get_payload_type(tree) == ogm_ast_payload_t_spec)
{
*out_spec = tree->m_spec;
return true;
}
else
{
*out_spec = ogm_ast_spec_none;
return false;
}
}
bool ogm_ast_tree_get_payload_literal_primitive(
const ogm_ast_t* tree,
ogm_ast_literal_primitive_t** out_payload
)
{
if (tree->m_subtype == ogm_ast_st_exp_literal_primitive)
{
*out_payload = (ogm_ast_literal_primitive_t*)tree->m_payload;
return true;
}
return false;
}
bool ogm_ast_tree_get_payload_declaration(
const ogm_ast_t* tree,
ogm_ast_declaration_t** out_payload
)
{
if (tree->m_subtype == ogm_ast_st_imp_var || tree->m_subtype == ogm_ast_st_imp_enum || tree->m_subtype == ogm_ast_st_exp_literal_struct)
{
*out_payload = (ogm_ast_declaration_t*)tree->m_payload;
return true;
}
return false;
}
bool ogm_ast_tree_get_payload_function_literal(
const ogm_ast_t* tree,
ogm_ast_literal_function_t** out_payload
)
{
if (tree->m_subtype == ogm_ast_st_exp_literal_function)
{
*out_payload = (ogm_ast_literal_function_t*)tree->m_payload;
return true;
}
return false;
}
void print_indent(
int32_t indent
)
{
for (int32_t i = 0; i < indent; i++)
{
std::cout << " ";
}
}
void ogm_ast_tree_print_helper(
const ogm_ast_t* tree,
int32_t indent
)
{
print_indent(indent);
if (tree->m_type == ogm_ast_t_exp)
{
std::cout << "& ";
}
else
{
std::cout << "> ";
}
std::cout << ogm_ast_subtype_string[tree->m_subtype];
ogm_ast_spec_t spec;
if (ogm_ast_tree_get_spec(tree, &spec))
{
std::cout << " " << ogm_ast_spec_string[spec];
}
// extra payload:
switch(tree->m_subtype)
{
case ogm_ast_st_exp_literal_primitive:
{
ogm_ast_literal_primitive_t* payload;
ogm_ast_tree_get_payload_literal_primitive(tree, &payload);
std::cout << ": " << payload->m_value;
}
break;
case ogm_ast_st_exp_identifier:
std::cout << ": " << (char*) tree->m_payload;
break;
default:
break;
}
std::cout << " [" << tree->m_start.m_line + 1 << ":" << tree->m_start.m_column + 1
<< " - " << tree->m_end.m_line + 1 << ":" << tree->m_end.m_column + 1 << "]";
std::cout << std::endl;
// subtrees
for (int32_t i = 0; i < tree->m_sub_count; i++)
{
ogm_ast_tree_print_helper(
tree->m_sub + i,
indent + 1
);
}
}
void ogm_ast_tree_print(
const ogm_ast_t* tree
)
{
ogm_ast_tree_print_helper(tree, 0);
}
|
cpp
|
//go:build go1.7
// +build go1.7
package stscreds
import (
"fmt"
"net/http"
"reflect"
"strings"
"testing"
"time"
"github.com/sotowang/aws-sdk-go/aws"
"github.com/sotowang/aws-sdk-go/aws/client"
"github.com/sotowang/aws-sdk-go/aws/credentials"
"github.com/sotowang/aws-sdk-go/aws/request"
"github.com/sotowang/aws-sdk-go/service/sts"
"github.com/sotowang/aws-sdk-go/service/sts/stsiface"
)
func TestWebIdentityProviderRetrieve(t *testing.T) {
cases := map[string]struct {
roleARN string
tokenPath string
sessionName string
newClient func(t *testing.T) stsiface.STSAPI
duration time.Duration
expectedError string
expectedCredValue credentials.Value
}{
"session name case": {
roleARN: "arn01234567890123456789",
tokenPath: "testdata/token.jwt",
sessionName: "foo",
newClient: func(t *testing.T) stsiface.STSAPI {
return mockAssumeRoleWithWebIdentityClient{
t: t,
doRequest: func(t *testing.T, input *sts.AssumeRoleWithWebIdentityInput) (
*sts.AssumeRoleWithWebIdentityOutput, error,
) {
if e, a := "foo", *input.RoleSessionName; e != a {
t.Errorf("expected %v, but received %v", e, a)
}
if input.DurationSeconds != nil {
t.Errorf("expect no duration, got %v", *input.DurationSeconds)
}
return &sts.AssumeRoleWithWebIdentityOutput{
Credentials: &sts.Credentials{
Expiration: aws.Time(time.Now()),
AccessKeyId: aws.String("access-key-id"),
SecretAccessKey: aws.String("secret-access-key"),
SessionToken: aws.String("session-token"),
},
}, nil
},
}
},
expectedCredValue: credentials.Value{
AccessKeyID: "access-key-id",
SecretAccessKey: "secret-access-key",
SessionToken: "session-token",
ProviderName: WebIdentityProviderName,
},
},
"with duration": {
roleARN: "arn01234567890123456789",
tokenPath: "testdata/token.jwt",
sessionName: "foo",
duration: 15 * time.Minute,
newClient: func(t *testing.T) stsiface.STSAPI {
return mockAssumeRoleWithWebIdentityClient{
t: t,
doRequest: func(t *testing.T, input *sts.AssumeRoleWithWebIdentityInput) (
*sts.AssumeRoleWithWebIdentityOutput, error,
) {
if e, a := int64((15*time.Minute)/time.Second), *input.DurationSeconds; e != a {
t.Errorf("expect %v duration, got %v", e, a)
}
return &sts.AssumeRoleWithWebIdentityOutput{
Credentials: &sts.Credentials{
Expiration: aws.Time(time.Now()),
AccessKeyId: aws.String("access-key-id"),
SecretAccessKey: aws.String("secret-access-key"),
SessionToken: aws.String("session-token"),
},
}, nil
},
}
},
expectedCredValue: credentials.Value{
AccessKeyID: "access-key-id",
SecretAccessKey: "secret-access-key",
SessionToken: "session-token",
ProviderName: WebIdentityProviderName,
},
},
}
for name, c := range cases {
t.Run(name, func(t *testing.T) {
p := NewWebIdentityRoleProvider(c.newClient(t), c.roleARN, c.sessionName, c.tokenPath)
p.Duration = c.duration
credValue, err := p.Retrieve()
if len(c.expectedError) != 0 {
if err == nil {
t.Fatalf("expect error, got none")
}
if e, a := c.expectedError, err.Error(); !strings.Contains(a, e) {
t.Fatalf("expect error to contain %v, got %v", e, a)
}
return
}
if err != nil {
t.Fatalf("expect no error, got %v", err)
}
if e, a := c.expectedCredValue, credValue; !reflect.DeepEqual(e, a) {
t.Errorf("expected %v, but received %v", e, a)
}
})
}
}
type mockAssumeRoleWithWebIdentityClient struct {
stsiface.STSAPI
t *testing.T
doRequest func(*testing.T, *sts.AssumeRoleWithWebIdentityInput) (*sts.AssumeRoleWithWebIdentityOutput, error)
}
func (c mockAssumeRoleWithWebIdentityClient) AssumeRoleWithWebIdentityRequest(input *sts.AssumeRoleWithWebIdentityInput) (
*request.Request, *sts.AssumeRoleWithWebIdentityOutput,
) {
output, err := c.doRequest(c.t, input)
req := &request.Request{
HTTPRequest: &http.Request{},
Retryer: client.DefaultRetryer{},
}
req.Handlers.Send.PushBack(func(r *request.Request) {
r.HTTPResponse = &http.Response{}
r.Data = output
r.Error = err
var found bool
for _, retryCode := range req.RetryErrorCodes {
if retryCode == sts.ErrCodeInvalidIdentityTokenException {
found = true
break
}
}
if !found {
c.t.Errorf("expect ErrCodeInvalidIdentityTokenException error code to be retry-able")
}
})
return req, output
}
func TestNewWebIdentityRoleProviderWithOptions(t *testing.T) {
const roleARN = "a-role-arn"
const roleSessionName = "a-session-name"
cases := map[string]struct {
options []func(*WebIdentityRoleProvider)
expect WebIdentityRoleProvider
}{
"no options": {
expect: WebIdentityRoleProvider{
client: stubClient{},
tokenFetcher: stubTokenFetcher{},
roleARN: roleARN,
roleSessionName: roleSessionName,
},
},
"with options": {
options: []func(*WebIdentityRoleProvider){
func(o *WebIdentityRoleProvider) {
o.Duration = 10 * time.Minute
o.ExpiryWindow = time.Minute
},
},
expect: WebIdentityRoleProvider{
client: stubClient{},
tokenFetcher: stubTokenFetcher{},
roleARN: roleARN,
roleSessionName: roleSessionName,
Duration: 10 * time.Minute,
ExpiryWindow: time.Minute,
},
},
}
for name, c := range cases {
t.Run(name, func(t *testing.T) {
p := NewWebIdentityRoleProviderWithOptions(
stubClient{}, roleARN, roleSessionName,
stubTokenFetcher{}, c.options...)
if !reflect.DeepEqual(c.expect, *p) {
t.Errorf("expect:\n%v\nactual:\n%v", c.expect, *p)
}
})
}
}
type stubClient struct {
stsiface.STSAPI
}
type stubTokenFetcher struct{}
func (stubTokenFetcher) FetchToken(credentials.Context) ([]byte, error) {
return nil, fmt.Errorf("stubTokenFetcher should not be called")
}
|
go
|
Horoscope today, 11 February 2023: With the beginning of 2023, we are all busy working and planning for the year ahead. Taking a look at your complete 2023 forecast can be a huge help and relief, keeping you on the right track towards the success you are aiming for this year in every single area of your life.
Many of us believe in the science of the stars: the solution to our problems is hidden in astrology, and the aura of a good day can resolve much of what troubles us. Everyone carries pending questions and issues, and astrology can really help you address them all.
We are all curious to know how the new year will unfold for us. We have already made many plans for the future and hope to achieve more than we did last year. Astrology gives a clear vision of the right path, helping you achieve success in future endeavours while also alerting you to untoward situations in life. So here is what your day looks like today, according to your zodiac sign.
Aries Horoscope Today (March 21 – April 19)
Your health will improve after noon, but the mind may turn irritable as the Moon moves towards Ketu in the afternoon. There is a very strong chance of a government job today. You will handle all your tasks wisely, and plans for an evening outing will take shape. A meeting with your partner’s friends will make the evening very pleasant.
Taurus Horoscope Today (April 20 – May 20)
Your nervous system will be weak today, and you may feel pain in your feet. A dispute or estrangement with your father is possible, so keep your distance. A relationship with a special close friend could also sour through deceit. Your enemies will fear you today, and you will dominate them. Luck is on your side; if you love someone, propose marriage today.
Gemini Horoscope Today (May 21 – June 20)
The day may be loose in terms of health, so do not ignore it and pay attention to your diet. The focus today will be on earning money and on fulfilling all your responsibilities towards the family. You may go on a journey; be careful while travelling. Look after your mother’s health, which may deteriorate. You will feel very emotional in your love relationship today.
Cancer Horoscope Today (June 21 – July 22)
You will reach your destination today only through hard work. The mood will be good until the afternoon, after which it may sour, and differences with your mother are possible. With the help of friends, even a difficult task will become easy and successful. The day will be spent in the pursuit of happiness; take care of your partner’s health.
Leo Horoscope Today (July 23 – August 22)
Relationships will be good and strong.
Virgo Horoscope Today (August 23 – September 22)
You may get angry over small things, and that anger can make your tongue bitter, so exercise some control. The time is good for buying property or a vehicle, but proceed carefully and avoid disputed property. If you are considering a business partnership, do not make that mistake. Government work pending from earlier should be completed today, as it is a day of success in official matters.
Avoid getting entangled with your life partner.
Libra Horoscope Today (September 23 – October 22)
The Moon-Ketu alliance will form in your sign after noon today, which may leave the mind confused. It will be very important to control your speech, or old relationships may be spoiled. Luck will be very strong, and all your work will be done by evening, though your mind may still wander. It is a very good day for work, and business people will also profit.
Moods with your partner will swing between hot and soft.
Scorpio Horoscope Today (October 23 – November 21)
Relations with your elder brother will be sweet today, and you will benefit from him, but for gold traders the day will bring ups and downs. You will be praised at the workplace, and the results of your creative activities will show in the times ahead. You will receive some good news and may also go on a short religious journey.
If you are thinking of getting married, it would be better to wait a while.
Sagittarius Horoscope Today (November 22 – December 21)
You may have some complaints of phlegm and cold today, but you will find relief by evening. The day will be profitable for those in the media and for cloth and silver traders. Excess expenses could spoil the budget. An old friend will call you personally and ask to meet. Maintain the sweetness in your marital relationship, or things could go wrong.
Capricorn Horoscope Today (December 22 – January 19)
Your personality will be very influential today; whether at work or at home, your importance will be recognised. Take care of your health, though, as too much chilli and spice can cause stomach trouble. People in the technical sector will get a new opportunity that will prove beneficial in the future.
Avoid arguments in your love relationship.
Aquarius Horoscope Today (January 20 – February 18)
Your respect and honour will increase today, and it is a day for promotion; a higher position may even be achieved. You will take part in many creative works, the mood will be very romantic, and you will go out for a walk in the evening. Profit in business is likely today. Give importance to your life partner rather than to outsiders.
Pisces Horoscope Today (February 19 – March 20)
Mental worries will linger today; avoid being overly emotional, or someone may take advantage of you. Health will also be a little weak, but the company of family will encourage you, and your financial condition will be very good. Love relations will be good today, so give your partner time.
|
english
|
And, he is back, as full of bluster and bombast as ever. I speak, as some of you may have guessed, of our Home Minister. He has been so invisible since the pandemic arrived that Delhi’s pulsating political grapevine has been abuzz with rumours that he was seriously unwell. There have also been rumours that he is no longer the Prime Minister’s favourite minister on account of the bungled handling of a long list of domestic issues. Remember how it was Amit Shah who, with great aplomb, piloted in Parliament the laws that amended the Citizenship Act and abrogated Article 370.
He was riding so high that he imposed his will brutally on the Kashmir Valley. There were months of curfew and an Internet shutdown that has become famous for being the longest shutdown ever. He made promises of the dawning of a golden new era of prosperity and development. But, as we approach the first anniversary of Kashmir’s altered status, all we hear is bad news from the Valley. Violent, jihadist groups seem to have so revived their horrible activities that last week a Hindu sarpanch, Ajay Pandita, was murdered despite his having pleaded for his security to be enhanced. A video clip of his pleading for better security went viral after he was killed. There has been no comment so far from the Home Minister who is directly responsible for law and order in the new Union Territory of Jammu and Kashmir.
Since his reappearance, Mr Shah has given a series of television interviews. But, they appear to have been stage-managed so as to allow the minister to get away with half-truths and lies. The subject of the protests and violence that swept across the country after the CAA barely found mention in them, despite the protests being a direct result of the Home Minister’s speeches. In these he repeatedly warned that the CAA was only a first step towards creating a National Register of Citizens. He has never been asked about these speeches or his ugly choice of words like ‘termite’. It was the minister’s words that alarmed Indian Muslims and made them fearful about proving their citizenship. So, they took to the streets in protest. The violence and chaos that resulted were because of administrative ineptitude.
Since I have been accused of shielding Modi by shifting blame, I want to make it clear that I believe his administrative skills are being rightly questioned. He became Prime Minister with more administrative experience than any other, so nobody doubted his ability to govern. Why then was there disruption and economic collapse before the pandemic? Covid-19 appears only to have dealt the final blow. His devotees assert that if Modi had not been Prime Minister, there would have been bodies piled high in the streets of Delhi and Mumbai. We do not know. What we do know is that serious mistakes were made.
What reason was there to give four days’ notice before declaring a Janata Curfew and just four hours before imposing that first nationwide lockdown? The Prime Minister has not given any interviews since the pandemic arrived or he would have to explain other mistakes, of which the most serious was the inability of his administration to anticipate the exodus of migrant workers from our cities. The Home Minister has spoken for him, and it is sad that he lied.
In one of his recent interviews he said that the idea behind giving no notice before that first lockdown was to try and keep migrant workers from carrying the virus back to their villages. But, then he told his first lie. He said that full arrangements had been made for the suddenly homeless and jobless workers to be provided shelter and food in the cities. Not true. Or they would not have been desperate enough to start walking home. The second lie he has repeated more than once is that full arrangements were made for them to be transported home. Not true.
Instead of lies and half-truths, it would be better for the Prime Minister and his ministers to have the humility to apologise for the terrible suffering that millions of Indians have endured. They suffered because of administrative failures on a criminal scale. Since Modi had nearly two decades of administrative experience before becoming Prime Minister, these failures of his administration are both surprising and very worrying.
Now that India is opening up without ‘flattening the curve’, we are going to need the Prime Minister to really lead. He could begin by implementing those reforms that promise to make the revival of the economy easier. It is hard to remember a bleaker moment in recent Indian history. Our healthcare services are showing signs of collapse despite the lockdowns. And, it is going to be a long while before the wheels of the economy begin to turn at enough speed to create new jobs and bring back those that have been lost in the past three months.
It is a time that will test Modi’s leadership and administrative skills to their fullest. This is why it is unfortunate that with the return of Mr Shah, we are beginning to see games of dirty politics being played in states not ruled by BJP governments. There are ugly stories from Rajasthan and murmurings from Maharashtra. This is no time to start toppling state governments.
|
english
|
from surprise import Dataset, KNNBaseline, Reader
from surprise.model_selection import LeaveOneOut, train_test_split
import pandas as pd

from movies_analyzer.Movies import Movies, RATINGS
from movies_recommender.utils import get_popularity_ranking


class RecommendationDataSet:
    def __init__(self, movies: Movies):
        self.movies = movies
        self.dataset_df = pd.read_csv(movies.movielens_path / RATINGS)

        # line_format - order of the columns in the ratings file
        # sep         - separator of the csv file
        # skip_lines  - skip the header row
        reader = Reader(line_format='user item rating timestamp', sep=',', skip_lines=1)
        self.dataset = Dataset.load_from_file(str(self.movies.movielens_path / RATINGS),
                                              reader=reader)
        self.full_dataset = self.dataset.build_full_trainset()

        # popularity ranking
        self.ratings, self.rankings = get_popularity_ranking(self.full_dataset)

        # training artifacts, built later by build_train_test()
        self.train_set, self.test_set = None, None
        self.anti_test_set = None
        self.leave_one_out_train_set = None
        self.leave_one_out_test_set = None
        self.leave_one_out_anti_test_set = None
        self.similarity_algorithm = None

    def clear_training(self):
        self.train_set, self.test_set = None, None
        self.anti_test_set = None
        self.leave_one_out_train_set = None
        self.leave_one_out_test_set = None
        self.leave_one_out_anti_test_set = None
        self.similarity_algorithm = None

    def get_dataset_with_extended_user(self, watched):
        """
        Create a new dataset extended with one new user, based only on the given movie scores.

        :param watched: dict mapping movieId to the new user's rating
        :return: tuple of (new user id, full trainset including the new user)
        """
        df = pd.DataFrame.from_dict(watched, orient='index', columns=['rating'])
        df.reset_index(inplace=True)
        df.rename(columns={'index': 'movieId'}, inplace=True)

        new_user_id = max(self.dataset_df['userId']) + 1
        df['userId'] = new_user_id

        # DataFrame.append is deprecated; concatenate the new user's rows instead
        rating_df = pd.concat(
            [self.dataset_df[['userId', 'movieId', 'rating']],
             df[['userId', 'movieId', 'rating']]],
            ignore_index=True, sort=False)
        rating_df['movieId'] = rating_df['movieId'].astype(str)

        reader = Reader(rating_scale=(1, 5))
        dataset = Dataset.load_from_df(rating_df[['userId', 'movieId', 'rating']], reader)
        full_dataset = dataset.build_full_trainset()
        return new_user_id, full_dataset

    def build_train_test(self, test_size=.25):
        # Train set / test set used to evaluate results
        self.train_set, self.test_set = train_test_split(self.dataset, test_size=test_size,
                                                         random_state=1)

        # https://surprise.readthedocs.io/en/stable/trainset.html#surprise.Trainset.build_anti_testset
        # Pairs where the user u and the item are known, but the rating is not in the trainset
        self.anti_test_set = self.full_dataset.build_anti_testset()

        # Cross-validation iterator where each user has exactly one rating in the test set
        leave_one_out_set = LeaveOneOut(n_splits=1, random_state=1)
        loo_train_set, loo_test_set = list(leave_one_out_set.split(self.dataset))[0]
        self.leave_one_out_train_set = loo_train_set
        self.leave_one_out_test_set = loo_test_set
        self.leave_one_out_anti_test_set = loo_train_set.build_anti_testset()

        # Compute an item-item similarity matrix so we can measure diversity
        sim_options = {'name': 'cosine', 'user_based': False}
        self.similarity_algorithm = KNNBaseline(sim_options=sim_options)
        self.similarity_algorithm.fit(self.full_dataset)
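# Illustration (not part of the original module): the core of
# get_dataset_with_extended_user, sketched with plain Python structures so the
# idea is visible without pandas/surprise. The helper name is hypothetical.
def _extend_ratings(ratings, watched):
    """ratings: list of (userId, movieId, rating) tuples,
    watched: dict mapping movieId to the new user's rating."""
    new_user_id = max(user for user, _, _ in ratings) + 1
    new_rows = [(new_user_id, str(movie), score) for movie, score in watched.items()]
    return new_user_id, ratings + new_rows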
|
python
|
England pace spearhead Mark Wood said he is unsure if he will recover from a hip injury in time to play in Sunday’s Twenty20 World Cup final against Pakistan but could throw his name into the hat if skipper Jos Buttler desperately needs him.
Wood, who is nursing a right hip problem, missed England’s brilliant 10-wicket victory over India on Thursday along with injured batsman Dawid Malan.
“I tried my best to make the last game but I couldn’t bowl at the intensity and speeds required to play for England,” Wood told the BBC.
“I couldn’t get my hip going. Hopefully if required I can try and get it right for this game -- I don’t know if I’ll be able to.”
Wood’s pace has added bite to England’s attack in Australia with the 32-year-old claiming nine wickets in the group stage of the tournament.
The fast bowler missed the Indian Premier League this year after sustaining an elbow injury during England’s Test series in the West Indies, and the team will hope he recovers before their Test series in Pakistan, which starts on Dec. 1.
|
english
|
# Flows
Flows can be deployed to Prefect Cloud for scheduling and execution, as well as management of run histories, logs, and other important metrics.
## Deploying a flow from Prefect Core
To deploy a flow from Prefect Core, simply use its `deploy()` method:
```python
flow.deploy(project_name="<a project name>")
```
Note that this assumes you have already [authenticated](auth.md) with Prefect Cloud. For more information on Flow deployment see [here](../flow-deploy.html).
## Deploying a flow <Badge text="GQL"/>
To deploy a flow via the GraphQL API, first serialize the flow to JSON:
```python
flow.serialize()
```
Next, use the `createFlow` GraphQL mutation to pass the serialized flow to Prefect Cloud. You will also need to provide a project ID:
```graphql
mutation($flow: JSON!) {
createFlow(input: { serializedFlow: $flow, projectId: "<project id>" }) {
id
}
}
```
```json
// graphql variables: the key must match the $flow variable declared above
{
  "flow": <the serialized flow JSON>
}
```
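For reference, here is one way the mutation above might be assembled from Python using only the standard library. The endpoint URL, token handling, and helper name are illustrative assumptions, not part of Prefect's documented API; the function builds the request but does not send it:

```python
import json
import urllib.request

def build_create_flow_request(serialized_flow, project_id, token,
                              url="https://api.prefect.io/graphql"):
    """Assemble (but do not send) the createFlow GraphQL request."""
    query = """
    mutation($flow: JSON!) {
      createFlow(input: { serializedFlow: $flow, projectId: "%s" }) {
        id
      }
    }""" % project_id
    body = json.dumps({"query": query, "variables": {"flow": serialized_flow}})
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,  # token obtained via authentication
        },
    )
```

Sending the request (for example with `urllib.request.urlopen`) would return the new flow's `id` in the GraphQL response.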
## Flow Versions and Archiving <Badge text="GQL"/>
You can control how Cloud versions your Flows by providing a `versionGroupId` whenever you deploy a Flow (exposed via the `version_group_id` keyword argument in `flow.deploy`). Flows which provide the same `versionGroupId` will be considered versions of each other. By default, Flows with the same name in the same Project will be given the same `versionGroupId` and are considered "versions" of each other. Anytime you deploy a new version of a flow, Prefect Cloud will automatically "archive" the old version in place of the newly deployed flow. Archiving means that the old version's schedule is set to "Paused" and no new flow runs can be created.
You can always revisit old versions and unarchive them if, for example, you want the same Flow to run on two distinct schedules. To archive or unarchive a flow, use the following GraphQL mutations:
```graphql
mutation {
archiveFlow( input: { flowId: "your-flow-id-here" }) {
id
}
}
```
```graphql
mutation {
unarchiveFlow( input: { flowId: "your-flow-id-here" }) {
id
}
}
```
|
markdown
|