Just spoke to @SkyNews about why I think the accusations about Cummings WILL be the sort of story that cuts through with the public and why this will be politically difficult for the government. 1 - The public really don't like Cummings. Before all of this, just 14% had a favourable view of him, 45% unfavourable. He was even in negative territory among Conservative voters. Normally this doesn't matter too much, but he's not an ideal person to be front and centre. 2 - Goes without saying, but the public hate anything that sounds like public figures acting "hypocritically" or not following the rules that they ask the rest of us to follow. 3 - It comes at a time when there is already a gap opening up between the public and the government over lockdown policy, with the public thinking they are moving too fast. This could build into a narrative that those at the top don't care enough about public health. 4 - The public have loved to judge others during this crisis. While almost everyone says they have been following the rules, they are constantly upset/angry because there is a perception that others aren't.
english
// repo: daniel-shuy/webiny-js
import React from "react";
import { useHandler } from "@webiny/app/hooks/useHandler";
import { connect } from "@webiny/app-page-builder/editor/redux";
import { set } from "dot-prop-immutable";
import ConnectedSlate from "@webiny/app-page-builder/editor/components/ConnectedSlate";
import { ElementRoot } from "@webiny/app-page-builder/render/components/ElementRoot";
import { updateElement } from "@webiny/app-page-builder/editor/actions";
import { getElement } from "@webiny/app-page-builder/editor/selectors";

export const className = "webiny-pb-base-page-element-style webiny-pb-page-element-text";

const Text = props => {
    // Persist editor changes by writing the new value into the element's data.text
    const onChange = useHandler(props, ({ element, updateElement }) => value => {
        updateElement({ element: set(element, "data.text", value) });
    });

    return (
        <ElementRoot element={props.element} className={className}>
            <ConnectedSlate elementId={props.element.id} onChange={onChange} />
        </ElementRoot>
    );
};

export default connect<any, any, any>(
    (state, props) => ({ element: getElement(state, props.elementId) }),
    { updateElement }
)(Text);
typescript
Max Verstappen outpaced Valtteri Bottas in Friday's second free practice session ahead of the 2021 Mexican Grand Prix at the Autodromo Hermanos Rodriguez circuit. The Dutchman outpaced the entire field by 0.424 seconds and his title contender Lewis Hamilton by half a second. Mercedes looked like they had a strong start to the Mexican weekend earlier in the day as they led the field in the first practice session. By afternoon, however, it was Red Bull Racing's Verstappen who had a comfortable edge over the two Mercedes drivers. With Bottas and Hamilton second and third fastest in the practice session, Verstappen's teammate Sergio Perez slotted in fourth, less than a tenth slower than the second Mercedes. Although the Mexican was fourth-fastest, he was half a second behind his teammate. Ferrari drivers Carlos Sainz and Charles Leclerc were fifth and seventh fastest respectively, split by AlphaTauri's Pierre Gasly in sixth. Gasly's team-mate Yuki Tsunoda clocked the eighth-quickest lap and was followed by multiple world champions Fernando Alonso and Sebastian Vettel in ninth and tenth respectively. Alfa Romeo's Kimi Raikkonen slotted in P11 on the timesheet and was followed by McLaren's Lando Norris in P12. The McLaren team had a sluggish run in both sessions with Norris outside the top 10, while Daniel Ricciardo's session was thwarted by gearbox issues after just seven laps of running. The Australian was classified P15 in the session, a far cry from his or the team's US GP form. Alpine F1 driver Esteban Ocon clocked the 13th-fastest lap and was followed by Alfa Romeo's Antonio Giovinazzi in P14. Behind them, Haas driver Mick Schumacher and Aston Martin's Lance Stroll were classified P16 and P17 respectively. Williams drivers Nicholas Latifi and George Russell were classified 18th and 20th respectively in the practice session, with the latter suffering from grip issues similar to Ricciardo's.
The two Williams were split by Haas rookie Nikita Mazepin, who clocked the 19th-fastest lap of the session. Drivers continued to face similar grip issues with the dusty tarmac in the second practice session as they had in the first. While Mercedes seemed to have the upper hand earlier in the day, they were clearly edged out by Verstappen come noon. Hamilton's scrappy practice session was thwarted by setup issues, and his team-radio chatter indicated that the driver was unhappy with the balance of his car. The alarming indicator of the session was the half-second gap between the Briton and his title contender. Local driver Perez, who enjoys huge support at the circuit, was seen carrying out race-spec simulations for the majority of the practice session and is expected to be close to Verstappen's pace come qualifying. The circuit is expected to be Red Bull Racing territory overall, and it will be interesting to see how Mercedes keep up through the weekend. The fastest sector times of the practice session were dominated by Honda-powered cars, with Gasly quickest in the first sector and Verstappen dominating the remainder of the lap. The lap time averages seemed to favor the Red Bull Racing team, whose drivers had long stints on the medium tire compound, but Mercedes were quick on the hards. Hamilton and Bottas attempted most of their race simulations on the hard compound and some on the soft compound. Verstappen and Perez also opted for both medium and hard tire compounds for their race simulations. Looking at McLaren's compromised form in the practice session, one can expect low air density and grip issues on the unused tarmac to create a bit of drama in future sessions. But in terms of pace, Red Bull Racing might have a clear advantage over the rest come Sunday.
english
A video of an alien riding a New York City subway has gone viral across social media platforms. This comes after a recent U.S. congressional hearing where former military members testified about “non-human” activity they had discovered. The viral clip seems to have left netizens alarmed in light of the recent testimonies. However, the footage does not show a real ET. Twitter user @StanleyRoberts was among the many who uploaded the viral video of the "Martian" in a subway. The extraterrestrial creature appeared as one would have imagined, resembling an emoji. They were seen wearing a “Love Your Mother” t-shirt that also showed planet Earth. The entity was also barefoot, taking in their surroundings as others looked on. The video had amassed 7.5 million views at the time of writing this article. However, the entity is not a real extraterrestrial; it is simply a promotional campaign for a movie. Netizens coming across the "alien" are not seeing a real-life extraterrestrial entity but incredible makeup skills being used to promote a film. According to Trevor Decker, the alien is a character from the latest movie Jules, which is out in theaters at the moment. The marketing team of Jules put in the effort to create an alien that looked real in order to create buzz for their film. Tourists and locals have since been taking pictures and videos of the person dressed as an extraterrestrial. The movie is about a man’s life being upended when a UFO crashes into his backyard in rural Pennsylvania. He then befriends the entity, leading to matters going haywire. The movie has received 85% on Rotten Tomatoes. It is directed by Marc Turtletaub and stars Ben Kingsley, Harriet Sansom Harris, Donald Paul and Jane Curtin amongst others. Twitter also noted that there was not a real entity traveling the New York City subway. Hence, the ET is not real, and videos claiming that a real alien is on the subway are false.
Internet users had a field day with the viral clip. Many were certain that the Martian was not real. A few reactions to the videos floating on the internet read: It seems like the movie has left netizens worldwide in a frenzy.
english
Jaipur: After seeing a helicopter, his wife had once wistfully asked him how much it would cost to hire one, so this school teacher in Rajasthan's Alwar district decided to fulfil her wish on his retirement on Saturday. Scores of people gathered to watch as Ramesh Chand Meena, donning traditional attire and sunglasses, along with his wife Somoti and grandson Ajay, boarded a chopper at a helipad near his school in Saurai to fly to their residence in Malawali village, 22 km away. Meena, who was given a farewell after 34 years of service, said he booked the chopper service from New Delhi for Rs 3.70 lakh as he wanted to fulfil Somoti's wish. The flight was all of 18 minutes, but Meena said it was a memorable maiden experience of flying. "We were sitting on roof-top when my wife after seeing a chopper asked me about its hiring price. To fulfil her wish I decided to book a chopper on the retirement day. It was our first experience to fly. We enjoyed it immensely," Meena told PTI after his far-from-ordinary commute. Meena thanked the district administration for permitting him to fly after he put up a request. "I had taken all necessary permissions from the district administration and other departments. Thanks to district administration officials who made the process easy," he said. The couple has two sons: one is a teacher, while the other is an inspector in the Food Corporation of India. (This story has not been edited by News18 staff and is published from a syndicated news agency feed - PTI)
english
Defending champions Mumbai Indians (MI) will look to turn things around when they take on a star-studded Royal Challengers Bangalore (RCB) at the Wankhede Stadium on Tuesday (April 17). Mumbai have lost three games on the trot to start the tournament in the worst possible way. They began with a last-over defeat to Chennai Super Kings on April 7, followed by losses to Sunrisers Hyderabad (SRH) and Delhi Daredevils (DD). RCB, meanwhile, have won only one of their three outings so far. Both teams will look to give their best, which should make this encounter all the more interesting. Weather Forecast: According to AccuWeather, there will be no cloud cover over Mumbai, which will excite the fans. Humidity will also be lower than in the previous matches, which comes as good news for all the cricketers, including the foreign recruits. It will be partly sunny until 6:00 PM and is likely to turn hazy as the temperature falls. Temperatures will hover around 30 degrees Celsius, with humidity around 45 percent. Dew will play a part in this game, giving the team batting second an added advantage. As a result, both captains will likely opt to field first, as has been the scenario in the past 13 outings. With a negative net run-rate (-0.174), the Rohit Sharma-led Mumbai Indians side is lingering at the bottom of the IPL points table, whereas Bangalore is placed sixth with two points. Mumbai are known as slow starters in this lucrative Twenty20 competition, while the Virat Kohli-led Bangalore have looked dismal even with the star players in their camp. Here are more details regarding the forecast:
english
KKR vs SRH, IPL 2020, Match 8: Abu Dhabi Weather Forecast and Pitch Report: KKR lost their previous match here by a huge margin of 49 runs against MI. But unlike SRH, they know what to expect from the pitch this time around. It is expected to be a sunny day in Abu Dhabi, with the maximum temperature around 37 degrees Celsius and the minimum around 29 degrees. The sky will mainly be clear and there is almost no chance of precipitation, so fans can expect an uninterrupted, quality match. The playing conditions will be a bit on the heavier side, with over 50 per cent humidity. The pitch at the Sheikh Zayed Stadium in Abu Dhabi certainly looks to favour the batsmen if we take the last two matches played here into account. But it is the same pitch where Kolkata have already lost a match, in which they were restricted to 146 while chasing a target of 196 against Mumbai Indians. Although Sunil Narine proved very economical, pacer Pat Cummins fared very badly the last time, conceding 49 runs in just 3 overs. KKR's loss had a lot to do with the batting too, but at least now they know what to expect playing here. SRH will play here for the first time in this tournament. With a balanced team and a good batting line-up, they can benefit from the pitch. The opening match of this edition of the IPL was also played here, when Mumbai Indians and Chennai Super Kings showed that bowlers could have a tough time on this ground. So, overall, fans can expect a high-scoring, competitive match here.
english
Kabul: The casualty total from Tuesday's major attack in Kabul has risen to 64 killed, more than double the total previously estimated by police, and 347 wounded, Afghan Interior Ministry spokesman Sediq Sediqqi said. He said today that most of those killed in the attack, which hit a security services office in the heart of the government and diplomatic area of the Afghan capital, were civilians. The attack, which was quickly claimed by the Taliban, was the deadliest single incident of its kind in Kabul since 2011 and came only days after the Islamist insurgent movement announced the start of its annual spring offensive. It began at around 9.00 am (0530 GMT), in the middle of the morning rush hour, when a suicide bomber in a vehicle packed with explosives blew himself up in front of an office of a department of the National Security Directorate. In a pattern similar to major attacks in Kabul and other Afghan cities, the bombing was followed by one or more gunmen who engaged in an extended shootout with security forces. The attack underlined concerns raised in a United Nations report this week, which said an increase in urban warfare had caused a spike in civilian casualties during the first three months of the year. (This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)
english
The purported release of the Omani head of a private Doha-based firm providing defence training to the Qatari navy has seen the family of an ex-Indian Navy officer, one of eight employees of the same company taken into custody on August 30, renew their call for an expeditious release. Among the eight Indians in custody in Qatar is Commander Purnendu Tiwari (retd), managing director of Doha Global Technologies Consultancy Services. Twitter user @DrMeetuBhargava, who identified herself to The Indian Express on Thursday as Commander Tiwari’s sister, said the family had received “confirmed information” that Khamis Al Ajmi, a retired squadron leader of the Royal Oman Air Force and CEO of the Doha firm, had been released. The Indian Express was not able to independently confirm Al Ajmi’s arrest and release. Bhargava declined to reveal who had given her the information of Al Ajmi’s release but said it had raised her hopes. She again appealed to the government to “expedite the release” of the Navy veterans without delay of “even a day or two”. “When the CEO of the company has been released, all other officers of the company should be released,” Bhargava said. All eight in custody were being allowed to communicate once a week with their families, Bhargava said. “They are all mentally fatigued and depressed, and if they are not released now, their mental and physical health will deteriorate badly,” she said. The eight are being held by the State Security Bureau, the Qatari intelligence agency. Each is reportedly confined to a solitary cell. Efforts by India to have the men released have not yet succeeded; Bhargava said they are being held “illegally”. “The Indian government should act immediately, swiftly and decisively if they really care about their defence personnel, as today is the 80th day of the illegal solitary confinement of our senior citizen Navy veterans in Doha. They are suffering from medical ailments due to their age,” she said.
The families have not been told about the charges against the men, all of whom were senior Indian Navy officers at the time of retirement. In a letter to the Indian Embassy in Doha, the wife of one of the men wrote that her husband had been escorted to their home on October 5 but was not allowed to speak to her, and was taken away again by the four escorts after he packed some belongings. An MEA spokesman said earlier this month that the government is making efforts to bring back the men. The Indian Express has learnt that an Indian official travelled to Doha late last month, but efforts to secure the release of the men have not yielded a breakthrough.
english
The website of Swedish club Malmo FF crashed following the news that Zlatan Ibrahimovic, who started his career playing for the Swedish champions, will return to his homeland to take part in the group stage match with PSG. PSG were pitted against Real Madrid, Shakhtar Donetsk and Malmo FF in Group A after the draw for the Champions League group stage took place in Monaco. Fans flocked to the club's homepage after the groups were drawn on Thursday to find out how they could order tickets for the highly anticipated match, causing so much traffic that the whole site crashed. As reported by The Guardian, Malmo’s head of communications Peter Ahlander said: “There was a lot of pressure on it [the website] just after the draw was made,” adding that the club were delighted with the draw. Zlatan led the tributes to his former club when they clawed their way into the tournament for the second year running, having congratulated Malmo FF when they first made it to the group stage of the Champions League. Earlier, Malmo qualified for the group stage at the expense of Scottish champions Celtic, who lost the Champions League play-off 3-4 on aggregate. Adding to the love of Zlatan in Sweden is the fact that he is one of only ten players to have made 100 appearances for the Swedish national team, and he is the country’s all-time leading goalscorer with 56 goals to his name. The 33-year-old striker has represented Sweden at two World Cups and three Euro finals. Ibrahimovic made 40 appearances for Malmo, finding the back of the net on 16 occasions. He went on to sign for the Dutch club Ajax in 2001 and later played for several elite clubs in his career, including Juventus, Barcelona, Inter Milan, AC Milan and PSG. The tall Swede has won league titles wherever he has gone but is still without a single Champions League medal.
However, he will be looking to add that to his resume as PSG once again will go all out in search of their first Champions League trophy ever.
english
<filename>README.md # docker-alpine-postgres
markdown
var class_y_t_music_uploader_1_1_providers_1_1_request_models_1_1_browse_artist_results_continuation_context_1_1_responsecontext = [
    [ "serviceTrackingParams", "de/d86/class_y_t_music_uploader_1_1_providers_1_1_request_models_1_1_browse_artist_results_continuation_context_1_1_responsecontext.html#ad96920467f862754869d3005b21edd7f", null ]
];
javascript
<reponame>mauromascarenhas/PGC_UFABC {"output":"The output should consist of the vector that describes the skyline as shown in the example above. In the skyline vector (v1, v2, v3, . . . , vn−2, vn−1, vn), the vi such that i is an even number represent a horizontal line (height). The vi such that i is an odd number represent a vertical line (x-coordinate). The skyline vector should represent the \"path\" taken, for example, by a bug starting at the minimum x-coordinate and traveling horizontally and vertically over all the lines that define the skyline. Thus the last entry in all skyline vectors will be a 0.","input":"The input is a sequence of building triples. All coordinates of buildings are integers less than 10,000 and there will be at least one and at most 5,000 buildings in the input file. Each building triple is on a line by itself in the input file. All integers in a triple are separated by one or more spaces. The triples will be sorted by Li , the left x-coordinate of the building, so the building with the smallest left x-coordinate is first in the input file.","level":9,"name":"The Skyline Problem","has_images":true,"description":"With the advent of high speed graphics workstations, CAD (computer-aided design) and other areas (CAM, VLSI design) have made increasingly effective use of computers. One of the problems with drawing images is the elimination of hidden lines \u2014 lines obscured by other parts of a drawing.\nYou must design a program to assist an architect in drawing the skyline of a city given the locations of the buildings in the city. To make the problem tractable, all buildings are rectangular in shape and they share a common bottom (the city they are built in is very flat). The city is also viewed as two-dimensional. A building is specified by an ordered triple (Li, Hi, Ri) where Li and Ri are left and right coordinates, respectively, of building i and Hi is the height of the building. 
In the diagram below buildings are shown on the left with triples\n\n(1,11,5),(2,6,7),(3,13,9),(12,7,16),(14,3,25),(19,18,22),(23,13,29),(24,4,28)\n\nthe skyline, shown on the right, is represented by the sequence:\n\n(1,11,3,13,9,0,12,7,16,3,19,18,22,3,23,13,29,0)","id":"1576","category":"Ad-Hoc","statistics":{"level":"9 / 10","submissions":305,"solved":82,"ratio":"26.89%"}}
json
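The Skyline Problem row above describes the classic sweep-line formulation: process building edges left to right and emit an (x, height) pair whenever the visible height changes. A minimal sketch of that approach in Python (the function name and input/output shapes are my own, not part of the dataset row; it uses a max-heap of active roofs and produces the flat vector format the problem statement describes, ending in 0):

```python
import heapq

def skyline(buildings):
    """Compute the skyline vector (x1, h1, x2, h2, ..., xn, 0) for (L, H, R) triples."""
    # events: (x, -H, R) for a building start, (R, 0, 0) for a building end;
    # sorting puts starts (negative second field) before ends at the same x
    events = sorted([(L, -H, R) for L, H, R in buildings] +
                    [(R, 0, 0) for _, _, R in buildings])
    result = []
    # max-heap of (negated height, expiry x); ground level never expires
    heap = [(0, float('inf'))]
    for x, neg_h, R in events:
        # discard roofs that have already ended at or before x
        while heap[0][1] <= x:
            heapq.heappop(heap)
        if neg_h:
            heapq.heappush(heap, (neg_h, R))
        h = -heap[0][0]
        # record a point only when the visible height actually changes
        if not result or result[-1] != h:
            result += [x, h]
    return result
```

On the example triples from the problem statement, this reproduces the skyline vector (1, 11, 3, 13, 9, 0, 12, 7, 16, 3, 19, 18, 22, 3, 23, 13, 29, 0).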
import sys
import os

# check SBMolGen_PATH setting
if os.getenv('SBMolGen_PATH') is None:
    print("The SBMolGen_PATH has not been defined, please set it before use!")
    exit(0)
else:
    SBMolGen_PATH = os.getenv('SBMolGen_PATH')
    sys.path.append(SBMolGen_PATH + '/utils')

from subprocess import Popen, PIPE
from math import *
import random
import random as pr
import numpy as np
from copy import deepcopy
import itertools
import time
import math
import argparse
import subprocess
from keras.preprocessing import sequence
from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem import Descriptors
from load_model import loaded_model
from make_smile import zinc_data_with_bracket_original, zinc_processed_with_bracket
from add_node_type_zinc import (chem_kn_simulation, make_input_smile, predict_smile,
                                check_node_type, node_to_add, expanded_node)
import yaml


class chemical:
    def __init__(self):
        self.position = ['&']
        self.num_atom = 8
        # vocabulary of SMILES tokens the RNN can emit
        self.vl = ['\n', '&', 'C', '1', 'N', '[C@@H]', '2', '[C@H]', '(', '=', 'O', ')',
                   'S', 'c', '[S@]', '[nH]', '[O-]', '[N+]', 'n', 'F', '#', '[C@]',
                   '[C@@]', '[S@@]', 'P', '/', '\\', 'Cl', 's', 'Br', 'o', '[NH3+]',
                   'I', '[n+]', '[nH+]', '3', '[N-]', '[S-]', 'B', '4', '5', '[NH+]',
                   '[Si]', '[P@]', '[NH2+]', '[P@@]', '[N@+]', '6', '[N@@+]', '[S@@+]',
                   '7', '8', '[P@@H]', '[n-]', '[C-]', '[P+]', '[Cu]', '[Ni]', '[Zn]',
                   '[Au-]', '[OH+]']

    def Clone(self):
        st = chemical()
        st.position = self.position[:]
        return st

    def SelectPosition(self, m):
        self.position.append(m)

    def Getatom(self):
        return [i for i in range(self.num_atom)]


class Node:
    def __init__(self, position=None, parent=None, state=None):
        self.position = position
        self.parentNode = parent
        self.childNodes = []
        self.child = None
        self.wins = 0
        self.visits = 0
        self.nonvisited_atom = state.Getatom()
        self.type_node = []
        self.depth = 0

    def Selectnode(self):
        # UCB1 selection: exploitation (wins/visits) + exploration term
        ucb = []
        print('UCB:')
        for i in range(len(self.childNodes)):
            ucb_tmp = (self.childNodes[i].wins / self.childNodes[i].visits
                       + c_val * sqrt(2 * log(self.visits) / self.childNodes[i].visits))
            ucb.append(ucb_tmp)
            print(self.childNodes[i].position, ucb_tmp)
        m = np.amax(ucb)
        indices = np.nonzero(ucb == m)[0]
        ind = pr.choice(indices)  # break ties randomly
        s = self.childNodes[ind]
        print('\n', 'index', ind, self.position, m)
        return s

    def Addnode(self, m, s):
        n = Node(position=m, parent=self, state=s)
        self.childNodes.append(n)

    def simulation(self, state):
        predicted_smile = predict_smile(model, state)
        input_smile = make_input_smile(predicted_smile)
        logp, valid_smile, all_smile = logp_calculation(input_smile)
        return logp, valid_smile, all_smile

    def Update(self, result):
        self.visits += 1
        self.wins += result


def MCTS(root, verbose=False):
    """Initialization of the chemical trees and grammar trees."""
    start_time = time.time()
    run_time = time.time() + 3600 * hours

    rootnode = Node(state=root)
    state = root.Clone()

    # global variables used to save valid compounds and simulated compounds
    valid_compound = []
    all_simulated_compound = []
    desired_compound = []
    max_logp = []
    desired_activity = []
    depth = []
    min_score = 1000
    score_distribution = []
    min_score_distribution = []
    generated_dict = {}  # dictionary of generated compounds
    dict_id = 1  # this id is used to save the best docking pose

    out_f = open(output_dir, 'a')
    while time.time() <= run_time:
        node = rootnode       # important! this is the tree node, distinct from state
        state = root.Clone()  # the state from initialization -- too important!!!

        """selection step"""
        node_pool = []
        while node.childNodes != []:
            node = node.Selectnode()
            state.SelectPosition(node.position)
            print("state position:", state.position)
        if len(state.position) >= 70:
            re = -1.0
            while node is not None:
                node.Update(re)
                node = node.parentNode
            continue
        if node.position == '\n':
            re = -1.0
            while node is not None:
                node.Update(re)
                node = node.parentNode
            continue

        """expansion step"""
        expanded = expanded_node(model, state.position, val, loop_num_nodeExpansion)
        new_compound = []
        nodeadded = []
        for n in range(simulation_num):
            nodeadded_tmp = node_to_add(expanded, val)
            all_posible = chem_kn_simulation(model, state.position, val, nodeadded_tmp)
            generate_smile = predict_smile(all_posible, val)
            new_compound_tmp = make_input_smile(generate_smile)
            nodeadded.extend(nodeadded_tmp)
            new_compound.extend(new_compound_tmp)
        print('nodeadded', nodeadded)
        print('new compound', new_compound)
        print('generated_dict', generated_dict)
        print('dict_id', dict_id)
        for comp in new_compound:
            print('lastcomp', comp[-1], ' ... ', comp[-1] == '\n')
        node_index, rdock_score, valid_smile, generated_dict = check_node_type(
            new_compound, score_type, generated_dict,
            sa_threshold=sa_threshold, rule=rule5, radical=radical_check,
            docking_num=docking_num, target_dir=target_dir,
            hashimoto_filter=hashimoto_filter, dict_id=dict_id, trial=trial)
        valid_compound.extend(valid_smile)
        score_distribution.extend(rdock_score)
        print('node', node_index, 'rdock_score', rdock_score, 'valid', valid_smile)
        out_f.write(str(valid_smile) + ', ' + str(rdock_score) + ', ' + str(min_score)
                    + ', ' + str(len(state.position)) + ', ' + str(time.time() - start_time))
        out_f.write('\n')
        out_f.flush()
        dict_id += 1

        if len(node_index) == 0:
            re = -1.0
            while node is not None:
                node.Update(re)
                node = node.parentNode
        else:
            re_list = []
            atom_checked = []
            for i in range(len(node_index)):
                m = node_index[i]
                atom = nodeadded[m]
                if atom not in atom_checked:
                    node.Addnode(atom, state)
                    node_pool.append(node.childNodes[len(atom_checked)])
                    depth.append(len(state.position))
                    atom_checked.append(atom)
                else:
                    node_pool.append(node.childNodes[atom_checked.index(atom)])
                for child in node.childNodes:
                    print(child.position)
                print('\n')
                score_index = 0 if score_type == 'SCORE' else 1
                print("current minimum score", min_score)
                if rdock_score[i][score_index] <= min_score:
                    min_score_distribution.append(rdock_score[i][score_index])
                    min_score = rdock_score[i][score_index]
                else:
                    min_score_distribution.append(min_score)
                """simulation"""
                if atom == '\n':
                    re = -1
                else:
                    # squashed reward relative to the baseline docking score
                    re = ((-(rdock_score[i][score_index] - base_rdock_score) * 0.1)
                          / (1 + abs(rdock_score[i][score_index] - base_rdock_score) * 0.1))
                re_list.append(re)
                print('atom', atom, 're_list', re_list)

            """backpropagation step"""
            for i in range(len(node_pool)):
                node = node_pool[i]
                while node is not None:
                    node.Update(re_list[i])
                    node = node.parentNode
            for child in node_pool:
                print(child.position, child.wins, child.visits)

    out_f.close()
    """check if we found the desired compound"""
    print("rdock_score", score_distribution)
    print("num valid_compound:", len(valid_compound))
    print("valid compounds", valid_compound)
    print("depth", depth)
    print("min_score", min_score_distribution)
    return valid_compound


def UCTchemical():
    one_search_start_time = time.time()
    time_out = one_search_start_time + 60 * 10
    state = chemical()
    best = MCTS(root=state, verbose=False)
    return best


if __name__ == "__main__":
    # set parameters
    argvs = sys.argv

    """read yaml file for configuration"""
    f = open(str(argvs[1]), "r+")
    conf = yaml.load(f, Loader=yaml.SafeLoader)
    f.close()

    trial = conf.get('trial', 1)
    c_val = conf.get('c_val', 1.0)
    loop_num_nodeExpansion = conf.get('loop_num_nodeExpansion', 1000)
    target = conf.get('target', 'CDK2')
    target_dir = conf.get('target_path', './')
    hours = conf.get('hours', 1)
    score_type = conf.get('score_type', 'SCORE.INTER')  # <SCORE> or <SCORE.INTER>
    docking_num = conf.get('docking_num', 10)
    sa_threshold = conf.get('sa_threshold', 3.5)  # if SA > sa_threshold, score = 0
    rule5 = conf.get('rule5', 1)  # 0: none, 1: rule of 5, 2: rule of 3
    radical_check = conf.get('radical_check', True)
    simulation_num = conf.get('simulation_num', 3)
    hashimoto_filter = conf.get('hashimoto_filter', True)  # use/don't use the hashimoto filter
    base_rdock_score = conf.get('base_rdock_score', -20)
    model_name = conf.get('model_name', 'model')

    print('========== display configuration ==========')
    print('trial num is: ', trial)
    print('c_val: ', c_val)
    print('loop_num_nodeExpansion: ', loop_num_nodeExpansion)
    print('target: ', target)
    print('target_dir: ', target_dir)
    print('max run time: ', hours)
    print('score_type: ', score_type)
    print('docking_num: ', docking_num)
    print('sa_threshold: ', sa_threshold)
    print('model_name: ', model_name)
    print('base_rdock_score: ', base_rdock_score)
    print('simulation_num: ', simulation_num)
    print('hashimoto_filter: ', hashimoto_filter)

    output_dir = 'result_' + target + '_C' + str(c_val) + '_trial' + str(trial) + '.txt'
    smile_old = zinc_data_with_bracket_original(SBMolGen_PATH + '/data/250k_rndm_zinc_drugs_clean.smi')
    val, smile = zinc_processed_with_bracket(smile_old)
    print('val is ', val)

    out_f = open(output_dir, 'w')
    out_f.write('#valid_smile, rdock_score, min_score, depth, used_time')
    out_f.write('\n')
    out_f.close()

    model = loaded_model(SBMolGen_PATH + '/RNN-model/' + model_name)  # WM300 not tested
    valid_compound = UCTchemical()
python
A huge part of India's population has gone digital. With the launch of Prime Minister Narendra Modi's Digital India campaign, the number of people using digital media for socialising, education, business, and entertainment is expected to touch new heights. With that much digital exposure also comes an equally serious threat of data breaches and misuse of data. Just a few points to be included in the New Law:
1. Promote storage of all data of Indians in India only. The more we stick to servers based in India, the safer we will be.
2. Introduce strict penalties and punishment for data breaches.
3. Bring all national and international apps and platforms under the same Indian law.
4. Make special provisions for financial data security. As we move towards a Digital Economy, the laws should be strict and ahead of their time.
5. Set up a special centralised investigative agency, well equipped to deal specifically with data-breach issues and cybercrime.
6. Provide special courts that understand the subject and can give early and fair judgments.
7. Use or misuse of data (directly or indirectly) to play with the electoral system of the country should be treated as a serious crime against the state and carry severe punishment.
Please give Indians the freedom to live and enjoy the digital way of life, but at the same time also provide the safety and security around data sharing and use.
english
Sydney: Former captain Michael Clarke urged a furious sporting public Monday to forgive Steve Smith over the cheating scandal that has plunged Australian cricket into crisis. He said Australia needed to move on from the anger over Smith's ball-tampering plot in the third Test against South Africa and work on restoring the sport's battered reputation. But Clarke acknowledged many fans would struggle to find sympathy for Smith over his role in a plan to have batsman Cameron Bancroft change the condition of the ball by illegally rubbing it with sticky yellow paper. "I do feel for Steve Smith. 100 percent he has made a major mistake and he and a lot of other people I think are going to have to suffer the consequences," Clarke told Channel Seven. "That's fair enough. But I think it's important that we do over time forgive as well." Australian cricket fans have long regarded the national team's style as hard but fair, even though many take issue with the boorish behaviour of some players in recent years. The admission that an Australian Test captain helped hatch a premeditated plan to cheat, and the clumsy cover-up attempt that followed, has prompted genuine shock among cricket lovers. Clarke, who handed over the captaincy to Smith in 2015, said changes needed to be implemented for the good of the game. "When I woke up this morning a couple of things really stood in my mind -- this can never happen again," the 115-Test veteran said. "I think that has to be Cricket Australia's focus, this can never, ever happen again in this great game of cricket. We have so much work to do to get cricket back to where it belongs." Clarke has likened the ball-tampering affair to "a bad dream" and cricketing greats have slammed Smith and his team-mates for bringing the game into disrepute. However, there have been some calls for perspective, including former New Zealand batsman Mark Richardson, who said interference to make the ball reverse swing was common in his playing days.
"It's very, very difficult to go to a former cricketer and get him to be totally outraged about ball-tampering because it would quickly make people hypocrites," the player turned television host told TV3. "There was a time there where we were all trying to work out how the heck you do this," he added, saying he did not remember tampering with the ball in an international match. Richardson said the extreme reaction was because Australians in the past were quick to make cheating accusations while casting themselves as paragons. Ex-England captain Michael Atherton, while criticising Smith, has also questioned whether ball-tampering deserved its reputation as a major sin. "It has gone on since the year dot," said the former opener, who faced tampering allegations himself in 1994 when he rubbed dirt from his pocket on the ball during a Test against South Africa at Lord's. "If the condition of the ball is changed, you get a five-run penalty and change the ball. That hardly sends the message that this is a heinous crime." He told Sky Sport that ball-tampering was rated a level two offence under current laws but authorities should make it a top-of-the-scale level four if they felt it was so serious. (This story has not been edited by News18 staff and is published from a syndicated news agency feed - AFP)
english
<filename>Author Data/cnr rao/Papers/paper 2219.json {"citations": null, "paper_link": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=Zs9227oAAAAJ&cstart=2159&pagesize=100&citation_for_view=Zs9227oAAAAJ:DQQjGlBKAuwC", "authors": ["<NAME>", "<NAME>", "<NAME>"], "title": "Oviposition preference of some insect pests of citrus in relation to leaf/twig age of Nagpur mandarin", "publication": "Indian Journal of Horticulture 67 (4), 447-449, 2010"}
json
# Profile Management EcoSystem API documentation

The API follows the REST standard. The host can be found here: http://172.16.17.32.

Only integer numbers can be stored in the blockchains. If you want to store a float number in the blockchain, you should multiply it by some constant (for instance, `10^8` for QTUM). Therefore, the variables `amount`, `balance`, `offer_price`, `price`, `buyer_price` and `seller_price` are represented as `x * 10^8`, where `x` may be a `float`.

The variable `timestamp` has the following format: `%Y%m%d%H%M`. For instance, the timestamp `201806081300` means 2018 June 8 13:00.

To check the status of a transaction in the QTUM blockchain, use the following site: https://testnet.qtum.org. A posted profile is written to the blockchain when the status of the transaction changes from `Unconfirmed` to `Success`. After that, the user can view it by executing the [Get all profiles which posted user](#get-all-profiles-which-posted-user) method. The user should pay attention to the `cid` attribute because it is the profile identifier.

To meet the nginx requirements, the user should add `/` at the end of the URLs, as presented in the documentation below.
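The numeric and timestamp conventions above can be sketched in a few lines. This is an illustrative helper, not part of the API itself; the function names are assumptions:

```python
from datetime import datetime

DECIMALS = 10 ** 8  # integer on-chain representation: x * 10^8

def encode_amount(x: float) -> int:
    """Convert a human-readable amount to the integer form stored in the blockchain."""
    return int(round(x * DECIMALS))

def decode_amount(raw: int) -> float:
    """Convert the stored integer back to a human-readable amount."""
    return raw / DECIMALS

def make_timestamp(dt: datetime) -> str:
    """Format a datetime in the API's %Y%m%d%H%M form."""
    return dt.strftime("%Y%m%d%H%M")

print(encode_amount(1.5))                           # 150000000
print(make_timestamp(datetime(2018, 6, 8, 13, 0)))  # 201806081300
```

Rounding before truncating to `int` avoids losing a unit to floating-point error for amounts like `0.29`.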
The API methods:

- [Create new account](#create-new-account)
- [Get account information](#get-account-information)
- [Get all news for the account](#get-all-news-for-the-account)
- [Post profile to the blockchain](#post-profile-to-the-blockchain)
- [Get profile from the blockchain by cid](#get-profile-from-the-blockchain-by-cid)
- [Set description of profile for cid](#set-description-of-profile-for-cid)
- [Set profiles price](#set-profiles-price)
- [Make a profiles write access offer for owner](#make-a-profiles-write-access-offer-for-owner)
- [Make a profiles read access offer for owner](#make-a-profiles-read-access-offer-for-owner)
- [Accept buyers offer](#accept-buyers-offer)
- [Reject the write access offer by either buyer or seller](#reject-the-write-access-offer-by-either-buyer-or-seller)
- [Reject the read access offer by either buyer or seller](#reject-the-read-access-offer-by-either-buyer-or-seller)
- [Get all PMES profiles from the blockchain](#get-all-pmes-profiles-from-the-blockchain)
- [Get all profiles which posted user](#get-all-profiles-which-posted-user)
- [Get all offers which made user](#get-all-offers-which-made-user)
- [Get all offers by cid](#get-all-offers-by-cid)
- [Get all purchased read access profiles](#get-all-purchased-read-access-profiles)
- [Make review for purchased profile](#make-review-for-purchased-profile)
- [Get all profiles reviews](#get-all-profiles-reviews)
- [Withdraw tokens or coins](#withdraw-tokens-or-coins)
- [View fee of the withdraw tokens or coins operation](#view-fee-of-the-withdraw-tokens-or-coins-operation)
- [Bulk operations](#bulk-operations)
- [Get transactions history by address](#get-transactions-history-by-address)
- [Refill wallet with test coins or tokens](#refill-wallet-with-test-coins-or-tokens)
- [Standardization of an error messages](#standardization-of-an-error-messages)

Descriptions of the API methods are provided below.

## Create new account

* **URL:** `/api/accounts/`
* **Method:** `POST`
* **URL params** None
* **Body params** `[json]` **Optional:** `email` and `phone` **Required:** `device_id`

```bash
{
    "public_key": [string],
    "message": {
        "email": [string],
        "device_id": [string],
        "phone": [string],
        "timestamp": [string],
        "nickname": [string],
        "type": [string] # one of the ["user", "kol", "group"]
    },
    "signature": [string]
}
```

* **Sample response** `[json]`

If such an account already exists, the user receives a `Unique violation error`. The following balances are present in the user's wallets:
- `amount_active` - the active amount of the user's balance.
- `amount_frozen` - the frozen amount: when a user posts an offer to buy some profile with a price of 100 tokens, these tokens are frozen and displayed in the `amount_frozen` field; it also holds the number of coins/tokens not yet confirmed in the blockchain. Therefore, when a buyer pays the seller for a profile, the following steps happen:
1) the buyer sends tokens to the seller; these tokens become frozen and are shown in the `amount_frozen` field
2) the seller receives the tokens, and they are shown in the `amount_frozen` field until the transaction is confirmed in the blockchain

`balance` is represented as `real_user_balance * 10^8`, where `real_user_balance` may be a `float`.
After successful account creation user receives the response with the following structure: ```bash { "count": [integer], # number of user's wallets "device_id": [string], "email": [string], "href": [string], # link to the user account "level": [integer], # user account level (2 - when balance is zero (by default), 3 - when balance is not null) "public_key": [string], "news_count": [integer], # number of news about offers to buy profile (0 by default) "id": [integer], # identifier of the user account "type": [string], # type which user specified during registration "nickname": [string], # nickname which user specified during registration "wallets": [string], # list of dicts with user's wallets addresses "uid": [integer] # users id "address": [string], # wallet address to which user could refill coins/tokens "amount_active": 0 [integer], # an active amount of coins/tokens "amount_frozen": 0 [integer], # a frozen amount of coins/tokens or the number of coins/tokens that is not confirmed in the blockchain yet "coinid": [string] # type of the cryptocurrency `PUTTEST` or `QTUMTEST` } ``` ## Get account information * **URL:** `/api/accounts/[public_key]/` * **Method:** `GET` * **URL params** `[json]` ```bash { "public_key": [string], "message": { "timestamp": [string] }, "signature": [string] } ``` * **Body params** None * **Sample response** `[json]` `balance` represented as `real_user_balance * 10^8`. Where `real_user_balance` could be `float`. `amount_active` and `amount_frozen` fields represented in a similar way. 
```bash { "count": [integer], # number of user's wallets "device_id": [string], "email": [string], "href": [string], # link to user account "level": [integer], # user account level (2 - when balance is zero ( by default), 3 - when balance is not null) "public_key": [string], "news_count": [integer], # number of news about offers to buy profile (0 by default) "id": [integer], # user's identifier "nickname": [string], # nickname which user specified during registration "type": [string], # type which user specified during registration "wallets": [string], # list of dicts with user's wallets addresses "uid": [integer] # users id "address": [string], # wallet address to which user could refill coins/tokens "amount_active": 0 [integer], # an active amount of coins/tokens "amount_frozen": 0 [integer], # a frozen amount of coins/tokens or the number of coins/tokens that is not confirmed in the blockchain yet "coinid": [string] # type of the cryptocurrency `PUTTEST` or `QTUMTEST` } ``` ## Get all news for the account * **URL:** `/api/accounts/[public_key]/news/` * **Method:** `GET` * **URL params** `[json]` ```bash { "public_key": [string], "message": { "timestamp": [string] }, "signature": [string] } ``` * **Body params** None * **Sample response** `[json]` When the user sends an offer to buy profile `event_type` is `made offer`. `buyer_price` and `seller_price` represented as `real_price * 10^8`. Where `real_price` could be `float`. ```bash [ { "event_type": [string], # type of news "access_string": [string], # now it is user's public key "cid": [integer], # profile identifier "buyer_address": [string], # buyer address "buyer_pubkey": [string], # buyer public key "buyer_price": [integer], # proposed buyer price * 10^8 "seller_price": [integer], # profiles price * 10^8 "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "offer_type": [string] # offers type } ] ``` * **Description** News about actions with user profile. 
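All endpoints above wrap their parameters in the same `{public_key, message, signature}` envelope, with a `timestamp` inside `message`. A minimal sketch of building that envelope follows; the `sign` callable is a placeholder, since the concrete signing scheme and message serialization are not specified in this document and are assumptions here:

```python
import json
import time

def build_envelope(public_key: str, sign, **fields) -> dict:
    """Wrap request fields in the {public_key, message, signature} envelope used by the API.

    `sign` is a caller-supplied function mapping the serialized message to a
    signature string; the serialization below is an assumption for illustration.
    """
    message = {"timestamp": time.strftime("%Y%m%d%H%M"), **fields}
    payload = json.dumps(message, sort_keys=True, separators=(",", ":"))
    return {"public_key": public_key, "message": message, "signature": sign(payload)}

# Hypothetical usage with a dummy signer:
env = build_envelope("pubkey123", sign=lambda m: "sig:" + m[:8], cid=42)
print(sorted(env))            # ['message', 'public_key', 'signature']
print(env["message"]["cid"])  # 42
```

Using a deterministic serialization (sorted keys, no whitespace) matters for any signature scheme: the server must be able to re-serialize the message byte-for-byte to verify it.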
## Post profile to the blockchain * **URL:** `/api/blockchain/[public_key]/[coinid]/profile/` * **Method:** `POST` * **URL params** `public_key=[string]` `coinid=[string]` - type of the blockchain (`ETH` - Ethereum blockchain, `QTUM` - QTUM blockchain) `price` represented as `real_price * 10^8`. Where `real_price` (price of profile) could be `float`. * **Body params** `[json]` ```bash { "public_key": [string], "message": { "timestamp": [string], "cus": [string], # profile encrypted with private key "read_access": [integer], # profile read access price * 10^8 "write_access": [integer], # profile write access price * 10^8 "description": [string] # profile description }, "signature": [string] } ``` * **Sample response** `[json]` ```bash { "owneraddr": [string], # owners address "description": [string], # profiles description "read_price": [integer], # read access price "write_read": [integer] # write access price } ``` ## Get profile from the blockchain by cid * **URL:** `/api/blockchain/[cid]/[coinid]/profile/` * **Method:** `GET` * **URL params** `cid=[string]` - profile identifier `coinid=[string]` - type of the blockchain (`ETH` - Ethereum blockchain, `QTUM` - QTUM blockchain) * **Body params** None * **Sample response** `[json]` `price` represented as `real_price * 10^8`. Where `real_price` could be `float`. 
```bash
{
    "cid": [integer], # profile identifier
    "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
    "description": [string], # profile description
    "owner": [string], # owner public key
    "owneraddr": [string], # owner address
    "read_access": [integer], # profiles read access price * 10^8
    "write_access": [integer], # profiles write access price * 10^8
    "content": [string], # profile
    "seller_access_string": [string], # seller access string
    "seller_pubkey": [string], # seller public key
    "access_type": [string] # access type of profile
    "nickname": [string] # nickname which user specified during registration
}
```

* **Description** Returns the profile from the blockchain by profile id

## Set description of profile for cid

**in progress**

* **URL:** `/api/blockchain/[cid]/description/`
* **Method:** `PUT`
* **URL params** `cid=[string]` - profile identifier
* **Body params** `[json]`

```bash
{
    "public_key": [string],
    "message": {
        "timestamp": [string],
        "cid": [integer], # profile's cid
        "description": [string], # profile's new description
        "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
    },
    "signature": [string]
}
```

* **Sample response** `[json]`

```bash
{
    "cid": [integer], # profile's cid
    "description": [string], # profile's new description
    "owneraddr": [string], # owner address
    "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
}
```

## Set profiles price

**in progress**

* **URL:** `/api/blockchain/[cid]/price/`
* **Method:** `PUT`
* **URL params** `cid=[string]` - profile identifier
* **Body params** `[json]`

`price` is represented as `real_price * 10^8`, where `real_price` may be a `float`.
```bash { "public_key": [string], "message": { "timestamp": [string], "cid": [integer], "price": [integer], # price * 10^8 "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "access_type": [string] (read_access, write_access) }, "signature": [string] } ``` * **Sample response** `[json]` ```bash { "cid": [integer], # profiles cid "write_access" or "read_access": [integer] # new profiles write access or read access price "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) } ``` ## Make a profiles write access offer for owner * **URL:** `/api/blockchain/[public_key]/write-access-offer/` * **Method:** `POST` * **URL params** `public_key=[string]` * **Body params** `[json]` ```bash { "message": { "timestamp": [string], "cid": [integer], # profiles identifier "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "price": [integer], # write access price (optional, sellers price by default) "buyer_access_string": [string] # now it is user's public key }, "signature": [string] } ``` * **Sample response** `[json]` `offer_price` represented as `real_offer_price * 10^8`. Where `real_offer_price` could be `float`. 
```bash { "cid": [integer], # profile identifier "buyer_address": [string], # buyer address "buyer_access_string": [string], # now it is buyer's public key "offer_price": [integer], # price of profile * 10^8 "offer_type": [string] # offers type (write access) } ``` ## Make a profiles read access offer for owner * **URL:** `/api/blockchain/[public_key]/read-access-offer/` * **Method:** `POST` * **URL params** `public_key=[string]` * **Body params** `[json]` ```bash { "message": { "timestamp": [string], "cid": [integer], # profiles identifier "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "price": [integer], # read access price (optional, sellers price by default) "buyer_access_string": [string] # now it is user's public key }, "signature": [string] } ``` * **Sample response** `[json]` `offer_price` represented as `real_offer_price * 10^8`. Where `real_offer_price` could be `float`. ```bash { "cid": [integer], # profile identifier "buyer_address": [string], # buyer address "buyer_access_string": [string], # now it is buyer's public key "offer_price": [integer], # price of profile * 10^8 "offer_type": [string] # offers type (read access) } ``` ## Accept buyers offer * **URL:** `/api/blockchain/[public_key]/deal/` * **Method:** `POST` * **URL params** `public_key=[string]` * **Body params** `[json]` ```bash { "public_key": [string], "message": { "timestamp": [string], "cid": [integer], # profile identifier "buyer_access_string": [string], # now it is user's public key "buyer_pubkey": [string], # buyer public key "seller_access_string": [string], "access_type": [string], # write access or read access "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) }, "signature": [string] } ``` * **Sample response** `[json]` ```bash { "cid": [integer], # profile identifier "access_string": [string], # now it is user's public key "new_owner": [string], # address of the new owner 
"prev_owner": [string] # address of the previous owner } ``` ## Reject the write access offer by either buyer or seller * **URL:** `/api/blockchain/[public_key]/write-access-offer/` * **Method:** `PUT` * **URL params** `public_key=[string]` * **Body params** `[json]` `buyer_address` is address of user who sent "make offer" request for buying profile. ```bash { "public_key": [string], "message": { "timestamp": [string], "offer_id": { "cid": [integer], # profile identifier "buyer_address": [string] # buyer address } }, "signature": [string] } ``` * **Sample response** `[json]` ```bash { "cid": [integer], # profile identifier "buyer_address": [string] # buyer address } ``` ## Reject the read access offer by either buyer or seller * **URL:** `/api/blockchain/[public_key]/read-access-offer/` * **Method:** `PUT` * **URL params** `public_key=[string]` * **Body params** `[json]` `buyer_address` is address of user who sent "make offer" request for buying profile. ```bash { "public_key": [string], "message": { "timestamp": [string], "offer_id": { "cid": [integer], # profile identifier "buyer_address": [string] # buyer address } }, "signature": [string] } ``` * **Sample response** `[json]` ```bash { "cid": [integer], # profile identifier "buyer_address": [string] # buyer address } ``` ## Get all PMES profiles from the blockchain * **URL:** `/api/blockchain/profile?page=[page]/` * **Method:** `GET` * **URL params** `page: [integer]` * **Body params** None * **Sample response** `[json]` `price` represented as `real_price * 10^8`. Where `real_price` could be `float`. 
```bash
{
    "profiles": [array] # array with profiles
    [
        {
            "cid": [integer], # profile identifier
            "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
            "description": [string], # profile description
            "owneraddr": [string], # owner address
            "read_access": [integer], # profile read access price * 10^8
            "write_access": [integer], # profile write access price * 10^8
            "txid": [string] # transaction status reference
        },
        ...
    ],
    "pages": [integer] # number of used pages for pagination
}
```

## Get all profiles which posted user

* **URL:** `/api/accounts/[nickname]/profiles?page=[page]/`
* **Method:** `GET`
* **URL params** `page: [integer]`
* **Body params** `[json]`

```bash
{
    "public_key": [string],
    "message": {
        "timestamp": [string]
    },
    "signature": [string]
}
```

* **Sample response** `[json]`

`price` is represented as `real_price * 10^8`, where `real_price` may be a `float`.

```bash
[
    profiles: [array] # array with profiles
    [
        {
            "cid": [integer], # profile identifier
            "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
            "description": [string], # profile description
            "owneraddr": [string], # owner address
            "read_access": [integer], # profile read access price * 10^8
            "write_access": [integer], # profile write access price * 10^8
            "txid": [string] # transaction status reference
        }
    ]
    ...
] ``` ## Get all offers which made user * **URL:** `/api/accounts/[public_key]/output-offers/` * **Method:** `GET` * **URL params** `[json]` ```bash { "public_key": [string], "message": { "timestamp": [string] }, "signature": [string] } ``` * **Body params** None * **Sample response** `[json]` ```bash [ { "buyer_access_string": [string], "buyer_address": [string], # buyers address "cid": [integer], # profiles identifier "price": [integer], # offers price "seller_access_string": [integer], # profile price * 10^8 "type": [string], # offers type "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "status": [integer], # offer status "seller_public_key": [string] # seller public key }, ... ] ``` * **Description** Get all offers which made the user for buying access or rights of profiles ## Get all offers by cid * **URL:** `/api/accounts/[public_key]/input-offers/` * **Method:** `GET` * **URL params** `[json]` ```bash { "public_key": [string], "message": { "cid": [integer], "timestamp": [string], "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) }, "signature": [string] } ``` * **Body params** None * **Sample response** `[json]` `buyer_price` and `seller_price` represented as `real_price * 10^8`. Where `real_price` could be `float`. ```bash [ { "buyer_access_string": [string], "buyer_address": [string], # buyers address "cid": [integer], # profiles identifier "price": [integer], # offers price "seller_access_string": [integer], # profile price * 10^8 "type": [string], # offers type "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "status": [integer], # offer status "seller_public_key": [string] # sellers public key }, ... 
] ``` ## Get all purchased read access profiles * **URL:** `/api/accounts/[public_key]/deals/` * **Method:** `GET` * **URL params** None * **Body params** None * **Sample response** `[json]` ```bash [ { "cid": [integer], # profile identifier "coinid": [string] # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "description": [string], # profile description "owneraddr": [string], # owner address "read_access": [integer], # profile read access price * 10^8 "write_access": [integer], # profile write access price * 10^8 "txid": [string], # transaction status reference }, ] ``` ## Make review for purchased profile * **URL:** `/api/accounts/[public_key]/review/` * **Method:** `POST` * **URL params** None * **Body params** ```bash { "public_key": [string], "message": { "cid": [integer], # profile identifier "timestamp": [string], "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) "review": [string], # review "rating": [integer] # profiles rating in range from 1 to 5 }, "signature": [string] } ``` * **Sample response** `[json]` ```bash [ { "review": [string], # review "rating": [integer] # profiles rating in range from 1 to 5 "cid": [integer] # profiles identifier }, ... ] ``` ## Get all profiles reviews * **URL:** `/api/accounts/[cid]/[coinid]/reviews/` * **Method:** `GET` * **URL params** `cid=[string]` - profile identifier `coinid=[string]` - type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain) * **Body params** None * **Sample response** `[json]` ```bash [ { "review": [string], # review "rating": [integer] # profiles rating in range from 1 to 5 "buyer_address": [integer] # buyers address "confirmed": [integer] # 1 by default }, ... ] ``` ## Withdraw tokens or coins * **URL:** `/api/accounts/withdraw/` * **Method:** `POST` * **URL params** None * **Body params** Now works with the PUTTEST and QTUMTEST tokens only. 
```bash
{
    "public_key": [string],
    "message": {
        "timestamp": [string],
        "coinid": [string], # type of the cryptocurrency `PUTTEST` or `QTUMTEST`
        "amount": [integer], # amount of tokens that the user sends to the "address" * 10^8
        "address": [string], # address to which the user sends the tokens
        "recvWindow": [integer] # the signature of the message expires after this timeout, for instance 5000 milliseconds
    },
    "signature": [string]
}
```

* **Sample response** `[json]`

`message` and `signature` are echoed from the user request. The user can check the status of the transaction by looking it up by `txid`.

```bash
{
    "public_key": [string],
    "message": {
        "timestamp": [string],
        "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
        "amount": [integer], # amount of tokens that the user sends to the "address" * 10^8
        "address": [string], # address to which the user sends the tokens
        "recvWindow": [integer] # the signature of the message expires after this timeout, for instance 5000 milliseconds
    },
    "signature": [string],
    "txid": [string] # transaction identifier
}
```

## View fee of the withdraw tokens or coins operation

* **URL:** `/api/accounts/fees/`
* **Method:** `POST`
* **URL params** None
* **Body params** Currently works with the PUTTEST and QTUMTEST tokens only.

```bash
{
    "public_key": [string],
    "message": {
        "timestamp": [string],
        "coinid": [string], # type of the cryptocurrency `PUTTEST` or `QTUMTEST`
        "amount": [integer], # amount of tokens that the user will send * 10^8
    },
    "signature": [string]
}
```

* **Sample response** `[json]`

```bash
{
    "coinid": [string], # type of the blockchain (ETH - Ethereum blockchain, QTUM - QTUM blockchain)
    "amount": [integer], # the active amount of coins/tokens in the user's wallet * 10^8
    "fee": [int], # fee * 10^8
}
```

## Bulk operations

* **URL:** `/api/bulk/`
* **Method:** `POST`
* **URL params** None
* **Body params** Currently works only for sending PUT tokens. The full JSON format of a bulk operation is presented [here](bulk_operations_json_format_description.js). The `txid` field shouldn't be filled by the user; it is filled by the PMES in the response if the operation writes data to the blockchain. After PMES executes a request from the bulk operation, the result is written to the `response` field if there are no errors, and to the `error` field otherwise. Part of a bulk operation is presented below:

```bash
{
    "message": { # signed json
        "send": [{
            "message": {
                "input": "address1", # sender address
                "output": "address2", # receiver address
                "amount": "value",
                "coinid": "coinid", # blockchain type
                "txid": null, # txid will be filled in the response
                "error": null,
                "response": null
            },
            "signature": "signature", # user's signature
            "public_key": "public_key", # user's public key
        }
        ],
        ...
    },
    "decimal": 8,
    "signature": "signature",
    "public_key": "public key",
    "callbackURL": "some_url" // the PMES backend reply is sent to this endpoint
}
```

* **Sample response** `[json]`

In the response, the user receives the filled JSON format of the bulk operation with the server signature.

## Get transactions history by address

* **URL:** `/api/accounts/withdraw/`
* **Method:** `GET`
* **URL params**

```bash
{
    "address": [string], # user's address
    "coinid": [string], # blockchain type `QTUMTEST` or `PUTTEST`
}
```

* **Body params** None
* **Sample response** `[json]`

## Refill wallet with test coins or tokens

* **URL:** `/api/accounts/[uid]/balance/`
* **Method:** `POST`
* **URL params** `uid: [integer]` - created account id
* **Body params** `amount` is represented as `amount * 10^8`, where `amount` may be a `float`.

```bash
{
    "amount": [integer], # refilling amount * 10^8
    "coinid": [string], # type of the cryptocurrency `PUTTEST` or `QTUMTEST`
}
```

## Standardization of an error messages

A standard error answer from the server has the following structure:

```bash
{
    "error": [integer],
    "reason": [string]
}
```

Where `error` contains an error code and `reason` contains the error description.
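Since every error from the server follows the same `{error, reason}` envelope, a client can detect it generically. A minimal sketch, with the function name and exception choice being assumptions:

```python
import json

def parse_pmes_response(body: str):
    """Return the decoded payload, or raise if the body is the standardized
    PMES error envelope {"error": <code>, "reason": <description>}."""
    data = json.loads(body)
    if isinstance(data, dict) and "error" in data and "reason" in data:
        raise RuntimeError(f"PMES error {data['error']}: {data['reason']}")
    return data

print(parse_pmes_response('{"cid": 7}'))  # {'cid': 7}
```

Centralizing this check in one helper keeps per-endpoint client code free of repeated error-envelope handling.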
markdown
Parineeti Chopra is a glam queen in black outfits. Take a look at her sultry looks in black attire. Parineeti rocks this black shimmery dress and we are swooning over her hot look. She is all set for a party night. If you want to slay like a diva, take tips from Parineeti's effortless style in this black thigh-high slit dress. She oozes oomph with all elegance. The actress looks chic in this casual black look. Her oversized shirt with black glares is giving us major fashion goals. Parineeti flaunts her toned midriff in this crop top paired with a knee-length pleated skirt. Her exquisite look is enough to fall for. Parineeti looks like a boss lady in a black high-neck sweater and leather pants. Her charisma is so strong in black, isn't it? This ethnic look of Parineeti in a black and white saree is so eye-catching. She completed her look with a tousled ponytail. We must say that Parineeti Chopra's glam in black is just unbeatable.
english
/**
 * MIT License
 *
 * Copyright (c) 2019 RasPi Check Contributors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
package de.eidottermihi.rpicheck.test.mocks;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.connection.ConnectionException;
import net.schmizz.sshj.connection.channel.direct.Session;
import net.schmizz.sshj.transport.TransportException;

import org.mockito.Mockito;

/**
 * Fluent mocker for {@link SSHClient}.
 *
 * @author Michael
 */
public class SSHClientMocker {

    private final SSHClient client = Mockito.mock(SSHClient.class);

    public SSHClientMocker setAuthed(boolean isAuthed) {
        Mockito.when(client.isAuthenticated()).thenReturn(isAuthed);
        return this;
    }

    public SSHClientMocker setConnected(boolean isConnected) {
        Mockito.when(client.isConnected()).thenReturn(isConnected);
        return this;
    }

    public SSHClientMocker withSession(Session session) {
        try {
            Mockito.when(client.startSession()).thenReturn(session);
        } catch (ConnectionException | TransportException e) {
            // startSession() declares these checked exceptions, but stubbing a
            // Mockito mock never actually throws them; ignore.
        }
        return this;
    }

    public SSHClient mock() {
        return this.client;
    }
}
java
Former NFL quarterback Tarvaris Jackson spent his college football career with both Arkansas and Alabama State. After two years with the Arkansas Razorbacks, Tarvaris Jackson transferred to Alabama State, where he threw for over 7,000 yards and 67 touchdowns. The Minnesota Vikings selected Jackson with the 64th overall pick in the second round of the 2006 NFL draft. During his time with Minnesota, Jackson made 20 total starts, winning ten and losing ten. After the 2010 NFL season, Tarvaris Jackson joined the Seattle Seahawks, who signed him as their starting quarterback. All 14 of Jackson's starts in Seattle came in his first season with the team, after which he lost the quarterback battle to newly drafted Russell Wilson. After the Seahawks lost Super Bowl XLIX to the New England Patriots, they let Jackson become an unrestricted free agent, and he retired from the NFL in 2015. Three years after retiring, Tarvaris Jackson took a graduate assistant job at his alma mater, Alabama State. Jackson spent one year at Alabama State before heading to Tennessee State, where he became the quarterbacks coach. Tarvaris Jackson was heading to his hometown when his family received the one phone call that no one wants to receive. What happened to NFL quarterback Tarvaris Jackson? On April 12, 2020, Tarvaris Jackson was driving his 2012 Chevrolet Camaro when he lost control and struck a tree. The accident happened seven miles south of Jackson's hometown of Montgomery, Alabama. When first responders arrived at the scene, they rushed Jackson to a local hospital, where he was pronounced dead that same day. The former NFL quarterback's life ended too soon at the age of 36. It has been reported that Jackson was driving 70 mph in a 35 mph zone when he lost control of his car.
When the news broke, the Minnesota Vikings released this statement about Jackson's death: "The entire Vikings family is saddened by the news of Tarvaris Jackson being taken from us too soon. One of Tarvaris' greatest attributes was his positive outlook and approach. He genuinely cared about others, was a good friend and will be missed by family, teammates and Vikings fans everywhere. We send our deepest condolences to his family." Tarvaris Jackson is survived by his wife Lakitta Jackson and their three children, Tarvaris Jackson II, Takayla Jackson, and Tyson Jackson.
english
Bhubaneswar: A US-based organisation established by an Odia woman has donated nearly Rs 50 lakh to the Odisha Chief Minister's relief fund to support the state in its fight against the coronavirus pandemic, an official release said. 'Our Biswas', which is managed by Joyasree Mahanti, has been working for women's empowerment in different parts of Odisha since 2008, the release said. According to its website, the organisation provides economic empowerment through a programme of trust and support for women and girls who live in the dire conditions of extreme poverty around the world. It has donated USD 68,500 (approximately Rs 49.89 lakh) to the relief fund, Additional Chief Secretary (Home) Sanjeev Chopra said in a statement. "Because of the timely decisions and appropriate action, the Covid situation in Odisha is within control," Mahanti said. She expressed hope that the state would emerge victorious in its fight against Covid with everyone's cooperation.
english
<section class="c9 c30"> <p class="c14"><span class="c0 c28 c9">slow.trade</span><span class="c0 c9 c28">&nbsp;Privacy Policy</span><span class="c0">&nbsp;</span></p> <p class="c29"><span class="c7 c22 c26">Last updated: December 2018</span></p> <p class="c12 c25"><span class="c1"></span></p> <p class="c12"><span class="c1">We are delighted that you have chosen to use our Platform. We take our data protection responsibilities with the utmost seriousness and we have designed our site so that you may navigate and use it without having to provide Personal Data.</span></p> <p class="c12"><span class="c1">&nbsp;</span></p> <p class="c12"><span class="c7">This Privacy Policy (the &ldquo;</span><span class="c0">Policy</span><span class="c1">&rdquo;) sets out what Personal Data we collect, how we process it and how long we retain it. This Policy is applying to all of our processing activities where we act as a data controller.</span></p> <p class="c12"><span class="c1">&nbsp;</span></p> <p class="c20"><span class="c1">In this Policy, &quot;we&quot;, &quot;us&quot; and &quot;our&quot; refers to d.ex O&Uuml;, a company incorporated in Estonia under company registration No. No. 14553524 with its registered address at Ahtri 12, Kesklinna District, 10151 Tallinn, Harju County, Estonia. 
For more information about us, see the Contact Us section of this policy.</span></p> <p class="c12"><span class="c1">&nbsp;</span></p> <p class="c12"><span class="c7">In this Policy, &ldquo;Personal Data&rdquo; means any information relating to you as an identified or identifiable natural person (&ldquo;</span><span class="c0">Data Subject</span><span class="c1">&rdquo;); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an online identifier or to one or more factors specific to your physical, physiological, genetic, mental, economic, cultural or social identity.</span></p> <p class="c12"><span class="c1">&nbsp;</span></p> <p class="c12"><span class="c1">In this Policy, &ldquo;Processing&rdquo; means any operation or set of operations which is performed on Personal Data (as defined in this Privacy Policy) or on sets of Personal Data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.</span></p> <p class="c12 c25"><span class="c1"></span></p> <p class="c20"><span class="c3 c27">Capitalized terms used but not defined here have the respective meanings given to them in the</span><span class="c3 c27">&nbsp;</span><span class="c18 c7"><a class="c4" href="https://www.google.com/url?q=https://slow.trade/%23/terms&amp;sa=D&amp;ust=1554480371637000">Terms and Conditions</a></span><span class="c1">. 
</span></p> <p class="c12"><span class="c1">&nbsp;</span></p> <ol class="c2 lst-kix_wkl89xja70k3-0 start" start="1"> <li class="c12 c17"><span class="c5 c0">NAVIGATING THIS POLICY</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1 c9">You can click on the below links to jump to the relevant section:</span></li> </ol> <ul class="c2 lst-kix_t7e157kp7yiu-0 start"> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.f5t1onxzuqlt">Your information and the Blockchain</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.dqwvuud3gfyr">How We Use Personal Data</a></span> </li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.q4lmtv841i6x">Use of Third Party Applications</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.1i8nb0n35nnz">Sharing Your Personal Data</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.d3kr7z172p0p">Transferring Your data outside of the EU</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.5cb980ypx0dn">Existence of Automated Decision-Making</a></span></li> <li class="c6"><span class="c0 c9 c18"><a class="c4" href="#id.ydxsrc1hk8pk">Data Security</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.e64xadka8wl9">Your Rights as a Data Subject</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.sbk83tq3n553">Storing Personal Data</a></span> </li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.gbm96lt67oqt">Changes to this Privacy Policy</a></span></li> <li class="c6"><span class="c18 c0 c9"><a class="c4" href="#id.qz7oi61el199">Our Details</a></span></li> </ul> <p class="c12 c15 c25"><span class="c5 c0"></span></p><a id="id.f5t1onxzuqlt"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="2"> <li class="c12 c17"><span class="c5 c0">YOUR INFORMATION AND THE BLOCKCHAIN</span></li> </ol> <ol class="c2 
lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1 c9">Blockchain technology, also known as distributed ledger technology (or simply &ldquo;DLT&rdquo;), is at the core of our business. Blockchains are decentralized and made up of digitally recorded data in a chain of packages called &ldquo;blocks&rdquo;. The manner in which these blocks are linked is chronological, meaning that the data is close to impossible &nbsp;to alter once recorded. Since the ledger may be distributed all over the world (across several &lsquo;nodes&rsquo; which usually replicate the ledger) this means there is no single person making decisions or otherwise administering the system (such as an operator of a cloud computing system), and that there is no centralized place where it is located either.</span></li> <li class="c8"><span class="c1 c9">Accordingly, by design, a blockchain&rsquo;s records cannot be changed or deleted and is said to be &lsquo;immutable&rsquo;. This may affect your ability to exercise your rights such as your right to erasure (&lsquo;right to be forgotten&rsquo;), or your rights to object or restrict Processing, of your personal data. Data on the blockchain cannot be erased and cannot be changed. 
Although smart contracts may be used to revoke certain access rights, and some content may be made invisible to others, it is not deleted.</span></li> <li class="c8"><span class="c3">In certain circumstances, when interacting with the DutchX Decentralised Trading Protocol (the &ldquo;</span><span class="c0 c9">Protocol</span><span class="c1 c9">&rdquo;) as further defined in the Terms such as delivery of tokens it will be necessary to write certain personal data, such as your Ethereum or other cryptocurrency wallet address, onto the blockchain; this is done through a smart contract and requires you to execute such transactions using your wallet&rsquo;s private key.</span></li> <li class="c8"><span class="c1 c9">In most cases, the ultimate decision to (i) transact on the Ethereum Blockchain using your Ethereum or other cryptocurrency wallet address, as well as to (ii) share the public key relating to your Ethereum or other cryptocurrency wallet address with anyone (including us) rests with you.</span></li> <li class="c8"><span class="c16 c0">IF YOU WANT TO ENSURE YOUR PRIVACY RIGHTS ARE NOT AFFECTED IN ANY WAY, YOU SHOULD NOT TRANSACT ON BLOCKCHAINS AS CERTAIN RIGHTS MAY NOT BE FULLY AVAILABLE OR EXERCISABLE BY YOU OR US DUE TO THE TECHNOLOGICAL INFRASTRUCTURE OF THE BLOCKCHAIN. THE ETHEREUM BLOCKCHAIN IS AVAILABLE TO THE PUBLIC AND ANY PERSONAL DATA SHARED ON IT WILL BECOME PUBLICLY AVAILABLE.</span></li> </ol><a id="id.dqwvuud3gfyr"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="3"> <li class="c12 c17"><span class="c0 c5">HOW WE USE PERSONAL DATA</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c5 c0">When visiting our website</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1 c9">We may collect and process Personal Data about your use of our website. 
This data may include:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1 c9">the browser types and versions used;</span></li> <li class="c6"><span class="c1 c9">the operating system used by the accessing system;</span></li> <li class="c6"><span class="c1 c9">the website from which an accessing system reaches our website (so-called referrers);</span></li> <li class="c6"><span class="c1 c9">behaviour: subpage, duration, and revisit;</span></li> <li class="c6"><span class="c1 c9">the date and time of access to our website;</span></li> <li class="c6"><span class="c3">the Internet protocol address (&ldquo;</span><span class="c0 c9">IP address</span><span class="c1 c9">&rdquo;);</span></li> <li class="c6"><span class="c1 c9">the Internet service provider of the accessing system; and</span></li> <li class="c6"><span class="c1 c9">any other similar data and information that may be used in the event of attacks on our information technology systems.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2" start="2"> <li class="c11"><span class="c1 c9">This Personal Data may be processed in order to deliver the content of our site correctly, to optimize the content of our site to ensure the long-term viability of our information technology systems and website technology, and to provide law enforcement authorities with the information necessary for criminal prosecution in case of a cyber-attack.</span></li> <li class="c11"><span class="c3">The legal basis for this processing is our legitimate business interests, namely monitoring and improving our website and the proper protection of our business against risks and your consent when agreeing to accept </span><span class="c18 c3"><a class="c4" href="https://www.google.com/url?q=https://slow.trade/%23/cookies&amp;sa=D&amp;ust=1554480371643000">cookies</a></span><span class="c1 c9">.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="2"> <li class="c8"><span 
class="c0 c9">When using slow.trade</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c7">Slow.trade is a graphical interface and trading platform (the &ldquo;</span><span class="c0">Platform</span><span class="c7">&rdquo;), that helps you interact via a Wallet with the DutchX Decentralized Trading Protocol for ERC-20 tokens (the &ldquo;</span><span class="c0">DutchX Protocol</span><span class="c7">&rdquo;). </span></li> <li class="c11"><span class="c1 c9">When submitting trades via the slow.trade Platform we may collect and process Personal Data. The data will be stored in different instances.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1 c9">On the Ethereum Blockchain following data will be stored:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-4 start" start="1"> <li class="c12 c23"><span class="c1 c9">your wallet address;</span></li> <li class="c12 c23"><span class="c7">trade data (</span><span class="c1 c9">timestamp, sell and bought token (amount and kind));</span></li> <li class="c12 c23"><span class="c1 c9">Liquidity contribution (OWL detection, liquidity contribution settled in OWL, MGN generated, liquidity contribution reduction based on MGN and liquidity contribution settled in participating Supported Tokens).</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3" start="2"> <li class="c6"><span class="c0 c16">The data will be stored on the Ethereum Blockchain. 
Given the technological design of the Ethereum Blockchain, as explained in Section 2, this data will become public and it will not likely be possible to delete or change the data at any given time.</span></li> <li class="c6"><span class="c1 c9">Furthermore, we will store log data which include: </span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-4 start" start="1"> <li class="c12 c23"><span class="c1 c9">the Internet protocol address (&ldquo;IP address&rdquo;); and</span> </li> <li class="c12 c23"><span class="c1 c9">browser description.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3" start="4"> <li class="c6"><span class="c1 c9">This Personal Data may be processed in order to deliver the functionality of the product. The legal basis for this Processing is that it is necessary to fulfil a contract with you and our legitimate business interests, namely monitoring and improving our service.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="3"> <li class="c8"><span class="c21 c3 c24 c22">Other uses of your Personal Data</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1 c9">We may process any of your Personal Data where it is necessary to establish, exercise, or defend legal claims. The legal basis for this Processing is our legitimate interests, namely the protection and assertion of our legal rights, your legal rights and the legal rights of others.</span></li> <li class="c11"><span class="c1 c9">Further, we may process your Personal Data where such Processing is necessary in order for us to comply with a legal obligation to which we are subject. 
The legal basis for this Processing is our legitimate interests, namely the protection and assertion of our legal rights.</span></li> </ol><a id="id.q4lmtv841i6x"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="4"> <li class="c12 c17"><span class="c5 c0">USE OF THIRD PARTY APPLICATIONS</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c5 c0">Ethereum Blockchain</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1 c9">When participating in the DutchX Protocol via the slow.trade Platform, your Wallet address, trade data, and liquidity contribution will be stored on the Ethereum Blockchain. See Section 2 of this Policy.</span></li> <li class="c11"><span class="c16 c0">The data will be stored on the Ethereum Blockchain. Given the technological design of the blockchain, as explained in section 2, this data will become public and it will not likely be possible to delete or change the data at any given time.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="2"> <li class="c8"><span class="c0 c9">Wallet &nbsp;</span><span class="c0">provider</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">As further set out in the Terms &amp; Conditions, to use the slow.trade Platform to trade on the DutchX Protocol, you will have to connect with your Wallet. Recommended Wallets are listed in the Terms &amp; Conditions and may change from time to time. Wallet providers may collect and store Personal Data. 
For example, data collected by MetaMask may include:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1 c9">network information;</span></li> <li class="c6"><span class="c1 c9">the first wallet address created through the MetaMask plugin;</span></li> <li class="c6"><span class="c1 c9">interaction with the site is also documented via a MetaMask Google Analytics account.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2" start="2"> <li class="c11"><span class="c3">If you have chosen the highest permission settings for your web browser, this could also lead to the collection of more Personal Data.</span></li> <li class="c11"><span class="c3">For further information and the applicable data protection provisions of MetaMask, please visit </span><span class="c3"><a class="c4" href="https://www.google.com/url?q=https://metamask.io/privacy.html&amp;sa=D&amp;ust=1554480371647000">&nbsp;</a></span><span class="c3 c10"><a class="c4" href="https://www.google.com/url?q=https://metamask.io/privacy.html&amp;sa=D&amp;ust=1554480371647000">https://metamask.io/privacy.html</a></span><span class="c1 c9">. </span></li> <li class="c11"><span class="c3">Please check the privacy policy of </span><span class="c0 c9">your</span><span class="c1 c9">&nbsp;Wallet provider before connecting it to the slow.trade platform.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="3"> <li class="c8"><span class="c5 c0">Amazon Web Services</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c3">We use Amazon Web Services (AWS) to store log and database data as described in Section 3.2 c). 
For further information and the applicable data protection provisions of AWS, please visit </span><span class="c10 c3"><a class="c4" href="https://www.google.com/url?q=https://aws.amazon.com/privacy/?nc1%3Df_pr&amp;sa=D&amp;ust=1554480371648000">https://aws.amazon.com/privacy/?nc1=f_pr</a></span><span class="c1 c9">.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="4"> <li class="c8"><span class="c5 c0">API</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c3">For information purposes, we note here that the DutchX Protocol offers a range of additional Services, including the provision of an Application Programming Interface site (&ldquo;API&rdquo;) at </span><span class="c18 c3"><a class="c4" href="https://www.google.com/url?q=https://dutchx.d.exchange/api&amp;sa=D&amp;ust=1554480371649000">https://dutchx.d.exchange/api</a></span><span class="c3">, which offers anyone easy access to the </span><span class="c0 c9">public</span><span class="c1 c9">&nbsp;information contained on the Ethereum Blockchain regarding the DutchX Protocol.</span></li> <li class="c11"><span class="c1">The API enables everyone to access the information of the DutchX Protocol smart contracts, including:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1 c9">balances of the users for the different tokens, including locked MGN;</span> </li> <li class="c6"><span class="c1 c9">information about the tokens listed on the DutchX Protocol and the ones that generate MGN;</span></li> <li class="c6"><span class="c1 c9">information about the auctions: sell volumes, buy volumes, start dates, closing prices; and</span></li> <li class="c6"><span class="c1 c9">information about the deposits, sell orders, buy orders, liquidity contribution level applied, dates, and</span></li> <li class="c6"><span class="c1 c9">address of the trade, as well as information about the 
claiming and generation of MGN.</span></li> </ol><a id="id.1i8nb0n35nnz"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="5"> <li class="c12 c17"><span class="c5 c0">SHARING YOUR PERSONAL DATA</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1 c9">We may pass your information to our Business Partners, administration centres, third party service providers, agents, subcontractors and other associated organisations for the purposes of completing tasks and providing our services to you.</span></li> <li class="c8"><span class="c1 c9">In addition, when we use any other third-party service providers, we will disclose only the Personal Data that is necessary to deliver the service required, and we will ensure via contractual obligations that they keep your information secure and do not use it for their own direct marketing purposes.</span></li> <li class="c8"><span class="c1 c9">In addition, we may transfer your Personal Data to a third party as part of a sale of some, or all, of our business and assets or as part of any business restructuring or reorganisation, or if we are under a duty to disclose or share your personal data to comply with any legal obligation. However, we will take steps to ensure that your privacy rights continue to be protected.</span></li> </ol><a id="id.d3kr7z172p0p"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="6"> <li class="c12 c17"><span class="c5 c0">TRANSFERRING YOUR DATA OUTSIDE OF THE EU</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c3 c22">The log data collected when using our service will be stored on our Amazon Web Services (AWS) servers, which are based in the US. 
Amazon is certified under the EU-US Privacy Shield.</span></li> <li class="c8"><span class="c0 c9 c19">As explained above in this Policy, the Ethereum Blockchain is a global decentralised public network and accordingly any personal data written onto the Ethereum Blockchain may be transferred and stored across the globe.</span><span class="c13 c0">&nbsp;</span></li> </ol><a id="id.5cb980ypx0dn"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="7"> <li class="c12 c17"><span class="c13 c0">EXISTENCE OF AUTOMATED DECISION-MAKING</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1">We do not use automated decision-making or profiling when Processing Personal Data.</span></li> </ol><a id="id.ydxsrc1hk8pk"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="8"> <li class="c12 c17"><span class="c13 c0">DATA SECURITY</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1 c9">We have put in place appropriate security measures to prevent your Personal Data from being accidentally lost, used or accessed in an unauthorised way, altered or disclosed. In addition, we limit access to your Personal Data to those employees, agents, contractors and other third parties who have a business need to know. 
They will only process your Personal Data on our instructions and they are subject to a duty of confidentiality.</span></li> <li class="c8"><span class="c1 c9">We have put in place procedures to deal with any suspected Personal Data breach and will notify you and any applicable regulator of a breach where we are legally required to do so.</span></li> </ol><a id="id.e64xadka8wl9"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="9"> <li class="c12 c17"><span class="c5 c0">YOUR RIGHTS AS A DATA SUBJECT</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c7">You have certain rights under applicable legislation, and in particular under Regulation EU 2016/679 (General Data Protection Regulation or &ldquo;</span><span class="c0">GDPR</span><span class="c7">&rdquo;). We explain these below. You can find out more about the GDPR and your rights by accessing the</span><span class="c7"><a class="c4" href="https://www.google.com/url?q=https://ec.europa.eu/info/law/law-topic/data-protection_en&amp;sa=D&amp;ust=1554480371652000">&nbsp;</a></span><span class="c21 c7"><a class="c4" href="https://www.google.com/url?q=https://ec.europa.eu/info/law/law-topic/data-protection_en&amp;sa=D&amp;ust=1554480371652000">European Commission&rsquo;s website</a></span><span class="c1">.</span></li> <li class="c8"><span class="c0 c13">Right to information and access</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c7">You have a right to be informed</span><span class="c0">&nbsp;</span><span class="c1">about the Processing of your Personal Data (and, if you did not give it to us, information as to the source) and this Policy is intended to provide that information. 
Of course, if you have any further questions you can contact us on the above details.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="3"> <li class="c8"><span class="c13 c0">Right to rectification</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">You have the right to have any inaccurate Personal Data about you rectified and to have any incomplete Personal Data about you completed. You may also request that we restrict the Processing of that information.</span></li> <li class="c11"><span class="c1">The accuracy of your information is important to us. If you do not want us to use your Personal Data in the manner set out in this Policy, or need to advise us of any changes to your Personal Data, or would like any more information about the way in which we collect and use your Personal Data, please contact us at the above details.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="4"> <li class="c8"><span class="c13 c0">Right to erasure (right to be &lsquo;forgotten&rsquo;)</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">You have the general right to request the erasure of your Personal Data in the following circumstances:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1">the Personal Data is no longer necessary for the purpose for which it was collected;</span></li> <li class="c6"><span class="c1">you withdraw your consent to consent based Processing and no other legal justification for Processing applies;</span></li> <li class="c6"><span class="c1">you object to Processing for direct marketing purposes;</span></li> <li class="c6"><span class="c1">we unlawfully processed your Personal Data; and</span></li> <li class="c6"><span class="c1">erasure is required to comply with a legal obligation that applies to us.</span> </li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2" 
start="2"> <li class="c11"><span class="c16 c0">However, when interacting with the Ethereum Blockchain, as explained above in this Policy, it will likely not be possible to erase and permanently delete Personal Data which has been written onto the Ethereum Blockchain. In these circumstances, we will use our reasonable endeavors to ensure that all personal data held by us is permanently deleted however, notwithstanding this, your right to erasure may not be able to be fully enforced.</span></li> <li class="c11"><span class="c1">We will proceed to comply with an erasure request without delay unless continued retention is necessary for:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1">exercising the right of freedom of expression and information;</span></li> <li class="c6"><span class="c1">complying with a legal obligation under EU or other applicable law;</span></li> <li class="c6"><span class="c1">the performance of a task carried out in the public interest;</span></li> <li class="c6"><span class="c1">archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes, under certain circumstances; and/or</span></li> <li class="c6"><span class="c1">the establishment, exercise, or defence of legal claims.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="5"> <li class="c8"><span class="c13 c0">Right to restrict Processing and right to object to Processing</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">You have a right to restrict Processing of your Personal Data, such as where:</span></li> <li class="c11"><span class="c1">you contest the accuracy of the Personal Data;</span></li> <li class="c11"><span class="c1">where Processing is unlawful you may request, instead of requesting erasure, that we restrict the use of the unlawfully processed Personal Data;</span></li> <li class="c11"><span 
class="c1">we no longer need to process your Personal Data but need to retain your information for the establishment, exercise, or defence of legal claims. </span></li> <li class="c11"><span class="c1">You also have the right to object to Processing of your Personal Data under certain circumstances, such as where the Processing is based on your consent and you withdraw that consent. This may impact the services we can provide and we will explain this to you if you decide to exercise this right.</span></li> <li class="c11"><span class="c16 c0">However, when interacting with the Ethereum Blockchain, as explained above in this Policy, it will likely not be possible to prevent external parties from Processing any Personal Data which has been written onto the Ethereum Blockchain. In these circumstances, we will use our reasonable endeavors to ensure that all Processing of Personal Data held by us is restricted; notwithstanding this, your right to restrict Processing may not be able to be fully enforced.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="6"> <li class="c8"><span class="c13 c0">Right to data portability</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">Where the legal basis for our Processing is your consent or the Processing is necessary for the performance of a contract to which you are party or in order to take steps at your request prior to entering into a contract, you have a right to receive the Personal Data you provided to us in a structured, commonly used and machine-readable format, or ask us to send it to another person.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="7"> <li class="c8"><span class="c13 c0">Right to freedom from automated decision-making</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">As explained above, we do not use automated decision-making, but where any automated decision-making takes 
place, you have the right to express your point of view and to contest the decision, as well as request that decisions based on automated Processing concerning you or significantly affecting you and based on your Personal Data are made by natural persons, not only by computers.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="8"> <li class="c8"><span class="c13 c0">Right to object to direct marketing (&lsquo;opting out&rsquo;)</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">You have a choice about whether or not you wish to receive information from us.</span></li> <li class="c11"><span class="c1">We will not contact you for marketing purposes unless:</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-3 start" start="1"> <li class="c6"><span class="c1">you have a business relationship with us, and we rely on our legitimate interests as the lawful basis for Processing (as described above);</span></li> <li class="c6"><span class="c1">you have otherwise given your prior consent (such as when you download one of our guides).</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2" start="3"> <li class="c11"><span class="c1">You can change your marketing preferences at any time by contacting us on the above details. On each and every marketing communication, we will always provide the option for you to exercise your right to object to the Processing of your Personal Data for marketing purposes (known as &lsquo;opting-out&rsquo;) by clicking on the &lsquo;unsubscribe&rsquo; button on our marketing emails or choosing a similar opt-out option on any forms we use to collect your data. You may also opt-out at any time by contacting us on the below details.</span></li> <li class="c11"><span class="c1">Please note that any administrative or service-related communications (to offer our services, or notify you of an update to this Policy or the Terms and Conditions, etc.) 
will solely be directed at our clients or business partners, and such communications generally do not offer an option to unsubscribe as they are necessary to provide the services requested. Therefore, please be aware that your ability to opt-out from receiving marketing and promotional materials does not change our right to contact you regarding your use of our site and Platform or as part of a contractual relationship we may have with you.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="9"> <li class="c8"><span class="c13 c0">Right to request access</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">You also have a right to access information we hold about you. We are happy to provide you with details of your Personal Data that we hold or process. To protect your Personal Data, we follow set storage and disclosure procedures, which mean that we will require proof of identity from you prior to disclosing such information. 
You can exercise this right at any time by contacting us on the above details.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="10"> <li class="c8"><span class="c13 c0">Right to withdraw consent</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">Where the legal basis for Processing your Personal Data is your consent, you have the right to withdraw that consent at any time by contacting us on the above details.</span> </li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="11"> <li class="c8"><span class="c13 c0">Raising a complaint about how we have handled your Personal Data</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c1">If you wish to raise a complaint about how we have handled your Personal Data, you can contact us as set out above and we will then investigate the matter.</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1" start="12"> <li class="c8"><span class="c13 c0">Right to lodge a complaint with a relevant supervisory authority</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-2 start" start="1"> <li class="c11"><span class="c7">If we have not responded to you within a reasonable time or if you feel that your complaint has not been resolved to your satisfaction, you are entitled to make a complaint to the Data Protection Commissioner under the Data Protection Act, which is presently the </span><span class="c7">Estonian Data Protection Inspectorate - Andmekaitse Inspektsioon (&ldquo;</span><span class="c0">AI</span><span class="c1">&rdquo;). 
You may contact the AI on the below details:</span></li> </ol> <p class="c12 c15"><span class="c1">Estonian Data Protection Inspectorate (Andmekaitse Inspektsioon) </span></p> <p class="c12 c15"><span class="c1">V&auml;ike-Ameerika 19</span></p> <p class="c12 c15"><span class="c1">10129 Tallinn</span></p> <p class="c12 c15"><span class="c1">Estonia</span></p> <p class="c12 c15"><span class="c1">Tel. +372 6274 135 </span></p> <p class="c12 c15"><span class="c1">Fax +372 6274 137 </span></p> <p class="c12 c15"><span class="c1">e-mail: <EMAIL> </span></p> <p class="c12 c15"><span class="c7">website: </span><span class="c18 c7"><a class="c4" href="https://www.google.com/url?q=http://www.aki.ee/en&amp;sa=D&amp;ust=1554480371659000">http://www.aki.ee/en</a></span> </p> <ol class="c2 lst-kix_wkl89xja70k3-2" start="2"> <li class="c11"><span class="c1">You also have the right to lodge a complaint with the supervisory authority in the country of your habitual residence, place of work, or the place where you allege an infringement of one or more of our rights has taken place, if that is based in the EEA.</span></li> </ol><a id="id.sbk83tq3n553"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="10"> <li class="c12 c17"><span class="c5 c0">STORING PERSONAL DATA</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c3">We retain your</span><span class="c7">&nbsp;Personal Data</span><span class="c1 c9">&nbsp;only for as long as is necessary for the purposes for which we process the information as set out in this Policy.</span></li> <li class="c8"><span class="c1">However, we may retain your Personal Data for a longer period of time where such retention is necessary for compliance with a legal obligation to which we are subject, or in order to protect your vital interests or the vital interests of another natural person.</span></li> </ol><a id="id.gbm96lt67oqt"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="11"> <li class="c12 
c17"><span class="c13 c0">CHANGES TO THIS PRIVACY POLICY</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c1">We may make changes to this Policy from time to time. Where we do so, we will notify those who have a business relationship with us or who are subscribed to our emailing lists, if any, directly of the changes, and change the &lsquo;Last updated&rsquo; date above. We encourage you to review the Policy whenever you access or use our site to stay informed about our information practices and the choices available to you. If you do not agree to the revised Policy, you should discontinue your use of our Site and Platform.</span></li> </ol><a id="id.qz7oi61el199"></a> <ol class="c2 lst-kix_wkl89xja70k3-0" start="12"> <li class="c12 c17"><span class="c13 c0">OUR DETAILS</span></li> </ol> <ol class="c2 lst-kix_wkl89xja70k3-1 start" start="1"> <li class="c8"><span class="c7">This website is owned and operated by d.ex O&Uuml;. We are registered in Estonia under Company registration No. 14553524, and our registered office is located at </span><span class="c1">Ahtri 12, Kesklinna District, 10151 Tallinn, Harju County, Estonia.</span></li> <li class="c8"><span class="c1">If you have any queries concerning your rights under this Privacy Policy, please contact us at <EMAIL>.</span></li> </ol> <p class="c12"><span class="c1">&nbsp;</span></p> <p class="c20 c25"><span class="c1"></span></p> </section>
html
<reponame>LuckeeDev/my-covid-story
import NextLink from 'next/link'
import { Box, Link } from '@chakra-ui/react'
import { list } from '../lib/api/stories'
import StoryFeed from '../components/stories/StoryFeed'
import FloatingRibbon, { Button } from '../components/common/FloatingRibbon'
import SiteLayout from '../layouts/Default'
import { Story } from '@prisma/client'
import HeadTags from '../components/common/HeadTags'

interface MainPageProps {
  stories: Story[]
}

const MainPage = ({ stories }: MainPageProps) => {
  return (
    <>
      <HeadTags>
        <link rel="canonical" href="https://www.mycovidstory.ca" />
      </HeadTags>
      <Box>
        <StoryFeed stories={stories} />
      </Box>
      <FloatingRibbon>
        <NextLink href="/new" passHref>
          <Link>
            <Button my={'5px'}>Add Your Story</Button>
          </Link>
        </NextLink>
      </FloatingRibbon>
    </>
  )
}

export async function getStaticProps() {
  const stories = await list()
  return {
    props: { stories },
    revalidate: 60, // 1 minute
  }
}

const MainPageLayout = ({ children }) => <SiteLayout navPosition="sticky">{children}</SiteLayout>

MainPage.setLayout = MainPageLayout

export default MainPage
typescript
At a time when Indian IT companies’ commentaries on their hiring plans for this fiscal have been muted, the companies have been adding employees inorganically. As per publicly available numbers, IT companies have inducted at least 5,550 employees onto their payrolls inorganically, a figure that analysts say could go past 8,000 if other deals, whose numbers are not in the public domain, are taken into account. The headcount additions are coming either through rebadging deals or through acquisitions. Rebadging is the transfer of employees from clients to IT vendors and is often part of the deals that IT companies bag from clients. As per Jefferies, Indian IT services firms saw a decline of 22,000 in their net aggregate headcount in the first quarter of FY24, compared to a decline of 9,000 in the March quarter of FY23. Take the case of the Infosys-Danske Bank deal, where the IT giant said in June that it will acquire 1,400 employees of the bank’s IT centre in India. Similarly, it is adding another 400 employees from its latest deal with Liberty Global. HCLTech this week also announced inducting 400 employees from its deal with Cloud Software Group. In August, the company completed the acquisition of a 100% stake in German automotive engineering services provider ASAP Group for $279 million. ASAP Group has 1,600 employees across different locations. UST this week acquired a Dallas-based telecom engineering firm, MobileComm, and integrated more than 1,300 employees to strengthen its telecommunications practice. Similarly, last month Xoriant acquired Thoucentric, a Bengaluru-headquartered specialised consulting firm, along with its 450 employees. Even other midcaps like Happiest Minds, Mphasis, Sonata and LTTS have made acquisitions this calendar year.
english
package net.krazyweb.starmodmanager.data;

import java.nio.file.Path;

public class ModFile {

	private Path path;
	private boolean json;
	private boolean ignored;
	private boolean autoMerged; // The file uses the official "__merge" system.

	public Path getPath() {
		return path;
	}

	public void setPath(final Path path) {
		this.path = path;
	}

	public boolean isJson() {
		return json;
	}

	public void setJson(boolean json) {
		this.json = json;
	}

	public boolean isIgnored() {
		return ignored;
	}

	public void setIgnored(boolean ignored) {
		this.ignored = ignored;
	}

	public boolean isAutoMerged() {
		return autoMerged;
	}

	public boolean isModinfo() {
		return path.toString().endsWith(".modinfo");
	}

	public void setAutoMerged(boolean autoMerged) {
		this.autoMerged = autoMerged;
	}

}
java
{
  "name": "cordova-windows-barcode",
  "version": "0.0.1",
  "description": "Cordova Windows barcode",
  "cordova": {
    "id": "cordova-windows-barcode",
    "platforms": [
      "windows"
    ]
  },
  "keywords": [
    "cordova",
    "barcode",
    "ecosystem:cordova",
    "cordova-windows"
  ],
  "author": "Dynamsoft",
  "license": "MIT"
}
json
<reponame>yiannist/pkg-ganeti<gh_stars>0
#
#
# Copyright (C) 2006, 2007, 2010, 2011, 2012, 2013, 2014 Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
# TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Cluster related commands"""

# pylint: disable=W0401,W0613,W0614,C0103
# W0401: Wildcard import ganeti.cli
# W0613: Unused argument, since all functions follow the same API
# W0614: Unused import %s from wildcard import (since we need cli)
# C0103: Invalid name gnt-cluster

from cStringIO import StringIO
import os
import time
import OpenSSL
import tempfile
import itertools

from ganeti.cli import *
from ganeti import bootstrap
from ganeti import compat
from ganeti import constants
from ganeti import errors
from ganeti import netutils
from ganeti import objects
from ganeti import opcodes
from ganeti import pathutils
from ganeti import qlang
from ganeti import serializer
from ganeti import ssconf
from ganeti import ssh
from ganeti import uidpool
from ganeti import utils
from ganeti.client import base


ON_OPT = cli_option("--on", default=False,
                    action="store_true", dest="on",
                    help="Recover from an EPO")

GROUPS_OPT = cli_option("--groups", default=False,
                        action="store_true", dest="groups",
                        help="Arguments are node groups instead of nodes")

FORCE_FAILOVER = cli_option("--yes-do-it", dest="yes_do_it",
                            help="Override interactive check for --no-voting",
                            default=False, action="store_true")

FORCE_DISTRIBUTION = cli_option("--yes-do-it", dest="yes_do_it",
                                help="Unconditionally distribute the"
                                " configuration, even if the queue"
                                " is drained",
                                default=False, action="store_true")

TO_OPT = cli_option("--to", default=None, type="string",
                    help="The Ganeti version to upgrade to")

RESUME_OPT = cli_option("--resume", default=False, action="store_true",
                        help="Resume any pending Ganeti upgrades")

_EPO_PING_INTERVAL = 30  # 30 seconds between pings
_EPO_PING_TIMEOUT = 1  # 1 second
_EPO_REACHABLE_TIMEOUT = 15 * 60  # 15 minutes


def _InitEnabledDiskTemplates(opts):
  """Initialize the list of enabled disk templates.
  """
  if opts.enabled_disk_templates:
    return opts.enabled_disk_templates.split(",")
  else:
    return constants.DEFAULT_ENABLED_DISK_TEMPLATES


def _InitVgName(opts, enabled_disk_templates):
  """Initialize the volume group name.

  @type enabled_disk_templates: list of strings
  @param enabled_disk_templates: cluster-wide enabled disk templates

  """
  vg_name = None
  if opts.vg_name is not None:
    vg_name = opts.vg_name
    if vg_name:
      if not utils.IsLvmEnabled(enabled_disk_templates):
        ToStdout("You specified a volume group with --vg-name, but you did not"
                 " enable any disk template that uses lvm.")
    elif utils.IsLvmEnabled(enabled_disk_templates):
      raise errors.OpPrereqError(
          "LVM disk templates are enabled, but vg name not set.")
  elif utils.IsLvmEnabled(enabled_disk_templates):
    vg_name = constants.DEFAULT_VG
  return vg_name


def _InitDrbdHelper(opts, enabled_disk_templates):
  """Initialize the DRBD usermode helper.

  """
  drbd_enabled = constants.DT_DRBD8 in enabled_disk_templates

  if not drbd_enabled and opts.drbd_helper is not None:
    ToStdout("Note: You specified a DRBD usermode helper, while DRBD storage"
             " is not enabled.")

  if drbd_enabled:
    if opts.drbd_helper is None:
      return constants.DEFAULT_DRBD_HELPER
    if opts.drbd_helper == '':
      raise errors.OpPrereqError(
          "Unsetting the drbd usermode helper while enabling DRBD is not"
          " allowed.")

  return opts.drbd_helper


@UsesRPC
def InitCluster(opts, args):
  """Initialize the cluster.
@param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the desired cluster name @rtype: int @return: the desired exit code """ enabled_disk_templates = _InitEnabledDiskTemplates(opts) try: vg_name = _InitVgName(opts, enabled_disk_templates) drbd_helper = _InitDrbdHelper(opts, enabled_disk_templates) except errors.OpPrereqError, e: ToStderr(str(e)) return 1 master_netdev = opts.master_netdev if master_netdev is None: nic_mode = opts.nicparams.get(constants.NIC_MODE, None) if not nic_mode: # default case, use bridging master_netdev = constants.DEFAULT_BRIDGE elif nic_mode == constants.NIC_MODE_OVS: # default ovs is different from default bridge master_netdev = constants.DEFAULT_OVS opts.nicparams[constants.NIC_LINK] = constants.DEFAULT_OVS hvlist = opts.enabled_hypervisors if hvlist is None: hvlist = constants.DEFAULT_ENABLED_HYPERVISOR hvlist = hvlist.split(",") hvparams = dict(opts.hvparams) beparams = opts.beparams nicparams = opts.nicparams diskparams = dict(opts.diskparams) # check the disk template types here, as we cannot rely on the type check done # by the opcode parameter types diskparams_keys = set(diskparams.keys()) if not (diskparams_keys <= constants.DISK_TEMPLATES): unknown = utils.NiceSort(diskparams_keys - constants.DISK_TEMPLATES) ToStderr("Disk templates unknown: %s" % utils.CommaJoin(unknown)) return 1 # prepare beparams dict beparams = objects.FillDict(constants.BEC_DEFAULTS, beparams) utils.ForceDictType(beparams, constants.BES_PARAMETER_COMPAT) # prepare nicparams dict nicparams = objects.FillDict(constants.NICC_DEFAULTS, nicparams) utils.ForceDictType(nicparams, constants.NICS_PARAMETER_TYPES) # prepare ndparams dict if opts.ndparams is None: ndparams = dict(constants.NDC_DEFAULTS) else: ndparams = objects.FillDict(constants.NDC_DEFAULTS, opts.ndparams) utils.ForceDictType(ndparams, constants.NDS_PARAMETER_TYPES) # prepare hvparams dict for hv in constants.HYPER_TYPES: if hv 
not in hvparams: hvparams[hv] = {} hvparams[hv] = objects.FillDict(constants.HVC_DEFAULTS[hv], hvparams[hv]) utils.ForceDictType(hvparams[hv], constants.HVS_PARAMETER_TYPES) # prepare diskparams dict for templ in constants.DISK_TEMPLATES: if templ not in diskparams: diskparams[templ] = {} diskparams[templ] = objects.FillDict(constants.DISK_DT_DEFAULTS[templ], diskparams[templ]) utils.ForceDictType(diskparams[templ], constants.DISK_DT_TYPES) # prepare ipolicy dict ipolicy = CreateIPolicyFromOpts( ispecs_mem_size=opts.ispecs_mem_size, ispecs_cpu_count=opts.ispecs_cpu_count, ispecs_disk_count=opts.ispecs_disk_count, ispecs_disk_size=opts.ispecs_disk_size, ispecs_nic_count=opts.ispecs_nic_count, minmax_ispecs=opts.ipolicy_bounds_specs, std_ispecs=opts.ipolicy_std_specs, ipolicy_disk_templates=opts.ipolicy_disk_templates, ipolicy_vcpu_ratio=opts.ipolicy_vcpu_ratio, ipolicy_spindle_ratio=opts.ipolicy_spindle_ratio, fill_all=True) if opts.candidate_pool_size is None: opts.candidate_pool_size = constants.MASTER_POOL_SIZE_DEFAULT if opts.mac_prefix is None: opts.mac_prefix = constants.DEFAULT_MAC_PREFIX uid_pool = opts.uid_pool if uid_pool is not None: uid_pool = uidpool.ParseUidPool(uid_pool) if opts.prealloc_wipe_disks is None: opts.prealloc_wipe_disks = False external_ip_setup_script = opts.use_external_mip_script if external_ip_setup_script is None: external_ip_setup_script = False try: primary_ip_version = int(opts.primary_ip_version) except (ValueError, TypeError), err: ToStderr("Invalid primary ip version value: %s" % str(err)) return 1 master_netmask = opts.master_netmask try: if master_netmask is not None: master_netmask = int(master_netmask) except (ValueError, TypeError), err: ToStderr("Invalid master netmask value: %s" % str(err)) return 1 if opts.disk_state: disk_state = utils.FlatToDict(opts.disk_state) else: disk_state = {} hv_state = dict(opts.hv_state) if opts.install_image: install_image = opts.install_image else: install_image = "" if opts.zeroing_image: 
zeroing_image = opts.zeroing_image else: zeroing_image = "" compression_tools = _GetCompressionTools(opts) default_ialloc_params = opts.default_iallocator_params if opts.enabled_user_shutdown: enabled_user_shutdown = True else: enabled_user_shutdown = False bootstrap.InitCluster(cluster_name=args[0], secondary_ip=opts.secondary_ip, vg_name=vg_name, mac_prefix=opts.mac_prefix, master_netmask=master_netmask, master_netdev=master_netdev, file_storage_dir=opts.file_storage_dir, shared_file_storage_dir=opts.shared_file_storage_dir, gluster_storage_dir=opts.gluster_storage_dir, enabled_hypervisors=hvlist, hvparams=hvparams, beparams=beparams, nicparams=nicparams, ndparams=ndparams, diskparams=diskparams, ipolicy=ipolicy, candidate_pool_size=opts.candidate_pool_size, modify_etc_hosts=opts.modify_etc_hosts, modify_ssh_setup=opts.modify_ssh_setup, maintain_node_health=opts.maintain_node_health, drbd_helper=drbd_helper, uid_pool=uid_pool, default_iallocator=opts.default_iallocator, default_iallocator_params=default_ialloc_params, primary_ip_version=primary_ip_version, prealloc_wipe_disks=opts.prealloc_wipe_disks, use_external_mip_script=external_ip_setup_script, hv_state=hv_state, disk_state=disk_state, enabled_disk_templates=enabled_disk_templates, install_image=install_image, zeroing_image=zeroing_image, compression_tools=compression_tools, enabled_user_shutdown=enabled_user_shutdown, ) op = opcodes.OpClusterPostInit() SubmitOpCode(op, opts=opts) return 0 @UsesRPC def DestroyCluster(opts, args): """Destroy the cluster. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ if not opts.yes_do_it: ToStderr("Destroying a cluster is irreversible. 
If you really want" " to destroy this cluster, supply the --yes-do-it option.") return 1 op = opcodes.OpClusterDestroy() master_uuid = SubmitOpCode(op, opts=opts) # if we reached this, the opcode didn't fail; we can proceed to # shutdown all the daemons bootstrap.FinalizeClusterDestroy(master_uuid) return 0 def RenameCluster(opts, args): """Rename the cluster. @param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the new cluster name @rtype: int @return: the desired exit code """ cl = GetClient() (cluster_name, ) = cl.QueryConfigValues(["cluster_name"]) new_name = args[0] if not opts.force: usertext = ("This will rename the cluster from '%s' to '%s'. If you are" " connected over the network to the cluster name, the" " operation is very dangerous as the IP address will be" " removed from the node and the change may not go through." " Continue?") % (cluster_name, new_name) if not AskUser(usertext): return 1 op = opcodes.OpClusterRename(name=new_name) result = SubmitOpCode(op, opts=opts, cl=cl) if result: ToStdout("Cluster renamed from '%s' to '%s'", cluster_name, result) return 0 def ActivateMasterIp(opts, args): """Activates the master IP. """ op = opcodes.OpClusterActivateMasterIp() SubmitOpCode(op) return 0 def DeactivateMasterIp(opts, args): """Deactivates the master IP. """ if not opts.confirm: usertext = ("This will disable the master IP. All the open connections to" " the master IP will be closed. To reach the master you will" " need to use its node IP." " Continue?") if not AskUser(usertext): return 1 op = opcodes.OpClusterDeactivateMasterIp() SubmitOpCode(op) return 0 def RedistributeConfig(opts, args): """Forces push of the cluster configuration. 
@param opts: the command line options selected by the user @type args: list @param args: empty list @rtype: int @return: the desired exit code """ op = opcodes.OpClusterRedistConf() if opts.yes_do_it: SubmitOpCodeToDrainedQueue(op) else: SubmitOrSend(op, opts) return 0 def ShowClusterVersion(opts, args): """Write version of ganeti software to the standard output. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ cl = GetClient() result = cl.QueryClusterInfo() ToStdout("Software version: %s", result["software_version"]) ToStdout("Internode protocol: %s", result["protocol_version"]) ToStdout("Configuration format: %s", result["config_version"]) ToStdout("OS api version: %s", result["os_api_version"]) ToStdout("Export interface: %s", result["export_version"]) ToStdout("VCS version: %s", result["vcs_version"]) return 0 def ShowClusterMaster(opts, args): """Write name of master node to the standard output. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ master = bootstrap.GetMaster() ToStdout(master) return 0 def _FormatGroupedParams(paramsdict, roman=False): """Format Grouped parameters (be, nic, disk) by group. @type paramsdict: dict of dicts @param paramsdict: {group: {param: value, ...}, ...} @rtype: dict of dicts @return: copy of the input dictionaries with strings as values """ ret = {} for (item, val) in paramsdict.items(): if isinstance(val, dict): ret[item] = _FormatGroupedParams(val, roman=roman) elif roman and isinstance(val, int): ret[item] = compat.TryToRoman(val) else: ret[item] = str(val) return ret def ShowClusterConfig(opts, args): """Shows cluster information. 
@param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ cl = GetClient() result = cl.QueryClusterInfo() if result["tags"]: tags = utils.CommaJoin(utils.NiceSort(result["tags"])) else: tags = "(none)" if result["reserved_lvs"]: reserved_lvs = utils.CommaJoin(result["reserved_lvs"]) else: reserved_lvs = "(none)" enabled_hv = result["enabled_hypervisors"] hvparams = dict((k, v) for k, v in result["hvparams"].iteritems() if k in enabled_hv) info = [ ("Cluster name", result["name"]), ("Cluster UUID", result["uuid"]), ("Creation time", utils.FormatTime(result["ctime"])), ("Modification time", utils.FormatTime(result["mtime"])), ("Master node", result["master"]), ("Architecture (this node)", "%s (%s)" % (result["architecture"][0], result["architecture"][1])), ("Tags", tags), ("Default hypervisor", result["default_hypervisor"]), ("Enabled hypervisors", utils.CommaJoin(enabled_hv)), ("Hypervisor parameters", _FormatGroupedParams(hvparams, opts.roman_integers)), ("OS-specific hypervisor parameters", _FormatGroupedParams(result["os_hvp"], opts.roman_integers)), ("OS parameters", _FormatGroupedParams(result["osparams"], opts.roman_integers)), ("Hidden OSes", utils.CommaJoin(result["hidden_os"])), ("Blacklisted OSes", utils.CommaJoin(result["blacklisted_os"])), ("Cluster parameters", [ ("candidate pool size", compat.TryToRoman(result["candidate_pool_size"], convert=opts.roman_integers)), ("maximal number of jobs running simultaneously", compat.TryToRoman(result["max_running_jobs"], convert=opts.roman_integers)), ("maximal number of jobs simultaneously tracked by the scheduler", compat.TryToRoman(result["max_tracked_jobs"], convert=opts.roman_integers)), ("mac prefix", result["mac_prefix"]), ("master netdev", result["master_netdev"]), ("master netmask", compat.TryToRoman(result["master_netmask"], opts.roman_integers)), ("use external master IP address setup script", 
result["use_external_mip_script"]), ("lvm volume group", result["volume_group_name"]), ("lvm reserved volumes", reserved_lvs), ("drbd usermode helper", result["drbd_usermode_helper"]), ("file storage path", result["file_storage_dir"]), ("shared file storage path", result["shared_file_storage_dir"]), ("gluster storage path", result["gluster_storage_dir"]), ("maintenance of node health", result["maintain_node_health"]), ("uid pool", uidpool.FormatUidPool(result["uid_pool"])), ("default instance allocator", result["default_iallocator"]), ("default instance allocator parameters", result["default_iallocator_params"]), ("primary ip version", compat.TryToRoman(result["primary_ip_version"], opts.roman_integers)), ("preallocation wipe disks", result["prealloc_wipe_disks"]), ("OS search path", utils.CommaJoin(pathutils.OS_SEARCH_PATH)), ("ExtStorage Providers search path", utils.CommaJoin(pathutils.ES_SEARCH_PATH)), ("enabled disk templates", utils.CommaJoin(result["enabled_disk_templates"])), ("install image", result["install_image"]), ("instance communication network", result["instance_communication_network"]), ("zeroing image", result["zeroing_image"]), ("compression tools", result["compression_tools"]), ("enabled user shutdown", result["enabled_user_shutdown"]), ]), ("Default node parameters", _FormatGroupedParams(result["ndparams"], roman=opts.roman_integers)), ("Default instance parameters", _FormatGroupedParams(result["beparams"], roman=opts.roman_integers)), ("Default nic parameters", _FormatGroupedParams(result["nicparams"], roman=opts.roman_integers)), ("Default disk parameters", _FormatGroupedParams(result["diskparams"], roman=opts.roman_integers)), ("Instance policy - limits for instances", FormatPolicyInfo(result["ipolicy"], None, True, opts.roman_integers)), ] PrintGenericInfo(info) return 0 def ClusterCopyFile(opts, args): """Copy a file from master to some nodes. 
@param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the path of the file to be copied @rtype: int @return: the desired exit code """ filename = args[0] filename = os.path.abspath(filename) if not os.path.exists(filename): raise errors.OpPrereqError("No such filename '%s'" % filename, errors.ECODE_INVAL) cl = GetClient() qcl = GetClient() try: cluster_name = cl.QueryConfigValues(["cluster_name"])[0] results = GetOnlineNodes(nodes=opts.nodes, cl=qcl, filter_master=True, secondary_ips=opts.use_replication_network, nodegroup=opts.nodegroup) ports = GetNodesSshPorts(opts.nodes, qcl) finally: cl.Close() qcl.Close() srun = ssh.SshRunner(cluster_name) for (node, port) in zip(results, ports): if not srun.CopyFileToNode(node, port, filename): ToStderr("Copy of file %s to node %s:%d failed", filename, node, port) return 0 def RunClusterCommand(opts, args): """Run a command on some nodes. @param opts: the command line options selected by the user @type args: list @param args: should contain the command to be run and its arguments @rtype: int @return: the desired exit code """ cl = GetClient() qcl = GetClient() command = " ".join(args) nodes = GetOnlineNodes(nodes=opts.nodes, cl=qcl, nodegroup=opts.nodegroup) ports = GetNodesSshPorts(nodes, qcl) cluster_name, master_node = cl.QueryConfigValues(["cluster_name", "master_node"]) srun = ssh.SshRunner(cluster_name=cluster_name) # Make sure master node is at list end if master_node in nodes: nodes.remove(master_node) nodes.append(master_node) for (name, port) in zip(nodes, ports): result = srun.Run(name, constants.SSH_LOGIN_USER, command, port=port) if opts.failure_only and result.exit_code == constants.EXIT_SUCCESS: # Do not output anything for successful commands continue ToStdout("------------------------------------------------") if opts.show_machine_names: for line in result.output.splitlines(): ToStdout("%s: %s", name, line) else: ToStdout("node: %s", name) 
ToStdout("%s", result.output) ToStdout("return code = %s", result.exit_code) return 0 def VerifyCluster(opts, args): """Verify integrity of cluster, performing various test on nodes. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ skip_checks = [] if opts.skip_nplusone_mem: skip_checks.append(constants.VERIFY_NPLUSONE_MEM) cl = GetClient() op = opcodes.OpClusterVerify(verbose=opts.verbose, error_codes=opts.error_codes, debug_simulate_errors=opts.simulate_errors, skip_checks=skip_checks, ignore_errors=opts.ignore_errors, group_name=opts.nodegroup) result = SubmitOpCode(op, cl=cl, opts=opts) # Keep track of submitted jobs jex = JobExecutor(cl=cl, opts=opts) for (status, job_id) in result[constants.JOB_IDS_KEY]: jex.AddJobId(None, status, job_id) results = jex.GetResults() (bad_jobs, bad_results) = \ map(len, # Convert iterators to lists map(list, # Count errors map(compat.partial(itertools.ifilterfalse, bool), # Convert result to booleans in a tuple zip(*((job_success, len(op_results) == 1 and op_results[0]) for (job_success, op_results) in results))))) if bad_jobs == 0 and bad_results == 0: rcode = constants.EXIT_SUCCESS else: rcode = constants.EXIT_FAILURE if bad_jobs > 0: ToStdout("%s job(s) failed while verifying the cluster.", bad_jobs) return rcode def VerifyDisks(opts, args): """Verify integrity of cluster disks. 
@param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ cl = GetClient() op = opcodes.OpClusterVerifyDisks() result = SubmitOpCode(op, cl=cl, opts=opts) # Keep track of submitted jobs jex = JobExecutor(cl=cl, opts=opts) for (status, job_id) in result[constants.JOB_IDS_KEY]: jex.AddJobId(None, status, job_id) retcode = constants.EXIT_SUCCESS for (status, result) in jex.GetResults(): if not status: ToStdout("Job failed: %s", result) continue ((bad_nodes, instances, missing), ) = result for node, text in bad_nodes.items(): ToStdout("Error gathering data on node %s: %s", node, utils.SafeEncode(text[-400:])) retcode = constants.EXIT_FAILURE ToStdout("You need to fix these nodes first before fixing instances") for iname in instances: if iname in missing: continue op = opcodes.OpInstanceActivateDisks(instance_name=iname) try: ToStdout("Activating disks for instance '%s'", iname) SubmitOpCode(op, opts=opts, cl=cl) except errors.GenericError, err: nret, msg = FormatError(err) retcode |= nret ToStderr("Error activating disks for instance %s: %s", iname, msg) if missing: for iname, ival in missing.iteritems(): all_missing = compat.all(x[0] in bad_nodes for x in ival) if all_missing: ToStdout("Instance %s cannot be verified as it lives on" " broken nodes", iname) else: ToStdout("Instance %s has missing logical volumes:", iname) ival.sort() for node, vol in ival: if node in bad_nodes: ToStdout("\tbroken node %s /dev/%s", node, vol) else: ToStdout("\t%s /dev/%s", node, vol) ToStdout("You need to replace or recreate disks for all the above" " instances if this message persists after fixing broken nodes.") retcode = constants.EXIT_FAILURE elif not instances: ToStdout("No disks need to be activated.") return retcode def RepairDiskSizes(opts, args): """Verify sizes of cluster disks. 
  @param opts: the command line options selected by the user
  @type args: list
  @param args: optional list of instances to restrict check to
  @rtype: int
  @return: the desired exit code

  """
  op = opcodes.OpClusterRepairDiskSizes(instances=args)
  SubmitOpCode(op, opts=opts)


@UsesRPC
def MasterFailover(opts, args):
  """Failover the master node.

  This command, when run on a non-master node, will cause the current
  master to cease being master, and the non-master to become new master.

  @param opts: the command line options selected by the user
  @type args: list
  @param args: should be an empty list
  @rtype: int
  @return: the desired exit code

  """
  if opts.no_voting and not opts.yes_do_it:
    usertext = ("This will perform the failover even if most other nodes"
                " are down, or if this node is outdated. This is dangerous"
                " as it can lead to a non-consistent cluster. Check the"
                " gnt-cluster(8) man page before proceeding. Continue?")
    if not AskUser(usertext):
      return 1

  rvalue, msgs = bootstrap.MasterFailover(no_voting=opts.no_voting)
  for msg in msgs:
    ToStderr(msg)
  return rvalue


def MasterPing(opts, args):
  """Checks if the master is alive.

  @param opts: the command line options selected by the user
  @type args: list
  @param args: should be an empty list
  @rtype: int
  @return: the desired exit code

  """
  try:
    cl = GetClient()
    cl.QueryClusterInfo()
    return 0
  except Exception:  # pylint: disable=W0703
    return 1


def SearchTags(opts, args):
  """Searches the tags on all the cluster.

  @param opts: the command line options selected by the user
  @type args: list
  @param args: should contain only one element, the tag pattern
  @rtype: int
  @return: the desired exit code

  """
  op = opcodes.OpTagsSearch(pattern=args[0])
  result = SubmitOpCode(op, opts=opts)
  if not result:
    return 1
  result = list(result)
  result.sort()
  for path, tag in result:
    ToStdout("%s %s", path, tag)


def _ReadAndVerifyCert(cert_filename, verify_private_key=False):
  """Reads and verifies an X509 certificate.
@type cert_filename: string @param cert_filename: the path of the file containing the certificate to verify encoded in PEM format @type verify_private_key: bool @param verify_private_key: whether to verify the private key in addition to the public certificate @rtype: string @return: a string containing the PEM-encoded certificate. """ try: pem = utils.ReadFile(cert_filename) except IOError, err: raise errors.X509CertError(cert_filename, "Unable to read certificate: %s" % str(err)) try: OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, pem) except Exception, err: raise errors.X509CertError(cert_filename, "Unable to load certificate: %s" % str(err)) if verify_private_key: try: OpenSSL.crypto.load_privatekey(OpenSSL.crypto.FILETYPE_PEM, pem) except Exception, err: raise errors.X509CertError(cert_filename, "Unable to load private key: %s" % str(err)) return pem def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911 rapi_cert_filename, new_spice_cert, spice_cert_filename, spice_cacert_filename, new_confd_hmac_key, new_cds, cds_filename, force, new_node_cert): """Renews cluster certificates, keys and secrets. 
@type new_cluster_cert: bool @param new_cluster_cert: Whether to generate a new cluster certificate @type new_rapi_cert: bool @param new_rapi_cert: Whether to generate a new RAPI certificate @type rapi_cert_filename: string @param rapi_cert_filename: Path to file containing new RAPI certificate @type new_spice_cert: bool @param new_spice_cert: Whether to generate a new SPICE certificate @type spice_cert_filename: string @param spice_cert_filename: Path to file containing new SPICE certificate @type spice_cacert_filename: string @param spice_cacert_filename: Path to file containing the certificate of the CA that signed the SPICE certificate @type new_confd_hmac_key: bool @param new_confd_hmac_key: Whether to generate a new HMAC key @type new_cds: bool @param new_cds: Whether to generate a new cluster domain secret @type cds_filename: string @param cds_filename: Path to file containing new cluster domain secret @type force: bool @param force: Whether to ask user for confirmation @type new_node_cert: string @param new_node_cert: Whether to generate new node certificates """ if new_rapi_cert and rapi_cert_filename: ToStderr("Only one of the --new-rapi-certificate and --rapi-certificate" " options can be specified at the same time.") return 1 if new_cds and cds_filename: ToStderr("Only one of the --new-cluster-domain-secret and" " --cluster-domain-secret options can be specified at" " the same time.") return 1 if new_spice_cert and (spice_cert_filename or spice_cacert_filename): ToStderr("When using --new-spice-certificate, the --spice-certificate" " and --spice-ca-certificate must not be used.") return 1 if bool(spice_cacert_filename) ^ bool(spice_cert_filename): ToStderr("Both --spice-certificate and --spice-ca-certificate must be" " specified.") return 1 rapi_cert_pem, spice_cert_pem, spice_cacert_pem = (None, None, None) try: if rapi_cert_filename: rapi_cert_pem = _ReadAndVerifyCert(rapi_cert_filename, True) if spice_cert_filename: spice_cert_pem = 
_ReadAndVerifyCert(spice_cert_filename, True) spice_cacert_pem = _ReadAndVerifyCert(spice_cacert_filename) except errors.X509CertError, err: ToStderr("Unable to load X509 certificate from %s: %s", err[0], err[1]) return 1 if cds_filename: try: cds = utils.ReadFile(cds_filename) except Exception, err: # pylint: disable=W0703 ToStderr("Can't load new cluster domain secret from %s: %s" % (cds_filename, str(err))) return 1 else: cds = None if not force: usertext = ("This requires all daemons on all nodes to be restarted and" " may take some time. Continue?") if not AskUser(usertext): return 1 def _RenewCryptoInner(ctx): ctx.feedback_fn("Updating certificates and keys") # Note: the node certificate will be generated in the LU bootstrap.GenerateClusterCrypto(new_cluster_cert, new_rapi_cert, new_spice_cert, new_confd_hmac_key, new_cds, rapi_cert_pem=rapi_cert_pem, spice_cert_pem=spice_cert_pem, spice_cacert_pem=spice_cacert_pem, cds=cds) files_to_copy = [] if new_cluster_cert: files_to_copy.append(pathutils.NODED_CERT_FILE) if new_rapi_cert or rapi_cert_pem: files_to_copy.append(pathutils.RAPI_CERT_FILE) if new_spice_cert or spice_cert_pem: files_to_copy.append(pathutils.SPICE_CERT_FILE) files_to_copy.append(pathutils.SPICE_CACERT_FILE) if new_confd_hmac_key: files_to_copy.append(pathutils.CONFD_HMAC_KEY) if new_cds or cds: files_to_copy.append(pathutils.CLUSTER_DOMAIN_SECRET_FILE) if files_to_copy: for node_name in ctx.nonmaster_nodes: port = ctx.ssh_ports[node_name] ctx.feedback_fn("Copying %s to %s:%d" % (", ".join(files_to_copy), node_name, port)) for file_name in files_to_copy: ctx.ssh.CopyFileToNode(node_name, port, file_name) RunWhileClusterStopped(ToStdout, _RenewCryptoInner) ToStdout("All requested certificates and keys have been replaced." 
" Running \"gnt-cluster verify\" now is recommended.") if new_node_cert: cl = GetClient() renew_op = opcodes.OpClusterRenewCrypto() SubmitOpCode(renew_op, cl=cl) return 0 def RenewCrypto(opts, args): """Renews cluster certificates, keys and secrets. """ return _RenewCrypto(opts.new_cluster_cert, opts.new_rapi_cert, opts.rapi_cert, opts.new_spice_cert, opts.spice_cert, opts.spice_cacert, opts.new_confd_hmac_key, opts.new_cluster_domain_secret, opts.cluster_domain_secret, opts.force, opts.new_node_cert) def _GetEnabledDiskTemplates(opts): """Determine the list of enabled disk templates. """ if opts.enabled_disk_templates: return opts.enabled_disk_templates.split(",") else: return None def _GetVgName(opts, enabled_disk_templates): """Determine the volume group name. @type enabled_disk_templates: list of strings @param enabled_disk_templates: cluster-wide enabled disk-templates """ # consistency between vg name and enabled disk templates vg_name = None if opts.vg_name is not None: vg_name = opts.vg_name if enabled_disk_templates: if vg_name and not utils.IsLvmEnabled(enabled_disk_templates): ToStdout("You specified a volume group with --vg-name, but you did not" " enable any of the following lvm-based disk templates: %s" % utils.CommaJoin(constants.DTS_LVM)) return vg_name def _GetDrbdHelper(opts, enabled_disk_templates): """Determine the DRBD usermode helper. """ drbd_helper = opts.drbd_helper if enabled_disk_templates: drbd_enabled = constants.DT_DRBD8 in enabled_disk_templates if not drbd_enabled and opts.drbd_helper: ToStdout("You specified a DRBD usermode helper with " " --drbd-usermode-helper while DRBD is not enabled.") return drbd_helper def _GetCompressionTools(opts): """Determine the list of custom compression tools. 
""" if opts.compression_tools: return opts.compression_tools.split(",") elif opts.compression_tools is None: return None # To note the parameter was not provided else: return constants.IEC_DEFAULT_TOOLS # Resetting to default def SetClusterParams(opts, args): """Modify the cluster. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ if not (opts.vg_name is not None or opts.drbd_helper is not None or opts.enabled_hypervisors or opts.hvparams or opts.beparams or opts.nicparams or opts.ndparams or opts.diskparams or opts.candidate_pool_size is not None or opts.max_running_jobs is not None or opts.max_tracked_jobs is not None or opts.uid_pool is not None or opts.maintain_node_health is not None or opts.add_uids is not None or opts.remove_uids is not None or opts.default_iallocator is not None or opts.default_iallocator_params or opts.reserved_lvs is not None or opts.mac_prefix is not None or opts.master_netdev is not None or opts.master_netmask is not None or opts.use_external_mip_script is not None or opts.prealloc_wipe_disks is not None or opts.hv_state or opts.enabled_disk_templates or opts.disk_state or opts.ipolicy_bounds_specs is not None or opts.ipolicy_std_specs is not None or opts.ipolicy_disk_templates is not None or opts.ipolicy_vcpu_ratio is not None or opts.ipolicy_spindle_ratio is not None or opts.modify_etc_hosts is not None or opts.file_storage_dir is not None or opts.install_image is not None or opts.instance_communication_network is not None or opts.zeroing_image is not None or opts.shared_file_storage_dir is not None or opts.compression_tools is not None or opts.shared_file_storage_dir is not None or opts.enabled_user_shutdown is not None): ToStderr("Please give at least one of the parameters.") return 1 enabled_disk_templates = _GetEnabledDiskTemplates(opts) vg_name = _GetVgName(opts, enabled_disk_templates) try: drbd_helper = 
_GetDrbdHelper(opts, enabled_disk_templates) except errors.OpPrereqError, e: ToStderr(str(e)) return 1 hvlist = opts.enabled_hypervisors if hvlist is not None: hvlist = hvlist.split(",") # a list of (name, dict) we can pass directly to dict() (or []) hvparams = dict(opts.hvparams) for hv_params in hvparams.values(): utils.ForceDictType(hv_params, constants.HVS_PARAMETER_TYPES) diskparams = dict(opts.diskparams) for dt_params in diskparams.values(): utils.ForceDictType(dt_params, constants.DISK_DT_TYPES) beparams = opts.beparams utils.ForceDictType(beparams, constants.BES_PARAMETER_COMPAT) nicparams = opts.nicparams utils.ForceDictType(nicparams, constants.NICS_PARAMETER_TYPES) ndparams = opts.ndparams if ndparams is not None: utils.ForceDictType(ndparams, constants.NDS_PARAMETER_TYPES) ipolicy = CreateIPolicyFromOpts( minmax_ispecs=opts.ipolicy_bounds_specs, std_ispecs=opts.ipolicy_std_specs, ipolicy_disk_templates=opts.ipolicy_disk_templates, ipolicy_vcpu_ratio=opts.ipolicy_vcpu_ratio, ipolicy_spindle_ratio=opts.ipolicy_spindle_ratio, ) mnh = opts.maintain_node_health uid_pool = opts.uid_pool if uid_pool is not None: uid_pool = uidpool.ParseUidPool(uid_pool) add_uids = opts.add_uids if add_uids is not None: add_uids = uidpool.ParseUidPool(add_uids) remove_uids = opts.remove_uids if remove_uids is not None: remove_uids = uidpool.ParseUidPool(remove_uids) if opts.reserved_lvs is not None: if opts.reserved_lvs == "": opts.reserved_lvs = [] else: opts.reserved_lvs = utils.UnescapeAndSplit(opts.reserved_lvs, sep=",") if opts.master_netmask is not None: try: opts.master_netmask = int(opts.master_netmask) except ValueError: ToStderr("The --master-netmask option expects an int parameter.") return 1 ext_ip_script = opts.use_external_mip_script if opts.disk_state: disk_state = utils.FlatToDict(opts.disk_state) else: disk_state = {} hv_state = dict(opts.hv_state) compression_tools = _GetCompressionTools(opts) op = opcodes.OpClusterSetParams( vg_name=vg_name, 
drbd_helper=drbd_helper, enabled_hypervisors=hvlist, hvparams=hvparams, os_hvp=None, beparams=beparams, nicparams=nicparams, ndparams=ndparams, diskparams=diskparams, ipolicy=ipolicy, candidate_pool_size=opts.candidate_pool_size, max_running_jobs=opts.max_running_jobs, max_tracked_jobs=opts.max_tracked_jobs, maintain_node_health=mnh, modify_etc_hosts=opts.modify_etc_hosts, uid_pool=uid_pool, add_uids=add_uids, remove_uids=remove_uids, default_iallocator=opts.default_iallocator, default_iallocator_params=opts.default_iallocator_params, prealloc_wipe_disks=opts.prealloc_wipe_disks, mac_prefix=opts.mac_prefix, master_netdev=opts.master_netdev, master_netmask=opts.master_netmask, reserved_lvs=opts.reserved_lvs, use_external_mip_script=ext_ip_script, hv_state=hv_state, disk_state=disk_state, enabled_disk_templates=enabled_disk_templates, force=opts.force, file_storage_dir=opts.file_storage_dir, install_image=opts.install_image, instance_communication_network=opts.instance_communication_network, zeroing_image=opts.zeroing_image, shared_file_storage_dir=opts.shared_file_storage_dir, compression_tools=compression_tools, enabled_user_shutdown=opts.enabled_user_shutdown, ) return base.GetResult(None, opts, SubmitOrSend(op, opts)) def QueueOps(opts, args): """Queue operations. @param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the subcommand @rtype: int @return: the desired exit code """ command = args[0] client = GetClient() if command in ("drain", "undrain"): drain_flag = command == "drain" client.SetQueueDrainFlag(drain_flag) elif command == "info": result = client.QueryConfigValues(["drain_flag"]) if result[0]: val = "set" else: val = "unset" ToStdout("The drain flag is %s" % val) else: raise errors.OpPrereqError("Command '%s' is not valid." 
% command, errors.ECODE_INVAL) return 0 def _ShowWatcherPause(until): if until is None or until < time.time(): ToStdout("The watcher is not paused.") else: ToStdout("The watcher is paused until %s.", time.ctime(until)) def WatcherOps(opts, args): """Watcher operations. @param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the subcommand @rtype: int @return: the desired exit code """ command = args[0] client = GetClient() if command == "continue": client.SetWatcherPause(None) ToStdout("The watcher is no longer paused.") elif command == "pause": if len(args) < 2: raise errors.OpPrereqError("Missing pause duration", errors.ECODE_INVAL) result = client.SetWatcherPause(time.time() + ParseTimespec(args[1])) _ShowWatcherPause(result) elif command == "info": result = client.QueryConfigValues(["watcher_pause"]) _ShowWatcherPause(result[0]) else: raise errors.OpPrereqError("Command '%s' is not valid." % command, errors.ECODE_INVAL) return 0 def _OobPower(opts, node_list, power): """Puts the node in the list to desired power state. 
@param opts: The command line options selected by the user @param node_list: The list of nodes to operate on @param power: True if they should be powered on, False otherwise @return: The success of the operation (none failed) """ if power: command = constants.OOB_POWER_ON else: command = constants.OOB_POWER_OFF op = opcodes.OpOobCommand(node_names=node_list, command=command, ignore_status=True, timeout=opts.oob_timeout, power_delay=opts.power_delay) result = SubmitOpCode(op, opts=opts) errs = 0 for node_result in result: (node_tuple, data_tuple) = node_result (_, node_name) = node_tuple (data_status, _) = data_tuple if data_status != constants.RS_NORMAL: assert data_status != constants.RS_UNAVAIL errs += 1 ToStderr("There was a problem changing power for %s, please investigate", node_name) if errs > 0: return False return True def _InstanceStart(opts, inst_list, start, no_remember=False): """Puts the instances in the list to desired state. @param opts: The command line options selected by the user @param inst_list: The list of instances to operate on @param start: True if they should be started, False for shutdown @param no_remember: If the instance state should be remembered @return: The success of the operation (none failed) """ if start: opcls = opcodes.OpInstanceStartup text_submit, text_success, text_failed = ("startup", "started", "starting") else: opcls = compat.partial(opcodes.OpInstanceShutdown, timeout=opts.shutdown_timeout, no_remember=no_remember) text_submit, text_success, text_failed = ("shutdown", "stopped", "stopping") jex = JobExecutor(opts=opts) for inst in inst_list: ToStdout("Submit %s of instance %s", text_submit, inst) op = opcls(instance_name=inst) jex.QueueJob(inst, op) results = jex.GetResults() bad_cnt = len([1 for (success, _) in results if not success]) if bad_cnt == 0: ToStdout("All instances have been %s successfully", text_success) else: ToStderr("There were errors while %s instances:\n" "%d error(s) out of %d instance(s)", 
             text_failed, bad_cnt, len(results))
    return False

  return True


class _RunWhenNodesReachableHelper(object):
  """Helper class to make sharing internal state easier.

  @ivar success: Indicates if all action_cb calls were successful

  """
  def __init__(self, node_list, action_cb, node2ip, port, feedback_fn,
               _ping_fn=netutils.TcpPing, _sleep_fn=time.sleep):
    """Init the object.

    @param node_list: The list of nodes to be reachable
    @param action_cb: Callback called when a new host is reachable
    @type node2ip: dict
    @param node2ip: Node to ip mapping
    @param port: The port to use for the TCP ping
    @param feedback_fn: The function used for feedback
    @param _ping_fn: Function to check reachability (for unittest use only)
    @param _sleep_fn: Function to sleep (for unittest use only)

    """
    self.down = set(node_list)
    self.up = set()
    self.node2ip = node2ip
    self.success = True
    self.action_cb = action_cb
    self.port = port
    self.feedback_fn = feedback_fn
    self._ping_fn = _ping_fn
    self._sleep_fn = _sleep_fn

  def __call__(self):
    """When called we run action_cb.

    @raises utils.RetryAgain: When there are still down nodes

    """
    if not self.action_cb(self.up):
      self.success = False

    if self.down:
      raise utils.RetryAgain()
    else:
      return self.success

  def Wait(self, secs):
    """Checks if a host is up or waits remaining seconds.

    @param secs: The secs remaining

    """
    start = time.time()
    for node in self.down:
      if self._ping_fn(self.node2ip[node], self.port,
                       timeout=_EPO_PING_TIMEOUT,
                       live_port_needed=True):
        self.feedback_fn("Node %s became available" % node)
        self.up.add(node)
        self.down -= self.up
        # If we have a node available there is the possibility to run the
        # action callback successfully, therefore we don't wait and return
        return

    self._sleep_fn(max(0.0, start + secs - time.time()))


def _RunWhenNodesReachable(node_list, action_cb, interval):
  """Run action_cb when nodes become reachable.
  @param node_list: The list of nodes to be reachable
  @param action_cb: Callback called when a new host is reachable
  @param interval: The earliest time to retry

  """
  client = GetClient()
  cluster_info = client.QueryClusterInfo()

  if cluster_info["primary_ip_version"] == constants.IP4_VERSION:
    family = netutils.IP4Address.family
  else:
    family = netutils.IP6Address.family

  node2ip = dict((node, netutils.GetHostname(node, family=family).ip)
                 for node in node_list)

  port = netutils.GetDaemonPort(constants.NODED)
  helper = _RunWhenNodesReachableHelper(node_list, action_cb, node2ip, port,
                                        ToStdout)

  try:
    return utils.Retry(helper, interval, _EPO_REACHABLE_TIMEOUT,
                       wait_fn=helper.Wait)
  except utils.RetryTimeout:
    ToStderr("Time exceeded while waiting for nodes to become reachable"
             " again:\n - %s", " - ".join(helper.down))
    return False


def _MaybeInstanceStartup(opts, inst_map, nodes_online,
                          _instance_start_fn=_InstanceStart):
  """Start the instances conditional based on node_states.

  @param opts: The command line options selected by the user
  @param inst_map: A dict of inst -> nodes mapping
  @param nodes_online: A list of nodes online
  @param _instance_start_fn: Callback to start instances (unittest use only)
  @return: Success of the operation on all instances

  """
  start_inst_list = []

  for (inst, nodes) in inst_map.items():
    if not (nodes - nodes_online):
      # All nodes the instance lives on are back online
      start_inst_list.append(inst)

  for inst in start_inst_list:
    del inst_map[inst]

  if start_inst_list:
    return _instance_start_fn(opts, start_inst_list, True)

  return True


def _EpoOn(opts, full_node_list, node_list, inst_map):
  """Does the actual power on.
  @param opts: The command line options selected by the user
  @param full_node_list: All nodes to operate on (includes nodes not supporting
                         OOB)
  @param node_list: The list of nodes to operate on (all need to support OOB)
  @param inst_map: A dict of inst -> nodes mapping
  @return: The desired exit status

  """
  if node_list and not _OobPower(opts, node_list, True):
    ToStderr("Not all nodes seem to get back up, investigate and start"
             " manually if needed")

  # Wait for the nodes to be back up
  action_cb = compat.partial(_MaybeInstanceStartup, opts, dict(inst_map))

  ToStdout("Waiting until all nodes are available again")

  if not _RunWhenNodesReachable(full_node_list, action_cb, _EPO_PING_INTERVAL):
    ToStderr("Please investigate and start stopped instances manually")
    return constants.EXIT_FAILURE

  return constants.EXIT_SUCCESS


def _EpoOff(opts, node_list, inst_map):
  """Does the actual power off.

  @param opts: The command line options selected by the user
  @param node_list: The list of nodes to operate on (all need to support OOB)
  @param inst_map: A dict of inst -> nodes mapping
  @return: The desired exit status

  """
  if not _InstanceStart(opts, inst_map.keys(), False, no_remember=True):
    ToStderr("Please investigate and stop instances manually before continuing")
    return constants.EXIT_FAILURE

  if not node_list:
    return constants.EXIT_SUCCESS

  if _OobPower(opts, node_list, False):
    return constants.EXIT_SUCCESS
  else:
    return constants.EXIT_FAILURE


def Epo(opts, args, qcl=None, _on_fn=_EpoOn, _off_fn=_EpoOff,
        _confirm_fn=ConfirmOperation,
        _stdout_fn=ToStdout, _stderr_fn=ToStderr):
  """EPO operations.
@param opts: the command line options selected by the user @type args: list @param args: should contain only one element, the subcommand @rtype: int @return: the desired exit code """ if opts.groups and opts.show_all: _stderr_fn("Only one of --groups or --all are allowed") return constants.EXIT_FAILURE elif args and opts.show_all: _stderr_fn("Arguments in combination with --all are not allowed") return constants.EXIT_FAILURE if qcl is None: # Query client qcl = GetClient() if opts.groups: node_query_list = \ itertools.chain(*qcl.QueryGroups(args, ["node_list"], False)) else: node_query_list = args result = qcl.QueryNodes(node_query_list, ["name", "master", "pinst_list", "sinst_list", "powered", "offline"], False) all_nodes = map(compat.fst, result) node_list = [] inst_map = {} for (node, master, pinsts, sinsts, powered, offline) in result: if not offline: for inst in (pinsts + sinsts): if inst in inst_map: if not master: inst_map[inst].add(node) elif master: inst_map[inst] = set() else: inst_map[inst] = set([node]) if master and opts.on: # We ignore the master for turning on the machines, in fact we are # already operating on the master at this point :) continue elif master and not opts.show_all: _stderr_fn("%s is the master node, please do a master-failover to another" " node not affected by the EPO or use --all if you intend to" " shutdown the whole cluster", node) return constants.EXIT_FAILURE elif powered is None: _stdout_fn("Node %s does not support out-of-band handling, it can not be" " handled in a fully automated manner", node) elif powered == opts.on: _stdout_fn("Node %s is already in desired power state, skipping", node) elif not offline or (offline and powered): node_list.append(node) if not (opts.force or _confirm_fn(all_nodes, "nodes", "epo")): return constants.EXIT_FAILURE if opts.on: return _on_fn(opts, all_nodes, node_list, inst_map) else: return _off_fn(opts, node_list, inst_map) def _GetCreateCommand(info): buf = StringIO() buf.write("gnt-cluster 
init") PrintIPolicyCommand(buf, info["ipolicy"], False) buf.write(" ") buf.write(info["name"]) return buf.getvalue() def ShowCreateCommand(opts, args): """Shows the command that can be used to re-create the cluster. Currently it works only for ipolicy specs. """ cl = GetClient() result = cl.QueryClusterInfo() ToStdout(_GetCreateCommand(result)) def _RunCommandAndReport(cmd): """Run a command and report its output, iff it failed. @param cmd: the command to execute @type cmd: list @rtype: bool @return: False, if the execution failed. """ result = utils.RunCmd(cmd) if result.failed: ToStderr("Command %s failed: %s; Output %s" % (cmd, result.fail_reason, result.output)) return False return True def _VerifyCommand(cmd): """Verify that a given command succeeds on all online nodes. As this function is intended to run during upgrades, it is implemented in such a way that it still works, if all Ganeti daemons are down. @param cmd: the command to execute @type cmd: list @rtype: list @return: the list of node names that are online where the command failed. """ command = utils.text.ShellQuoteArgs([str(val) for val in cmd]) nodes = ssconf.SimpleStore().GetOnlineNodeList() master_node = ssconf.SimpleStore().GetMasterNode() cluster_name = ssconf.SimpleStore().GetClusterName() # If master node is in 'nodes', make sure master node is at list end if master_node in nodes: nodes.remove(master_node) nodes.append(master_node) failed = [] srun = ssh.SshRunner(cluster_name=cluster_name) for name in nodes: result = srun.Run(name, constants.SSH_LOGIN_USER, command) if result.exit_code != 0: failed.append(name) return failed def _VerifyVersionInstalled(versionstring): """Verify that the given version of ganeti is installed on all online nodes. Do nothing, if this is the case, otherwise print an appropriate message to stderr. 
  @param versionstring: the version to check for
  @type versionstring: string
  @rtype: bool
  @return: True, if the version is installed on all online nodes

  """
  badnodes = _VerifyCommand(["test", "-d",
                             os.path.join(pathutils.PKGLIBDIR, versionstring)])
  if badnodes:
    ToStderr("Ganeti version %s not installed on nodes %s"
             % (versionstring, ", ".join(badnodes)))
    return False

  return True


def _GetRunning():
  """Determine the number of running jobs.

  @rtype: int
  @return: the number of jobs still running

  """
  cl = GetClient()
  qfilter = qlang.MakeSimpleFilter("status",
                                   frozenset([constants.JOB_STATUS_RUNNING]))
  return len(cl.Query(constants.QR_JOB, [], qfilter).data)


def _SetGanetiVersion(versionstring):
  """Set the active version of ganeti to the given versionstring

  @type versionstring: string
  @rtype: list
  @return: the list of nodes where the version change failed

  """
  failed = []
  if constants.HAS_GNU_LN:
    failed.extend(_VerifyCommand(
        ["ln", "-s", "-f", "-T",
         os.path.join(pathutils.PKGLIBDIR, versionstring),
         os.path.join(pathutils.SYSCONFDIR, "ganeti/lib")]))
    failed.extend(_VerifyCommand(
        ["ln", "-s", "-f", "-T",
         os.path.join(pathutils.SHAREDIR, versionstring),
         os.path.join(pathutils.SYSCONFDIR, "ganeti/share")]))
  else:
    failed.extend(_VerifyCommand(
        ["rm", "-f", os.path.join(pathutils.SYSCONFDIR, "ganeti/lib")]))
    failed.extend(_VerifyCommand(
        ["ln", "-s", "-f", os.path.join(pathutils.PKGLIBDIR, versionstring),
         os.path.join(pathutils.SYSCONFDIR, "ganeti/lib")]))
    failed.extend(_VerifyCommand(
        ["rm", "-f", os.path.join(pathutils.SYSCONFDIR, "ganeti/share")]))
    failed.extend(_VerifyCommand(
        ["ln", "-s", "-f", os.path.join(pathutils.SHAREDIR, versionstring),
         os.path.join(pathutils.SYSCONFDIR, "ganeti/share")]))
  return list(set(failed))


def _ExecuteCommands(fns):
  """Execute a list of functions, in reverse order.

  @type fns: list of functions.
  @param fns: the functions to be executed.
""" for fn in reversed(fns): fn() def _GetConfigVersion(): """Determine the version the configuration file currently has. @rtype: tuple or None @return: (major, minor, revision) if the version can be determined, None otherwise """ config_data = serializer.LoadJson(utils.ReadFile(pathutils.CLUSTER_CONF_FILE)) try: config_version = config_data["version"] except KeyError: return None return utils.SplitVersion(config_version) def _ReadIntentToUpgrade(): """Read the file documenting the intent to upgrade the cluster. @rtype: (string, string) or (None, None) @return: (old version, version to upgrade to), if the file exists, and (None, None) otherwise. """ if not os.path.isfile(pathutils.INTENT_TO_UPGRADE): return (None, None) contentstring = utils.ReadFile(pathutils.INTENT_TO_UPGRADE) contents = utils.UnescapeAndSplit(contentstring) if len(contents) != 3: # file syntactically mal-formed return (None, None) return (contents[0], contents[1]) def _WriteIntentToUpgrade(version): """Write file documenting the intent to upgrade the cluster. @type version: string @param version: the version we intent to upgrade to """ utils.WriteFile(pathutils.INTENT_TO_UPGRADE, data=utils.EscapeAndJoin([constants.RELEASE_VERSION, version, "%d" % os.getpid()])) def _UpgradeBeforeConfigurationChange(versionstring): """ Carry out all the tasks necessary for an upgrade that happen before the configuration file, or Ganeti version, changes. 
@type versionstring: string @param versionstring: the version to upgrade to @rtype: (bool, list) @return: tuple of a bool indicating success and a list of rollback tasks """ rollback = [] if not _VerifyVersionInstalled(versionstring): return (False, rollback) _WriteIntentToUpgrade(versionstring) rollback.append( lambda: utils.RunCmd(["rm", "-f", pathutils.INTENT_TO_UPGRADE])) ToStdout("Draining queue") client = GetClient() client.SetQueueDrainFlag(True) rollback.append(lambda: GetClient().SetQueueDrainFlag(False)) if utils.SimpleRetry(0, _GetRunning, constants.UPGRADE_QUEUE_POLL_INTERVAL, constants.UPGRADE_QUEUE_DRAIN_TIMEOUT): ToStderr("Failed to completely empty the queue.") return (False, rollback) ToStdout("Pausing the watcher for one hour.") rollback.append(lambda: GetClient().SetWatcherPause(None)) GetClient().SetWatcherPause(time.time() + 60 * 60) ToStdout("Stopping daemons on master node.") if not _RunCommandAndReport([pathutils.DAEMON_UTIL, "stop-all"]): return (False, rollback) if not _VerifyVersionInstalled(versionstring): utils.RunCmd([pathutils.DAEMON_UTIL, "start-all"]) return (False, rollback) ToStdout("Stopping daemons everywhere.") rollback.append(lambda: _VerifyCommand([pathutils.DAEMON_UTIL, "start-all"])) badnodes = _VerifyCommand([pathutils.DAEMON_UTIL, "stop-all"]) if badnodes: ToStderr("Failed to stop daemons on %s." % (", ".join(badnodes),)) return (False, rollback) backuptar = os.path.join(pathutils.BACKUP_DIR, "ganeti%d.tar" % time.time()) ToStdout("Backing up configuration as %s" % backuptar) if not _RunCommandAndReport(["mkdir", "-p", pathutils.BACKUP_DIR]): return (False, rollback) # Create the archive in a safe manner, as it contains sensitive # information. 
(_, tmp_name) = tempfile.mkstemp(prefix=backuptar, dir=pathutils.BACKUP_DIR) if not _RunCommandAndReport(["tar", "-cf", tmp_name, "--exclude=queue/archive", pathutils.DATA_DIR]): return (False, rollback) os.rename(tmp_name, backuptar) return (True, rollback) def _VersionSpecificDowngrade(): """ Perform any additional downgrade tasks that are version specific and need to be done just after the configuration downgrade. This function needs to be idempotent, so that it can be redone if the downgrade procedure gets interrupted after changing the configuration. Note that this function has to be reset with every version bump. @return: True upon success """ ToStdout("Performing version-specific downgrade tasks.") return True def _SwitchVersionAndConfig(versionstring, downgrade): """ Switch to the new Ganeti version and change the configuration, in correct order. @type versionstring: string @param versionstring: the version to change to @type downgrade: bool @param downgrade: True, if the configuration should be downgraded @rtype: (bool, list) @return: tuple of a bool indicating success, and a list of additional rollback tasks """ rollback = [] if downgrade: ToStdout("Downgrading configuration") if not _RunCommandAndReport([pathutils.CFGUPGRADE, "--downgrade", "-f"]): return (False, rollback) # Note: version specific downgrades need to be done before switching # binaries, so that we still have the knowledgeable binary if the downgrade # process gets interrupted at this point. if not _VersionSpecificDowngrade(): return (False, rollback) # Configuration change is the point of no return. From then onwards, it is # safer to push through the up/downgrade than to try to roll it back. 
ToStdout("Switching to version %s on all nodes" % versionstring) rollback.append(lambda: _SetGanetiVersion(constants.DIR_VERSION)) badnodes = _SetGanetiVersion(versionstring) if badnodes: ToStderr("Failed to switch to Ganeti version %s on nodes %s" % (versionstring, ", ".join(badnodes))) if not downgrade: return (False, rollback) # Now that we have changed to the new version of Ganeti we should # not communicate over luxi any more, as luxi might have changed in # incompatible ways. Therefore, manually call the corresponding ganeti # commands using their canonical (version independent) path. if not downgrade: ToStdout("Upgrading configuration") if not _RunCommandAndReport([pathutils.CFGUPGRADE, "-f"]): return (False, rollback) return (True, rollback) def _UpgradeAfterConfigurationChange(oldversion): """ Carry out the upgrade actions necessary after switching to the new Ganeti version and updating the configuration. As this part is run at a time where the new version of Ganeti is already running, no communication should happen via luxi, as this is not a stable interface. Also, as the configuration change is the point of no return, all actions are pushed trough, even if some of them fail. @param oldversion: the version the upgrade started from @type oldversion: string @rtype: int @return: the intended return value """ returnvalue = 0 ToStdout("Ensuring directories everywhere.") badnodes = _VerifyCommand([pathutils.ENSURE_DIRS]) if badnodes: ToStderr("Warning: failed to ensure directories on %s." % (", ".join(badnodes))) returnvalue = 1 ToStdout("Starting daemons everywhere.") badnodes = _VerifyCommand([pathutils.DAEMON_UTIL, "start-all"]) if badnodes: ToStderr("Warning: failed to start daemons on %s." 
% (", ".join(badnodes),)) returnvalue = 1 ToStdout("Redistributing the configuration.") if not _RunCommandAndReport(["gnt-cluster", "redist-conf", "--yes-do-it"]): returnvalue = 1 ToStdout("Restarting daemons everywhere.") badnodes = _VerifyCommand([pathutils.DAEMON_UTIL, "stop-all"]) badnodes.extend(_VerifyCommand([pathutils.DAEMON_UTIL, "start-all"])) if badnodes: ToStderr("Warning: failed to start daemons on %s." % (", ".join(list(set(badnodes))),)) returnvalue = 1 ToStdout("Undraining the queue.") if not _RunCommandAndReport(["gnt-cluster", "queue", "undrain"]): returnvalue = 1 _RunCommandAndReport(["rm", "-f", pathutils.INTENT_TO_UPGRADE]) ToStdout("Running post-upgrade hooks") if not _RunCommandAndReport([pathutils.POST_UPGRADE, oldversion]): returnvalue = 1 ToStdout("Unpausing the watcher.") if not _RunCommandAndReport(["gnt-cluster", "watcher", "continue"]): returnvalue = 1 ToStdout("Verifying cluster.") if not _RunCommandAndReport(["gnt-cluster", "verify"]): returnvalue = 1 return returnvalue def UpgradeGanetiCommand(opts, args): """Upgrade a cluster to a new ganeti version. @param opts: the command line options selected by the user @type args: list @param args: should be an empty list @rtype: int @return: the desired exit code """ if ((not opts.resume and opts.to is None) or (opts.resume and opts.to is not None)): ToStderr("Precisely one of the options --to and --resume" " has to be given") return 1 # If we're not told to resume, verify there is no upgrade # in progress. if not opts.resume: oldversion, versionstring = _ReadIntentToUpgrade() if versionstring is not None: # An upgrade is going on; verify whether the target matches if versionstring == opts.to: ToStderr("An upgrade is already in progress. 
Target version matches," " resuming.") opts.resume = True opts.to = None else: ToStderr("An upgrade from %s to %s is in progress; use --resume to" " finish it first" % (oldversion, versionstring)) return 1 oldversion = constants.RELEASE_VERSION if opts.resume: ssconf.CheckMaster(False) oldversion, versionstring = _ReadIntentToUpgrade() if versionstring is None: return 0 version = utils.version.ParseVersion(versionstring) if version is None: return 1 configversion = _GetConfigVersion() if configversion is None: return 1 # If the upgrade we resume was an upgrade between compatible # versions (like 2.10.0 to 2.10.1), the correct configversion # does not guarantee that the config has been updated. # However, in the case of a compatible update with the configuration # not touched, we are running a different dirversion with the same # config version. config_already_modified = \ (utils.IsCorrectConfigVersion(version, configversion) and not (versionstring != constants.DIR_VERSION and configversion == (constants.CONFIG_MAJOR, constants.CONFIG_MINOR, constants.CONFIG_REVISION))) if not config_already_modified: # We have to start from the beginning; however, some daemons might have # already been stopped, so the only way to get into a well-defined state # is by starting all daemons again. 
_VerifyCommand([pathutils.DAEMON_UTIL, "start-all"]) else: versionstring = opts.to config_already_modified = False version = utils.version.ParseVersion(versionstring) if version is None: ToStderr("Could not parse version string %s" % versionstring) return 1 msg = utils.version.UpgradeRange(version) if msg is not None: ToStderr("Cannot upgrade to %s: %s" % (versionstring, msg)) return 1 if not config_already_modified: success, rollback = _UpgradeBeforeConfigurationChange(versionstring) if not success: _ExecuteCommands(rollback) return 1 else: rollback = [] downgrade = utils.version.ShouldCfgdowngrade(version) success, additionalrollback = \ _SwitchVersionAndConfig(versionstring, downgrade) if not success: rollback.extend(additionalrollback) _ExecuteCommands(rollback) return 1 return _UpgradeAfterConfigurationChange(oldversion) commands = { "init": ( InitCluster, [ArgHost(min=1, max=1)], [BACKEND_OPT, CP_SIZE_OPT, ENABLED_HV_OPT, GLOBAL_FILEDIR_OPT, HVLIST_OPT, MAC_PREFIX_OPT, MASTER_NETDEV_OPT, MASTER_NETMASK_OPT, NIC_PARAMS_OPT, NOMODIFY_ETCHOSTS_OPT, NOMODIFY_SSH_SETUP_OPT, SECONDARY_IP_OPT, VG_NAME_OPT, MAINTAIN_NODE_HEALTH_OPT, UIDPOOL_OPT, DRBD_HELPER_OPT, DEFAULT_IALLOCATOR_OPT, DEFAULT_IALLOCATOR_PARAMS_OPT, PRIMARY_IP_VERSION_OPT, PREALLOC_WIPE_DISKS_OPT, NODE_PARAMS_OPT, GLOBAL_SHARED_FILEDIR_OPT, USE_EXTERNAL_MIP_SCRIPT, DISK_PARAMS_OPT, HV_STATE_OPT, DISK_STATE_OPT, ENABLED_DISK_TEMPLATES_OPT, IPOLICY_STD_SPECS_OPT, GLOBAL_GLUSTER_FILEDIR_OPT, INSTALL_IMAGE_OPT, ZEROING_IMAGE_OPT, COMPRESSION_TOOLS_OPT, ENABLED_USER_SHUTDOWN_OPT, ] + INSTANCE_POLICY_OPTS + SPLIT_ISPECS_OPTS, "[opts...] 
<cluster_name>", "Initialises a new cluster configuration"), "destroy": ( DestroyCluster, ARGS_NONE, [YES_DOIT_OPT], "", "Destroy cluster"), "rename": ( RenameCluster, [ArgHost(min=1, max=1)], [FORCE_OPT, DRY_RUN_OPT], "<new_name>", "Renames the cluster"), "redist-conf": ( RedistributeConfig, ARGS_NONE, SUBMIT_OPTS + [DRY_RUN_OPT, PRIORITY_OPT, FORCE_DISTRIBUTION], "", "Forces a push of the configuration file and ssconf files" " to the nodes in the cluster"), "verify": ( VerifyCluster, ARGS_NONE, [VERBOSE_OPT, DEBUG_SIMERR_OPT, ERROR_CODES_OPT, NONPLUS1_OPT, DRY_RUN_OPT, PRIORITY_OPT, NODEGROUP_OPT, IGNORE_ERRORS_OPT], "", "Does a check on the cluster configuration"), "verify-disks": ( VerifyDisks, ARGS_NONE, [PRIORITY_OPT], "", "Does a check on the cluster disk status"), "repair-disk-sizes": ( RepairDiskSizes, ARGS_MANY_INSTANCES, [DRY_RUN_OPT, PRIORITY_OPT], "[instance...]", "Updates mismatches in recorded disk sizes"), "master-failover": ( MasterFailover, ARGS_NONE, [NOVOTING_OPT, FORCE_FAILOVER], "", "Makes the current node the master"), "master-ping": ( MasterPing, ARGS_NONE, [], "", "Checks if the master is alive"), "version": ( ShowClusterVersion, ARGS_NONE, [], "", "Shows the cluster version"), "getmaster": ( ShowClusterMaster, ARGS_NONE, [], "", "Shows the cluster master"), "copyfile": ( ClusterCopyFile, [ArgFile(min=1, max=1)], [NODE_LIST_OPT, USE_REPL_NET_OPT, NODEGROUP_OPT], "[-n node...] <filename>", "Copies a file to all (or only some) nodes"), "command": ( RunClusterCommand, [ArgCommand(min=1)], [NODE_LIST_OPT, NODEGROUP_OPT, SHOW_MACHINE_OPT, FAILURE_ONLY_OPT], "[-n node...] 
<command>", "Runs a command on all (or only some) nodes"), "info": ( ShowClusterConfig, ARGS_NONE, [ROMAN_OPT], "[--roman]", "Show cluster configuration"), "list-tags": ( ListTags, ARGS_NONE, [], "", "List the tags of the cluster"), "add-tags": ( AddTags, [ArgUnknown()], [TAG_SRC_OPT, PRIORITY_OPT] + SUBMIT_OPTS, "tag...", "Add tags to the cluster"), "remove-tags": ( RemoveTags, [ArgUnknown()], [TAG_SRC_OPT, PRIORITY_OPT] + SUBMIT_OPTS, "tag...", "Remove tags from the cluster"), "search-tags": ( SearchTags, [ArgUnknown(min=1, max=1)], [PRIORITY_OPT], "", "Searches the tags on all objects on" " the cluster for a given pattern (regex)"), "queue": ( QueueOps, [ArgChoice(min=1, max=1, choices=["drain", "undrain", "info"])], [], "drain|undrain|info", "Change queue properties"), "watcher": ( WatcherOps, [ArgChoice(min=1, max=1, choices=["pause", "continue", "info"]), ArgSuggest(min=0, max=1, choices=["30m", "1h", "4h"])], [], "{pause <timespec>|continue|info}", "Change watcher properties"), "modify": ( SetClusterParams, ARGS_NONE, [FORCE_OPT, BACKEND_OPT, CP_SIZE_OPT, RQL_OPT, MAX_TRACK_OPT, INSTALL_IMAGE_OPT, INSTANCE_COMMUNICATION_NETWORK_OPT, ENABLED_HV_OPT, HVLIST_OPT, MAC_PREFIX_OPT, MASTER_NETDEV_OPT, MASTER_NETMASK_OPT, NIC_PARAMS_OPT, VG_NAME_OPT, MAINTAIN_NODE_HEALTH_OPT, UIDPOOL_OPT, ADD_UIDS_OPT, REMOVE_UIDS_OPT, DRBD_HELPER_OPT, DEFAULT_IALLOCATOR_OPT, DEFAULT_IALLOCATOR_PARAMS_OPT, RESERVED_LVS_OPT, DRY_RUN_OPT, PRIORITY_OPT, PREALLOC_WIPE_DISKS_OPT, NODE_PARAMS_OPT, USE_EXTERNAL_MIP_SCRIPT, DISK_PARAMS_OPT, HV_STATE_OPT, DISK_STATE_OPT] + SUBMIT_OPTS + [ENABLED_DISK_TEMPLATES_OPT, IPOLICY_STD_SPECS_OPT, MODIFY_ETCHOSTS_OPT, ENABLED_USER_SHUTDOWN_OPT] + INSTANCE_POLICY_OPTS + [GLOBAL_FILEDIR_OPT, GLOBAL_SHARED_FILEDIR_OPT, ZEROING_IMAGE_OPT, COMPRESSION_TOOLS_OPT], "[opts...]", "Alters the parameters of the cluster"), "renew-crypto": ( RenewCrypto, ARGS_NONE, [NEW_CLUSTER_CERT_OPT, NEW_RAPI_CERT_OPT, RAPI_CERT_OPT, NEW_CONFD_HMAC_KEY_OPT, FORCE_OPT, 
NEW_CLUSTER_DOMAIN_SECRET_OPT, CLUSTER_DOMAIN_SECRET_OPT, NEW_SPICE_CERT_OPT, SPICE_CERT_OPT, SPICE_CACERT_OPT, NEW_NODE_CERT_OPT], "[opts...]", "Renews cluster certificates, keys and secrets"), "epo": ( Epo, [ArgUnknown()], [FORCE_OPT, ON_OPT, GROUPS_OPT, ALL_OPT, OOB_TIMEOUT_OPT, SHUTDOWN_TIMEOUT_OPT, POWER_DELAY_OPT], "[opts...] [args]", "Performs an emergency power-off on given args"), "activate-master-ip": ( ActivateMasterIp, ARGS_NONE, [], "", "Activates the master IP"), "deactivate-master-ip": ( DeactivateMasterIp, ARGS_NONE, [CONFIRM_OPT], "", "Deactivates the master IP"), "show-ispecs-cmd": ( ShowCreateCommand, ARGS_NONE, [], "", "Show the command line to re-create the cluster"), "upgrade": ( UpgradeGanetiCommand, ARGS_NONE, [TO_OPT, RESUME_OPT], "", "Upgrade (or downgrade) to a new Ganeti version"), } #: dictionary with aliases for commands aliases = { "masterfailover": "master-failover", "show": "info", } def Main(): return GenericMain(commands, override={"tag_type": constants.TAG_CLUSTER}, aliases=aliases)
python
The controversial reality show Bigg Boss OTT is all set to air from August 8, and the contestants who will be participating in the show have already been sent into quarantine. We also brought you the names of those who will be locked inside, one of whom was Manasvi Vasisht. Now, a reliable source tells SpotboyE.com, "Manasvi who was supposed to get quarantined, last evening was informed two hours before he left his house to get quarantined that his participation has been put on hold due to a creative call." This piece of news definitely came as a shock to us, and we immediately called Manasvi to ask about the development. However, the actor chose not to respond to us. Meanwhile, a source close to him confirmed the news to us, saying, "Yes, Manasvi is a little upset with the last minute decision taken by the creatives. He was all excited to be a part of this exciting show. And he was aware his fans and people were supporting him and excited to be part of Bigg Boss. He is still hoping that the decision changes and he is open to be part of the show even in the future. But till then he is being extremely positive in doing some good work." We are sure his fans will be highly upset with this last-minute change.
english
<reponame>dhhabi/cecs.scheduling package org.csulb.cecs.ui; import java.util.Locale; import org.csulb.cecs.ui.security.SecuredNavigator; import org.csulb.cecs.ui.security.SpringSecurityErrorHandler; import org.springframework.beans.factory.annotation.Autowired; import org.vaadin.spring.VaadinUI; import org.vaadin.spring.events.EventBus; import org.vaadin.spring.navigator.SpringViewProvider; import org.vaadin.spring.security.Security; import com.vaadin.annotations.Theme; import com.vaadin.annotations.Title; import com.vaadin.server.VaadinRequest; import com.vaadin.ui.UI; @VaadinUI @Title("CSULB Class Scheduling") @Theme("tests-valo-facebook") @SuppressWarnings("serial") public class MainUI extends UI { @Autowired SpringViewProvider springViewProvider; @Autowired EventBus eventBus; @Autowired Security security; @Autowired MainLayout mainLayout; @Override protected void init(VaadinRequest request) { //setLocale(new Locale.Builder().setLanguage("sr").setScript("Latn").setRegion("RS").build()); SecuredNavigator securedNavigator = new SecuredNavigator(MainUI.this, mainLayout, springViewProvider, security, eventBus); securedNavigator.addViewChangeListener(mainLayout); setContent(mainLayout); setErrorHandler(new SpringSecurityErrorHandler()); /* * Handling redirections */ // RequestAttributes attrs = RequestContextHolder.getRequestAttributes(); // if (sessionStrategy.getAttribute(attrs, VaadinRedirectObject.REDIRECT_OBJECT_SESSION_ATTRIBUTE) != null) { // VaadinRedirectObject redirectObject = (VaadinRedirectObject) sessionStrategy.getAttribute(attrs, VaadinRedirectObject.REDIRECT_OBJECT_SESSION_ATTRIBUTE); // sessionStrategy.removeAttribute(attrs, VaadinRedirectObject.REDIRECT_OBJECT_SESSION_ATTRIBUTE); // // navigator.navigateTo(redirectObject.getRedirectViewToken()); // // if (redirectObject.getErrorMessage() != null) { // Notification.show("Error", redirectObject.getErrorMessage(), Type.ERROR_MESSAGE); // } // // } } }
java
When an industrial spy steals a Xenomorph egg, former Colonial Marine Zula Hendricks must prevent an alien from killing everyone on an isolated colony planet. Tamar Prather is a spy for Venture, a company in direct competition with the Weyland-Yutani Corporation. When Prather steals a Xenomorph egg from a Weyland-Yutani vessel, she takes it to the Venture proto-colony on Jericho 3. Though unaware of the danger the egg poses, the scientists there realize how important it is to their rivals, and seek to use it to gain an edge over the competition. Zula Hendricks - former Colonial Marine and teammate to Amanda Ripley - learns about the stolen egg and infiltrates Venture as a member of their security team. But Zula is a member of the underground resistance that opposes Weyland-Yutani, and is eager to stay under their radar. When Venture resorts to a human test subject, allowing him to be impregnated by the Alien, Zula and scientist Dan McClaren must stop the resulting Xenomorph before it can escape and kill every human being on Jericho 3.
english
/////////////////////////////////////////////////////////////////////////////// // Name: src/common/textentrycmn.cpp // Purpose: wxTextEntryBase implementation // Author: <NAME> // Created: 2007-09-26 // RCS-ID: $Id: textentrycmn.cpp 61834 2009-09-05 12:39:12Z JMS $ // Copyright: (c) 2007 <NAME> <<EMAIL>> // Licence: wxWindows licence /////////////////////////////////////////////////////////////////////////////// // ============================================================================ // declarations // ============================================================================ // ---------------------------------------------------------------------------- // headers // ---------------------------------------------------------------------------- // for compilers that support precompilation, includes "wx.h". #include "wx/wxprec.h" #ifdef __BORLANDC__ #pragma hdrstop #endif #if wxUSE_TEXTCTRL || wxUSE_COMBOBOX #ifndef WX_PRECOMP #include "wx/window.h" #include "wx/dataobj.h" #endif //WX_PRECOMP #include "wx/textentry.h" #include "wx/clipbrd.h" // ---------------------------------------------------------------------------- // wxTextEntryHintData // ---------------------------------------------------------------------------- class WXDLLIMPEXP_CORE wxTextEntryHintData wxBIND_OR_CONNECT_HACK_ONLY_BASE_CLASS { public: wxTextEntryHintData(wxTextEntryBase *entry, wxWindow *win) : m_entry(entry), m_win(win) { wxBIND_OR_CONNECT_HACK(win, wxEVT_SET_FOCUS, wxFocusEventHandler, wxTextEntryHintData::OnSetFocus, this); wxBIND_OR_CONNECT_HACK(win, wxEVT_KILL_FOCUS, wxFocusEventHandler, wxTextEntryHintData::OnKillFocus, this); // we don't have any hint yet m_showsHint = false; } // default dtor is ok // are we showing the hint right now? 
bool ShowsHint() const { return m_showsHint; } void SetHintString(const wxString& hint) { m_hint = hint; if ( m_showsHint ) { // update it immediately m_entry->ChangeValue(hint); } //else: the new hint will be shown later } const wxString& GetHintString() const { return m_hint; } private: void OnSetFocus(wxFocusEvent& event) { // hide the hint if we were showing it if ( m_showsHint ) { // Clear() would send an event which we don't want, so do it like // this m_entry->ChangeValue(wxString()); m_win->SetForegroundColour(m_colFg); m_showsHint = false; } event.Skip(); } void OnKillFocus(wxFocusEvent& event) { // restore the hint if the user didn't do anything in the control if ( m_entry->IsEmpty() ) { m_entry->ChangeValue(m_hint); m_colFg = m_win->GetForegroundColour(); m_win->SetForegroundColour(*wxLIGHT_GREY); m_showsHint = true; } event.Skip(); } // the text control we're associated with (as its interface and its window) wxTextEntryBase * const m_entry; wxWindow * const m_win; // the original foreground colour of m_win before we changed it wxColour m_colFg; // the hint passed to wxTextEntry::SetHint() wxString m_hint; // true if we're currently showing it, for this we must be empty and not // have focus bool m_showsHint; wxDECLARE_NO_COPY_CLASS(wxTextEntryHintData); }; // ============================================================================ // wxTextEntryBase implementation // ============================================================================ wxTextEntryBase::~wxTextEntryBase() { delete m_hintData; } // ---------------------------------------------------------------------------- // text accessors // ---------------------------------------------------------------------------- wxString wxTextEntryBase::GetValue() const { return m_hintData && m_hintData->ShowsHint() ? 
wxString() : DoGetValue(); } wxString wxTextEntryBase::GetRange(long from, long to) const { wxString sel; wxString value = GetValue(); if ( from < to && (long)value.length() >= to ) { sel = value.substr(from, to - from); } return sel; } // ---------------------------------------------------------------------------- // text operations // ---------------------------------------------------------------------------- void wxTextEntryBase::AppendText(const wxString& text) { SetInsertionPointEnd(); WriteText(text); } void wxTextEntryBase::DoSetValue(const wxString& value, int flags) { EventsSuppressor noeventsIf(this, !(flags & SetValue_SendEvent)); SelectAll(); WriteText(value); SetInsertionPoint(0); } void wxTextEntryBase::Replace(long from, long to, const wxString& value) { { EventsSuppressor noevents(this); Remove(from, to); } SetInsertionPoint(from); WriteText(value); } // ---------------------------------------------------------------------------- // selection // ---------------------------------------------------------------------------- bool wxTextEntryBase::HasSelection() const { long from, to; GetSelection(&from, &to); return from < to; } void wxTextEntryBase::RemoveSelection() { long from, to; GetSelection(& from, & to); if (from != -1 && to != -1) Remove(from, to); } wxString wxTextEntryBase::GetStringSelection() const { long from, to; GetSelection(&from, &to); return GetRange(from, to); } // ---------------------------------------------------------------------------- // clipboard // ---------------------------------------------------------------------------- bool wxTextEntryBase::CanCopy() const { return HasSelection(); } bool wxTextEntryBase::CanCut() const { return CanCopy() && IsEditable(); } bool wxTextEntryBase::CanPaste() const { if ( IsEditable() ) { #if wxUSE_CLIPBOARD // check if there is any text on the clipboard if ( wxTheClipboard->IsSupported(wxDF_TEXT) #if wxUSE_UNICODE || wxTheClipboard->IsSupported(wxDF_UNICODETEXT) #endif // wxUSE_UNICODE ) { 
return true; } #endif // wxUSE_CLIPBOARD } return false; } // ---------------------------------------------------------------------------- // hints support // ---------------------------------------------------------------------------- bool wxTextEntryBase::SetHint(const wxString& hint) { if ( !m_hintData ) m_hintData = new wxTextEntryHintData(this, GetEditableWindow()); m_hintData->SetHintString(hint); return true; } wxString wxTextEntryBase::GetHint() const { return m_hintData ? m_hintData->GetHintString() : wxString(); } // ---------------------------------------------------------------------------- // margins support // ---------------------------------------------------------------------------- bool wxTextEntryBase::DoSetMargins(const wxPoint& WXUNUSED(pt)) { return false; } wxPoint wxTextEntryBase::DoGetMargins() const { return wxPoint(-1, -1); } #endif // wxUSE_TEXTCTRL || wxUSE_COMBOBOX
cpp
import { ICreateCategoriesDTO } from '@modules/categories/dtos/ICreateCategories'; import { AppError } from '@shared/errors/AppError'; import { CategoriesFakerRepository } from '../../../repositories/in-memory/CategoryFakerRepository'; import { CreateCategoryUseCase } from '../CreateCategoryUseCase'; let categoryFakerRepository: CategoriesFakerRepository; let createCategoryUseCase: CreateCategoryUseCase; describe('Create Category', () => { beforeEach(() => { categoryFakerRepository = new CategoriesFakerRepository(); createCategoryUseCase = new CreateCategoryUseCase(categoryFakerRepository); }); it('should be able to create a new category', async () => { const createCategoryDTO: ICreateCategoriesDTO = { name: 'test', }; await createCategoryUseCase.execute(createCategoryDTO); expect(categoryFakerRepository.categories.length).toBe(1); expect(categoryFakerRepository.categories[0]).toHaveProperty('id'); expect(categoryFakerRepository.categories[0]).toHaveProperty('name'); }); it('should throw an error if category name is empty', async () => { // await the rejects assertion so a non-rejecting promise fails the test await expect( createCategoryUseCase.execute({ name: '' }), ).rejects.toBeInstanceOf(AppError); }); });
typescript
Jiraiya is one of the most popular characters in Naruto. He is known for his lascivious behavior and amazing comic timing, making him the most entertaining character in the series. During his lifetime, Jiraiya was kind to people with good intentions and especially to his godson, Naruto. However, his enemies feared him, as he was one of Konoha’s strongest shinobi. Even the likes of Itachi Uchiha chose to flee from Jiraiya rather than go against him. Jiraiya’s death took a heavy toll on every Naruto fan. When Kabuto initiated the Fourth Great Ninja War, he revived every powerful shinobi, but fans were perplexed when they didn't see Jiraiya among the reincarnated shinobi. Edo Tensei, or the Impure World Reincarnation, is one of the most powerful and forbidden techniques in Naruto, one that not even its creator, Tobirama Senju, was capable of mastering. Decades later, Kabuto Yakushi became the only person who managed to perform it with immense proficiency. During the Fourth Great Ninja War, Kabuto, using the Edo Tensei, brought every powerful shinobi back to life, including the greatest former Kages. Every Naruto fan expected to see a reincarnated Jiraiya. However, their wish didn’t see the light of day, as Jiraiya was never reincarnated by Kabuto. Prior to the Fourth Great Ninja War, Kabuto explained to Obito (in Madara Uchiha’s disguise) how Edo Tensei works. To reincarnate a certain individual, the user needs a DNA or blood sample of the person, which is smeared on a special scroll that acts as a medium to activate the technique. Kabuto didn’t have anything that belonged to Jiraiya, since the latter was in the depths of Amegakure's sea after being defeated by Pain. Although retrieving Jiraiya's body could have been the hardest part for Kabuto, he was capable of doing so. However, Obito interrupted the conversation, and it seemed like he didn’t want Jiraiya to be reincarnated. 
There are several theories as to why Jiraiya was not reincarnated, one of which relates to Naruto's character growth. Jiraiya's death opened the way for his godson Naruto to mature and gain a high degree of insight, eventually making him the hero of Konoha and, later, of the entire shinobi world. This was also the intention of Naruto's creator, Masashi Kishimoto, who didn't want Jiraiya to be an obstacle in Naruto's endeavors. Killing Jiraiya was crucial to the plot's development. Masashi Kishimoto also did not want to damage the integrity of Jiraiya's character. Jiraiya was the most important individual in Naruto’s life, who not only taught him powerful techniques and made him learn Senjutsu, but also gave him some of the best advice, which paved the way for his success. Jiraiya played a pivotal role in Naruto’s life, helping him accomplish his long-sought dream of becoming Hokage as well as earning the title of the strongest shinobi in the world. Disclaimer: All external media in this article are the property of their respective owners and Sportskeeda claims no ownership of the same.
english
# Exporting to your project (OUTDATED) There are some headers/libraries in the core software repo you want to test or work with in your own repo. This guide will explain how to export/integrate each part into your project. ## Using the export script In the root directory of the repo, you can find the `export_client.sh` script. This script will automatically compile the repo for PC and extract the relevant header files and libraries into the `EXPORT_CLIENT/` folder. It can be run using: ``` ./export_client.sh ``` Once the export script runs successfully, you may copy `libCLIENT_API.a` and `include/client_api.h` to your project directory. ## Linking to your project In order for the client API to work, you must link the `libCLIENT_API.a` library to your project. ### Manually linking If you are manually compiling using `gcc` or something similar, you just need to pass `-lCLIENT_API` (along with `-L.` so the linker searches the working directory) for the linker to include the library. This will work as long as you've copied the library from `EXPORT_CLIENT` to your working directory, as shown earlier. ### Linking in CMake Linking a library to an executable in CMake can be done with the following line in your `CMakeLists.txt`: ```CMake TARGET_LINK_LIBRARIES(${TARGET_NAME} PUBLIC "${CMAKE_SOURCE_DIR}/libCLIENT_API.a") ``` This must be called after you declare the executable with `add_executable(...)`. Once this is added, cmake will take care of the rest. Like earlier, the library must be found in the working directory. In this case, the absolute path given to CMake in the example leads to the `src` folder. Feel free to set an absolute path of your choosing, but I believe cmake is happiest with absolute paths. ## Including the header Once the required libraries are linked, all that is left to do is add `#include "include/client_api.h"` to your source code. If you have a different directory layout for header files (like all source files and headers in the same folder, for example), feel free to change the path of the header file in `#include`.
markdown
bracteate, thin, gold, disk-shaped pendant peculiar to early Scandinavian civilizations. Bracteates were produced by first carving the design in relief on some resistant material, such as bronze or wood, and then pressing a thin sheet of gold over the carving. These circular bracteates were derived from late Roman and Byzantine coins. Goldsmiths later abandoned the Roman originals for a local style of animal ornament or for designs representing their native deities, such as Thor riding a goat.
english
/* ==UserStyle== @name TDWTF Dark Skin @namespace USO Archive @author andreander @description `Dark skin for the daily WTF. and partially what WTF (didn't find the time to complete yet)` @version 20150318.14.16 @license NO-REDISTRIBUTION @preprocessor uso ==/UserStyle== */ @namespace url(http://www.w3.org/1999/xhtml); @-moz-document domain("thedailywtf.com") { a, h4, .articleContainer .author .fakeLink, .articleContainer .comments .fakeLink, .discourse-link, .topic-map .avatars, .topic-map .links, .topic-map .information, .topic-map .map .number, .topic-map .map i, p > code, li > code, pre > code, .d-header, .extra-info-wrapper .topic-link, .navbar.dailywtf li > a, a.mention { color: #ccc; } .container, body, #articlePage p, #articlePage .articleWrapper h1, pre code, pre .subst, pre .tag .title, pre .lisp .title, pre .clojure .built_in, pre .nginx .title, code[class*="language-"], pre[class*="language-"], .hljs, nav.post-controls button:hover, nav.post-controls .show-replies:hover span.badge-posts, aside.quote .title, .quote aside blockquote, .quote aside .onebox, .quote aside .onebox-result, .quote aside .quote, .quote aside .title { color: #aaa !important; } .quote aside .quote, .quote aside .title { color: #555; } .hljs-title { color: #eaab00; } .keyword, .hljs-keyword { color: #529fd5; } pre .number, pre .date, pre .regexp, pre .literal, pre .smalltalk .symbol, pre .smalltalk .char, pre .go .constant, pre .change, pre .markdown .bullet, pre .markdown .link_url { color: #7bd996; } pre .string, pre .title, pre .constant, pre .parent, pre .tag .value, pre .rules .value, pre .rules .value .number, pre .preprocessor, pre .ruby .symbol, pre .ruby .symbol .string, pre .aggregate, pre .template_tag, pre .django .variable, pre .smalltalk .class, pre .addition, pre .flow, pre .stream, pre .bash .variable, pre .apache .tag, pre .apache .cbracket, pre .tex .command, pre .tex .special, pre .erlang_repl .function_or_atom, pre .markdown .header, .token.property, .token.tag, 
.token.boolean, .token.number, .token.function-name, .token.constant, .token.symbol, .hljs-string { color: #fb8e70; } .side-bar-list > ul > li, div.about, .articleList li, pre code, :not(pre) > code[class*="language-"], pre[class*="language-"], .hljs, nav.post-controls .show-replies:hover { background-color: #222 !important; } #wrapper, .hideNonDesktop, .da-space, blockquote, .comment blockquote, .badge-notification.clicks, .topic-map, nav.post-controls .show-replies, p > code, li > code, pre > code, body code pre, body samp, .d-header, a.mention { background-color: #333; } .articleContainer, body, #articlePage .articleWrapper { background-color: #191919 !important; } .heading, #nav3, #articlePage .author, aside.quote .title, .quote aside .quote, .quote aside .title, .quote aside blockquote, .quote aside .onebox, .quote aside .onebox-result { background-color: #555 !important; } .articleContainer, blockquote, .comment blockquote { border-color: #555; } .articleList li::after { background: linear-gradient(to right, rgba(0, 0, 0, 0) 0%, #222 100%) repeat scroll 0 0 rgba(0, 0, 0, 0) } .token.operator, .token.entity, .token.url, .token.variable { background: none repeat scroll 0 0 rgba(80, 80, 80, 0.5) } .discourse-link { background-color: #2d3868; } #topic-progress h4 { color: #222 !important; } }
css
The Indian space programme turned a full circle when India’s Polar Satellite Launch Vehicle (PSLV-C8) successfully launched AGILE, a satellite of the Italian Space Agency, in April 2007 under a commercial agreement. India has come a long way from using satellites of other nations to demonstrate the application of space technology for societal benefits in the 70’s, using rockets of other space agencies to launch Indian experimental and, later, operational satellites, to the present enviable status of building its own satellites and rockets to launch not only for Indian users but also for other countries. The 350 kg AGILE was precisely injected into the intended 550 km circular orbit, unequivocally demonstrating the maturity in this complex technology. Earlier, in January 2007, there was jubilation for another major landmark. The Space Capsule Recovery Experiment (SRE-1) was launched by PSLV and later successfully recovered from the Bay of Bengal. This marked the beginning of a new era for the Indian space programme: that of not only providing a platform for the scientists to conduct experiments in the micro gravity environment of space and return the samples safely back to earth, but also demonstrating India’s capability in mastering critical technologies like aero-thermodynamics, recovery through deceleration and floatation system, navigation, guidance and control. All these technologies are important for recoverable and reusable launch vehicles as well as for undertaking manned space missions. The Indian space programme started modestly in the 1960’s with the launching of small sounding rockets to investigate the ionosphere over the magnetic equator that passes over Thumba near Thiruvananthapuram. In the past decades, despite being a developing economy with its attendant problems, India has been able to successfully master space technology and, more importantly, use it effectively for deriving benefits for the society at grassroots level.
Today, INSAT and Indian Remote Sensing (IRS) Satellite System form important elements of the national developmental infrastructure. The Polar Satellite Launch Vehicle, PSLV, and Geosynchronous Satellite Launch Vehicle, GSLV, designed and built in the face of several geo-political challenges, have made the space programme self-reliant, indeed a credit to the homegrown engineers and scientists. INSAT System is the largest domestic communication satellite system in the Asia Pacific region with ten satellites in operation carrying a total of 200 transponders for communication and broadcasting services including direct-to-home service besides meteorological instruments for providing meteorological services. Today, more than 55,000 VSATs - both in private and government sectors - are operating through INSAT. This has enabled the expansion of television coverage with more than 40 Doordarshan and 50 private TV channels operating through INSAT. Direct-To-Home television services have become a reality. There have been several innovative applications of INSAT system. EDUSAT, launched in September 2004, is the first thematic satellite dedicated exclusively for educational services. It is providing a wide range of educational delivery modes like one-way TV broadcast, interactive TV, video conferencing, computer conferencing, web-based instructions, etc. More than 10,000 classrooms are connected in the EDUSAT network. Telemedicine is another example. Space-based telemedicine has enabled the population in the remotest parts to access super specialty medical care. Already, there are 230 hospitals connected in the telemedicine network including 190 in remote and rural areas and 40 super specialty hospitals in major cities. Meteorological data from INSAT is used for weather forecasting and specially designed disaster warning receivers have been installed in vulnerable coastal areas for direct transmission of warnings against impending disaster like cyclones.
The major emphasis in the coming years will be to meet the growing demand for transponders by progressively increasing the capacity to about 500 transponders. An Indian Regional Navigational Satellite System (IRNSS), with a constellation of seven satellites, is also being established over the next 6-7 years to provide navigation and timing services over the Indian subcontinent. IRNSS will be an important component of the Indian strategy for establishing an indigenous and independent satellite navigation system. With seven satellites in operation, Indian Remote Sensing satellite system (IRS) is the largest civilian remote sensing satellite constellation in the world, providing imageries in a variety of spatial resolutions and spectral bands. The latest, CARTOSAT-2, launched on January 10, 2007, provides one meter spatial resolution. The data from IRS satellites is used for a variety of applications including groundwater prospect mapping, crop acreage and production estimation, potential fishing zone forecasting based on chlorophyll and sea surface temperature, biodiversity characterisation, detailed impact assessment of watershed development projects, generation of natural resources data/information, etc. In order to reach space-based services directly to the rural population, establishment of Village Resource Centres (VRC) has been recently initiated with the participation of NGOs. VRCs provide a variety of space based products and services such as tele-education; telemedicine; information on natural resources; interactive advisories on agriculture, fisheries, land and water resources management; livestock management; interactive vocational training towards livelihood support; etc. So far, 200 VRCs have been set up.
Space systems also help in disaster management through creation of database for facilitating hazard zonation and damage assessment, monitoring of major natural disasters using satellite and aerial data and strengthening the communication backbone for timely dissemination of information and emergency support. An important addition in the coming years will be the microwave remote sensing satellite, RISAT, which will provide all-weather remote sensing capability important for applications in agriculture and disaster management. India’s Polar Satellite Launch Vehicle (PSLV) and Geosynchronous Satellite Launch Vehicle (GSLV) are now used for launching the remote sensing and communication satellites. So far, PSLV has had ten consecutive successful flights including the one in April 2007 that launched the Italian AGILE. GSLV can launch a 2 to 2.5 tonne satellite into Geo-synchronous Transfer Orbit, GTO (200 km by 36,000 km). The immediate target is to complete the development of GSLV Mk III capable of launching 4 tonne class communication satellites. Technology development and demonstration missions on reusable launch vehicle including space recovery technologies and air breathing propulsion are also envisaged. The success of launching the Space Capsule Recovery Experiment (SRE-1) and its recovery in January 2007 is an important landmark in this direction. Indian space programme encompasses research in atmospheric sciences, planetary and geosciences and theoretical physics. There are ground facilities like Mesosphere-Stratosphere-Troposphere Radar at Tirupati and Udaipur Solar Observatory. A series of sounding rockets are available for atmospheric experiments. Several scientific instruments have been flown on satellites especially to detect celestial X-ray and gamma-ray bursts. India has now embarked on a major mission, Chandrayaan-1. It is an Indian scientific mission to moon planned by 2008.
The objective is high resolution mapping of the moon in visible, near infrared, low energy X-ray and high-energy X-ray regions and to prepare a 3-dimensional atlas of regions of scientific interest. The spacecraft will carry six primary Indian scientific instruments besides two from the National Aeronautics and Space Administration (NASA) of the USA and three from the European Space Agency; another instrument from the Bulgarian Academy of Sciences is also to be included. ASTROSAT is another major initiative. This satellite, to be launched during 2008, will be useful for multi-wavelength studies of a variety of celestial sources and phenomena using a cluster of X-ray astronomy instruments and Ultraviolet imaging telescope. The capabilities created under the Indian space programme are bringing in commercial benefits too. Antrix Corporation Limited was specially created in 1992 under the Department of Space to market space services and hardware in the international market. ANTRIX provides transponders on lease and remote sensing data services as well as launch services. It also provides technical services for launch and early-orbit-phase mission support and in-orbit testing for satellites of other countries. ANTRIX has been profitable throughout, registering an annual growth rate of about 20 percent in the past few years. The overall revenue of ANTRIX was Rs 414 crore during 2005-06, which is expected to cross Rs 500 crore this year. A recent foray has been the contract with a European company for joint development of communication satellites for the international market. ANTRIX’s vision is to further expand its market share in fields such as remote sensing imageries, commercial satellites and infrastructure services in space for broadcasting and other emerging services like mobile communication and positioning systems.
It also envisions strengthening the role of Indian industries and would develop alliances with other major global players to make forays into new markets in the developing part of the world. Space has emerged as the next frontier of humankind. Involvement of human beings in space for building and maintaining space assets will become important in the coming decades and it will be necessary to initiate the activities towards manned missions by developing critical technologies. The immediate objective will be to develop a fully autonomous manned space vehicle, in about 8-10 years, which could be launched by India’s GSLV and can carry a two-member crew to low earth orbit and safely return to earth. The Indian space programme, while meeting the developmental needs of the nation through establishment of space systems in a self reliant manner, is poised to expand further and play an increasing role in the national developmental efforts besides substantially contributing to the exploration of space, the next frontier of mankind.
english
Bollywood actor Salman Khan’s love for bicycles is known to all. The actor who recently launched his own line of e-cycles took his new bicycle for a ride on the streets of Mumbai. The 1:39-minute video shows the actor not pedalling the bike as he was using the electric feature of the newly launched cycle. The actor smoothly takes over the roads amidst tight security. Salman Khan opted for a cool hoodie jacket with black shorts and trainer shoes. The actor recently tweeted a video which shows him balancing his cycle with one hand while the other hand is busy waving at fans. Salman can be seen enjoying each and every moment as he waves at the fans passing by him. On the professional front, Salman will next be seen in Kabir Khan’s ‘Tubelight’, which also stars Sohail Khan and Chinese actress Zhu Zhu in pivotal roles and is slated to release on 23 June 2017.
english
from django.http import HttpResponse import json def bcbpay(request): res = { "ver": 3, "appUISeg": { "title": "通用支付", "value": "0.1", "affCode": "", "referInfo": "进行支付操作", "defaultPayAddr": "", "symbol": "BCB" }, "coinParams": { "note": "备注", "gasLimit": "25000", "calls": [ { "contract": "bcbLVgb3odTfKC9Y9GeFnNWL9wmR4pwWiqwe", "method": "Transfer(types.Address,bn.Number)", "params": [ "bcbL8BzfVfcxtqh9umN3dUhxBYNyEnV7GiSa", "100000000" ] } ] } } print(json.dumps(res)) return HttpResponse(json.dumps(res))
python
{ "argon-theme": "Argon theme", "theme": "Navbar theme", "themes": "Themes", "argon": "Argon design" }
json
# ens-coincodec [![Tag](https://img.shields.io/github/tag/trustwallet/ens-coincodec.svg)](https://github.com/trustwallet/ens-coincodec/releases/) [![License](https://img.shields.io/github/license/trustwallet/ens-coincodec.svg)](LICENSE) [![GoDoc](https://godoc.org/github.com/trustwallet/ens-coincodec?status.svg)](https://godoc.org/github.com/trustwallet/ens-coincodec) ![CI](https://github.com/trustwallet/ens-coincodec/workflows/CI/badge.svg) [![codecov.io](https://img.shields.io/codecov/c/github/trustwallet/ens-coincodec.svg)](https://codecov.io/github/trustwallet/ens-coincodec) [![Go Report](https://goreportcard.com/badge/github.com/trustwallet/ens-coincodec)](https://goreportcard.com/report/github.com/trustwallet/ens-coincodec) Go utility library to provide movement between string and binary representations of multiple different cryptocurrency coin formats, mainly for ENS. Please check out [EIP-2304](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2304.md) for details. ## Table of Contents - [Supported Coins](#coins) - [Install](#install) - [Usage](#usage) - [Contribute](#contribute) - [License](#license) ## Coins <a href="https://bitcoin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/0.png" width="32" /></a> <a href="https://litecoin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/2.png" width="32" /></a> <a href="https://dogecoin.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/3.png" width="32" /></a> <a href="https://dash.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/5.png" width="32" /></a> <a href="https://viacoin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/14.png" width="32" /></a> <a href="https://www.digibyte.io" target="_blank"><img 
src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/20.png" width="32" /></a> <a href="https://monacoin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/22.png" width="32" /></a> <a href="https://ethereum.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/60.png" width="32" /></a> <a href="https://ethereumclassic.github.io" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/61.png" width="32" /></a> <a href="https://cosmos.network/" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/118.png" width="32" /></a> <a href="https://z.cash" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/133.png" width="32" /></a> <a href="https://zcoin.io" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/136.png" width="32" /></a> <a href="https://ripple.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/144.png" width="32" /></a> <a href="https://bitcoincash.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/145.png" width="32" /></a> <a href="https://stellar.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/148.png" width="32" /></a> <a href="https://ravencoin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/175.png" width="32" /></a> <a href="https://poa.network" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/178.png" width="32" /></a> <a href="https://tron.network" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/195.png" width="32" /></a> <a href="https://nimiq.com" target="_blank"><img 
src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/242.png" width="32" /></a> <a href="https://iotex.io" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/304.png" width="32" /></a> <a href="https://zilliqa.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/313.png" width="32" /></a> <a href="https://www.thetatoken.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/500.png" width="32" /></a> <a href="https://binance.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/714.png" width="32" /></a> <a href="https://vechain.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/818.png" width="32" /></a> <a href="https://callisto.network" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/820.png" width="32" /></a> <a href="https://tomochain.network" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/889.png" width="32" /></a> <a href="https://thudercore.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/1001.png" width="32" /></a> <a href="https://ont.io" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/1024.png" width="32" /></a> <a href="https://tezos.com" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/1729.png" width="32" /></a> <a href="https://kin.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/2017.png" width="32" /></a> <a href="https://qtum.org" target="_blank"><img src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/2301.png" width="32" /></a> <a href="https://gochain.io" target="_blank"><img 
src="https://raw.githubusercontent.com/TrustWallet/tokens/master/coins/6060.png" width="32" /></a> ## Install `ens-coincodec` is a standard Go module which can be installed with: ```sh go get github.com/trustwallet/ens-coincodec ``` ## Usage ### Example ```go import ( "fmt" cc "github.com/trustwallet/ens-coincodec" slip44 "github.com/wealdtech/go-slip44" ) func main() { // Ethereum bytes, err := cc.ToBytes("0x314159265dD8dbb310642f98f50C066173C1259b", slip44.ETHER) // hex: 314159265dd8dbb310642f98f50c066173c1259b if err != nil { panic(err) } str, err := cc.ToString(bytes, slip44.ETHER) if err != nil { panic(err) } fmt.Printf("Ethereum: %s\n", str) // Bitcoin bytes, err = cc.ToBytes("bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4", slip44.BITCOIN) // script hash: 0014751e76e8199196d454941c45d1b3a323f1433bd6 if err != nil { panic(err) } str, err = cc.ToString(bytes, slip44.BITCOIN) if err != nil { panic(err) } fmt.Printf("Bitcoin: %s\n", str) // BNB bytes, err = cc.ToBytes("bnb1grpf0955h0ykzq3ar5nmum7y6gdfl6lxfn46h2", slip44.BINANCE) // public key hash: 40c2979694bbc961023d1d27be6fc4d21a9febe6 if err != nil { panic(err) } str, err = cc.ToString(bytes, slip44.BINANCE) if err != nil { panic(err) } fmt.Printf("BNB: %s\n", str) } ``` ## Contribute Contributions welcome. Please check out [the issues](https://github.com/trustwallet/ens-coincodec/issues). If you are adding a new coin type please try to follow the following rules: - use the existing `ethereum.go` and `ethereum_test.go` as templates - ensure you have 100% code coverage with your tests - try not to import large amounts of code; consider copying the relevant code rather than bringing in an entire project to use the address conversion functions ## License [Apache-2.0](LICENSE) © 2019 Weald Technology Trading Ltd / Trust Wallet
markdown
export const muted: string export const infoLabel: string export const buySellSpan: string export const redSpan: string export const greenSpan: string export const currency: string export const highlight: string
typescript
/** Definition for a binary tree node. */ //https://leetcode.com/problems/same-tree/ #include<bits/stdc++.h> using namespace std; struct TreeNode { int val; TreeNode *left; TreeNode *right; TreeNode() : val(0), left(nullptr), right(nullptr) {} TreeNode(int x) : val(x), left(nullptr), right(nullptr) {} TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {} }; class Solution { public: bool isSameTree(TreeNode* p, TreeNode* q) { if(p==NULL && q==NULL) return true; if(p==NULL || q==NULL) return false; return (p->val==q->val&&isSameTree(p->left,q->left)&&isSameTree(p->right,q->right)); } };
cpp
On Sunday, Mumbai Indians produced a below-par batting show against Lucknow Super Giants in IPL 2022. Meanwhile, head coach Mahela Jayawardene has mulled some changes. Record five-time former champion Mumbai Indians surrendered to a 36-run defeat to new team Lucknow Super Giants (LSG) in the 2022 Indian Premier League (IPL) Match 37. Played at the Wankhede Stadium in Mumbai, MI failed to utilise its home advantage, while it was a lacklustre show by the MI batters. Meanwhile, MI head coach Mahela Jayawardene has mulled making some changes to the line-up. MI had been blamed for its shoddy bowling for most of IPL 2022. While its bowlers were on target against LSG, barring skipper-opener KL Rahul's unbeaten 103, its batters goofed up the chasable target of 169. Meanwhile, Jayawardene has asserted that he will talk about the current situation with the rest of the coaches. "Batting has been a concern for us, especially on good wickets where we have batted under par. It is a senior group that understands the conditions and performed in the past. We need to keep pushing, and if we need to make those changes, we will do that," Jayawardene said during the post-match press conference. "We had few changes, but not a lot. We wanted to keep the batting consistent. There are concerns as we have not been consistent whether we are batting first or chasing totals down. Ishan [Kishan] has struggled a bit. We have given him the freedom to play his natural game. I haven't yet spoken to him today, but I will have a conversation with him soon," concluded Jayawardene. However, Jayawardene lauded MI's improved bowling by saying, "We bowled pretty well. I'm happy with the bowling, but we could have been better. Our bowlers have definitely improved the last two games, but we are still not consistent enough. Most bowling units are controlling things this season, but we have not picked many early wickets. So, there is room to improve, and we need to execute better. "
english
{ "rules": { "strict": ["off"], "@typescript-eslint/explicit-function-return-type": ["off"], "@typescript-eslint/no-var-requires": ["off"] } }
json
import { Mocker } from "./core/mocker"; import { MockTemplate } from "./core/mockTemplate"; import { ContextMock, ContextMocker } from "./mocks/context"; import { SurfaceElementMocker } from "./mocks/elements/surfaceElement"; import { TileElementMocker } from "./mocks/elements/tileElement"; import { CarMock, CarMocker } from "./mocks/entities/car"; import { EntityMocker } from "./mocks/entities/entity"; import { GuestMock, GuestMocker } from "./mocks/entities/guest"; import { PeepMock } from "./mocks/entities/peep"; import { StaffMock, StaffMocker } from "./mocks/entities/staff"; import { GameDateMocker } from "./mocks/gameDate"; import { GameMapMock, GameMapMocker } from "./mocks/gameMap"; import { LoadedObjectMocker } from "./mocks/objects/loadedObject"; import { RideObjectMocker } from "./mocks/objects/rideObject"; import { RideObjectVehicleMock } from "./mocks/objects/rideObjectVehicle"; import { ParkMock, ParkMocker } from "./mocks/park"; import { RideMocker } from "./mocks/ride"; import { TileMocker } from "./mocks/tile"; import { UiMock, UiMocker } from "./mocks/ui/ui"; import { ViewportMocker } from "./mocks/ui/viewport"; import { WindowMock, WindowMocker } from "./mocks/ui/window"; /** * Mock is an easy to use utility to help create mocks for well known OpenRCT2 * interfaces. Each mock allows passing in a base template to set specific * values for your unit tests. * * Some general notes: * * Some members will be auto-mocked with basic functionality if not supplied * through the template. * * Some templates have additional storage properties to supply specific internal * objects which are normally only accessible through the interface's methods. * * Members that return interfaces will return default mocks if not supplied * through the template. */ export interface Mock { /** * Allows creating a (partial) mock of the specified type or interface. * * @param source A partial of T containing the mocked methods and values. 
* @returns The specified partial as a fully typed T. */ <T>(source?: Partial<T>): T; /** * Create a mock of an OpenRCT2 car entity. * * Auto-mocks the following members if they are not set on the given template: * * Inherits everything from the {@link Mock.entity|`entity`} mock. * * `type` is set to `car`. * * `travelBy` updates `remainingDistance` and `trackProgress`. * * Various `number` properties map to values on matching object from `context.getObject` * if `rideObject` and `vehicleObject` are present. */ car: MockTemplate<CarMock>; /** * Create a mock of an OpenRCT2 context. * * Auto-mocks the following members if they are not set on the given template: * * `getObject` and `getAllObjects` query the `objects` array. * * `subscribe` and `executeAction` map to the `subscriptions` property. */ context: MockTemplate<ContextMock>; /** * Create a mock of an OpenRCT2 date object. * * Auto-mocks the following members if they are not set on the given template: * * `ticksElapsed`, `monthsElapsed` and `monthProgress` is set to 0. * * `yearsElapsed`, `year` and `month` are calculated from `monthsElapsed`. * * `day` is calculated from `monthProgress` and influenced by `month`. */ date: MockTemplate<GameDate>; /** * Create a mock of an OpenRCT2 entity. * * Auto-mocks the following members if they are not set on the given template: * * `id` is assigned an unique number. */ entity: MockTemplate<Entity>; /** * Create a mock of an OpenRCT2 guest. * * Auto-mocks the following members if they are not set on the given template: * * Inherits everything from the {@link Mock.entity|`entity`} mock. * * `type` and `peepType` are set to `"guest"`. * * `getFlag` and `setFlag` map to the `flags` property. * * `isInPark` is set to `true` to reflect the most common scenario. * * `isLost` checks if `lostCountdown` is lower than 90 or not. */ guest: MockTemplate<GuestMock>; /** * Create a mock of an OpenRCT2 loaded object. 
* * Auto-mocks the following members if they are not set on the given template: * * `index` is assigned an unique number. */ loadedObject: MockTemplate<LoadedObject>; /** * Create a mock of an OpenRCT2 map. * * Auto-mocks the following members if they are not set on the given template: * * `numEntities` maps to the length of the `entities` array. * * `numRides` maps to the length of the `rides` array. * * `getEntity` and `getAllEntities` query the `entities` array. * * `getRide` queries the `rides` array. * * `getTile` maps to `tiles`, or queries it if it is an array. */ map: MockTemplate<GameMapMock>; /** * Create a mock of an OpenRCT2 park. * * Auto-mocks the following members if they are not set on the given template: * * `getFlag` and `setFlag` map to the `flags` property. * * `guests` maps to the total of guests that are in the park from `map.getAllEntities("guest")`. * If `map` is not defined, it returns 0. */ park: MockTemplate<ParkMock>; /** * Create a mock of an OpenRCT2 ride. * * Auto-mocks the following members if they are not set on the given template: * * `id` is assigned an unique number. * * `classification` is set to `"ride"`. * * `object` maps to `context.getObject` with matching `objectId` if `context` * is defined. */ ride: MockTemplate<Ride>; /** * Create a mock of an OpenRCT2 ride object. * * Auto-mocks the following members if they are not set on the given template: * * Inherits everything from the {@link Mock.loadedObject|`loadedObject`} mock. * * `type` is set to `"ride"`. * * `carsPerFlagRide` is set to 255 (standard for tracked rides). * * `vehicles` contains one mocked {@link Mock.rideObjectVehicle|`rideObjectVehicle`}. */ rideObject: MockTemplate<RideObject>; /** * Create a mock of an OpenRCT2 ride object's vehicle. * * Auto-mocks the following members if they are not set on the given template: * * `baseImageId` is assigned an unique number. */ rideObjectVehicle: MockTemplate<RideObjectVehicle>; /** * Create a mock of an OpenRCT2 staff. 
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * Inherits everything from the {@link Mock.entity|`entity`} mock.
 * * `type` and `peepType` are set to `"staff"`.
 * * `getFlag` and `setFlag` map to the `flags` property.
 */
staff: MockTemplate<StaffMock>;

/**
 * Create a mock of an OpenRCT2 surface tile element.
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * `type` is always set to `"surface"`.
 * * `hasOwnership` and `hasConstructionRights` map to `ownership`.
 * * All properties returning a `number` are set to 0.
 */
surface: MockTemplate<SurfaceElement>;

/**
 * Create a mock of an OpenRCT2 tile.
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * `getElement`, `insertElement`, `removeElement` and `numElements` map to the `elements` array.
 * * `elements` contains a single mocked {@link Mock.surface|`surface`} tile element.
 */
tile: MockTemplate<Tile>;

/**
 * Create a mock of an OpenRCT2 tile element.
 */
tileElement: MockTemplate<TileElement>;

/**
 * Create a mock of an OpenRCT2 user interface context.
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * `openWindow`, `closeWindows` and `getWindow` map to the `createdWindows` array.
 * * `mainViewport` is set to a mocked {@link Mock.viewport|`viewport`}.
 */
ui: MockTemplate<UiMock>;

/**
 * Create a mock of an OpenRCT2 viewport widget.
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * `getCentrePosition` returns the center position of `top`, `bottom`, `left` and `right`.
 * * `moveTo` and `scrollTo` update `top`, `bottom`, `left` and `right`.
 * * The default viewport size reflected in the properties is 100x100.
 */
viewport: MockTemplate<Viewport>;

/**
 * Create a mock of an OpenRCT2 window.
 *
 * Auto-mocks the following members if they are not set on the given template:
 * * `findWidget` queries the `widgets` array.
 * * `classificationName` maps to the original string-based classification, if it was specified.
 */
window: MockTemplate<WindowMock>;
}

/**
 * Helper that can create mocks.
 */
const Mock: Mock = Object.assign(Mocker, {
  car: CarMocker,
  context: ContextMocker,
  date: GameDateMocker,
  entity: EntityMocker,
  guest: GuestMocker,
  loadedObject: LoadedObjectMocker,
  map: GameMapMocker,
  park: ParkMocker,
  ride: RideMocker,
  rideObject: RideObjectMocker,
  rideObjectVehicle: RideObjectVehicleMock,
  staff: StaffMocker,
  surface: SurfaceElementMocker,
  tile: TileMocker,
  tileElement: TileElementMocker,
  ui: UiMocker,
  viewport: ViewportMocker,
  window: WindowMocker,
});

export default Mock;
export type { ContextMock, GameMapMock, GuestMock, ParkMock, PeepMock, StaffMock, UiMock, WindowMock };
typescript
// benchmarks/module/math/acot/regular/acot.hpp
//==================================================================================================
/*
  EVE - Expressive Vector Engine
  Copyright : EVE Contributors & Maintainers
  SPDX-License-Identifier: MIT
*/
//==================================================================================================
#include <eve/module/core.hpp>
#include <eve/module/math.hpp>
#include <cmath>

int main()
{
  auto lmin = EVE_VALUE(-5);
  auto lmax = EVE_VALUE(5);

  auto arg0      = eve::bench::random_<EVE_VALUE>(lmin, lmax);
  auto std__acot = [](auto x) { return std::atan(1 / x); };

  eve::bench::experiment xp;
  run<EVE_VALUE>(EVE_NAME(std__acot), xp, std__acot, arg0);
  run<EVE_VALUE>(EVE_NAME(acot), xp, eve::acot, arg0);
  run<EVE_TYPE>(EVE_NAME(acot), xp, eve::acot, arg0);
}
cpp
[
  {
    "merged": "/Users/alexyuan/Desktop/FTC/Programming/FTC_app_master_2017/FtcRobotController/build/intermediates/res/merged/androidTest/debug/menu/ftc_robot_controller.xml",
    "source": "/Users/alexyuan/Desktop/FTC/Programming/FTC_app_master_2017/FtcRobotController/build/intermediates/bundles/debug/res/menu/ftc_robot_controller.xml"
  },
  {
    "merged": "/Users/alexyuan/Desktop/FTC/Programming/FTC_app_master_2017/FtcRobotController/build/intermediates/res/merged/androidTest/debug/menu/main_menu.xml",
    "source": "/Users/alexyuan/.android/build-cache/ea469d498d3993c9b3a064b7141ffad7906eacc1/output/res/menu/main_menu.xml"
  },
  {
    "merged": "/Users/alexyuan/Desktop/FTC/Programming/FTC_app_master_2017/FtcRobotController/build/intermediates/res/merged/androidTest/debug/menu/menu_server.xml",
    "source": "/Users/alexyuan/.android/build-cache/2ed08df1a5700543479e17d6798a0336acbf51be/output/res/menu/menu_server.xml"
  }
]
json
Twitter crypto scammers are now targeting Moonbirds NFTs, a project by entrepreneurs Kevin Rose and Ryan Carson. It’s a collection of 10,000 pixelated owls that has raked in millions in sales within days.

Scammers trying to get some money off successful NFT projects is a recurring theme now. Last month, they were successful in stealing NFTs worth more than $500,000 from Bored Ape Yacht Club (BAYC) and Mutant Ape Yacht Club (MAYC) owners.

After opening up NFT minting on Saturday, the project has clocked over $290 million in sales across Opensea and Looksrare. To take advantage of this hype, scammers have started taking over numerous verified accounts on Twitter and tweeting out malicious links that might lead people to transfer their cryptocurrencies or NFTs in the hopes of scoring a Moonbird.

We have observed at least 10 hacked Twitter accounts across countries, ranging from athletes to politicians, posting scammy links that lead you to a fake Moonbirds website. These names include Levi Sanders (son of Senator Bernie Sanders), New Zealand cricketer Martin Guptill, former RuPaul’s Drag Race stars Dahlia Sin, Pangina Heals, and Lady Camden, golfer Sofie Powell, Indian politicians Malti Maheshwari and Bikha Joshi, and former member of the Chamber of Deputies of Argentina, Horacio Pietragalla Corti. The list goes on.

These accounts, with thousands of followers, are tagging hundreds of Twitter users to try and siphon off some money before the handles are returned to their rightful owners.

Moonbirds is gaining tremendous popularity amongst NFT collectors, as it has become the top project of the last 30-day period, according to analytics site CryptoSlam. Given the high-profile names attached to it, the Moonbirds project might be the target of a lot of scams in the future.

As noted by crypto scam watcher account zachxbt, the project was the target of a Sybil attack at launch. That means one person created a ton of wallets to get on the allowlist that gets to mint NFTs.
This person was successful in winning 50 slots, and could end up with thousands of dollars by selling Moonbirds on a secondary market. Essentially, the project failed to put in enough filters to disallow such bidding.

When it comes to Twitter spam, Moonbirds co-founder Justin Mezzell addressed this by saying that the situation is bad, and that the company is trying to do everything to keep it under control.

Oh the spam is terrible! We’re doing everything we can to contain it. Lots of bad actors doing their play. This wasn’t project criticism (which is of course valid) so much as gatekeeping which projects deserve recognition or success.

Yo! These are scammers who purchase verified accounts, pretend to be @moonbirds_xyz, and try to get folks to connect their wallet and siphon funds. It's the worst. Just block / report if you're seeing it!

This issue is not limited to one project. As The Block noted, another popular NFT project called Azuki has also been targeted using the same playbook.

We’ve asked Twitter if it’s taking any action against scammers, and we’ll update the story if we hear back. Earlier this month, Elon Musk, who’s trying to buy the social network, called crypto spam bots the “single most annoying problem” on the platform. Wonder if he has any suggestions to fight this issue.
english
import React from 'react'; import PropTypes from 'prop-types'; import { UI_PREFIX } from '../../config'; import { propTypesChildren } from '../../utils/types'; const TEXT_CLASS = `${UI_PREFIX}__text`; const TEXT_BIG_CLASS = `${UI_PREFIX}__text--big`; const TEXT_SMALL_CLASS = `${UI_PREFIX}__text--small`; export function Text({ size = 'normal', tag: Tag = 'div', className = '', children, ...rest }) { const sizeClassName = size === 'big' ? TEXT_BIG_CLASS : size === 'small' ? TEXT_SMALL_CLASS : ''; const textClassName = `${TEXT_CLASS} ${sizeClassName} ${className}`.trim(); return ( <Tag className={textClassName} {...rest}> {children} </Tag> ); } Text.propTypes = { size: PropTypes.oneOf(['big', 'small', 'normal']), tag: PropTypes.oneOfType([PropTypes.string, PropTypes.node, PropTypes.func]), className: PropTypes.string, children: propTypesChildren, };
javascript
Boyfriend punched in the face

Seeing the video, it is clear that the punch, as seen on screen, must have caused a lot of injury to the girlfriend's face. This video is becoming increasingly viral on social media. Seeing this, social media users are asking the makers of prank videos to learn a lesson. Many people are also asking about the condition of the woman. People are saying that one should be very careful while doing pranks.
english
Gautam Gambhir, former Indian opener, had many memorable duels with some of the best bowlers in the world during his playing days. Known for his feisty character, Gambhir has had a plethora of verbal battles during his career, especially with Pakistani cricketers, be it all-rounder Shahid Afridi or wicket-keeper Kamran Akmal. Gambhir, however, recently picked a Pakistani bowler against whom he enjoyed some memorable battles. According to the 2011 World Cup winner, former off-spinner Saeed Ajmal was one bowler that he enjoyed his battles with. Hailing Ajmal as the toughest off-spinner he has ever faced, Gambhir reckoned that Ajmal's doosra, and the speed at which he used to bowl, made him an incredibly difficult bowler to counter, especially under lights. "One battle which I thoroughly enjoyed was against Saeed Ajmal. Because Ajmal was probably one of the toughest off-spinners I have faced, especially under lights as we couldn't pick his doosra. And the speed with which he used to bowl at, he was very very lethal," Gautam Gambhir said on 'Cricket Connected'. "The world knows about that. I absolutely loved it and even now," he added.
english
India has qualified for the Commonwealth Games 2022 final in women's T20 cricket. It defeated England by four runs in the semis, assuring itself of at least the silver medal. The 2022 Commonwealth Games are being held in Birmingham between July 28 and August 8. Meanwhile, it is the first time that women's Twenty20 cricket is being played at the event.
english
// Polyhedron/demo/Polyhedron/Plugins/Operations_on_polyhedra/Partition_graph_plugin.cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Three/Polyhedron_demo_plugin_helper.h>
#include "ui_PartitionDialog.h"
#include "Color_map.h"
#include "Scene_surface_mesh_item.h"

#include <CGAL/boost/graph/METIS/partition_graph.h>
#include <CGAL/boost/graph/METIS/partition_dual_graph.h>

#include <QString>
#include <QAction>
#include <QMenu>
#include <QMainWindow>
#include <QApplication>
#include <QElapsedTimer>
#include <QMessageBox>

typedef Scene_surface_mesh_item Scene_facegraph_item;
typedef Scene_facegraph_item::Face_graph FaceGraph;

class PartitionDialog :
    public QDialog,
    public Ui::PartitionDialog
{
  Q_OBJECT
public:
  PartitionDialog(QWidget* = 0)
  {
    setupUi(this);
  }
};

using namespace CGAL::Three;

class Partition_graph_plugin :
    public QObject,
    public Polyhedron_demo_plugin_helper
{
  Q_OBJECT
  Q_INTERFACES(CGAL::Three::Polyhedron_demo_plugin_interface)
  Q_PLUGIN_METADATA(IID "com.geometryfactory.PolyhedronDemo.PluginInterface/1.0")

public:
  QList<QAction*> actions() const {
    return QList<QAction*>() << actionNodalPartition << actionDualPartition;
  }

  bool applicable(QAction*) const {
    return qobject_cast<Scene_facegraph_item*>(scene->item(scene->mainSelectionIndex()));
  }

  void init(QMainWindow* _mw, CGAL::Three::Scene_interface* scene_interface, Messages_interface*)
  {
    mw = _mw;
    this->scene = scene_interface;
    actionNodalPartition = new QAction(tr("Create a Nodal Graph Based Partition"), mw);
    if(actionNodalPartition) {
      connect(actionNodalPartition, SIGNAL(triggered()), this, SLOT(create_nodal_partition()));
    }
    actionDualPartition = new QAction(tr("Create a Dual Graph Based Partition"), mw);
    if(actionDualPartition) {
      connect(actionDualPartition, SIGNAL(triggered()), this, SLOT(create_dual_partition()));
    }
  }

private:
  QAction* actionNodalPartition;
  QAction* actionDualPartition;
  CGAL::Three::Scene_interface* scene;
  enum PARTITION_TYPE { NODAL = 0, DUAL };
void create_partition(PARTITION_TYPE type) { Scene_facegraph_item* item = qobject_cast<Scene_facegraph_item*>(scene->item(scene->mainSelectionIndex())); if(!item) return; if(!(CGAL::is_triangle_mesh(*item->face_graph()) && is_valid(*item->face_graph()))) return; PartitionDialog *dialog = new PartitionDialog(); //opens the dialog if(!dialog->exec()) return; int nparts = dialog->nparts_spinBox->value(); QApplication::setOverrideCursor(Qt::WaitCursor); item->face_graph()->collect_garbage(); item->color_vector().clear(); if(!item->hasPatchIds()){ item->setItemIsMulticolor(true); item->computeItemColorVectorAutomatically(true); } typedef boost::property_map<FaceGraph,CGAL::face_patch_id_t<int> >::type PatchIDMap; FaceGraph* fg =item->face_graph(); boost::property_map<FaceGraph, boost::vertex_index_t>::type vimap = get(boost::vertex_index, *fg); PatchIDMap pidmap = get(CGAL::face_patch_id_t<int>(), *fg); std::map<boost::graph_traits<FaceGraph>::vertex_descriptor, int> vpm; if(type == DUAL) CGAL::METIS::partition_dual_graph(*fg, nparts, CGAL::parameters::vertex_partition_id_map(boost::make_assoc_property_map(vpm)).face_partition_id_map(pidmap).vertex_index_map(vimap) ); else if(type == NODAL) CGAL::METIS::partition_graph(*fg, nparts, CGAL::parameters::vertex_partition_id_map(boost::make_assoc_property_map(vpm)).face_partition_id_map(pidmap).vertex_index_map(vimap) ); item->setProperty("NbPatchIds", nparts); item->invalidateOpenGLBuffers(); QApplication::restoreOverrideCursor(); item->redraw(); } public Q_SLOTS: void create_nodal_partition() { create_partition(NODAL); } void create_dual_partition() { create_partition(DUAL); } }; // end class Polyhedron_demo_affine_transform_plugin #include "Partition_graph_plugin.moc"
cpp
Acer has expanded its Predator gaming laptop portfolio by adding the Helios 300 model to it. Now available in India at a starting price of Rs 1.2 lakh, the laptop packs several gaming-related features.

- Acer Predator Helios 300 comes with a 4th Generation AeroBlade 3D fan for cooling.
- It offers a 240Hz refresh rate for seamless visuals while gaming.
- The laptop comes with a custom utility app that allows several functions to be performed on it remotely.

By Sarthak Dogra: Acer has launched its Predator Helios 300 laptop in India at a starting price of Rs 1,19,999. The gaming laptop comes equipped with octa-core Intel Core i7 mobile gaming processors, the latest Nvidia GeForce RTX 30 series GPUs and a 240Hz refresh rate as highlights.

Acer says that its latest addition to its Predator series of gaming laptops uses a 4th Generation AeroBlade 3D fan. The cooling system aims to bring significant cooling enhancements to maintain peak performance across a range of temperatures. Powered by a 4-cell battery pack, the Helios 300 weighs 2.3 kg and measures 22.9 mm in thickness.

The Predator Helios 300 will be available for purchase on the Acer Exclusive Store, Acer Online Store and Flipkart starting at Rs 1,19,999.

The Predator Helios 300 is powered by an octa-core 10th Gen Intel Core i7 processor. The chipset is combined with an NVIDIA RTX 30 series graphics card and up to 32GB of DDR4 RAM. It sports a 240Hz IPS display with 3ms response time, and 3D simulated surround sound with DTS:X Ultra audio fine-tuning.

For gamers, the Predator Helios 300 carries a 4-zone RGB customized keyboard. The keyboard sports see-through concave-shaped keycaps for WASD, and features two integral keys, Turbo and PredatorSense. A custom utility app also allows the user to monitor the system, overclock, customize RGB preferences and perform more functions. Connectivity on the gaming laptop is taken care of by Killer's E2600 Ethernet Controller, Killer Wi-Fi 6 AX1650i, and Control Center 2.0.
It also houses HDMI 2.0, MiniDP, and USB 3.2 ports with Gen 1 and 2 support. Acer has deployed its custom-engineered cooling technology on the Predator Helios 300. The company says that the new design reduces noise while increasing airflow. Fan speeds also increase based on the heat being generated during use.
english
// yxhbj/rtpms-client: renderer-process/prosdb/institution-data.js
const { ipcRenderer } = require("electron");

var institutionTable = document.querySelector("#institution-data-table");
var patientTable = document.querySelector("#patient-data-table");
var planTable = document.querySelector("#plan-data-table");

// Simulates a simple promise-based request that fetches the Institution data
const getInstitutionData = function(paramse) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", "http://127.0.0.1:3000/prosdb/institution");
    xhr.setRequestHeader("Content-Type", "multipart/form-data");
    xhr.onreadystatechange = function() {
      if (xhr.readyState !== 4) {
        return;
      }
      if ((xhr.status >= 200 && xhr.status < 300) || xhr.status === 304) {
        resolve(xhr.response);
        var result = JSON.parse(xhr.response);
        patientInit(result.data[0].institutionid);
      } else {
        reject(xhr);
      }
    };
    // A simple example of serializing the parameters
    let formData = "";
    for (let key in paramse) {
      if (formData !== "") {
        formData += "&";
      }
      formData += key + "=" + paramse[key];
    }
    xhr.send(formData);
  });
};

function institutionInit() {
  institutionTable.GM(
    "init",
    {
      supportRemind: false,
      gridManagerName: "institution-table",
      isCombSorting: false,
      height: "200px",
      supportCheckbox: true,
      useRowCheck: true,
      useRadio: true,
      supportAjaxPage: false,
      supportSorting: true,
      emptyTemplate: '<div class="gm-emptyTemplate">没有数据</div>',
      ajaxData: function(settings, params) {
        // Pass the parameter info through
        return getInstitutionData(params);
      },
      query: {},
      pageSize: 20,
      columnData: [
        { key: "institutionid", remind: "编号", text: "编号", sorting: "" },
        { key: "name", remind: "机构名称", text: "机构名称", sorting: "" },
        { key: "institutionpath", remind: "文件路径", text: "文件路径", sorting: "" },
        { key: "lastmodifiedtimestamp", remind: "最近修改时间", text: "最近修改", sorting: "" }
      ],
      checkedAfter: function(checkedList, isChecked, rowData) {
        var _query = { institutionid: rowData.institutionid };
        patientTable.GM("setQuery", _query).GM("refreshGrid", function() {
          console.log("选择了分组" + rowData.name);
        });
      }
    },
    cb => console.log(cb)
  );
}

// A simple promise-based request that fetches the Patient data
const getPatientData = function(paramse) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", "http://127.0.0.1:3000/prosdb/patient");
    xhr.setRequestHeader("Content-Type", "multipart/form-data");
    xhr.onreadystatechange = function() {
      if (xhr.readyState !== 4) {
        return;
      }
      if ((xhr.status >= 200 && xhr.status < 300) || xhr.status === 304) {
        resolve(xhr.response);
        var result = JSON.parse(xhr.response);
        //console.log(result)
        if (result.data.length > 0) {
          var _query = { patientid: result.data[0].patientid };
          var planTableMarginTop = document.defaultView.getComputedStyle(
            planTable,
            null
          )["margin-top"];
          if (planTableMarginTop != "0px")
            planTable.GM("setQuery", _query).GM("refreshGrid", function() {});
        }
      } else {
        reject(xhr);
      }
    };
    // A simple example of serializing the parameters
    let formData = "";
    for (let key in paramse) {
      if (formData !== "") {
        formData += "&";
      }
      formData += key + "=" + paramse[key];
    }
    xhr.send(formData);
  });
};

function patientInit(institutionId) {
  patientTable.GM(
    "init",
    {
      supportRemind: false,
      gridManagerName: "patient-table",
      isCombSorting: false,
      height: "600px",
      supportCheckbox: true,
      useRowCheck: false,
      useRadio: true,
      supportAjaxPage: true,
      supportSorting: true,
      emptyTemplate: '<div class="gm-emptyTemplate">没有数据</div>',
      ajaxData: function(settings, params) {
        // Pass the parameter info through
        return getPatientData(params);
      },
      query: { institutionid: institutionId },
      pageSize: 20,
      columnData: [
        { key: "patientid", remind: "TPS内部编号", text: "编号", sorting: "" },
        { key: "lastname", text: "姓氏", sorting: "" },
        { key: "firstname", text: "名字", sorting: "" },
        { key: "middlename", text: "中间名字", sorting: "" },
        { key: "medicalrecordnumber", remind: "medical record number", text: "病历号", sorting: "" },
        { key: "primaryphysician", text: "主管医生" },
        { key: "planLockDate", text: "计划锁定日期" },
        { key: "backupTimeStamp", text: "备份时间", sorting: "" },
        { key: "backupFileName", text: "备份文件名称", sorting: "" },
        { key:
          "lastmodifiedtimestamp", remind: "", text: "最近修改", sorting: "" },
        { key: "patientpath", remind: "", text: "文件夹路径" },
        { key: "comment", remind: "", text: "备注" },
        {
          key: "action",
          //remind: 'the action',
          width: "60px",
          text: "操作",
          template: function(action, rowObject) {
            var actionButton = document.createElement("div");
            actionButton.innerText = "删除";
            actionButton.classList.add("plugin-action");
            actionButton.addEventListener("click", function(e) {
              if (
                rowObject.backupTimeStamp == undefined ||
                rowObject.backupTimeStamp == null
              ) {
                ipcRenderer.send("open-error-dialog");
              } else {
                ipcRenderer.send("delete-warning-dialog", rowObject);
              }
            });
            return actionButton;
          }
        }
      ],
      checkedAfter: function(checkedList, isChecked, rowData) {
        //console.log(checkedList,isChecked,rowData);
        console.log("选择了病人" + rowData.lastname + rowData.firstname);
        var _query = { patientid: rowData.patientid };
        //console.log(planTable.GM)
        var planTableMarginTop = document.defaultView.getComputedStyle(
          planTable,
          null
        )["margin-top"];
        if (planTableMarginTop != "0px") {
          planTable.GM("setQuery", _query).GM("refreshGrid", function() {});
        } else {
          planInit(rowData.patientid);
        }
      },
      cellHover: function(row, rowIndex, colIndex) {
        console.log(row, rowIndex, colIndex);
      }
    },
    callBack => console.log(callBack)
  );
}

// A simple promise-based request that fetches the Plan data
const getPlanData = function(paramse) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", "http://127.0.0.1:3000/prosdb/plan");
    xhr.setRequestHeader("Content-Type", "multipart/form-data");
    xhr.onreadystatechange = function() {
      if (xhr.readyState !== 4) {
        return;
      }
      if ((xhr.status >= 200 && xhr.status < 300) || xhr.status === 304) {
        resolve(
          xhr.response.replace(/\"planislocked\"\:\d{1}/g, m => {
            return m.substr(-1) == 1 ?
              '"planislocked":' + '"Yes"'
              : '"planislocked":' + '"No"';
          })
        );
      } else {
        reject(xhr);
      }
    };
    // A simple example of serializing the parameters
    let formData = "";
    for (let key in paramse) {
      if (formData !== "") {
        formData += "&";
      }
      formData += key + "=" + paramse[key];
    }
    xhr.send(formData);
  });
};

function planInit(patientid) {
  planTable.GM("init", {
    supportRemind: false,
    gridManagerName: "plan-table",
    isCombSorting: false,
    supportCheckbox: false,
    height: "200px",
    supportAjaxPage: false,
    supportSorting: true,
    emptyTemplate: '<div class="gm-emptyTemplate">没有数据</div>',
    ajaxData: function(settings, params) {
      // Pass the parameter info through
      return getPlanData(params);
    },
    query: { patientid: patientid },
    pageSize: 20,
    columnData: [
      { key: "planname", remind: "计划名称", text: "计划名称", sorting: "" },
      { key: "pinnacleversiondescription", remind: "计划软件版本", text: "软件版本", sorting: "" },
      { key: "dosimetrist", remind: "计划制定剂量师", text: "剂量师", sorting: "" },
      { key: "comment", remind: "备注信息", text: "备注", sorting: "" },
      { key: "lastmodifiedtimestamp", remind: "最近修改时间", text: "最近修改", sorting: "" },
      { key: "planLockDate", text: "计划锁定日期" },
      { key: "planpath", remind: "文件夹路径", text: "文件夹路径" }
    ]
  });
}

function init() {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "http://127.0.0.1:3000/system/settings");
  xhr.setRequestHeader("Content-Type", "multipart/form-data");
  xhr.onreadystatechange = function() {
    //console.log(xhr.status)
    if (xhr.readyState !== 4) {
      return;
    }
    if ((xhr.status >= 200 && xhr.status < 300) || xhr.status === 304) {
      var result = JSON.parse(xhr.response);
      //console.log(result)
      var ss = document.getElementById("institutionList");
      if (ss.options.length == 0) {
        // Build and load the institution dropdown list
        var instList = JSON.parse(JSON.stringify(result.institution.institutions));
        var allInst = { id: "All", name: "All institutions" };
        instList.push(allInst);
        instList.forEach(institution => {
          var op = document.createElement("option");
          op.setAttribute("label", institution.name);
          op.setAttribute("value", institution.id);
          ss.appendChild(op);
        });
      }
      document.querySelector("#institution-filter").value =
        result.institution.defaultInstitution;
      patientInit(result.institution.defaultInstitution);
    }
  };
  xhr.send("");
}
init();

function refreshPatientList() {
  var _query = {
    searchString: document
      .querySelector('[name="search-Field"]')
      .value.replace(/[^a-zA-Z0-9\-\_\s\/]/g, ""),
    institutionid: document.querySelector("#institution-filter").value
  };
  patientTable.GM("setQuery", _query).GM("refreshGrid", function() {});
}

// Bind the institution selection event
document
  .querySelector("#institution-filter")
  .addEventListener("change", function() {
    refreshPatientList();
  });

// Bind the search change event
document
  .querySelector('[name="search-Field"]')
  .addEventListener("change", function() {
    refreshPatientList();
  });

// Bind the search input event
document
  .querySelector('[name="search-Field"]')
  .addEventListener("input", function() {
    refreshPatientList();
  });

ipcRenderer.on("delete-dialog-selection", (event, para) => {
  if (para.index === 0) {
    var patients = [];
    patients.push(para.patient);
    const xhr = new XMLHttpRequest();
    xhr.open("POST", "http://127.0.0.1:3000/prosdb/deletePatient");
    xhr.setRequestHeader("Content-Type", "application/json; charset=UTF-8");
    xhr.send(JSON.stringify(patients));
    setTimeout(() => {
      refreshPatientList();
      window.alert(
        "删除已完成,如必要,请退出并重新打开当前已经打开的LaunchPad以刷新。"
      );
    }, 10000);
  }
});
javascript
/* Xania.TemplateJS/wwwroot/admin/grid.css */
.xn-border-box, div.xn-content { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; }
div.xn-list-scrollable { position: relative; overflow: auto; width: 100%; max-height: 100%; height: 100%; border: 1px solid #AAA; }
div.xn-grid { width: 100%; max-height: 100%; height: 100%; -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; padding-top: 40px; background-color: #ffffff; background-color: rgba(255, 255, 255, 0.1); position: relative; }
div.xn-grid div.xn-content > table > thead { display: none; }
div.xn-grid-header { background-color: rgba(155, 155, 155, 0.4); width: 100%; border: 1px solid gray; line-height: 30px; overflow-y: hidden; overflow-x: hidden; margin-top: -40px; }
div.xn-grid-row-header { min-width: 30px; text-align: center; background-color: #DDD; }
div.xn-grid-row-header input { pointer-events: none; }
div.xn-grid-header-cell { display: table-cell; font-weight: bold; }
.xn-list-item { line-height: 30px; height: 30px; }
.xn-grid-row-header { border-top: 1px dotted #AAA; }
div.xn-grid-cell { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; padding: 0px 0px; overflow: hidden; border-left: 1px dotted #AAA; border-top: 1px dotted #AAA; }
div.xn-grid-cell-content { display: block; color: inherit; padding: 0px 10px; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; text-decoration: none; width: 280px; border-left: 1px solid gray; padding: 0px 10px; }
.xn-list-filter { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; width: 100%; max-width: 100%; line-height: 24px; height: 24px; font-size: 12px; padding: 0px 4px; background-color: white; color: gray; }
.open .xn-focus-point { display: none; }
.overlay { display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background-color: #000; filter: alpha(opacity=05); -moz-opacity: 0.05;
-khtml-opacity: 0.05; opacity: 0.05; z-index: 10000; } .xn-selected { color: green; } .xn-files { border: 1px solid black; background-color: white; } .xn-files.xn-files-focused { border: 1px dashed green; } .xn-grid-content-loader { position: absolute; top: 0px; height: 100%; width: 100%; line-height: 100%; text-align: center; } div.xn-grid-cell-content > a { display: block; text-decoration: none; border: none; font-size: smaller; } tr.xn-grid-row-alternate td.xn-grid-cell { background-color: #eee; } tr.xn-grid-row-activated div.xn-grid-row-header { background-color: #427bb2; color: white; } tr.xn-grid-row-activated td.xn-grid-cell { background-color: #6e9ecc; } tr.xn-grid-row-selected div.xn-grid-cell-content > * { color: green; } tr.xn-grid-row-activated div.xn-grid-cell-content > * { color: white; font-weight: bold; text-decoration: none; } tr.xn-grid-row-updated div.xn-grid-cell-content > * { font-weight: bold; } #users .xn-grid-column-0, #users .xn-grid-column-0 .xn-grid-cell-content { min-width: 250px; width: 250px; } #users .xn-grid-column-1, #users .xn-grid-column-1 .xn-grid-cell-content { min-width: 400px; width: 400px; } #users .xn-grid-column-2, #users .xn-grid-column-2 .xn-grid-cell-content { min-width: 180px; width: 180px; } .xn-list-item td { margin: 0; padding: 0; }
css
[{"id_hochschule":216,"id_studien":289,"name":"<NAME>","id_studiengang":289,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Bachelor of Fine Arts","Studientyp":"grundständig","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"<NAME>","id_studiengang":1,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Fine Arts","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Industriedesign","id_studiengang":2,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Bachelor of Arts","Studientyp":"grundständig","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Industriedesign: Interface Design","id_studiengang":3,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Arts","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Industriedesign: Medical Design","id_studiengang":4,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Arts","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Kommunikationsdesign","id_studiengang":5,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Bachelor of Arts","Studientyp":"grundständig","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Kommunikationsdesign","id_studiengang":6,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Arts","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"<NAME>","id_studiengang":7,"Hochschule":"<NAME>","Studienort":"Kiel","Abschluss":"Bachelor of Arts","Studientyp":"grundständig","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"<NAME>","id_studiengang":8,"Hochschule":"Muthesius
Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Education","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Raumstrategien/ Interior Design","id_studiengang":9,"Hochschule":"<NAME>","Studienort":"Kiel","Abschluss":"Bachelor of Arts","Studientyp":"grundständig","Studienform":"Vollzeitstudium"},{"id_hochschule":216,"id_studien":289,"name":"Raumstrategien/ Spatial Strategies","id_studiengang":10,"Hochschule":"Muthesius Kunsthochschule","Studienort":"Kiel","Abschluss":"Master of Arts","Studientyp":"weiterführend","Studienform":"Vollzeitstudium"}]
json
<reponame>daveagill/aws-sam-js-starter
{
  "name": "todo",
  "version": "1.0.0",
  "description": "todo",
  "author": "daveagill",
  "license": "UNLICENSED",
  "private": true,
  "scripts": {
    "test": "jest",
    "build": "webpack",
    "build.watch": "webpack --watch",
    "serve.api": "sam local start-api",
    "package": "sam package --s3-bucket $npm_config_bucketname --output-template-file build/packaged-template.yaml",
    "deploy": "sam deploy --template-file ./build/packaged-template.yaml --stack-name $npm_config_stackname --capabilities CAPABILITY_IAM",
    "undeploy": "aws cloudformation delete-stack --stack-name $npm_config_stackname",
    "outputs": "aws cloudformation describe-stacks --stack-name $npm_config_stackname --query \"Stacks[].Outputs\""
  },
  "dependencies": {
    "axios": "^0.21.1"
  },
  "devDependencies": {
    "eslint": "^6.2.2",
    "eslint-loader": "^2.2.1",
    "jest": "^24.9.0",
    "jest-runner-eslint": "^0.7.4",
    "webpack": "^4.39.3",
    "webpack-cli": "^3.3.7"
  }
}
json
Sarma stated on X, “See for yourself, as elections approach, how vested groups distort a speech with the criminal intention of spreading disinformation and communal disharmony. The long arms of the law will catch up with these elements”. Assam chief minister Himanta Biswa Sarma has ruled out immediate talks with the Paresh Baruah-led militant outfit United Liberation Front of Asom (Independent) faction, though he said that he is in touch with Baruah. "As the political head of the state, I will continue to reach out to him. I usually talk to him every three or six months and I plan to speak to him again soon, but I don't expect him to come for talks immediately," Sarma said during an interaction with journalists here. Baruah, while talking to local news channels on Saturday, said he is neither upset nor upbeat about the pact, and that he is not averse to dialogue. "We are open to discussion keeping in mind the history and principles of the state. There is no problem discussing our core issue. Just discussing our core issue will not mean that it is against the Constitution of India. Discussion of the issue should not scare anyone." As per the MoA, a Research & Development centre, to be called the “NRL-NEIST R&D Centre” or “NRDC”, will be set up on the premises of NEIST, Jorhat, to take up various research activities in emerging energy sectors on a long-term basis through a collaborative effort of NRL and NEIST. An important meeting was held at Urban Development Minister Ashok Singhal's office at Janata Bhawan on Tuesday to outline the implementation of the ‘Ten Cities Development Concept’ (Doh Shaher, Ek Rupayan), undertaken by the Housing and Urban Affairs Department as a special step towards well-planned and rapid development of urban areas. 
Extending the olive branch to the ULFA(I) led by Paresh Baruah, the Chief Minister said violence would impede growth and development, and that in the greater interest of the state and its people, ULFA(I) must come to an amicable settlement once and for all. CM Sarma also said that the State has already witnessed a robust industrial climate, and that in the last couple of years under the present State government, MoUs worth Rs 10 thousand crore have been agreed upon with different business ventures. The total number of electors stood at 2,43,02,460; male electors accounted for 1,22,12,483, female electors for 1,20,89,569, and third-gender electors for 408. There is an overall increase of 1,90,717 electors (0.8%) in the state. Family members of the 13 victims expressed suspicion of torture in custody and cold-blooded execution, and demanded that the government swiftly investigate to bring all those involved in the crime to justice. In a statement, the faction reiterated the stand of the UNLF on the question of talks with India. At long last, the breakaway Pambei group of the UNLF signed a 'Peace Talks Agreement' with the Government of India on November 29 in New Delhi. On Wednesday, four members of the United Liberation Front of Asom-Independent (ULFA-I) surrendered at the Assam Police Headquarters in Guwahati. Identified as Dipok Hatiboruah, Nayan Patmaut, Montu Moran, and Palash Moran, they handed over two hand grenades and pistols. Assam DGP GP Singh revealed that 11 ULFA members have surrendered in 2023, 16 have been arrested, and there are currently 75 active cadres. In October, ULFA-I claimed to have executed two members, accusing them of espionage, in Myanmar, where the group has bases. 
Sarma on Tuesday attended as Chief Guest the passing-out parade of the 52nd batch of the basic course of the North Eastern Police Academy, comprising 377 trainee officers in the rank of Deputy Superintendent and Sub-Inspector, held at the parade ground of the police academy at Umsaw in Ri-Bhoi district of Meghalaya. Manipur has been in the grip of ethnic violence since May 3, following a tribal solidarity rally that turned violent in Churachandpur district, leading to retaliatory attacks between the Chin-Kuki-Zo and Meitei communities in the state. At least four people died and 50 others were injured after 21 coaches of the North East Express train derailed near Raghunathpur station in Buxar district, Bihar. Assam Chief Minister Himanta Biswa Sarma is closely monitoring the situation and is in contact with district authorities and rescue agencies. The Railways Minister, Ashwini Vaishnaw, confirmed that evacuation and rescue operations have been completed and that passengers will be shifted to a special train for their onward journey. An inquiry will be conducted to determine the cause of the derailment. The two – Lachit Hazarika alias Brigadier Salim Asom and Bornali Asom alias Nayanmoni Chetia – were killed on September 20, the ULFA said in a statement sent on Tuesday to media houses in Guwahati. The execution took place in Myanmar, where the outfit has its bases. Salim had joined the outfit in the 1990s, while Bornali joined in 2021. With this increase, the daily wage of workers in the Brahmaputra Valley will be Rs 250, and that in the Barak Valley Rs 228. The new wages for tea garden workers have come into effect from October 1. This strategic fleet expansion is a visionary initiative that seeks to harness the vast potential of Assam's abundant river systems. The introduction of these vessels is set to redefine inland water transport in the state, offering numerous benefits in terms of efficiency, safety, and environmental sustainability. 
Meira Paibi, a collective of Meitei women in India, have demanded the removal of the Assam Rifles paramilitary force from five valley districts in Imphal, following violence which has claimed over 160 lives. The confrontations stemmed from ethnic clashes between Kukis and Meiteis, which broke out in May and have continued for three months. The Chief Minister Himanta Biswa Sarma said, "All cabinet ministers will stay in village areas for 15 days. I will also go to some of the villages and 5000 selected government officers will also stay in the villages for three days."
english
<reponame>carrotflakes/tensorflake
use ndarray::Axis;

use crate::{ndarray_util::map_axes_keep_dim, *};

// TODO: infer time

pub struct Normalization {
    pub axes: Vec<usize>,
    pub gamma: Param,
    pub beta: Param,
    pub eps: f32, // 0.001
}

impl Normalization {
    pub fn new(axes: Vec<usize>, eps: f32, optimizer: impl Optimizer + Clone) -> Self {
        Self {
            axes,
            gamma: Param::new(scalar(1.0), "normalization".into(), optimizer.clone()),
            beta: Param::new(scalar(0.0), "normalization".into(), optimizer.clone()),
            eps,
        }
    }
}

impl Layer for Normalization {
    type Input = Computed;
    type Output = Computed;

    fn call(&self, x: Self::Input, _train: bool) -> Self::Output {
        let mean = map_axes_keep_dim(&*x, &self.axes, |x| x.mean_axis(Axis(1)).unwrap());
        let var = map_axes_keep_dim(&*x, &self.axes, |x| x.var_axis(Axis(1), 1.0));
        (x - Computed::new(mean.into_ndarray()))
            * (self.gamma.get()
                / Computed::new((var + self.eps).map(|x| x.sqrt()).into_ndarray()))
            + self.beta.get()
    }

    fn all_params(&self) -> Vec<Param> {
        vec![self.gamma.clone(), self.beta.clone()]
    }
}

#[test]
fn test() {
    let x = Computed::new(ndarray::array![1.0, 2.0, 3.0, 4.0, 5.0, 6.0].into_ndarray());
    let bn = Normalization::new(vec![0], 0.001, optimizers::Adam::new());
    let y = bn.call(x, false);
    assert!((y.mean().unwrap() - 0.0).abs() < 1e-6);
    assert!((y.var(1.0) - 1.0).abs() < 0.01);

    let x = backprop(
        ndarray::Array::from_shape_vec(
            [2, 3, 4, 5],
            (0..2 * 3 * 4 * 5).map(|x| x as f32).collect(),
        )
        .unwrap()
        .into_ndarray(),
    );
    let bn = Normalization::new(vec![1, 2], 0.001, optimizers::Adam::new());
    let y = bn.call(x.clone(), false);
    dbg!(&*y);
    assert_eq!(x.shape(), y.shape());

    let grads = gradients(&[y], &[x], false);
    dbg!(&*grads[0]);
}
rust
import { AuthService } from './auth.service';
import { TestBed, inject } from '@angular/core/testing';
import { Configuration } from '../configuration';
import { HttpClient, HttpHandler } from '@angular/common/http';
import { BehaviorSubject } from 'rxjs';
import { AppConfigService } from '../app-config.service';
import { AuthProvider } from './auth-provider.service';

describe('AuthService', () => {
  let authProvider: any;
  let authProviderTrySpy: any;
  let accessTokenObs: BehaviorSubject<string>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [
        HttpClient,
        HttpHandler,
        AuthService,
        { provide: Configuration, useValue: new Configuration({ withCredentials: true, accessToken: ''})},
        AppConfigService,
        AuthProvider
      ]
    });
    authProvider = TestBed.get(AuthProvider);
    accessTokenObs = new BehaviorSubject('');
    authProvider.accessToken = accessTokenObs;
    authProviderTrySpy = spyOn(authProvider, 'tryLogin').and.returnValue(Promise.resolve('eybToken'));
  });

  it('should be created', inject([AuthService], (service: AuthService) => {
    expect(service).toBeTruthy();
  }));

  // it('should set user values given a valid user', (done) => {
  //   inject([AuthService], (service: AuthService) => {
  //     spyOn(service, 'parseExpiry').and.returnValue(500);
  //     service.currentUser.subscribe((user) => {
  //       if (typeof user.name !== 'undefined') {
  //         expect(user.access_token).toBe('eybToken');
  //         expect(user.user_roles).toEqual(['Contributor']);
  //         expect(user.name).toBe('Test');
  //         expect(user.expiry).toBe(500 * 1000);
  //         done();
  //       }
  //     });
  //     const spy = spyOn(authProvider, 'getUser').and.returnValue({'idToken': {'roles': ['Contributor']}, 'displayableId': 'Test'});
  //     service.setUser('eybToken');
  //     expect(spy).toHaveBeenCalled();
  //   })();
  // });

  // it('should set user values given a valid user', (done) => {
  //   inject([AuthService], (service: AuthService) => {
  //     service.currentUser.subscribe((user) => {
  //       if (typeof user.name !== 'undefined') {
  //         expect(user.access_token).toBe('eybToken');
  //         expect(user.user_roles).toEqual(['Contributor']);
  //         expect(user.name).toBe('Test');
  //         done();
  //       }
  //     });
  //     const spy = spyOn(authProvider, 'getUser').and.returnValue({'idToken': {'roles': ['Contributor']}, 'displayableId': 'Test'});
  //     service.setUser('eybToken');
  //     expect(spy).toHaveBeenCalled();
  //   })();
  // });

  // it('should pass role match given user has role', (done) => {
  //   inject([AuthService], (service: AuthService) => {
  //     service.currentUser.subscribe((user) => {
  //       if (typeof user.name !== 'undefined') {
  //         const rvl = service.roleMatch(['Contributor']);
  //         expect(rvl).toBeTruthy();
  //         done();
  //       }
  //     });
  //     spyOn(authProvider, 'getUser').and.returnValue({'idToken': {'roles': ['Contributor']}, 'displayableId': 'Test'});
  //     service.setUser('eybToken');
  //   })();
  // });

  it('should fail role match given user does not have role',
    inject([AuthService], (service: AuthService) => {
      spyOn(authProvider, 'getUser').and.returnValue({'idToken': {'roles': []}, 'displayableId': 'Test'});
      service.setUser('eybToken!');
      const rvl = service.roleMatch(['Contributor']);
      expect(rvl).toBeFalsy();
    }
  ));

  it('should fail role match given user is not logged in',
    inject([AuthService], (service: AuthService) => {
      const rvl = service.roleMatch(['Contributor']);
      expect(rvl).toBeFalsy();
    }
  ));
});
typescript
{"artist_id":"AROWH4C1187B9B11CB","artist_latitude":null,"artist_location":"Newnan, GA","artist_longitude":null,"artist_name":"<NAME>","duration":168.80281,"num_songs":1,"song_id":"SOFXDFB12AB0188F32","title":"Hard Hat And A Hammer","year":2010}
json
package controllers

import (
	"github.com/astaxie/beego"
	"github.com/loovien/webcron/app/libs"
	"github.com/loovien/webcron/app/models"
	"strconv"
	"strings"
)

type GroupController struct {
	BaseController
}

func (this *GroupController) List() {
	page, _ := this.GetInt("page")
	if page < 1 {
		page = 1
	}
	list, count := models.TaskGroupGetList(page, this.pageSize)

	this.Data["pageTitle"] = "分组列表"
	this.Data["list"] = list
	this.Data["pageBar"] = libs.NewPager(page, int(count), this.pageSize, beego.URLFor("GroupController.List"), true).ToString()
	this.display()
}

func (this *GroupController) Add() {
	if this.isPost() {
		group := new(models.TaskGroup)
		group.GroupName = strings.TrimSpace(this.GetString("group_name"))
		group.UserId = this.userId
		group.Description = strings.TrimSpace(this.GetString("description"))
		_, err := models.TaskGroupAdd(group)
		if err != nil {
			this.ajaxMsg(err.Error(), MSG_ERR)
		}
		this.ajaxMsg("", MSG_OK)
	}

	this.Data["pageTitle"] = "添加分组"
	this.display()
}

func (this *GroupController) Edit() {
	id, _ := this.GetInt("id")
	group, err := models.TaskGroupGetById(id)
	if err != nil {
		this.showMsg(err.Error())
	}
	if this.isPost() {
		group.GroupName = strings.TrimSpace(this.GetString("group_name"))
		group.Description = strings.TrimSpace(this.GetString("description"))
		err := group.Update()
		if err != nil {
			this.ajaxMsg(err.Error(), MSG_ERR)
		}
		this.ajaxMsg("", MSG_OK)
	}

	this.Data["pageTitle"] = "编辑分组"
	this.Data["group"] = group
	this.display()
}

func (this *GroupController) Batch() {
	action := this.GetString("action")
	ids := this.GetStrings("ids")
	if len(ids) < 1 {
		this.ajaxMsg("请选择要操作的项目", MSG_ERR)
	}
	for _, v := range ids {
		id, _ := strconv.Atoi(v)
		if id < 1 {
			continue
		}
		switch action {
		case "delete":
			models.TaskGroupDelById(id)
			models.TaskResetGroupId(id)
		}
	}
	this.ajaxMsg("", MSG_OK)
}
go
[{"successCallback":[],"javaMethodName":"findDiskTypeByCloudRegionInfo","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"Find Disk Type By Cloud Region Info","url":"/sysCommon/diskType/{cloudRegionInfoId}","urlParam":[{"dataType":"String","name":"{cloudRegionInfoId}","isArray":false,"sample":"REQUIRED ","subParam":[]}]},{"successCallback":[],"javaMethodName":"findCloudProvider","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"Find Cloud Provider","url":"/sysCommon/getCloudregionInfo","urlParam":[]},{"successCallback":[],"javaMethodName":"listCostingType","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"List Costing Type","url":"/sysCommon/getCostingType","urlParam":[]},{"successCallback":[],"javaMethodName":"listosType","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"Listos Type","url":"/sysCommon/getOsType","urlParam":[]},{"successCallback":[],"javaMethodName":"listSlbSpec","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"List Slb Spec","url":"/sysCommon/getSlbSpec","urlParam":[]},{"successCallback":[],"javaMethodName":"getserviceType","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"Getservice Type","url":"/sysCommon/serviceTypes","urlParam":[]},{"successCallback":[],"javaMethodName":"getInstancedisklist","method":"GET","javaClass":"com.infinities.cloudfusion.admin.SysCommonRest","bodyParam":[],"title":"Get Instancedisklist","url":"/sysCommon/volumeSize/{cloudRegionInfo}","urlParam":[{"dataType":"String","name":"{cloudRegionInfo}","isArray":false,"sample":"REQUIRED ","subParam":[]}]}]
json
<filename>docs/t-sql/statements/get-transmission-status-transact-sql.md<gh_stars>0
---
title: GET_TRANSMISSION_STATUS (Transact-SQL) | Microsoft Docs
ms.custom: 
ms.date: 07/26/2017
ms.prod: sql-non-specified
ms.prod_service: sql-database
ms.service: 
ms.component: t-sql|statements
ms.reviewer: 
ms.suite: sql
ms.technology: 
- database-engine
ms.tgt_pltfrm: 
ms.topic: language-reference
f1_keywords: 
- STATUS_TSQL
- TRANSMISSION
- TRANSMISSION_TSQL
- GET_TRANSMISSION_STATUS
- STATUS
- GET_TRANSMISSION_STATUS_TSQL
dev_langs: 
- TSQL
helpviewer_keywords: 
- conversations [Service Broker], transmission status
- Service Broker errors, transmission status
- transmission status information
- status information [SQL Server], conversations
- GET_TRANSMISSION_STATUS statement
ms.assetid: 621805d5-49ed-4764-b3cb-2ae4a3bf797e
caps.latest.revision: 
author: edmacauley
ms.author: edmaca
manager: craigg
ms.workload: Inactive
ms.openlocfilehash: 5ef506faa6df757ac3a1a89af5b1d1b0715e33fd
ms.sourcegitcommit: 45e4efb7aa828578fe9eb7743a1a3526da719555
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/21/2017
---
# <a name="gettransmissionstatus-transact-sql"></a>GET_TRANSMISSION_STATUS (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]

Returns the status of the last transmission for one side of a conversation.

![Topic link icon](../../database-engine/configure-windows/media/topic-link.gif "Topic link icon") [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)

## <a name="syntax"></a>Syntax

```
GET_TRANSMISSION_STATUS ( conversation_handle )
```

## <a name="arguments"></a>Arguments

*conversation_handle*
Is the conversation handle for the conversation. This parameter is of type **uniqueidentifier**.

## <a name="return-types"></a>Return types

**nchar**

## <a name="remarks"></a>Remarks

Returns a character string describing the status of the last transmission attempt for the specified conversation. Returns an empty string if the last transmission attempt succeeded, if no transmission attempt has been made, or if *conversation_handle* does not exist.

The information returned by this function is the same as that shown in the last_transmission_error column of the sys.transmission_queue management view. However, this function can be used to find the transmission status for conversations that do not currently have messages in the transmission queue.

> [!NOTE]
> GET_TRANSMISSION_STATUS does not provide information for messages that do not have a conversation endpoint in the current instance. That is, no information is available for messages to be forwarded.

## <a name="examples"></a>Examples

The following example reports the transmission status for the conversation with the conversation handle `58ef1d2d-c405-42eb-a762-23ff320bddf0`.

```
SELECT Status =
   GET_TRANSMISSION_STATUS('58ef1d2d-c405-42eb-a762-23ff320bddf0') ;
```

Sample result set, edited for line length:

```
Status
-------------------------------
The Service Broker protocol transport is disabled or not configured.
```

In this case, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] is not configured to allow [!INCLUDE[ssSB](../../includes/sssb-md.md)] to communicate over the network.

## <a name="see-also"></a>See Also

[sys.conversation_endpoints &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-conversation-endpoints-transact-sql.md)
[sys.transmission_queue &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-transmission-queue-transact-sql.md)
markdown
<reponame>JetBrains-Research/pubtrends-nature-reviews { "b8": { "title": "Integration of growth and patterning during vascular tissue formation in Arabidopsis", "selected": true, "reason": "In this study, combined experimental and computational analyses indicate that auxin-dependent cytokinin biosynthesis is crucial for growth and patterning of the embryonic vascular tissue" }, "b35": { "title": "Cell signalling by microRNA165/6 directs gene dose-dependent root cell fate", "selected": true, "reason": "This work reveals how mi RNAs control the specification of the different xylem cell types by regulating HD" }, "b59": { "title": "Molecular genetic framework for protophloem formation", "selected": true, "reason": "The authors demonstrate the role of antagonistic regulatory pathways in controlling early protophloem development" }, "b69": { "title": "Contribution of NAC Transcription Factors to Plant Adaptation to Land", "selected": true, "reason": "This paper demonstrates that VND transcription factors that mediate xylem differentiation in vascular plants control differentiation of water-conducting cells in a moss" }, "b76": { "title": "Arabidopsis NAC45/86 direct sieve element morphogenesis culminating in enucleation", "selected": true, "reason": "The authors identify nucleases that mediate phloem cell differentiation, as well as their transcriptional regulators" } }
json
<gh_stars>0
/**
 * Created by zhangbo21 on 14-9-2.
 */

/*
 * getKfContent : replaces each image's src from base64 data with the uploaded file name
 * param : callback -- callback function; its argument is the content after replacement
 * return : void
 */
UE.Editor.prototype.getKfContent = function(callback){
    var me = this;
    var actionUrl = me.getActionUrl(me.getOpt('scrawlActionName')),
        params = UE.utils.serializeParam(me.queryCommandValue('serverparam')) || '',
        url = UE.utils.formatUrl(actionUrl + (actionUrl.indexOf('?') == -1 ? '?':'&') + params);

    // find all base64-encoded images
    var count = 0;
    var imgs = me.body.getElementsByTagName('img');
    var base64Imgs = [];
    UE.utils.each(imgs, function(item){
        var imgType = item.getAttribute('src').match(/^[^;]+/)[0];
        if (imgType === 'data:image/png') {
            base64Imgs.push(item);
        }
    });

    if (base64Imgs.length == 0){
        execCallback();
    } else {
        UE.utils.each(base64Imgs, function(item){
            var opt = {};
            opt[me.getOpt('scrawlFieldName')] = item.getAttribute('src').replace(/^[^,]+,/, '');
            opt.onsuccess = function(xhr){
                var json = UE.utils.str2json(xhr.responseText),
                    url = me.options.scrawlUrlPrefix + json.url;
                item.setAttribute('src', url);
                item.setAttribute('_src', url);
                count++;
                execCallback();
            };
            opt.onerror = function(err){
                console.error(err);
                count++;
                execCallback();
            };
            UE.ajax.request(url, opt);
        });
    }

    function execCallback(){
        if (count >= base64Imgs.length) {
            me.sync();
            callback(me.getContent());
        }
    }
};
javascript
version https://git-lfs.github.com/spec/v1 oid sha256:66c3088a6a7c22c53732dcc0f524cc5e610d57edb095dd94fed54a7816f8e88e size 579
json
{ "url": "https://www.nasdaq.com/article/prospect-of-nodeal-brexit-is-damaging-business-confidence-says-cbi-business-lobby-20190905-00102", "title": "Prospect of no-deal Brexit is damaging business confidence, says CBI business lobby - Nasdaq.com", "text": [ "LONDON, Sept 5 (Reuters) - British business is diverting billions of pounds from productive investment to prepare for the prospect of a possible no-deal Brexit, according to the business lobby group the Confederation of British Industry.", "The group said while most businesses will welcome opposition parties attempt to pass legislation to delay Britain's departure from the European Union rather than leave without a deal, firms remain concerned about the ongoing political uncertainty.", "\"Until a deal is agreed, companies will continue to divert billions of pounds from productive investment to no deal preparations, and international investors will continue to question if the UK is a stable, open place to do business,\" CBI Director General <NAME> said on Thursday." ], "published_datetime": "2019-09-05 03:22:00" }
json
<gh_stars>1-10 Index,Facility_Name,ODRSF_facility_type,Provider,Street_No,Street_Name,Postal_Code,City,Prov_Terr 65369,Walking_Routes,trail,surrey,..,..,..,..,bc 65370,Walking_Routes,trail,surrey,..,..,..,..,bc 65371,Walking_Routes,trail,surrey,..,..,..,..,bc 65372,Walking_Routes,trail,surrey,..,..,..,..,bc 65373,Walking_Routes,trail,surrey,..,..,..,..,bc 65374,Walking_Routes,trail,surrey,..,..,..,..,bc 65375,Walking_Routes,trail,surrey,..,..,..,..,bc 65376,Walking_Routes,trail,surrey,..,..,..,..,bc 65377,Walking_Routes,trail,surrey,..,..,..,..,bc 65378,Walking_Routes,trail,surrey,..,..,..,..,bc 65379,Walking_Routes,trail,surrey,..,..,..,..,bc 65380,Walking_Routes,trail,surrey,..,..,..,..,bc 65381,Walking_Routes,trail,surrey,..,..,..,..,bc 65382,Walking_Routes,trail,surrey,..,..,..,..,bc 65383,Walking_Routes,trail,surrey,..,..,..,..,bc 65384,Walking_Routes,trail,surrey,..,..,..,..,bc 65385,Walking_Routes,trail,surrey,..,..,..,..,bc 65386,Walking_Routes,trail,surrey,..,..,..,..,bc 65387,Walking_Routes,trail,surrey,..,..,..,..,bc 65388,Walking_Routes,trail,surrey,..,..,..,..,bc 65389,Walking_Routes,trail,surrey,..,..,..,..,bc 65390,Walking_Routes,trail,surrey,..,..,..,..,bc 65391,Walking_Routes,trail,surrey,..,..,..,..,bc 65392,Walking_Routes,trail,surrey,..,..,..,..,bc 65393,Walking_Routes,trail,surrey,..,..,..,..,bc 65394,Walking_Routes,trail,surrey,..,..,..,..,bc 65395,Walking_Routes,trail,surrey,..,..,..,..,bc 65396,Walking_Routes,trail,surrey,..,..,..,..,bc 65397,Walking_Routes,trail,surrey,..,..,..,..,bc 65398,Walking_Routes,trail,surrey,..,..,..,..,bc 65399,Walking_Routes,trail,surrey,..,..,..,..,bc 65400,Walking_Routes,trail,surrey,..,..,..,..,bc 65401,Walking_Routes,trail,surrey,..,..,..,..,bc 65402,Walking_Routes,trail,surrey,..,..,..,..,bc 65403,Walking_Routes,trail,surrey,..,..,..,..,bc 65404,Walking_Routes,trail,surrey,..,..,..,..,bc 65405,Walking_Routes,trail,surrey,..,..,..,..,bc 65406,Walking_Routes,trail,surrey,..,..,..,..,bc 
65407,Walking_Routes,trail,surrey,..,..,..,..,bc 65408,Walking_Routes,trail,surrey,..,..,..,..,bc 65409,Walking_Routes,trail,surrey,..,..,..,..,bc 65410,Walking_Routes,trail,surrey,..,..,..,..,bc 65411,Walking_Routes,trail,surrey,..,..,..,..,bc 65412,Walking_Routes,trail,surrey,..,..,..,..,bc 65413,Walking_Routes,trail,surrey,..,..,..,..,bc 65414,Walking_Routes,trail,surrey,..,..,..,..,bc 65415,Walking_Routes,trail,surrey,..,..,..,..,bc 65416,Walking_Routes,trail,surrey,..,..,..,..,bc 65417,Walking_Routes,trail,surrey,..,..,..,..,bc
json
Facebook's Mark Zuckerberg has posted a 5,500-word missive on the social network, in which he discusses many topics – including how the company aims to tackle the promotion of terrorist activities and so-called "fake news" using AI and algorithms. The size of the global Facebook community, with more than a billion people posting several billion messages and posts each day, has made it impossible for individuals to effectively police the network in its entirety. "The complexity of the issues we've seen has outstripped our existing processes for governing the community," said Zuckerberg. "We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening." However, Zuckerberg admits that these systems will take time to perfect. "This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content," he continued. "Right now, we're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda." It's an approach that could help turn the tide against unreliable, "fake" news sources too, though Zuckerberg also notes that the nuances a human can discern between tasteful and trustworthy and distasteful and untrustworthy content will initially be difficult for an AI to comprehend. "It's worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more," he said. "At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years." So, despite the long-term goal, the crux of Facebook's defences will remain, for the time being at least, within community moderation. "Where is your line on nudity? On violence? On graphic content? On profanity? 
What you decide will be your personal settings," Zuckerberg said. "For those who don't make a decision, the default will be whatever the majority of people in your region selected, like a referendum." Gerald is Editor-in-Chief of iMore.com. Previously he was the Executive Editor for TechRadar, taking care of the site's home cinema, gaming, smart home, entertainment and audio output. He loves gaming, but don't expect him to play with you unless your console is hooked up to a 4K HDR screen and a 7.1 surround system. Before TechRadar, Gerald was Editor of Gizmodo UK. He is also the author of 'Get Technology: Upgrade Your Future', published by Aurum Press.
english
<gh_stars>1-10
fn main() {
    yew::start_app::<pub_sub::Model>();
}
rust
<gh_stars>0 { "name": "<NAME>", "tagline": "", "body": "### About\r\nAs the Mayor’s Chief Innovation Officer for San Francisco and White House Champion of Change, <NAME> works with the tech community and the public to help make government more effective, efficient, and responsive. Jay applies modern, agile thinking to government administration, focusing on “lean government” as a platform for innovation. Under his leadership, the Mayor’s Office of Civic Innovation launched the first of its kind Startup in Residence program in collaboration with the White House, offering an on-premises incubator at City Hall, meant to apply startup ingenuity directly to pain points within government itself. In partnership with GSA, Nath launched Superpublic the first of its kind collaboration space for government. Inspired by the Presidential Innovation Fellows, Nath created the Mayor’s Senior Fellowship program where cross-sector leaders spend one year in City Hall working on high impact projects. He also established the nation’s first open source software policy for a city government and authored open data legislation that requires City departments to make nearly all non-confidential datasets available to the public, mandates department-level data coordinators, and creates a chief data officer position for the city. Prior to public service, Nath worked at a San Francisco startup as a VP of product and at PricewaterhouseCoopers as a senior consultant. 
\r\n\r\n### Get updates and event invites\r\n\r\n### Connect\r\n<a href=\"https://twitter.com/Jay_Nath\" class=\"twitter-follow-button\" data-show-count=\"false\">Follow @Jay_Nath</a><script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script><br>\r\n<a href=\"https://www.linkedin.com/pub/jay-nath/2/2a7/77\" style=\"text-decoration:none;\"><span style=\"font: 80% Arial,sans-serif; color:#0783B6;\"><img src=\"https://static.licdn.com/scds/common/u/img/webpromo/btn_in_20x15.png\" width=\"20\" height=\"15\" alt=\"View <NAME>'s LinkedIn profile\" style=\"vertical-align:middle;\" border=\"0\">&nbsp;View <NAME>'s profile</span></a><br>\r\njay <dot> nath AT sfgov.org <br>\r\njay AT jaynath.com (non-work related) \r\n\r\n### Contact me\r\n<!-- Begin SimplerSES Signup Form -->\r\n<link href=\"//cdn-images.mailchimp.com/embedcode/horizontal-slim-10_7.css\" rel=\"stylesheet\" type=\"text/css\">\r\n<style type=\"text/css\">\r\n\t#mc_embed_signup{background:#fff; clear:left; font:14px Helvetica,Arial,sans-serif; width:100%;}\r\n\t/* Add your own MailChimp form style overrides in your site stylesheet or in this style block.\r\n\t We recommend moving this block and the preceding CSS link to the HEAD of your HTML file. 
*/\r\n</style>\r\n<div id=\"mc_embed_signup\">\r\n<form action=\"https://api.simplerses.com/v1/subscriber_lists/Test/subscribers?auth_token=CZyU_<PASSWORD>-6fvPx_iC<PASSWORD>&email=email\" method=\"post\" id=\"mc-embedded-subscribe-form\" name=\"mc-embedded-subscribe-form\" class=\"validate\" target=\"_blank\" novalidate>\r\n <div id=\"mc_embed_signup_scroll\">\r\n\t<label for=\"mce-EMAIL\">Get updates and event invites</label>\r\n\t<input type=\"email\" value=\"\" name=\"EMAIL\" class=\"email\" id=\"mce-EMAIL\" placeholder=\"email address\" required>\r\n <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->\r\n <div style=\"position: absolute; left: -5000px;\" aria-hidden=\"true\"><input type=\"text\" name=\"b_f826e84399346d92de23ca3a7_019c6bd035\" tabindex=\"-1\" value=\"\"></div>\r\n <div class=\"clear\"><input type=\"submit\" value=\"Subscribe\" name=\"subscribe\" id=\"mc-embedded-subscribe\" class=\"button\"></div>\r\n </div>\r\n</form>\r\n</div>\r\n\r\n<!--End mc_embed_signup-->\r\n", "note": "Don't delete this file! It's used internally to help with page regeneration." }
json
{"type":"Feature","id":"node/602060034","properties":{"addr:city":"Sarzbüttel","addr:country":"DE","addr:housenumber":"43","addr:postcode":"25785","addr:street":"Hauptstraße","fax":"+49 4806 501","name":"Meierei-Genossenschaft Sarzbüttel-Feinkäserei","opening_hours":"Mo-Fr 08:00-12:00, 14:00-18:00; Sa 08:00-12:00;PH off","phone":"+49 4806 328","shop":"farm","website":"https://www.kaeserei-sarzbuettel.de/","id":"node/602060034"},"geometry":{"type":"Point","coordinates":[9.1840219,54.1183676]}}
json
.navbar { background-color: white; border: 0; border-radius: 0; margin-bottom: 0; font-size: 15px; letter-spacing: 1.5px; } .navbar li a{ color: green; } .navbar-default .navbar-nav > li > a { color: #0E0F0F; } .navbar-default .navbar-nav > li > a:focus, .navbar-default .navbar-nav > li > a:hover { color: #F3EBEB; background-color: #8B0000; } .navbar-default .navbar-brand:focus, .navbar-default .navbar-brand:hover { color: black; background-color: #8B0000; } .navbar-default .navbar-brand:focus, .navbar-default .navbar-brand{ color: black; background-color: transparent; } .carousel-inner > .item > a > img, .carousel-inner > .item > img, .img-responsive, .thumbnail a > img, .thumbnail > img { max-width: 100%; height: auto; } .carousel-inner img { -webkit-filter: grayscale(0%); filter: grayscale(0%); width: 100%; height: 300px; max-height: 800px; } .carousel-caption h3 { color: white !important; } .btn-group-lg > .btn, .btn-lg:hover{ color: white; background-color: #8B0000; } .container-fluid { padding-right: 15px; padding-left: 15px; margin-right: auto; margin-left: auto; } .cookies-bar { min-width: 992px; background: #354860; color: #d0dff2; position: fixed; left: 0; right: 0; bottom: 0; } .jumbotron { text-align: center; } .logoo{ background-color: black; text-align:center; }
css
A free app for iPhone, by SPORTSOCIAL LLC. A free program for iPhone, by bwin.party entertainment Limited. A free program for iPhone, by Hillside New Media Limited. A free app for iPhone, by Sports.ru LLC.
english
<reponame>ccronje/bilara-data { "sn13.10:0.1": "Saṁyutta Nikāya 13 ", "sn13.10:0.2": "1. Abhisamayavagga ", "sn13.10:0.3": "10. Dutiyapabbatasutta ", "sn13.10:1.1": "Sāvatthiyaṁ viharati. ", "sn13.10:1.2": "“Seyyathāpi, bhikkhave, himavā pabbatarājā parikkhayaṁ pariyādānaṁ gaccheyya, ṭhapetvā satta sāsapamattiyo pāsāṇasakkharā. ", "sn13.10:1.3": "Taṁ kiṁ maññatha, bhikkhave, ", "sn13.10:1.4": "katamaṁ nu kho bahutaraṁ, yaṁ vā himavato pabbatarājassa parikkhīṇaṁ pariyādiṇṇaṁ yā vā satta sāsapamattiyo pāsāṇasakkharā avasiṭṭhā”ti? ", "sn13.10:2.1": "“Etadeva, bhante, bahutaraṁ himavato pabbatarājassa yadidaṁ parikkhīṇaṁ pariyādiṇṇaṁ; ", "sn13.10:2.2": "appamattikā satta sāsapamattiyo pāsāṇasakkharā avasiṭṭhā. ", "sn13.10:2.3": "Neva satimaṁ kalaṁ upenti na sahassimaṁ kalaṁ upenti na satasahassimaṁ kalaṁ upenti himavato pabbatarājassa parikkhīṇaṁ pariyādiṇṇaṁ upanidhāya satta sāsapamattiyo pāsāṇasakkharā avasiṭṭhā”ti. ", "sn13.10:3.1": "“Evameva kho, bhikkhave, ariyasāvakassa diṭṭhisampannassa puggalassa abhisametāvino etadeva bahutaraṁ dukkhaṁ yadidaṁ parikkhīṇaṁ pariyādiṇṇaṁ; ", "sn13.10:3.2": "appamattakaṁ avasiṭṭhaṁ. ", "sn13.10:3.3": "Neva satimaṁ kalaṁ upeti na sahassimaṁ kalaṁ upeti na satasahassimaṁ kalaṁ upeti purimaṁ dukkhakkhandhaṁ parikkhīṇaṁ pariyādiṇṇaṁ upanidhāya yadidaṁ sattakkhattuṁparamatā. ", "sn13.10:3.4": "Evaṁ mahatthiyo kho, bhikkhave, dhammābhisamayo, evaṁ mahatthiyo dhammacakkhupaṭilābho”ti. ", "sn13.10:3.5": "Dasamaṁ. " }
json
WhatsApp has become an essential messaging app amongst all kinds of mobile users irrespective of the operating system their devices run on. In February last year, Facebook-owned WhatsApp announced it would end support for the BlackBerry OS and Nokia S40 platforms, a deadline that got pushed back to June this year. It seems that the end-of-life date has been pushed back yet again, as the company has reportedly confirmed the extension of its services for the BlackBerry and Nokia S40 platforms till December 2017 and December 2018 respectively. Update: WhatsApp has confirmed the revised end-of-life timelines for the BlackBerry OS, BlackBerry 10, and Nokia S40 platforms on its website. As per a report by Netherlands-based fan website WhatsAppen, the WhatsApp apps for BlackBerry 10 and BlackBerry OS7+ received an update on Monday that extends support for the platforms until December 31, 2017. According to the website, the changelog of the update, seen on devices, states "changed client end-of-life date to December 31, 2017". This means that WhatsApp users on eligible BlackBerry devices will be able to continue sending and receiving messages and calls till the end of the year. In addition, the Nokia S40 platform has reportedly got an extension for WhatsApp support till December 31, 2018 - however, this date may be a typographical error. Spotted by WhatsApp watcher WABetaInfo, the end-of-life date has been moved from December 31, 2017 to December 31, 2018. There are a limited number of customers who use the Nokia S40 platform nowadays, but it will still bring relief to those who still carry the dated devices. However, Nokia Symbian S60 users don't have much time on their hands as WhatsApp will end its support on the platform on June 30 this year. If the reports are true, BlackBerry 10, BlackBerry OS7+, and Nokia S40 users still have ample time to access WhatsApp on their platforms. 
However, the Nokia Symbian S60 devices will cease to support WhatsApp, and if you need your chats backed up, you can make a request to get the backup in an email from the company. Notably, there is no option on Nokia S40 and Symbian S60 devices to back up chats.
english
Motion re: 20 FEBRUARY 1958 Report of the Commission of Inquiry into the Affairs of L. I. C. part of it is embodied in the motion which has been moved by the Prime Minister. The Finance Minister has already resigned. The merits of the case do not call for any further consideration. The Government has condemned the methods that were adopted in this Mundhra deal in unqualified terms; it was bad, it was improper, it was irregular, it was in contravention of the rules framed and prescribed for the purpose. So, as far as that goes, there is no difference of opinion. So far as the other matters go, they have already been included in the motion as I just said. After this I had thought that it would be possible for all of us to concentrate on the issues which have arisen out of this report. I was surprised when I heard some of the remarks. Some Members seemed to take credit for this report. I agree that this entire episode, beginning from the questions put by Dr. Ram Subhag Singh, who was at that time the Secretary of the Congress Party, to this day… 14 hrs. Dr. Ram Subhag Singh (Sasaram): No, Sir; not Secretary at that time. Pandit G. B. Pant: Just, I think, a few weeks before that,--and to this moment, when this report is under discussion, will be treated as a landmark in the growth of the strength and vitality of democracy in the country. And it is, I think, a matter for which we can give credit always to the Members of the Congress Party and to nobody else. They have been vigilant not only as observers in the affairs of the country but when they have felt that there was something wrong, whether in the administration or in the handling of public affairs, they have risen above party affiliations and given priority to the country over party. That is what we have noticed with a certain degree of gratification. 
There are very few instances, I think, in political history where the members of a party have themselves gone out of the way to criticise the acts of omission of Government and to demand an enquiry and a probe, and members have done so not only in this case, in this disreputable deal, as I am prepared to call it, but it is again the crusader sitting there, belonging to the Congress Party, who also raised the question pertaining to Dalmia concerns in this House. It was he who also brought before this House some of the aspects of the Telco organisation or firm. So, it is something which must assure the people of the country that the Congress Party is watchful. Shri Naushir Bharucha (East Khandesh): Then why is there no quorum in the House if it is watchful? Mr. Deputy-Speaker: That is for all the Members. Pandit G. B. Pant: I was saying that it is, I think, a matter of some assurance for the future that the members of the party have at least in a measure, however small it may be, caught the spirit of integrity of the Prime Minister and they bothered not over petty things where the interests of the country were at stake. Yesterday, I listened to the speech of Shri Dange. Well, I listened to him with rapt attention. In fact, his homely method of presentation attracts one's attention and it is sustained all the time he speaks. But I was somewhat perplexed when I was listening to him. Ultimately I found the key to his speech in the last few words that he uttered. While concluding, he gave a quotation from an old man who, he said, had been buried in the grave a hundred years ago. That is the difficulty with him. He is always obsessed by what the man to whom he referred said a hundred years ago. He thinks that nothing has happened during these hundred years, that the gospel remains unaffected, that what was said a hundred years ago should guide him today not only in matters of principle but also while we are examining the details of a report by a Commission of Inquiry. 
It is something very queer. That explains to some extent why he is not able to examine these questions in a dispassionate and detached way. His angle of vision is coloured by this thought of obsession. His own being is steeped in it. I do not complain about it. But the judgment of a man like that in matters of this kind cannot carry much weight. Shri S. A. Dange: Why do you take so much energy to fight that man a hundred years after, every day and every now and then? Pandit G. B. Pant: It is because some people take some trouble in making reference to that man after a hundred years. Shri S. A. Dange: No. Because his philosophy rules half the world. Pandit G. B. Pant: So, you think it is necessary to be guided by that man in every matter, whether it could possibly have been within the imagination of that man or anyone in any way who was sitting with him then. But anyway, I think that is a matter over which perhaps we can to some extent postpone the discussion to some other occasion. But Shri Dange raised other things. He is always thinking of scandals. He mentioned the jeep scandal. He mentioned the fertiliser scandal. Well, there may have been or there might not have been a very thorough enquiry into a matter, but if the decision arrived at does not agree with his own preconceived notions or if those making the enquiry do not condemn the Government, then he will not accept those findings. He will again repeat the word 'scandal' even though the truth may have been fully established and may have been fully accepted by this House. In the circumstances, it is difficult to try to convince a mind of that type by any rational approach. 
In the course of the speeches, I submit that in many cases the question of economic policy has been raised, although the Prime Minister had made it clear, here as well as outside, that this inquiry has nothing to do with the economic policy of the Government. Of course, it would not be possible to do it in an inquiry of that kind. The Commission was only asked to look into the merits of this particular case, and it has done so as well as it could in the light of the material that it could collect. So the question of policy does hardly arise, so far as this particular episode is concerned. Neither the public sector, nor the private sector, can congratulate itself on this very regrettable affair. The officers and those who were connected with the public sector cannot feel happy over what has happened. On the other hand, the private sector cannot but feel sorry that the man--I would not use any harsh expression--who was responsible for this sort of dirty speculation was a leading member, associated and connected with many important concerns. So, we need not condemn one or the other. May I know if there are no complaints like these in Russia? May I know if men in charge of undertakings and otherwise connected with the administration of the public sector there have not been repeatedly chastised for doing the wrong thing? So, this is not the monopoly of any particular sector. There is need for vigilance everywhere. In fact, the difference is only this. In a democratic country the failings are not suppressed or concealed. They become the subject of inquiry, so that others may learn a lesson. In a totalitarian country they are kept hidden. In fact, the faults of leading men can be mentioned only after their death, and not during their life-time. So, I am not surprised that there should be this sort of concentration on this aspect of the matter, which to me seems to be hardly relevant. Then it was indicated that the Congress Party or the Government is in
english
<filename>app/@esri/calcite-ui-icons/js/arrowUpLeft32F.js<gh_stars>1-10 export const arrowUpLeft32F = "M17 7H9.12l17.708 17.707-2.121 2.121L7 9.121V17H4V4h13z";
javascript
Brent crude futures dipped 41 cents, or 0.5%, to $80.66 a barrel by 0045 GMT. U.S. West Texas Intermediate crude was at $76.70 a barrel, down 37 cents, or 0.5%. The benchmarks rose 1.5% and 2.2% respectively last week, their fourth straight week of gains, as supply is expected to tighten following OPEC+ cuts. Fighting also escalated last week in Ukraine after Russia withdrew from a U.N.-brokered safe sea corridor agreement for grains exports. "While another Fed rate hike this week may drive some short-term price volatility, we expect tightening market conditions on OPEC's supply cuts and increasing market speculation of further stimulus in China to continue to push prices higher through 3Q23," analysts from National Australia Bank said in a note. Rising interest rates have dampened investments and strengthened the greenback, making dollar-denominated commodities more expensive for holders of other currencies. Market participants also expect Beijing to implement targeted stimulus measures to support its flagging economy, likely boosting oil demand in the world's No. 2 consumer. On supply, United Arab Emirates Energy Minister Suhail al-Mazrouei said on Friday that actions by OPEC+ to support the oil market are sufficient for now and the group is "only a phone call away" if any further steps are needed. Last week, U.S. energy firms made their deepest oil rig cut since early June, with operating units down by seven to 530, energy services firm Baker Hughes said on Friday. (Reporting by Florence Tan; Editing by Tom Hogue)
english
<reponame>boost-entropy-repos-org/metasfresh<gh_stars>0 /* * #%L * de.metas.business * %% * Copyright (C) 2020 metas GmbH * %% * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as * published by the Free Software Foundation, either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public * License along with this program. If not, see * <http://www.gnu.org/licenses/gpl-2.0.html>. * #L% */ package de.metas.product.impl; import com.google.common.collect.ImmutableSet; import com.google.common.collect.Maps; import de.metas.cache.CCache; import de.metas.cache.annotation.CacheCtx; import de.metas.organization.OrgId; import de.metas.product.CreateProductRequest; import de.metas.product.IProductDAO; import de.metas.product.IProductMappingAware; import de.metas.product.ProductAndCategoryAndManufacturerId; import de.metas.product.ProductAndCategoryId; import de.metas.product.ProductCategoryId; import de.metas.product.ProductId; import de.metas.product.ProductPlanningSchemaSelector; import de.metas.product.ResourceId; import de.metas.product.UpdateProductRequest; import de.metas.util.Check; import de.metas.util.Services; import lombok.NonNull; import org.adempiere.ad.dao.IQueryBL; import org.adempiere.ad.dao.IQueryBuilder; import org.adempiere.ad.dao.IQueryOrderBy.Direction; import org.adempiere.ad.dao.IQueryOrderBy.Nulls; import org.adempiere.ad.dao.impl.CompareQueryFilter; import org.adempiere.ad.trx.api.ITrx; import org.adempiere.exceptions.AdempiereException; import org.adempiere.model.InterfaceWrapperHelper; import org.adempiere.service.ClientId; import 
org.adempiere.util.lang.ImmutablePair; import org.adempiere.util.proxy.Cached; import org.compiere.model.IQuery; import org.compiere.model.I_M_Product; import org.compiere.model.I_M_Product_Category; import org.compiere.model.X_M_Product; import org.compiere.util.Env; import org.compiere.util.TimeUtil; import javax.annotation.Nullable; import java.sql.Timestamp; import java.time.Instant; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.Properties; import java.util.Set; import java.util.function.BiConsumer; import java.util.function.Consumer; import java.util.stream.Collectors; import java.util.stream.Stream; import static de.metas.util.Check.isEmpty; import static org.adempiere.model.InterfaceWrapperHelper.load; import static org.adempiere.model.InterfaceWrapperHelper.loadByIdsOutOfTrx; import static org.adempiere.model.InterfaceWrapperHelper.loadByRepoIdAwares; import static org.adempiere.model.InterfaceWrapperHelper.loadByRepoIdAwaresOutOfTrx; import static org.adempiere.model.InterfaceWrapperHelper.loadOutOfTrx; import static org.adempiere.model.InterfaceWrapperHelper.newInstance; import static org.adempiere.model.InterfaceWrapperHelper.saveRecord; public class ProductDAO implements IProductDAO { private final IQueryBL queryBL = Services.get(IQueryBL.class); final static int ONE_YEAR_DAYS = 365; final static int TWO_YEAR_DAYS = 730; final static int THREE_YEAR_DAYS = 1095; private final CCache<Integer, ProductCategoryId> defaultProductCategoryCache = CCache.<Integer, ProductCategoryId>builder() .tableName(I_M_Product_Category.Table_Name) .initialCapacity(1) .expireMinutes(CCache.EXPIREMINUTES_Never) .build(); @Override public I_M_Product getById(@NonNull final ProductId productId) { return getById(productId, I_M_Product.class); } @Override public <T extends I_M_Product> T getById(@NonNull final ProductId productId, @NonNull final Class<T> productClass) { final T product = load(productId, 
productClass); // we can't load out-of-trx, because it's possible that the product was created just now, within the current trx! if (product == null) { throw new AdempiereException("@NotFound@ @M_Product_ID@: " + productId); } return product; } @Override public I_M_Product getById(final int productId) { return getById(ProductId.ofRepoId(productId), I_M_Product.class); } @Override public List<I_M_Product> getByIds(@NonNull final Set<ProductId> productIds) { return loadByRepoIdAwaresOutOfTrx(productIds, I_M_Product.class); } @Override public I_M_Product retrieveProductByValue(@NonNull final String value) { final ProductId productId = retrieveProductIdByValue(value); return productId != null ? getById(productId) : null; } @Nullable @Override public ProductId retrieveProductIdByValue(@NonNull final String value) { return retrieveProductIdByValueOrNull(Env.getCtx(), value); } @Nullable @Cached(cacheName = I_M_Product.Table_Name + "#ID#by#" + I_M_Product.COLUMNNAME_Value) public ProductId retrieveProductIdByValueOrNull(@CacheCtx final Properties ctx, @NonNull final String value) { final int productRepoId = queryBL.createQueryBuilder(I_M_Product.class, ctx, ITrx.TRXNAME_None) .addEqualsFilter(I_M_Product.COLUMNNAME_Value, value) .addOnlyActiveRecordsFilter() .addOnlyContextClient(ctx) .create() .firstIdOnly(); return ProductId.ofRepoIdOrNull(productRepoId); } @Nullable @Override public ProductId retrieveProductIdBy(@NonNull final ProductQuery query) { final IQueryBuilder<I_M_Product> queryBuilder; if (query.isOutOfTrx()) { queryBuilder = Services .get(IQueryBL.class) .createQueryBuilderOutOfTrx(I_M_Product.class) .setOption(IQuery.OPTION_ReturnReadOnlyRecords, true); } else { queryBuilder = Services .get(IQueryBL.class) .createQueryBuilder(I_M_Product.class); } if (query.isIncludeAnyOrg()) { queryBuilder .addInArrayFilter(I_M_Product.COLUMNNAME_AD_Org_ID, query.getOrgId(), OrgId.ANY) .orderByDescending(I_M_Product.COLUMNNAME_AD_Org_ID); } else { 
queryBuilder.addEqualsFilter(I_M_Product.COLUMNNAME_AD_Org_ID, query.getOrgId()); } if (!isEmpty(query.getValue(), true)) { queryBuilder.addEqualsFilter(I_M_Product.COLUMNNAME_Value, query.getValue().trim()); } if (query.getExternalId() != null) { queryBuilder.addEqualsFilter(I_M_Product.COLUMNNAME_ExternalId, query.getExternalId().getValue().trim()); } final int productRepoId = queryBuilder .addOnlyActiveRecordsFilter() .create() .firstId(); return ProductId.ofRepoIdOrNull(productRepoId); } @Override public Optional<ProductCategoryId> retrieveProductCategoryIdByCategoryValue(@NonNull final String categoryValue) { final int productCategoryRepoId = queryBL .createQueryBuilder(I_M_Product_Category.class) .addOnlyActiveRecordsFilter() .addEqualsFilter(I_M_Product_Category.COLUMNNAME_Value, categoryValue) .create() .firstIdOnly(); return Optional.ofNullable(ProductCategoryId.ofRepoIdOrNull(productCategoryRepoId)); } @Override public Stream<I_M_Product> streamAllProducts(@Nullable final Instant since) { final IQueryBuilder<I_M_Product> queryBuilder = queryBL.createQueryBuilderOutOfTrx(I_M_Product.class) .addOnlyActiveRecordsFilter(); if (since != null) { final Timestamp updatedAfter = TimeUtil.asTimestamp(since); queryBuilder.addCompareFilter(I_M_Product.COLUMNNAME_Updated, CompareQueryFilter.Operator.GREATER_OR_EQUAL, updatedAfter); } return queryBuilder .orderBy(I_M_Product.COLUMNNAME_M_Product_ID) .create() .iterateAndStream(); } @Override public @NonNull ProductCategoryId getDefaultProductCategoryId() { return defaultProductCategoryCache.getOrLoad(0, this::retrieveDefaultProductCategoryId); } /** * @return All the active products with the given product planning schema selector */ @Override public Set<ImmutablePair<ProductId, OrgId>> retrieveProductsAndOrgsForSchemaSelector( @NonNull final ProductPlanningSchemaSelector productPlanningSchemaSelector) { return queryBL .createQueryBuilder(I_M_Product.class) .addOnlyActiveRecordsFilter() .addOnlyContextClient() 
.addEqualsFilter(I_M_Product.COLUMNNAME_M_ProductPlanningSchema_Selector, productPlanningSchemaSelector) .create() .listColumns(I_M_Product.COLUMNNAME_M_Product_ID, I_M_Product.COLUMNNAME_AD_Org_ID) .stream() .map(pair -> { final ProductId productId = ProductId.ofRepoId((int)pair.get(I_M_Product.COLUMNNAME_M_Product_ID)); final OrgId orgId = OrgId.ofRepoId((int)pair.get(I_M_Product.COLUMNNAME_AD_Org_ID)); return ImmutablePair.of(productId, orgId); }) .collect(Collectors.toSet()); } private ProductCategoryId retrieveDefaultProductCategoryId() { final ProductCategoryId productCategoryId = queryBL .createQueryBuilderOutOfTrx(I_M_Product_Category.class) .addOnlyActiveRecordsFilter() .orderBy() .addColumn(I_M_Product_Category.COLUMNNAME_IsDefault, Direction.Descending, Nulls.Last) .addColumn(I_M_Product_Category.COLUMNNAME_M_Product_Category_ID) .endOrderBy() .create() .firstId(ProductCategoryId::ofRepoIdOrNull); if (productCategoryId == null) { throw new AdempiereException("default product category shall exist"); } return productCategoryId; } @Nullable @Override public ProductId retrieveMappedProductIdOrNull(final ProductId productId, final OrgId orgId) { final I_M_Product product = getById(productId); final IProductMappingAware productMappingAware = InterfaceWrapperHelper.asColumnReferenceAwareOrNull(product, IProductMappingAware.class); if (productMappingAware.getM_Product_Mapping_ID() <= 0) { return null; } if (!productMappingAware.getM_Product_Mapping().isActive()) { return null; } return queryBL.createQueryBuilderOutOfTrx(I_M_Product.class) .addOnlyActiveRecordsFilter() .addEqualsFilter(IProductMappingAware.COLUMNNAME_M_Product_Mapping_ID, productMappingAware.getM_Product_Mapping_ID()) .addEqualsFilter(I_M_Product.COLUMNNAME_AD_Org_ID, orgId) .create() .firstIdOnly(ProductId::ofRepoIdOrNull); } @Override public List<de.metas.product.model.I_M_Product> retrieveAllMappedProducts(final I_M_Product product) { final IProductMappingAware productMappingAware = 
InterfaceWrapperHelper.asColumnReferenceAwareOrNull(product, IProductMappingAware.class); if (productMappingAware.getM_Product_Mapping_ID() <= 0) { return Collections.emptyList(); } if (!productMappingAware.getM_Product_Mapping().isActive()) { return Collections.emptyList(); } return queryBL.createQueryBuilder(de.metas.product.model.I_M_Product.class, product) .addOnlyActiveRecordsFilter() .addEqualsFilter(IProductMappingAware.COLUMNNAME_M_Product_Mapping_ID, productMappingAware.getM_Product_Mapping_ID()) .addNotEqualsFilter(I_M_Product.COLUMNNAME_M_Product_ID, product.getM_Product_ID()) .create() .list(de.metas.product.model.I_M_Product.class); } @Nullable @Override public ProductCategoryId retrieveProductCategoryByProductId(@Nullable final ProductId productId) { if (productId == null) { return null; } final I_M_Product product = getById(productId); return product != null ? ProductCategoryId.ofRepoId(product.getM_Product_Category_ID()) : null; } @Nullable @Override public ProductAndCategoryId retrieveProductAndCategoryIdByProductId(@NonNull final ProductId productId) { final ProductCategoryId productCategoryId = retrieveProductCategoryByProductId(productId); return productCategoryId != null ? 
ProductAndCategoryId.of(productId, productCategoryId) : null; } @Override public ProductAndCategoryAndManufacturerId retrieveProductAndCategoryAndManufacturerByProductId(@NonNull final ProductId productId) { final I_M_Product product = getById(productId); if (!product.isActive()) { throw new AdempiereException("Cannot retrieve product category and manufacturer because product is not active: " + product); } return createProductAndCategoryAndManufacturerId(product); } @Override public String retrieveProductValueByProductId(@NonNull final ProductId productId) { final I_M_Product product = getById(productId); return product.getValue(); } @Override public Set<ProductAndCategoryAndManufacturerId> retrieveProductAndCategoryAndManufacturersByProductIds(final Set<ProductId> productIds) { return loadByIdsOutOfTrx(ProductId.toRepoIds(productIds), I_M_Product.class) .stream() .map(this::createProductAndCategoryAndManufacturerId) .collect(ImmutableSet.toImmutableSet()); } private ProductAndCategoryAndManufacturerId createProductAndCategoryAndManufacturerId(final I_M_Product product) { return ProductAndCategoryAndManufacturerId.of(product.getM_Product_ID(), product.getM_Product_Category_ID(), product.getManufacturer_ID()); } @Override public I_M_Product_Category getProductCategoryById(@NonNull final ProductCategoryId id) { return getProductCategoryById(id, I_M_Product_Category.class); } @Override public <T extends I_M_Product_Category> T getProductCategoryById(@NonNull final ProductCategoryId id, final Class<T> modelClass) { return loadOutOfTrx(id, modelClass); } @Override public String getProductCategoryNameById(@NonNull final ProductCategoryId id) { return getProductCategoryById(id).getName(); } @Override public Stream<I_M_Product_Category> streamAllProductCategories() { return queryBL.createQueryBuilderOutOfTrx(I_M_Product_Category.class) .addOnlyActiveRecordsFilter() .orderBy(I_M_Product_Category.COLUMN_M_Product_Category_ID) .create() .iterateAndStream(); } 
@Cached(cacheName = I_M_Product.Table_Name + "#by#" + I_M_Product.COLUMNNAME_S_Resource_ID) @Override public ProductId getProductIdByResourceId(@NonNull final ResourceId resourceId) { final ProductId productId = queryBL .createQueryBuilderOutOfTrx(I_M_Product.class) .addEqualsFilter(I_M_Product.COLUMN_S_Resource_ID, resourceId) .addOnlyActiveRecordsFilter() .create() .firstIdOnly(ProductId::ofRepoIdOrNull); if (productId == null) { throw new AdempiereException("No product found for " + resourceId); } return productId; } @Override public void updateProductsByResourceIds(@NonNull final Set<ResourceId> resourceIds, @NonNull final Consumer<I_M_Product> productUpdater) { updateProductsByResourceIds(resourceIds, (resourceId, product) -> { if (product != null) { productUpdater.accept(product); } }); } @Override public void updateProductsByResourceIds(@NonNull final Set<ResourceId> resourceIds, @NonNull final BiConsumer<ResourceId, I_M_Product> productUpdater) { Check.assumeNotEmpty(resourceIds, "resourceIds is not empty"); final Set<ProductId> productIds = queryBL .createQueryBuilder(I_M_Product.class) // in trx! 
.addInArrayFilter(I_M_Product.COLUMN_S_Resource_ID, resourceIds) .create() .listIds(ProductId::ofRepoId); if (productIds.isEmpty()) { return; } final Map<ResourceId, I_M_Product> productsByResourceId = Maps.uniqueIndex( loadByRepoIdAwares(productIds, I_M_Product.class), product -> ResourceId.ofRepoId(product.getS_Resource_ID())); resourceIds.forEach(resourceId -> { final I_M_Product product = productsByResourceId.get(resourceId); // might be null productUpdater.accept(resourceId, product); saveRecord(product); }); } @Override public void deleteProductByResourceId(@NonNull final ResourceId resourceId) { queryBL .createQueryBuilder(I_M_Product.class) // in trx .addEqualsFilter(I_M_Product.COLUMN_S_Resource_ID, resourceId) .addOnlyActiveRecordsFilter() .addOnlyContextClient() .create() .delete(); } @Override public I_M_Product createProduct(@NonNull final CreateProductRequest request) { final I_M_Product product = newInstance(I_M_Product.class); if (request.getProductValue() != null) { product.setValue(request.getProductValue()); } product.setName(request.getProductName()); product.setM_Product_Category_ID(request.getProductCategoryId().getRepoId()); product.setProductType(request.getProductType()); product.setC_UOM_ID(request.getUomId().getRepoId()); product.setIsPurchased(request.isPurchased()); product.setIsSold(request.isSold()); if (request.getBomVerified() != null) { product.setIsVerified(request.getBomVerified()); } if (request.getPlanningSchemaSelector() != null) { product.setM_ProductPlanningSchema_Selector(request.getPlanningSchemaSelector().getCode()); } saveRecord(product); return product; } @Override public void updateProduct(@NonNull final UpdateProductRequest request) { final I_M_Product product = load(request.getProductId(), I_M_Product.class); // in-trx if (request.getIsBOM() != null) { product.setIsBOM(request.getIsBOM()); if (!request.getIsBOM()) { product.setIsVerified(false); } } saveRecord(product); } @Override public int 
getProductGuaranteeDaysMinFallbackProductCategory(final @NonNull ProductId productId) { final I_M_Product productRecord = getById(productId); if (productRecord.getGuaranteeDaysMin() > 0) { return productRecord.getGuaranteeDaysMin(); } else if (Check.isNotBlank(productRecord.getGuaranteeMonths())) { return getGuaranteeMonthsInDays(productId); } else { final ProductCategoryId productCategoryId = ProductCategoryId.ofRepoId(productRecord.getM_Product_Category_ID()); final I_M_Product_Category productCategoryRecord = getProductCategoryById(productCategoryId); return productCategoryRecord.getGuaranteeDaysMin(); } } @Override public int getGuaranteeMonthsInDays(@NonNull final ProductId productId) { final I_M_Product product = getById(productId); if(product != null && Check.isNotBlank(product.getGuaranteeMonths())) { switch (product.getGuaranteeMonths()) { case X_M_Product.GUARANTEEMONTHS_12: return ONE_YEAR_DAYS; case X_M_Product.GUARANTEEMONTHS_24: return TWO_YEAR_DAYS; case X_M_Product.GUARANTEEMONTHS_36: return THREE_YEAR_DAYS; default: return 0; } } return 0; } @Override public Optional<ProductId> getProductIdByBarcode(@NonNull final String barcode, @NonNull final ClientId clientId) { final ProductId productId = queryBL.createQueryBuilderOutOfTrx(I_M_Product.class) .addOnlyActiveRecordsFilter() .addEqualsFilter(I_M_Product.COLUMNNAME_AD_Client_ID, clientId) .filter(queryBL.createCompositeQueryFilter(I_M_Product.class) .setJoinOr() .addEqualsFilter(I_M_Product.COLUMNNAME_UPC, barcode) .addEqualsFilter(I_M_Product.COLUMNNAME_Value, barcode)) .create() .firstIdOnly(ProductId::ofRepoIdOrNull); return Optional.ofNullable(productId); } }
java
{
  "name": "<NAME> 1000",
  "description": "A drafting pencil.",
  "url": "https://www.amazon.com/Pentel-Automatic-Drafting-Accents-PG1013E/dp/B005GSL762"
}
json
The institute is channelling its students' entrepreneurial spirit into serving its vision of 'IIT Delhi for society'. One of the original seven IITs set up to provide students with world-class facilities for training, research and development in science, engineering and technology, the Indian Institute of Technology (IIT), Delhi, has seen over 48,000 students graduate in various disciplines since its inception in 1961. With undergraduate, postgraduate, doctoral and certificate programmes on offer, the institute has multiple departments under its wing. Ranging from applied mechanics, biochemical engineering and technology, chemical engineering, chemistry, civil engineering, electrical engineering, and materials science and engineering to textile and fibre engineering, IIT Delhi takes its diverse disciplines quite seriously. Its distinguished alumni include former IPS officer and Lt Governor of Pondicherry Kiran Bedi, Flipkart co-founder Sachin Bansal, former RBI governor Raghuram Rajan, MP and former Union minister Jayant Sinha, and NIIT chairman and co-founder Rajendra Pawar, among others. IIT Delhi director V. Ramgopal Rao says, "IIT Delhi is known for its start-up culture. Since we are located in the capital, the connect with society and government boosts entrepreneurship on the campus." Among the latest research activities is PRACRITI (Prediction and Assessment of Corona Infections and Transmission in India), a web-based platform for monitoring COVID-19. On the start-up front, Nanosafe Solutions has launched an anti-microbial and washable face mask called 'NSafe', and ETEX, a smart textile start-up, is working on innovative solutions in healthcare. Adds Rao, "Our focus today is on relevant research to help in the fight against COVID-19. There are about 20 ongoing projects at IIT Delhi for prevention of coronavirus; one million PPE kits have also been made in the past two months. 
We have developed a COVID-19 detection kit that has been approved by the ICMR." Some of the COVID-19 R&D projects include developing knit-based masks, Hazmat hoodie-based masks, low-cost hand sanitisers and PPE for hospital and health workers. The Centres of Excellence (CoEs) here are of three types: those set up by the institute, those that are industry-sponsored, and those funded as sponsored projects. Those set up by the institute work on biologically-inspired robots and drones (BIRD), the transportation research and injury prevention programme (TRIPP), and cyber systems and information assurance. The industry-sponsored ones are the oilfield services company Schlumberger Ltd's CoE on oil technology, Denmark-based DESMI's CoE on waste management and Yardi Systems Inc's centre for sustainable infrastructure. The sponsored projects include the DRDO-IIT Delhi joint advanced technology centre. The culture of R&D and incubators is so strong at IIT Delhi that it is now fertile ground for CoEs, say the teaching staff. "IIT Delhi changed my entire life. I became more self-confident, made friends and learnt to solve the toughest problems. I am a writer now, but even today people hold me in high esteem because I studied here." One of our student surveys noted that everyone wants to be an entrepreneur at IIT Delhi. In the past three years, over 16 unicorns have come from the campus. An immersion programme has been started under which students are sent to villages and hospitals to see and understand real-world problems and connect with society. This is optional for them and they earn non-graded units (NGUs) for it. We have modified our minor degree programme in entrepreneurship. Last year, in 2019, two research-based programmes were started for the faculty, with funding from the institute. 
For PhD students and faculty, research solutions for multidisciplinary and major initiatives include the FIDP (Faculty Induction & Development Programme), under which two faculty members work on an internally funded project. IIT Delhi also signed an MoU with the All India Institute of Medical Sciences (AIIMS) Delhi for biomedical research. About 100 research projects were started with a focus on issues in society. We shifted focus from research and development to relevance and delivery: relevance with a focus on society, and delivery with a focus on industry interaction and start-ups. There will be a further boost to start-ups; we want at least one in five faculty members to have a start-up. We will be working towards ensuring that students become job providers rather than job seekers. COVID-19 has brought a spurt in activity and we will be taking a "problem-first approach", not a "solutions approach". IIT Delhi for society will be key, and research papers will be judged on the question: have we solved a problem in society? The three 'I's approach (interdisciplinary research, internationalisation and industry connect) will be followed. We have already given a proposal to the senate for short-term certificate programmes in artificial intelligence and data science in collaboration with eight companies in the education space. A decision on this could come as early as the first week of July.
english
Red Bull has cruised through the season so far with utmost ease and dominance. However, this also means they have a target on their backs as rival teams bring in new updates that only rub salt in Red Bull's budget-cap-inflicted wounds. And their closest competitors, Aston Martin, are ready to take full advantage. Following a dazzling display at the Canadian GP, Aston Martin's new package has proven to be a formidable upgrade, boasting a strikingly revamped floor and sidepods. These enhancements proved to be a game-changer for Fernando Alonso, who secured second place despite a challenging day on the track. Although the circuit provided the perfect canvas for the AMR23 to show its new, flying colors, Aston Martin wants more: closing the gap to Red Bull. At the heart of this upgrade was expertise. And as the team in green acknowledged, Red Bull holds the ultimate advantage of a powerful DRS, a supremacy that Aston Martin is diligently trying to erase. Fernando Alonso himself had warned his rivals, Max Verstappen included, that the upgrades in Canada were meant to destroy them. To be fair, Alonso came close enough, putting up a brave fight with Lewis Hamilton in Canada. However, there is a race to win and one man to overthrow. With the Austrian GP around the corner, team principal Mike Krack is confident and eager to outshine Red Bull in its own backyard. An optimistic Krack believes their car will undergo significant improvements ahead of the race at the Red Bull Ring. With this bold battle cry, Aston Martin will surely leave no stone unturned in their mission to seize victory and dethrone their formidable rivals. With adrenaline coursing through their veins and determination electrifying the team, Aston Martin is ready to rewrite the narrative of Red Bull's dominance in the 2023 season.
english
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.table.planner.plan.processors.utils;

import org.apache.flink.annotation.Internal;
import org.apache.flink.annotation.VisibleForTesting;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.planner.plan.nodes.exec.AbstractExecNodeExactlyOnceVisitor;
import org.apache.flink.table.planner.plan.nodes.exec.ExecNode;
import org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecBoundedStreamScan;
import org.apache.flink.util.Preconditions;

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

/**
 * A data structure storing the topological and input priority information of an
 * {@link ExecNode} graph.
 */
@Internal
class TopologyGraph {

	private final Map<ExecNode<?, ?>, TopologyNode> nodes;

	TopologyGraph(List<ExecNode<?, ?>> roots) {
		this(roots, Collections.emptySet());
	}

	TopologyGraph(List<ExecNode<?, ?>> roots, Set<ExecNode<?, ?>> boundaries) {
		this.nodes = new HashMap<>();

		// we first link all edges in the original exec node graph
		AbstractExecNodeExactlyOnceVisitor visitor = new AbstractExecNodeExactlyOnceVisitor() {
			@Override
			protected void visitNode(ExecNode<?, ?> node) {
				if (boundaries.contains(node)) {
					return;
				}
				for (ExecNode<?, ?> input : node.getInputNodes()) {
					link(input, node);
				}
				visitInputs(node);
			}
		};
		roots.forEach(n -> n.accept(visitor));
	}

	/**
	 * Link an edge from `from` node to `to` node if no loop will occur after adding this edge.
	 * Returns whether this edge is successfully added.
	 */
	boolean link(ExecNode<?, ?> from, ExecNode<?, ?> to) {
		TopologyNode fromNode = getOrCreateTopologyNode(from);
		TopologyNode toNode = getOrCreateTopologyNode(to);

		if (canReach(toNode, fromNode)) {
			// invalid edge, as `to` is the predecessor of `from`
			return false;
		} else {
			// link `from` and `to`
			fromNode.outputs.add(toNode);
			toNode.inputs.add(fromNode);
			return true;
		}
	}

	/**
	 * Remove the edge from `from` node to `to` node. If there is no edge between them then do
	 * nothing.
	 */
	void unlink(ExecNode<?, ?> from, ExecNode<?, ?> to) {
		TopologyNode fromNode = getOrCreateTopologyNode(from);
		TopologyNode toNode = getOrCreateTopologyNode(to);

		fromNode.outputs.remove(toNode);
		toNode.inputs.remove(fromNode);
	}

	/**
	 * Calculate the maximum distance of the currently added nodes from the nodes without inputs.
	 * The smallest distance is 0 (which are exactly the nodes without inputs) and the distances of
	 * other nodes are the largest distances in their inputs plus 1.
	 *
	 * <p>Distance of a node is defined as the number of edges one needs to go through from the
	 * nodes without inputs to this node.
	 */
	Map<ExecNode<?, ?>, Integer> calculateMaximumDistance() {
		Map<ExecNode<?, ?>, Integer> result = new HashMap<>();
		Map<TopologyNode, Integer> inputsVisitedMap = new HashMap<>();

		Queue<TopologyNode> queue = new LinkedList<>();
		for (TopologyNode node : nodes.values()) {
			if (node.inputs.size() == 0) {
				queue.offer(node);
			}
		}

		while (!queue.isEmpty()) {
			TopologyNode node = queue.poll();
			int dist = -1;
			for (TopologyNode input : node.inputs) {
				dist = Math.max(
					dist,
					Preconditions.checkNotNull(
						result.get(input.execNode),
						"The distance of an input node is not calculated. This is a bug."));
			}
			dist++;
			result.put(node.execNode, dist);

			for (TopologyNode output : node.outputs) {
				int inputsVisited = inputsVisitedMap.compute(output, (k, v) -> v == null ? 1 : v + 1);
				if (inputsVisited == output.inputs.size()) {
					queue.offer(output);
				}
			}
		}

		return result;
	}

	/**
	 * Make the distance of node A at least as far as node B by adding edges
	 * from all inputs of node B to node A.
	 */
	void makeAsFarAs(ExecNode<?, ?> a, ExecNode<?, ?> b) {
		TopologyNode nodeA = getOrCreateTopologyNode(a);
		TopologyNode nodeB = getOrCreateTopologyNode(b);

		for (TopologyNode input : nodeB.inputs) {
			link(input.execNode, nodeA.execNode);
		}
	}

	@VisibleForTesting
	boolean canReach(ExecNode<?, ?> from, ExecNode<?, ?> to) {
		TopologyNode fromNode = getOrCreateTopologyNode(from);
		TopologyNode toNode = getOrCreateTopologyNode(to);
		return canReach(fromNode, toNode);
	}

	private boolean canReach(TopologyNode from, TopologyNode to) {
		Set<TopologyNode> visited = new HashSet<>();
		visited.add(from);
		Queue<TopologyNode> queue = new LinkedList<>();
		queue.offer(from);

		while (!queue.isEmpty()) {
			TopologyNode node = queue.poll();
			if (to.equals(node)) {
				return true;
			}

			for (TopologyNode next : node.outputs) {
				if (visited.contains(next)) {
					continue;
				}
				visited.add(next);
				queue.offer(next);
			}
		}

		return false;
	}

	private TopologyNode getOrCreateTopologyNode(ExecNode<?, ?> execNode) {
		// NOTE: We treat different `BatchExecBoundedStreamScan`s with the same `DataStream` object
		// as the same node.
		if (execNode instanceof BatchExecBoundedStreamScan) {
			DataStream<?> currentStream =
				((BatchExecBoundedStreamScan) execNode).boundedStreamTable().dataStream();
			for (Map.Entry<ExecNode<?, ?>, TopologyNode> entry : nodes.entrySet()) {
				ExecNode<?, ?> key = entry.getKey();
				if (key instanceof BatchExecBoundedStreamScan) {
					DataStream<?> existingStream =
						((BatchExecBoundedStreamScan) key).boundedStreamTable().dataStream();
					if (existingStream.equals(currentStream)) {
						return entry.getValue();
					}
				}
			}

			TopologyNode result = new TopologyNode(execNode);
			nodes.put(execNode, result);
			return result;
		} else {
			return nodes.computeIfAbsent(execNode, k -> new TopologyNode(execNode));
		}
	}

	/**
	 * A node in the {@link TopologyGraph}.
	 */
	private static class TopologyNode {
		private final ExecNode<?, ?> execNode;
		private final Set<TopologyNode> inputs;
		private final Set<TopologyNode> outputs;

		private TopologyNode(ExecNode<?, ?> execNode) {
			this.execNode = execNode;
			this.inputs = new HashSet<>();
			this.outputs = new HashSet<>();
		}
	}
}
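`calculateMaximumDistance` is a Kahn-style breadth-first pass over the DAG: source nodes (no inputs) get distance 0, every other node gets one more than the largest distance among its inputs, and a node is enqueued only once all of its inputs have been visited. A minimal stand-alone sketch of the same idea, written in Python for brevity (names are illustrative, not from the Flink API):

```python
from collections import deque

def max_distances(nodes, edges):
    """Longest-path distance from source nodes, as in calculateMaximumDistance."""
    inputs = {n: set() for n in nodes}
    outputs = {n: set() for n in nodes}
    for a, b in edges:
        outputs[a].add(b)
        inputs[b].add(a)

    dist = {}
    visited_inputs = {n: 0 for n in nodes}
    queue = deque(n for n in nodes if not inputs[n])  # sources first
    while queue:
        n = queue.popleft()
        # largest input distance plus one; sources get (-1) + 1 == 0
        dist[n] = max((dist[p] for p in inputs[n]), default=-1) + 1
        for out in outputs[n]:
            visited_inputs[out] += 1
            if visited_inputs[out] == len(inputs[out]):  # all inputs resolved
                queue.append(out)
    return dist
```

On the diamond `a -> b -> c`, `a -> c`, `c -> d`, node `c` waits for both `a` and `b` and ends up at distance 2, not 1 — exactly the "maximum distance" behavior the Javadoc describes.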
java
KAILALI: A man infected with black fungus died this morning in Dhangadhi, Kailali. The 65-year-old, from Bardagoriya in Kailali district, died at Seti Provincial Hospital. He had been admitted to the hospital on May 31, said the hospital’s information officer Dilip Kumar Shrestha. A biopsy test confirmed the black fungus infection, he said.
english
{ "about_to_stake_to_cpu": "Vous êtes sur le point d'attribuer en CPU: ", "about_to_stake_to_net": "Vous êtes sur le point d'attribuer en Bandwidth:", "about_to_unstake_from_cpu": "Vous êtes sur le point de désattribuer en CPU: ", "about_to_unstake_from_net": "Vous êtes sur le point de désattribuer en Bandwidth: ", "amount_not_staked": "EOS disponible (Non Attribué)", "confirm_stake": "Confirmer", "cpu_staked": "EOS attribué au CPU", "eos_in_cpu_after": "EOS en CPU après", "eos_in_net_after": "EOS en Bandwidth après", "net_staked": "EOS attribué au Bandwidth", "stake_button_cta": "Mettez vos ressources à jour", "stake_form": "Attribué des EOS", "stake_modal_title": "Mettez vos ressources à jour", "stake_success": "Vos EOS ont été attribué avec succès", "stake_updated": "Vos ressources ont été mise à jour ", "staked_balances": "Votre Solde de EOS attribué", "staked_data": "Statistique de Blocage de EOS", "total_staked": "Quantité total de EOS attribué", "undelegate_explanation": "Tout EOS désattribué ne sera pas indisponible pendant 3 jours. Après cette période d'attente, il apparaîtra comme disponible", "unstake_success": "Vos EOS ont été désattribué avec succès", "update_staked_coins": "Mettre à jour les EOS attribué", "update_staked_cpu_amount": "Quantité de EOS attribué en ressource CPU", "update_staked_net_amount": "Quantité de EOS attribué en ressource Bandwidth", "will_have_less_than_one_eos_staked": "You will have less than 1 EOS staked to Bandwidth and/or CPU. That may prevent you from carrying out transactions when the network is busy.", "you_will_have": "Vous aurez", "about_to_unstake": "Vous êtes sur le point de désattribuer des jetons, veuillez noter que tous les jetons qui sont désattribuer devront être réclamé dans 72 heures.", "have_already_unstaked": "You are currently unstaking ", "unstaking_will_be_reset": ". If you proceed to unstake to this new value, it will replace this previous request, and reset the 72 hour count down." }
json
Beirut: People in Beirut attended a vigil for the more than 200 victims of the huge explosion that devastated the Lebanese capital a week ago. A crowd stood in silence near the ruins of the city’s port as a Muslim call to prayer was broadcast and church bells tolled at 18:09 (15:09 GMT), the BBC reported. That was the exact time when 2,750 tonnes of ammonium nitrate stored unsafely in a port warehouse detonated. There has been outrage that so much hazardous material was kept there. The Lebanese government’s resignation on Monday failed to pacify protesters, who clashed with police in central Beirut for a third consecutive night. Lebanon was already struggling with an unprecedented economic crisis before the disaster, with families pushed into poverty and hunger. Since October, protesters have been demanding the complete overhaul of the political system, which they blame for government corruption and mismanagement. Prime Minister Hassan Diab, a university professor who took office in January with the support of the Iran-backed Hezbollah movement and its allies following the resignation of the previous government, announced his cabinet’s resignation in a televised address on Monday night. He avoided taking responsibility for last week’s blast, blaming it on the entrenched political elite. Diab said that his caretaker administration would “follow the will of the people in their demand to hold accountable those responsible for the disaster”. Media reports say it is unlikely the mass resignation of the government will take much heat out of the protests, as Lebanon’s problems are only deepening. A new prime minister will have to be chosen using the same system of sectarian politics that is at the root of many people’s complaints, the reports added.
english
# # Copyright (c) 2008-2016 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License") # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from nssrc.com.citrix.netscaler.nitro.resource.base.base_resource import base_resource from nssrc.com.citrix.netscaler.nitro.resource.base.base_resource import base_response from nssrc.com.citrix.netscaler.nitro.service.options import options from nssrc.com.citrix.netscaler.nitro.exception.nitro_exception import nitro_exception from nssrc.com.citrix.netscaler.nitro.util.nitro_util import nitro_util class appfwsettings(base_resource) : """ Configuration for AS settings resource. """ def __init__(self) : self._defaultprofile = None self._undefaction = None self._sessiontimeout = None self._learnratelimit = None self._sessionlifetime = None self._sessioncookiename = None self._clientiploggingheader = None self._importsizelimit = None self._signatureautoupdate = None self._signatureurl = None self._cookiepostencryptprefix = None self._logmalformedreq = None self._geolocationlogging = None self._ceflogging = None self._entitydecoding = None self._useconfigurablesecretkey = None @property def defaultprofile(self) : r"""Profile to use when a connection does not match any policy. Default setting is APPFW_BYPASS, which sends unmatched connections back to the NetScaler appliance without attempting to filter them further.<br/>Default value: APPFW_BYPASS<br/>Minimum length = 1. 
""" try : return self._defaultprofile except Exception as e: raise e @defaultprofile.setter def defaultprofile(self, defaultprofile) : r"""Profile to use when a connection does not match any policy. Default setting is APPFW_BYPASS, which sends unmatched connections back to the NetScaler appliance without attempting to filter them further.<br/>Default value: APPFW_BYPASS<br/>Minimum length = 1 """ try : self._defaultprofile = defaultprofile except Exception as e: raise e @property def undefaction(self) : r"""Profile to use when an application firewall policy evaluates to undefined (UNDEF). An UNDEF event indicates an internal error condition. The APPFW_BLOCK built-in profile is the default setting. You can specify a different built-in or user-created profile as the UNDEF profile.<br/>Default value: APPFW_BLOCK<br/>Minimum length = 1. """ try : return self._undefaction except Exception as e: raise e @undefaction.setter def undefaction(self, undefaction) : r"""Profile to use when an application firewall policy evaluates to undefined (UNDEF). An UNDEF event indicates an internal error condition. The APPFW_BLOCK built-in profile is the default setting. You can specify a different built-in or user-created profile as the UNDEF profile.<br/>Default value: APPFW_BLOCK<br/>Minimum length = 1 """ try : self._undefaction = undefaction except Exception as e: raise e @property def sessiontimeout(self) : r"""Timeout, in seconds, after which a user session is terminated. Before continuing to use the protected web site, the user must establish a new session by opening a designated start URL.<br/>Default value: 900<br/>Minimum length = 1<br/>Maximum length = 65535. """ try : return self._sessiontimeout except Exception as e: raise e @sessiontimeout.setter def sessiontimeout(self, sessiontimeout) : r"""Timeout, in seconds, after which a user session is terminated. 
Before continuing to use the protected web site, the user must establish a new session by opening a designated start URL.<br/>Default value: 900<br/>Minimum length = 1<br/>Maximum length = 65535 """ try : self._sessiontimeout = sessiontimeout except Exception as e: raise e @property def learnratelimit(self) : r"""Maximum number of connections per second that the application firewall learning engine examines to generate new relaxations for learning-enabled security checks. The application firewall drops any connections above this limit from the list of connections used by the learning engine.<br/>Default value: 400<br/>Minimum length = 1<br/>Maximum length = 1000. """ try : return self._learnratelimit except Exception as e: raise e @learnratelimit.setter def learnratelimit(self, learnratelimit) : r"""Maximum number of connections per second that the application firewall learning engine examines to generate new relaxations for learning-enabled security checks. The application firewall drops any connections above this limit from the list of connections used by the learning engine.<br/>Default value: 400<br/>Minimum length = 1<br/>Maximum length = 1000 """ try : self._learnratelimit = learnratelimit except Exception as e: raise e @property def sessionlifetime(self) : r"""Maximum amount of time (in seconds) that the application firewall allows a user session to remain active, regardless of user activity. After this time, the user session is terminated. Before continuing to use the protected web site, the user must establish a new session by opening a designated start URL.<br/>Default value: 0<br/>Maximum length = 2147483647. """ try : return self._sessionlifetime except Exception as e: raise e @sessionlifetime.setter def sessionlifetime(self, sessionlifetime) : r"""Maximum amount of time (in seconds) that the application firewall allows a user session to remain active, regardless of user activity. After this time, the user session is terminated. 
Before continuing to use the protected web site, the user must establish a new session by opening a designated start URL.<br/>Default value: 0<br/>Maximum length = 2147483647 """ try : self._sessionlifetime = sessionlifetime except Exception as e: raise e @property def sessioncookiename(self) : r"""Name of the session cookie that the application firewall uses to track user sessions. Must begin with a letter or number, and can consist of from 1 to 31 letters, numbers, and the hyphen (-) and underscore (_) symbols. The following requirement applies only to the NetScaler CLI: If the name includes one or more spaces, enclose the name in double or single quotation marks (for example, "my cookie name" or 'my cookie name').<br/>Minimum length = 1. """ try : return self._sessioncookiename except Exception as e: raise e @sessioncookiename.setter def sessioncookiename(self, sessioncookiename) : r"""Name of the session cookie that the application firewall uses to track user sessions. Must begin with a letter or number, and can consist of from 1 to 31 letters, numbers, and the hyphen (-) and underscore (_) symbols. The following requirement applies only to the NetScaler CLI: If the name includes one or more spaces, enclose the name in double or single quotation marks (for example, "my cookie name" or 'my cookie name').<br/>Minimum length = 1 """ try : self._sessioncookiename = sessioncookiename except Exception as e: raise e @property def clientiploggingheader(self) : r"""Name of an HTTP header that contains the IP address that the client used to connect to the protected web site or service. """ try : return self._clientiploggingheader except Exception as e: raise e @clientiploggingheader.setter def clientiploggingheader(self, clientiploggingheader) : r"""Name of an HTTP header that contains the IP address that the client used to connect to the protected web site or service. 
""" try : self._clientiploggingheader = clientiploggingheader except Exception as e: raise e @property def importsizelimit(self) : r"""Cumulative total maximum number of bytes in web forms imported to a protected web site. If a user attempts to upload files with a total byte count higher than the specified limit, the application firewall blocks the request.<br/>Default value: 134217728<br/>Minimum length = 1<br/>Maximum length = 268435456. """ try : return self._importsizelimit except Exception as e: raise e @importsizelimit.setter def importsizelimit(self, importsizelimit) : r"""Cumulative total maximum number of bytes in web forms imported to a protected web site. If a user attempts to upload files with a total byte count higher than the specified limit, the application firewall blocks the request.<br/>Default value: 134217728<br/>Minimum length = 1<br/>Maximum length = 268435456 """ try : self._importsizelimit = importsizelimit except Exception as e: raise e @property def signatureautoupdate(self) : r"""Flag used to enable/disable auto update signatures.<br/>Default value: OFF<br/>Possible values = ON, OFF. """ try : return self._signatureautoupdate except Exception as e: raise e @signatureautoupdate.setter def signatureautoupdate(self, signatureautoupdate) : r"""Flag used to enable/disable auto update signatures.<br/>Default value: OFF<br/>Possible values = ON, OFF """ try : self._signatureautoupdate = signatureautoupdate except Exception as e: raise e @property def signatureurl(self) : r"""URL to download the mapping file from server.<br/>Default value: https://s3.amazonaws.com/NSAppFwSignatures/SignaturesMapping.xml. 
""" try : return self._signatureurl except Exception as e: raise e @signatureurl.setter def signatureurl(self, signatureurl) : r"""URL to download the mapping file from server.<br/>Default value: https://s3.amazonaws.com/NSAppFwSignatures/SignaturesMapping.xml """ try : self._signatureurl = signatureurl except Exception as e: raise e @property def cookiepostencryptprefix(self) : r"""String that is prepended to all encrypted cookie values.<br/>Minimum length = 1. """ try : return self._cookiepostencryptprefix except Exception as e: raise e @cookiepostencryptprefix.setter def cookiepostencryptprefix(self, cookiepostencryptprefix) : r"""String that is prepended to all encrypted cookie values.<br/>Minimum length = 1 """ try : self._cookiepostencryptprefix = cookiepostencryptprefix except Exception as e: raise e @property def logmalformedreq(self) : r"""Log requests that are so malformed that application firewall parsing doesn't occur.<br/>Default value: ON<br/>Possible values = ON, OFF. """ try : return self._logmalformedreq except Exception as e: raise e @logmalformedreq.setter def logmalformedreq(self, logmalformedreq) : r"""Log requests that are so malformed that application firewall parsing doesn't occur.<br/>Default value: ON<br/>Possible values = ON, OFF """ try : self._logmalformedreq = logmalformedreq except Exception as e: raise e @property def geolocationlogging(self) : r"""Enable Geo-Location Logging in CEF format logs.<br/>Default value: OFF<br/>Possible values = ON, OFF. """ try : return self._geolocationlogging except Exception as e: raise e @geolocationlogging.setter def geolocationlogging(self, geolocationlogging) : r"""Enable Geo-Location Logging in CEF format logs.<br/>Default value: OFF<br/>Possible values = ON, OFF """ try : self._geolocationlogging = geolocationlogging except Exception as e: raise e @property def ceflogging(self) : r"""Enable CEF format logs.<br/>Default value: OFF<br/>Possible values = ON, OFF. 
        """
        try :
            return self._ceflogging
        except Exception as e:
            raise e

    @ceflogging.setter
    def ceflogging(self, ceflogging) :
        r"""Enable CEF format logs.<br/>Default value: OFF<br/>Possible values = ON, OFF
        """
        try :
            self._ceflogging = ceflogging
        except Exception as e:
            raise e

    @property
    def entitydecoding(self) :
        r"""Transform multibyte (double- or half-width) characters to single width characters.<br/>Default value: OFF<br/>Possible values = ON, OFF.
        """
        try :
            return self._entitydecoding
        except Exception as e:
            raise e

    @entitydecoding.setter
    def entitydecoding(self, entitydecoding) :
        r"""Transform multibyte (double- or half-width) characters to single width characters.<br/>Default value: OFF<br/>Possible values = ON, OFF
        """
        try :
            self._entitydecoding = entitydecoding
        except Exception as e:
            raise e

    @property
    def useconfigurablesecretkey(self) :
        r"""Use configurable secret key in AppFw operations.<br/>Default value: OFF<br/>Possible values = ON, OFF.
        """
        try :
            return self._useconfigurablesecretkey
        except Exception as e:
            raise e

    @useconfigurablesecretkey.setter
    def useconfigurablesecretkey(self, useconfigurablesecretkey) :
        r"""Use configurable secret key in AppFw operations.<br/>Default value: OFF<br/>Possible values = ON, OFF
        """
        try :
            self._useconfigurablesecretkey = useconfigurablesecretkey
        except Exception as e:
            raise e

    def _get_nitro_response(self, service, response) :
        r""" converts nitro response into object and returns the object array in case of get request.
        """
        try :
            result = service.payload_formatter.string_to_resource(appfwsettings_response, response, self.__class__.__name__)
            if(result.errorcode != 0) :
                if (result.errorcode == 444) :
                    service.clear_session(self)
                if result.severity :
                    if (result.severity == "ERROR") :
                        raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
                else :
                    raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
            return result.appfwsettings
        except Exception as e :
            raise e

    def _get_object_name(self) :
        r""" Returns the value of object identifier argument
        """
        try :
            return 0
        except Exception as e :
            raise e

    @classmethod
    def update(cls, client, resource) :
        r""" Use this API to update appfwsettings.
        """
        try :
            if type(resource) is not list :
                updateresource = appfwsettings()
                updateresource.defaultprofile = resource.defaultprofile
                updateresource.undefaction = resource.undefaction
                updateresource.sessiontimeout = resource.sessiontimeout
                updateresource.learnratelimit = resource.learnratelimit
                updateresource.sessionlifetime = resource.sessionlifetime
                updateresource.sessioncookiename = resource.sessioncookiename
                updateresource.clientiploggingheader = resource.clientiploggingheader
                updateresource.importsizelimit = resource.importsizelimit
                updateresource.signatureautoupdate = resource.signatureautoupdate
                updateresource.signatureurl = resource.signatureurl
                updateresource.cookiepostencryptprefix = resource.cookiepostencryptprefix
                updateresource.logmalformedreq = resource.logmalformedreq
                updateresource.geolocationlogging = resource.geolocationlogging
                updateresource.ceflogging = resource.ceflogging
                updateresource.entitydecoding = resource.entitydecoding
                updateresource.useconfigurablesecretkey = resource.useconfigurablesecretkey
                return updateresource.update_resource(client)
        except Exception as e :
            raise e

    @classmethod
    def unset(cls, client, resource, args) :
        r""" Use this API to unset the properties of appfwsettings resource.
        Properties that need to be unset are specified in args array.
        """
        try :
            if type(resource) is not list :
                unsetresource = appfwsettings()
                return unsetresource.unset_resource(client, args)
        except Exception as e :
            raise e

    @classmethod
    def get(cls, client, name="", option_="") :
        r""" Use this API to fetch all the appfwsettings resources that are configured on netscaler.
        """
        try :
            if not name :
                obj = appfwsettings()
                response = obj.get_resources(client, option_)
                return response
        except Exception as e :
            raise e

    class Ceflogging:
        ON = "ON"
        OFF = "OFF"

    class Logmalformedreq:
        ON = "ON"
        OFF = "OFF"

    class Signatureautoupdate:
        ON = "ON"
        OFF = "OFF"

    class Useconfigurablesecretkey:
        ON = "ON"
        OFF = "OFF"

    class Geolocationlogging:
        ON = "ON"
        OFF = "OFF"

    class Entitydecoding:
        ON = "ON"
        OFF = "OFF"

class appfwsettings_response(base_response) :
    def __init__(self, length=1) :
        self.appfwsettings = []
        self.errorcode = 0
        self.message = ""
        self.severity = ""
        self.sessionid = ""
        self.appfwsettings = [appfwsettings() for _ in range(length)]
python
1 These are now the chiefe fathers of them, and the genealogie of them that came vp with mee from Babel, in the reigne of King Artahshashte. 2 Of the sonnes of Phinehas, Gershom: of the sonnes of Ithamar, Daniel: of the sonnes of Dauid, Hattush: 3 Of the sonnes of Shechaniah, of the sonnes of Pharosh, Zechariah, and with him the count of the males, an hundreth and fiftie. 4 Of the sonnes of Pahath Moab, Elihoenai, the sonne of Zerahiah, and with him two hundreth males. 5 Of the sonnes of Shechaniah, the sonne of Iahaziel, and with him three hundreth males. 6 And of the sonnes of Adin, Ebed the sonne of Ionathan, and with him fiftie males. 7 And of the sonnes of Elam, Ieshaiah the sonne of Athaliah, and with him seuentie males. 8 And of the sonnes of Shephatiah, Zebadiah the sonne of Michael, and with him fourescore males. 9 Of the sonnes of Ioab, Obadiah the sonne of Iehiel, and with him two hundreth and eighteene males. 10 And of the sonnes of Shelomith the sonne of Iosiphiah, and with him an hundreth and threescore males. 11 And of the sonnes of Bebai, Zechariah the sonne of Bebai, and with him eight and twentie males. 12 And of the sonnes of Azgad, Iohanan the sonne of Hakkatan, and with him an hundreth and ten males. 13 And of the sonnes of Adonikam, that were the last, whose names are these: Eliphelet, Iehiel and Shemaiah, and with them three score males. 14 And of the sonnes of Biguai, Vthai, and Zabbud, and with them seuentie males. 15 And I gathered them to the Riuer that goeth toward Ahaua, and there abode we three dayes: then I viewed the people, and the Priests, and found there none of the sonnes of Leui. 
16 Therefore sent I to Eliezer, to Ariel, to Shemeiah, and to Elnathan, and to Iarib, and to Elnathan, and to Nathan, and to Zechariah, and to Meshullam the chiefe, and to Ioiarib and to Elnathan, men of vnderstanding, 17 And I gaue them commandement, to Iddo the chiefest at the place of Casiphia, and I told them the words that they should speake to Iddo, and to his brethren the Nethinims at the place of Casiphia, that they should cause the ministers of the house of our God to come vnto vs. 18 So by the good hande of our God which was vpon vs, they brought vs a man of vnderstanding of the sonnes of Mahali the sonne of Leui the sonne of Israel, and Sherebiah with his sonnes and his brethren, euen eighteene. 19 Also Hashabiah, and with him Ieshaiah of the sonnes of Merari, with his brethren, and their sonnes twentie. 20 And of the Nethinims, whom Dauid had set, and the Princes for the seruice of the Leuites, two hundreth and twentie of the Nethinims, which all were named by name. 21 And there at the Riuer, by Ahaua, I proclaimed a fast, that we might humble our selues before our God, and seeke of him a right way for vs, and for our children, and for all our substance. 22 For I was ashamed to require of the King an armie and horsemen, to helpe vs against the enemie in the way, because we had spoken to the King, saying, The hande of our God is vpon all them that seeke him in goodnesse, but his power and his wrath is against all them that forsake him. 23 So we fasted, and besought our God for this: and he was intreated of vs. 24 Then I separated twelue of the chiefe of the Priests, Sherebiah, and Hashabiah, and ten of their brethren with them, 25 And weighed them the siluer and the gold, and the vessels, euen the offring of ye house of our God, which the King and his counselers, and his Princes, and all Israel that were present had offred.
26 And I weighed vnto their hand sixe hundreth and fiftie talents of siluer, and in siluer vessel, an hundreth talents, and in golde, an hundreth talents: 27 And twentie basins of golde, of a thousand drammes, and two vessels of shining brasse very good, and precious as golde. 28 And I said vnto them, Ye are consecrate vnto the Lord, and the vessels are consecrate, and the gold and the siluer are freely offred vnto the Lord God of your fathers. 29 Watch ye, and keepe them vntill ye weigh them before the chiefe Priestes and the Leuites, and the chiefe fathers of Israel in Ierusalem in the chambers of the house of the Lord. 30 So the Priests and the Leuites receiued the weight of the siluer and of the golde, and of the vessels to bring them to Ierusalem, vnto the house of our God. 31 Then we departed from the Riuer of Ahauah on the twelft day of the first moneth, to go vnto Ierusalem, and the hand of our God was vpon vs, and deliuered vs from the hand of the enemie, and of such as layde waite by the way. 32 And we came to Ierusalem, and abode there three dayes. 33 And on ye fourth day was the siluer weighed, and the golde and the vessell in the house of our God by the hand of Meremoth the sonne of Vriah the Priest, and with him was Eleazar the sonne of Phinehas, and with them was Iozabad the sonne of Ieshua, and Noadiah the sonne of Binnui the Leuites, 34 By number and by weight of euery one, and all the weight was written at the same time. 35 Also the children of the captiuitie, which were come out of captiuitie, offred burnt offrings vnto the God of Israel, twelue bullockes for all Israel, ninetie and sixe rammes, seuentie and seuen lambes, and twelue hee goates for sinne: all was a burnt offring of the Lord. 36 And they deliuered the Kings commission vnto the Kings officers, and to the captaines beyond the Riuer: and they promoted the people, and the house of God.
english
{"month": "10", "num": 1432, "link": "", "year": "2014", "news": "", "safe_title": "The Sake of Argument", "transcript": "{{Title text: 'It's not actually ... it's a DEVICE for EXPLORING a PLAUSIBLE REALITY that's not the one we're in, to gain a broader understanding about it.' 'oh, like a boat!' '...' 'Just for the sake of argument, we should get a boat! You can invite the Devil, too, if you want.'}}\n\n[[Woman talking to a man]]\nWoman: Just for the sake of argument, let's say that--\nMan: -- wait, for the sake of what?\n\n[[Zoom in on man]]\nWoman ((off scene)): Argument.\nMan: Ok, cool, that's totally a good reason to say something that's wrong. Gotta have arguments.\n\n[[Zoom out to original scene]]\nWoman: I'm just playing devil's advocate.\nMan: Ok. So you saw an argument where one side was the devil, and you were like \"man, that guy could use an advocate.\"\n\n[[Zoom out and in silhouette]]\nWoman: It's... why are you being so difficult?\nMan: For the sake of argument.\nWoman: Argh!\nMan: Yay, it's working!", "alt": "'It's not actually ... it's a DEVICE for EXPLORING a PLAUSIBLE REALITY that's not the one we're in, to gain a broader understanding about it.' 'oh, like a boat!' '...' 'Just for the sake of argument, we should get a boat! You can invite the Devil, too, if you want.'", "img": "https://imgs.xkcd.com/comics/the_sake_of_argument.png", "title": "The Sake of Argument", "day": "10", "mirror_img": "https://raw.githubusercontent.com/aghontpi/mirror-xkcd-api/main/api/1432/the_sake_of_argument.png"}
json
What do you make of this recent consolidation that we have seen in the Indian market? Despite global news flow, there has really been no major reaction? The consolidation is there, we have also seen profit-booking of more than 1,000 points actually, and that has been the nature of the market. We have seen markets making new highs and then again falling into that zone of 2,000 points or 1,000 points, and that is where it creates an opportunity in the Indian markets. The dollar is topping out and that is very clear. If the dollar is topping out, we will see the FII flows coming into the emerging markets. It is looking at the value pockets right now. For example, look at the Chinese or Hang Seng markets; they moved up by more than 50% because there was a lot of value over there and they were quite cheap. Indian markets relative to emerging markets were expensively valued, but that story is still not there because we have seen a correction of almost 25% in valuations over the emerging markets when I look at the Indian markets right now. We were at 85 times premium, now we are close to 63 times premium because the Chinese market has moved up. This is where the attractiveness of the Indian market starts coming back. It will remain a little elevated because the kind of growth we are going to show now is going to be much higher. The inflation scenario of India is consistently lower than the global average inflation, and the money flow into the Indian markets is at attractive valuations. It will be much faster and higher. If you look at the FDI flows, two things are very important. Generally, we miss these out in all the macro headwinds. One is the political stability in the Indian markets; though we are heading for elections, that is something which has attracted a lot of FDI flows into the Indian markets in the last four-five years, and that has continued.
Another thing is the law and order situation; when there is political stability, there is better law and order. You have seen what is happening in Uttar Pradesh. They have been able to invite some of the global investors into that particular state, and this is something that we need to look at. State-wise, if FDI flows are increasing in more sectors, then our forex reserves will be much better and the stability of the currency will come back. All this is going to benefit India when you compare with the developed or the emerging markets right now. How would you look at which sectors to buy now? On one hand, we have a global slowdown and on the other hand, we have domestic consumption probably not really picking up. How would you play this? Would you look at global cyclicals, banks and consumer stocks? It is a very interesting question because the money is looking into the value pockets, and that is very clear. The very high PE stocks have gone into a consolidation or time correction; you are not making returns over there, but you are definitely making returns in those stocks which are direct beneficiaries of government expenditure and of the private capex which is now coming up. It is clearly seen in metal, cement and chemicals, which are direct beneficiaries of the revival in housing, which is not yet firing on all cylinders. When they are going with such a high capex, the question is whether we can make money here, because the supply side is growing by almost 13%-14%, but the demand, because of the somewhat weaker real estate market, is not at full pace. The demand is around 9% because infra is doing well; 50% of cement goes into infra and 50% goes into real estate, so the overall demand of around 9% leaves a mismatch between demand and supply. If you are not making money in the OEMs and they are more in a consolidating phase, how can you make money?
You need to go one step back, look at the B2B companies which are providing materials to all of these suppliers. All of these cement manufacturers and that is where you can make money. For example, AIA Engineering is one stock we have been tracking.
english
# ***** BEGIN GPL LICENSE BLOCK *****
#
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENCE BLOCK *****

bl_info = {
    "name": "Surface Heat Diffuse Skinning",
    "author": "mesh online",
    "version": (3, 3, 1),
    "blender": (2, 80, 0),
    "location": "View3D > UI > Mesh Online",
    "description": "Surface Heat Diffuse Skinning",
    "warning": "",
    "wiki_url": "http://www.mesh-online.net/vhd.html",
    "category": "Object"
}

import bpy
import sys
import os
import time
import platform
from subprocess import PIPE, Popen
from threading import Thread
from bpy.props import *
from queue import Queue, Empty


class SFC_OT_ModalTimerOperator(bpy.types.Operator):
    """Operator which runs itself from a timer"""
    bl_idname = "wm.surface_heat_diffuse"
    bl_label = "Surface Heat Diffuse Skinning"
    bl_options = {'REGISTER', 'UNDO'}

    _timer = None
    _pid = None
    _queue = None
    _objs = []
    _permulation = []
    _selected_indices = []
    _selected_group_index_weights = []
    _start_time = None

    def write_bone_data(self, obj, filepath):
        f = open(filepath, 'w', encoding='utf-8')
        f.write("# surface heat diffuse bone export.\n")
        amt = obj.data
        bpy.ops.object.mode_set(mode='EDIT')
        for bone in amt.edit_bones:
            if bone.use_deform:
                world_bone_head = obj.matrix_world @ bone.head
                world_bone_tail = obj.matrix_world @ bone.tail
                f.write("b,{},{:.6f},{:.6f},{:.6f},{:.6f},{:.6f},{:.6f}\n".format(
                    bone.name,
                    world_bone_head[0], world_bone_head[1], world_bone_head[2],
                    world_bone_tail[0], world_bone_tail[1], world_bone_tail[2]))
        bpy.ops.object.mode_set(mode='OBJECT')
        f.close()

    def write_mesh_data(self, objs, filepath):
        f = open(filepath, 'w', encoding='utf-8')
        f.write("# surface heat diffuse mesh export.\n")
        vertex_offset = 0
        for obj in objs:
            for v in obj.data.vertices:
                world_v_co = obj.matrix_world @ v.co
                f.write("v,{:.6f},{:.6f},{:.6f}\n".format(world_v_co[0], world_v_co[1], world_v_co[2]))
            for poly in obj.data.polygons:
                f.write("f")
                for loop_ind in poly.loop_indices:
                    vert_ind = obj.data.loops[loop_ind].vertex_index
                    f.write(",{}".format(vertex_offset + vert_ind))
                f.write("\n")
            vertex_offset += len(obj.data.vertices)
        f.close()

    def read_weight_data(self, objs, filepath):
        # make permulation for all vertices
        vertex_offset = 0
        for obj in objs:
            for index in range(len(obj.data.vertices)):
                self._permulation.append((vertex_offset + index, index, obj))
            vertex_offset += len(obj.data.vertices)

        if bpy.context.scene.surface_protect:
            for index in range(len(objs)):
                obj = objs[index]
                # get selected vertex indices
                self._selected_indices.append([i.index for i in obj.data.vertices if i.select])
                self._selected_group_index_weights.append([])
                # push protected vertices weight
                for vert_ind in self._selected_indices[index]:
                    for g in obj.data.vertices[vert_ind].groups:
                        self._selected_group_index_weights[index].append(
                            (obj.vertex_groups[g.group].name, vert_ind, g.weight))

        f = open(filepath, 'r', encoding='utf-8')
        bones = []
        for line in f:
            if len(line) == 0:
                continue
            tokens = line.strip("\r\n").split(",")
            if tokens[0] == "b":
                group_name = tokens[1]
                bones.append(group_name)
                for obj in objs:
                    # check for existing group with the same name
                    if None != obj.vertex_groups.get(group_name):
                        group = obj.vertex_groups[group_name]
                        obj.vertex_groups.remove(group)
                    obj.vertex_groups.new(name=group_name)
            if tokens[0] == "w":
                group_name = bones[int(tokens[2])]
                index = int(tokens[1])
                vert_ind = self._permulation[index][1]
                weight = float(tokens[3])
                obj = self._permulation[index][2]
                # protect vertices weight
                if bpy.context.scene.surface_protect and vert_ind in self._selected_indices[objs.index(obj)]:
                    continue
                obj.vertex_groups[group_name].add([vert_ind], weight, 'REPLACE')
        f.close()

        if bpy.context.scene.surface_protect:
            for index in range(len(objs)):
                obj = objs[index]
                # pop protected vertices weight
                for (group_name, vert_ind, weight) in self._selected_group_index_weights[index]:
                    obj.vertex_groups[group_name].add([vert_ind], weight, 'REPLACE')

    def modal(self, context, event):
        if event.type == 'ESC':
            self._pid.terminate()
            return self.cancel(context)

        if event.type == 'TIMER':
            # background task is still running
            if None == self._pid.poll():
                # read line without blocking
                try:
                    rawline = self._queue.get_nowait()
                except Empty:
                    pass
                else:
                    line = rawline.decode().strip("\r\n")
                    self.report({'INFO'}, line)
            else:
                # background task finished running
                self.read_weight_data(self._objs, os.path.join(os.path.dirname(__file__), "data", "untitled-weight.txt"))
                running_time = time.time() - self._start_time
                self.report({'INFO'}, "".join(("Complete, ", "running time: ",
                    str(int(running_time / 60)), " minutes ", str(int(running_time % 60)), " seconds")))
                # bind meshes to the armature
                bpy.ops.object.parent_set(type='ARMATURE')
                return self.cancel(context)

        return {'RUNNING_MODAL'}

    def execute(self, context):
        self._objs = []
        self._permulation = []
        self._selected_indices = []
        self._selected_group_index_weights = []

        arm = None
        objs = []
        # get armature and mesh
        for ob in bpy.context.selected_objects:
            if 'ARMATURE' == ob.type:
                arm = ob
            if 'MESH' == ob.type:
                objs.append(ob)
        # sort meshes by name
        objs.sort(key=lambda obj: obj.name)
        # save the reference for later use
        self._objs = objs

        for obj in objs:
            # focus on the mesh
            bpy.context.view_layer.objects.active = obj
            # synchronize data
            bpy.ops.object.mode_set(mode='OBJECT')

        # write mesh data
        self.write_mesh_data(objs, os.path.join(os.path.dirname(__file__), "data", "untitled-mesh.txt"))

        # we must focus on the armature before we can write bone data
        bpy.context.view_layer.objects.active = arm
        # synchronize data
        bpy.ops.object.mode_set(mode='OBJECT')
        # write bone data
        self.write_bone_data(arm, os.path.join(os.path.dirname(__file__), "data", "untitled-bone.txt"))

        # do voxel skinning in background
        ON_POSIX = 'posix' in sys.builtin_module_names

        # chmod
        if ON_POSIX:
            os.chmod(os.path.join(os.path.dirname(__file__), "bin", platform.system(), "shd"), 0o755)

        def enqueue_output(out, queue):
            for line in iter(out.readline, b''):
                queue.put(line)
            out.close()

        executable_path = None
        if platform.system() == 'Windows':
            if platform.machine().endswith('64'):
                executable_path = os.path.join(os.path.dirname(__file__), "bin", platform.system(), "x64", "shd")
            else:
                executable_path = os.path.join(os.path.dirname(__file__), "bin", platform.system(), "x86", "shd")
        else:
            executable_path = os.path.join(os.path.dirname(__file__), "bin", platform.system(), "shd")

        self._pid = Popen([executable_path,
                           "untitled-mesh.txt",
                           "untitled-bone.txt",
                           "untitled-weight.txt",
                           str(context.scene.surface_resolution),
                           str(context.scene.surface_loops),
                           str(context.scene.surface_samples),
                           str(context.scene.surface_influence),
                           str(context.scene.surface_falloff),
                           context.scene.surface_sharpness,
                           "y" if context.scene.detect_surface_solidify else "n"],
                          cwd=os.path.join(os.path.dirname(__file__), "data"),
                          stdout=PIPE, bufsize=1, close_fds=ON_POSIX)

        self._queue = Queue()
        t = Thread(target=enqueue_output, args=(self._pid.stdout, self._queue))
        t.daemon = True
        t.start()

        self._start_time = time.time()

        # start timer to poll data
        self._timer = context.window_manager.event_timer_add(0.1, window=context.window)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

    def cancel(self, context):
        # remove timer
        context.window_manager.event_timer_remove(self._timer)
        self._objs = []
        self._permulation = []
        self._selected_indices = []
        self._selected_group_index_weights = []
        return {'CANCELLED'}


def init_properties():
    bpy.types.Scene.surface_resolution = IntProperty(
        name="Voxel Resolution",
        description="Maximum voxel grid size",
        default=128,
        min=32,
        max=1024)

    bpy.types.Scene.surface_loops = IntProperty(
        name="Diffuse Loops",
        description="Heat diffuse pass = Voxel Resolution * Diffuse Loops",
        default=5,
        min=1,
        max=9)

    bpy.types.Scene.surface_samples = IntProperty(
        name="Sample Rays",
        description="Ray samples count",
        default=64,
        min=32,
        max=128)

    bpy.types.Scene.surface_influence = IntProperty(
        name="Influence Bones",
        description="Max influence bones",
        default=4,
        min=1,
        max=8)

    bpy.types.Scene.surface_falloff = FloatProperty(
        name="Diffuse Falloff",
        description="Heat diffuse falloff",
        default=0.2,
        min=0.01,
        max=0.99)

    bpy.types.Scene.surface_protect = BoolProperty(
        name="Protect Selected Vertex Weight",
        description="Protect selected vertex weight",
        default=False)

    bpy.types.Scene.surface_sharpness = EnumProperty(
        name="Edges",
        description="Edges",
        items=[
            ('1', 'Soft', 'Soft Curvature'),
            ('2', 'Normal', 'Normal Curvature'),
            ('3', 'Sharp', 'Sharp Curvature'),
            ('4', 'Sharpest', 'Sharpest Curvature')],
        default='3')

    bpy.types.Scene.detect_surface_solidify = BoolProperty(
        name="Detect Solidify",
        description="Detect solidified clothes, if you enable this option, make sure that all bones are in the character's volume, otherwise, the result may be wrong",
        default=False)


def clear_properties():
    props = ["surface_resolution",
             "surface_samples",
             "surface_falloff",
             "surface_loops",
             "surface_influence",
             "surface_protect"]
    for p in props:
        if p in bpy.types.Scene.bl_rna.properties:
            exec("del bpy.types.Scene." + p)


class SFC_PT_SurfaceHeatDiffuseSkinningPanel(bpy.types.Panel):
    """Creates a Panel in the Object properties window"""
    bl_label = "Surface Heat Diffuse Skinning"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = 'Mesh Online'

    @classmethod
    def poll(self, context):
        arm_count = 0
        obj_count = 0
        for ob in bpy.context.selected_objects:
            if 'ARMATURE' == ob.type:
                arm_count += 1
            if 'MESH' == ob.type:
                obj_count += 1
        return (context.mode == 'OBJECT' and arm_count == 1 and obj_count >= 1)

    def draw(self, context):
        layout = self.layout
        layout.prop(context.scene, 'surface_resolution', icon='BLENDER', toggle=True)
        layout.prop(context.scene, 'surface_loops', icon='BLENDER', toggle=True)
        layout.prop(context.scene, 'surface_samples', icon='BLENDER', toggle=True)
        layout.prop(context.scene, 'surface_influence', icon='BLENDER', toggle=True)
        layout.prop(context.scene, 'surface_falloff', icon='BLENDER', toggle=True)
        layout.prop(context.scene, 'surface_sharpness')
        layout.prop(context.scene, 'surface_protect')
        layout.prop(context.scene, 'detect_surface_solidify')
        row = layout.row()
        row.operator("wm.surface_heat_diffuse")


def register():
    bpy.utils.register_class(SFC_PT_SurfaceHeatDiffuseSkinningPanel)
    bpy.utils.register_class(SFC_OT_ModalTimerOperator)
    init_properties()


def unregister():
    bpy.utils.unregister_class(SFC_PT_SurfaceHeatDiffuseSkinningPanel)
    bpy.utils.unregister_class(SFC_OT_ModalTimerOperator)
    clear_properties()


if __name__ == "__main__":
    register()
python
// poo/Linkedlist/EmpleadosVector/Jefe.java
package EmpleadosVector;

public class Jefe extends Empleado {
}
java
package edu.cmu.cs.cs214.hw4.core;

import javafx.util.Pair;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.List;
import java.util.Collection;

/**
 * Represents the scrabble game board with 15x15 squares
 * Letter and special tiles can be placed and removed from the board
 * Board also gives helper functions to verify moves
 */
public class Board {
    //Dimension of the board
    private final int boardDimension = 15;
    //Added border for iterators protection on all 4 sides
    private final int border = 1;
    //Represents center of the board
    private final int centerBoard = 8;
    //17x17 (only 15x15 are used for the game)
    private Square[][] squares = new Square[border + boardDimension + border]
                                           [border + boardDimension + border];
    private boolean empty = true;
    //Horizontal(across) and vertical(down) directions
    private final boolean horizontal = true;
    private final boolean vertical = false;
    //Count of number of tiles on board
    private int numTiles = 0;
    private Map<Location, LetterTile> letterTileMap = new HashMap<>();

    /**
     * Constructor
     * Creates a new board constituting squares as in a scrabble game
     */
    public Board() {
        generateBoardWithSquares();
    }

    /**
     * Copy constructor
     * @param board instance of board
     */
    Board(Board board) {
        generateBoardWithSquares();
        for (int col = 1; col <= boardDimension; col++) {
            for (int row = 1; row <= boardDimension; row++) {
                Location loc = new Location(row, col);
                LetterTile letterTile = board.squareAt(loc).getLetterTile();
                if (letterTile != null)
                    placeLetterTile(letterTile, loc);
                for (Map.Entry<Player, SpecialTile> entry : board.squareAt(loc).getSpecialTiles().entrySet()) {
                    squareAt(loc).removeAllSpecialTiles();
                    placeSpecialTile(entry.getValue(), loc, entry.getKey());
                }
            }
        }
    }

    /**
     * isEmpty check
     * @return true if there are no letter tiles on board, false otherwise
     */
    public boolean isEmpty() {
        return numTiles == 0;
    }

    /**
     * Places the letter tile on square at a particular location
     * @param letterTile letter tile to be placed
     * @param location location of the square
     */
    void placeLetterTile(LetterTile letterTile, Location location) {
        Objects.requireNonNull(letterTile, "Letter Tile can't be null");
        Objects.requireNonNull(location, "Location can't be null");
        if (isValidLocation(location)) {
            squareAtLoc(location).placeLetterTile(letterTile);
            letterTileMap.put(location, letterTile);
        }
        else
            throw new IllegalArgumentException("Location not valid");
        numTiles++;
    }

    /**
     * Places all the letter tiles at the specified locations
     * @param letterTileMap map of locations as keys and letterTile as values
     */
    void placeAllLetterTiles(Map<Location, LetterTile> letterTileMap) {
        for (Map.Entry<Location, LetterTile> entry : letterTileMap.entrySet()) {
            placeLetterTile(entry.getValue(), entry.getKey());
        }
    }

    /**
     * Places the special tile on top of a square
     * @param specialTile special tile
     * @param location location
     * @param player player
     */
    void placeSpecialTile(SpecialTile specialTile, Location location, Player player) {
        Objects.requireNonNull(specialTile, "Letter Tile can't be null");
        Objects.requireNonNull(location, "Location can't be null");
        if (isValidLocation(location))
            squareAtLoc(location).placeSpecialTile(specialTile, player);
        else
            throw new IllegalArgumentException("Location not valid");
    }

    /**
     * Removes all the special tiles from the given location
     * @param location location
     * @return owner and the removed special tile
     */
    Map<Player, SpecialTile> removeAllSpecialTiles(Location location) {
        if (isLocationInBoard(location))
            return squareAt(location).removeAllSpecialTiles();
        return null;
    }

    /**
     * Gets all special tile from the specified location
     * @param location location
     * @return map of player and special tiles
     */
    public Map<Player, SpecialTile> getSpecialTile(Location location) {
        Objects.requireNonNull(location, "Location can't be null");
        if (isValidLocation(location))
            return squareAt(location).getSpecialTiles();
        else
            throw new IllegalArgumentException("Location not valid");
    }

    /**
     * Removes the letter tile from the specified location
     * @param location location
     * @return letter tile
     */
    LetterTile removeLetterTile(Location location) {
        Objects.requireNonNull(location, "Location can't be null");
        isLocationInBoard(location);
        numTiles--;
        LetterTile removedTile = squareAtLoc(location).removeLetterTile();
        letterTileMap.remove(location);
        return removedTile;
    }

    /**
     * Removes all the letter tile from the specified location
     * @param locations locations
     * @return removed letter tiles
     */
    List<LetterTile> removeAllLetterTiles(Collection<Location> locations) {
        List<LetterTile> letterTiles = new ArrayList<>();
        for (Location location : locations)
            letterTiles.add(removeLetterTile(location));
        return letterTiles;
    }

    /**
     * Returns letter tile at the specified location
     * @param location location
     * @return letter tile
     */
    public LetterTile letterTileAt(Location location) {
        if (isLocationInBoard(location))
            return squareAtLoc(location).getLetterTile();
        else
            return null;
    }

    /**
     * Checks if the location is valid with 2 criteria
     * 1. Location is in board
     * 2. There is no tile already on the location
     * @param location location
     * @return true if valid else false
     */
    boolean isValidLocation(Location location) {
        return isLocationInBoard(location) && letterTileAt(location) == null;
    }

    /**
     * Checks if all the locations valid, should satisfy:
     * 1. Location is in board
     * 2. There is no tile already on the location
     * @param locations List of locations
     * @return true if all are valid, false otherwise
     */
    boolean areValidLocations(List<Location> locations) {
        for (Location location : locations) {
            if (!isValidLocation(location))
                return false;
        }
        return true;
    }

    /**
     * Checks if the location is inside boundary of the board (in 15x15)
     * @param location location
     * @return true if inside else false
     */
    boolean isLocationInBoard(Location location) {
        if (location.row() < 1 || location.row() >= boardDimension + border
                || location.col() < 1 || location.col() >= boardDimension + border)
            return false;
        return true;
    }

    /**
     * Helper method to generate an empty board with squares
     */
    private void generateBoardWithSquares() {
        fillLeftTopQuadrant();
        replicateLeftTopToOtherQuadrants();
        fillRegularSquares();
    }

    /**
     * Fill the left top quadrant with squares
     */
    private void fillLeftTopQuadrant() {
        Location[] dwLocations = {
                new Location(2, 2), new Location(3, 3), new Location(4, 4),
                new Location(5, 5), new Location(8, 8)};
        addSquaresAtLocations(dwLocations, SquareType.DoubleWordSquare);

        Location[] twLocations = {
                new Location(1, 1), new Location(8, 1), new Location(1, 8)};
        addSquaresAtLocations(twLocations, SquareType.TripleWordSquare);

        Location[] dlLocations = {
                new Location(4, 1), new Location(1, 4), new Location(3, 7),
                new Location(7, 3), new Location(8, 4), new Location(4, 8),
                new Location(7, 7)};
        addSquaresAtLocations(dlLocations, SquareType.DoubleLetterSquare);

        Location[] tlLocations = {
                new Location(6, 2), new Location(2, 6), new Location(6, 6)};
        addSquaresAtLocations(tlLocations, SquareType.TripleLetterSquare);
    }

    /**
     * Helper method to add the squares at specified locations
     * @param locations array of location
     * @param squareType squaretype (same for all locations)
     */
    private void addSquaresAtLocations(Location[] locations, SquareType squareType) {
        for (Location loc : locations)
            squares[loc.row()][loc.col()] = new Square(squareType, loc);
    }

    /**
     * Helper method to replicate the left top quadrant to the other 3 quadrants
     */
    private void replicateLeftTopToOtherQuadrants() {
        for (Square[] row : squares) {
            for (Square square : row) {
                if (square != null) {
                    Location loc = new Location(square.location());
                    squares[loc.row()][boardDimension + border - loc.col()] =
                            new Square(square.squareType(),
                                    new Location(loc.row(), boardDimension + border - loc.col()));
                    squares[boardDimension + border - loc.row()][loc.col()] =
                            new Square(square.squareType(),
                                    new Location(boardDimension + border - loc.row(), loc.col()));
                    squares[boardDimension + border - loc.row()][boardDimension + border - loc.col()] =
                            new Square(square.squareType(),
                                    new Location(boardDimension + border - loc.row(),
                                            boardDimension + border - loc.col()));
                }
            }
        }
    }

    /**
     * Fill rest of the board with regular squares
     */
    private void fillRegularSquares() {
        for (int row = 1; row <= boardDimension; row++) {
            for (int col = 1; col <= boardDimension; col++) {
                if (squares[row][col] == null)
                    squares[row][col] = new Square(SquareType.RegularSquare, new Location(row, col));
            }
        }
    }

    /**
     * Checks if the list of locations are collinear in row (horizontally)
     * @param locations list of locations
     * @return true if collinear, false otherwise
     */
    boolean areCollinearInRow(List<Location> locations) {
        int row = locations.get(0).row();
        for (Location location : locations) {
            if (location.row() != row)
                return false;
        }
        return true;
    }

    /**
     * Checks if the list of locations are collinear in columns (vertically)
     * @param locations list of locations
     * @return true if collinear, false otherwise
     */
    boolean areCollinearInColumn(List<Location> locations) {
        int col = locations.get(0).col();
        for (Location location : locations) {
            if (location.col() != col)
                return false;
        }
        return true;
    }

    /**
     * Get first and last tiles in the sequence
     * @param locations locations of all the tiles
     * @param isHorizontal true if horizontal, false otherwise
     * @return Pair of first and last locations
     */
    Pair<Location, Location> getFirstAndLastElements(List<Location> locations, boolean isHorizontal) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        int same;
        if (isHorizontal)
            same = locations.get(0).row();
        else
            same = locations.get(0).col();

        Location location = locations.get(0);
        while (squareAt(location) != null
                && (squareAt(location).getLetterTile() != null || locations.contains(location))) {
            if (isHorizontal) {
                if (location.col() < min)
                    min = location.col();
                location = location.left();
            } else {
                if (location.row() < min)
                    min = location.row();
                location = location.up();
            }
        }

        location = locations.get(0);
        while (squareAt(location) != null
                && (squareAt(location).getLetterTile() != null || locations.contains(location))) {
            if (isHorizontal) {
                if (location.col() > max)
                    max = location.col();
                location = location.right();
            } else {
                if (location.row() > max)
                    max = location.row();
                location = location.down();
            }
        }

        if (isHorizontal)
            return new Pair<>(new Location(same, min), new Location(same, max));
        else
            return new Pair<>(new Location(min, same), new Location(max, same));
    }

    /**
     * Checks if locations are collinear and continuous
     * @param locations list of locations
     * @return true if conditions are satisfied, false otherwise
     */
    boolean areCollinearAndContinuous(List<Location> locations) {
        if (areCollinearInRow(locations))
            return noEmptySquareInRange(locations, horizontal);
        else if (areCollinearInColumn(locations))
            return noEmptySquareInRange(locations, vertical);
        return false;
    }

    /**
     * Returns maximum and minimum elements from the list
     * min is nearest to left top and max is farthest to left top
     * @param locations list of locations
     * @param isHorizontal to search horizontally or vertically
     * @return pair of (min, max) locations
     */
    private Pair<Location, Location> getMaxAndMinElements(List<Location> locations, boolean isHorizontal) {
        Location min = locations.get(0);
        Location max = locations.get(0);
        for (Location location : locations) {
            if (isHorizontal) {
                if (location.col() < min.col())
                    min = location;
                if (location.col() > max.col())
                    max = location;
            } else {
                if (location.row() < min.row())
                    min = location;
                if (location.row() > max.row())
                    max = location;
            }
        }
        return new Pair<>(min, max);
    }

    /**
     * Checks if there is no empty square in the word
     * @param locations list of location
     * @param isHorizontal to check horizontally or vertically
     * @return true if conditions are satisfied, false otherwise
     */
    boolean noEmptySquareInRange(List<Location> locations, boolean isHorizontal) {
        Pair<Location, Location> p = getMaxAndMinElements(locations, isHorizontal);
        Location min = p.getKey();
        Location max = p.getValue();
        if (isHorizontal) {
            for (int i = min.col(); i <= max.col(); i++) {
                if (!locations.contains(new Location(min.row(), i))) {
                    if ((squares[min.row()][i]).getLetterTile() == null)
                        return false;
                }
            }
        } else {
            for (int i = min.row(); i <= max.row(); i++) {
                if (!locations.contains(new Location(i, min.col()))) {
                    if ((squares[i][min.col()]).getLetterTile() == null)
                        return false;
                }
            }
        }
        return true;
    }

    /**
     * Checks if any location is the center of the board
     * @param locations list of locations
     * @return true if valid, false otherwise
     */
    boolean anyTileAtCenter(List<Location> locations) {
        return locations.contains(new Location(centerBoard, centerBoard));
    }

    /**
     * Checks if any tile has adjacent tiles
     * @param locations list of locations
     * @return true if there is, false otherwise
     */
    boolean anyExistingTileAdjacent(List<Location> locations) {
        for (Location location : locations) {
            for (Location neighbour : location.neighbours()) {
                if (letterTileAt(neighbour) != null)
                    return true;
            }
        }
        return false;
    }

    /**
     * Gets the square at location
     * shallow copy
     * @param location location
     * @return square
     */
    private Square squareAtLoc(Location location) {
        return squares[location.row()][location.col()];
    }

    /**
     * Gets the square at location
     * deep copy of square
     * @param location location
     * @return square
     */
    public Square squareAt(Location location) {
        Square square = null;
        if (isLocationInBoard(location))
            square = new Square(squares[location.row()][location.col()]);
        return square;
    }

    /**
     * Return the toString
representation of the board * The entire board can be viewed in command line with this * @return String */ @Override public String toString() { StringBuilder result = new StringBuilder(); for(int i = 1; i < squares.length -1; i++) { for (int j = 1; j < squares.length -1; j++) { result.append(squares[i][j]); result.append(" "); } result.append("\n"); } return result.toString(); } /** * get all letter tiles place on the board * @return map of location and letter tiles */ Map<Location, LetterTile> getLetterTileMap() { return new HashMap<Location, LetterTile>(letterTileMap); } /** * Returns dimension of the board * @return dimension */ public int dimension() { return boardDimension; } }
java
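The `getFirstAndLastElements` method above finds a word's extent by walking outward from a placed tile until it runs off the occupied squares. That outward scan can be sketched in isolation; `RunScan` and its single-row grid are hypothetical stand-ins, not the original `Board`/`Location` classes:

```java
import java.util.Arrays;

// Simplified sketch of the outward scan: in a single row of tiles
// ('.' marks an empty square), find the first and last column of the
// contiguous occupied run containing a newly placed tile.
public class RunScan {
    // Returns {minCol, maxCol} of the contiguous non-empty run around startCol.
    public static int[] horizontalRun(char[] row, int startCol) {
        int min = startCol;
        int max = startCol;
        // Walk left while the neighbouring square is occupied.
        while (min - 1 >= 0 && row[min - 1] != '.') {
            min--;
        }
        // Walk right while the neighbouring square is occupied.
        while (max + 1 < row.length && row[max + 1] != '.') {
            max++;
        }
        return new int[] { min, max };
    }

    public static void main(String[] args) {
        char[] row = "..CAT..".toCharArray();
        // Starting from 'A' at index 3, the run spans columns 2..4.
        System.out.println(Arrays.toString(horizontalRun(row, 3))); // [2, 4]
    }
}
```

The vertical case in the original is the same scan with `up()`/`down()` in place of left/right, and the scan doubles as the continuity check: any gap stops the walk.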
<filename>Demos/Compiled Framedemo/sample1.htm<gh_stars>1-10
<head>
  <title>Sample HTML file</title>
</head>
<frameset rows="125,4,*">
  <frame src="sample2.htm" name="Top">
  <frame src="space.htm" scrolling="no">
  <frame src="sample3.htm" name="Bot">
  <noframes>
    <center><h1>Frame Alert</h1>
    This document is designed to be viewed with a Frame viewer.
    </center>
  </noframes>
</frameset>
html
//! Rectangle packer is used to pack a set of smaller rectangles into one big
//! rectangle; it is used in the texture atlas packer.

use crate::{
    math::Rect,
    pool::{Handle, Pool},
};
use nalgebra::Scalar;
use std::ops::{Add, AddAssign, Mul, Sub};

struct RectPackNode<T: Scalar> {
    filled: bool,
    split: bool,
    bounds: Rect<T>,
    left: Handle<RectPackNode<T>>,
    right: Handle<RectPackNode<T>>,
}

impl<T: Scalar> RectPackNode<T> {
    fn new(bounds: Rect<T>) -> Self {
        Self {
            bounds,
            filled: false,
            split: false,
            left: Handle::NONE,
            right: Handle::NONE,
        }
    }
}

/// See module docs.
pub struct RectPacker<T: Scalar> {
    nodes: Pool<RectPackNode<T>>,
    root: Handle<RectPackNode<T>>,
    width: T,
    height: T,
    unvisited: Vec<Handle<RectPackNode<T>>>,
}

impl<T> RectPacker<T>
where
    T: Add<Output = T>
        + Sub<Output = T>
        + Scalar
        + Mul<Output = T>
        + PartialOrd
        + Default
        + Copy
        + AddAssign,
{
    /// Creates a new instance of the rectangle packer with the given bounds.
    ///
    /// # How to choose bounds
    ///
    /// If you have a set of rectangles and need a reasonable side length for a square
    /// atlas, compute the total area of your rectangles as the sum of width * height
    /// and then take the square root of that area. The result is the side length of a
    /// square which can be used as the width and height parameters.
    pub fn new(w: T, h: T) -> Self {
        let mut nodes = Pool::new();
        let root = nodes.spawn(RectPackNode::new(Rect::new(
            Default::default(),
            Default::default(),
            w,
            h,
        )));
        Self {
            nodes,
            root,
            width: w,
            height: h,
            unvisited: Default::default(),
        }
    }

    /// Clears the packer and prepares it for another run. This is much cheaper than
    /// creating a new packer, because it reuses previously allocated memory.
    pub fn clear(&mut self) {
        self.nodes.clear();
        self.unvisited.clear();
        self.root = self.nodes.spawn(RectPackNode::new(Rect::new(
            Default::default(),
            Default::default(),
            self.width,
            self.height,
        )));
    }

    /// Tries to find a free place to put a rectangle of the given size. Returns None
    /// if there is insufficient space.
    pub fn find_free(&mut self, w: T, h: T) -> Option<Rect<T>> {
        if self.unvisited.is_empty() {
            self.unvisited.push(self.root);
        }
        while let Some(node_handle) = self.unvisited.pop() {
            let node = self.nodes.borrow_mut(node_handle);
            if node.split {
                self.unvisited.push(node.right);
                self.unvisited.push(node.left);
            } else if !node.filled && node.bounds.w() >= w && node.bounds.h() >= h {
                if node.bounds.w() == w && node.bounds.h() == h {
                    node.filled = true;
                    return Some(node.bounds);
                }

                // Split and continue
                node.split = true;

                let (left_bounds, right_bounds) = if node.bounds.w() - w > node.bounds.h() - h {
                    (
                        Rect::new(node.bounds.x(), node.bounds.y(), w, node.bounds.h()),
                        Rect::new(
                            node.bounds.x() + w,
                            node.bounds.y(),
                            node.bounds.w() - w,
                            node.bounds.h(),
                        ),
                    )
                } else {
                    (
                        Rect::new(node.bounds.x(), node.bounds.y(), node.bounds.w(), h),
                        Rect::new(
                            node.bounds.x(),
                            node.bounds.y() + h,
                            node.bounds.w(),
                            node.bounds.h() - h,
                        ),
                    )
                };

                let left = self.nodes.spawn(RectPackNode::new(left_bounds));
                let right = self.nodes.spawn(RectPackNode::new(right_bounds));

                let node = self.nodes.borrow_mut(node_handle);
                node.left = left;
                node.right = right;

                self.unvisited.push(left);
            }
        }

        None
    }
}
rust
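The heart of `find_free` is its guillotine split: a free node larger than the request is cut into two children along whichever leftover dimension is larger, so the bigger remainder stays in one piece. The decision can be sketched as a standalone function; `GuillotineSplit` and its integer tuple encoding are hypothetical simplifications, not the `RectPacker` API:

```java
import java.util.Arrays;

// Simplified sketch of the split rule in RectPacker::find_free: a free
// rectangle (w, h) that is to hold a request (rw, rh) is cut along the
// axis with the larger leftover, producing two child rectangles.
public class GuillotineSplit {
    // Returns {leftW, leftH, rightW, rightH} for the two children.
    public static int[] split(int w, int h, int rw, int rh) {
        if (w - rw > h - rh) {
            // Vertical cut: the left child keeps the requested width,
            // the right child takes the leftover width at full height.
            return new int[] { rw, h, w - rw, h };
        } else {
            // Horizontal cut: the left child keeps the requested height,
            // the right child takes the leftover height at full width.
            return new int[] { w, rh, w, h - rh };
        }
    }

    public static void main(String[] args) {
        // A 100x100 node holding a 30x80 request: leftover width (70)
        // exceeds leftover height (20), so the cut is vertical,
        // yielding a 30x100 and a 70x100 child.
        System.out.println(Arrays.toString(split(100, 100, 30, 80))); // [30, 100, 70, 100]
    }
}
```

In the original, the left child is pushed back onto `unvisited` and revisited immediately, so the request is carved out of it on the next loop iteration; the right child stays free for future requests.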