| text | language |
|---|---|
<gh_stars>0
package onemessageui;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import com.umeng.analytics.MobclickAgent;
import oneapp.onechat.oneandroid.chatsdk.ConfigConstants;
import oneapp.onechat.oneandroid.onemessage.common.CommonHelperUtils;
import onewalletui.util.jump.JumpAppPageUtil;
/**
* Created by 何帅 on 2018/9/3.
*/
public class OneChatUiHelper {
private static Context mContext;
public static void initOneChatUi(Context context) {
mContext = context;
initUmeng();
IntentFilter filter = new IntentFilter();
filter.addAction(CommonHelperUtils.getCheckSignBroadcastAction());
CheckSignBroadcastReceiver checkSignBroadcastReceiver = new CheckSignBroadcastReceiver();
mContext.registerReceiver(checkSignBroadcastReceiver, filter);
}
/**
* Initialize Umeng analytics
*/
private static void initUmeng() {
/** Whether to encrypt log info; defaults to false (no encryption). */
MobclickAgent.enableEncrypt(!ConfigConstants.DEBUG);// Umeng SDK 6.0.0 and later
}
/**
* Broadcast receiver for signature verification
*/
private static class CheckSignBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(final Context context, Intent intent) {
JumpAppPageUtil.jumpCheckSignPage(context);
}
}
}
| java |
What do you think?
“You can’t come with us. We are on a quest.” His voice and eyes were as stern as he could make them, but he could feel his nose being bewildered. He had never been able to discipline his nose. In the words of Dr. Mardy Grothe, Beagle never metaphor he didn’t like. Occasionally it’s a bit over the top, but overall I found his writing delightful. Less engaging, at least for me, were the songs occasionally sung by the characters. They weren’t particularly inspired or inspiring, and I thought most of them were weaker links in the story. But there are some delicious ironies, such as the prince trying (and failing) to win the heart of the lady he hopelessly loves by bringing her heads of ogres and dead bodies of dangerous beasts, in classic conquering-hero style. And I was unexpectedly moved to tears by the ending.
“we are not always what we seem, and hardly ever what we dream.” // “it’s a rare man who is taken for what he truly is.” the premise of this book is deceptively simple: in an unspecified magical world where butterflies still occasionally sing of taking the a-train, highwaymen longingly admire the legend of robin hood, and guards wear armor made of bottle-caps -- there lives a unicorn.
“you can strike your own time, and start the count anywhere. when you understand that — then any time at all will be the right time for you.” the prose in this is so hauntingly beautiful that you will find yourself scribbling down quotes every other page. for a story that barely even includes a romance, it is probably one of the most romantic books i’ve ever read.
molly laughed with her lips flat. still, all of them eventually end up with a choice: to do something that might cause them great pain, that might even file some part of their soul away; all in order to achieve something that they believe in.
with a flap of her hand she summed herself up: barren face, desert eyes, and yellowing heart.
“your name is a golden bell hung in my heart. i would break my body to pieces to call you once by your name.” there are so many little things scattered throughout this book blurring the line between reality and illusion. the unicorn has to face the fact that most people who meet her see only a pretty white mare. not who she truly is.
the unicorn was there as a star is suddenly there, moving a little way ahead of them, a sail in the dark. molly said, “if lír is the hero, what is she?” i think this is also where my focus lay as a younger reader: i appreciated the story for its deconstruction of tropes, and the witty way it spoke of wizards and mythical creatures. of not having the beautiful princess as a main character, but grouchy molly instead. of the juxtaposition of schmendrick possibly being one of the most powerful wizards in the world, but unable to access that power.
“my son, your ineptitude is so vast, your incompetence so profound, that i am certain you are inhabited by greater power than i have ever known.” this book is profoundly perfect to me precisely because it is not. but we review books here, so i feel compelled to include a section with its possible faults.
“as for you and your heart and the things you said and didn't say, she will remember them all when men are fairy tales in books written by rabbits.” conclusion: this is still one of the best books i’ve ever read.
“... why, life is short, and how many can i help or harm? i have my power at last, but the world is still too heavy for me to move, though my friend lír might think otherwise.” and he laughed again in his dream, a little sadly. ✎ 5.0 stars.
| english |
<gh_stars>0
import { Injectable } from '@angular/core';
import {
ActivatedRouteSnapshot,
CanActivate,
Router,
RouterStateSnapshot,
} from '@angular/router';
import { LocalStorageService } from './local-storage.service';
@Injectable({
providedIn: 'root',
})
export class AuthGuardService implements CanActivate {
constructor(
private router: Router,
private localStorage: LocalStorageService
) {}
canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot) {
const user = this.localStorage.getUserObject();
if (null !== user && null !== user.token && !this.tokenExpired(user.exp)) {
return true;
}
this.router.navigate(['/auth']);
return false;
}
private tokenExpired(exp: number): boolean {
return Math.floor(new Date().getTime() / 1000) >= exp;
}
}
| typescript |
Quick behat
===========
Installation
------------
* composer install
### Functional tests
* vendor/bin/behat
### Adding a test
* edit the `add` method in `Calculus` | markdown |
pub use crate::ast::{expression::*, semantic::*, statement::*, Ast};
pub use crate::ir::*;
pub use std::collections::HashMap;
pub struct CfgBuilder<'env> {
cfg_graph: CfgGraph,
env: &'env mut Environment,
current_proc_id: SymbolId,
proc_jmp_table: HashMap<SymbolId, CfgProc>,
}
impl<'env> CfgBuilder<'env> {
pub fn new(env: &'env mut Environment) -> Self {
let cfg_graph = CfgGraph::new();
let main_proc = env.symbol_table.get_proc_by_name("__main__");
Self {
current_proc_id: main_proc.id,
cfg_graph,
env,
proc_jmp_table: HashMap::new(),
}
}
pub fn build(mut self, ast: &Ast) -> CfgObject {
let entry_id = self.cfg_graph.get_entry_node_id();
let mut node_id = entry_id;
for stmt in &ast.statements {
node_id = self.build_stmt(node_id, stmt);
}
// appending `EOC` to the end of `main`
self.append_eoc(node_id);
let mut jmp_table: HashMap<CfgNodeId, SymbolId> = self
.proc_jmp_table
.iter()
.map(|(proc_id, cfg_proc)| (cfg_proc.node_id, *proc_id))
.collect();
// adding `main` itself
let main_proc = self.env.symbol_table.get_proc_by_name("__main__");
jmp_table.insert(entry_id, main_proc.id);
CfgObject {
graph: self.cfg_graph,
jmp_table,
}
}
fn build_stmt(&mut self, node_id: CfgNodeId, stmt: &Statement) -> CfgNodeId {
match stmt {
Statement::NOP | Statement::EOF => node_id,
Statement::Command(cmd) => self.build_cmd(node_id, cmd),
Statement::Direction(direct_stmt) => self.build_direct(node_id, direct_stmt),
Statement::Expression(expr) => self.build_expr(node_id, expr),
Statement::Make(make_stmt) => self.build_make(node_id, make_stmt),
Statement::If(if_stmt) => self.build_if(node_id, if_stmt),
Statement::Repeat(repeat_stmt) => self.build_repeat(node_id, repeat_stmt),
Statement::Procedure(proc_stmt) => self.build_proc(node_id, proc_stmt),
Statement::Return(return_stmt) => self.build_return(node_id, return_stmt),
Statement::Print(expr) => self.build_print(node_id, expr),
}
}
fn build_print(&mut self, node_id: CfgNodeId, expr: &Expression) -> CfgNodeId {
self.build_expr(node_id, expr);
let node = self.cfg_graph.get_node_mut(node_id);
node.append_inst(CfgInstruction::Print);
node_id
}
fn build_return(&mut self, node_id: CfgNodeId, return_stmt: &ReturnStmt) -> CfgNodeId {
if return_stmt.expr.is_some() {
let expr: &Expression = return_stmt.expr.as_ref().unwrap();
self.build_expr(node_id, expr);
}
let node = self.cfg_graph.get_node_mut(node_id);
node.append_inst(CfgInstruction::Return);
node_id
}
fn build_proc(&mut self, node_id: CfgNodeId, proc_stmt: &ProcedureStmt) -> CfgNodeId {
let proc_id = proc_stmt.id.unwrap();
let cfg_proc = self.proc_jmp_table.get(&proc_id);
let parent_proc_id = self.current_proc_id;
self.current_proc_id = proc_id;
let proc_node_id;
if cfg_proc.is_some() {
let cfg_proc = cfg_proc.unwrap();
if cfg_proc.built {
// we've already built the CFG for the procedure
// so we just return the start node id
return cfg_proc.node_id;
}
// we have allocated already the proc start node
// (even though we didn't build the proc instructions yet)
proc_node_id = cfg_proc.node_id;
} else {
// there is no proc CFG node, so we'll allocate one
proc_node_id = self.cfg_graph.new_node();
// we explicitly save immediately the CFG proc in order
// to support recursive procedures
let cfg_proc = CfgProc {
node_id: proc_node_id,
proc_id,
built: false,
};
self.proc_jmp_table.insert(proc_id, cfg_proc);
}
let last_block_node_id = self.build_block(proc_node_id, &proc_stmt.block);
// marking the CFG proc as built
let cfg_proc = CfgProc {
node_id: proc_node_id,
proc_id,
built: true,
};
self.proc_jmp_table.insert(proc_id, cfg_proc);
// we append a `RETURN` instruction to the end of the procedure
// in case the last instruction isn't a `RETURN`
self.append_ret(last_block_node_id);
// restoring the `current_proc_id`
self.current_proc_id = parent_proc_id;
// the empty CFG node `node_id` will be used in the next non-procedure statement
node_id
}
fn build_cmd(&mut self, node_id: CfgNodeId, cmd: &Command) -> CfgNodeId {
let inst = match cmd {
Command::Trap => CfgInstruction::Trap,
_ => CfgInstruction::Command(cmd.clone()),
};
self.append_inst(node_id, inst);
node_id
}
fn build_direct(&mut self, node_id: CfgNodeId, direct_stmt: &DirectionStmt) -> CfgNodeId {
self.build_expr(node_id, &direct_stmt.expr);
let direct = direct_stmt.direction.clone();
let inst = CfgInstruction::Direction(direct);
self.append_inst(node_id, inst);
node_id
}
fn build_make(&mut self, node_id: CfgNodeId, make_stmt: &MakeStmt) -> CfgNodeId {
let expr = &make_stmt.expr;
let var_id = make_stmt.var_id.unwrap();
self.build_assign(node_id, var_id, expr)
}
fn build_assign(
&mut self,
node_id: CfgNodeId,
var_id: SymbolId,
expr: &Expression,
) -> CfgNodeId {
self.build_expr(node_id, expr);
let inst = CfgInstruction::Store(var_id);
self.append_inst(node_id, inst);
node_id
}
fn build_expr(&mut self, node_id: CfgNodeId, expr: &Expression) -> CfgNodeId {
match expr.expr_ast {
ExpressionAst::Literal(_) => self.build_lit_expr(node_id, expr),
ExpressionAst::Not(_) => self.build_not_expr(node_id, expr),
ExpressionAst::Binary(..) => self.build_bin_expr(node_id, expr),
ExpressionAst::Parentheses(_) => self.build_parentheses_expr(node_id, expr),
ExpressionAst::ProcCall(..) => self.build_proc_call_expr(node_id, expr),
}
node_id
}
fn build_proc_call_expr(&mut self, node_id: CfgNodeId, expr: &Expression) {
let (_proc_name, proc_args_exprs, proc_id) = expr.as_proc_call_expr();
for proc_arg_expr in proc_args_exprs {
self.build_expr(node_id, proc_arg_expr);
}
let proc_id = *proc_id.unwrap();
let cfg_proc = self.proc_jmp_table.get(&proc_id);
let jmp_node_id = if cfg_proc.is_none() {
let proc_node_id = self.cfg_graph.new_node();
let cfg_proc = CfgProc {
node_id: proc_node_id,
proc_id,
built: false,
};
self.proc_jmp_table.insert(proc_id, cfg_proc);
proc_node_id
} else {
cfg_proc.unwrap().node_id
};
self.append_inst(node_id, CfgInstruction::Call(jmp_node_id));
}
fn build_parentheses_expr(&mut self, node_id: CfgNodeId, expr: &Expression) {
let expr = expr.as_parentheses_expr();
self.build_expr(node_id, expr);
}
fn build_bin_expr(&mut self, node_id: CfgNodeId, expr: &Expression) {
let (bin_op, lexpr, rexpr) = expr.as_binary_expr();
self.build_expr(node_id, lexpr);
self.build_expr(node_id, rexpr);
let inst = match bin_op {
BinaryOp::Add => CfgInstruction::Add,
BinaryOp::Mul => CfgInstruction::Mul,
BinaryOp::Div => CfgInstruction::Div,
BinaryOp::And => CfgInstruction::And,
BinaryOp::Or => CfgInstruction::Or,
BinaryOp::LessThan => CfgInstruction::LessThan,
BinaryOp::GreaterThan => CfgInstruction::GreaterThan,
};
self.append_inst(node_id, inst);
}
fn build_not_expr(&mut self, node_id: CfgNodeId, expr: &Expression) {
let expr = expr.as_not_expr();
self.build_expr(node_id, expr);
self.append_inst(node_id, CfgInstruction::Not);
}
fn build_lit_expr(&mut self, node_id: CfgNodeId, expr: &Expression) {
let expr = expr.as_lit_expr();
match expr {
LiteralExpr::Bool(v) => self.append_bool_lit(node_id, *v),
LiteralExpr::Int(v) => self.append_int_lit(node_id, *v),
LiteralExpr::Str(v) => self.append_str_lit(node_id, v),
LiteralExpr::Var(_, ref var_id) => {
self.append_var_lit(node_id, var_id.as_ref().unwrap())
}
}
}
fn append_bool_lit(&mut self, node_id: CfgNodeId, lit: bool) {
self.append_inst(node_id, CfgInstruction::Bool(lit));
}
fn append_int_lit(&mut self, node_id: CfgNodeId, lit: usize) {
self.append_inst(node_id, CfgInstruction::Int(lit as isize));
}
fn append_str_lit(&mut self, node_id: CfgNodeId, lit: &str) {
self.append_inst(node_id, CfgInstruction::Str(lit.to_string()));
}
fn append_var_lit(&mut self, node_id: CfgNodeId, var_id: &SymbolId) {
let inst = CfgInstruction::Load(var_id.clone());
self.append_inst(node_id, inst);
}
fn build_repeat(&mut self, node_id: CfgNodeId, repeat_stmt: &RepeatStmt) -> CfgNodeId {
// 1) allocate a new local variable of type `INT`, let's call it `TMPVAR_A`
// 2) allocate a new local variable of type `INT`, let's call it `TMPVAR_B`
// 3) emit instructions for `MAKE TMPVAR_A = 0` (within the `CURRENT_NODE_ID` node)
// 4) emit instructions for `MAKE TMPVAR_B = cond_expr` (within the `CURRENT_NODE_ID` node)
// 5) emit expression-instructions for `TMPVAR_A < TMPVAR_B` (within the `CURRENT_NODE_ID` node)
// 6) create a new empty CFG node. let's mark its node id as `WHILE_NODE_ID`
// 7) add edge `CURRENT_NODE_ID` --jmp-when-true--> `WHILE_NODE_ID`
// 8) generate statement-instructions for `block_stmt` (within `WHILE_NODE_ID` node)
// the CFG generation will return `LAST_WHILE_BLOCK_NODE_ID` node_id
// 9) emit instructions for `TMPVAR_A = TMPVAR_A + 1` (within `LAST_WHILE_BLOCK_NODE_ID`)
// 10) emit expression-instructions for `TMPVAR_A < TMPVAR_B` (within `LAST_WHILE_BLOCK_NODE_ID`)
// 11) add edge `LAST_WHILE_BLOCK_NODE_ID` --jmp-when-true--> `WHILE_NODE_ID`
// 12) create a new empty CFG node. let's mark its node id as `AFTER_NODE_ID`
// 13) add edge `LAST_WHILE_BLOCK_NODE_ID` --jmp-fallback--> `AFTER_NODE_ID`
// 14) add edge `CURRENT_NODE_ID` --jmp-fallback--> `AFTER_NODE_ID`
// 15) return `AFTER_NODE_ID` node_id (empty CFG node to be used for the next statement)
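// As a rough sketch (hypothetical node ids, not guaranteed to match the exact
// output of this builder), a statement like `REPEAT 3 [ FORWARD 10 ]` lowers to:
//
//   CURRENT: STORE TMPVAR_A=0 ; STORE TMPVAR_B=3 ; LOAD A ; LOAD B ; LT
//   WHILE:   FORWARD 10 ; A = A + 1 ; LOAD A ; LOAD B ; LT
//   edges:   CURRENT --jmp-when-true--> WHILE
//            WHILE   --jmp-when-true--> WHILE
//            WHILE   --jmp-fallback-->  AFTER
//            CURRENT --jmp-fallback-->  AFTER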
// allocating temporary variables: `TMPVAR_A` and `TMPVAR_B`
let (var_id_a, var_name_a) = self
.env
.create_tmp_var(self.current_proc_id, ExpressionType::Int);
let (var_id_b, var_name_b) = self
.env
.create_tmp_var(self.current_proc_id, ExpressionType::Int);
// MAKE TMPVAR_A = 0
let zero_lit = LiteralExpr::Int(0);
let zero_expr = Expression {
expr_type: Some(ExpressionType::Int),
expr_ast: ExpressionAst::Literal(zero_lit),
};
self.build_assign(node_id, var_id_a, &zero_expr);
// MAKE TMPVAR_B = `cond_expr`
self.build_assign(node_id, var_id_b, &repeat_stmt.count_expr);
// TMPVAR_A < TMPVAR_B
let var_lit_a = LiteralExpr::Var(var_name_a, Some(var_id_a));
let var_lit_b = LiteralExpr::Var(var_name_b, Some(var_id_b));
let var_lit_a_clone = var_lit_a.clone();
let var_expr_a = Expression {
expr_ast: ExpressionAst::Literal(var_lit_a),
expr_type: Some(ExpressionType::Int),
};
let var_expr_b = Expression {
expr_ast: ExpressionAst::Literal(var_lit_b),
expr_type: Some(ExpressionType::Int),
};
let cond_ast = ExpressionAst::Binary(
BinaryOp::LessThan,
Box::new(var_expr_a),
Box::new(var_expr_b),
);
let cond_expr = Expression {
expr_ast: cond_ast,
expr_type: Some(ExpressionType::Bool),
};
self.build_expr(node_id, &cond_expr);
// `REPEAT block`
let while_node_id = self.cfg_graph.new_node();
self.add_edge(node_id, while_node_id, CfgJumpType::WhenTrue);
let last_while_block_node_id = self.build_block(while_node_id, &repeat_stmt.block);
// TMPVAR_A = TMPVAR_A + 1
let one_lit = LiteralExpr::Int(1);
let one_expr = Expression {
expr_type: Some(ExpressionType::Int),
expr_ast: ExpressionAst::Literal(one_lit),
};
let var_expr_a = Expression {
expr_ast: ExpressionAst::Literal(var_lit_a_clone),
expr_type: Some(ExpressionType::Int),
};
let incr_var_a_ast =
ExpressionAst::Binary(BinaryOp::Add, Box::new(var_expr_a), Box::new(one_expr));
let incr_expr = Expression {
expr_type: Some(ExpressionType::Int),
expr_ast: incr_var_a_ast,
};
self.build_assign(last_while_block_node_id, var_id_a, &incr_expr);
// TMPVAR_A < TMPVAR_B
self.build_expr(last_while_block_node_id, &cond_expr);
// jump when-true to the start of the loop
self.add_edge(
last_while_block_node_id,
while_node_id,
CfgJumpType::WhenTrue,
);
let after_node_id = self.cfg_graph.new_node();
self.add_edge(
last_while_block_node_id,
after_node_id,
CfgJumpType::Fallback,
);
self.add_edge(node_id, after_node_id, CfgJumpType::Fallback);
after_node_id
}
fn build_if(&mut self, node_id: CfgNodeId, if_stmt: &IfStmt) -> CfgNodeId {
// 1) let's mark current CFG node as `CURRENT_NODE_ID` (the `node_id` parameter)
// this node is assumed to be empty
// 2) generate expression-instructions for `if-stmt` conditional-expression (within `CURRENT_NODE_ID` node)
// 3) create a new empty CFG node. let's mark its node id as `TRUE_NODE_ID`
// 4) generate statement-instructions for `if-stmt` `true-block` (within `TRUE_NODE_ID` node)
// the CFG generation will return `LAST_TRUE_BLOCK_NODE_ID` node_id
// 5) add edge `CURRENT_NODE_ID` --jmp-when-true--> `TRUE_NODE_ID`
// 6) if `if-stmt` has `else-block`:
// 6.1) create a new empty CFG node. let's mark its node id as `FALSE_NODE_ID`
// 6.2) generate statement-instructions for `false-block` (within `FALSE_NODE_ID` node)
// the CFG generation will return `LAST_FALSE_BLOCK_NODE_ID` node_id
// 6.3) add edge `CURRENT_NODE_ID` --jmp-fallback--> `FALSE_NODE_ID`
// 7) create a new empty CFG node. let's mark its node id as `AFTER_NODE_ID`
// 8) add edge `LAST_TRUE_BLOCK_NODE_ID` --jmp-always--> `AFTER_NODE_ID`
// 9) if `if-stmt` has `else-block`:
// 9.1) add edge `LAST_FALSE_BLOCK_NODE_ID` --jmp-always--> `AFTER_NODE_ID`
// else:
// 9.1) add edge `CURRENT_NODE_ID` --jmp-fallback--> `AFTER_NODE_ID`
// 10) return `AFTER_NODE_ID` node_id (empty CFG node to be used for the next statement)
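// For illustration (hypothetical node labels, assuming an `else`-block and no
// `RETURN` in either branch), `IF cond [ A ] [ B ]` yields roughly:
//
//   CURRENT:    <cond-expr instructions>
//   TRUE_NODE:  <instructions for A>
//   FALSE_NODE: <instructions for B>
//   edges:      CURRENT    --jmp-when-true--> TRUE_NODE
//               CURRENT    --jmp-fallback-->  FALSE_NODE
//               TRUE_NODE  --jmp-always-->    AFTER_NODE
//               FALSE_NODE --jmp-always-->    AFTER_NODE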
self.build_expr(node_id, &if_stmt.cond_expr);
let true_node_id = self.cfg_graph.new_node();
let last_true_block_node_id = self.build_block(true_node_id, &if_stmt.true_block);
self.add_edge(node_id, true_node_id, CfgJumpType::WhenTrue);
let mut last_false_block_node_id = None;
if if_stmt.false_block.is_some() {
let false_node_id = self.cfg_graph.new_node();
let last_node_id =
self.build_block(false_node_id, if_stmt.false_block.as_ref().unwrap());
last_false_block_node_id = Some(last_node_id);
self.add_edge(node_id, false_node_id, CfgJumpType::Fallback);
}
let true_block_ends_with_empty_node = self.cfg_graph.node_is_empty(last_true_block_node_id);
let true_block_ends_with_return = self.cfg_graph.ends_with_return(last_true_block_node_id);
let mut draw_true_block_to_after_node_edge = false;
let after_node_id = match true_block_ends_with_empty_node {
true => Some(last_true_block_node_id), // we'll reuse this empty node
false => {
// we know the `true-block last node` isn't empty
// but we want to allocate a new empty CFG node **only**
// when the last node-statement *IS NOT* a `RETURN`-statement
if true_block_ends_with_return {
// no need to draw edge `LAST_TRUE_BLOCK_NODE_ID` --jmp-always--> `AFTER_NODE_ID`
None
} else {
draw_true_block_to_after_node_edge = true;
Some(self.cfg_graph.new_node())
}
}
};
if draw_true_block_to_after_node_edge {
self.add_edge(
last_true_block_node_id,
after_node_id.unwrap(),
CfgJumpType::Always,
);
}
if if_stmt.false_block.is_some() {
// we draw edge `LAST_FALSE_BLOCK_NODE_ID` --jmp-always--> `AFTER_NODE_ID`
// only if the `else-block` statement *IS NOT* a `RETURN`-statement
let last_false_block_node_id = last_false_block_node_id.unwrap();
let false_block_ends_with_return =
self.cfg_graph.ends_with_return(last_false_block_node_id);
if !false_block_ends_with_return {
self.add_edge(
last_false_block_node_id,
after_node_id.unwrap(),
CfgJumpType::Always,
);
}
} else {
// there is no `else-block`
// we'll draw edge `CURRENT_NODE_ID` --jmp-fallback--> `AFTER_NODE_ID`
// in case there is an after node
if after_node_id.is_some() {
self.add_edge(node_id, after_node_id.unwrap(), CfgJumpType::Fallback);
}
}
if after_node_id.is_some() {
after_node_id.unwrap()
} else {
self.cfg_graph.new_node()
}
}
fn build_block(&mut self, node_id: CfgNodeId, block_stmt: &BlockStatement) -> CfgNodeId {
let mut last_node_id = node_id;
for stmt in &block_stmt.stmts {
last_node_id = self.build_stmt(last_node_id, stmt);
}
last_node_id
}
fn append_inst(&mut self, node_id: CfgNodeId, inst: CfgInstruction) {
let node = self.cfg_graph.get_node_mut(node_id);
node.append_inst(inst);
}
fn add_edge(&mut self, src_id: CfgNodeId, dst_id: CfgNodeId, jmp_type: CfgJumpType) {
self.cfg_graph.add_edge(src_id, dst_id, jmp_type);
}
fn append_ret(&mut self, node_id: CfgNodeId) {
let node = self.cfg_graph.get_node_mut(node_id);
let mut append_ret = false;
if node.is_empty() || *node.insts.last().unwrap() != CfgInstruction::Return {
append_ret = true;
}
if append_ret {
self.append_inst(node_id, CfgInstruction::Return);
}
}
fn append_eoc(&mut self, node_id: CfgNodeId) {
self.append_inst(node_id, CfgInstruction::EOC);
}
}
| rust |
<gh_stars>1-10
{"dna.css":"<KEY>","dna.js":"<KEY>","dna.min.js":"<KEY>"} | json |
import * as React from "react";
const ClrHeadphonesSolid: React.SFC = () => (
<svg
version="1.1"
viewBox="0 0 36 36"
preserveAspectRatio="xMidYMid meet"
xmlns="http://www.w3.org/2000/svg"
focusable="false"
role="img"
xmlnsXlink="http://www.w3.org/1999/xlink"
>
<path d="M18,3A14.27,14.27,0,0,0,4,17.5V31H8.2A1.74,1.74,0,0,0,10,29.33V22.67A1.74,1.74,0,0,0,8.2,21H6V17.5A12.27,12.27,0,0,1,18,5,12.27,12.27,0,0,1,30,17.5V21H27.8A1.74,1.74,0,0,0,26,22.67v6.67A1.74,1.74,0,0,0,27.8,31H32V17.5A14.27,14.27,0,0,0,18,3Z" />
</svg>
);
export default ClrHeadphonesSolid;
| typescript |
<filename>index/r/roast-chicken-with-fig-plantain-and.json<gh_stars>10-100
{
"directions": [
"Preheat an oven to 475 degrees F (245 degrees C).",
"Place chicken quarters in a 9x13-inch casserole dish. Arrange onion, plantain, figs, and garlic around chicken quarters and season with rosemary, salt, and pepper.",
"Whisk olive oil and balsamic vinegar together in a bowl; pour over chicken quarters. Pour water into casserole dish.",
"Place casserole dish in the preheated oven; reduce temperature to 325 degrees F (165 degrees C). Bake until no longer pink at the bone and the juices run clear, about 1 hour. An instant-read thermometer inserted into the thickest part of the thigh, near the bone, should read 165 degrees F (74 degrees C)."
],
"ingredients": [
"2 chicken leg quarters",
"1/2 large red onion, chopped",
"1 ripe plantain, sliced",
"6 fresh figs, stems removed",
"2 cloves garlic, sliced",
"1 teaspoon dried rosemary",
"salt and ground black pepper to taste",
"1/4 cup olive oil",
"2 tablespoons balsamic vinegar",
"1/4 cup water"
],
"language": "en-US",
"source": "allrecipes.com",
"tags": [],
"title": "Roast Chicken with Fig, Plantain, and Red Onion",
"url": "http://allrecipes.com/recipe/234373/roast-chicken-with-fig-plantain-and/"
}
| json |
<gh_stars>1-10
version https://git-lfs.github.com/spec/v1
oid sha256:1a22ae2214dcd1d0cc65f004b1a17f6c176ac41201d690add64edf284304edc4
size 14640
| json |
<reponame>elanthia-online/cartograph<gh_stars>1-10
{
"id": 24934,
"title": [
"[Iyo Village, Painted Hut]"
],
"description": [
"Built with walls of bamboo and a roof of palm fronds, the hut's door is oriented to face due south. Shelves are lined with bottles and jars. Standing in the center of the hut is a low table, which is littered with ashes, charred bone, and several bowls. Vivid markings in red, white, and black are painted all around the hut, their alternating linear and nonlinear patterns flowing from the center of the ceiling, down the walls, and then across blood red sand floor."
],
"paths": [
"Obvious exits: none"
],
"location": "the Shimmering Mists",
"wayto": {
"24933": "go door"
},
"timeto": {
"24933": 0.2
},
"image": "ifw-iyo_village.png",
"image_coords": [
302,
86,
322,
107
]
} | json |
<gh_stars>0
package pl.kkwiatkowski.loan.rest;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.MvcResult;
import pl.kkwiatkowski.loan.Application;
import pl.kkwiatkowski.loan.common.Util;
import pl.kkwiatkowski.loan.constants.Constants;
import pl.kkwiatkowski.loan.dto.ApplyLoanRequest;
import pl.kkwiatkowski.loan.dto.Loan;
import java.math.BigDecimal;
import java.time.Duration;
import java.time.LocalDateTime;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = Application.class)
@AutoConfigureMockMvc
public class LoanRestApiTest {
@Autowired
private MockMvc mockMvc;
private static final String REST_ROOT = "/api/loan";
@Test
public void applyForLoanSuccessfully() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(12000);
Duration askedDuration = Duration.ofDays(120);
LocalDateTime askedDate = LocalDateTime.now().plus(askedDuration);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
MvcResult result = mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isOk()).andReturn();
ObjectMapper mapper = new ObjectMapper();
Loan response = mapper.readValue(result.getResponse().getContentAsString(), new TypeReference<Loan>() {
});
assertNotNull(response);
assertEquals(askedDate.getDayOfMonth(), response.getLoanTerm().getDayOfMonth());
assertEquals(askedAmount, response.getLoanAmount());
assertEquals(askedAmount.multiply(Constants.INTEREST_PERCENTAGE), response.getRepaymentAmount());
}
@Test
public void applyForLoanFailedWithAmountTooSmall() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(120);
Duration askedDuration = Duration.ofDays(120);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isNotAcceptable());
}
@Test
public void applyForLoanFailedWithTermTooShort() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(12000);
Duration askedDuration = Duration.ofDays(12);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isNotAcceptable());
}
@Test
public void applyForLoanFailedWithAmountTooBig() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(120000);
Duration askedDuration = Duration.ofDays(120);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isNotAcceptable());
}
@Test
public void applyForLoanFailedWithTermTooLong() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(12000);
Duration askedDuration = Duration.ofDays(120000);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isNotAcceptable());
}
@Test
public void extendLoanTerm() throws Exception {
ApplyLoanRequest request = new ApplyLoanRequest();
BigDecimal askedAmount = BigDecimal.valueOf(12000);
Duration askedDuration = Duration.ofDays(120);
request.setIssuedAmount(askedAmount);
request.setIssuedDuration(askedDuration);
MvcResult result = mockMvc.perform(post(getUri("/apply_for_loan"))
.contentType(MediaType.APPLICATION_JSON)
.content(Util.asJsonString(request)))
.andExpect(status().isOk()).andReturn();
ObjectMapper mapper = new ObjectMapper();
Loan response = mapper.readValue(result.getResponse().getContentAsString(), new TypeReference<Loan>() {
});
result = mockMvc.perform(post(getUri("/extend_loan/" + response.getLoanId().toString()))
.contentType(MediaType.APPLICATION_JSON))
.andExpect(status().isOk()).andReturn();
response = mapper.readValue(result.getResponse().getContentAsString(), new TypeReference<Loan>() {
});
assertNotNull(response);
assertEquals(LocalDateTime.now().plus(askedDuration).plus(Constants.DURATION_OF_EXTENSION).getDayOfMonth(), response.getLoanTerm().getDayOfMonth());
assertNotNull(response.getLastExtendDate());
assertEquals(response.getLastExtendDate().getDayOfMonth(), LocalDateTime.now().getDayOfMonth());
}
@Test
public void extendLoanTermFailure() throws Exception {
mockMvc.perform(post(getUri("/extend_loan/0"))
.contentType(MediaType.APPLICATION_JSON))
.andExpect(status().isExpectationFailed());
}
private String getUri(String uri) {
return REST_ROOT + uri;
}
} | java |
<reponame>uw-it-aca/myuw<gh_stars>10-100
# Copyright 2021 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
import logging
import traceback
from myuw.dao.term import get_current_quarter
from myuw.logger.timer import Timer
from myuw.views.error import handle_exception
from myuw.views.api.base_schedule import StudClasSche
logger = logging.getLogger(__name__)
class StudClasScheCurQuar(StudClasSche):
"""
Performs actions on resource at /api/v1/schedule/current/.
"""
def get(self, request, *args, **kwargs):
"""
GET returns 200 with the current quarter course section schedule
@return class schedule data in json format
status 404: no schedule found (not registered)
status 543: data error
"""
timer = Timer()
try:
return self.make_http_resp(timer,
get_current_quarter(request),
request)
except Exception:
return handle_exception(logger, timer, traceback)
| python |
{
"manifest_version": 2,
"name": "SSBird",
"version": "1.0",
"description": "Merge sheets in Spreadsheet and push it to GitHub as csv",
"icons": {
"16": "icons/16.png",
"48": "icons/48.png",
"128": "icons/128.png"
},
"page_action": {
"default_title": "SSBird",
"default_popup": "popup.html"
},
"background": {
"scripts": ["js/jquery-3.5.0.min.js", "js/utils.js", "js/background.js"],
"persistent": true
},
"content_scripts": [
{
"matches": [
"https://docs.google.com/spreadsheets/*",
"https://drive.google.com/drive/*"
],
"js": ["js/jquery-3.5.0.min.js", "js/contents.js"]
}
],
"key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1vZ/3Jhh0v7QkeqEPRoxGEq2IM6Dv8LDWRUF9evSCcaHEIqfsLG+o2lBb6wQm3fon0SxbRt5g80XA73h9gIU8/8EWRV8HFejnaNSGWzV2jswylmh8Y25BK/Av7RpuZJy3vq7104p5Nt8CKVF9dkNzcL3Q83C/PjTAedGGWhV9VWrB/AIW9PFZolkJIY6k8zLuS7i7V+yws5Vs1pv3ywB6NAsrtO4CftWEPbaE3YjAXaticHUC+EjtPAUcVpM+Pubip8nZt4RPMzEQR/TVtxilL0yYjaOjLmSJEJDC7q38H6y0JdTfcShotz3cM9uKDNsBaZQ5z6ktJgCqX/jCxEpNwIDAQAB",
"permissions": [
"declarativeContent",
"nativeMessaging",
"storage",
"tabs",
"https://script.google.com/",
"https://docs.google.com/",
"https://drive.google.com/",
"https://*.googleusercontent.com/"
]
}
{
"directions": [
"If using fresh yuca, remove peel and pink flesh using a vegetable peeler. Cover fresh or frozen yuca with salted cold water in a 3- to 4-quart saucepan. Bring to a boil, then simmer, covered, until yuca begins to split, 15 to 20 minutes. Transfer yuca with a slotted spoon to a cutting board, reserving water in pan. Cut into 1-inch wedges, pulling out and discarding tough core. Return yuca to cooking water and simmer, covered, until tender and completely translucent, 15 to 25 minutes more.",
"Cover onion with boiling-hot water in a heatproof bowl and let stand 15 minutes. Drain onion and return to bowl. Add lime juice and salt, tossing to coat, and let stand at least 5 minutes before serving.",
"Bring water, tomatoes, cilantro, cumin, and 3/4 teaspoon salt to a boil in a 3- to 4-quart heavy saucepan. Reduce heat and simmer, covered, 5 minutes. Toss fish with remaining 3/4 teaspoon salt and stir into broth. Cook, uncovered, over moderate heat, stirring occasionally, until fish is just cooked through, 3 to 5 minutes. 3Drain yuca and divide among 4 large soup bowls. Ladle fish stew over yuca and top with some of pickled onions."
],
"ingredients": [
"1 1/2 lb fresh or frozen yuca (1 large), cut crosswise into 2-inch sections for pickled onions",
"1 medium red onion, halved lengthwise, then thinly sliced",
"1/4 cup fresh lime juice",
"1/2 teaspoon salt",
"3 cups water",
"1 lb tomatoes, coarsely chopped",
"1 tablespoon chopped fresh cilantro",
"1 teaspoon ground cumin",
"1 1/2 teaspoons salt",
"2 lb skinless tuna, bluefish, or mackerel fillets, cut into 1-inch pieces",
"Accompaniment: lime wedges",
"Garnish: chopped fresh cilantro"
],
"language": "en-US",
"source": "www.epicurious.com",
"tags": [
"Citrus",
"Fish",
"Herb",
"Onion",
"Tomato",
"Gourmet"
],
"title": "Fish and Yuca Stew with Pickled Onions",
"url": "http://www.epicurious.com/recipes/food/views/fish-and-yuca-stew-with-pickled-onions-237191"
}
#include <EnteeZ/TemplateStorage.hpp>
using namespace enteez;
package net.violet.db.cache;
import java.io.Serializable;
/**
* Interface for cache messages.
*/
public interface CacheMessage extends Serializable {
// This space for rent.
}
// File: lib/crunch/crnlib/crn_dxt_hc.cpp
// This software is in the public domain. Please see license.txt.
#include "crn_core.h"
#include "crn_dxt_hc.h"
#include "crn_image_utils.h"
#include "crn_console.h"
#include "crn_dxt_fast.h"
#define CRNLIB_USE_FAST_DXT 1
#define CRNLIB_ENABLE_DEBUG_MESSAGES 0
namespace crnlib
{
static color_quad_u8 g_tile_layout_colors[cNumChunkTileLayouts] =
{
color_quad_u8(255,90,32,255),
color_quad_u8(64,210,192,255),
color_quad_u8(128,16,225,255),
color_quad_u8(255,192,200,255),
color_quad_u8(255,128,200,255),
color_quad_u8(255,0,0,255),
color_quad_u8(0,255,0,255),
color_quad_u8(0,0,255,255),
color_quad_u8(255,0,255,255)
};
dxt_hc::dxt_hc() :
m_num_chunks(0),
m_pChunks(NULL),
m_num_alpha_blocks(0),
m_has_color_blocks(false),
m_has_alpha0_blocks(false),
m_has_alpha1_blocks(false),
m_main_thread_id(crn_get_current_thread_id()),
m_canceled(false),
m_pTask_pool(NULL),
m_prev_phase_index(-1),
m_prev_percentage_complete(-1)
{
utils::zero_object(m_encoding_hist);
}
dxt_hc::~dxt_hc()
{
}
void dxt_hc::clear()
{
m_num_chunks = 0;
m_pChunks = NULL;
m_chunk_encoding.clear();
m_num_alpha_blocks = 0;
m_has_color_blocks = false;
m_has_alpha0_blocks = false;
m_has_alpha1_blocks = false;
m_color_selectors.clear();
m_alpha_selectors.clear();
for (uint i = 0; i < cNumCompressedChunkVecs; i++)
m_compressed_chunks[i].clear();
utils::zero_object(m_encoding_hist);
m_total_tiles = 0;
m_color_clusters.clear();
m_alpha_clusters.clear();
m_color_selectors.clear();
m_alpha_selectors.clear();
m_chunk_blocks_using_color_selectors.clear();
m_chunk_blocks_using_alpha_selectors.clear();
m_color_endpoints.clear();
m_alpha_endpoints.clear();
m_dbg_chunk_pixels.clear();
m_dbg_chunk_pixels_tile_vis.clear();
m_dbg_chunk_pixels_color_quantized.clear();
m_dbg_chunk_pixels_alpha_quantized.clear();
m_dbg_chunk_pixels_quantized_color_selectors.clear();
m_dbg_chunk_pixels_orig_color_selectors.clear();
m_dbg_chunk_pixels_final_color_selectors.clear();
m_dbg_chunk_pixels_final_alpha_selectors.clear();
m_dbg_chunk_pixels_quantized_alpha_selectors.clear();
m_dbg_chunk_pixels_orig_alpha_selectors.clear();
m_dbg_chunk_pixels_final_alpha_selectors.clear();
m_dbg_chunk_pixels_final.clear();
m_canceled = false;
m_prev_phase_index = -1;
m_prev_percentage_complete = -1;
}
bool dxt_hc::compress(const params& p, uint num_chunks, const pixel_chunk* pChunks, task_pool& task_pool)
{
m_pTask_pool = &task_pool;
m_main_thread_id = crn_get_current_thread_id();
bool result = compress_internal(p, num_chunks, pChunks);
m_pTask_pool = NULL;
return result;
}
bool dxt_hc::compress_internal(const params& p, uint num_chunks, const pixel_chunk* pChunks)
{
if ((!num_chunks) || (!pChunks))
return false;
if ((m_params.m_format == cDXT1A) || (m_params.m_format == cDXT3))
return false;
clear();
m_params = p;
m_num_chunks = num_chunks;
m_pChunks = pChunks;
switch (m_params.m_format)
{
case cDXT1:
{
m_has_color_blocks = true;
break;
}
case cDXT5:
{
m_has_color_blocks = true;
m_has_alpha0_blocks = true;
m_num_alpha_blocks = 1;
break;
}
case cDXT5A:
{
m_has_alpha0_blocks = true;
m_num_alpha_blocks = 1;
break;
}
case cDXN_XY:
case cDXN_YX:
{
m_has_alpha0_blocks = true;
m_has_alpha1_blocks = true;
m_num_alpha_blocks = 2;
break;
}
default:
{
return false;
}
}
determine_compressed_chunks();
if (m_has_color_blocks)
{
if (!determine_color_endpoint_clusters())
return false;
if (!determine_color_endpoint_codebook())
return false;
}
if (m_num_alpha_blocks)
{
if (!determine_alpha_endpoint_clusters())
return false;
if (!determine_alpha_endpoint_codebook())
return false;
}
create_quantized_debug_images();
if (m_has_color_blocks)
{
if (!create_selector_codebook(false))
return false;
}
if (m_num_alpha_blocks)
{
if (!create_selector_codebook(true))
return false;
}
if (m_has_color_blocks)
{
if (!refine_quantized_color_selectors())
return false;
if (!refine_quantized_color_endpoints())
return false;
}
if (m_num_alpha_blocks)
{
if (!refine_quantized_alpha_endpoints())
return false;
if (!refine_quantized_alpha_selectors())
return false;
}
create_final_debug_image();
if (!create_chunk_encodings())
return false;
return true;
}
void dxt_hc::compress_dxt1_block(
dxt1_endpoint_optimizer::results& results,
uint chunk_index, const image_u8& chunk, uint x_ofs, uint y_ofs, uint width, uint height,
uint8* pColor_Selectors)
{
chunk_index;
color_quad_u8 pixels[cChunkPixelWidth * cChunkPixelHeight];
for (uint y = 0; y < height; y++)
for (uint x = 0; x < width; x++)
pixels[x + y * width] = chunk(x_ofs + x, y_ofs + y);
//double s = image_utils::compute_std_dev(width * height, pixels, 0, 3);
#if CRNLIB_USE_FAST_DXT
uint low16, high16;
dxt_fast::compress_color_block(width * height, pixels, low16, high16, pColor_Selectors);
results.m_low_color = static_cast<uint16>(low16);
results.m_high_color = static_cast<uint16>(high16);
results.m_alpha_block = false;
results.m_error = INT_MAX;
results.m_pSelectors = pColor_Selectors;
#else
dxt1_endpoint_optimizer optimizer;
dxt1_endpoint_optimizer::params params;
params.m_block_index = chunk_index;
params.m_pPixels = pixels;
params.m_num_pixels = width * height;
params.m_pixels_have_alpha = false;
params.m_use_alpha_blocks = false;
params.m_perceptual = m_params.m_perceptual;
params.m_highest_quality = false;
params.m_endpoint_caching = false;
results.m_pSelectors = pColor_Selectors;
optimizer.compute(params, results);
#endif
}
void dxt_hc::compress_dxt5_block(
dxt5_endpoint_optimizer::results& results,
uint chunk_index, const image_u8& chunk, uint x_ofs, uint y_ofs, uint width, uint height, uint component_index,
uint8* pAlpha_selectors)
{
chunk_index;
color_quad_u8 pixels[cChunkPixelWidth * cChunkPixelHeight];
for (uint y = 0; y < height; y++)
for (uint x = 0; x < width; x++)
pixels[x + y * width] = chunk(x_ofs + x, y_ofs + y);
#if 0 //CRNLIB_USE_FAST_DXT
uint low, high;
dxt_fast::compress_alpha_block(width * height, pixels, low, high, pAlpha_selectors, component_index);
results.m_pSelectors = pAlpha_selectors;
results.m_error = INT_MAX;
results.m_first_endpoint = static_cast<uint8>(low);
results.m_second_endpoint = static_cast<uint8>(high);
results.m_block_type = 0;
#else
dxt5_endpoint_optimizer optimizer;
dxt5_endpoint_optimizer::params params;
params.m_block_index = chunk_index;
params.m_pPixels = pixels;
params.m_num_pixels = width * height;
params.m_comp_index = component_index;
params.m_use_both_block_types = false;
params.m_quality = cCRNDXTQualityNormal;
results.m_pSelectors = pAlpha_selectors;
optimizer.compute(params, results);
#endif
}
void dxt_hc::determine_compressed_chunks_task(uint64 data, void* pData_ptr)
{
pData_ptr;
const uint thread_index = static_cast<uint>(data);
image_u8 orig_chunk;
image_u8 decomp_chunk[cNumChunkEncodings];
orig_chunk.resize(cChunkPixelWidth, cChunkPixelHeight);
for (uint i = 0; i < cNumChunkEncodings; i++)
decomp_chunk[i].resize(cChunkPixelWidth, cChunkPixelHeight);
image_utils::error_metrics color_error_metrics[cNumChunkEncodings];
dxt1_endpoint_optimizer::results color_optimizer_results[cNumChunkTileLayouts];
uint8 layout_color_selectors[cNumChunkTileLayouts][cChunkPixelWidth * cChunkPixelHeight];
image_utils::error_metrics alpha_error_metrics[2][cNumChunkEncodings];
dxt5_endpoint_optimizer::results alpha_optimizer_results[2][cNumChunkTileLayouts];
uint8 layout_alpha_selectors[2][cNumChunkTileLayouts][cChunkPixelWidth * cChunkPixelHeight];
uint first_layout = 0;
uint last_layout = cNumChunkTileLayouts;
uint first_encoding = 0;
uint last_encoding = cNumChunkEncodings;
if (!m_params.m_hierarchical)
{
first_layout = cFirst4x4ChunkTileLayout;
first_encoding = cNumChunkEncodings - 1;
}
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((chunk_index & 511) == 0))
{
if (!update_progress(0, chunk_index, m_num_chunks))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((chunk_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
uint level_index = 0;
for (uint i = 0; i < m_params.m_num_levels; i++)
{
if ((chunk_index >= m_params.m_levels[i].m_first_chunk) && (chunk_index < m_params.m_levels[i].m_first_chunk + m_params.m_levels[i].m_num_chunks))
{
level_index = i;
break;
}
}
for (uint cy = 0; cy < cChunkPixelHeight; cy++)
for (uint cx = 0; cx < cChunkPixelWidth; cx++)
orig_chunk(cx, cy) = m_pChunks[chunk_index](cx, cy);
if (m_has_color_blocks)
{
for (uint l = first_layout; l < last_layout; l++)
{
utils::zero_object(layout_color_selectors[l]);
compress_dxt1_block(
color_optimizer_results[l], chunk_index,
orig_chunk,
g_chunk_tile_layouts[l].m_x_ofs, g_chunk_tile_layouts[l].m_y_ofs,
g_chunk_tile_layouts[l].m_width, g_chunk_tile_layouts[l].m_height,
layout_color_selectors[l]);
}
}
float alpha_layout_std_dev[2][cNumChunkTileLayouts];
utils::zero_object(alpha_layout_std_dev);
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
for (uint l = first_layout; l < last_layout; l++)
{
utils::zero_object(layout_alpha_selectors[a][l]);
compress_dxt5_block(
alpha_optimizer_results[a][l], chunk_index,
orig_chunk,
g_chunk_tile_layouts[l].m_x_ofs, g_chunk_tile_layouts[l].m_y_ofs,
g_chunk_tile_layouts[l].m_width, g_chunk_tile_layouts[l].m_height,
m_params.m_alpha_component_indices[a],
layout_alpha_selectors[a][l]);
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
float mean = 0.0f;
float variance = 0.0f;
for (uint cy = 0; cy < g_chunk_tile_layouts[l].m_height; cy++)
{
for (uint cx = 0; cx < g_chunk_tile_layouts[l].m_width; cx++)
{
uint s = orig_chunk(cx + g_chunk_tile_layouts[l].m_x_ofs, cy + g_chunk_tile_layouts[l].m_y_ofs)[m_params.m_alpha_component_indices[a]];
mean += s;
variance += s * s;
} // cx
} //cy
float scale = 1.0f / (g_chunk_tile_layouts[l].m_width * g_chunk_tile_layouts[l].m_height);
mean *= scale;
variance *= scale;
variance -= mean * mean;
alpha_layout_std_dev[a][l] = sqrt(variance);
} //a
}
}
for (uint e = first_encoding; e < last_encoding; e++)
{
for (uint t = 0; t < g_chunk_encodings[e].m_num_tiles; t++)
{
const uint layout_index = g_chunk_encodings[e].m_tiles[t].m_layout_index;
CRNLIB_ASSERT( (layout_index >= first_layout) && (layout_index < last_layout) );
if (m_has_color_blocks)
{
const dxt1_endpoint_optimizer::results& color_results = color_optimizer_results[layout_index];
const uint8* pColor_selectors = layout_color_selectors[layout_index];
color_quad_u8 block_colors[cDXT1SelectorValues];
CRNLIB_ASSERT(color_results.m_low_color >= color_results.m_high_color);
// it's okay if color_results.m_low_color == color_results.m_high_color, because in this case only selector 0 should be used
dxt1_block::get_block_colors4(block_colors, color_results.m_low_color, color_results.m_high_color);
for (uint cy = 0; cy < g_chunk_encodings[e].m_tiles[t].m_height; cy++)
{
for (uint cx = 0; cx < g_chunk_encodings[e].m_tiles[t].m_width; cx++)
{
uint s = pColor_selectors[cx + cy * g_chunk_encodings[e].m_tiles[t].m_width];
CRNLIB_ASSERT(s < cDXT1SelectorValues);
decomp_chunk[e](cx + g_chunk_encodings[e].m_tiles[t].m_x_ofs, cy + g_chunk_encodings[e].m_tiles[t].m_y_ofs) = block_colors[s];
}
}
}
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
const dxt5_endpoint_optimizer::results& alpha_results = alpha_optimizer_results[a][layout_index];
const uint8* pAlpha_selectors = layout_alpha_selectors[a][layout_index];
uint block_values[cDXT5SelectorValues];
CRNLIB_ASSERT(alpha_results.m_first_endpoint >= alpha_results.m_second_endpoint);
dxt5_block::get_block_values8(block_values, alpha_results.m_first_endpoint, alpha_results.m_second_endpoint);
for (uint cy = 0; cy < g_chunk_encodings[e].m_tiles[t].m_height; cy++)
{
for (uint cx = 0; cx < g_chunk_encodings[e].m_tiles[t].m_width; cx++)
{
uint s = pAlpha_selectors[cx + cy * g_chunk_encodings[e].m_tiles[t].m_width];
CRNLIB_ASSERT(s < cDXT5SelectorValues);
decomp_chunk[e](cx + g_chunk_encodings[e].m_tiles[t].m_x_ofs, cy + g_chunk_encodings[e].m_tiles[t].m_y_ofs)[m_params.m_alpha_component_indices[a]] =
static_cast<uint8>(block_values[s]);
}
}
}
} // t
if (m_params.m_hierarchical)
{
if (m_has_color_blocks)
color_error_metrics[e].compute(decomp_chunk[e], orig_chunk, 0, 3);
for (uint a = 0; a < m_num_alpha_blocks; a++)
alpha_error_metrics[a][e].compute(decomp_chunk[e], orig_chunk, m_params.m_alpha_component_indices[a], 1);
}
} // e
uint best_encoding = cNumChunkEncodings - 1;
if (m_params.m_hierarchical)
{
float quality[cNumChunkEncodings];
utils::zero_object(quality);
float best_quality = 0.0f;
best_encoding = 0;
for (uint e = 0; e < cNumChunkEncodings; e++)
{
if (m_has_color_blocks)
{
float adaptive_tile_color_psnr_derating = m_params.m_adaptive_tile_color_psnr_derating;
if ((level_index) && (adaptive_tile_color_psnr_derating > .25f))
{
//adaptive_tile_color_psnr_derating = math::lerp(adaptive_tile_color_psnr_derating * .5f, .3f, (level_index - 1) / math::maximum(1.0f, float(m_params.m_num_levels - 2)));
adaptive_tile_color_psnr_derating = math::maximum(.25f, adaptive_tile_color_psnr_derating / powf(3.0f, static_cast<float>(level_index)));
}
float color_derating = math::lerp( 0.0f, adaptive_tile_color_psnr_derating, (g_chunk_encodings[e].m_num_tiles - 1) / 3.0f );
quality[e] = (float)math::maximum<double>(color_error_metrics[e].mPeakSNR - color_derating, 0.0f);
}
if (m_num_alpha_blocks)
{
quality[e] *= m_params.m_adaptive_tile_color_alpha_weighting_ratio;
float alpha_derating = math::lerp( 0.0f, m_params.m_adaptive_tile_alpha_psnr_derating, (g_chunk_encodings[e].m_num_tiles - 1) / 3.0f );
float max_std_dev = 0.0f;
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
quality[e] += (float)math::maximum<double>(alpha_error_metrics[a][e].mPeakSNR - alpha_derating, 0.0f);
for (uint t = 0; t < g_chunk_encodings[e].m_num_tiles; t++)
{
float std_dev = alpha_layout_std_dev[a][ g_chunk_encodings[e].m_tiles[t].m_layout_index ];
max_std_dev = math::maximum(max_std_dev, std_dev);
}
}
#if 0
// rg [4/28/09] - disabling this because it's corrupting dxt5_xgbr normal maps
const float l = 6.0f;
const float k = .5f;
if (max_std_dev > l)
{
float s = max_std_dev - l;
quality[e] -= (k * s);
}
#endif
}
if (quality[e] > best_quality)
{
best_quality = quality[e];
best_encoding = e;
}
}
}
atomic_increment32(&m_encoding_hist[best_encoding]);
atomic_exchange_add32(&m_total_tiles, g_chunk_encodings[best_encoding].m_num_tiles);
for (uint q = 0; q < cNumCompressedChunkVecs; q++)
{
if (q == cColorChunks)
{
if (!m_has_color_blocks)
continue;
}
else if (q > m_num_alpha_blocks)
continue;
compressed_chunk& output = m_compressed_chunks[q][chunk_index];
output.m_encoding_index = static_cast<uint8>(best_encoding);
output.m_num_tiles = static_cast<uint8>(g_chunk_encodings[best_encoding].m_num_tiles);
for (uint t = 0; t < g_chunk_encodings[best_encoding].m_num_tiles; t++)
{
const uint layout_index = g_chunk_encodings[best_encoding].m_tiles[t].m_layout_index;
output.m_tiles[t].m_layout_index = static_cast<uint8>(layout_index);
output.m_tiles[t].m_pixel_width = static_cast<uint8>(g_chunk_encodings[best_encoding].m_tiles[t].m_width);
output.m_tiles[t].m_pixel_height = static_cast<uint8>(g_chunk_encodings[best_encoding].m_tiles[t].m_height);
if (q == cColorChunks)
{
const dxt1_endpoint_optimizer::results& color_results = color_optimizer_results[layout_index];
const uint8* pColor_selectors = layout_color_selectors[layout_index];
output.m_tiles[t].m_endpoint_cluster_index = 0;
output.m_tiles[t].m_first_endpoint = color_results.m_low_color;
output.m_tiles[t].m_second_endpoint = color_results.m_high_color;
memcpy(output.m_tiles[t].m_selectors, pColor_selectors, cChunkPixelWidth * cChunkPixelHeight);
output.m_tiles[t].m_alpha_encoding = color_results.m_alpha_block;
}
else
{
const uint a = q - cAlpha0Chunks;
const dxt5_endpoint_optimizer::results& alpha_results = alpha_optimizer_results[a][layout_index];
const uint8* pAlpha_selectors = layout_alpha_selectors[a][layout_index];
output.m_tiles[t].m_endpoint_cluster_index = 0;
output.m_tiles[t].m_first_endpoint = alpha_results.m_first_endpoint;
output.m_tiles[t].m_second_endpoint = alpha_results.m_second_endpoint;
memcpy(output.m_tiles[t].m_selectors, pAlpha_selectors, cChunkPixelWidth * cChunkPixelHeight);
output.m_tiles[t].m_alpha_encoding = alpha_results.m_block_type != 0;
}
} // t
} // q
if (m_params.m_debugging)
{
for (uint y = 0; y < cChunkPixelHeight; y++)
for (uint x = 0; x < cChunkPixelWidth; x++)
m_dbg_chunk_pixels[chunk_index](x, y) = decomp_chunk[best_encoding](x, y);
for (uint t = 0; t < g_chunk_encodings[best_encoding].m_num_tiles; t++)
{
const uint layout_index = g_chunk_encodings[best_encoding].m_tiles[t].m_layout_index;
const chunk_tile_desc& tile_desc = g_chunk_tile_layouts[layout_index];
for (uint ty = 0; ty < tile_desc.m_height; ty++)
for (uint tx = 0; tx < tile_desc.m_width; tx++)
m_dbg_chunk_pixels_tile_vis[chunk_index](tile_desc.m_x_ofs + tx, tile_desc.m_y_ofs + ty) = g_tile_layout_colors[layout_index];
}
}
} // chunk_index
}
bool dxt_hc::determine_compressed_chunks()
{
utils::zero_object(m_encoding_hist);
for (uint i = 0; i < cNumCompressedChunkVecs; i++)
m_compressed_chunks[i].clear();
if (m_has_color_blocks)
m_compressed_chunks[cColorChunks].resize(m_num_chunks);
for (uint a = 0; a < m_num_alpha_blocks; a++)
m_compressed_chunks[cAlpha0Chunks + a].resize(m_num_chunks);
if (m_params.m_debugging)
{
m_dbg_chunk_pixels.resize(m_num_chunks);
m_dbg_chunk_pixels_tile_vis.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
{
m_dbg_chunk_pixels[i].clear();
m_dbg_chunk_pixels_tile_vis[i].clear();
}
}
m_total_tiles = 0;
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::determine_compressed_chunks_task, i);
m_pTask_pool->join();
if (m_canceled)
return false;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
console::info("Total Pixels: %u, Chunks: %u, Blocks: %u, Adapted Tiles: %u", m_num_chunks * cChunkPixelWidth * cChunkPixelHeight, m_num_chunks, m_num_chunks * cChunkBlockWidth * cChunkBlockHeight, m_total_tiles);
console::info("Chunk encoding type symbol_histogram: ");
for (uint e = 0; e < cNumChunkEncodings; e++)
console::info("%u ", m_encoding_hist[e]);
console::info("Blocks per chunk encoding type: ");
for (uint e = 0; e < cNumChunkEncodings; e++)
console::info("%u ", m_encoding_hist[e] * cChunkBlockWidth * cChunkBlockHeight);
}
#endif
return true;
}
void dxt_hc::assign_color_endpoint_clusters_task(uint64 data, void* pData_ptr)
{
const uint thread_index = (uint)data;
assign_color_endpoint_clusters_state& state = *static_cast<assign_color_endpoint_clusters_state*>(pData_ptr);
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((chunk_index & 63) == 0))
{
if (!update_progress(2, chunk_index, m_num_chunks))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((chunk_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
uint cluster_index = state.m_vq.find_best_codebook_entry_fs(state.m_training_vecs[chunk_index][tile_index]);
chunk.m_endpoint_cluster_index[tile_index] = static_cast<uint16>(cluster_index);
}
}
}
bool dxt_hc::determine_color_endpoint_clusters()
{
if (!m_has_color_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Generating color training vectors");
#endif
const float r_scale = .5f;
const float b_scale = .25f;
vec6F_tree_vq vq;
crnlib::vector< crnlib::vector<vec6F> > training_vecs;
training_vecs.resize(m_num_chunks);
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if ((chunk_index & 255) == 0)
{
if (!update_progress(1, chunk_index, m_num_chunks))
return false;
}
const compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
training_vecs[chunk_index].resize(chunk.m_num_tiles);
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
tree_clusterizer<vec3F> palettizer;
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
const color_quad_u8& c = m_pChunks[chunk_index](layout.m_x_ofs + x, layout.m_y_ofs + y);
vec3F v;
if (m_params.m_perceptual)
{
v.set(c[0] * 1.0f/255.0f, c[1] * 1.0f/255.0f, c[2] * 1.0f/255.0f);
v[0] *= r_scale;
v[2] *= b_scale;
}
else
{
v.set(c[0] * 1.0f/255.0f, c[1] * 1.0f/255.0f, c[2] * 1.0f/255.0f);
}
palettizer.add_training_vec(v, 1);
}
}
palettizer.generate_codebook(2);
uint tile_weight = tile.m_pixel_width * tile.m_pixel_height;
tile_weight = static_cast<uint>(tile_weight * m_pChunks[chunk_index].m_weight);
vec3F v[2];
utils::zero_object(v);
for (uint i = 0; i < palettizer.get_codebook_size(); i++)
v[i] = palettizer.get_codebook_entry(i);
if (palettizer.get_codebook_size() == 1)
v[1] = v[0];
if (v[0].length() > v[1].length())
utils::swap(v[0], v[1]);
vec6F vv;
for (uint i = 0; i < 2; i++)
{
vv[i*3+0] = v[i][0];
vv[i*3+1] = v[i][1];
vv[i*3+2] = v[i][2];
}
vq.add_training_vec(vv, tile_weight);
training_vecs[chunk_index][tile_index] = vv;
}
}
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Begin color cluster analysis");
timer t;
t.start();
#endif
uint codebook_size = math::minimum<uint>(m_total_tiles, m_params.m_color_endpoint_codebook_size);
vq.generate_codebook(codebook_size);
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
double total_time = t.get_elapsed_secs();
console::info("Codebook gen time: %3.3fs, Total color clusters: %u", total_time, vq.get_codebook_size());
}
#endif
m_color_clusters.resize(vq.get_codebook_size());
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Begin color cluster assignment");
#endif
assign_color_endpoint_clusters_state state(vq, training_vecs);
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::assign_color_endpoint_clusters_task, i, &state);
m_pTask_pool->join();
if (m_canceled)
return false;
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
uint cluster_index = chunk.m_endpoint_cluster_index[tile_index];
m_color_clusters[cluster_index].m_tiles.push_back( std::make_pair(chunk_index, tile_index) );
}
}
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Completed color cluster assignment");
#endif
return true;
}
void dxt_hc::determine_alpha_endpoint_clusters_task(uint64 data, void* pData_ptr)
{
const uint thread_index = static_cast<uint>(data);
const determine_alpha_endpoint_clusters_state& state = *static_cast<determine_alpha_endpoint_clusters_state*>(pData_ptr);
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((chunk_index & 63) == 0))
{
if (!update_progress(7, m_num_chunks * a + chunk_index, m_num_chunks * m_num_alpha_blocks))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((chunk_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + a][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
uint cluster_index = state.m_vq.find_best_codebook_entry_fs(state.m_training_vecs[a][chunk_index][tile_index]);
chunk.m_endpoint_cluster_index[tile_index] = static_cast<uint16>(cluster_index);
}
}
}
}
bool dxt_hc::determine_alpha_endpoint_clusters()
{
if (!m_num_alpha_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Generating alpha training vectors");
#endif
determine_alpha_endpoint_clusters_state state;
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
state.m_training_vecs[a].resize(m_num_chunks);
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if ((chunk_index & 63) == 0)
{
if (!update_progress(6, m_num_chunks * a + chunk_index, m_num_chunks * m_num_alpha_blocks))
return false;
}
const compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + a][chunk_index];
state.m_training_vecs[a][chunk_index].resize(chunk.m_num_tiles);
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
tree_clusterizer<vec1F> palettizer;
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
uint c = m_pChunks[chunk_index](layout.m_x_ofs + x, layout.m_y_ofs + y)[m_params.m_alpha_component_indices[a]];
vec1F v(c * 1.0f/255.0f);
palettizer.add_training_vec(v, 1);
}
}
palettizer.generate_codebook(2);
const uint tile_weight = tile.m_pixel_width * tile.m_pixel_height;
vec1F v[2];
utils::zero_object(v);
for (uint i = 0; i < palettizer.get_codebook_size(); i++)
v[i] = palettizer.get_codebook_entry(i);
if (palettizer.get_codebook_size() == 1)
v[1] = v[0];
if (v[0] > v[1])
utils::swap(v[0], v[1]);
vec2F vv(v[0][0], v[1][0]);
state.m_vq.add_training_vec(vv, tile_weight);
state.m_training_vecs[a][chunk_index][tile_index] = vv;
} // tile_index
} // chunk_index
} // a
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Begin alpha cluster analysis");
timer t;
t.start();
#endif
uint codebook_size = math::minimum<uint>(m_total_tiles, m_params.m_alpha_endpoint_codebook_size);
state.m_vq.generate_codebook(codebook_size);
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
double total_time = t.get_elapsed_secs();
console::info("Codebook gen time: %3.3fs, Total alpha clusters: %u", total_time, state.m_vq.get_codebook_size());
}
#endif
m_alpha_clusters.resize(state.m_vq.get_codebook_size());
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Begin alpha cluster assignment");
#endif
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::determine_alpha_endpoint_clusters_task, i, &state);
m_pTask_pool->join();
if (m_canceled)
return false;
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + a][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const uint cluster_index = chunk.m_endpoint_cluster_index[tile_index];
m_alpha_clusters[cluster_index].m_tiles.push_back( std::make_pair(chunk_index, tile_index | (a << 16)) );
}
}
}
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Completed alpha cluster assignment");
#endif
return true;
}
void dxt_hc::determine_color_endpoint_codebook_task(uint64 data, void* pData_ptr)
{
pData_ptr;
const uint thread_index = static_cast<uint>(data);
if (!m_has_color_blocks)
return;
crnlib::vector<color_quad_u8> pixels;
pixels.reserve(512);
crnlib::vector<uint8> selectors;
uint total_pixels = 0;
uint total_empty_clusters = 0;
for (uint cluster_index = 0; cluster_index < m_color_clusters.size(); cluster_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((cluster_index & 63) == 0))
{
if (!update_progress(3, cluster_index, m_color_clusters.size()))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((cluster_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
tile_cluster& cluster = m_color_clusters[cluster_index];
if (cluster.m_tiles.empty())
{
total_empty_clusters++;
continue;
}
pixels.resize(0);
for (uint t = 0; t < cluster.m_tiles.size(); t++)
{
const uint chunk_index = cluster.m_tiles[t].first;
const uint tile_index = cluster.m_tiles[t].second;
CRNLIB_ASSERT(chunk_index < m_num_chunks);
CRNLIB_ASSERT(tile_index < cChunkMaxTiles);
const compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
for (uint y = 0; y < layout.m_height; y++)
for (uint x = 0; x < layout.m_width; x++)
pixels.push_back( m_pChunks[chunk_index](layout.m_x_ofs + x, layout.m_y_ofs + y) );
}
total_pixels += pixels.size();
selectors.resize(pixels.size());
dxt1_endpoint_optimizer::params params;
params.m_block_index = cluster_index;
params.m_pPixels = &pixels[0];
params.m_num_pixels = pixels.size();
params.m_pixels_have_alpha = false;
params.m_use_alpha_blocks = false;
params.m_perceptual = m_params.m_perceptual;
params.m_quality = cCRNDXTQualityUber;
params.m_endpoint_caching = false;
dxt1_endpoint_optimizer::results results;
results.m_pSelectors = &selectors[0];
dxt1_endpoint_optimizer optimizer;
const bool all_transparent = optimizer.compute(params, results);
all_transparent;
cluster.m_first_endpoint = results.m_low_color;
cluster.m_second_endpoint = results.m_high_color;
cluster.m_alpha_encoding = results.m_alpha_block;
cluster.m_error = results.m_error;
uint pixel_index = 0;
for (uint t = 0; t < cluster.m_tiles.size(); t++)
{
const uint chunk_index = cluster.m_tiles[t].first;
const uint tile_index = cluster.m_tiles[t].second;
CRNLIB_ASSERT(chunk_index < m_num_chunks);
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
CRNLIB_ASSERT(chunk.m_endpoint_cluster_index[tile_index] == cluster_index);
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
(void)layout; // unused; cast suppresses unused-variable warnings
compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
const uint total_pixels = tile.m_pixel_width * tile.m_pixel_height;
quantized_tile.m_endpoint_cluster_index = cluster_index;
quantized_tile.m_first_endpoint = results.m_low_color;
quantized_tile.m_second_endpoint = results.m_high_color;
//quantized_tile.m_error = results.m_error;
quantized_tile.m_alpha_encoding = results.m_alpha_block;
quantized_tile.m_pixel_width = tile.m_pixel_width;
quantized_tile.m_pixel_height = tile.m_pixel_height;
quantized_tile.m_layout_index = tile.m_layout_index;
memcpy(quantized_tile.m_selectors, &selectors[pixel_index], total_pixels);
pixel_index += total_pixels;
}
}
//CRNLIB_ASSERT(total_pixels == (m_num_chunks * cChunkPixelWidth * cChunkPixelHeight));
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
if (total_empty_clusters)
console::warning("Total empty color clusters: %u", total_empty_clusters);
}
#endif
}
// Computes an optimal DXT1 endpoint pair for each color cluster by running the
// per-cluster optimization tasks across all worker threads.
bool dxt_hc::determine_color_endpoint_codebook()
{
if (!m_has_color_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Computing optimal color cluster endpoints");
#endif
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::determine_color_endpoint_codebook_task, i, NULL);
m_pTask_pool->join();
return !m_canceled;
}
// Worker task: for each alpha cluster assigned to this thread, gathers the
// cluster's tile pixels, runs the DXT5 endpoint optimizer on them, and writes
// the resulting endpoints and selectors back to the quantized tiles.
void dxt_hc::determine_alpha_endpoint_codebook_task(uint64 data, void* pData_ptr)
{
(void)pData_ptr; // unused; cast suppresses unused-variable warnings
const uint thread_index = static_cast<uint>(data);
crnlib::vector<color_quad_u8> pixels;
pixels.reserve(512);
crnlib::vector<uint8> selectors;
selectors.reserve(512);
uint total_empty_clusters = 0;
for (uint cluster_index = 0; cluster_index < m_alpha_clusters.size(); cluster_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((cluster_index & 63) == 0))
{
if (!update_progress(8, cluster_index, m_alpha_clusters.size()))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((cluster_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
tile_cluster& cluster = m_alpha_clusters[cluster_index];
if (cluster.m_tiles.empty())
{
total_empty_clusters++;
continue;
}
pixels.resize(0);
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second & 0xFFFFU;
const uint alpha_index = cluster.m_tiles[tile_iter].second >> 16U;
CRNLIB_ASSERT(chunk_index < m_num_chunks);
CRNLIB_ASSERT(tile_index < cChunkMaxTiles);
CRNLIB_ASSERT(alpha_index < m_num_alpha_blocks);
const compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + alpha_index][chunk_index];
CRNLIB_ASSERT(chunk.m_endpoint_cluster_index[tile_index] == cluster_index);
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
color_quad_u8 c(cClear);
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
c[0] = m_pChunks[chunk_index](layout.m_x_ofs + x, layout.m_y_ofs + y)[ m_params.m_alpha_component_indices[alpha_index] ];
pixels.push_back(c);
}
}
}
selectors.resize(pixels.size());
dxt5_endpoint_optimizer::params params;
params.m_block_index = cluster_index;
params.m_pPixels = &pixels[0];
params.m_num_pixels = pixels.size();
params.m_comp_index = 0;
params.m_quality = cCRNDXTQualityUber;
params.m_use_both_block_types = false;
dxt5_endpoint_optimizer::results results;
results.m_pSelectors = &selectors[0];
dxt5_endpoint_optimizer optimizer;
const bool all_transparent = optimizer.compute(params, results);
(void)all_transparent; // result unused; cast suppresses unused-variable warnings
cluster.m_first_endpoint = results.m_first_endpoint;
cluster.m_second_endpoint = results.m_second_endpoint;
cluster.m_alpha_encoding = results.m_block_type != 0;
cluster.m_error = results.m_error;
uint pixel_index = 0;
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second & 0xFFFFU;
const uint alpha_index = cluster.m_tiles[tile_iter].second >> 16U;
CRNLIB_ASSERT(chunk_index < m_num_chunks);
CRNLIB_ASSERT(tile_index < cChunkMaxTiles);
CRNLIB_ASSERT(alpha_index < m_num_alpha_blocks);
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + alpha_index][chunk_index];
CRNLIB_ASSERT(chunk.m_endpoint_cluster_index[tile_index] == cluster_index);
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
const compressed_tile& tile = chunk.m_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[tile.m_layout_index];
(void)layout; // unused; cast suppresses unused-variable warnings
compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
const uint total_pixels = tile.m_pixel_width * tile.m_pixel_height;
quantized_tile.m_endpoint_cluster_index = cluster_index;
quantized_tile.m_first_endpoint = results.m_first_endpoint;
quantized_tile.m_second_endpoint = results.m_second_endpoint;
//quantized_tile.m_error = results.m_error;
quantized_tile.m_alpha_encoding = results.m_block_type != 0;
quantized_tile.m_pixel_width = tile.m_pixel_width;
quantized_tile.m_pixel_height = tile.m_pixel_height;
quantized_tile.m_layout_index = tile.m_layout_index;
memcpy(quantized_tile.m_selectors, &selectors[pixel_index], total_pixels);
pixel_index += total_pixels;
}
} // cluster_index
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
if (total_empty_clusters)
console::warning("Total empty alpha clusters: %u", total_empty_clusters);
}
#endif
}
// Computes an optimal DXT5 endpoint pair for each alpha cluster by running the
// per-cluster optimization tasks across all worker threads.
bool dxt_hc::determine_alpha_endpoint_codebook()
{
if (!m_num_alpha_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Computing optimal alpha cluster endpoints");
#endif
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::determine_alpha_endpoint_codebook_task, i, NULL);
m_pTask_pool->join();
return !m_canceled;
}
// Builds debug images visualizing the quantized color/alpha endpoints and the
// original vs. quantized selectors (only when debugging is enabled).
void dxt_hc::create_quantized_debug_images()
{
if (!m_params.m_debugging)
return;
if (m_has_color_blocks)
{
m_dbg_chunk_pixels_color_quantized.resize(m_num_chunks);
m_dbg_chunk_pixels_quantized_color_selectors.resize(m_num_chunks);
m_dbg_chunk_pixels_orig_color_selectors.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
{
m_dbg_chunk_pixels_color_quantized[i].clear();
m_dbg_chunk_pixels_quantized_color_selectors[i].clear();
m_dbg_chunk_pixels_orig_color_selectors[i].clear();
}
}
if (m_num_alpha_blocks)
{
m_dbg_chunk_pixels_alpha_quantized.resize(m_num_chunks);
m_dbg_chunk_pixels_quantized_alpha_selectors.resize(m_num_chunks);
m_dbg_chunk_pixels_orig_alpha_selectors.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
{
m_dbg_chunk_pixels_alpha_quantized[i].clear();
m_dbg_chunk_pixels_quantized_alpha_selectors[i].clear();
m_dbg_chunk_pixels_orig_alpha_selectors[i].clear();
}
}
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if (m_has_color_blocks)
{
pixel_chunk& output_chunk_color_quantized = m_dbg_chunk_pixels_color_quantized[chunk_index];
pixel_chunk& output_chunk_selectors = m_dbg_chunk_pixels_quantized_color_selectors[chunk_index];
pixel_chunk& output_chunk_orig_selectors = m_dbg_chunk_pixels_orig_color_selectors[chunk_index];
const compressed_chunk& color_chunk = m_compressed_chunks[cColorChunks][chunk_index];
for (uint tile_index = 0; tile_index < color_chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = color_chunk.m_quantized_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
const uint8* pColor_Selectors = quantized_tile.m_selectors;
color_quad_u8 block_colors[cDXT1SelectorValues];
CRNLIB_ASSERT(quantized_tile.m_first_endpoint >= quantized_tile.m_second_endpoint);
dxt1_block::get_block_colors(block_colors, static_cast<uint16>(quantized_tile.m_first_endpoint), static_cast<uint16>(quantized_tile.m_second_endpoint));
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
const uint selector = pColor_Selectors[x + y * layout.m_width];
output_chunk_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs) = selector*255/(cDXT1SelectorValues-1);
output_chunk_orig_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs) = color_chunk.m_tiles[tile_index].m_selectors[x + y * layout.m_width] * 255 / (cDXT1SelectorValues-1);
output_chunk_color_quantized(x + layout.m_x_ofs, y + layout.m_y_ofs) = block_colors[selector];
}
}
}
}
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
pixel_chunk& output_chunk_alpha_quantized = m_dbg_chunk_pixels_alpha_quantized[chunk_index];
pixel_chunk& output_chunk_selectors = m_dbg_chunk_pixels_quantized_alpha_selectors[chunk_index];
pixel_chunk& output_chunk_orig_selectors = m_dbg_chunk_pixels_orig_alpha_selectors[chunk_index];
const compressed_chunk& alpha_chunk = m_compressed_chunks[cAlpha0Chunks + a][chunk_index];
for (uint tile_index = 0; tile_index < alpha_chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = alpha_chunk.m_quantized_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
const uint8* pAlpha_selectors = quantized_tile.m_selectors;
uint block_values[cDXT5SelectorValues];
CRNLIB_ASSERT(quantized_tile.m_first_endpoint >= quantized_tile.m_second_endpoint);
dxt5_block::get_block_values(block_values, quantized_tile.m_first_endpoint, quantized_tile.m_second_endpoint);
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
const uint selector = pAlpha_selectors[x + y * layout.m_width];
CRNLIB_ASSERT(selector < cDXT5SelectorValues);
output_chunk_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs)[m_params.m_alpha_component_indices[a]] = static_cast<uint8>(selector*255/(cDXT5SelectorValues-1));
output_chunk_orig_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs)[m_params.m_alpha_component_indices[a]] = static_cast<uint8>(alpha_chunk.m_tiles[tile_index].m_selectors[x + y * layout.m_width]*255/(cDXT5SelectorValues-1));
output_chunk_alpha_quantized(x + layout.m_x_ofs, y + layout.m_y_ofs)[m_params.m_alpha_component_indices[a]] = static_cast<uint8>(block_values[selector]);
}
}
}
} // a
}
}
// Worker task: assigns each block the selector codebook entry with the lowest
// reconstruction error against the original pixels, and records which blocks
// reference each codebook entry.
void dxt_hc::create_selector_codebook_task(uint64 data, void* pData_ptr)
{
const uint thread_index = static_cast<uint>(data);
const create_selector_codebook_state& state = *static_cast<create_selector_codebook_state*>(pData_ptr);
for (uint comp_chunk_index = state.m_comp_index_start; comp_chunk_index <= state.m_comp_index_end; comp_chunk_index++)
{
const uint alpha_index = state.m_alpha_blocks ? (comp_chunk_index - cAlpha0Chunks) : 0;
const uint alpha_pixel_comp = state.m_alpha_blocks ? m_params.m_alpha_component_indices[alpha_index] : 0;
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if (m_canceled)
return;
if ((crn_get_current_thread_id() == m_main_thread_id) && ((chunk_index & 127) == 0))
{
if (!update_progress(12 + comp_chunk_index, chunk_index, m_num_chunks))
return;
}
if (m_pTask_pool->get_num_threads())
{
if ((chunk_index % (m_pTask_pool->get_num_threads() + 1)) != thread_index)
continue;
}
compressed_chunk& chunk = m_compressed_chunks[comp_chunk_index][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
const uint tile_blocks_x = layout.m_width >> 2;
const uint tile_blocks_y = layout.m_height >> 2;
const uint tile_block_ofs_x = layout.m_x_ofs >> 2;
const uint tile_block_ofs_y = layout.m_y_ofs >> 2;
if (state.m_alpha_blocks)
{
uint block_values[cDXT5SelectorValues];
dxt5_block::get_block_values(block_values, quantized_tile.m_first_endpoint, quantized_tile.m_second_endpoint);
for (uint by = 0; by < tile_blocks_y; by++)
{
for (uint bx = 0; bx < tile_blocks_x; bx++)
{
#if 0
uint best_index = selector_vq.find_best_codebook_entry_fs(training_vecs[comp_chunk_index][(tile_block_ofs_x+bx)+(tile_block_ofs_y+by)*2][chunk_index]);
#else
const dxt_pixel_block& block = m_pChunks[chunk_index].m_blocks[tile_block_ofs_y + by][tile_block_ofs_x + bx];
uint best_error = UINT_MAX;
uint best_index = 0;
for (uint i = 0; i < state.m_selectors_cb.size(); i++)
{
const selectors& s = state.m_selectors_cb[i];
uint total_error = 0;
for (uint y = 0; y < cBlockPixelHeight; y++)
{
for (uint x = 0; x < cBlockPixelWidth; x++)
{
int a = block.m_pixels[y][x][alpha_pixel_comp];
int b = block_values[s.m_selectors[y][x]];
int error = a - b;
error *= error;
total_error += error;
if (total_error > best_error)
goto early_out;
} // x
} //y
early_out:
if (total_error < best_error)
{
best_error = total_error;
best_index = i;
if (best_error == 0)
break;
}
} // i
#endif
CRNLIB_ASSERT( (tile_block_ofs_x + bx) < 2 );
CRNLIB_ASSERT( (tile_block_ofs_y + by) < 2 );
chunk.m_selector_cluster_index[tile_block_ofs_y + by][tile_block_ofs_x + bx] = static_cast<uint16>(best_index);
{
scoped_spinlock lock(state.m_chunk_blocks_using_selectors_lock);
state.m_chunk_blocks_using_selectors[best_index].push_back( block_id(chunk_index, alpha_index, tile_index, tile_block_ofs_x + bx, tile_block_ofs_y + by ) );
}
} // bx
} // by
}
else
{
color_quad_u8 block_colors[cDXT1SelectorValues];
dxt1_block::get_block_colors4(block_colors, static_cast<uint16>(quantized_tile.m_first_endpoint), static_cast<uint16>(quantized_tile.m_second_endpoint));
const bool block_with_alpha = quantized_tile.m_first_endpoint == quantized_tile.m_second_endpoint;
for (uint by = 0; by < tile_blocks_y; by++)
{
for (uint bx = 0; bx < tile_blocks_x; bx++)
{
const dxt_pixel_block& block = m_pChunks[chunk_index].m_blocks[tile_block_ofs_y + by][tile_block_ofs_x + bx];
uint best_error = UINT_MAX;
uint best_index = 0;
for (uint i = 0; i < state.m_selectors_cb.size(); i++)
{
const selectors& s = state.m_selectors_cb[i];
uint total_error = 0;
for (uint y = 0; y < cBlockPixelHeight; y++)
{
for (uint x = 0; x < cBlockPixelWidth; x++)
{
const color_quad_u8& a = block.m_pixels[y][x];
uint selector_index = s.m_selectors[y][x];
if ((block_with_alpha) && (selector_index == 3))
total_error += 999999;
const color_quad_u8& b = block_colors[selector_index];
uint error = color::color_distance(m_params.m_perceptual, a, b, false);
total_error += error;
if (total_error > best_error)
goto early_out2;
} // x
} //y
early_out2:
if (total_error < best_error)
{
best_error = total_error;
best_index = i;
if (best_error == 0)
break;
}
} // i
CRNLIB_ASSERT( (tile_block_ofs_x + bx) < 2 );
CRNLIB_ASSERT( (tile_block_ofs_y + by) < 2 );
chunk.m_selector_cluster_index[tile_block_ofs_y + by][tile_block_ofs_x + bx] = static_cast<uint16>(best_index);
{
scoped_spinlock lock(state.m_chunk_blocks_using_selectors_lock);
state.m_chunk_blocks_using_selectors[best_index].push_back( block_id(chunk_index, 0, tile_index, tile_block_ofs_x + bx, tile_block_ofs_y + by ) );
}
} // bx
} // by
} // if alpha_blocks
} // tile_index
} // chunk_index
} // comp_chunk_index
}
// Trains a selector codebook (color or alpha) via tree-structured VQ over
// per-block selector vectors, then assigns each block its best-matching entry.
bool dxt_hc::create_selector_codebook(bool alpha_blocks)
{
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Computing selector training vectors");
#endif
const uint cColorDistToWeight = 2000;
const uint cAlphaErrorToWeight = 8;
vec16F_tree_vq selector_vq;
uint comp_index_start = cColorChunks;
uint comp_index_end = cColorChunks;
if (alpha_blocks)
{
comp_index_start = cAlpha0Chunks;
comp_index_end = cAlpha0Chunks + m_num_alpha_blocks - 1;
}
crnlib::vector<vec16F> training_vecs[cNumCompressedChunkVecs][4];
for (uint comp_chunk_index = comp_index_start; comp_chunk_index <= comp_index_end; comp_chunk_index++)
{
for (uint i = 0; i < 4; i++)
training_vecs[comp_chunk_index][i].resize(m_num_chunks);
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if ((chunk_index & 63) == 0)
{
if (!update_progress(9 + comp_chunk_index, chunk_index, m_num_chunks))
return false;
}
const compressed_chunk& chunk = m_compressed_chunks[comp_chunk_index][chunk_index];
uint8 block_selectors[cChunkBlockWidth][cChunkBlockHeight][cBlockPixelWidth * cBlockPixelHeight];
uint block_weight[cChunkBlockWidth][cChunkBlockHeight];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
uint weight;
if (comp_chunk_index == cColorChunks)
{
const color_quad_u8 first_color(dxt1_block::unpack_color(static_cast<uint16>(quantized_tile.m_first_endpoint), true));
const color_quad_u8 second_color(dxt1_block::unpack_color(static_cast<uint16>(quantized_tile.m_second_endpoint), true));
const uint dist = color::color_distance(m_params.m_perceptual, first_color, second_color, false);
weight = dist / cColorDistToWeight;
weight = static_cast<uint>(weight * m_pChunks[chunk_index].m_weight);
}
else
{
int first_endpoint = quantized_tile.m_first_endpoint;
int second_endpoint = quantized_tile.m_second_endpoint;
int error = first_endpoint - second_endpoint;
error = error * error;
weight = static_cast<uint>(error / cAlphaErrorToWeight);
}
const uint cMaxWeight = 2048;
weight = math::clamp<uint>(weight, 1U, cMaxWeight);
// Heuristic: scale the weight by up to ~15%, favoring chunks with lower encoding indices.
float f = math::lerp(1.15f, 1.0f, chunk.m_encoding_index / float(cNumChunkEncodings - 1));
weight = (uint)(weight * f);
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
for (uint y = 0; y < (layout.m_height >> 2); y++)
for (uint x = 0; x < (layout.m_width >> 2); x++)
block_weight[x + (layout.m_x_ofs >> 2)][y + (layout.m_y_ofs >> 2)] = weight;
const uint8* pSelectors = quantized_tile.m_selectors;
for (uint y = 0; y < layout.m_height; y++)
{
const uint cy = y + layout.m_y_ofs;
for (uint x = 0; x < layout.m_width; x++)
{
const uint selector = pSelectors[x + y * layout.m_width];
if (comp_chunk_index == cColorChunks)
CRNLIB_ASSERT(selector < cDXT1SelectorValues);
else
CRNLIB_ASSERT(selector < cDXT5SelectorValues);
const uint cx = x + layout.m_x_ofs;
block_selectors[cx >> 2][cy >> 2][(cx & 3) + (cy & 3) * 4] = static_cast<uint8>(selector);
} // x
} // y
} // tile_index
vec16F v;
for (uint y = 0; y < cChunkBlockHeight; y++)
{
for (uint x = 0; x < cChunkBlockWidth; x++)
{
for (uint i = 0; i < cBlockPixelWidth * cBlockPixelHeight; i++)
{
uint s = block_selectors[x][y][i];
float f;
if (comp_chunk_index == cColorChunks)
{
CRNLIB_ASSERT(s < cDXT1SelectorValues);
f = (g_dxt1_to_linear[s] + .5f) * 1.0f/4.0f;
}
else
{
CRNLIB_ASSERT(s < cDXT5SelectorValues);
f = (g_dxt5_to_linear[s] + .5f) * 1.0f/8.0f;
}
CRNLIB_ASSERT((f >= 0.0f) && (f <= 1.0f));
v[i] = f;
} // i
selector_vq.add_training_vec(v, block_weight[x][y]);
training_vecs[comp_chunk_index][x+y*2][chunk_index] = v;
} // x
} // y
} // chunk_index
} // comp_chunk_index
timer t;
t.start();
selector_vq.generate_codebook(alpha_blocks ? m_params.m_alpha_selector_codebook_size : m_params.m_color_selector_codebook_size);
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
{
double total_time = t.get_elapsed_secs();
console::info("Codebook gen time: %3.3fs, Selector codebook size: %u", total_time, selector_vq.get_codebook_size());
}
#endif
selectors_vec& selectors_cb = alpha_blocks ? m_alpha_selectors : m_color_selectors;
selectors_cb.resize(selector_vq.get_codebook_size());
for (uint i = 0; i < selector_vq.get_codebook_size(); i++)
{
const vec16F& v = selector_vq.get_codebook_entry(i);
for (uint j = 0; j < cBlockPixelWidth * cBlockPixelHeight; j++)
{
int s;
if (alpha_blocks)
{
s = math::clamp<int>(static_cast<int>(v[j] * 8.0f), 0, 7);
s = g_dxt5_from_linear[s];
}
else
{
s = math::clamp<int>(static_cast<int>(v[j] * 4.0f), 0, 3);
s = g_dxt1_from_linear[s];
}
selectors_cb[i].m_selectors[j >> 2][j & 3] = static_cast<uint8>(s);
} // j
} // i
chunk_blocks_using_selectors_vec& chunk_blocks_using_selectors = alpha_blocks ? m_chunk_blocks_using_alpha_selectors : m_chunk_blocks_using_color_selectors;
chunk_blocks_using_selectors.clear();
chunk_blocks_using_selectors.resize(selectors_cb.size());
create_selector_codebook_state state(*this, alpha_blocks, comp_index_start, comp_index_end, selector_vq, chunk_blocks_using_selectors, selectors_cb);
for (uint i = 0; i <= m_pTask_pool->get_num_threads(); i++)
m_pTask_pool->queue_object_task(this, &dxt_hc::create_selector_codebook_task, i, &state);
m_pTask_pool->join();
return !m_canceled;
}
// Re-optimizes each color selector codebook entry, per pixel, against every
// block that references it.
bool dxt_hc::refine_quantized_color_selectors()
{
if (!m_has_color_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Refining quantized color selectors");
#endif
uint total_refined_selectors = 0;
uint total_refined_pixels = 0;
uint total_selectors = 0;
for (uint selector_index = 0; selector_index < m_color_selectors.size(); selector_index++)
{
if ((selector_index & 255) == 0)
{
if (!update_progress(15, selector_index, m_color_selectors.size()))
return false;
}
if (m_chunk_blocks_using_color_selectors[selector_index].empty())
continue;
selectors& sel = m_color_selectors[selector_index];
for (uint y = 0; y < cBlockPixelHeight; y++)
{
for (uint x = 0; x < cBlockPixelWidth; x++)
{
uint best_s = 0;
uint best_error = UINT_MAX;
for (uint s = 0; s < cDXT1SelectorValues; s++)
{
uint total_error = 0;
for (uint block_iter = 0; block_iter < m_chunk_blocks_using_color_selectors[selector_index].size(); block_iter++)
{
const block_id& id = m_chunk_blocks_using_color_selectors[selector_index][block_iter];
const uint chunk_index = id.m_chunk_index;
const uint tile_index = id.m_tile_index;
const uint chunk_block_x = id.m_block_x;
const uint chunk_block_y = id.m_block_y;
CRNLIB_ASSERT((chunk_block_x < cChunkBlockWidth) && (chunk_block_y < cChunkBlockHeight));
const compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
CRNLIB_ASSERT(chunk.m_selector_cluster_index[chunk_block_y][chunk_block_x] == selector_index);
const compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
//const chunk_tile_desc& tile_desc = g_chunk_tile_layouts[tile.m_layout_index];
color_quad_u8 block_colors[cDXT1SelectorValues];
CRNLIB_ASSERT(tile.m_first_endpoint >= tile.m_second_endpoint);
dxt1_block::get_block_colors4(block_colors, static_cast<uint16>(tile.m_first_endpoint), static_cast<uint16>(tile.m_second_endpoint));
if ((tile.m_first_endpoint == tile.m_second_endpoint) && (s == 3))
total_error += 999999;
const color_quad_u8& orig_pixel = m_pChunks[chunk_index](chunk_block_x * cBlockPixelWidth + x, chunk_block_y * cBlockPixelHeight + y);
const color_quad_u8& quantized_pixel = block_colors[s];
const uint error = color::color_distance(m_params.m_perceptual, orig_pixel, quantized_pixel, false);
total_error += error;
} // block_iter
if (total_error < best_error)
{
best_error = total_error;
best_s = s;
}
} // s
if (sel.m_selectors[y][x] != best_s)
{
total_refined_selectors++;
total_refined_pixels += m_chunk_blocks_using_color_selectors[selector_index].size();
sel.m_selectors[y][x] = static_cast<uint8>(best_s);
}
total_selectors++;
} //x
} //y
} // selector_index
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Total refined pixels: %u, selectors: %u out of %u", total_refined_pixels, total_refined_selectors, total_selectors);
#endif
return true;
}
// Re-optimizes each alpha selector codebook entry, per pixel, against every
// block that references it.
bool dxt_hc::refine_quantized_alpha_selectors()
{
if (!m_num_alpha_blocks)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Refining quantized alpha selectors");
#endif
uint total_refined_selectors = 0;
uint total_refined_pixels = 0;
uint total_selectors = 0;
for (uint selector_index = 0; selector_index < m_alpha_selectors.size(); selector_index++)
{
if ((selector_index & 255) == 0)
{
if (!update_progress(16, selector_index, m_alpha_selectors.size()))
return false;
}
if (m_chunk_blocks_using_alpha_selectors[selector_index].empty())
continue;
selectors& sel = m_alpha_selectors[selector_index];
for (uint y = 0; y < cBlockPixelHeight; y++)
{
for (uint x = 0; x < cBlockPixelWidth; x++)
{
uint best_s = 0;
uint best_error = UINT_MAX;
for (uint s = 0; s < cDXT5SelectorValues; s++)
{
uint total_error = 0;
for (uint block_iter = 0; block_iter < m_chunk_blocks_using_alpha_selectors[selector_index].size(); block_iter++)
{
const block_id& id = m_chunk_blocks_using_alpha_selectors[selector_index][block_iter];
const uint chunk_index = id.m_chunk_index;
const uint tile_index = id.m_tile_index;
const uint chunk_block_x = id.m_block_x;
const uint chunk_block_y = id.m_block_y;
const uint alpha_index = id.m_alpha_index;
CRNLIB_ASSERT(alpha_index < m_num_alpha_blocks);
CRNLIB_ASSERT((chunk_block_x < cChunkBlockWidth) && (chunk_block_y < cChunkBlockHeight));
const compressed_chunk& chunk = m_compressed_chunks[alpha_index + cAlpha0Chunks][chunk_index];
CRNLIB_ASSERT(tile_index < chunk.m_num_tiles);
CRNLIB_ASSERT(chunk.m_selector_cluster_index[chunk_block_y][chunk_block_x] == selector_index);
const compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
//const chunk_tile_desc& tile_desc = g_chunk_tile_layouts[tile.m_layout_index];
uint block_values[cDXT5SelectorValues];
CRNLIB_ASSERT(tile.m_first_endpoint >= tile.m_second_endpoint);
dxt5_block::get_block_values(block_values, tile.m_first_endpoint, tile.m_second_endpoint);
int orig_value = m_pChunks[chunk_index](chunk_block_x * cBlockPixelWidth + x, chunk_block_y * cBlockPixelHeight + y)[m_params.m_alpha_component_indices[alpha_index]];
int quantized_value = block_values[s];
int error = (orig_value - quantized_value);
error *= error;
total_error += error;
} // block_iter
if (total_error < best_error)
{
best_error = total_error;
best_s = s;
}
} // s
if (sel.m_selectors[y][x] != best_s)
{
total_refined_selectors++;
total_refined_pixels += m_chunk_blocks_using_alpha_selectors[selector_index].size();
sel.m_selectors[y][x] = static_cast<uint8>(best_s);
}
total_selectors++;
} //x
} //y
} // selector_index
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Total refined pixels: %u, selectors: %u out of %u", total_refined_pixels, total_refined_selectors, total_selectors);
#endif
return true;
}
// Re-optimizes each color cluster's endpoints against the cluster's pixels and
// quantized selectors, accepting the result only if it beats the current error.
bool dxt_hc::refine_quantized_color_endpoints()
{
if (!m_has_color_blocks)
return true;
uint total_refined_tiles = 0;
uint total_refined_pixels = 0;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Refining quantized color endpoints");
#endif
for (uint cluster_index = 0; cluster_index < m_color_clusters.size(); cluster_index++)
{
if ((cluster_index & 255) == 0)
{
if (!update_progress(17, cluster_index, m_color_clusters.size()))
return false;
}
tile_cluster& cluster = m_color_clusters[cluster_index];
uint total_pixels = 0;
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second;
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
CRNLIB_ASSERT(tile.m_first_endpoint == cluster.m_first_endpoint);
CRNLIB_ASSERT(tile.m_second_endpoint == cluster.m_second_endpoint);
total_pixels += (tile.m_pixel_width * tile.m_pixel_height);
}
if (!total_pixels)
continue;
crnlib::vector<color_quad_u8> pixels;
crnlib::vector<uint8> selectors;
pixels.reserve(total_pixels);
selectors.reserve(total_pixels);
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second;
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
const pixel_chunk& src_pixels = m_pChunks[chunk_index];
CRNLIB_ASSERT(tile.m_first_endpoint == cluster.m_first_endpoint);
CRNLIB_ASSERT(tile.m_second_endpoint == cluster.m_second_endpoint);
const chunk_tile_desc& tile_layout = g_chunk_tile_layouts[tile.m_layout_index];
for (uint y = 0; y < tile.m_pixel_height; y++)
{
for (uint x = 0; x < tile.m_pixel_width; x++)
{
selectors.push_back(tile.m_selectors[x + y * tile.m_pixel_width]);
pixels.push_back(src_pixels(x + tile_layout.m_x_ofs, y + tile_layout.m_y_ofs));
}
}
}
dxt_endpoint_refiner refiner;
dxt_endpoint_refiner::params p;
dxt_endpoint_refiner::results r;
p.m_perceptual = m_params.m_perceptual;
p.m_pSelectors = &selectors[0];
p.m_pPixels = &pixels[0];
p.m_num_pixels = total_pixels;
p.m_dxt1_selectors = true;
p.m_error_to_beat = cluster.m_error;
p.m_block_index = cluster_index;
if (!refiner.refine(p, r))
continue;
total_refined_tiles++;
total_refined_pixels += total_pixels;
cluster.m_error = r.m_error;
cluster.m_first_endpoint = r.m_low_color;
cluster.m_second_endpoint = r.m_high_color;
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second;
compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
tile.m_first_endpoint = r.m_low_color;
tile.m_second_endpoint = r.m_high_color;
}
}
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Total refined pixels: %u, endpoints: %u out of %u", total_refined_pixels, total_refined_tiles, m_color_clusters.size());
#endif
return true;
}
// Re-optimizes each alpha cluster's endpoints against the cluster's pixels and
// quantized selectors, accepting the result only if it beats the current error.
bool dxt_hc::refine_quantized_alpha_endpoints()
{
if (!m_num_alpha_blocks)
return true;
uint total_refined_tiles = 0;
uint total_refined_pixels = 0;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Refining quantized alpha endpoints");
#endif
for (uint cluster_index = 0; cluster_index < m_alpha_clusters.size(); cluster_index++)
{
if ((cluster_index & 255) == 0)
{
if (!update_progress(18, cluster_index, m_alpha_clusters.size()))
return false;
}
tile_cluster& cluster = m_alpha_clusters[cluster_index];
uint total_pixels = 0;
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second & 0xFFFFU;
const uint alpha_index = cluster.m_tiles[tile_iter].second >> 16U;
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + alpha_index][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
CRNLIB_ASSERT(tile.m_first_endpoint == cluster.m_first_endpoint);
CRNLIB_ASSERT(tile.m_second_endpoint == cluster.m_second_endpoint);
total_pixels += (tile.m_pixel_width * tile.m_pixel_height);
}
if (!total_pixels)
continue;
crnlib::vector<color_quad_u8> pixels;
crnlib::vector<uint8> selectors;
pixels.reserve(total_pixels);
selectors.reserve(total_pixels);
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second & 0xFFFFU;
const uint alpha_index = cluster.m_tiles[tile_iter].second >> 16U;
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + alpha_index][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
const pixel_chunk& src_pixels = m_pChunks[chunk_index];
CRNLIB_ASSERT(tile.m_first_endpoint == cluster.m_first_endpoint);
CRNLIB_ASSERT(tile.m_second_endpoint == cluster.m_second_endpoint);
const chunk_tile_desc& tile_layout = g_chunk_tile_layouts[tile.m_layout_index];
for (uint y = 0; y < tile.m_pixel_height; y++)
{
for (uint x = 0; x < tile.m_pixel_width; x++)
{
selectors.push_back(tile.m_selectors[x + y * tile.m_pixel_width]);
pixels.push_back(color_quad_u8(src_pixels(x + tile_layout.m_x_ofs, y + tile_layout.m_y_ofs)[m_params.m_alpha_component_indices[alpha_index]]));
}
}
}
dxt_endpoint_refiner refiner;
dxt_endpoint_refiner::params p;
dxt_endpoint_refiner::results r;
p.m_perceptual = m_params.m_perceptual;
p.m_pSelectors = &selectors[0];
p.m_pPixels = &pixels[0];
p.m_num_pixels = total_pixels;
p.m_dxt1_selectors = false;
p.m_error_to_beat = cluster.m_error;
p.m_block_index = cluster_index;
if (!refiner.refine(p, r))
continue;
total_refined_tiles++;
total_refined_pixels += total_pixels;
cluster.m_error = r.m_error;
cluster.m_first_endpoint = r.m_low_color;
cluster.m_second_endpoint = r.m_high_color;
for (uint tile_iter = 0; tile_iter < cluster.m_tiles.size(); tile_iter++)
{
const uint chunk_index = cluster.m_tiles[tile_iter].first;
const uint tile_index = cluster.m_tiles[tile_iter].second & 0xFFFFU;
const uint alpha_index = cluster.m_tiles[tile_iter].second >> 16U;
compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + alpha_index][chunk_index];
compressed_tile& tile = chunk.m_quantized_tiles[tile_index];
tile.m_first_endpoint = r.m_low_color;
tile.m_second_endpoint = r.m_high_color;
}
}
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
console::info("Total refined pixels: %u, endpoints: %u out of %u", total_refined_pixels, total_refined_tiles, m_alpha_clusters.size());
#endif
return true;
}
void dxt_hc::create_final_debug_image()
{
if (!m_params.m_debugging)
return;
m_dbg_chunk_pixels_final.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
m_dbg_chunk_pixels_final[i].clear();
if (m_has_color_blocks)
{
m_dbg_chunk_pixels_final_color_selectors.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
m_dbg_chunk_pixels_final_color_selectors[i].clear();
}
if (m_num_alpha_blocks)
{
m_dbg_chunk_pixels_final_alpha_selectors.resize(m_num_chunks);
for (uint i = 0; i < m_num_chunks; i++)
m_dbg_chunk_pixels_final_alpha_selectors[i].clear();
}
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
pixel_chunk& output_chunk_final = m_dbg_chunk_pixels_final[chunk_index];
if (m_has_color_blocks)
{
const compressed_chunk& chunk = m_compressed_chunks[cColorChunks][chunk_index];
pixel_chunk& output_chunk_quantized_color_selectors = m_dbg_chunk_pixels_final_color_selectors[chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
color_quad_u8 block_colors[cDXT1SelectorValues];
dxt1_block::get_block_colors(block_colors, static_cast<uint16>(quantized_tile.m_first_endpoint), static_cast<uint16>(quantized_tile.m_second_endpoint));
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
const uint chunk_x_ofs = x + layout.m_x_ofs;
const uint chunk_y_ofs = y + layout.m_y_ofs;
const uint block_x = chunk_x_ofs >> 2;
const uint block_y = chunk_y_ofs >> 2;
const selectors& s = m_color_selectors[chunk.m_selector_cluster_index[block_y][block_x]];
uint selector = s.m_selectors[chunk_y_ofs & 3][chunk_x_ofs & 3];
output_chunk_final(x + layout.m_x_ofs, y + layout.m_y_ofs) = block_colors[selector];
output_chunk_quantized_color_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs) = g_tile_layout_colors[selector];
}
}
}
}
if (m_num_alpha_blocks)
{
pixel_chunk& output_chunk_quantized_alpha_selectors = m_dbg_chunk_pixels_final_alpha_selectors[chunk_index];
for (uint a = 0; a < m_num_alpha_blocks; a++)
{
const compressed_chunk& chunk = m_compressed_chunks[cAlpha0Chunks + a][chunk_index];
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
const chunk_tile_desc& layout = g_chunk_tile_layouts[quantized_tile.m_layout_index];
uint block_values[cDXT5SelectorValues];
// purposely call the general version to debug single-color alpha6 blocks
CRNLIB_ASSERT(quantized_tile.m_first_endpoint >= quantized_tile.m_second_endpoint);
dxt5_block::get_block_values(block_values, quantized_tile.m_first_endpoint, quantized_tile.m_second_endpoint);
for (uint y = 0; y < layout.m_height; y++)
{
for (uint x = 0; x < layout.m_width; x++)
{
const uint chunk_x_ofs = x + layout.m_x_ofs;
const uint chunk_y_ofs = y + layout.m_y_ofs;
const uint block_x = chunk_x_ofs >> 2;
const uint block_y = chunk_y_ofs >> 2;
const selectors& s = m_alpha_selectors[chunk.m_selector_cluster_index[block_y][block_x]];
uint selector = s.m_selectors[chunk_y_ofs & 3][chunk_x_ofs & 3];
CRNLIB_ASSERT(selector < cDXT5SelectorValues);
output_chunk_final(x + layout.m_x_ofs, y + layout.m_y_ofs)[m_params.m_alpha_component_indices[a]] = static_cast<uint8>(block_values[selector]);
output_chunk_quantized_alpha_selectors(x + layout.m_x_ofs, y + layout.m_y_ofs)[m_params.m_alpha_component_indices[a]] = static_cast<uint8>(selector*255/(cDXT5SelectorValues-1));
} //x
} // y
} // tile_index
} // a
}
} // chunk_index
}
bool dxt_hc::create_chunk_encodings()
{
m_chunk_encoding.resize(m_num_chunks);
for (uint chunk_index = 0; chunk_index < m_num_chunks; chunk_index++)
{
if ((chunk_index & 255) == 0)
{
if (!update_progress(19, chunk_index, m_num_chunks))
return false;
}
chunk_encoding& encoding = m_chunk_encoding[chunk_index];
for (uint q = 0; q < cNumCompressedChunkVecs; q++)
{
bool skip = true;
if (q == cColorChunks)
{
if (m_has_color_blocks)
skip = false;
}
else if (q <= m_num_alpha_blocks)
skip = false;
if (skip)
continue;
CRNLIB_ASSERT(!m_compressed_chunks[q].empty());
const compressed_chunk& chunk = m_compressed_chunks[q][chunk_index];
CRNLIB_ASSERT(chunk.m_encoding_index < cNumChunkEncodings);
encoding.m_encoding_index = static_cast<uint8>(chunk.m_encoding_index);
CRNLIB_ASSERT(chunk.m_num_tiles <= cChunkMaxTiles);
encoding.m_num_tiles = static_cast<uint8>(chunk.m_num_tiles);
for (uint tile_index = 0; tile_index < chunk.m_num_tiles; tile_index++)
{
const compressed_tile& quantized_tile = chunk.m_quantized_tiles[tile_index];
if (!q)
{
CRNLIB_ASSERT(quantized_tile.m_endpoint_cluster_index < m_color_clusters.size());
}
else
{
CRNLIB_ASSERT(quantized_tile.m_endpoint_cluster_index < m_alpha_clusters.size());
}
encoding.m_endpoint_indices[q][tile_index] = static_cast<uint16>(quantized_tile.m_endpoint_cluster_index);
}
for (uint y = 0; y < cChunkBlockHeight; y++)
{
for (uint x = 0; x < cChunkBlockWidth; x++)
{
const uint selector_index = chunk.m_selector_cluster_index[y][x];
if (!q)
{
CRNLIB_ASSERT(selector_index < m_color_selectors.size());
}
else
{
CRNLIB_ASSERT(selector_index < m_alpha_selectors.size());
}
encoding.m_selector_indices[q][y][x] = static_cast<uint16>(selector_index);
}
}
} // q
} // chunk_index
if (m_has_color_blocks)
{
m_color_endpoints.resize(m_color_clusters.size());
for (uint i = 0; i < m_color_clusters.size(); i++)
m_color_endpoints[i] = dxt1_block::pack_endpoints(m_color_clusters[i].m_first_endpoint, m_color_clusters[i].m_second_endpoint);
}
if (m_num_alpha_blocks)
{
m_alpha_endpoints.resize(m_alpha_clusters.size());
for (uint i = 0; i < m_alpha_clusters.size(); i++)
m_alpha_endpoints[i] = dxt5_block::pack_endpoints(m_alpha_clusters[i].m_first_endpoint, m_alpha_clusters[i].m_second_endpoint);
}
return true;
}
void dxt_hc::create_debug_image_from_chunks(uint num_chunks_x, uint num_chunks_y, const pixel_chunk_vec& chunks, const chunk_encoding_vec *pChunk_encodings, image_u8& img, bool serpentine_scan, int comp_index)
{
if (chunks.empty())
{
img.set_all(color_quad_u8::make_black());
return;
}
img.resize(num_chunks_x * cChunkPixelWidth, num_chunks_y * cChunkPixelHeight);
for (uint y = 0; y < num_chunks_y; y++)
{
for (uint x = 0; x < num_chunks_x; x++)
{
uint c = x + y * num_chunks_x;
if ((serpentine_scan) && (y & 1))
c = (num_chunks_x - 1 - x) + y * num_chunks_x;
if (comp_index >= 0)
{
for (uint cy = 0; cy < cChunkPixelHeight; cy++)
for (uint cx = 0; cx < cChunkPixelWidth; cx++)
img(x * cChunkPixelWidth + cx, y * cChunkPixelHeight + cy) = chunks[c](cx, cy)[comp_index];
}
else
{
for (uint cy = 0; cy < cChunkPixelHeight; cy++)
for (uint cx = 0; cx < cChunkPixelWidth; cx++)
img(x * cChunkPixelWidth + cx, y * cChunkPixelHeight + cy) = chunks[c](cx, cy);
}
if (pChunk_encodings)
{
const chunk_encoding& chunk = (*pChunk_encodings)[c];
const chunk_encoding_desc &encoding_desc = g_chunk_encodings[chunk.m_encoding_index];
CRNLIB_ASSERT(chunk.m_num_tiles == encoding_desc.m_num_tiles);
for (uint t = 0; t < chunk.m_num_tiles; t++)
{
const chunk_tile_desc &tile_desc = encoding_desc.m_tiles[t];
img.unclipped_fill_box(
x*8 + tile_desc.m_x_ofs, y*8 + tile_desc.m_y_ofs,
tile_desc.m_width + 1, tile_desc.m_height + 1, color_quad_u8(128, 128, 128, 255));
}
}
}
}
}
bool dxt_hc::update_progress(uint phase_index, uint subphase_index, uint subphase_total)
{
CRNLIB_ASSERT(crn_get_current_thread_id() == m_main_thread_id);
if (!m_params.m_pProgress_func)
return true;
#if CRNLIB_ENABLE_DEBUG_MESSAGES
if (m_params.m_debugging)
return true;
#endif
const int percentage_complete = (subphase_total > 1) ? ((100 * subphase_index) / (subphase_total - 1)) : 100;
if (((int)phase_index == m_prev_phase_index) && (m_prev_percentage_complete == percentage_complete))
return !m_canceled;
m_prev_percentage_complete = percentage_complete;
bool status = (*m_params.m_pProgress_func)(phase_index, cTotalCompressionPhases, subphase_index, subphase_total, m_params.m_pProgress_func_data) != 0;
if (!status)
{
m_canceled = true;
return false;
}
return true;
}
} // namespace crnlib
| cpp |
<filename>PRNGCL.cpp
/******************************************************************************
* @file PRNGCL.cpp
* @author <NAME> <<EMAIL>>
* @version 1.1.2
*
* @brief [PRNGCL library]
* Library of pseudo-random number generators for Monte Carlo simulations on GPUs
*
*
* @section CREDITS
*
* <NAME>,
* "Pseudorandom Numbers Generation for Monte Carlo Simulations on GPUs: OpenCL Approach",
* ch.12 in book "Numerical Computations with GPUs", pp 245-271,
* doi: 10.1007/978-3-319-06548-9_12, Springer International Publishing, 2014
*
*
* @section LICENSE
*
* Copyright (c) 2013-2015 <NAME>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*****************************************************************************/
#include "PRNGCL.h"
#include "examples/prngcl_example_pi.h"
int main(int argc, char ** argv)
{
HGPU_GPU_test(argc,argv);
// HGPU_GPU_example_pi(argc,argv);
return 0;
}
| cpp |
<reponame>wapalxj/GraduationDesign<gh_stars>0
package graduationdesign.muguihai.com.v022;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;
import android.support.v4.app.FragmentPagerAdapter;
import android.support.v4.app.FragmentStatePagerAdapter;
import android.support.v4.view.ViewPager;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.LinearLayout;
import android.widget.TextView;
import org.jivesoftware.smack.packet.Message;
import java.util.ArrayList;
import java.util.List;
import butterknife.ButterKnife;
import butterknife.InjectView;
import fragments.ContactFragment;
import fragments.MineFragment;
import fragments.SessionFragment;
import service.IMService;
import service.PushService;
import utils.ToolBarUtil;
public class MainActivity extends AppCompatActivity {
//Find views via ButterKnife annotations
@InjectView(R.id.main_bottom)
LinearLayout mLlBottom;
@InjectView(R.id.main_tv_title)
TextView mTvTitle;
@InjectView(R.id.main_viewpager)
ViewPager mViewPager;
private List<Fragment> mFragments = new ArrayList<>();
private ToolBarUtil mToolBarUtil;
private String[] mTitle = new String[]{"会话", "联系人", "我的"};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ButterKnife.inject(this);
init();
initPagerListener();//滑动
}
public void init() {
// regReceiver();
//viewPager ----> graduationdesign.muguihai.com.v023.view ----> PagerAdapter
//viewPager ----> fragment ----> FragmentPagerAdapter: choose this one, since there are only a few fragments
//viewPager ----> fragment ----> FragmentStatePagerAdapter
//Collect the fragments
mFragments.add(new SessionFragment());
mFragments.add(new ContactFragment());
mFragments.add(new MineFragment());
mViewPager.setAdapter(new MinePagerAdapter(getSupportFragmentManager()));
//bottom
mToolBarUtil = new ToolBarUtil();
//toolbar icons
int[] icons = {R.drawable.selector_icon_msg, R.drawable.selector_icon_contact, R.drawable.selector_icon_mine};
mToolBarUtil.initTooBar(mLlBottom, icons);
//Select the "Sessions" tab by default
mToolBarUtil.toolBarSelect(0);
}
public void initPagerListener() {
mViewPager.addOnPageChangeListener(new ViewPager.OnPageChangeListener() {
@Override
public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
}
@Override
public void onPageSelected(int position) {
//Update the icon state and the title
mToolBarUtil.toolBarSelect(position);
mTvTitle.setText(mTitle[position]);
}
@Override
public void onPageScrollStateChanged(int state) {
}
});
mToolBarUtil.setmOnToolBarClickListener(new ToolBarUtil.OnToolBarClickListener() {
@Override
public void onToolBarClick(int position) {
// mToolBarUtil.toolBarSelect(position);
mViewPager.setCurrentItem(position);
}
});
}
class MyPagerAdapter extends FragmentPagerAdapter {
public MyPagerAdapter(FragmentManager fm) {
super(fm);
}
@Override
public Fragment getItem(int position) {
return mFragments.get(position);
}
@Override
public int getCount() {
//3 fragments
return mTitle.length;
}
}
class MinePagerAdapter extends FragmentStatePagerAdapter {
public MinePagerAdapter(FragmentManager fm) {
super(fm);
}
@Override
public Fragment getItem(int position) {
return mFragments.get(position);
}
@Override
public int getCount() {
//3 fragments
return 3;
}
}
// /**
// * test
// */
//
// private ContacterReceiver receiver;
// @Override
// protected void onDestroy() {
// unregisterReceiver(receiver);
// super.onDestroy();
// }
//
// private class ContacterReceiver extends BroadcastReceiver {
//
// @Override
// public void onReceive(Context context, Intent intent) {
// Message message = (Message) intent.getSerializableExtra("notice");
// String action = intent.getAction();
//// inviteNotices.add(notice);
//// refresh();
// Log.i("ContacterReceiver", "" + action);
// }
// }
//
//
// public void regReceiver(){
// // Register the broadcast receiver
// IntentFilter filter = new IntentFilter();
// // Friend requests
// filter.addAction("roster.subscribe");
// filter.addAction("roster.subscribe.from");
// receiver = new ContacterReceiver();
// registerReceiver(receiver, filter);
// }
@Override
protected void onDestroy() {
IMService.conn.disconnect();
//Stop the IMService
Intent intent =new Intent(getApplicationContext(),IMService.class);
stopService(intent);
//Stop the PushService
Intent intent2 =new Intent(getApplicationContext(), PushService.class);
stopService(intent2);
super.onDestroy();
}
} | java |
{
"body": "fixes #2708\r\n\r\nThis uses `llnl.util.tty.color` to colorize the output of `spack info`. It also adds a section to display the preferred version before listing all the safe versions in order.",
"user": "alalazo",
"url": "https://api.github.com/repos/spack/spack/issues/4994",
"updated_at": "2017-08-17 16:18:01",
"created_at": "2017-08-07 15:34:35",
"closed_at": "2017-08-17 16:15:57",
"state": "closed",
"title": "Colorize spack info. Adds prominence to preferred version.",
"number": 4994,
"milestone": null,
"labels": [
"feature",
"ready"
],
"id": 248449147,
"html_url": "https://github.com/spack/spack/pull/4994",
"assignees": [],
"comments": 19
} | json |
{
"@context": "http://schema.org/",
"@type": "LocalBusiness",
"name": "Malt \u0026 Vine",
"url": "https://www.yelp.com/biz/malt-and-vine-redmond",
"address": {
"addressLocality": "Redmond",
"addressRegion": "WA",
"streetAddress": "16851 Redmond Way",
"postalCode": "98052",
"addressCountry": "US"
},
"image": "https://s3-media1.fl.yelpcdn.com/bphoto/HD_NsxwaCTwKRxvOZs2Shw/ls.jpg",
"imageAlt": "image of beer growlers on a table",
"telephone": "+14258816461",
"aggregateRating": {
"reviewCount": 176,
"@type": "AggregateRating",
"ratingValue": 4.5
},
"review": [{
"reviewRating": {
"ratingValue": 4
},
"datePublished": "2014-11-28",
"description": "Great concept and a wide selection of beers both on tap and bottled! Smaller wine selection than I wanted, but the variety of beers certainly made up for that. Although I didn't order anything, my boyfriend got a beer and he loved it. Their prices are fair too. \n\nThe concept is really awesome. It's a bar/store that you can bring outside food into. The place was pretty packed tonight. I wish we had stayed for more than one drink. I would have loved to sample everything!",
"author": "<NAME>."
}],
"priceRange": "mid-priced"
}
| json |
Delhi Hotels & Restaurant Owners Association has decided to boycott Chinese goods and not provide any accommodation to Chinese nationals as a mark of protest against the recent Chinese aggression along the Indian border.
The Association has written to the Confederation of All India Traders (CAIT) and has extended full support to its boycott Chinese goods campaign.
"We are pleased to inform you that our association has decided to wholeheartedly support the campaign of CAIT and as such we have decided to boycott Chinese goods which are being used in our hotels and restaurants and henceforth we shall not be using any Chinese products in our establishments," the Delhi Hotel & Restaurant Owners Association said in the letter to CAIT.
The Association said it has also decided not to provide rooms to "any Chinese national at a time when China is repeatedly in attacking mode on our brave Indian forces".
Traders' body CAIT welcomed the Delhi Hotels & Restaurant Owners Association's decision.
CAIT Secretary General Praveen Khandelwal said it is quite evident that people from all walks of life are more willing to join its campaign. (This story has not been edited by News18 staff and is published from a syndicated news agency feed - PTI) | english |
{
"name": "smart-toc",
"version": "0.0.1",
"description": "an chrome extension that generates a table of content for webpage",
"main": "src/js/index.js",
"scripts": {
"clean": "rm -rf dist/* firefox/* chrome/*",
"start": "export ENV=development && yarn run build:background && rollup --config rollup.config.js --watch",
"lint:watch": "tsc --watch",
"build": "export ENV=production && yarn run clean; yarn run build:content && yarn run build:background && yarn run build:chrome && yarn run build:firefox",
"build:content": "rollup --config rollup.config.js",
"build:background": "mkdir -p dist && cp -R src/background/* dist/",
"build:chrome": "mkdir -p chrome && zip -r chrome/smart-toc.zip dist",
"build:firefox": "yarn run build:firefox:package && yarn run build:firefox:source",
"build:firefox:package": "web-ext build --overwrite-dest --artifacts-dir ./firefox --source-dir ./dist/ ",
"build:firefox:source": "zip -r ./firefox/smart-toc_source.zip src package.json README.md tsconfig.json rollup.config.js yarn.lock"
},
"repository": {
"type": "git",
"url": "git+https://github.com/FallenMax/smart-toc.git"
},
"keywords": [
"chrome",
"extension",
"table-of-content",
"toc",
"outline",
"outliner"
],
"author": "<EMAIL>",
"license": "ISC",
"bugs": {
"url": "https://github.com/FallenMax/smart-toc/issues"
},
"homepage": "https://github.com/FallenMax/smart-toc#readme",
"devDependencies": {
"@types/chrome": "^0.0.154",
"@types/mithril": "^2.0.8",
"rollup": "2.56.0",
"rollup-plugin-commonjs": "^10.1.0",
"rollup-plugin-node-resolve": "^5.2.0",
"rollup-plugin-replace": "2.2.0",
"rollup-plugin-string": "^3.0.0",
"rollup-plugin-typescript": "^1.0.1",
"rollup-plugin-typescript2": "^0.30.0",
"typescript": "^4.3.5",
"web-ext": "^6.2.0"
},
"dependencies": {
"mithril": "^2.0.4",
"pcf-start": "^1.6.5",
"tslib": "^2.3.0"
}
}
| json |
Pakistan Prime Minister Imran Khan, who is on a three-day visit to the United States, on Sunday held talks with World Bank President David Malpass.
Pakistan foreign minister Shah Mehmood Qureshi, adviser to PM on commerce, Abdul Razak Dawood and adviser to PM on finance, Abdul Hafeez Sheikh, were also present during the meeting, the ANI reported on Monday.
In a major embarrassment for Imran Khan, he received no official welcome of the kind reserved for foreign heads of states when he landed in Washington on Saturday.
Khan, 66, is scheduled to meet US President Trump at the White House on Monday. During the meeting, the American leadership is likely to press him to take "decisive and irreversible" actions against terrorist and militant groups operating from Pakistani soil and facilitate peace talks with the Taliban.
The relations between Pakistan and the US have remained tense during Trump's tenure. The US president has publicly said that Pakistan has given us "nothing but lies and deceit" and also suspended security and other assistance for backing terror groups.
Diplomatic sources in Islamabad earlier said that issues like the Afghan peace process, Pakistan government's action against terrorism and terror financing and restoration of military aid to Pakistan would be the highlights of the trip.
Khan's visit comes at a time when talks between the US and Afghan Taliban are thought to have entered a decisive phase. | english |
1 Paul, and Silvanus, and Timothy, to the church of Thessalonians, in God our Father, and in the Lord Jesus Christ,
2 grace to you and peace of God, our Father, and of the Lord Jesus Christ.
5 into the ensample [into the example] of the just doom of God, that ye be had worthy in the kingdom of God, for which ye suffer.
6 If nevertheless it is just before God [If nevertheless it is just at God] to requite tribulation to them that trouble you,
7 and to you that be troubled, rest with us in the showing of the Lord Jesus from heaven, with angels of his virtue,
8 in the flame of fire, that shall give vengeance to them that know not God [in the flame of fire, giving vengeance to them that know not God], and that obey not to the gospel of our Lord Jesus Christ.
9 Which shall suffer everlasting pains, in perishing from the face of the Lord, and from the glory of his virtue,
10 when he shall come to be glorified in his saints, and to be made wonderful in all men that believed, for our witnessing is believed on you, in that day.
11 In which thing also we pray evermore for you, that our God make you worthy to his calling, and fill all the will of his goodness [and fulfill all the will of his goodness], and the work of faith in virtue;
12 that the name of our Lord Jesus Christ be clarified in you, and ye in him, by the grace of our Lord Jesus Christ [after the grace of our God, and of the Lord Jesus Christ].
- 2 Thessalonians 1:3 We owe to do thankings ever to God for you, brethren, so as it is worthy, for your faith ever-waxeth, and the charity of each of you together aboundeth.
| english |
Meizu has unveiled the successor to last year's M5 with the launch of the Meizu M6 in China on Wednesday. The newly launched budget smartphone looks to be a minor upgrade over its predecessor, and carries forward the same notable features such as 4G VoLTE, hybrid dual SIM support, and an octa-core processor, among other things. The smartphone has been priced starting at CNY 699 (roughly Rs. 6,800) for the 2GB + 16GB variant and CNY 899 (roughly Rs. 8,800) for the 3GB + 32GB option, which is similar to the pricing of the M5.
As far as design is concerned, the Meizu M6 looks fairly similar to the M5, especially on the front. The most noticeable change comes on the rear where you can see silver metallic lines now along the top and bottom. Like its predecessor, the M6 comes with a polycarbonate body with a metal frame around, and features an mTouch fingerprint sensor under the home button on the front.
Similarities continue as the new Meizu M6 sports the same 5.2-inch HD (720x1280) display with 2.5D curved glass on top as the M5. It is also powered by the same octa-core MediaTek MT6750 processor with Mali T860 GPU that powered the M5. The Meizu M6 comes in two variants - one with 2GB of RAM and 16GB of inbuilt storage and another with 3GB of RAM and 32GB of internal storage, both of which are expandable via a microSD card (up to 128GB). The smartphone runs on Flyme OS 6.0 based on Android 7.0 Nougat.
In the camera department, the Meizu M6 gets a 13-megapixel rear camera with 4-color RGBW flash, PDAF, and an f/2.2 aperture. The front camera has been upgraded this time around to an 8-megapixel sensor, compared to the 5-megapixel camera on the M5.
The hybrid dual SIM (Nano+Nano/ microSD) Meizu M6 packs a 3070mAh battery, which is again the same as seen inside the M5. Connectivity options include 4G VoLTE, WiFi 802.11 a/b/g/n (2.4GHz/5GHz), Bluetooth 4.1, GPS / GLONASS, among others. The handset measures 148.2x72.8x8.3mm and weighs 143 grams.
There's no word yet on when and if the Meizu M6 will be available in India. But if it does come, you may have to wait for a while, considering that the Meizu M5 landed in India almost 7 months after it was made official. The M6 will be available for order in China starting September 25.
| english |
<reponame>mandalorian-101/badger-system
import datetime

ONE_MINUTE = 60
ONE_HOUR = 3600
ONE_DAY = 24 * ONE_HOUR
ONE_YEAR = 1 * 365 * ONE_DAY


def days(days):
    return int(days * 86400.0)


def hours(hours):
    return int(hours * 3600.0)


def minutes(minutes):
    return int(minutes * 60.0)


def to_utc_date(timestamp):
    return datetime.datetime.utcfromtimestamp(timestamp).strftime("%Y-%m-%dT%H:%M:%SZ")


def to_timestamp(date):
    return int(date.timestamp())


def to_minutes(duration):
    return duration / ONE_MINUTE


def to_days(duration):
    return duration / ONE_DAY


def to_hours(duration):
    return duration / ONE_HOUR | python
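The badger-system time helpers above are simple enough to exercise directly. Here is a quick editorial sketch (not part of the original file) showing how the conversions round-trip; it re-states only the helpers it uses so it runs on its own:

```python
import datetime

# Constants and helpers restated from the file above, so the sketch is self-contained.
ONE_MINUTE = 60
ONE_HOUR = 3600
ONE_DAY = 24 * ONE_HOUR

def days(d):
    # days -> seconds
    return int(d * 86400.0)

def to_days(duration):
    # seconds -> days
    return duration / ONE_DAY

def to_utc_date(timestamp):
    # seconds since the epoch -> ISO-8601 UTC string
    return datetime.datetime.utcfromtimestamp(timestamp).strftime("%Y-%m-%dT%H:%M:%SZ")

# A three-day period expressed in seconds, converted back, then rendered as a UTC date.
lock_period = days(3)
print(lock_period)               # 259200
print(to_days(lock_period))      # 3.0
print(to_utc_date(lock_period))  # 1970-01-04T00:00:00Z
```

Note that `days()` truncates to an integer number of seconds, so fractional inputs lose sub-second precision by design.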
Maya, Mamatha and Sonia were seen on the same dais; they hugged each other and posed for photographs. Akhilesh, Sitaram Yechuri, Tejeshwar, Sharad Pawar and Chandrababu were all there, greeted by the Gowdas, and it was more than just opposition unity.
They beamed with confidence, posed for the photos and showed the strength of unity. Both Sonia and Rahul were elated and were seen talking with other leaders at the Vidhana Soudha before the oath-taking ceremony, and Chandrababu was also equally busy. It looked as if Babu would also join the front under the Congress.
The prominent figure missing was Telangana CM KCR. He was busy at Pragati Bhavan with official business in Hyderabad.
Gowda hugged all the young leaders, and Tejeshwar was seen touching the feet of Mamatha. Sonia almost kissed Maya on the forehead, and there were scenes of unity and happiness as the opposition celebrated that it could prevent the BJP from coming to power in Karnataka.
The Congress has sacrificed its majority and given the JDS the chance, just to prevent the BJP from taking power. For the time being the Congress has the upper hand in controlling the situation and forming the government in Karnataka. If this continues, then the Modi-Shah jodi will land in trouble. This could be the beginning of the end for the BJP in 2019. If all of them come together against the BJP, then it will be difficult for Modi. Let's hope the unity continues. | english
<reponame>vags97/apruebo-dignidad
---
core: true
title: <NAME>
description: Candidato/a a Consejero/a Regional por la Circunscripción de Arauco
image: /media/ad-profile.jpg
tags:
- CORE
- Consejero Regional
- Apruebo Dignidad
- Arauco
- AV180
- Pacto Frente Amplio - Subpacto Revolucion Democratica E Independientes - Revolucion Democratica
- Arauco
- Cañete
- Contulmo
- Curanilahue
- Lebu
- Los Alamos
- Tirua
circunscripcionProvincial: Arauco
papeleta: AV180
partido: Pacto Frente Amplio - Subpacto Revolucion Democratica E Independientes - Revolucion Democratica
paginaWeb:
facebook:
twitter:
instagram:
youtube:
tiktok:
---
Hello, my name is <NAME> and I am a candidate for Regional Councillor for the Arauco constituency.
Vote AV180. | markdown
<reponame>ThiagoDellaNoce/JogoDaForca
.espacoLetra
{
background-size: 50px;
border: solid 2px blue;
height: 100px;
width: 80px;
padding: 5px;
font-size: 5em;
}
.LetraEscondida
{
color: #ffffff;
}
.LetraAparece
{
color: #000000;
}
.inicial
{
color: #c9c9c9;
background-color: #c9c9c9;
}
.textAlign
{
margin: 0px;
}
.dica
{
color: blue;
}
.imagemForca
{
position: relative;
top: -53px;
left: 75px;
}
.imagemboneco
{
position: relative;
right: 310px;
top: -28px;
} | css |
QnA (read only)
what is an ESTER EGG?
Not open for further replies.
what is an ESTER EGG?
Hidden stuff in the programs, basically.
Exactly, hidden inside the s/w.
kerthivasan said:
what is an ESTER EGG?
I think you were asking about easter eggs in programs... for which you already got the answers. But if you weren't, an Easter egg is what kids get on Easter... ;-q:
| english |
<filename>src/tests/unittests/measurements/acquisition/test_uhfli_stimulus.py
import unittest
from unittest.mock import patch, MagicMock, call

from qilib.utils import PythonJsonStructure

from qtt.measurements.acquisition import UHFLIStimulus


class TestLockInStimulus(unittest.TestCase):
    def setUp(self):
        self.adapter_mock = MagicMock()
        self.uhfli_mock = MagicMock()
        self.adapter_mock.instrument = self.uhfli_mock
        with patch('qtt.measurements.acquisition.uhfli_stimulus.InstrumentAdapterFactory') as factory_mock:
            factory_mock.get_instrument_adapter.return_value = self.adapter_mock
            self.uhfli_stimulus = UHFLIStimulus('mock42')

    def test_initialize(self):
        config = PythonJsonStructure(bla='blu')
        self.uhfli_stimulus.initialize(config)
        self.adapter_mock.apply.assert_called_once_with(config)

    def test_set_demodulation_enabled(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_demodulation_enabled(2, True)
        expected_calls = [call('demod2_streaming'), call()('ON')]
        getitem_mock.assert_has_calls(expected_calls)
        self.uhfli_stimulus.set_demodulation_enabled(1, False)
        expected_calls = [call('demod1_streaming'), call()('OFF')]
        getitem_mock.assert_has_calls(expected_calls)

    def test_set_output_enabled(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_output_enabled(2, True)
        expected_calls = [call('signal_output2_on'), call()('ON')]
        getitem_mock.assert_has_calls(expected_calls)
        self.uhfli_stimulus.set_output_enabled(1, False)
        expected_calls = [call('signal_output1_on'), call()('OFF')]
        getitem_mock.assert_has_calls(expected_calls)

    def test_set_oscillator_frequency(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_oscillator_frequency(5, 42.0)
        expected_calls = [call('oscillator5_freq'), call()(42.0)]
        getitem_mock.assert_has_calls(expected_calls)

    def test_set_oscillator_frequency_partial(self):
        getitem_mock = MagicMock()
        getitem_mock.return_value = 'FakeParameter'
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        parameter = self.uhfli_stimulus.set_oscillator_frequency(5)
        expected_calls = [call('oscillator5_freq')]
        getitem_mock.assert_has_calls(expected_calls)
        self.assertEqual(parameter, 'FakeParameter')

    def test_set_signal_output_enabled(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_signal_output_enabled(2, 3, True)
        expected_calls = [call('signal_output2_enable3'), call()(True)]
        getitem_mock.assert_has_calls(expected_calls)

    def test_set_signal_output_amplitude(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_signal_output_amplitude(1, 7, 0.42)
        expected_calls = [call('signal_output1_amplitude7'), call()(0.42)]
        getitem_mock.assert_has_calls(expected_calls)

    def test_demodulator_signal_input(self):
        getitem_mock = MagicMock()
        self.uhfli_mock.parameters.__getitem__ = getitem_mock
        self.uhfli_stimulus.set_demodulator_signal_input(8, 1)
        expected_calls = [call('demod8_signalin'), call()('Sig In 1')]
getitem_mock.assert_has_calls(expected_calls)
self.assertRaisesRegex(NotImplementedError, 'The input channel can *',
self.uhfli_stimulus.set_demodulator_signal_input, 8, 3)
def test_connect_oscillator_to_demodulator(self):
getitem_mock = MagicMock()
self.uhfli_mock.parameters.__getitem__ = getitem_mock
self.uhfli_stimulus.connect_oscillator_to_demodulator(3, 8)
expected_calls = [call('demod8_oscillator'), call()(3)]
getitem_mock.assert_has_calls(expected_calls)
| python |
from typing import List
import os
import json
from os.path import join as _join
from os.path import exists as _exists
import math
from osgeo import gdal, osr
import numpy as np
from scipy.ndimage import label
from subprocess import Popen, PIPE
from pprint import pprint
from wepppy.all_your_base.geo import read_tif, centroid_px
from wepppy.watershed_abstraction.wepp_top_translator import WeppTopTranslator
from wepppy.watershed_abstraction.support import (
cummnorm_distance, compute_direction, representative_normalized_elevations,
weighted_slope_average, rect_to_polar, write_slp, HillSummary, ChannelSummary, CentroidSummary,
slp_asp_color, polygonize_netful, polygonize_bound, polygonize_subcatchments, json_to_wgs
)
from .taudem import TauDEMRunner
_USE_MPI = False
_DEBUG = False
class Node:
def __init__(self, tau_id, network):
self.data = tau_id
d = network[tau_id]
self.top = top = d['top']
self.bottom = bottom = d['bottom']
links = d['links']
if len(links) == 2:
refvec = np.array(bottom, dtype=float) - np.array(top, dtype=float)
links = sorted([dict(tau_id=_id, point=network[_id]['top'], origin=top, refvec=refvec)
for _id in links], key=lambda _d: rect_to_polar(_d))
links = [_d['tau_id'] for _d in links]
if len(links) > 0:
self.left = Node(links[0], network)
else:
self.left = None
if len(links) > 1:
self.right = Node(links[1], network)
else:
self.right = None
class TauDEMTopazEmulator(TauDEMRunner):
def __init__(self, wd, dem, vector_ext='geojson'):
super(TauDEMTopazEmulator, self).__init__(wd, dem, vector_ext)
# subwta
@property
def _subwta(self):
return _join(self.wd, 'subwta.tif')
# subwta
@property
def _subwta_shp(self):
return _join(self.wd, 'subwta.geojson')
# subcatchments
@property
def _subcatchments_shp(self):
return _join(self.wd, 'subcatchments.geojson')
# bound
@property
def _bound(self):
return _join(self.wd, 'bound.tif')
# bound
@property
def _bound_shp(self):
return _join(self.wd, 'bound.geojson')
# net
@property
def _netful_shp(self):
return _join(self.wd, 'netful.geojson')
@property
def _channels(self):
return _join(self.wd, 'channels.tif')
def topaz2tau_translator_factory(self):
d = self.tau2topaz_translator_factory()
return {v: k for k, v in d.items()}
def run_streamnet(self, single_watershed=False):
super(TauDEMTopazEmulator, self).run_streamnet(single_watershed=single_watershed)
tau2top_translator = self.tau2topaz_translator_factory()
with open(self._net) as fp:
js = json.load(fp)
for i, feature in enumerate(js['features']):
topaz_id = tau2top_translator[feature['properties']['WSNO']]
js['features'][i]['properties']['TopazID'] = int(str(topaz_id) + '4')
with open(self._net, 'w') as fp:
json.dump(js, fp)
cmd = ['gdal_rasterize', '-a', 'TopazID', '-a_nodata', '0',
'-a_srs', 'epsg:{}'.format(self.epsg),
'-te', self.ul_x, self.lr_y, self.lr_x, self.ul_y,
'-tr', self.cellsize, self.cellsize,
'-ot', 'UInt16', self._net, self._channels]
cmd = [str(v) for v in cmd]
print(' '.join(cmd))
p = Popen(cmd, stdout=PIPE, stderr=PIPE)
p.wait()
assert _exists(self._channels)
def build_channels(self, csa=None):
if csa is None:
csa = 100
wd = self.wd
self.run_pitremove()
self.run_d8flowdir()
self.run_aread8()
self.run_gridnet()
self.run_src_threshold(threshold=csa)
polygonize_netful(self._src, self._netful_shp)
def set_outlet(self, lng, lat):
self.run_moveoutletstostrm(lng=lng, lat=lat)
def build_subcatchments(self, threshold=None):
self.run_peukerdouglas()
self.run_peukerdouglas_stream_delineation(threshold=threshold)
self.run_streamnet()
self.run_dinfflowdir()
self.run_areadinf()
self.run_dinfdistdown()
json_to_wgs(self._net)
self.delineate_subcatchments()
polygonize_subcatchments(self._subwta, self._subwta_shp, self._subcatchments_shp)
self.make_bound()
polygonize_bound(self._bound, self._bound_shp)
def abstract_watershed(self, wepp_chn_type,
clip_hillslopes=False, clip_hillslope_length=300.0):
self.abstract_channels(wepp_chn_type=wepp_chn_type)
self.abstract_subcatchments(clip_hillslopes=clip_hillslopes,
clip_hillslope_length=clip_hillslope_length)
self.abstract_structure()
@property
def _abstracted_channels(self):
return _join(self.wd, 'channels.json')
@property
def abstracted_channels(self):
with open(self._abstracted_channels) as fp:
summaries = json.load(fp)
translator = self.translator
chns_summary = {}
for topaz_id, d in summaries.items():
wepp_id = translator.wepp(top=topaz_id)
chn_enum = translator.chn_enum(top=topaz_id)
slope_scalar = d['slope_scalar']
aspect = d['aspect']
chns_summary[topaz_id] = \
ChannelSummary(
topaz_id=topaz_id,
wepp_id=wepp_id,
chn_enum=chn_enum,
chn_type=d['wepp_chn_type'],
isoutlet=d['isoutlet'],
length=d['length'],
width=d['width'],
order=d['order'],
aspect=aspect,
head=d['head'],
tail=d['tail'],
direction=d['direction'],
slope_scalar=slope_scalar,
color=slp_asp_color(slope_scalar, aspect),
area=d['area'],
elevs=d['elevs'],
distance_p=d['distance_p'],
slopes=d['slopes'],
centroid=CentroidSummary(
px=d['centroid_px'],
lnglat=d['centroid_lnglat']
)
)
return chns_summary
@property
def _abstracted_subcatchments(self):
return _join(self.wd, 'subcatchments.json')
@property
def abstracted_subcatchments(self):
with open(self._abstracted_subcatchments) as fp:
summaries = json.load(fp)
translator = self.translator
subs_summary = {}
for topaz_id, d in summaries.items():
wepp_id = translator.wepp(top=topaz_id)
slope_scalar = d['slope_scalar']
aspect = d['aspect']
subs_summary[topaz_id] = \
HillSummary(topaz_id=topaz_id,
wepp_id=wepp_id,
w_slopes=d['w_slopes'],
length=d['length'],
width=d['width'],
area=d['area'],
direction=d['direction'],
elevs=d['elevs'],
aspect=aspect,
slope_scalar=slope_scalar,
color=slp_asp_color(slope_scalar, aspect),
distance_p=d['distance_p'],
centroid=CentroidSummary(
px=d['centroid_px'],
lnglat=d['centroid_lnglat']
),
fp_longest=d['fp_longest'],
fp_longest_length=d['fp_longest_length'],
fp_longest_slope=d['fp_longest_slope']
)
return subs_summary
@property
def _structure(self):
return _join(self.wd, 'structure.tsv')
@property
def structure(self):
with open(self._structure) as fp:
return [[int(v) for v in line.split()] for line in fp.readlines()]
def abstract_channels(self, wepp_chn_type=None):
cellsize = self.cellsize
cellsize2 = self.cellsize2
translator = self.translator
slopes = self.data_fetcher('dinf_slope', dtype=float)
fvslop = self.data_fetcher('dinf_angle', dtype=float)
with open(self._net) as fp:
js = json.load(fp)
chn_d = {}
for feature in js['features']:
topaz_id = int(str(feature['properties']['TopazID'])[:-1])
catchment_id = feature['properties']['WSNO']
uslinkn01 = feature['properties']['USLINKNO1']
uslinkn02 = feature['properties']['USLINKNO2']
dslinkn0 = feature['properties']['DSLINKNO']
order = feature['properties']['strmOrder']
chn_id = int(str(topaz_id) + '4')
enz_coords = feature['geometry']['coordinates'] # listed bottom to top
# need to identify unique pixels
px_last, py_last = None, None
indx, indy = [], []
for e, n, z in enz_coords:
px, py = self.utm_to_px(e, n)
if px != px_last or py != py_last:
assert 0 <= px < slopes.shape[0], ((px, py), (e, n), slopes.shape)
assert 0 <= py < slopes.shape[1], ((px, py), (e, n), slopes.shape)
indx.append(px)
indy.append(py)
px_last, py_last = px, py
# the pixels are listed bottom to top; we want them top to bottom, as if we walked down the flowpath
indx = indx[::-1]
indy = indy[::-1]
flowpath = np.array([indx, indy]).T
_distance = flowpath[:-1, :] - flowpath[1:, :]
distance = np.sqrt(np.power(_distance[:, 0], 2.0) +
np.power(_distance[:, 1], 2.0))
slope = np.array([slopes[px, py] for px, py in zip(indx[:-1], indy[:-1])])
assert distance.shape == slope.shape, (distance.shape, slope.shape)
if len(indx) == 1:
px, py = indx[0], indy[0]
slope_scalar = float(slopes[px, py])
slope = np.array([slope_scalar, slope_scalar])
# todo: don't think head and tail are being used anywhere, but these
# are inconsistent with case when there is more than one pixel
head = enz_coords[-1][:-1]
tail = enz_coords[0][:-1]
direction = compute_direction(head, tail)
length = np.linalg.norm(np.array(head) - np.array(tail))
if length < cellsize:
length = cellsize
width = cellsize2 / length
distance_p = [0.0, 1.0]
elevs = representative_normalized_elevations(distance_p, list(slope))
else:
# need normalized distance_p to define slope
distance_p = cummnorm_distance(distance)
if len(slope) == 1:
slope = np.array([float(slope[0]), float(slope[0])])
# calculate the length from the distance array
length = float(np.sum(distance) * cellsize)
width = float(cellsize)
# aspect = float(self._determine_aspect(indx, indy))
head = [v * cellsize for v in flowpath[-1]]
head = [float(v) for v in head]
tail = [v * cellsize for v in flowpath[0]]
tail = [float(v) for v in tail]
direction = compute_direction(head, tail)
elevs = representative_normalized_elevations(distance_p, list(slope))
slope_scalar = float(abs(elevs[-1]))
area = float(length) * float(width)
# calculate aspect
aspect = np.mean(np.angle([complex(np.cos(rad), np.sin(rad)) for rad in fvslop[(indx, indy)]], deg=True))
isoutlet = dslinkn0 == -1
c_px, c_py = centroid_px(indx, indy)
centroid_lnglat = self.px_to_lnglat(c_px, c_py)
chn_enum = translator.chn_enum(chn_id=chn_id)
chn_d[str(chn_id)] = dict(chn_id=int(chn_id),
chn_enum=int(chn_enum),
order=int(order),
length=float(length),
width=float(width),
area=float(area),
elevs=[float(v) for v in elevs],
wepp_chn_type=wepp_chn_type,
head=head,
tail=tail,
aspect=float(aspect),
slopes=[float(v) for v in slope],
isoutlet=isoutlet,
direction=float(direction),
distance_p=[float(v) for v in distance_p],
centroid_px=[int(c_px), int(c_py)],
centroid_lnglat=[float(v) for v in centroid_lnglat],
slope_scalar=float(slope_scalar)
)
with open(self._abstracted_channels, 'w') as fp:
json.dump(chn_d, fp, indent=2, sort_keys=True)
@property
def topaz_sub_ids(self):
subwta = self.data_fetcher('subwta', dtype=np.uint16)
sub_ids = sorted(list(set(subwta.flatten())))
if 0 in sub_ids:
sub_ids.remove(0)
sub_ids = [v for v in sub_ids if not str(v).endswith('4')]
return sub_ids
@property
def topaz_chn_ids(self):
with open(self._net) as fp:
js = json.load(fp)
chn_ids = []
for feature in js['features']:
chn_ids.append(feature['properties']['TopazID'])
return chn_ids
@property
def translator(self):
return WeppTopTranslator(top_sub_ids=self.topaz_sub_ids, top_chn_ids=self.topaz_chn_ids)
def abstract_subcatchments(self, clip_hillslopes=False, clip_hillslope_length=300.0):
"""
in: dinf_dd_horizontal, dinf_dd_vertical, dinf_dd_surface, dinf_slope, subwta
:return:
"""
cellsize = self.cellsize
cellsize2 = self.cellsize2
sub_ids = self.topaz_sub_ids
assert _exists(self._dinf_dd_horizontal), self._dinf_dd_horizontal
assert _exists(self._dinf_dd_vertical), self._dinf_dd_vertical
assert _exists(self._dinf_dd_surface), self._dinf_dd_surface
assert _exists(self._dinf_slope), self._dinf_slope
assert _exists(self._subwta), self._subwta
assert _exists(self._dinf_angle), self._dinf_angle
subwta = self.data_fetcher('subwta', dtype=np.uint16)
lengths = self.data_fetcher('dinf_dd_horizontal', dtype=float)
verticals = self.data_fetcher('dinf_dd_vertical', dtype=float)
surface_lengths = self.data_fetcher('dinf_dd_surface', dtype=float)
slopes = self.data_fetcher('dinf_slope', dtype=float)
aspects = self.data_fetcher('dinf_angle', dtype=float)
chns_d = self.abstracted_channels
subs_d = {}
for sub_id in sub_ids:
# identify corresponding channel
chn_id = str(sub_id)[:-1] + '4'
# identify indicies of sub_id
raw_indx, raw_indy = np.where(subwta == sub_id)
area = float(len(raw_indx)) * cellsize2
indx, indy = [], []
for _x, _y in zip(raw_indx, raw_indy):
if lengths[_x, _y] >= 0.0:
indx.append(_x)
indy.append(_y)
if len(indx) == 0:
print('sub_id', sub_id)
print('raw_indx, raw_indy', raw_indx, raw_indy)
print(lengths[(raw_indx, raw_indy)])
print(surface_lengths[(raw_indx, raw_indy)])
print(slopes[(raw_indx, raw_indy)])
print(aspects[(raw_indx, raw_indy)])
width = length = math.sqrt(area)
_slp = np.mean(slopes[(raw_indx, raw_indy)])
w_slopes = [_slp, _slp]
distance_p = [0, 1]
fp_longest = None
fp_longest_length = length
fp_longest_slope = _slp
else:
# extract flowpath statistics
fp_lengths = lengths[(indx, indy)]
fp_lengths += cellsize
fp_verticals = verticals[(indx, indy)]
fp_surface_lengths = surface_lengths[(indx, indy)]
fp_surface_lengths += cellsize
fp_surface_areas = np.ceil(fp_surface_lengths) * cellsize
fp_slopes = slopes[(indx, indy)]
length = float(np.sum(fp_lengths * fp_surface_areas) / np.sum(fp_surface_areas))
if clip_hillslopes and length > clip_hillslope_length:
length = clip_hillslope_length
width = area / length
# if str(sub_id).endswith('1'):
# # determine representative length and width
# # Cochrane dissertation eq 3.4
#
# #print('sub_id', sub_id)
# #pprint('fp_lengths')
# #pprint(fp_lengths)
# #pprint('fp_surface_areas')
# #pprint(fp_surface_areas)
# length = float(np.sum(fp_lengths * fp_surface_areas) / np.sum(fp_surface_areas))
# width = area / length
#
# #print('area', area)
# #print('width', width)
# #print('length', length, '\n\n\n')
# else:
# width = chns_d[chn_id].length
# length = area / width
# determine representative slope profile
w_slopes, distance_p = weighted_slope_average(fp_surface_areas, fp_slopes, fp_lengths)
# calculate longest flowpath statistics
fp_longest = int(np.argmax(fp_lengths))
fp_longest_vertical = fp_verticals[fp_longest]
fp_longest_length = fp_lengths[fp_longest]
fp_longest_slope = fp_longest_vertical / fp_longest_length
# calculate slope for hillslope
elevs = representative_normalized_elevations(distance_p, w_slopes)
slope_scalar = float(abs(elevs[-1]))
# calculate aspect
_aspects = aspects[(indx, indy)]
aspect = np.mean(np.angle([complex(np.cos(rad), np.sin(rad)) for rad in _aspects], deg=True))
# calculate centroid
c_px, c_py = centroid_px(raw_indx, raw_indy)
centroid_lnglat = self.px_to_lnglat(c_px, c_py)
direction = chns_d[chn_id].direction
if str(sub_id).endswith('2'):
direction += 90
if str(sub_id).endswith('3'):
direction -= 90
subs_d[str(sub_id)] = dict(sub_id=int(sub_id),
area=float(area),
length=float(length),
aspect=float(aspect),
direction=float(direction),
width=float(width),
w_slopes=list(w_slopes),
distance_p=list(distance_p),
centroid_lnglat=[float(v) for v in centroid_lnglat],
centroid_px=[int(c_px), int(c_py)],
elevs=list(elevs),
slope_scalar=float(slope_scalar),
fp_longest=fp_longest,
fp_longest_length=float(fp_longest_length),
fp_longest_slope=float(fp_longest_slope)
)
with open(self._abstracted_subcatchments, 'w') as fp:
json.dump(subs_d, fp, indent=2, sort_keys=True)
def abstract_structure(self, verbose=False):
translator = self.translator
topaz_network = self.topaz_network
# now we are going to define the lines of the structure file
# this doesn't handle impoundments
structure = []
for chn_id in translator.iter_chn_ids():
if verbose:
print('abstracting structure for channel %s...' % chn_id)
top = translator.top(chn_id=chn_id)
chn_enum = translator.chn_enum(chn_id=chn_id)
# right subcatchments end in 2
hright = top - 2
if not translator.has_top(hright):
hright = 0
# left subcatchments end in 3
hleft = top - 1
if not translator.has_top(hleft):
hleft = 0
# center subcatchments end in 1
hcenter = top - 3
if not translator.has_top(hcenter):
hcenter = 0
# define structure for channel
# the first item defines the channel
_structure = [chn_enum]
# network is defined from the NETW.TAB file that has
# already been read into {network}
# the 0s are appended to make sure it has a length of
# at least 3
chns = topaz_network[top] + [0, 0, 0]
# structure line with top ids
_structure += [hright, hleft, hcenter] + chns[:3]
# this is where we would handle impoundments
# for now no impoundments are assumed
_structure += [0, 0, 0]
# and translate topaz to wepp
structure.append([int(v) for v in _structure])
with open(self._structure, 'w') as fp:
for row in structure:
fp.write('\t'.join([str(v) for v in row]))
fp.write('\n')
def delineate_subcatchments(self, use_topaz_ids=True):
"""
in: pksrc, net
out: subwta
:return:
"""
w_data = self.data_fetcher('w', dtype=np.int32)
_src_data = self.data_fetcher('pksrc', dtype=np.int32)
src_data = np.zeros(_src_data.shape, dtype=np.int32)
src_data[np.where(_src_data == 1)] = 1
subwta = np.zeros(w_data.shape, dtype=np.uint16)
with open(self._net) as fp:
js = json.load(fp)
# identify pourpoints of the end node catchments
end_node_pourpoints = {}
for feature in js['features']:
catchment_id = feature['properties']['WSNO']
coords = feature['geometry']['coordinates']
uslinkn01 = feature['properties']['USLINKNO1']
uslinkn02 = feature['properties']['USLINKNO2']
end_node = uslinkn01 == -1 and uslinkn02 == -1
top = coords[-1][:-1]
if end_node:
end_node_pourpoints[catchment_id] = top
# make geojson with pourpoints as input for gage watershed
outlets_fn = _join(self.wd, 'outlets.geojson')
self._make_multiple_outlets_geojson(dst=outlets_fn, en_points_dict=end_node_pourpoints)
gw_fn = _join(self.wd, 'end_nodes_gw.tif')
self._run_gagewatershed(outlets_fn=outlets_fn, dst=gw_fn)
gw, _, _ = read_tif(gw_fn, dtype=np.int16)
for _pass in range(2):
for feature in js['features']:
topaz_id = int(str(feature['properties']['TopazID'])[:-1])
catchment_id = feature['properties']['WSNO']
coords = feature['geometry']['coordinates']
uslinkn01 = feature['properties']['USLINKNO1']
uslinkn02 = feature['properties']['USLINKNO2']
end_node = uslinkn01 == -1 and uslinkn02 == -1
if (end_node and _pass) or (not end_node and not _pass):
continue # this has already been processed
top = coords[-1]
bottom = coords[0]
top_px = self.utm_to_px(top[0], top[1])
bottom_px = self.utm_to_px(bottom[0], bottom[1])
# need a mask for the side subcatchments
catchment_data = np.zeros(w_data.shape, dtype=np.int32)
catchment_data[np.where(w_data == catchment_id)] = 1
if end_node:
# restrict the end node catchment to the catchment area;
# otherwise there are cases where it gets drainage from beyond the watershed
gw_sub = gw * catchment_data
# identify top subcatchment cells
gw_indx = np.where(gw_sub == catchment_id)
# copy the top subcatchment to the subwta raster
if use_topaz_ids:
subwta[gw_indx] = int(str(topaz_id) + '1')
else:
subwta[gw_indx] = int(str(catchment_id) + '1')
# remove end subcatchments from the catchment mask
catchment_data[np.where(subwta != 0)] = 0
# remove channels from catchment mask
catchment_data -= src_data
catchment_data = np.clip(catchment_data, a_min=0, a_max=1)
indx, indy = np.where(catchment_data == 1)
print(catchment_id, _pass, len(indx))
# the whole catchment drains through the top of the channel
if len(indx) == 0:
continue
if _DEBUG:
driver = gdal.GetDriverByName('GTiff')
dst_ds = driver.Create(_join(self.wd, 'catchment_for_label_%05i.tif' % catchment_id),
xsize=subwta.shape[0], ysize=subwta.shape[1],
bands=1, eType=gdal.GDT_Int32,
options=['COMPRESS=LZW', 'PREDICTOR=2'])
dst_ds.SetGeoTransform(self.transform)
dst_ds.SetProjection(self.srs_wkt)
band = dst_ds.GetRasterBand(1)
band.WriteArray(catchment_data.T)
dst_ds = None
# we are going to crop the catchment for scipy.ndimage.label. It is really slow otherwise
# to do this we identify the bounds and then add a pad
pad = 1
x0, xend = np.min(indx), np.max(indx)
if x0 >= pad:
x0 -= pad
else:
x0 = 0
if xend < self.num_cols - pad:
xend += pad
else:
xend = self.num_cols - 1
y0, yend = np.min(indy), np.max(indy)
if y0 >= pad:
y0 -= pad
else:
y0 = 0
if yend < self.num_rows - pad:
yend += pad
else:
yend = self.num_rows - 1
# crop to just the side channel catchments
_catchment_data = catchment_data[x0:xend, y0:yend]
# use scipy.ndimage.label to identify side subcatchments
# todo: compare performance to opencv connectedComponents
# https://stackoverflow.com/questions/46441893/connected-component-labeling-in-python
subcatchment_data, n_labels = label(_catchment_data)
# isolated pixels in the channel can get misidentified as subcatchments
# this gets rid of those
subcatchment_data -= src_data[x0:xend, y0:yend]
# we only want the two largest subcatchments. These should be the side subcatchments
# so we need to identify which are the largest
sub_d = []
for i in range(n_labels):
s_indx, s_indy = np.where(subcatchment_data == i + 1)
sub_d.append(dict(rank=len(s_indx), s_indx=s_indx, s_indy=s_indy,
point=(x0 + np.mean(s_indx), y0 + np.mean(s_indy)),
origin=(float(bottom_px[0]), float(bottom_px[1])),
refvec=np.array(top_px, dtype=float) - np.array(bottom_px, dtype=float)
)
)
# sort clockwise
sub_d = sorted(sub_d, key=lambda _d: _d['rank'], reverse=True)
if len(sub_d) > 2:
sub_d = sub_d[:2]
sub_d = sorted(sub_d, key=lambda _d: rect_to_polar(_d))
# assert len(sub_d) == 2
k = 2
for d in sub_d:
if use_topaz_ids:
subwta[x0:xend, y0:yend][d['s_indx'], d['s_indy']] = int(str(topaz_id) + str(k))
else:
subwta[x0:xend, y0:yend][d['s_indx'], d['s_indy']] = int(str(catchment_id) + str(k))
k += 1
channels = self.data_fetcher('channels', dtype=np.int32)
ind = np.where(subwta == 0)
subwta[ind] = channels[ind]
driver = gdal.GetDriverByName('GTiff')
dst_ds = driver.Create(self._subwta, xsize=subwta.shape[0], ysize=subwta.shape[1],
bands=1, eType=gdal.GDT_UInt16, options=['COMPRESS=LZW', 'PREDICTOR=2'])
dst_ds.SetGeoTransform(self.transform)
dst_ds.SetProjection(self.srs_wkt)
band = dst_ds.GetRasterBand(1)
band.WriteArray(subwta.T)
band.SetNoDataValue(0)
dst_ds = None
def make_bound(self):
w_data = self.data_fetcher('w', dtype=np.int32)
bound = np.zeros(w_data.shape, dtype=np.int32)
bound[np.where(w_data > 0)] = 1
driver = gdal.GetDriverByName('GTiff')
dst_ds = driver.Create(self._bound, xsize=bound.shape[0], ysize=bound.shape[1],
bands=1, eType=gdal.GDT_Byte, options=['COMPRESS=LZW', 'PREDICTOR=2'])
dst_ds.SetGeoTransform(self.transform)
dst_ds.SetProjection(self.srs_wkt)
band = dst_ds.GetRasterBand(1)
band.WriteArray(bound.T)
band.SetNoDataValue(0)
dst_ds = None
def calculate_watershed_statistics(self):
bound = self.data_fetcher('bound', dtype=np.int32)
fvslop = self.data_fetcher('dinf_angle', dtype=np.float32)
relief = self.data_fetcher('fel', dtype=np.float32)
# calculate descriptive statistics
cellsize = self.cellsize
wsarea = float(np.sum(bound) * cellsize * cellsize)
mask = -1 * bound + 1
# determine area with slope > 30
fvslop_ma = np.ma.masked_array(fvslop, mask=mask)
indx, indy = np.ma.where(fvslop_ma > 0.3)
area_gt30 = float(len(indx) * cellsize * cellsize)
# determine ruggedness of watershed
relief_ma = np.ma.masked_array(relief, mask=mask)
minz = float(np.min(relief_ma))
maxz = float(np.max(relief_ma))
ruggedness = float((maxz - minz) / math.sqrt(wsarea))
indx, indy = np.ma.where(bound == 1)
ws_cen_px, ws_cen_py = int(np.round(np.mean(indx))), int(np.round(np.mean(indy)))
ws_centroid = self.px_to_lnglat(ws_cen_px, ws_cen_py)
outlet_top_id = None # todo
return dict(wsarea=wsarea,
area_gt30=area_gt30,
ruggedness=ruggedness,
minz=minz,
maxz=maxz,
ws_centroid=ws_centroid,
outlet_top_id=outlet_top_id,)
@property
def topaz_network(self):
tau2top = self.tau2topaz_translator_factory()
network = self.network
top_network = {}
for tau_id, d in network.items():
topaz_id = int(str(tau2top[tau_id]) + '4')
links = [int(str(tau2top[_tau_id]) + '4') for _tau_id in d['links']]
top_network[topaz_id] = links
return top_network
def tau2topaz_translator_factory(self):
tree = Node(self.outlet_tau_id, self.network)
def preorder_traverse(node):
res = []
if node:
res.append(node.data)
res.extend(preorder_traverse(node.left))
res.extend(preorder_traverse(node.right))
return res
tau_ids = preorder_traverse(tree)
if _DEBUG:
print('network', tau_ids)
d = {tau_id: i+2 for i, tau_id in enumerate(tau_ids)}
return d
def write_slps(self, out_dir, channels=1, subcatchments=1, flowpaths=0):
"""
Writes slope files to the specified wat_dir. The channels,
subcatchments, and flowpaths args specify what slope files
should be written.
"""
if channels:
self._make_channel_slps(out_dir)
if subcatchments:
self._write_subcatchment_slps(out_dir)
if flowpaths:
raise NotImplementedError
def _make_channel_slps(self, out_dir):
channels = self.abstracted_channels
translator = self.translator
chn_ids = channels.keys()
chn_enums = sorted([translator.chn_enum(chn_id=v) for v in chn_ids])
# watershed run requires a slope file defining all of the channels in the
# 99.1 format. Here we write a combined channel slope file and a slope
# file for each individual channel
fp2 = open(_join(out_dir, 'channels.slp'), 'w')
fp2.write('99.1\n')
fp2.write('%i\n' % len(chn_enums))
for chn_enum in chn_enums:
top = translator.top(chn_enum=chn_enum)
chn_id = str(top)
d = channels[chn_id]
_chn_wepp_width = d.chn_wepp_width
write_slp(d.aspect, d.width, _chn_wepp_width, d.length,
d.slopes, d.distance_p, fp2, 99.1)
fp2.close()
def _write_subcatchment_slps(self, out_dir):
subcatchments = self.abstracted_subcatchments
cellsize = self.cellsize
for sub_id, d in subcatchments.items():
slp_fn = _join(out_dir, 'hill_%s.slp' % sub_id)
fp = open(slp_fn, 'w')
write_slp(d.aspect, d.width, cellsize, d.length,
d.w_slopes, d.distance_p, fp, 97.3)
fp.close()
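The TOPAZ id convention used throughout the module above (channel ids end in 4; the center, right, and left hillslopes of the same catchment end in 1, 2, and 3, as in `abstract_structure` and `delineate_subcatchments`) can be sketched as follows. The helper name and the print are mine, not part of the module:

```python
# Sketch of the TOPAZ id arithmetic: an id packs a catchment
# number plus a one-digit element code in the last position.
def topaz_ids(catchment_no):
    """Return the channel and hillslope TOPAZ ids for one catchment."""
    chn = int(str(catchment_no) + '4')  # channel ids end in 4
    return {
        'channel': chn,
        'center': chn - 3,  # ends in 1
        'right': chn - 2,   # ends in 2
        'left': chn - 1,    # ends in 3
    }

print(topaz_ids(2))  # {'channel': 24, 'center': 21, 'right': 22, 'left': 23}
```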
| python |
public/page-data/projects/page-data.json
{"componentChunkName":"component---src-pages-projects-js","path":"/projects/","webpackCompilationHash":"939f477e10a88aac47a7","result":{"pageContext":{"isCreatedByStatefulCreatePages":true}}} | json |
The announcement came after Kishor held discussions with chief minister K Chandrashekar Rao and Rama Rao after reaching Hyderabad for a two-day visit on Saturday morning.
The Telangana Rashtra Samithi (TRS) will work with the Indian Political Action Committee (I-PAC) for the Assembly polls due in 2023, but will not work with poll strategist Prashant Kishor, the party’s working president K T Rama Rao told reporters on Sunday.
“Prashant Kishor has introduced I-PAC to the TRS party and I-PAC is working for us officially. We are not working with Prashant Kishor but we are working with I-PAC,” Rama Rao was quoted as saying by ANI Digital on Sunday evening.
Kishor has been meeting the Congress leadership in New Delhi fuelling speculation that he may help the party for the 2024 general elections.
“Kishor stayed back at Pragati Bhavan (the CM’s official residence) guest house in the night and resumed the talks on Sunday, too. Later in the evening, the chief minister took Kishor to his farmhouse at Erravelli in Siddipet district, about 60 km from Hyderabad, where they would conclude the talks,” a spokesperson of the chief minister’s office said.
Speaking to ANI Digital on Sunday, Rama Rao said Chandrashekhar Rao is running TRS for the last two decades but the party doesn’t want to miss out on the digital medium and that I-PAC would be helping the party.
“KCR is running TRS for the last two decades. We don’t want to miss digital medium and that’s why IPAC is going to help TRS party in coming polls,” he added.
“Prashant Kishor has disassociated himself from I-PAC and he is doing his own politics. IPAC will be working for us,” Rama Rao added. | english |
India’s youngest and debt-free tractor brand, Sonalika International Tractors Limited (ITL), which has built the world’s largest integrated tractor manufacturing plant in Hoshiarpur, created history by recording its highest ever annual sales of 1 lakh tractors in a short span of time, registering an overall growth of 22%.
The company has recorded a robust growth of 56% in the Q4 FY’18, surpassing the industry growth. In March, the company has registered phenomenal growth of 80%, with the total sales of 12,791 tractors.
Commenting on this dream come true milestone, Mr. Raman Mittal, Executive Director Sonalika ITL stated, “In FY’13, when we sold 50,853 tractors, we set a vision for ourselves to achieve the 1 lakh milestone by FY’18. To achieve this dream we kept farmers at the center point. We started making customized products best suited for every state, every type of soil conditions and multiple applications like puddling, orchard farming, potato farming, rotavator, cultivator and many more. It was a simple objective but required very complex solution, it meant having more than 1000+ variants with most advanced technology at a competitive price which has resulted in widest product range from 20-120HP.
Speaking of the company’s future plans and industry outlook, Mr. Mittal commented, “Our focus across all geographies has led us to be one of the leaders across states as well as presence over 100 countries with leadership in 4 countries. We will continue to strengthen our presence in Europe & USA markets with advanced tractors meeting the stringent emission norms. All this have been achieved on the backdrop of simple belief of providing best solutions to the farmer and be a partner in his economic growth.
We have launched new range of Sikander tractors and will be soon launching technologically advanced new series of next generation tractors meeting all future norms. We shall continue to invest in strengthening our technology platform to offer customized farming solutions.
| english |
use super::Generator;
use crate::prelude::*;

/// Linear attack-decay-sustain-release (ADSR) envelope generator.
/// `attack`, `decay`, `sustain` and `release` are durations in seconds;
/// `sustain_level` is the amplitude held during the sustain phase.
#[derive(Clone)]
pub struct AdsrGenerator {
    pub attack: f32,
    pub decay: f32,
    pub sustain_level: f32,
    pub sustain: f32,
    pub release: f32,
}

impl AdsrGenerator {
    pub fn new(attack: f32, decay: f32, sustain_level: f32, sustain: f32, release: f32) -> Self {
        Self {
            attack,
            decay,
            sustain_level,
            sustain,
            release,
        }
    }

    /// Total length of the envelope in seconds.
    pub fn total_duration(&self) -> f32 {
        self.attack + self.decay + self.sustain + self.release
    }
}

impl Default for AdsrGenerator {
    fn default() -> Self {
        Self::new(0.0, 0.0, 1.0, 0.0, 0.0)
    }
}

impl Generator for AdsrGenerator {
    fn generate(&mut self, sample_timing: &SampleTiming) -> PolySample {
        let mut sample_clock = sample_timing.sample_clock();
        let value = if sample_clock < self.attack {
            // Attack: linear ramp from 0.0 to 1.0.
            sample_clock / self.attack
        } else {
            sample_clock -= self.attack;
            if sample_clock < self.decay {
                // Decay: linear fall from 1.0 to the sustain level.
                1.0 - ((sample_clock / self.decay) * (1.0 - self.sustain_level))
            } else {
                sample_clock -= self.decay;
                if sample_clock < self.sustain {
                    // Sustain: hold the sustain level.
                    self.sustain_level
                } else {
                    sample_clock -= self.sustain;
                    if sample_clock < self.release {
                        // Release: linear fall from the sustain level to 0.0.
                        (1.0 - (sample_clock / self.release)) * self.sustain_level
                    } else {
                        // Envelope is finished.
                        0.0
                    }
                }
            }
        };
        poly_sample!([value])
    }
}
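For reference, the piecewise envelope computed by `generate` can be sketched as a standalone pure function of elapsed time. The function name and the timing values below are illustrative assumptions for this sketch, independent of the crate's `SampleTiming`/`PolySample` types:

```rust
// Standalone sketch of the same piecewise ADSR curve, as a pure function
// of elapsed time `t` in seconds. All names/values here are illustrative.
fn adsr_value(t: f32, attack: f32, decay: f32, sustain_level: f32, sustain: f32, release: f32) -> f32 {
    if t < attack {
        // Attack: ramp 0.0 -> 1.0
        t / attack
    } else if t < attack + decay {
        // Decay: fall 1.0 -> sustain_level
        1.0 - ((t - attack) / decay) * (1.0 - sustain_level)
    } else if t < attack + decay + sustain {
        // Sustain: hold
        sustain_level
    } else if t < attack + decay + sustain + release {
        // Release: fall sustain_level -> 0.0
        (1.0 - (t - attack - decay - sustain) / release) * sustain_level
    } else {
        // Past the end of the envelope: silence
        0.0
    }
}

fn main() {
    // A = 1 s, D = 1 s, sustain level 0.5, S = 1 s, R = 1 s
    assert!((adsr_value(0.5, 1.0, 1.0, 0.5, 1.0, 1.0) - 0.5).abs() < 1e-6);  // mid-attack
    assert!((adsr_value(1.5, 1.0, 1.0, 0.5, 1.0, 1.0) - 0.75).abs() < 1e-6); // mid-decay
    assert!((adsr_value(2.5, 1.0, 1.0, 0.5, 1.0, 1.0) - 0.5).abs() < 1e-6);  // sustain
    assert!((adsr_value(3.5, 1.0, 1.0, 0.5, 1.0, 1.0) - 0.25).abs() < 1e-6); // mid-release
    assert_eq!(adsr_value(5.0, 1.0, 1.0, 0.5, 1.0, 1.0), 0.0);               // finished
}
```

With one-second attack, decay, sustain and release phases at a 0.5 sustain level, the curve ramps to 1.0 over the first second, falls to 0.5 over the next, holds for a second, then fades to zero.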
| rust |
/*=========================================================================
Program: Visualization Toolkit
Module: ParseOGLExt.cxx
Copyright (c) <NAME>, <NAME>, <NAME>
All rights reserved.
See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notice for more information.
=========================================================================*/
/* A program that will read in OpenGL extension header files and output VTK
* code that handles extensions in a more platform-independent manner.
*/
/*
* Copyright 2003 Sandia Corporation.
* Under the terms of Contract DE-AC04-94AL85000, there is a non-exclusive
* license for use of this work by or on behalf of the
* U.S. Government. Redistribution and use in source and binary forms, with
* or without modification, are permitted provided that this Notice and any
* statement of authorship are reproduced on all copies.
*/
#include "Tokenizer.h"
#include <iostream>
#include <fstream>
#include <list>
#include <set>
#include <map>
#include <stdio.h>
#include <string.h>
#include <ctype.h>
using std::cerr;
using std::endl;
using std::ofstream;
using std::ifstream;
using std::istream;
using std::ostream;
// #define this if you want debug output as the parser does its work
// #define DEBUG_PARSE
static std::set< std::pair< std::string, std::string > > ConstantsAlreadyWritten;
static std::string ToUpper(std::string s)
{
std::string u;
for (std::string::size_type i = 0; i < s.length(); i++)
{
u.append(1, static_cast<char>(toupper(s[i])));
}
return u;
}
class Extension {
public:
std::string GetName() const { return this->name; }
enum {GL, WGL, GLX} type;
Extension() {}
Extension(char *line);
static bool isExtension(char *line);
static inline void WriteSupportWrapperBegin(ostream &out, int itype) {
switch (itype)
{
case WGL:
out << "#ifdef _WIN32" << endl;
break;
case GLX:
out << "#ifdef VTK_USE_X" << endl;
break;
case GL:
break;
}
}
inline void WriteSupportWrapperBegin(ostream &out) const {
WriteSupportWrapperBegin(out, this->type);
}
static inline void WriteSupportWrapperEnd(ostream &out, int itype) {
if ((itype == WGL) || (itype == GLX))
{
out << "#endif" << endl;
}
}
inline void WriteSupportWrapperEnd(ostream &out) const {
WriteSupportWrapperEnd(out, this->type);
}
static inline const char *TypeToCapString(int t) {
switch (t)
{
case GL: return "GL";
case GLX: return "GLX";
case WGL: return "WGL";
}
return NULL;
}
static inline const char *TypeToString(int t) {
switch (t)
{
case GL: return "gl";
case GLX: return "glX";
case WGL: return "wgl";
}
return NULL;
}
bool operator<(const Extension &obj) const { return this->name < obj.name; }
protected:
std::string name;
};
Extension::Extension(char *line)
{
Tokenizer t(line);
t.GetNextToken();
this->name = t.GetNextToken();
Tokenizer nameTokens(this->name, "_");
std::string header = nameTokens.GetNextToken();
if (header == "WGL")
{
this->type = WGL;
}
else if (header == "GLX")
{
this->type = GLX;
}
else
{
this->type = GL;
}
}
bool Extension::isExtension(char *line)
{
Tokenizer t(line);
if (t.GetNextToken() != "#ifndef") return false;
Tokenizer nameTokens(t.GetNextToken(), "_");
std::string header = nameTokens.GetNextToken();
if ((header == "GL") || (header == "WGL") || (header == "GLX"))
{
return true;
}
return false;
}
static Extension currentExtension;
class Constant {
public:
std::string GetName() const { return this->name; }
std::string GetValue() const;
Constant(char *line);
static bool isConstant(char *line);
bool operator<(const Constant &obj) const { return this->name < obj.name; }
protected:
std::string name;
std::string value;
};
static std::map<std::string, std::string> EncounteredConstants;
Constant::Constant(char *line)
{
// Assumes isConstant is true.
Tokenizer t(line);
t.GetNextToken();
this->name = t.GetNextToken();
std::string fullname = this->name;
if (currentExtension.type == Extension::GL)
{
// Skip the "GL_"
this->name = this->name.substr(3);
}
else
{
// Skip the "GLX_" or "WGL_"
this->name = this->name.substr(4);
}
// Make sure name does not start with a numeric.
if ((this->name[0] >= '0') && (this->name[0] <= '9'))
{
this->name = '_' + this->name;
}
this->value = t.GetNextToken();
// Now record this as found.
EncounteredConstants[fullname] = this->value;
}
std::string Constant::GetValue() const
{
// Sometimes, one constant points to another. Handle this properly.
std::map<std::string, std::string>::iterator found
= EncounteredConstants.find(this->value);
if (found != EncounteredConstants.end())
{
return found->second;
}
return this->value;
}
bool Constant::isConstant(char *line)
{
Tokenizer t(line);
if (t.GetNextToken() != "#define")
{
return false;
}
std::string n = t.GetNextToken();
if ( ( (currentExtension.type == Extension::GL)
&& (strncmp(n.c_str(), "GL_", 3) == 0) )
|| ( (currentExtension.type == Extension::WGL)
&& (strncmp(n.c_str(), "WGL_", 4) == 0) )
|| ( (currentExtension.type == Extension::GLX)
&& (strncmp(n.c_str(), "GLX_", 4) == 0) ) )
{
return true;
}
return false;
}
class Typedef {
public:
std::string definition;
Typedef(char *line);
static bool isTypedef(char *line);
bool operator<(const Typedef &obj) const { return this->definition < obj.definition; }
};
Typedef::Typedef(char *line)
{
// Assumes isTypedef is true.
this->definition = line;
}
bool Typedef::isTypedef(char *line)
{
Tokenizer t(line);
// Hack for some SGI stuff that declares a multiline struct.
if ( (t.GetNextToken() == "typedef")
&& ((t.GetNextToken() != "struct") || (t.GetNextToken() != "{")) )
{
return true;
}
// Hack for how some WIN32 things are declared.
if (strncmp(line, "DECLARE_HANDLE(", 15) == 0)
{
return true;
}
return false;
}
class Function {
public:
std::string GetReturnType() const { return this->returnType; }
std::string GetEntry() const { return this->entry; }
std::string GetName() const { return this->name; }
std::string GetArguments() const { return this->arguments; }
int GetExtensionType() const { return this->extensionType; }
Function(char *line);
static bool isFunction(char *line);
const char *GetProcType();
bool operator<(const Function &obj) const { return this->name < obj.name; }
protected:
std::string returnType;
std::string entry;
std::string name;
std::string arguments;
int extensionType;
};
Function::Function(char *line) : extensionType(currentExtension.type)
{
// Assumes isFunction returns true.
Tokenizer t(line, " \n\t(");
t.GetNextToken();
std::string token = t.GetNextToken();
this->returnType = "";
while ((token == "const") || (token == "unsigned"))
{
this->returnType += token + " ";
token = t.GetNextToken();
}
this->returnType += token;
token = t.GetNextToken();
if (token == "*")
{
this->returnType += " *";
token = t.GetNextToken();
}
else if (token[0] == '*')
{
this->returnType += " *";
token = token.substr(1);
}
#ifdef DEBUG_PARSE
cerr << "Function return type: " << this->returnType << endl;
#endif
if (currentExtension.type == Extension::GL)
{
this->entry = "APIENTRY";
token = t.GetNextToken();
}
else if (currentExtension.type == Extension::WGL)
{
this->entry = "WINAPI";
token = t.GetNextToken();
}
else
{
this->entry = "";
}
#ifdef DEBUG_PARSE
cerr << "Function entry: " << this->entry << endl;
#endif
if (currentExtension.type == Extension::GL)
{
// Strip off "gl"
this->name = token.substr(2);
}
else
{
// Strip off "glX" or "wgl"
this->name = token.substr(3);
}
#ifdef DEBUG_PARSE
cerr << "Function name: " << this->name << endl;
#endif
this->arguments = t.GetRemainingString();
#ifdef DEBUG_PARSE
cerr << "Function arguments: " << this->arguments << endl;
#endif
}
bool Function::isFunction(char *line)
{
Tokenizer t(line);
std::string modifier = t.GetNextToken();
std::string sreturnType = t.GetNextToken();
if (sreturnType == "const")
{
// We don't really need the return type, just to skip over const.
sreturnType += " ";
sreturnType += t.GetNextToken();
}
std::string sentry = t.GetNextToken();
if (sentry == "*")
{
sreturnType += " *";
sentry = t.GetNextToken();
}
else if (sentry.size() && sentry[0] == '*')
{
sreturnType += " *";
sentry = sentry.substr(1);
}
return ( ( (currentExtension.type == Extension::GL)
&& (modifier == "GLAPI") && (sentry == "APIENTRY") )
|| ( (currentExtension.type == Extension::GL)
&& (modifier == "extern") && (sentry == "APIENTRY") )
|| ( (currentExtension.type == Extension::WGL)
&& (modifier == "extern") && (sentry == "WINAPI") )
|| ( (currentExtension.type == Extension::GLX)
&& (modifier == "extern") ) );
}
const char *Function::GetProcType()
{
static std::string proctype;
proctype = "PFN";
proctype += Extension::TypeToCapString(this->extensionType);
proctype += ToUpper(this->name);
proctype += "PROC";
return proctype.c_str();
}
static std::list<Extension> extensions;
static std::set<Extension> extensionset;
static std::map<Extension, std::list<Constant> > consts;
static std::map<Extension, std::list<Typedef> > types;
static std::map<Extension, std::list<Function> > functs;
static void ParseLine(char *line)
{
static bool inExtension = false;
static int ifLevel = 0;
Tokenizer tokens(line);
std::string firstToken = tokens.GetNextToken();
if (Extension::isExtension(line))
{
currentExtension = Extension(line);
#ifdef DEBUG_PARSE
cerr << "Recognized extension: " << line << endl;
#endif
// There are some exceptions to the extensions we support. This is
// because someone has placed some funky nonstandard stuff in the
// header files.
if ( (currentExtension.GetName() == "GLX_SGIX_video_source")
|| (currentExtension.GetName() == "GLX_SGIX_dmbuffer")
|| (currentExtension.GetName() == "GLX_SGIX_hyperpipe") )
{
inExtension = false;
return;
}
// Only add extension to list if it is not already in it.
if (extensionset.find(currentExtension) == extensionset.end())
{
if (currentExtension.GetName() == "GLX_ARB_get_proc_address")
{
// Special case where GLX_VERSION_1_4 depends on a typedef in
// GLX_ARB_get_proc_address, so we have to move the latter up.
extensions.push_front(currentExtension);
}
else
{
extensions.push_back(currentExtension);
}
extensionset.insert(currentExtension);
}
inExtension = true;
ifLevel = 0;
}
else if (inExtension)
{
if (strncmp(firstToken.c_str(), "#if", 3) == 0)
{
ifLevel++;
}
else if (firstToken == "#endif")
{
if (ifLevel == 0)
{
inExtension = false;
}
else
{
ifLevel--;
}
}
else if ( Constant::isConstant(line)
&& (strncmp(currentExtension.GetName().c_str(), (line+8),
currentExtension.GetName().length()) != 0) )
{
#ifdef DEBUG_PARSE
cerr << "Recognized constant: " << line << endl;
#endif
consts[currentExtension].push_back(line);
}
else if (Function::isFunction(line))
{
#ifdef DEBUG_PARSE
cerr << "Recognized function: " << line << endl;
#endif
functs[currentExtension].push_back(line);
}
else if (Typedef::isTypedef(line))
{
#ifdef DEBUG_PARSE
cerr << "Recognized typedef: " << line << endl;
#endif
types[currentExtension].push_back(line);
}
}
else
{
#ifdef DEBUG_PARSE
cerr << "Unrecognized line: " << line << endl;
#endif
}
}
static void WriteHeader(ostream &file, const char *generator,
char **srcs, int num_srcs)
{
file << "// -*- c++ -*-" << endl << endl;
file << "//DO NOT EDIT!" << endl;
file << "//This file was created with " << generator << endl
<< "//from";
for (int i = 0; i < num_srcs; i++)
{
file << " " << srcs[i];
}
file << endl << endl;
file << "/*" << endl
<< " * Copyright 2003 Sandia Corporation." << endl
<< " * Under the terms of Contract DE-AC04-94AL85000, there is a non-exclusive" << endl
<< " * license for use of this work by or on behalf of the" << endl
<< " * U.S. Government. Redistribution and use in source and binary forms, with" << endl
<< " * or without modification, are permitted provided that this Notice and any" << endl
<< " * statement of authorship are reproduced on all copies." << endl
<< " */" << endl << endl;
}
static void WriteClassDeclarationGuts(ostream &hfile, int type)
{
for (std::list<Extension>::iterator iextension = extensions.begin();
iextension != extensions.end(); ++iextension)
{
if (iextension->type != type) continue;
hfile << endl << " //Definitions for " << iextension->GetName().c_str() << endl;
std::map<Extension, std::list<Constant> >::iterator cExts
= consts.find(*iextension);
if (cExts != consts.end())
{
for (std::list<Constant>::iterator iconst = cExts->second.begin();
iconst != cExts->second.end(); ++iconst)
{
// New versions of the NVIDIA OpenGL headers for Linux can
// #define the same constant with the same value in multiple
// sections. This utility will happily parse those and write
// out duplicate enums in different enum classes, which
// confuses the C++ preprocessor terribly. Don't write out a
// definition for an enum with a name/value pair that's
// already been used.
if (ConstantsAlreadyWritten.find(std::make_pair(iconst->GetName(),
iconst->GetValue()))
== ConstantsAlreadyWritten.end())
{
if(strcmp(iconst->GetName().c_str(),"TIMEOUT_IGNORED")==0)
{
// BCC/VS6/VS70 cannot digest this C99 macro
hfile << "#if !defined(__BORLANDC__) && (!defined(_MSC_VER) || (defined(_MSC_VER) && _MSC_VER>=1310))" << endl;
}
hfile << " const GLenum " << iconst->GetName().c_str()
<< " = static_cast<GLenum>(" << iconst->GetValue().c_str() << ");" << endl;
ConstantsAlreadyWritten.insert(std::make_pair(iconst->GetName(),
iconst->GetValue()));
if(strcmp(iconst->GetName().c_str(),"TIMEOUT_IGNORED")==0)
{
// really special case for non C99 compilers like BCC
hfile << "#endif /* only for C99 compilers */" << endl;
}
}
else
{
hfile << " /* skipping duplicate " << iconst->GetName().c_str()
<< " = " << iconst->GetValue().c_str() << " */" << endl;
}
}
}
std::map<Extension, std::list<Typedef> >::iterator tExts
= types.find(*iextension);
if (tExts != types.end())
{
for (std::list<Typedef>::iterator itype = tExts->second.begin();
itype != tExts->second.end(); ++itype)
{
hfile << " " << itype->definition.c_str() << endl;
}
}
std::map<Extension, std::list<Function> >::iterator fExts
= functs.find(*iextension);
if (fExts != functs.end())
{
for (std::list<Function>::iterator ifunc = fExts->second.begin();
ifunc != fExts->second.end(); ++ifunc)
{
hfile << " extern VTKRENDERINGOPENGL_EXPORT " << ifunc->GetProcType()
<< " " << ifunc->GetName().c_str() << ";" << endl;
}
}
}
}
static void WriteFunctionPointerDeclarations(ostream &cxxfile, int type)
{
Extension::WriteSupportWrapperBegin(cxxfile, type);
for (std::map<Extension, std::list<Function> >::iterator fExts
= functs.begin();
fExts != functs.end(); ++fExts)
{
if (fExts->first.type != type) continue;
cxxfile << "//Functions for " << fExts->first.GetName().c_str() << endl;
for (std::list<Function>::iterator ifunc = fExts->second.begin();
ifunc != fExts->second.end(); ++ifunc)
{
cxxfile << "vtk" << Extension::TypeToString(type) << "::"
<< ifunc->GetProcType()
<< " vtk" << Extension::TypeToString(type) << "::"
<< ifunc->GetName().c_str() << " = NULL;" << endl;
}
}
Extension::WriteSupportWrapperEnd(cxxfile, type);
cxxfile << endl;
}
static void WriteCode(ostream &hfile, ostream &cxxfile)
{
// Write data for header file ---------------------------------
hfile << "#ifndef vtkgl_h" << endl
<< "#define vtkgl_h" << endl << endl;
hfile << "#include \"vtkRenderingOpenGLConfigure.h\"" << endl;
hfile << "#include \"vtkSystemIncludes.h\"" << endl;
hfile << "#include \"vtkWindows.h\"" << endl;
hfile << "#include \"vtkOpenGL.h\"" << endl;
hfile << "#include <stddef.h>" << endl << endl;
hfile << "#ifdef VTK_USE_X" << endl
<< "/* To prevent glx.h to include glxext.h from the OS */" << endl
<< "#define GLX_GLXEXT_LEGACY" << endl
<< "#include <GL/glx.h>" << endl
<< "#endif" << endl << endl;
hfile << "class vtkOpenGLExtensionManager;" << endl << endl;
hfile << "#ifndef APIENTRY" << endl
<< "#define APIENTRY" << endl
<< "#define VTKGL_APIENTRY_DEFINED" << endl
<< "#endif" << endl << endl;
hfile << "#ifndef APIENTRYP" << endl
<< "#define APIENTRYP APIENTRY *" << endl
<< "#define VTKGL_APIENTRYP_DEFINED" << endl
<< "#endif" << endl << endl;
hfile << "/* Undefine all constants to avoid name conflicts. They should be defined */" << endl
<< "/* with GL_, GLX_, or WGL_ prepended to them anyway, but sometimes you run */" << endl
<< "/* into a header file that gets it wrong. */" << endl;
for (std::map<Extension, std::list<Constant> >::iterator constlist
= consts.begin();
constlist != consts.end(); ++constlist)
{
for (std::list<Constant>::iterator c = (*constlist).second.begin();
c != (*constlist).second.end(); ++c)
{
hfile << "#ifdef " << (*c).GetName().c_str() << endl;
hfile << "#undef " << (*c).GetName().c_str() << endl;
hfile << "#endif" << endl;
}
}
Extension::WriteSupportWrapperBegin(hfile, Extension::GL);
hfile << endl << "namespace vtkgl {" << endl;
// Add necessary type declarations.
hfile << " //Define int32_t, int64_t, and uint64_t." << endl;
hfile << " typedef vtkTypeInt32 int32_t;" << endl;
hfile << " typedef vtkTypeInt64 int64_t;" << endl;
hfile << " typedef vtkTypeUInt64 uint64_t;" << endl;
// OpenGL 3.2 typedefs
hfile << " typedef int64_t GLint64;" << endl;
hfile << " typedef uint64_t GLuint64;" << endl;
hfile << " typedef struct __GLsync *GLsync;" << endl;
ConstantsAlreadyWritten.clear();
WriteClassDeclarationGuts(hfile, Extension::GL);
hfile << endl << " // Method to load functions for a particular extension.";
hfile << endl << " extern int VTKRENDERINGOPENGL_EXPORT LoadExtension(const char *name, "
<< "vtkOpenGLExtensionManager *manager);" << endl;
hfile << endl << " // Strings containing special version extensions.";
hfile << endl << " extern VTKRENDERINGOPENGL_EXPORT const char *GLVersionExtensionsString();" << endl;
hfile << endl << " const char *GLXVersionExtensionsString();" << endl;
hfile << "}" << endl;
Extension::WriteSupportWrapperEnd(hfile, Extension::GL);
Extension::WriteSupportWrapperBegin(hfile, Extension::GLX);
hfile << "namespace vtkglX {" << endl;
// glxext.h is not written very well. Add some typedefs that may not
// be defined.
hfile << " //Miscellaneous definitions." << endl;
hfile << " typedef XID GLXContextID;" << endl;
hfile << " typedef XID GLXPbuffer;" << endl;
hfile << " typedef XID GLXWindow;" << endl;
hfile << " typedef XID GLXFBConfigID;" << endl;
hfile << " typedef struct __GLXFBConfigRec *GLXFBConfig;" << endl;
hfile << " typedef vtkTypeInt32 int32_t;" << endl;
hfile << " typedef vtkTypeInt64 int64_t;" << endl;
ConstantsAlreadyWritten.clear();
WriteClassDeclarationGuts(hfile, Extension::GLX);
hfile << "}" << endl;
Extension::WriteSupportWrapperEnd(hfile, Extension::GLX);
Extension::WriteSupportWrapperBegin(hfile, Extension::WGL);
hfile << "namespace vtkwgl {" << endl;
ConstantsAlreadyWritten.clear();
WriteClassDeclarationGuts(hfile, Extension::WGL);
hfile << "}" << endl;
Extension::WriteSupportWrapperEnd(hfile, Extension::WGL);
hfile << endl
<< "#ifdef VTKGL_APIENTRY_DEFINED" << endl
<< "#undef APIENTRY" << endl
<< "#endif" << endl << endl;
hfile << "#ifdef VTKGL_APIENTRYP_DEFINED" << endl
<< "#undef APIENTRYP" << endl
<< "#endif" << endl << endl;
hfile << "#endif //vtkgl_h" << endl;
// Write data for C++ file --------------------------------------------
cxxfile << "#include \"vtkgl.h\"" << endl;
cxxfile << "#include \"vtkOpenGLExtensionManager.h\"" << endl << endl;
// Write function pointer declarations.
WriteFunctionPointerDeclarations(cxxfile, Extension::GL);
WriteFunctionPointerDeclarations(cxxfile, Extension::GLX);
WriteFunctionPointerDeclarations(cxxfile, Extension::WGL);
std::list<Extension>::iterator iextension;
// Write function to load function pointers.
cxxfile << "int vtkgl::LoadExtension(const char *name, vtkOpenGLExtensionManager *manager)" << endl
<< "{" << endl;
for (iextension = extensions.begin();
iextension != extensions.end(); ++iextension)
{
iextension->WriteSupportWrapperBegin(cxxfile);
cxxfile << " if (strcmp(name, \"" << iextension->GetName().c_str()
<< "\") == 0)" << endl
<< " {" << endl;
std::string vtkglclass = "vtk";
vtkglclass += Extension::TypeToString(iextension->type);
std::list<Function>::iterator ifunct;
for (ifunct = functs[*iextension].begin();
ifunct != functs[*iextension].end(); ++ifunct)
{
cxxfile << " " << vtkglclass.c_str() << "::"
<< ifunct->GetName().c_str() << " = reinterpret_cast<" << vtkglclass.c_str() << "::"
<< ifunct->GetProcType()
<< ">(manager->GetProcAddress(\""
<< Extension::TypeToString(iextension->type)
<< ifunct->GetName().c_str() << "\"));" << endl;
}
cxxfile << " return 1";
for (ifunct = functs[*iextension].begin();
ifunct != functs[*iextension].end(); ++ifunct)
{
cxxfile << " && (" << vtkglclass.c_str() << "::" << ifunct->GetName().c_str()
<< " != NULL)";
}
cxxfile << ";" << endl;
cxxfile << " }" << endl;
iextension->WriteSupportWrapperEnd(cxxfile);
}
cxxfile << " vtkGenericWarningMacro(<< \"Nothing known about extension \" << name" << endl
<< " << \". vtkgl may need to be updated.\");" << endl;
cxxfile << " return 0;" << endl
<< "}" << endl;
// Write functions to report special version extension strings.
cxxfile << endl << "const char *vtkgl::GLVersionExtensionsString()" << endl
<< "{" << endl
<< " return \"";
for (iextension = extensions.begin();
iextension != extensions.end(); ++iextension)
{
if (strncmp("GL_VERSION_", iextension->GetName().c_str(), 11) == 0)
{
cxxfile << iextension->GetName().c_str() << " ";
}
}
cxxfile << "\";" << endl
<< "}" << endl;
cxxfile << endl << "const char *vtkgl::GLXVersionExtensionsString()" << endl
<< "{" << endl
<< " return \"";
for (iextension = extensions.begin();
iextension != extensions.end(); ++iextension)
{
if (strncmp("GLX_VERSION_", iextension->GetName().c_str(), 12) == 0)
{
cxxfile << iextension->GetName().c_str() << " ";
}
}
cxxfile << "\";" << endl
<< "}" << endl;
}
int main(int argc, char **argv)
{
if (argc < 3)
{
cerr << "USAGE: " << argv[0] << " <output dir> <header files>" << endl;
return 1;
}
std::string outputDir = argv[1];
for (int i = 2; i < argc; i++)
{
#ifdef DEBUG_PARSE
cerr << "*** Parsing declarations from file " << argv[i] << endl;
#endif
ifstream file(argv[i]);
if (!file)
{
cerr << "Could not open " << argv[i] << endl;
return 2;
}
static char buf[4096]; // What are the odds of needing more?
// Loop on the stream state itself; testing eof() before the read would
// feed one stale/empty line to ParseLine at end of file.
while (file.getline(buf, 4096))
{
ParseLine(buf);
}
file.close();
}
ofstream hfile((outputDir + "/vtkgl.h").c_str());
WriteHeader(hfile, argv[0], argv+2, argc-2);
ofstream cxxfile((outputDir + "/vtkgl.cxx").c_str());
WriteHeader(cxxfile, argv[0], argv+2, argc-2);
WriteCode(hfile, cxxfile);
hfile.close();
cxxfile.close();
return 0;
}
| cpp |
<filename>metadata/page/page.start.json
{
"_id": "page.start",
"_type": "page.start",
"body": "- answers for checkboxes not containing </p><p>\r\n- answers for checkboxes and radios not containing &amp;\r\n- checkboxes being required",
"heading": "Monkeying around",
"lede": "Simple form with test cases",
"steps": [
"page.new-world",
"page.primate-differences",
"page.gibbon",
"page.gibbon-secret",
"page.email",
"page.check",
"page.confirmation"
],
"url": "/"
} | json |
Kim Dotcom, former chief of Megaupload, has promised that disruptive music service Megabox will launch this year.
On Twitter, Kim Dotcom sent out the following messages to over a hundred thousand followers:
Dotcom, real name Kim Schmitz, is currently embroiled in a legal battle with United States authorities over his former cyberlocker site Megaupload. Among other charges, the former Megaupload chief is accused of facilitating "massive worldwide online piracy", racketeering and money laundering, which has cost "over $500 million in damages and earned over $175 million in profit".
Despite a looming extradition and numerous charges, Dotcom seems determined to launch Megabox this year, claiming it will be "unstoppable".
Revealed last year by TorrentFreak, the Megabox platform is designed, at least in theory, to turn the media industry on its head. Due to be hosted on domain Megabox.com, the service will allow "artists to sell their creations direct to consumers and allowing artists to keep 90 percent of earnings."
Citing an invention called the "Megakey", Dotcom said that when users download music for free on the website, artists will still earn revenue. According to the Megaupload founder, the new business model has already been tested successfully with over a million users. Dotcom continued:
"You need to understand that some labels are run by arrogant and outdated dinosaurs who have been in business for 1000 years. These guys think an iPad is a facial treatment, the Internet is the devil, and wired phones are still hip.
They are in denial about the new realities and opportunities. They don't understand that the rip-off days are over. Artists are more educated than ever about how they are getting ripped off and how the big labels only look after themselves."
If you take a look through Dotcom's Twitter communications, it seems that several artists are already interested in joining the scheme, although we're yet to see what the U.S. authorities or music industry have to say in response.
| english |
Dr. Babu Jagjivan Ram’s 114th birth anniversary celebrations, workshop on ‘Babu Jagjivan Ram in Research Field’ and ‘Language and Social Science Research,’ Rajya Sabha Member Dr. L. Hanumanthaiah inaugurates, Mysore University Registrar Prof. R. Shivappa presides, Dr. Babu Jagjivan Ram Study Centre, Manasagangothri, 10.30 am. | english |
Can I take your coat? in Nepali: What's Nepali for Can I take your coat?? If you want to know how to say Can I take your coat? in Nepali, you will find the translation here. We hope this will help you to understand Nepali better.
Dictionary Entries near Can I take your coat?
"Can I take your coat? in Nepali." In Different Languages, https://www.indifferentlanguages.com/words/can_i_take_your_coat%3F/nepali. Accessed 07 Oct 2023.
Check out other translations to the Nepali language:
| english |
Hindu religious procession attacked by religious fanatics in Sahibganj (Jharkhand)
Police who cannot protect themselves, how can they protect the Hindus ? Hence, it is the need of the hour that the Hindus become capable of protecting themselves !
Islamist fanatics raised slogans of ‘Sar Tan Se Juda’ (decapitation) against Kajal Hindustani, a social worker and devout Hindu, in front of a Police station in Una.
Bengal’s law and order situation is spiralling out of control.
Such repeated cases prove that in India, Hindus are unsafe and not the Muslims. Will the so-called seculars and liberals utter a word against such incidents ?
Why don’t people like Muslim clerics, leaders, actor Naseeruddin Shah, journalist Rana Ayyub open their mouths about making bombs during Ramzan and using them to kill Hindus ?
Hindu religious festival processions in a Hindu-majority country are attacked by fanatic Muslims in many parts of the country and the so-called secularists, progressives, and human rights activists are all silent.
This incident clearly shows the breakdown of law and order in a Trinamool Congress run State.
“The partition between India and Pakistan is artificial. The people of Pakistan now feel that partition was a mistake,” said H. H. Sarsanghchalak (Dr) Mohanji Bhagwat.
It wouldn’t be wrong if someone said that the Police are not capable of controlling Muslim religious fanatics, and hence that they, rather than the fanatics, are responsible for spoiling Hindu festivals.
Incidents of attacks by religious fanatic Muslims on Ramnavami processions and Hindus returning after the immersion of Devi Idols have been reported at Sasaram, Nalanda, and Bhagalpur. | english |
Access to energy should not be a privilege of the rich, and the poor should have an equal right to energy, Prime Minister Narendra Modi said on Monday. In his remarks at a special session of the G7 summit in Germany, Mr. Modi said the clean energy sector has emerged as a major domain in India and that developed economies should invest in this arena. The G7 summit was dominated by the Ukraine crisis, but Mr. Modi referred to the ongoing war in Europe only indirectly.
"All of you will also agree with this that energy access should not be the privilege of the rich only. A poor family also has the same rights on energy. And today, when energy costs are sky-high due to geopolitical tensions, it is more important to remember this thing," Mr. Modi said in his remarks at a session of the summit on "Investing in a better future: Climate, Energy, Health".
India, along with South Africa, Indonesia, Argentina and Senegal, are the guests at this year's G7 summit.
Mr. Modi’s remarks on the alternative energy scene in India reflected India’s effort to blend in with the dominant agenda of the G7 grouping, which is aimed at severing links between the Russian energy sector and major buyers like China and India, both of which have drawn Western attention for purchasing Russia’s Urals crude at a steep discount. Western economies have been championing “clean energy” as an alternative to the risk-prone oil and gas mix.
"Today, a huge market for clean energy technologies is emerging in India. G-7 countries can invest in research, innovation, and manufacturing in this field. The scale that India can provide for every new technology can make that technology affordable for the whole world," Mr. Modi said.
The Indian Prime Minister said that India's commitment to environmental protection has remained unaffected despite setbacks that the country suffered during its long civilisational history. "Ancient India had seen a time of immense prosperity; then we have also tolerated the centuries of slavery, and now independent India is the fastest growing big economy in the whole world," said Mr. Modi, referring to India's projected post-pandemic growth rate, which is expected to be 7.5% according to the World Bank.
He said India accounts for 17% of the global population but is responsible for 5% of global carbon emissions, and added, "The main reason behind this is our lifestyle, which is based on the theory of co-existence with nature."
Mr. Modi was earlier welcomed around midday by German Chancellor Olaf Scholz at Schloss Elmau, the venue for the G7 summit. He also had a "tea break" with French President Emmanuel Macron, who recently faced an electoral setback and lost absolute majority in the French Parliament.
Following the meeting, French Ambassador Emmanuel Lenain, in a social media message, said, "Friendship at the highest level: President Emmanuel Macron and Prime Minister Narendra Modi at a crucial G7 summit for collective decisions on global challenges and world stability. "
Mr. Modi also met the South African President Cyril Ramaphosa and Indonesian President Joko Widodo and discussed fintech, food processing and connectivity. | english |
Tensions between China and India have been running high for a very long time. Of late, the situation has turned even tenser after the intense Galwan clashes, where Indian and Chinese troops had a violent face-off. India lost a few brave hearts, and China too suffered a few losses.
With this, the Indian government started dealing with Chinese apps with an iron fist. Many apps were banned in India on the grounds that they were gathering user information, which was deemed unacceptable. Though the Chinese app makers called the move objectionable, the Indian government did not take a step back.
The North American nation Canada has now joined the row and banned TikTok. The government said that the app is not good for the privacy and security of the nation and hence the decision was taken. Stressing that the safety of its citizens comes first, the government announced the decision.
The Canadian government said that TikTok cannot be used on government devices given the threat it poses to the nation, and Prime Minister Justin Trudeau said that government employees would no longer be able to use TikTok on their work phones. However, ordinary citizens can take their own call on this later.
"I suspect that as government takes the significant step of telling all federal employees that they can no longer use TikTok on their work phones, many Canadians from business to private individuals will reflect on the security of their own data and perhaps make choices," the Prime Minister said.
The TikTok app enjoys a massive following among youngsters across the globe. The app has given young people a big platform to become content creators. However, there are many allegations that these apps are gathering the personal information of their users and transferring it to China.
As a result, many nations are taking steps to ban the app. After India banned the app, the likes of America and a few European nations either banned the app or took the initiative to restrict its usage in their respective regions. Now Canada has also joined the row and banned the usage of the app for government employees. | english |
Martina Navratilova is among the most active tennis players, past or present, on social media. The Czech-American tennis legend believes that celebrities, in general, should use the platform to have a positive impact on their legion of fans and followers and express their beliefs.
Navratilova strongly opined that 'silence' regarding a matter is equivalent to 'consent' and urged fellow celebrities to use social media to speak up more often.
She spoke about her own active use of social media, stating that while she did not "sign up to be famous," she realized the importance of using her stature to stand up for her beliefs through social media. The nine-time Wimbledon singles champion opened up about the same during a recent interview with Dr. Lipi Roy.
"I didn't sign up to be famous, I just played tennis," Martina Navratilova said in a video posted on Dr. Lipi Roy's YouTube page. "But social media gave me the platform and I use it quite a lot."
"It depends on the personality of that person and the specific situation of course. I think when you have that platform, you've got to use it, because silence is consent, and most of the time that's not acceptable. So I wish more people will take it seriously," Navratilova added.
The 18-time Grand Slam singles champion further stated that celebrities do not have a specific responsibility to use social media and other platforms to reach out to people and express their beliefs on important matters. However, she believes they should still use it as they have a great opportunity to create an impact, and feels most celebrities do so.
"I don't think they have a responsibility, but they have the capability of having that platform and making good on that, making a difference in people's lives," she said. "I think most of them do take it seriously and do go public and try to help out people, whether they're indirectly or directly linked to them."
Martina Navratilova, the winner of a staggering 59 Grand Slams across singles and doubles, including 16 US Open and 12 Australian Open titles, believes that athletes tend to contribute more towards their communities and for public welfare, as compared to other celebrities.
Navratilova stressed that athletes have more charitable organizations and foundations and raise more money for good causes, putting it down to the fact that they got where they are through pure talent and hard work, not through influence.
"Overall, particularly athletes, I think because we got to where we are because we are good and not because we know somebody or we came from the right family," she said. "I think because of that, you see a lot of athletes have a charitable organization, or they raise money, or they do a lot for the community where they live as well as the world, more so than perhaps other celebrities. It's an opportunity and you should use it."
Earlier this month, Martina Navratilova rang the NASDAQ bell in New York City’s Times Square to kick off Breast Cancer Awareness Month. The 65-year-old herself is a breast cancer survivor and often uses her platform to raise awareness regarding the matter.
| english |
import argparse
import json
import cv2
def recognize(model, image, threshold):
    # OpenCV 2.4 Eigenfaces recognizer; predictions farther than `threshold`
    # from the nearest training face are rejected.
    emodel = cv2.createEigenFaceRecognizer(threshold=threshold)
    emodel.load(model)
    simg = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    simg = cv2.resize(simg, (256, 256))
    [p_label, p_confidence] = emodel.predict(simg)
    return (p_label, p_confidence)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Match face.')
    parser.add_argument('-m', '--model', default='eigenModel.xml',
                        help='EigenFace model')
    parser.add_argument('-i', '--image', required=True,
                        help='Image to process')
    parser.add_argument('-t', '--threshold', type=float, required=True,
                        help='Matching threshold, 100.0 is very close')
    parser.add_argument('--meta', default='imgmeta.json',
                        help='File mapping subject to p_label')
    parser.add_argument('--result_out', default='result.json',
                        help='JSON file holding results')
    args = parser.parse_args()

    (p_label, p_confidence) = recognize(args.model, args.image, args.threshold)
    subject = None
    if args.meta is not None:
        with open(args.meta, 'r') as metafile:
            img_meta = json.load(metafile)
        subject = img_meta[str(p_label)]['subject']
    result = dict(p_label=p_label, p_confidence=p_confidence, subject=subject)
    with open(args.result_out, 'w') as resultfile:
        json.dump(result, resultfile)
    print(result)
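# Note (added for context; an assumption about the reader's OpenCV version, not
# part of the original script): cv2.createEigenFaceRecognizer() is the OpenCV 2.4
# API and is not available in OpenCV 3+/4. With the contrib modules installed
# (opencv-contrib-python), the equivalent modern calls would be:
#
#   emodel = cv2.face.EigenFaceRecognizer_create(threshold=threshold)
#   emodel.read(model)   # replaces emodel.load(model)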
| python |
Lucknow: The Yogi Adityanath government in Uttar Pradesh has completely lifted the day curfew in all 75 districts.
According to a statement issued by Additional Chief Secretary (ACS) Information, Navneet Sehgal Tuesday morning, curfew restrictions will remain in place in all districts from 7 pm to 7 am.
“There are now less than 600 active cases in all the districts and hence the day curfew is being lifted. Only 797 new cases have been reported in the past 24 hours,” he said.
Uttar Pradesh now has 14,000 active cases and 2.85 lakh tests were carried out in the past 24 hours. | english |
There is no denying the fact that SEO has the power to make or break your business online. Organic SEO can take your business to the next level, but black hat SEO can even lead to a penalty from search engines. If you follow the right SEO techniques, they will surely give your bottom line a boost. Following are 5 compelling benefits of SEO services:
An SEO-optimized website loads faster, works flawlessly across devices and platforms, and is easy to read and navigate. Websites that load quickly and are easy to read are more likely to grab the attention of a lot of visitors. People like websites that help them find the information they are looking for. This way you can turn your visitors or readers into loyal customers and subscribers.
SEO is one of the most effective and affordable digital marketing strategies and can take your business to new heights. Moreover, it will help you reach an authentic audience who are already looking for your product or service. Spend just a few hours of time and a small amount of money, and SEO will drive targeted traffic to your website and eventually maximize your ROI.
One of the major benefits of securing top rankings on the SERPs is building brand awareness. If your site appears on the first page of top search engines such as Google and Yahoo, your prospective customers will develop trust in your brand. People generally don't rely on brands or services that don't have a strong web presence.
Your potential customers are already interested in your products or services; all you need to do is implement an effective SEO strategy that engages them and compels them to buy from you. You need to research which keywords or images would grab the attention of your potential customers. With the right strategy, you can reach the right audience at the right time.
Suppose there are two businesses selling the same product at the same price. One of them has an SEO-optimized website while the other doesn't. Assuming everything else is equal, which company do you think will draw more customers to its website? Which company will grow faster? The answer is clear: the one with the SEO-optimized website. The reason: it will rank higher, generate brand awareness, and eventually bring more sales and profit.
Search engines and SEO are very powerful. They can make or break the image of your business. Effective and white hat SEO strategies will give you long term benefits. The digital world has provided us with an array of options to meet our goals and ace the competition. | english |
// Repository: joshhedstrom/helloagainsaml
package com.hedstrom.spring.boot.security.saml.web.core;
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.saml.SAMLCredential;
import org.springframework.security.saml.userdetails.SAMLUserDetailsService;
import org.springframework.stereotype.Service;
@Service
public class SAMLUserDetailsServiceImpl implements SAMLUserDetailsService {

    private static final Logger LOG = LoggerFactory.getLogger(SAMLUserDetailsServiceImpl.class);

    public Object loadUserBySAML(SAMLCredential credential)
            throws UsernameNotFoundException {
        String userID = credential.getNameID().getValue();
        LOG.info(userID + " is logged in");

        List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
        GrantedAuthority authority = new SimpleGrantedAuthority("ROLE_USER");
        authorities.add(authority);

        // locate user in database
        // returns
        //   username
        //   password
        //   enabled: set to true if the user is enabled
        //   accountNonExpired: set to true if the account has not expired
        //   credentialsNonExpired: set to true if the credentials have not expired
        //   accountNonLocked: set to true if the account is not locked
        //   authorities: the authorities that should be granted to the caller if they
        //                presented the correct username and password and the user is
        //                enabled. Not null.
        return new User(userID, "<<PASSWORD>>", true, true, true, true, authorities);
    }
}
| java |
Every girl who cares about the beauty of her hands dreams of doing any manicure without difficulty. While simply painting the nails usually causes no problems, more complex designs often require additional preparation. Let's figure out what to apply around the nail during a manicure so that no varnish stains are left on the skin.
Why is this necessary?
First, let's determine what benefits the application of different formulations brings to the skin around the nail plate. Some of them are designed to care for the cuticle and to soften it. From special oils and creams the border of the nail plate becomes more clear, does not peel and does not separate.
But this article is about something else. Complex manicure designs - for example, water marbling and ombre - require a more serious approach than a classic French or single-color manicure. In the process of creating such effects, the varnish goes beyond the boundary of the zone allocated to it and simply stains the finger. To preserve the aesthetic appearance of the hands, a special liquid must be applied to the skin beforehand.
A little later we will get to the answer to the question of what to apply around the nail during a manicure. But first you need a complete picture of why this step is needed at all.
This is what beauty salons call a smooth transition from one varnish color to another.
There is nothing complicated here, so the stages of creating such a manicure can be divided into several simple steps:
- The nail plate is prepared for application of the varnish - the edge is adjusted, polished, polished and the edge of the cuticle is leveled.
- On a piece of foil, apply two or three rows of varnish close to each other, so that they are connected. In the middle they can be mixed.
- A piece of foam (a dish sponge will do) is dipped into this varnish and pressed against the nail several times.
- This operation is done for all fingers.
Now it remains only to wait until the varnish dries and then remove its residue. If the fingers had not been prepared, this would be quite difficult to do. That is why it is important to know what to apply around the nail before a sponge manicure - and to apply it beforehand.
A water manicure is much more complicated, and its instructions deserve a separate article. So let's not dwell on it and go directly to the topic.
In professional beauty salons, of course, there is no question of what to apply to the skin around the nail during a manicure. There, special products from well-known manufacturers are used for this purpose.
Many cosmetic companies produce compounds that prevent the ingress of coloring substances on the fingers. In addition, they often contain useful substances that nourish and moisturize the skin. Some formulations have antiseptic and regenerating properties that improve the appearance of the hands directly during procedures.
The main component of such professional solutions is usually rubber - it creates a thin polymer film on the surface of the finger. Excess varnish is easily removed with it after all procedures are completed. This is one of the most convenient options for preparing for a complex manicure, but at the same time the most expensive. The cost of 15 ml averages about 250 rubles.
This is one of the most popular ways to keep excess varnish off your fingers. Finding a fatty cream is not difficult - it can be a hand or face care product. The main thing is that it is poorly absorbed.
A fatty cream is what girls who want to save on specialized products apply to the skin around the nail before a manicure. Here is how they do it. Directly before painting, a thick layer of cream is applied to the skin around the nail. You need to do this carefully: the fatty mass should not get onto the nail plate itself. If this happens, carefully wipe away all the cream from where it does not belong with a cotton pad or stick, otherwise the varnish will lie unevenly. Now you can create any design at your fingertips.
Once the manicure is done and the varnish has dried a little, you can remove its remains together with the cream. Use a cotton swab for this, taking care not to damage the pattern on the nails.
What can those who have no fatty cream or professional liquid in the house apply around the nail? For example, petroleum jelly. This oily, viscous substance is an excellent protector against excess varnish. Before creating the design, apply it to the areas around the nails in a not-too-thick layer. Just as with the cream, make sure the product does not get where the manicure will be created.
After you paint your nails and let them dry a little, you can remove the protective layer with a cotton swab. The advantage of petroleum jelly is that it is very poorly absorbed, creating reliable protection against varnish. You can buy it in any pharmacy, and it's quite inexpensive.
Polyvinyl acetate (PVA) glue is one of the main analogues of what salons apply around the nail during a manicure. Some professional products have a pronounced smell of PVA, which suggests their compositions are similar.
One of the most important advantages of glue is its harmlessness. It is a water-based substance with a characteristic odor, but it is not toxic. When dried, PVA creates a dense coating that resembles a rubber film. It perfectly protects the skin from varnish and is easily removed after the manicure is completed. This tool has earned many positive reviews from those who create nail designs at home. Many note that there is virtually no difference between professional products and PVA glue.
In addition, you can buy it in almost any office supply store. But bear in mind that some unscrupulous manufacturers sell a substance that is too liquid, whose useful properties are minimal. So it is not worth skimping when buying PVA, especially since even the highest price for it is quite affordable.
What can you apply around the nail during a manicure if none of the products described above is at hand? Any very fatty substance will do - for example, oil. It can be ordinary vegetable oil intended for cooking, or a special manicure oil. In the latter case, you combine creating a beautiful nail design with caring for the skin around the nails.
Many people do not limit themselves to the methods described above. The main material for one alternative is ordinary office tape, which is glued around the nail. This method has several drawbacks. First, tape sticks to the skin poorly, so it cannot create reliable protection. Second, no matter how well it is glued, varnish will inevitably get under the film. With a water manicure, tape cannot protect from moisture at all. In other words, this option is recommended only if all the others are unavailable.
After creating a manicure, even with the products mentioned above, small spots of paint may remain, and they must be removed. You can use a special corrector pencil for this, sold in cosmetic stores. It will handle even the finest work without damaging the manicure. But after a few uses the pencil may darken, and you will have to buy a new one. Isn't it better to find a cheaper way to get your hands in order?
For large spots, ordinary cotton swabs are suitable; for finer work, toothpicks. Wrap the tip of a wooden stick in a small piece of cotton wool and moisten it with nail polish remover. This method suits those who are not afraid of painstaking work and want to save money.
Now you know what to apply around the nail for a water manicure or when creating an ombre, and how to remove leftover varnish without spending extra money. Do not forget to look after your hands, and then any design, even the simplest, will look stylish and attractive on them. | english |
from django.db import models
from sorl.thumbnail import ImageField

# Create your models here.
class Post(models.Model):
    text = models.CharField(max_length=140, blank=False, null=False)
    image = ImageField()

    def __str__(self):
        return self.text | python |
Is former President Pranab Mukherjee's son Abhijit Mukherjee joining the TMC? Speculation is rife after he met TMC general secretary and Mamata Banerjee's nephew Abhishek Banerjee in the latter's Camac Street office on Monday. Mukherjee neither confirmed nor denied the proposition when ANM News spoke to him on the phone. "I am in a remote village in East Midnapur and I will talk about this later," he said. The former Congress MP from Jangipur had a long meeting with Abhishek Banerjee, ostensibly to invite him to a mega programme on the occasion of the first death anniversary of Pranab Mukherjee. | english |
A man is forced to face his fears and confront his troubled past. He must find a way to survive when his co-worker snaps and goes on a violent killing spree.
Hidden Gem!
This little gem packs a powerful punch with realistic acting that evokes emotions that could only be felt in such a situation. The pacing and directing were perfect, and thanks to the multi-layered performances, I was never bored. Rough, rugged, and real, it reminded me of something from the 70s golden period of cinema. Like a small artsy film in which a young Al Pacino or Dustin Hoffman would have excelled in the role. Maybe we're seeing the start of a new golden age of cinema? Let's hope so. This film is pure art, and imo best of all, it does not attempt to appease or compromise to any politically correct criteria.
| english |
## White Board tips from Big V
These are my personal notes, gathered from our instructor during my time at Code Fellows, so that I may excel at whiteboarding - an area of growth for me.
## Motivation
To get better!
## Build status
Build status of continuous integration, i.e. Travis, AppVeyor, etc. Ex. -
[](https://travis-ci.org/akashnimare/foco)
[](https://ci.appveyor.com/project/akashnimare/foco/branch/master)
## Code style
If you're using any code style like xo, standard, etc., mention it here. That will help others while contributing to your project. Ex. -
[](https://github.com/feross/standard)
## Screenshots
Include logo/demo screenshot etc.
## Tech/framework used
Ex. -
<b>Built with</b>
- [Electron](https://electron.atom.io)
## Features
What makes your project stand out?
## Code Example
Show what the library does as concisely as possible, developers should be able to figure out **how** your project solves their problem by looking at the code example. Make sure the API you are showing off is obvious, and that your code is short and concise.
## Installation
Provide a step-by-step series of examples and explanations of how to get a development environment running.
## API Reference
Depending on the size of the project, if it is small and simple enough the reference docs can be added to the README. For medium size to larger projects it is important to at least provide a link to where the API reference docs live.
## Tests
Describe and show how to run the tests with code examples.
## How to use?
If people like your project they'll want to learn how they can use it. To do so, include a step-by-step guide to using your project.
## Contribute
Let people know how they can contribute to your project. A [contributing guideline](https://github.com/zulip/zulip-electron/blob/master/CONTRIBUTING.md) will be a big plus.
## Credits
Give proper credits. This could be a link to any repo which inspired you to build this project, any blog posts, or links to people who contributed to this project.
#### Anything else that seems useful
## License
A short snippet describing the license (MIT, Apache etc)
MIT © [Yourname]() | markdown |
package com.dbs.tcdemo.model;

public class PersonDto {

    private String name;

    public PersonDto(Person person) {
        this.name = person.getName();
    }

    public String getName() {
        return name;
    }
}
| java |
Apple starts iPhone 7 production in Bengaluru, India. Here is all you need to know about it.
Apple has started assembling its iPhone 7 in Bengaluru, India, as a step towards the Make in India campaign. Wistron is the company behind the assembly of the smartphone; it had already assembled the iPhone 6S in the country, and this is not the first Apple phone to be manufactured in India.
An Apple spokesperson told IANS, "We are proud to be producing iPhone 7 in Bengaluru for our local customers furthering our long-term commitment in India."
The assembling of the iPhone 7 began in March. Last year the company announced that it is going to invest Rs 3,000 crore in the Narasapura industrial area in Kolar, Karnataka. Wistron first assembled the iPhone SE and iPhone 6s in India, and it seems the company has gradually started moving towards the latest models as well.
As per Wistron India head Gururaj A, the company is working on setting up an iPhone-making plant spread over 43 acres of land, capable of employing more than 10,000 people. This will ultimately boost employment in the country.
The Indian electronics market is growing very fast, and it seems Apple doesn't want to miss the opportunity to produce its products at a lower manufacturing cost.
"I think to start with, it makes sense for Apple to localise assembling of models that have the potential to scale up and then slowly expands it to the entire portfolio," Tarun Pathak, Associate Director at Hong Kong-based Counterpoint Research, told IANS.
Let's see how much of a price difference Indians will notice after this step. Let us tell you that the prices of the earlier iPhones assembled in the country remained the same; hopefully this time Apple will reduce the pricing of the phones. | english |
In the smartphone-centric world of today, it can be rough if you suffer from any kind of ocular condition which keeps you from seeing your phone clearly all the time. Meaning, if you’re visually impaired or colorblind, you might have trouble seeing the hyper-colored, tiny text many sighted people take for granted.
There are, of course, hardware solutions and apps designed to help, but you’ll also be able to find some options already built into your phone.
Here are a few accessibility options you can find on iPhones and Android which might help you if you have some kind of visual impairment.
Colorblindness afflicts a small percentage of the population, and both kinds of phones have options which change the colors on screen to make the viewing experience easier for those who suffer from it.
There are several different types of colorblindness, each of which limits different spectrums of color. Some of the most common types include protanopia (red-weakness), deuteranopia (green-weakness), and tritanopia (blue-weakness). If you have any of these types of colorblindness, then you can adjust your phone's spectrum to accommodate you.
On iPhones, the options can be a little hard to find. Go to Settings > General. Under the Accessibility options, select Display Accommodations, and then Color Filters. From there, select from a list of options depending on what’s easiest on your eyes.
The iPhone also has a greyscale option, which can be a good catch-all for those who have rarer forms of color-weakness. There’s also a color tint option which turns your phone screen a reddish shade, for those who have sensitive eyes and have trouble looking at normal phone screens in the dark.
The process on Android is very similar. Go to your Settings, then select Accessibility from the System menu. Like iOS, color filters are listed under the Color Correction tab. The names are slightly different, but they’re describing the same conditions.
There are only three options, and some Androids have greyscale, though the latter varies from model to model.
No matter how big phone screens become, text sizes tend to remain rather small. Even if you have 20/20 vision, it can be hard on your eyes to look at such small words. Just looking at bright screens is enough to strain the eyes — it’s important to take steps to reduce the strain in order to avoid the headache that can result.
One way to avoid having to squint at your screen is to make your font size bigger. On iPhones, the basic size options can go fairly large — for those who need even more of a boost, select Larger Text to see even bigger text.
For Android users, the Font Size setting is under the Display settings. You might have to look under Advanced to see the option.
If bigger text isn’t helping, your phone also features speech options which will read on-screen text to you.
iPhones have a feature called VoiceOver, which will read text aloud when a selection of it is tapped on-screen. You can adjust the pitch and speed of the speech to make it as pleasant as possible. Under the Accessibility menu, select VoiceOver and flip the switch.
Android’s text-to-speech option is powered by Google. Like iOS, you can adjust the rate of speech and language.
Do you know of any useful apps that can help the visually-impaired navigate smartphones? Let us know in the comments.
| english |
$(function(){
    var $gnb = $('#gnb');
    var $menuBtn = $('.menu');

    $menuBtn.on('mouseenter', function(){
        $gnb.find('.menu-bg').show().addClass("fadeIn");
        $gnb.find('.menu-bg').removeClass("fadeOut");
        $gnb.find('.black').show();
        $gnb.find('.white').hide();
    });

    $gnb.hover(function(){},
        function(){
            $gnb.find('.black').hide();
            $gnb.find('.white').show();
            $gnb.find('.menu-bg').removeClass("fadeIn");
            setTimeout(function(){
                $gnb.find('.menu-bg').hide();
            }, 600);
            $gnb.find('.menu-bg').addClass("fadeOut");
            if ($menuWrap.hasClass("fadeIn")) {
                menuToggle(0, "fadeOut");
            } else {
                menuToggle(0);
            }
        });

    var $menuBtn = $('.menu-toggle'),
        $menuWrap = $('.menu-text'),
        menu_img = ['/asset/img/menu/menu_btn_over.png', '/asset/img/menu/menu_btn_click.png'],
        toggle_check = 0,
        current_class = 'fadeOut',
        menuToggle = function(toggle, _class){
            $menuWrap.removeClass('fadeIn fadeOut');
            var button_target = $('.menu-toggle').find('img');
            button_target.attr('src', menu_img[toggle]);
            if (_class) {
                $menuWrap.addClass(_class);
            }
            toggle_check = toggle;
        };

    $menuBtn.on('click', function(e){
        if (toggle_check == 0) {
            toggle_check = 1;
            current_class = 'fadeIn';
        } else {
            toggle_check = 0;
            current_class = 'fadeOut';
        }
        menuToggle(toggle_check, current_class);
        e.preventDefault();
    });

    var $slider = {};
    var $layer = new layer({
        layerid : 'sample_layer',
        cookie_name : 'sample_pop_cookie_name',
        is_cookie_check : false,
        closeBtnClassName : 'closeBtn',
        inputClassName : 'nomorepop',
        before_show : function(me){
            setTimeout(function(){
                $slider = $('#' + me.layerid).find(".layer-bxslider").bxSlider({
                    auto: false,
                    pause: 4000,
                    controls: true,
                    pager: false,
                    prevText : '<img src="/asset/img/layer_left_btn.png" />',
                    nextText : '<img src="/asset/img/layer_right_btn.png" />',
                    onSliderLoad : function(){
                        $(this).find('img').css('opacity', 1);
                    }
                });
                $('.simple_layer_bg').show();
            }, 0);
        },
        before_hide : function(){
            $('.simple_layer_bg').hide();
            $slider.destroySlider();
        }
    });

    var $showLayerBtn = $('.callLayerBtn');
    $showLayerBtn.on('click', function(e){
        $layer.show();
        e.preventDefault();
    });
}); | javascript |
// Repository: alibabacloud-sdk-swift/alibabacloud-sdk
// This file is auto-generated, don't edit it. Thanks.
package com.aliyun.dataworks_public20200518.models;

import com.aliyun.tea.*;

public class GetMetaTableBasicInfoResponse extends TeaModel {
    @NameInMap("RequestId")
    @Validation(required = true)
    public String requestId;

    @NameInMap("ErrorCode")
    @Validation(required = true)
    public String errorCode;

    @NameInMap("ErrorMessage")
    @Validation(required = true)
    public String errorMessage;

    @NameInMap("HttpStatusCode")
    @Validation(required = true)
    public Integer httpStatusCode;

    @NameInMap("Success")
    @Validation(required = true)
    public Boolean success;

    @NameInMap("Data")
    @Validation(required = true)
    public GetMetaTableBasicInfoResponseData data;

    public static GetMetaTableBasicInfoResponse build(java.util.Map<String, ?> map) throws Exception {
        GetMetaTableBasicInfoResponse self = new GetMetaTableBasicInfoResponse();
        return TeaModel.build(map, self);
    }

    public static class GetMetaTableBasicInfoResponseData extends TeaModel {
        @NameInMap("TableName")
        @Validation(required = true)
        public String tableName;

        @NameInMap("TableGuid")
        @Validation(required = true)
        public String tableGuid;

        @NameInMap("OwnerId")
        @Validation(required = true)
        public String ownerId;

        @NameInMap("TenantId")
        @Validation(required = true)
        public Long tenantId;

        @NameInMap("ProjectId")
        @Validation(required = true)
        public Long projectId;

        @NameInMap("CreateTime")
        @Validation(required = true)
        public Long createTime;

        @NameInMap("LastModifyTime")
        @Validation(required = true)
        public Long lastModifyTime;

        @NameInMap("LifeCycle")
        @Validation(required = true)
        public Integer lifeCycle;

        @NameInMap("IsVisible")
        @Validation(required = true)
        public Integer isVisible;

        @NameInMap("LastDdlTime")
        @Validation(required = true)
        public Long lastDdlTime;

        @NameInMap("LastAccessTime")
        @Validation(required = true)
        public Long lastAccessTime;

        @NameInMap("EnvType")
        @Validation(required = true)
        public Integer envType;

        @NameInMap("DataSize")
        @Validation(required = true)
        public Long dataSize;

        @NameInMap("Comment")
        @Validation(required = true)
        public String comment;

        @NameInMap("ProjectName")
        @Validation(required = true)
        public String projectName;

        public static GetMetaTableBasicInfoResponseData build(java.util.Map<String, ?> map) throws Exception {
            GetMetaTableBasicInfoResponseData self = new GetMetaTableBasicInfoResponseData();
            return TeaModel.build(map, self);
        }
    }
}
| java |
/*
* THIS FILE WAS AUTOMATICALLY GENERATED, DO NOT EDIT.
*
* This file was generated by the dom/make_names.pl script.
*
* Copyright (C) 2005, 2006, 2007, 2008, 2009 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE COMPUTER, INC. ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE COMPUTER, INC. OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
* OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "config.h"
#include "JSWMLElementWrapperFactory.h"
#if ENABLE(WML)
#include "JSWMLAElement.h"
#include "JSWMLAccessElement.h"
#include "JSWMLAnchorElement.h"
#include "JSWMLBRElement.h"
#include "JSWMLCardElement.h"
#include "JSWMLDoElement.h"
#include "JSWMLFieldSetElement.h"
#include "JSWMLGoElement.h"
#include "JSWMLImageElement.h"
#include "JSWMLInputElement.h"
#include "JSWMLInsertedLegendElement.h"
#include "JSWMLMetaElement.h"
#include "JSWMLNoopElement.h"
#include "JSWMLOnEventElement.h"
#include "JSWMLOptGroupElement.h"
#include "JSWMLOptionElement.h"
#include "JSWMLPElement.h"
#include "JSWMLPostfieldElement.h"
#include "JSWMLPrevElement.h"
#include "JSWMLRefreshElement.h"
#include "JSWMLSelectElement.h"
#include "JSWMLSetvarElement.h"
#include "JSWMLTableElement.h"
#include "JSWMLTemplateElement.h"
#include "JSWMLTimerElement.h"
#include "WMLNames.h"
#include "WMLAElement.h"
#include "WMLAccessElement.h"
#include "WMLAnchorElement.h"
#include "WMLBRElement.h"
#include "WMLCardElement.h"
#include "WMLDoElement.h"
#include "WMLFieldSetElement.h"
#include "WMLGoElement.h"
#include "WMLElement.h"
#include "WMLImageElement.h"
#include "WMLInputElement.h"
#include "WMLInsertedLegendElement.h"
#include "WMLMetaElement.h"
#include "WMLNoopElement.h"
#include "WMLOnEventElement.h"
#include "WMLOptGroupElement.h"
#include "WMLOptionElement.h"
#include "WMLPElement.h"
#include "WMLPostfieldElement.h"
#include "WMLPrevElement.h"
#include "WMLRefreshElement.h"
#include "WMLSelectElement.h"
#include "WMLSetvarElement.h"
#include "WMLTableElement.h"
#include "WMLTemplateElement.h"
#include "WMLTimerElement.h"
#include <wtf/StdLibExtras.h>
#if ENABLE(VIDEO)
#include "Document.h"
#include "Settings.h"
#endif
using namespace JSC;
namespace WebCore {
using namespace WMLNames;
typedef JSNode* (*CreateWMLElementWrapperFunction)(ExecState*, JSDOMGlobalObject*, PassRefPtr<WMLElement>);
static JSNode* createWMLAElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLAElement, element.get());
}
static JSNode* createWMLAccessElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLAccessElement, element.get());
}
static JSNode* createWMLAnchorElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLAnchorElement, element.get());
}
static JSNode* createWMLBRElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLBRElement, element.get());
}
static JSNode* createWMLCardElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLCardElement, element.get());
}
static JSNode* createWMLDoElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLDoElement, element.get());
}
static JSNode* createWMLFieldSetElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLFieldSetElement, element.get());
}
static JSNode* createWMLGoElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLGoElement, element.get());
}
static JSNode* createWMLImageElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLImageElement, element.get());
}
static JSNode* createWMLInputElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLInputElement, element.get());
}
static JSNode* createWMLInsertedLegendElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLInsertedLegendElement, element.get());
}
static JSNode* createWMLMetaElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLMetaElement, element.get());
}
static JSNode* createWMLNoopElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLNoopElement, element.get());
}
static JSNode* createWMLOnEventElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLOnEventElement, element.get());
}
static JSNode* createWMLOptGroupElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLOptGroupElement, element.get());
}
static JSNode* createWMLOptionElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLOptionElement, element.get());
}
static JSNode* createWMLPElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLPElement, element.get());
}
static JSNode* createWMLPostfieldElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLPostfieldElement, element.get());
}
static JSNode* createWMLPrevElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLPrevElement, element.get());
}
static JSNode* createWMLRefreshElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLRefreshElement, element.get());
}
static JSNode* createWMLSelectElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLSelectElement, element.get());
}
static JSNode* createWMLSetvarElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLSetvarElement, element.get());
}
static JSNode* createWMLTableElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLTableElement, element.get());
}
static JSNode* createWMLTemplateElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLTemplateElement, element.get());
}
static JSNode* createWMLTimerElementWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLTimerElement, element.get());
}
JSNode* createJSWMLWrapper(ExecState* exec, JSDOMGlobalObject* globalObject, PassRefPtr<WMLElement> element)
{
typedef HashMap<WTF::AtomicStringImpl*, CreateWMLElementWrapperFunction> FunctionMap;
DEFINE_STATIC_LOCAL(FunctionMap, map, ());
if (map.isEmpty()) {
map.set(aTag.localName().impl(), createWMLAElementWrapper);
map.set(accessTag.localName().impl(), createWMLAccessElementWrapper);
map.set(anchorTag.localName().impl(), createWMLAnchorElementWrapper);
map.set(brTag.localName().impl(), createWMLBRElementWrapper);
map.set(cardTag.localName().impl(), createWMLCardElementWrapper);
map.set(doTag.localName().impl(), createWMLDoElementWrapper);
map.set(fieldsetTag.localName().impl(), createWMLFieldSetElementWrapper);
map.set(goTag.localName().impl(), createWMLGoElementWrapper);
map.set(imgTag.localName().impl(), createWMLImageElementWrapper);
map.set(inputTag.localName().impl(), createWMLInputElementWrapper);
map.set(insertedLegendTag.localName().impl(), createWMLInsertedLegendElementWrapper);
map.set(metaTag.localName().impl(), createWMLMetaElementWrapper);
map.set(noopTag.localName().impl(), createWMLNoopElementWrapper);
map.set(oneventTag.localName().impl(), createWMLOnEventElementWrapper);
map.set(optgroupTag.localName().impl(), createWMLOptGroupElementWrapper);
map.set(optionTag.localName().impl(), createWMLOptionElementWrapper);
map.set(pTag.localName().impl(), createWMLPElementWrapper);
map.set(postfieldTag.localName().impl(), createWMLPostfieldElementWrapper);
map.set(prevTag.localName().impl(), createWMLPrevElementWrapper);
map.set(refreshTag.localName().impl(), createWMLRefreshElementWrapper);
map.set(selectTag.localName().impl(), createWMLSelectElementWrapper);
map.set(setvarTag.localName().impl(), createWMLSetvarElementWrapper);
map.set(tableTag.localName().impl(), createWMLTableElementWrapper);
map.set(templateTag.localName().impl(), createWMLTemplateElementWrapper);
map.set(timerTag.localName().impl(), createWMLTimerElementWrapper);
}
CreateWMLElementWrapperFunction createWrapperFunction = map.get(element->localName().impl());
if (createWrapperFunction)
return createWrapperFunction(exec, globalObject, element);
return CREATE_DOM_NODE_WRAPPER(exec, globalObject, WMLElement, element.get());
}
}
#endif
| cpp |
Both my 9-year-old shepherd mixes take this daily and love it. It doesn't smell too fishy, and they don't seem to notice it at all, which is great because one of mine can be a bit picky. Both dogs have had TPLO surgery (one boy had both knees done!), and our surgeon highly recommended we get them on supplements ASAP. They've been on this for years and have been doing really well!
| english |
MELBOURNE - Oil prices rose on Wednesday as sanctions on Russian banks following Moscow's invasion of Ukraine hampered trade finance for crude shipments and some traders opted to avoid Russian supplies in an already tight market.
Brent crude futures climbed $3.55, or 3.4%, to $108.52 a barrel at 0135 GMT, scaling highs not seen since July 2014.
U.S. West Texas Intermediate (WTI) crude futures were up $3.75, or 3.6%, to $107.16, after peaking at $107.55 in early trade, the highest since July 28, 2014.
"Trade disruptions are starting to get people's attention," said Westpac economist Justin Smirk.
"Issues around trade finance and insurance - that's all impacting exports from the Black Sea. The supply shocks are unfolding," he said.
Russian oil exports account for around 8% of global supply.
At the same time, while Western powers have not imposed sanctions on energy exports directly, U.S. traders at hubs in New York and the U.S. Gulf are shunning Russian crude.
"People are not touching Russian barrels. You may see some on the water right now, but they were bought prior to the invasion. There won't be much after that," one New York Harbor trader told Reuters.
A coordinated release of 60 million barrels of oil by International Energy Agency member countries agreed on Tuesday put a lid on market gains, but analysts said that would only provide temporary relief on the supply front.
"They helped to cap the rise, but if you want to turn prices around, you need something more sustainable," Smirk said.
Commercial oil stockpiles are at their lowest since 2014, the IEA said.
Against that backdrop, the Organization of the Petroleum Exporting Countries, Russia and allies, together known as OPEC+, are due to meet on Wednesday, where they are expected to stick to plans to add 400,000 barrels per day of supply each month.
The U.S. Energy Information Administration is due to release weekly data on Wednesday, with analysts polled by Reuters expecting a crude inventory build of 2.7 million barrels.
{"name":"<NAME>","harga":" Rp. 233.835/ kemasan","golongan":"http://medicastore.com/image_banner/obat_keras_dan_psikotropika.gif","kandungan":"Beraprost Natrium.","indikasi":"Memperbaiki luka, nyeri, dan keadaan rasa dingin berkaitan dengan penyumbatan arteri kronis. Hipertensi paru-paru primer.","kontraindikasi":"Pasien haemorrhage. Kehamilan.","perhatian":"Pasien yang minum antikoagulan, antitrombosis atau fibrinolitik. Wanita menstruasi, pasien dengan kecenderungan perdarahan atau diatesis (lebih peka terhadap penyakit). Lanjut usia, anak-anak, kehamilan. menyusui.Interaksi Obat: warfarin, aspirin, tiklopidine, urokinase, sediaan prostaglandin I2.","efeksamping":"Sakit kepala, semburan panas, gangguan saluran pencernaan, kecenderungan perdarahan, pusing, meningkatkan enzim hati, trigliserida dan bilirubin.","indeksamanwanitahamil":"","kemasan":"Tablet 20 mcg x 30 biji.","dosis":"Memperbaiki luka, nyeri, dan dan keadaan rasa dingin berkaitan dengan penyumbatan arteri kronis. Dewasa: 120 mcg sehari dalam 3 dosis terbagi. Hipertensi paru-paru primer: 60 mcg sehari terbagi dalam 3 dosis. Naikkan dosis jika dibutuhkan sampai maksimal 180 mcg sehari dalam 3-4 dosis terbagi.","penyajian":"Dikonsumsi bersamaan dengan makanan","pabrik":"Astellas.","id":"10749","category_id":"2"}
| json |
Placement NPTI (ER)
There is a separate placement cell in our Institution. The Placement Cell plays a vital role in organizing campus recruitment programmes for PGDC Trainees, PDC Trainees and B.Tech students. It also provides the infrastructural facilities to conduct group discussions, tests and interviews, besides catering to other logistics.
| english |
The Tent(A)
26 ⌞The Lord continued,⌟ “Make the inner tent with ten sheets made from fine linen yarn. Take violet, purple, and bright red yarn, and creatively work an angel [a] design into the fabric. 2 Each sheet will be 42 feet long and 6 feet wide—all the same size. 3 Five of the sheets must be sewn together, and the other five must also be sewn together. 4 Make 50 violet loops along the edge of the end sheet in each set, 5 placing the loops opposite each other. 6 Make 50 gold fasteners. Use them to link the ⌞two sets of⌟ sheets together so that the tent is a single unit.
7 “Make 11 sheets of goats’ hair to form an outer tent over the inner tent. 8 Each of the 11 sheets will be 45 feet long and 6 feet wide. 9 Sew five of the sheets together into one set and the remaining six into another set. Fold the sixth sheet in half ⌞to hang⌟ in front of the tent. 10 Make 50 loops along the edge of the end sheet in each set. 11 Make 50 bronze fasteners, and put them through the loops to link the inner tent together as a single unit. 12 The remaining half-sheet should hang over the back of the inner tent. 13 There will be 18 inches left over on each side because of the length of the outer tent’s sheets. That part should hang over each side in order to cover the inner tent. 14 Make a cover of rams’ skins that have been dyed red for the outer tent. Over that put a cover made of fine leather.
15 “Make a framework out of acacia wood for the inner tent. 16 Each frame is to be 15 feet long and 27 inches wide, 17 with two identical pegs. Make all the frames for the inner tent the same way. 18 Make 20 frames for the south side of the inner tent. 19 Then make 40 silver sockets at the bottom of the 20 frames, two sockets at the bottom of each frame for the two pegs. 20 For the north side of the inner tent ⌞make⌟ 20 frames 21 and 40 silver sockets, two at the bottom of each frame. 22 Make six frames for the far end, the west side. 23 Make two frames for ⌞each of⌟ the corners at the far end of the inner tent. 24 These will be held together at the bottom and held tightly at the top by a single ring.[b] Both corner frames will be made this way. 25 There will be eight frames with 16 silver sockets, two at the bottom of each frame.
26 “Make crossbars out of acacia wood: five for the frames on one side of the inner tent, 27 five for those on the other side, and five for the frames on the far end of the inner tent, the west side. 28 The middle crossbar will run from one end to the other, halfway up the frames. 29 Cover the frames with gold, make gold rings to hold the crossbars, and cover the crossbars with gold.
30 “Set up the inner tent according to the plans you were shown on the mountain.
31 “Make a canopy of violet, purple, and bright red yarn. Creatively work an angel design into fine linen yarn. 32 Use gold hooks to hang it on four posts of acacia wood covered with gold, standing in four silver sockets. 33 Hang the canopy from the fasteners in the ceiling, and put the ark containing the words of my promise under it. The canopy will mark off the most holy place from the holy place. 34 Put the throne of mercy that is on the ark in the most holy place.
35 “Place the table outside the canopy on the north side of the inner tent, and put the lamp stand opposite the table on the south side.
Copyright © 1995, 2003, 2013, 2014, 2019, 2020 by God’s Word to the Nations Mission Society. All rights reserved.
| english |
{"event":{"packageReference":{"assemblyPaths":["/path/to/a.dll"],"probingPaths":["/probing/path/1","/probing/path/2"],"packageRoot":"/the/package/root","packageName":"ThePackage","packageVersion":"1.2.3","isPackageVersionSpecified":true}},"eventType":"PackageAdded","command":{"token":"the-token","id":"command-id","commandType":"SubmitCode","command":{"code":"#r \"nuget:ThePackage,1.2.3\"","submissionType":0,"targetKernelName":null}}} | json |
<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<ui:composition
xmlns="http://www.w3.org/1999/xhtml"
xmlns:f="http://xmlns.jcp.org/jsf/core"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:c="http://xmlns.jcp.org/jsp/jstl/core"
xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
xmlns:a4j="http://richfaces.org/a4j"
xmlns:rich="http://richfaces.org/rich"
xmlns:aries="http://aries.org/jsf">
<!-- action support -->
<ui:include src="/common/actionSupport.xhtml" />
<!--
** graphicsListActions
** a4j:jsFunction methods to support the Graphics List.
-->
<a4j:region>
<h:outputScript>
var graphicsListState = null;
</h:outputScript>
<!--
** refreshGraphicsList(event)
** Refreshes the current Graphics List.
-->
<a4j:jsFunction
name="refreshGraphicsList"
execute="@this"
immediate="true"
bypassUpdates="true"
limitRender="true"
action="#{graphicsListManager.refresh}"
onbegin="setCursorWait(); showProgress('Nam', 'Graphics Records', 'Refreshing current Graphics List...')"
oncomplete="setCursorDefault(this); hideProgress()"
render="graphicsListActions, graphicsListMenu, graphicsListToolbar, graphicsListPane">
</a4j:jsFunction>
<!--
** executeSelectFromGraphicsList(recordIndex, recordKey)
** Handles actions generated from a row or column in the Graphics List.
** Selects Element on server-side. Executes NO action on server-side.
** Uses a queuing delay of 0ms - no waiting to combine with other actions like double-click.
-->
<a4j:jsFunction
name="executeSelectFromGraphicsList"
execute="@this"
immediate="true"
bypassUpdates="true"
limitRender="true"
onbegin="beginSelect(this)"
oncomplete="completeSelect(this)"
render="graphicsListActions, graphicsListToolbar, #{render}">
<!-- these values are passed-in -->
<a4j:param name="recordIndex" assignTo="#{graphicsListManager.selectedRecordIndex}" />
<a4j:param name="recordKey" assignTo="#{graphicsListManager.selectedRecordKey}" />
<!-- these values are assigned here -->
<a4j:param name="selector" assignTo="#{selectionContext.selectedAction}" value="graphics" />
<!-- provide event queue settings -->
<a4j:attachQueue requestGroupingId="graphicsListEvents" requestDelay="0" />
</a4j:jsFunction>
<!--
** executeActionFromGraphicsList(recordIndex, recordKey, type, action, section)
** Selects Element on server-side. Executes action on server-side.
** Uses a queuing delay of 400ms so rapid events in the same group can be combined.
** This is typically used by double-click and other submit actions.
-->
<a4j:jsFunction
name="executeActionFromGraphicsList"
execute="@this"
immediate="true"
bypassUpdates="true"
limitRender="true"
action="#{workspaceManager.executeAction}"
oncomplete="setCursorDefault(); hideProgress()"
render="graphicsListActions, graphicsListToolbar, #{render}">
<!-- these values are passed-in -->
<a4j:param name="recordIndex" assignTo="#{graphicsListManager.selectedRecordIndex}" />
<a4j:param name="recordKey" assignTo="#{graphicsListManager.selectedRecordKey}" />
<a4j:param name="type" assignTo="#{selectionContext.selectedType}" />
<a4j:param name="action" assignTo="#{selectionContext.selectedAction}" />
<a4j:param name="section" assignTo="#{graphicsWizard.section}" />
<!-- provide event queue settings -->
<a4j:attachQueue requestGroupingId="graphicsListEvents" requestDelay="400" />
</a4j:jsFunction>
<!--
** executeActionForElement(type, action)
** Executes 'action' associated with Element 'type' on server-side.
** Uses a queuing delay of 0ms - no waiting for any future actions.
** This is used by actions triggered from menus and toolbars.
-->
<a4j:jsFunction
name="executeActionForElement"
execute="@this"
immediate="true"
bypassUpdates="true"
limitRender="true"
action="#{workspaceManager.executeAction}"
render="graphicsListActions, graphicsListMenu, graphicsListToolbar">
<!-- these values are passed-in -->
<a4j:param name="type" assignTo="#{selectionContext.selectedType}" />
<a4j:param name="action" assignTo="#{selectionContext.selectedAction}" />
<!-- provide event queue settings -->
<a4j:attachQueue requestDelay="0" />
</a4j:jsFunction>
<!--
** graphicsListActions
** Javascript methods to support the Graphics List.
-->
<a4j:outputPanel
id="graphicsListActions">
<h:outputScript>
<!--
** getGraphicsListRowKey()
** Returns the unique record key of the selected row.
-->
function getGraphicsListRowKey() {
if (graphicsListState != null)
return graphicsListState.recordKey;
return null;
}
<!--
** getGraphicsListRowLabel()
** Returns the record label of the selected row.
-->
function getGraphicsListRowLabel() {
if (graphicsListState != null)
return graphicsListState.recordLabel;
return null;
}
<!--
** initializeGraphicsListState()
** Initializes and verifies Graphics List state information.
-->
function initializeGraphicsListState() {
try {
var rowIndex = '#{graphicsListManager.selectedRecordIndex}';
var recordKey = '#{graphicsListManager.selectedRecordKey}';
var recordLabel = '#{graphicsListManager.selectedRecordLabel}';
if (recordKey != '') {
updateGraphicsListState(null, rowIndex, recordKey, recordLabel);
}
} catch(e) {
alert(e);
}
}
<!--
** updateGraphicsListState(event, rowIndex, recordKey, recordLabel)
** Updates client-side state information for Graphics List.
-->
function updateGraphicsListState(event, rowIndex, recordKey, recordLabel) {
graphicsListState = new Object();
graphicsListState.rowIndex = rowIndex;
//graphicsListState.recordId = recordId;
graphicsListState.recordKey = recordKey;
graphicsListState.recordLabel = recordLabel;
//show(graphicsListState);
}
<!--
** enableGraphicsListActions(type)
** Enables (or disables) Graphics List actions based on current client-side state.
-->
function enableGraphicsListActions(type) {
//enableButton('graphicsListViewButton');
enableButton('graphicsListNewButton');
enableButton('graphicsListEditButton');
enableButton('graphicsListRemoveButton');
}
<!--
** processGraphicsListMouseDown(event, rowIndex, recordKey, recordLabel)
** Handles mouseDown event on the Graphics List.
-->
function processGraphicsListMouseDown(event, rowIndex, recordKey, recordLabel) {
updateGraphicsListState(event, rowIndex, recordKey, recordLabel);
enableGraphicsListActions('graphics');
try {
executeSelectFromGraphicsList(rowIndex, recordKey);
} catch(e) {
alert(e);
}
}
<!--
** processGraphicsListDoubleClick(event, rowIndex, recordKey, recordLabel)
** Handles double-click action on the Graphics List.
-->
function processGraphicsListDoubleClick(event, rowIndex, recordKey, recordLabel) {
try {
setCursorWait(event.target);
setCursorWait(event.currentTarget);
showProgress('Nam', 'Graphics Records', 'Preparing Graphics ' + recordLabel + ' for editing...');
executeActionFromGraphicsList(rowIndex, recordKey, 'Graphics', 'workspaceManager.editObject');
} catch(e) {
alert(e);
}
}
<!--
** processViewElement(event, type, action)
** Opens selected Element 'type' record.
** Goes to the Element 'type' summary page.
-->
function processViewElement(event, type, action) {
if (action == null)
action = 'workspaceManager.viewObject';
try {
setCursorWait(event.target);
setCursorWait(event.currentTarget);
setCursorWait(#{rich:element('graphicsListTable')});
if (graphicsListState != null) {
var label = graphicsListState.recordLabel;
showProgress('Nam', type+' Records', 'Opening \"'+label+'\" '+type+' for viewing...');
} else showProgress('Nam', type+' Records', 'Opening '+type+' for viewing...');
executeActionForElement(type, action);
} catch(e) {
alert(e);
}
}
<!--
** processNewElement(event, type, action)
** Creates new Element 'type' record.
** Goes to Element 'type' Wizard page.
-->
function processNewElement(event, type, action) {
if (action == null)
action = 'workspaceManager.newObject';
try {
setCursorWait(event.target);
setCursorWait(event.currentTarget);
if (graphicsListState != null) {
var label = graphicsListState.recordLabel;
showProgress('Nam', type+' Records', 'Creating new '+type+' for \"'+label+'\"...');
} else showProgress('Nam', type+' Records', 'Creating new '+type+'...');
executeActionForElement(type, action);
} catch(e) {
alert(e);
}
}
<!--
** processEditElement(event, type, action)
** Opens Element 'type' record for editing.
** Goes to Element 'type' Wizard page.
-->
function processEditElement(event, type, action) {
if (action == null)
action = 'workspaceManager.editObject';
try {
setCursorWait(event.target);
setCursorWait(event.currentTarget);
if (graphicsListState != null) {
var label = graphicsListState.recordLabel;
showProgress('Nam', type+' Records', 'Preparing \"'+label+'\" '+type+' for editing...');
} else showProgress('Nam', type+' Records', 'Preparing '+type+' for editing...');
executeActionForElement(type, action);
} catch(e) {
alert(e);
}
}
<!--
** processRemoveElement(event, type, action)
** Prompts user to remove selected Element 'type' record.
** Removes Element 'type' record from system.
-->
function processRemoveElement(event, type, action) {
var typeUncapped = uncapitalize(type);
var label = type;
if (graphicsListState != null)
label = graphicsListState.recordLabel + ' ' + type;
var warningTitle = 'Remove \"'+label+'\" from system';
if (action == null)
action = typeUncapped + 'EventManager.remove' + type;
popupWarningPrompt('Nam', warningTitle, 'Do you wish to continue?', action, 'graphicsListPane');
setCursorDefault();
}
</h:outputScript>
</a4j:outputPanel>
</a4j:region>
</ui:composition>
| html |
{"nft":{"id":203001782,"name":"Scoundrels #1782","description":"Scoundrels are a limited edition of on-chain generative pixel art characters tokenized on the Ethereum blockchain.\nThis ragtag bunch of thieves and misfits are perfectly capable of relieving anyone of their most prized possessions.","image":"https://lh3.googleusercontent.com/r-DTznG3cTBV4YJ1-R8_F-0Nc-IeFbYh3p4OFYh1islOLyTCXQ7eOnWMPKanltubj24xYn-1ncuMd75PJnLHig3fJXjZsLdC7YfG","external_url":"https://artblocks.io/token/2<PASSWORD>782","attributes":[{"trait_type":"All Scoundrels"},{"trait_type":"Hat Color","value":"Blue"},{"trait_type":"Hair","value":"Half Up"},{"trait_type":"Clothes","value":"Blouse"},{"trait_type":"Face","value":"Goofy"},{"trait_type":"Beard","value":"None"},{"trait_type":"Gender","value":"Female"},{"trait_type":"Eyewear","value":"None"},{"trait_type":"Garment","value":"None"},{"trait_type":"Mask","value":"None"},{"trait_type":"Clothes Color","value":"Blue"},{"trait_type":"Garment Color","value":"None"},{"trait_type":"Hair Color","value":"Blond"},{"trait_type":"Mask Color","value":"None"},{"trait_type":"Hat","value":"Deerstalker"},{"trait_type":"Skin","value":"Black"},{"trait_type":"Trait Count","value":"16"}]},"attributeRarities":[{"trait_type":"Hat Color","value":"Blue","count":452,"ratio":0.220703125,"ratioScore":4.530973451327434},{"trait_type":"Hair","value":"Half Up","count":66,"ratio":0.0322265625,"ratioScore":31.03030303030303},{"trait_type":"Clothes","value":"Blouse","count":163,"ratio":0.07958984375,"ratioScore":12.56441717791411},{"trait_type":"Face","value":"Goofy","count":179,"ratio":0.08740234375,"ratioScore":11.441340782122905},{"trait_type":"Beard","value":"None","count":882,"ratio":0.4306640625,"ratioScore":2.3219954648526078},{"trait_type":"Gender","value":"Female","count":797,"ratio":0.38916015625,"ratioScore":2.5696361355081554},{"trait_type":"Eyewear","value":"None","count":1608,"ratio":0.78515625,"ratioScore":1.2736318407960199},{"trait_type":"Garment","value":"None","count":925,"ratio":0.45166015625,"ratioScore":2.214054054054054},{"trait_type":"Mask","value":"None","count":1490,"ratio":0.7275390625,"ratioScore":1.374496644295302},{"trait_type":"Clothes Color","value":"Blue","count":293,"ratio":0.14306640625,"ratioScore":6.989761092150171},{"trait_type":"Garment Color","value":"None","count":925,"ratio":0.45166015625,"ratioScore":2.214054054054054},{"trait_type":"Hair Color","value":"Blond","count":376,"ratio":0.18359375,"ratioScore":5.446808510638298},{"trait_type":"Mask Color","value":"None","count":1490,"ratio":0.7275390625,"ratioScore":1.374496644295302},{"trait_type":"Hat","value":"Deerstalker","count":59,"ratio":0.02880859375,"ratioScore":34.71186440677966},{"trait_type":"Skin","value":"Black","count":241,"ratio":0.11767578125,"ratioScore":8.49792531120332},{"trait_type":"Trait Count","value":"16","count":2048,"ratio":1,"ratioScore":1}],"rarityScore":129.55575860029444,"rank":1789}
Communication and interaction are the glue that allows every community to evolve, organise and grow together. The rise of the virtual world provides subaltern and voiceless people with unprecedented opportunities to assert themselves and experience a sense of belonging. At the same time, it challenges the interests and dominance of powerful communities in the social and physical world. Online communities are, geographically, much wider and more heterogeneous than physical communities. In the past, many communities in India were not allowed to participate in public discourse, organise themselves or advance their thoughts and ideas. Their concerns, ideas, experiences, ambitions and demands largely went unheard.
Digital social media platforms have enabled them to offer the world democratic ideas that reject prevailing hegemonic norms. Twitter, Facebook, YouTube, WhatsApp and several other platforms share this enabling characteristic, while honouring individual dignity and respect.
Information and Communication Technologies (ICTs) have substantially empowered Dalits, Adivasis, women, economically weaker sections and minorities, despite the uneasiness of the ruling dispensation. ICTs have provided them access to the required information without much obstruction – information and knowledge that is otherwise overlooked by the dominant media.
Theorist Frank Webster concludes that the new information society is more theoretical in nature: people strive to find a conceptual basis for their views, which in turn affects the mainstream narrative, making the media as a whole more democratic and egalitarian.
Today, creating content needs less investment than before. It is more often soft-skill driven. It is also not purely social-capital-caste driven. With the assistance of technology, anyone can create competent, authentic, effective and fresh online content.
With the increasing presence of subalterns and their competence to theorise their experiences (for example, at the Dalit Film Festival in Kirori Mal College, Delhi University, last year), the ability to access information is widening. There is a rise of Dalit intelligentsia and an alternative mass media of and for subalterns.
According to the American information studies scholar Nicole A. Cooke, information creation and consumption will always be a significant part of our lives and society. Unlike in the earlier agricultural and industrial societies, in the information society subalterns are attempting to generate a fresh epistemological terrain for themselves in India.
The information society is governed by knowledge workers, who can come from diverse social groups and identities. Meanwhile, the question remains: why could agricultural society not deliver justice to subalterns in India? Because the entire agricultural economy was under the control of landowners, who belonged to the upper castes. Similarly, capital is still largely controlled by the upper castes.
The digital age has provided opportunities for anyone to become a knowledge owner and producer instead of remaining a mere spectator and consumer of manufactured knowledge that furthers the interests of certain dominant castes. The reach of mobile technology and the internet has facilitated this, though legitimate concerns about the digital divide persist and discrimination in this respect should not be overlooked.
(The writer is assistant professor, Department of Philosophy, Indraprastha College for Women, Delhi University) | english |
<filename>data/terrainbuilding/2021/02/t3_ltuf4l.json
{
"author": {
"id": "t2_eer3nfl",
"name": "Bone_Dice_in_Aspic"
},
"date": {
"day": 1614384000,
"full": 1614451123,
"month": 1612137600,
"week": 1613865600
},
"id": "t3_ltuf4l",
"picture": {
"filesize": 69972,
"fullUrl": "https://external-preview.redd.it/dVHiEJW86W3dDcc7PDqAiNuHgoqf3ecTqp6ZXktF2nM.jpg?auto=webp&s=298c8665b46adff800b0d746e45538e09f856b1e",
"hash": "481d8b20b2",
"height": 512,
"lqip": "data:image/jpg;base64,/9j/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAANABADASIAAhEBAxEB/8QAFwAAAwEAAAAAAAAAAAAAAAAABAUGB//EACUQAAEDAwMDBQAAAAAAAAAAAAECAwQFESEABhITFTEiQUJRcf/EABUBAQEAAAAAAAAAAAAAAAAAAAAB/8QAFxEBAQEBAAAAAAAAAAAAAAAAAQAhEf/aAAwDAQACEQMRAD8AR7FqTgi02LHi9UhTrclKgErRZIKQAc3JOB76la/tiVFMtdWQG1IVlDZ5ZJJBH2Df8x51v9HoVIbq9Kk9uYVLlpKXXiOJJ4lQVYWur0+dFbg2BFr1TqL9bmOyW1tNstoQ2lotfLkCMXzm4sdFU5TDb//Z",
"url": "https://external-preview.redd.it/dVHiEJW86W3dDcc7PDqAiNuHgoqf3ecTqp6ZXktF2nM.jpg?width=640&crop=smart&auto=webp&s=1c10f6b7b9a2cc7afebcc3eb0bb9fdd7f87e551a",
"width": 640
},
"score": {
"comments": 1,
"downs": 0,
"ratio": 1,
"ups": 2,
"value": 2
},
"subreddit": {
"id": "t5_2xy5e",
"name": "TerrainBuilding"
},
"tags": [],
"title": "Wyloch figural Fountain, Egyptian pillars, some toy soldier kitbash minis",
"url": "https://www.reddit.com/r/TerrainBuilding/comments/ltuf4l/wyloch_figural_fountain_egyptian_pillars_some_toy/"
}
| json |
<reponame>fossabot/IdeaBag2-Solutions
# Welcome
This document specifies how to contribute to the repository.
### Table Of Contents
* [Contributing code](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#contributing-code)
* [Contributing a new program](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#contributing-a-new-program)
* [Contributing to an existing program](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#contributing-to-an-existing-program)
## Contributing code
### Contributing a new program
When adding a new program, please remember:
* Create a new folder named after the title of the idea inside its category folder
* Add a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) line (i.e. `#!/usr/bin/env python3`)
* Add a module docstring containing the title and the description of the idea from *Idea Bag 2*
* Add ways to both import the program and to run it individually
* If possible, try using only modules from the standard library
* All rules for [contributing to existing programs](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#contributing-to-an-existing-program)
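Put together, a new program that follows the rules above might start out like this (the idea title, description and conversion function are hypothetical examples, not an actual Idea Bag 2 entry):

```python
#!/usr/bin/env python3
"""Temperature Converter.

Convert temperatures between Celsius and Fahrenheit.
(Title and description would normally be copied from the Idea Bag 2 entry.)
"""


def celsius_to_fahrenheit(celsius):
    """Convert a temperature in degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32


def main():
    """Run the program interactively."""
    value = float(input("Temperature in Celsius: "))
    print(f"{value} °C = {celsius_to_fahrenheit(value)} °F")


# Allow the module to be imported as well as run individually.
if __name__ == "__main__":
    main()
```

Because the logic lives in a plain function, other programs can `from temperature_converter import celsius_to_fahrenheit`, while running the file directly starts the interactive `main()`.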
### Contributing to an existing program
When improving a program please follow these rules:
* Always add docstrings
* Use comments for everything that is not self-explanatory
* Keep the [Zen of Python](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#the-zen-of-python) in mind
* Before committing, check your code for [codestyle](https://github.com/jarik-marwede/IdeaBag2-Projects/blob/master/CONTRIBUTING.md#codestyle) issues
#### The Zen of Python
The Zen of Python is a set of general suggestions on how to write Python code.
You can view it by using
```python
import this
```
inside the Python shell.
> The Zen of Python, by <NAME>
>
> Beautiful is better than ugly.
>
> Explicit is better than implicit.
>
> Simple is better than complex.
>
> Complex is better than complicated.
>
> Flat is better than nested.
>
> Sparse is better than dense.
>
> Readability counts.
>
> Special cases aren't special enough to break the rules.
>
> Although practicality beats purity.
>
> Errors should never pass silently.
>
> Unless explicitly silenced.
>
> In the face of ambiguity, refuse the temptation to guess.
>
> There should be one-- and preferably only one --obvious way to do it.
>
> Although that way may not be obvious at first unless you're Dutch.
>
> Now is better than never.
>
> Although never is often better than *right* now.
>
> If the implementation is hard to explain, it's a bad idea.
>
> If the implementation is easy to explain, it may be a good idea.
>
> Namespaces are one honking great idea -- let's do more of those!
#### Codestyle
In general, all programs should follow the [PEP 8](https://pep8.org/) style guide.
However, this is only a suggestion.
Just try to make your code as readable and understandable as possible.
For example, do not shorten your code to make each line fit into 80 characters as described in PEP 8.
Instead only shorten your lines if the code is still readable afterwards.
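As an illustration of preferring readability over strict line length (the helper names and `User` record here are hypothetical):

```python
from collections import namedtuple

User = namedtuple("User", ["name", "active"])


def active_names_short(users):
    # Fits on one line, but crams the whole idea together.
    return [u.name for u in users if u.active and u.name]


def active_names_readable(users):
    # Same behaviour, spread out so each clause is easy to scan.
    return [
        user.name
        for user in users
        if user.active and user.name
    ]
```

Both functions return the same result; pick whichever form is easier to read in context rather than whichever is shortest.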
To automatically check for PEP 8 compliance, use:
* [Pylint](https://www.pylint.org/)
* [Flake8](https://pypi.python.org/pypi/flake8)
| markdown |
Even now, email remains the internet's killer application. So wouldn't it be rather compelling if you could receive it as soon as it's sent, just as quickly as you'd get an email at work or on broadband? We're email junkies ourselves. A few hours away from our mail results in nervousness about what we might have been sent. It's an addiction. One we need to feed.
Of course, business people with this addiction have long been able to get a fix of email on the go. The original Blackberry was conceived as a portable email machine with a miniature keyboard. However, it was nothing like a mobile phone - apart from the fact that it connected to a mobile network to send and receive data, of course.
But now Blackberry has realised that it can combine the compelling immediacy of its older products inside a conventional mobile phone shell. The first result of this - at least for the UK market - is the 7100. This particular model is the 7100v and, in the likely event you were wondering what the 'v' means, it stands for Vodafone. The 't' model will also be available by the time you read this and, rather unsurprisingly, that will be connected to T-Mobile.
Blackberry is undoubtedly the new cool; Palm, Symbian and Nokia have all said they will support Blackberry tech in future. Defying our early impression, the 7100v is an impressive smartphone - it looks and handles like a conventional mobile.
We had early reservations simply because we're great fans of the established smartphones such as the recently reviewed Sony-Ericsson P910i and the Microsoft Windows Mobile powered Orange SPV C500.
These players have learnt an awful lot since they first launched into the smartphone market and, while the 7100v is controlled by an easy-to-get-used-to selection button and jog wheel on the side of the device, it's true to say that we didn't get on with it quite as well as we did with Sony Ericsson's Symbian system or Windows Smartphone 2003. As you'd expect, the OS on the 7100v has also been skinned to look smart in Vodafone Live!-style livery.
One aspect of the phone that's not quite as intuitive is the keyboard. Yep, you're reading that right: instead of a conventional phone keypad, the 7100v squeezes in a QWERTY keyboard by putting two letters on each key.
Blackberry has obviously been beavering away on a new buzzword for this: SureType, which basically means that the company has thought at great length about where the keys should go - and, of course, about what to do if you need a ! or a £. Essentially, only you know if you'd get on with it. But it has to be said that it's typical of the innovation that Blackberry brings to a phone.
Feature-wise, the 7100v is a treat. Bluetooth is included, though Wi-Fi is lacking. The email function is brilliant. Business types can use fancy Exchange servers, but you can also set up a conventional email account. Simply choose an email address and it's associated with your handset in a minute or two via its IMEI number. No configuration is necessary and you can send and receive emails immediately. And, since it's permanently connected to your mail, you'll receive your mail in your pocket as soon as it's sent to you.
The handset price is dependent on tariff, in conventional mobile style, and includes a certain amount of email bandwidth.
Tech.co.uk was the former name of TechRadar.com. Its staff were at the forefront of the digital publishing revolution, and spearheaded the move to bring consumer technology journalism to its natural home – online. Many of the current TechRadar staff started life a Tech.co.uk staff writer, covering everything from the emerging smartphone market to the evolving market of personal computers. Think of it as the building blocks of the TechRadar you love today.
| english |
If the thought of a day without chocolate fills you with dread, then working your vacation itinerary around a tasting, class or factory tour will be just the ticket. Whether you fancy milk chocolate or 90% cacao, here is a selection of destinations that are contenders for the epithet of chocolate heaven.
1.The Chocolate Train, Switzerland.
Switzerland is, of course, famous for its chocolate. Its lush Alpine pastures are perfect for rearing the dairy cows responsible for the rich, creamy milk used in its production. The ultimate visitor attraction for chocolate-lovers begins in the city of Montreux. A vintage train made up of century-old "Belle Époque" Pullman carriages offers the chance to travel in style, though the train's modern panoramic coaches provide a better view of the dramatic scenery of the Bernese Oberland beyond the glass. The train stops in Gruyeres (where the famous cheese is made) before arriving in Broc. Alight to tour the Cailler-Nestle factory and taste its chocolate before reboarding the train to return to Montreux.
2. Hotel Chocolat, St Lucia.
If a train ride isn't sufficient, how about a stay on a cocoa plantation? The Caribbean’s not all about palm-fringed beaches lapped by warm gentle waves. Head for the hills and you'll find Hotel Chocolat's exquisite boutique hotel 'Boucan' in a lofty location amid the rainforest and cocoa groves of the Rabot Estate, St Lucia's oldest plantation. Stroll along walking paths lined with cocoa bushes and unwind in the spa with a Cocoa Juvenate spa treatment. When it's time for dinner, take your pick from the cocoa cuisine menu which showcases inventive ways of incorporating this magic ingredient into a surprising range of dishes.
3. Oaxaca, Mexico.
Wander down to the Tlacolula market near the southern city of Oaxaca to try 'tejate'. Toasted maize and fermented cacao beans form the base of this pre-Hispanic drink popular with the Zapotecs. To achieve the frothy head, tejate makers pour the liquid from a great height into clay bowls ready for drinking. Chocolate also forms the basis of some of Oaxaca's famous moles. The mole paste adds a smoky and rich flavor to the local cuisine. Buy it or try it at the stalls of the Mercado Benito Juárez or Mercado 20 de Noviembre.
4. Mampong, Ghana.
British chocoholics will know that Ghana is famed for its cacao - many of the cocoa beans used by top firm Cadburys originate in the West African nation. Travelers can take a tour which calls in at the Cocoa Research Institute, an experimental laboratory that focuses on pest and disease eradication. There are plenty of cacao specimens to observe at the Aburi Botanical Gardens. Tours usually finish at the Tetteh Quarshie farm in Mampong, where cocoa production in Ghana began back in 1878.
5. Antigua, Guatemala.
Take a cookery class at the Antigua branch of ChocoMuseo and learn not only about the stages of production from pod to plate, but also make your own chocolate. Participants are encouraged to join in at each stage. Pounding the beans into a smooth paste in the traditional pestle and mortar is hard work. Creating your own box of chocolates to take home is a lot of fun!
| english |
extern crate fnv;
use fnv::FnvHasher;
extern crate roaring;
use roaring::RoaringBitmap;
use std::cmp::Eq;
use std::collections::HashMap;
use std::fmt;
use std::hash::BuildHasherDefault;
use std::hash::Hash;
use column::Column;
use value::Value;
use matches::Match;
#[derive(Debug)]
pub struct IndexStats {
pub cardinality: usize,
}
impl fmt::Display for IndexStats {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "cardinality: {}", self.cardinality)
}
}
#[derive(Debug)]
pub enum Index<'a> {
UInt(HashMap<usize, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>),
Int(HashMap<usize, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>),
Boolean(HashMap<bool, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>),
Str(HashMap<&'a str, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>),
OwnedStr(HashMap<String, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>),
}
impl<'a> Index<'a> {
pub fn new_by_column(col: &Column) -> Index<'a> {
match *col {
Column::UInt => Index::UInt(HashMap::default()),
Column::Int => Index::Int(HashMap::default()),
Column::Boolean => Index::Boolean(HashMap::default()),
Column::Str => Index::Str(HashMap::default()),
Column::OwnedStr => Index::OwnedStr(HashMap::default()),
}
}
pub fn insert(&mut self, val: &Value<'a>, id: usize) {
match (self, val) {
(&mut Index::UInt(ref mut m), &Value::UInt(u)) => {
ensure_bitmap(m, u);
if let Some(idx) = m.get_mut(&u) {
idx.insert(id);
}
}
(&mut Index::Int(ref mut m), &Value::Int(i)) => {
let u = i as usize;
ensure_bitmap(m, u);
if let Some(idx) = m.get_mut(&u) {
idx.insert(id);
}
}
(&mut Index::Boolean(ref mut m), &Value::Boolean(tf)) => {
ensure_bitmap(m, tf);
if let Some(idx) = m.get_mut(&tf) {
idx.insert(id);
}
}
(&mut Index::Str(ref mut m), &Value::Str(s)) => {
ensure_bitmap(m, s);
if let Some(idx) = m.get_mut(s) {
idx.insert(id);
}
}
(&mut Index::OwnedStr(ref mut m), &Value::OwnedStr(ref s)) => {
ensure_bitmap(m, s.clone());
if let Some(idx) = m.get_mut(s) {
idx.insert(id);
}
}
_ => unreachable!(),
}
}
pub fn get_match_index(&self, pattern: &Match) -> Option<&RoaringBitmap<usize>> {
match (self, pattern) {
(&Index::UInt(ref m), &Match::UInt(u)) => m.get(&u),
(&Index::Int(ref m), &Match::Int(i)) => m.get(&(i as usize)),
(&Index::Boolean(ref m), &Match::Boolean(tf)) => m.get(&tf),
(&Index::Str(ref m), &Match::Str(s)) => m.get(s),
(&Index::OwnedStr(ref m), &Match::OwnedStr(ref s)) => m.get(s),
_ => unreachable!(),
}
}
pub fn get_value_index<'b>(&self, pattern: &Value<'b>) -> Option<&RoaringBitmap<usize>> {
match (self, pattern) {
(&Index::UInt(ref m), &Value::UInt(u)) => m.get(&u),
(&Index::Int(ref m), &Value::Int(i)) => m.get(&(i as usize)),
(&Index::Boolean(ref m), &Value::Boolean(tf)) => m.get(&tf),
(&Index::Str(ref m), &Value::Str(s)) => m.get(s),
(&Index::OwnedStr(ref m), &Value::OwnedStr(ref s)) => m.get(s),
_ => unreachable!(),
}
}
pub fn stats(&self) -> IndexStats {
let c = match self {
&Index::UInt(ref m) => m.len(),
&Index::Int(ref m) => m.len(),
&Index::Boolean(ref m) => m.len(),
&Index::Str(ref m) => m.len(),
&Index::OwnedStr(ref m) => m.len(),
};
IndexStats { cardinality: c }
}
}
fn ensure_bitmap<T: Eq + Hash>(m: &mut HashMap<T, RoaringBitmap<usize>, BuildHasherDefault<FnvHasher>>, key: T) {
    // Insert an empty bitmap for the key only if one is not already present.
    m.entry(key).or_insert_with(RoaringBitmap::new);
}
| rust |
<filename>_includes/hero.html
<div class="u-sm-size2of3 u-flexExpandLeft u-flexExpandRight">
{% include logo.html %}
</div>
<p class="u-textCenter">
<span class="u-md-inlineBlock">
A community-driven list of stats and news related to <a href="https://developers.google.com/web/progressive-web-apps/">Progressive Web Apps</a>
</span>
<span class="u-md-inlineBlock">
brought to you by
<a class="u-textNoWrap" href="https://cloudfour.com/">
<svg width="24" height="24" class="Icon Icon--larger u-spaceSides06" role="img">
<use xlink:href="#cloud-four"/>
</svg>Cloud Four
</a>
</span>
</p>
| html |
/*
* Copyright 2014 Space Dynamics Laboratory - Utah State University Research Foundation.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package edu.usu.sdl.openstorefront.web.rest.resource;
import edu.usu.sdl.openstorefront.doc.APIDescription;
import edu.usu.sdl.openstorefront.doc.DataType;
import edu.usu.sdl.openstorefront.doc.RequireAdmin;
import edu.usu.sdl.openstorefront.doc.RequiredParam;
import edu.usu.sdl.openstorefront.service.manager.model.TaskRequest;
import edu.usu.sdl.openstorefront.storage.model.UserMessage;
import edu.usu.sdl.openstorefront.validation.ValidationResult;
import edu.usu.sdl.openstorefront.web.rest.model.FilterQueryParams;
import java.util.List;
import javax.ws.rs.BeanParam;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.GenericEntity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
/**
* User messages (Queued) used for email
*
* @author dshurtleff
*/
@Path("v1/resource/usermessages")
@APIDescription("User message are queued message for the user.")
public class UserMessageResource
extends BaseResource
{
@GET
@APIDescription("Get a list of user messages")
@RequireAdmin
@Produces({MediaType.APPLICATION_JSON})
@DataType(UserMessage.class)
public Response userMessages(@BeanParam FilterQueryParams filterQueryParams)
{
ValidationResult validationResult = filterQueryParams.validate();
if (!validationResult.valid()) {
return sendSingleEntityResponse(validationResult.toRestError());
}
List<UserMessage> userMessages = service.getUserService().findUserMessages(filterQueryParams);
GenericEntity<List<UserMessage>> entity = new GenericEntity<List<UserMessage>>(userMessages)
{
};
return sendSingleEntityResponse(entity);
}
@GET
@APIDescription("Get an user message")
@RequireAdmin
@Produces({MediaType.APPLICATION_JSON})
@DataType(UserMessage.class)
@Path("/{id}")
public Response findUserMessage(
@PathParam("id") String userMessageId)
{
UserMessage userMessageExample = new UserMessage();
userMessageExample.setUserMessageId(userMessageId);
UserMessage userMessage = service.getPersistenceService().queryOneByExample(UserMessage.class, userMessageExample);
return sendSingleEntityResponse(userMessage);
}
@DELETE
@RequireAdmin
@APIDescription("Removes a user message")
@Path("/{id}")
public void deleteUserMessage(
@PathParam("id")
@RequiredParam String id)
{
service.getUserService().removeUserMessage(id);
}
@POST
@APIDescription("Processes all active user messages now")
@RequireAdmin
@Path("/processnow")
public Response processUserMessages()
{
TaskRequest taskRequest = new TaskRequest();
taskRequest.setAllowMultiple(false);
taskRequest.setName("Process All User Messages Now");
service.getAyncProxy(service.getUserService(), taskRequest).processAllUserMessages(true);
return Response.ok().build();
}
@POST
@APIDescription("Cleanup old user messages according to archive rules")
@RequireAdmin
@Path("/cleanold")
public Response cleanOld()
{
service.getUserService().cleanupOldUserMessages();
return Response.ok().build();
}
}
| java |
<reponame>cheike569/nextjs-simple-sitemap-generator
{
"name": "nextjs-simple-sitemap-generator",
"version": "1.0.7",
"description": "Simple but highly customizable sitemap generator for NextJS.",
"homepage": "https://github.com/cheike569/nextjs-simple-sitemap-generator",
"main": "lib/index.js",
"scripts": {
"test": "env TS_NODE_COMPILER_OPTIONS='{\"module\": \"commonjs\" }' mocha -r ts-node/register 'test/main.test.ts'",
"build": "rm -rf lib && tsc"
},
"types": "lib/index.d.ts",
"keywords": [],
"author": "<NAME>",
"url": "https://www.webzeile.com/",
"license": "MIT",
"devDependencies": {
"@types/chai": "^4.2.15",
"@types/expect": "^24.3.0",
"@types/fs-extra": "^9.0.8",
"@types/mocha": "^8.2.1",
"chai": "^4.3.4",
"mocha": "^8.3.2",
"ts-node": "^9.1.1"
},
"dependencies": {
"fs-extra": "^9.1.0",
"typescript": "^4.2.3"
}
}
| json |
"""
This file is part of the magtifun.abgeo.dev.
(c) 2021 <NAME> <<EMAIL>>
For the full copyright and license information, please view the LICENSE
file that was distributed with this source code.
"""
from typing import List
from fastapi import APIRouter, Depends
from app.api.dependencies.auth import get_current_user
from app.models.domain.user import User
from app.models.schemas.sms import (
SMSOnSend,
SMSSendResult,
SMSHistoryItem,
SMSHistoryItemRemoveStatus,
)
from app.services.magtifun import send_sms, get_sms_history, remove_sms_from_history
router = APIRouter(prefix="/sms", tags=["SMS"])
@router.post("/", response_model=SMSSendResult)
async def send(
sms: SMSOnSend, current_user: User = Depends(get_current_user)
) -> SMSSendResult:
"""
Send SMS.
"""
return send_sms(current_user.key, sms)
@router.get("/", response_model=List[SMSHistoryItem], name="Get sent SMSs")
async def get_all(
current_user: User = Depends(get_current_user),
) -> List[SMSHistoryItem]:
"""
Get sent SMSs.
"""
return get_sms_history(current_user.key)
@router.delete(
"/{sms_id}",
response_model=SMSHistoryItemRemoveStatus,
name="Remove SMS from history",
)
async def delete(
sms_id: int,
current_user: User = Depends(get_current_user),
) -> SMSHistoryItemRemoveStatus:
"""
Remove SMS from history.
"""
return SMSHistoryItemRemoveStatus(
status=remove_sms_from_history(sms_id, current_user.key)
)
| python |
<reponame>FastComments/fastcomments-integrations
{
"name": "fastcomments-integrations-core-test-harness",
"scripts": {
"run-tests-all-projects": "node run-tests-all-projects.js",
"run-tests-single-project": "node run-tests-single-project.js"
},
"dependencies": {
"puppeteer": "^10.1.0",
"commander": "^8.0.0"
}
}
| json |
package data
import (
"context"
"path/filepath"
"testing"
"github.com/evergreen-ci/evergreen"
"github.com/evergreen-ci/evergreen/db"
"github.com/evergreen-ci/evergreen/testutil"
"github.com/stretchr/testify/suite"
)
type cliUpdateConnectorSuite struct {
suite.Suite
ctx Connector
setup func()
degrade func()
cancel func()
}
func TestUpdateConnector(t *testing.T) {
s := &cliUpdateConnectorSuite{
ctx: &DBConnector{},
}
s.setup = func() {
ctx, cancel := context.WithCancel(context.Background())
s.cancel = cancel
s.NoError(evergreen.GetEnvironment().Configure(ctx, filepath.Join(evergreen.FindEvergreenHome(), testutil.TestDir, testutil.TestSettings), nil))
s.NoError(db.ClearCollections(evergreen.ConfigCollection))
}
s.degrade = func() {
flags := evergreen.ServiceFlags{
CLIUpdatesDisabled: true,
}
s.NoError(evergreen.SetServiceFlags(flags))
}
suite.Run(t, s)
}
func TestMockUpdateConnector(t *testing.T) {
s := &cliUpdateConnectorSuite{
ctx: &MockConnector{},
}
s.setup = func() {
}
s.degrade = func() {
s.ctx.(*MockConnector).MockCLIUpdateConnector.degradedModeOn = true
}
suite.Run(t, s)
}
func (s *cliUpdateConnectorSuite) SetupSuite() {
evergreen.ResetEnvironment()
s.setup()
}
func (s *cliUpdateConnectorSuite) TearDownSuite() {
if s.cancel != nil {
s.cancel()
}
}
func (s *cliUpdateConnectorSuite) Test() {
v, err := s.ctx.GetCLIUpdate()
s.Require().NoError(err)
s.Require().NotNil(v)
s.NotEmpty(v.ClientConfig.LatestRevision)
}
func (s *cliUpdateConnectorSuite) TestDegradedMode() {
s.degrade()
v, err := s.ctx.GetCLIUpdate()
s.NoError(err)
s.Require().NotNil(v)
s.True(v.IgnoreUpdate)
s.NotEmpty(v.ClientConfig.LatestRevision)
}
| go |
<gh_stars>1-10
#!/usr/bin/env python
import urllib2
import cv2
import multiprocessing
import Queue #needed separately for the Empty exception
import time, datetime
import sys
from PySide import QtGui, QtCore
import numpy as np
def printnow(string):
print string
sys.stdout.flush()
class SystemCameraVideoProcess(multiprocessing.Process):
def __init__(self, outputqueue):
multiprocessing.Process.__init__(self)
self.outputqueue = outputqueue
self.exit = multiprocessing.Event()
self.sleeping = multiprocessing.Event()
self.camera_id = 0
def run(self):
camera = cv2.VideoCapture(self.camera_id)
while not self.exit.is_set():
if self.sleeping.is_set():
time.sleep(0.1)
continue
ret, cv_img = camera.read()  # read() returns (success_flag, frame)
# resize to 320x240
if (cv_img is not None) and cv_img.data:
cv_img = cv2.resize(cv_img,(320,240),interpolation=cv2.INTER_NEAREST)
vis = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
tstamp = datetime.datetime.now()
try:
self.outputqueue.put((tstamp, vis), False)
except Queue.Full:
continue
camera.release()
def isAwake(self):
return not self.sleeping.is_set()
def shutdown(self):
self.exit.set()
def sleep(self):
self.sleeping.set()
def wake(self):
self.sleeping.clear()
class SystemCamera0VideoProcess(SystemCameraVideoProcess):
def __init__(self, outputqueue):
SystemCameraVideoProcess.__init__(self, outputqueue)
self.camera_id = 0
class SystemCamera1VideoProcess(SystemCameraVideoProcess):
def __init__(self, outputqueue):
SystemCameraVideoProcess.__init__(self, outputqueue)
self.camera_id = 1
class SystemCamera2VideoProcess(SystemCameraVideoProcess):
def __init__(self, outputqueue):
SystemCameraVideoProcess.__init__(self, outputqueue)
self.camera_id = 2
class SystemCamera3VideoProcess(SystemCameraVideoProcess):
def __init__(self, outputqueue):
SystemCameraVideoProcess.__init__(self, outputqueue)
self.camera_id = 3
class WalkeraVideoProcess(multiprocessing.Process):
def __init__(self, outputqueue):
multiprocessing.Process.__init__(self)
self.outputqueue = outputqueue
self.exit = multiprocessing.Event()
self.sleeping = multiprocessing.Event()
self.password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
self.top_level_url = "http://192.168.10.1:8080"
def run(self):
buffer = ''
self.state = 0
self.password_mgr.add_password(None, self.top_level_url, 'admin', 'admin123')
self.handler = urllib2.HTTPBasicAuthHandler(self.password_mgr)
self.opener = urllib2.build_opener(self.handler)
try:
self.opener.open("http://192.168.10.1:8080/?action=stream")
urllib2.install_opener(self.opener)
self.resp = urllib2.urlopen("http://192.168.10.1:8080/?action=stream")
except Exception, e:
print "exception opening Walkera stream in urllib2:", e
sys.stdout.flush()
self.exit.set()
time.sleep(0.1)
while not self.exit.is_set():
if self.sleeping.is_set():
time.sleep(0.1)
continue
data = self.resp.read(4096) #recv buffer
buffer += data
while buffer.find('\n') != -1: # delim='\n'
line, buffer = buffer.split("\n", 1)
if self.state==0:
if line[0:20] == "--boundarydonotcross":
self.state = 1
elif self.state==1:
# print line.split(":")
self.state = 2
elif self.state==2:
#print line
datalength = int(line.split(":")[1][1:-1])
self.state = 3
#print buffer
elif self.state==3:
self.state = 4
#walkera_timestamp = float(line.split(":")[1][1:-1])
#print "timestamp:", walkera_timestamp
sys.stdout.flush()
else:
while(len(buffer) < datalength):
bytes_remaining = datalength - len(buffer)
data = self.resp.read(bytes_remaining)
buffer += data
self.state = 0
#buffer contains one image
try:
cv_img = cv2.imdecode(np.fromstring(buffer, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
vis = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
except Exception, e:
print Exception, e
sys.stdout.flush()
tstamp = datetime.datetime.now()
try:
self.outputqueue.put((tstamp, vis), False)
except Queue.Full:
continue
def isAwake(self):
return not self.sleeping.is_set()
def shutdown(self):
self.exit.set()
def sleep(self):
self.sleeping.set()
def wake(self):
self.sleeping.clear()
class VideoTestWidget(QtGui.QWidget):
def __init__(self, input_process_class, target_FPS=30.0):
super(VideoTestWidget, self).__init__()
self.process_output_queue = multiprocessing.Queue(maxsize=1)
self.process = input_process_class(self.process_output_queue)
self.process.start()
self.managed_objects = []
layout = QtGui.QGridLayout()
self.pixmap = QtGui.QLabel()
self.pixmap.setMinimumSize(320,240)
self.pixmap.setMaximumSize(320,240)
self.pixmap.setScaledContents(True)
layout.addWidget(self.pixmap, 0, 0)
self.setLayout(layout)
self.draw_images_t = QtCore.QTimer()
self.draw_images_t.timeout.connect(self.drawImage)
self.draw_images_t.start(1000.0/target_FPS)
def drawImage(self):
try:
tstamp, cv_img = self.process_output_queue.get(False)
if len(cv_img.shape) > 2:
height, width, bytesPerComponent = cv_img.shape
bytesPerLine = bytesPerComponent * width;
qimage = QtGui.QImage(cv_img.data, width, height, bytesPerLine, QtGui.QImage.Format_RGB888)
else:
height, width = cv_img.shape
qimage = QtGui.QImage(cv_img.data, width, height, QtGui.QImage.Format_Indexed8)
self.pixmap.setPixmap(QtGui.QPixmap.fromImage(qimage))
except Queue.Empty:
pass
def shutdown(self):
self.process.shutdown()
self.process.terminate()
for o in self.managed_objects:
o.shutdown()
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
'''
SystemCamera1VideoProcess
WalkeraVideoProcess
'''
widget = QtGui.QWidget()
layout = QtGui.QGridLayout()
vid0 = VideoTestWidget(SystemCamera0VideoProcess)
vid1 = VideoTestWidget(SystemCamera1VideoProcess)
layout.addWidget(vid0, 0, 0)
layout.addWidget(vid1, 0, 1)
widget.setLayout(layout)
vid0.managed_objects.append(vid1)
app.aboutToQuit.connect(vid0.shutdown)
widget.show()
sys.exit(app.exec_())
# -*- coding: utf-8 -*-
from django.apps import AppConfig
class UsermanagmentConfig(AppConfig):
name = 'userManagment'
| python |
'''
bam_vs_bed.py - count context that reads map to
======================================================
:Author: <NAME>
:Release: $Id$
:Date: |today|
:Tags: Genomics NGS Intervals BAM BED Counting
Purpose
-------
This script takes as input a :term:`BAM` file from an RNASeq experiment
and a :term:`bed` formatted file. The :term:`bed` formatted file needs
at least four columns. The fourth (name) column is used to group counts.
It counts the number of alignments overlapping in the first input
file and that overlap each feature in the second file. Annotations in the
:term:`bed` file can be overlapping - they are counted independently.
This scripts requires bedtools_ to be installed.
Options
-------
-a, --bam-file / -b, --bed-file
These are the input files. They can also be provided as positional
arguments, with the bam file coming first and the (gzipped or
uncompressed) bed file coming second
-m, --min-overlap
Using this option will only count reads if they overlap with a bed entry
by a certain minimum fraction of the read.
Example
-------
Example::
python bam_vs_bed.py in.bam in.bed.gz
Usage
-----
Type::
cgat bam_vs_bed BAM BED [OPTIONS]
cgat bam_vs_bed --bam-file=BAM --bed-file=BED [OPTIONS]
where BAM is either a bam or bed file and BED is a bed file.
Type::
cgat bam_vs_bed --help
for command line help.
Command line options
--------------------
'''
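The counting strategy the script implements below — group the intersection output per alignment, then tally one count per annotation name — can be sketched with the stdlib alone. The rows and names here are hypothetical stand-ins for `bedtools intersect -wo` output:

```python
import collections
import itertools

# Hypothetical (alignment, annotation) pairs from an intersection,
# already ordered so that rows for one alignment are consecutive.
rows = [
    ("read1", "geneA"),
    ("read1", "geneB"),
    ("read2", "geneA"),
    ("read3", "geneA"),
]

counts = collections.defaultdict(int)
for read, overlaps in itertools.groupby(rows, key=lambda r: r[0]):
    # overlapping annotations are counted independently
    for _, anno in overlaps:
        counts[anno] += 1

print(dict(counts))  # {'geneA': 3, 'geneB': 1}
```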
import sys
import collections
import itertools
import subprocess
import CGAT.Experiment as E
import CGAT.IOTools as IOTools
import pysam
import CGAT.Bed as Bed
def main(argv=None):
"""script main.
parses command line options in sys.argv, unless *argv* is given.
"""
if not argv:
argv = sys.argv
# setup command line parser
parser = E.OptionParser(version="%prog version: $Id$",
usage=globals()["__doc__"])
parser.add_option("-m", "--min-overlap", dest="min_overlap",
type="float",
help="minimum overlap [%default]")
parser.add_option("-a", "--bam-file", dest="filename_bam",
metavar="bam", type="string",
help="bam-file to use (required) [%default]")
parser.add_option("-b", "--bed-file", dest="filename_bed",
metavar="bed", type="string",
help="bed-file to use (required) [%default]")
parser.add_option(
"-s", "--sort-bed", dest="sort_bed",
action="store_true",
help="sort the bed file by chromosomal location before "
"processing. "
"[%default]")
parser.add_option(
"--assume-sorted", dest="sort_bed",
action="store_false",
help="assume that the bed-file is sorted by chromosomal location. "
"[%default]")
parser.add_option(
"--split-intervals", dest="split_intervals",
action="store_true",
help="treat split BAM intervals, for example spliced intervals, "
"as separate intervals. Note that a single alignment might be "
"counted several times as a result. "
"[%default]")
parser.set_defaults(
min_overlap=0.5,
filename_bam=None,
filename_bed=None,
sort_bed=True,
split_intervals=False,
)
# add common options (-h/--help, ...) and parse command line
(options, args) = E.Start(parser, argv=argv)
filename_bam = options.filename_bam
filename_bed = options.filename_bed
if filename_bam is None and filename_bed is None:
if len(args) != 2:
raise ValueError(
"please supply a bam and a bed file or two bed-files.")
filename_bam, filename_bed = args
if filename_bed is None:
raise ValueError("please supply a bed file to compare to.")
if filename_bam is None:
raise ValueError("please supply a bam file to compare with.")
E.info("intersecting the two files")
min_overlap = options.min_overlap
options.stdout.write("category\talignments\n")
# get number of columns of reference bed file
for bed in Bed.iterator(IOTools.openFile(filename_bed)):
ncolumns_bed = bed.columns
break
E.info("assuming %s is bed%i format" % (filename_bed, ncolumns_bed))
if ncolumns_bed < 4:
raise ValueError("please supply a name attribute in the bed file")
# get information about
if filename_bam.endswith(".bam"):
format = "-abam"
samfile = pysam.Samfile(filename_bam, "rb")
total = samfile.mapped
# latest bedtools uses bed12 format when bam is input
ncolumns_bam = 12
# count per read
sort_key = lambda x: x.name
else:
format = "-a"
total = IOTools.getNumLines(filename_bam)
# get bed format
ncolumns_bam = 0
for bed in Bed.iterator(IOTools.openFile(filename_bam)):
ncolumns_bam = bed.columns
break
if ncolumns_bam > 0:
E.info("assuming %s is bed%i format" % (filename_bam, ncolumns_bam))
if ncolumns_bam == 3:
# count per interval
sort_key = lambda x: (x.contig, x.start, x.end)
else:
# count per interval category
sort_key = lambda x: x.name
# use fields for bam/bed file (regions to count with)
data_fields = [
"contig", "start", "end", "name",
"score", "strand", "thickstart", "thickend", "rgb",
"blockcount", "blockstarts", "blockends"][:ncolumns_bam]
# add fields for second bed (regions to count in)
data_fields.extend([
"contig2", "start2", "end2", "name2",
"score2", "strand2", "thickstart2", "thickend2", "rgb2",
"blockcount2", "blockstarts2", "blockends2"][:ncolumns_bed])
# add bases overlap
data_fields.append("bases_overlap")
data = collections.namedtuple("data", data_fields)
options.stdout.write("total\t%i\n" % total)
if total == 0:
E.warn("no data in %s" % filename_bam)
return
# SNS: sorting optional, off by default
if options.sort_bed:
bedcmd = "<( zcat %s | sort -k1,1 -k2,2n)" % filename_bed
else:
bedcmd = filename_bed
if options.split_intervals:
split = "-split"
else:
split = ""
# IMS: newer versions of intersectBed have a very high memory
# requirement unless passed sorted bed files.
statement = """bedtools intersect %(format)s %(filename_bam)s
-b %(bedcmd)s
%(split)s
-sorted -bed -wo -f %(min_overlap)f""" % locals()
E.info("starting counting process: %s" % statement)
proc = E.run(statement,
return_popen=True,
stdout=subprocess.PIPE)
E.info("counting")
counts_per_alignment = collections.defaultdict(int)
take_columns = len(data._fields)
def iter_rows(infile):
    for line in infile:
        if not line.strip():
            continue
        yield data._make(line[:-1].split()[:take_columns])
for read, overlaps in itertools.groupby(iter_rows(proc.stdout), key=sort_key):
annotations = [x.name2 for x in overlaps]
for anno in annotations:
counts_per_alignment[anno] += 1
for key, counts in counts_per_alignment.iteritems():
options.stdout.write("%s\t%i\n" % (key, counts))
# write footer and output benchmark information.
E.Stop()
if __name__ == "__main__":
sys.exit(main(sys.argv))
| python |
// src/Main/Currency/Currency.ts
export type Currencies = "Coins"
export abstract class Currency {
amount: number;
constructor(amount = 0) {
this.amount = amount;
this.updateHTML();
}
spend(amount: number):void {
if (this.amount >= amount) {
this.amount -= amount;
this.updateHTML();
}
}
gain(amount: number):void {
this.amount += amount;
this.updateHTML();
}
set(amount: number):void {
this.amount = amount;
this.updateHTML();
}
abstract updateHTML(): void;
}
{
"name": "opensphere-electron",
"version": "1.0.0",
"description": "OpenSphere is a pluggable GIS web application that supports both 2D and 3D views.",
"productName": "OpenSphere",
"main": "app/src/main.js",
"scripts": {
"guide": "make -C docs clean html",
"guide:auto": "sphinx-autobuild docs docs/_build/html",
"lint": "eslint 'app/src/**/*.js'",
"start": "electron .",
"create-installers": "electron-builder -mwl --x64",
"create-installer:linux": "electron-builder --linux --x64",
"create-installer:mac": "electron-builder --mac --x64",
"create-installer:win": "electron-builder --win --x64",
"postinstall": "electron-builder install-app-deps"
},
"repository": "https://github.com/ngageoint/opensphere-electron",
"keywords": [
"OpenSphere",
"Electron"
],
"license": "Apache-2.0",
"dependencies": {
"bluebird": "^3.7.2",
"config": "^3.3.3",
"electron-is-dev": "^1.2.0",
"electron-log": "^4.3.1",
"electron-updater": "^4.3.5",
"slash": "^3.0.0"
},
"devDependencies": {
"electron": "8.5.5",
"electron-builder": "22.9.1",
"eslint": "^7.26.0",
"eslint-config-google": "^0.14.0"
}
}
| json |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Functional tests for `copra.rest.Client` class.
Without any additional user input, this module will test all of the
unauthenticated methods of the copra.rest.Client.
An API key for the Coinbase Pro sandbox is required to test the authenticated
methods. The key information as well as the ids of a few test accounts are
read in to this module as environment variables by the dotenv module from a
file named .env. The .env file must reside in the same directory as this test
module.
An example .env file named .env.sample is provided. To test the authenticated
methods, fill out the .env.sample file accordingly and rename it to .env.
"""
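The docstring above describes the `.env` file only informally; here is a minimal sketch of the key=value parsing such a file relies on (a simplified stand-in for python-dotenv, with placeholder values — the real keys come from a Coinbase Pro sandbox API key):

```python
# Hypothetical .env content; the variable names mirror the ones this
# module reads, the values are placeholders.
ENV_TEXT = """\
KEY=my_sandbox_key
SECRET=my_sandbox_secret
PASSPHRASE=my_sandbox_passphrase
"""

def parse_env(text):
    """Parse KEY=value lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        env[name.strip()] = value.strip()
    return env

print(parse_env(ENV_TEXT)["KEY"])  # my_sandbox_key
```

`load_dotenv()` additionally exports the parsed pairs into `os.environ`, which is where the `os.getenv()` calls below pick them up.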
import os.path
if os.path.isfile(os.path.join(os.path.dirname(__file__), '.env')):
from dotenv import load_dotenv
load_dotenv()
else:
print("\n** .env file not found. Authenticated methods will be skipped. **\n")
import asyncio
from datetime import datetime, timedelta
import os
import json
import random
import time
from uuid import uuid4
from asynctest import TestCase, skipUnless, expectedFailure
from dateutil import parser
from copra.rest import APIRequestError, Client, SANDBOX_URL
from copra.rest.client import USER_AGENT
KEY = os.getenv('KEY')
SECRET = os.getenv('SECRET')
PASSPHRASE = os.getenv('PASSPHRASE')
TEST_AUTH = True if (KEY and SECRET and PASSPHRASE) else False
TEST_BTC_ACCOUNT = os.getenv('TEST_BTC_ACCOUNT')
TEST_USD_ACCOUNT = os.getenv('TEST_USD_ACCOUNT')
TEST_USD_PAYMENT_METHOD = os.getenv('TEST_USD_PAYMENT_METHOD')
TEST_USD_COINBASE_ACCOUNT = os.getenv('TEST_USD_COINBASE_ACCOUNT')
HTTPBIN = 'http://httpbin.org'
class TestRest(TestCase):
"""Tests for copra.rest.Client"""
def setUp(self):
self.client = Client(self.loop)
if TEST_AUTH:
self.auth_client = Client(self.loop, SANDBOX_URL, auth=True,
key=KEY, secret=SECRET,
passphrase=<PASSWORD>)
def tearDown(self):
self.loop.create_task(self.client.close())
if TEST_AUTH:
self.loop.run_until_complete(self.auth_client.cancel_all(stop=True))
self.loop.create_task(self.auth_client.close())
# try to avoid public rate limit, allow for aiohttp cleanup and
# all outstanding Coinbase actions to complete
self.loop.run_until_complete(asyncio.sleep(1))
async def test_user_agent(self):
async with Client(self.loop, HTTPBIN) as client:
headers, body = await client.get('/user-agent')
self.assertEqual(body['user-agent'], USER_AGENT)
async def test__handle_error(self):
async with Client(self.loop, HTTPBIN) as client:
with self.assertRaises(APIRequestError) as cm:
headers, body = await client.get('/status/404')
async def test_delete(self):
async with Client(self.loop, HTTPBIN) as client:
headers, body = await client.delete('/delete')
self.assertEqual(body['args'], {})
self.assertEqual(body['headers']['User-Agent'], USER_AGENT)
self.assertIsInstance(headers, dict)
self.assertIn('Content-Type', headers)
self.assertIn('Content-Length', headers)
params = {'key1': 'item1', 'key2': 'item2'}
headers, body = await client.delete('/delete', params=params)
self.assertEqual(body['args'], params)
async def test_get(self):
async with Client(self.loop, HTTPBIN) as client:
headers, body = await client.get('/get')
body['args'].pop('no-cache', None)
self.assertEqual(body['args'], {})
self.assertEqual(body['headers']['User-Agent'], USER_AGENT)
self.assertIsInstance(headers, dict)
self.assertIn('Content-Type', headers)
self.assertIn('Content-Length', headers)
params = {'key1': 'item1', 'key2': 'item2'}
headers, body = await client.get('/get', params=params)
self.assertEqual(body['args'], params)
async def test_post(self):
async with Client(self.loop, HTTPBIN) as client:
headers, body = await client.post('/post')
self.assertEqual(body['form'], {})
self.assertEqual(body['headers']['User-Agent'], USER_AGENT)
self.assertIsInstance(headers, dict)
self.assertIn('Content-Type', headers)
self.assertIn('Content-Length', headers)
data = {"key1": "item1", "key2": "item2"}
headers, body = await client.post('/post', data=data)
self.assertEqual(json.loads(body['data']), data)
async def test_products(self):
keys = {'id', 'base_currency', 'quote_currency', 'base_min_size',
'base_max_size', 'quote_increment', 'display_name', 'status',
'margin_enabled', 'status_message', 'min_market_funds',
'max_market_funds', 'post_only', 'limit_only', 'cancel_only'}
# The API sometimes also returns an 'accesible' (sic) key.
products = await self.client.products()
self.assertIsInstance(products, list)
self.assertGreater(len(products), 1)
self.assertIsInstance(products[0], dict)
self.assertGreaterEqual(len(products[0]), len(keys))
self.assertGreaterEqual(products[0].keys(), keys)
async def test_order_book(self):
keys = {'sequence', 'bids', 'asks'}
ob1 = await self.client.order_book('BTC-USD', level=1)
self.assertIsInstance(ob1, dict)
self.assertEqual(ob1.keys(), keys)
self.assertIsInstance(ob1['bids'], list)
self.assertEqual(len(ob1['bids']), 1)
self.assertEqual(len(ob1['bids'][0]), 3)
self.assertIsInstance(ob1['asks'], list)
self.assertEqual(len(ob1['asks']), 1)
self.assertEqual(len(ob1['asks'][0]), 3)
ob2 = await self.client.order_book('BTC-USD', level=2)
self.assertIsInstance(ob2, dict)
self.assertEqual(ob2.keys(), keys)
self.assertIsInstance(ob2['bids'], list)
self.assertEqual(len(ob2['bids']), 50)
self.assertEqual(len(ob2['bids'][0]), 3)
self.assertIsInstance(ob2['asks'], list)
self.assertEqual(len(ob2['asks']), 50)
self.assertEqual(len(ob2['asks'][0]), 3)
ob3 = await self.client.order_book('BTC-USD', level=3)
self.assertIsInstance(ob3, dict)
self.assertEqual(ob3.keys(), keys)
self.assertIsInstance(ob3['bids'], list)
self.assertGreater(len(ob3['bids']), 50)
self.assertEqual(len(ob3['bids'][0]), 3)
self.assertIsInstance(ob3['asks'], list)
self.assertGreater(len(ob3['asks']), 50)
self.assertEqual(len(ob3['asks'][0]), 3)
async def test_ticker(self):
keys = {'trade_id', 'price', 'size', 'bid', 'ask', 'volume', 'time'}
tick = await self.client.ticker('BTC-USD')
self.assertIsInstance(tick, dict)
self.assertEqual(tick.keys(), keys)
async def test_trades(self):
keys = {'time', 'trade_id', 'price', 'size', 'side'}
trades, before, after = await self.client.trades('BTC-USD')
self.assertIsInstance(trades, list)
self.assertIsInstance(trades[0], dict)
self.assertIsInstance(before, str)
self.assertIsInstance(after, str)
self.assertEqual(len(trades), 100)
self.assertEqual(trades[0].keys(), keys)
trades, before, after = await self.client.trades('BTC-USD', 5)
self.assertEqual(len(trades), 5)
trades_after, after_after, before_after = await self.client.trades('BTC-USD', 5, after=after)
self.assertLess(trades_after[0]['trade_id'], trades[-1]['trade_id'])
trades_before, after_before, before_before = await self.client.trades('BTC-USD', 5, before=before)
if trades_before:
self.assertGreater(trades_before[-1]['trade_id'], trades[0]['trade_id'])
else:
self.assertIsNone(after_before)
self.assertIsInstance(after_after, str)
await asyncio.sleep(20)
trades_before, after_before, before_before = await self.client.trades('BTC-USD', 5, before=before)
if trades_before:
self.assertGreater(trades_before[-1]['trade_id'], trades[0]['trade_id'])
else:
self.assertIsNone(after_before)
self.assertIsInstance(after_after, str)
async def test_historic_rates(self):
rates = await self.client.historic_rates('BTC-USD', 900)
self.assertIsInstance(rates, list)
self.assertEqual(len(rates[0]), 6)
self.assertEqual(rates[0][0] - rates[1][0], 900)
end = datetime.utcnow()
start = end - timedelta(days=1)
rates = await self.client.historic_rates('LTC-USD', 3600, start.isoformat(), end.isoformat())
self.assertIsInstance(rates, list)
self.assertEqual(len(rates), 24)
self.assertEqual(len(rates[0]), 6)
self.assertEqual(rates[0][0] - rates[1][0], 3600)
async def test_get_24hour_stats(self):
keys = {'open', 'high', 'low', 'volume', 'last', 'volume_30day'}
stats = await self.client.get_24hour_stats('BTC-USD')
self.assertIsInstance(stats, dict)
self.assertEqual(stats.keys(), keys)
async def test_currencies(self):
keys = {'id', 'name', 'min_size', 'status', 'message', 'details'}
currencies = await self.client.currencies()
self.assertIsInstance(currencies, list)
self.assertGreater(len(currencies), 1)
self.assertIsInstance(currencies[0], dict)
self.assertEqual(currencies[0].keys(), keys)
async def test_server_time(self):
time = await self.client.server_time()
self.assertIsInstance(time, dict)
self.assertIn('iso', time)
self.assertIn('epoch', time)
self.assertIsInstance(time['iso'], str)
self.assertIsInstance(time['epoch'], float)
@skipUnless(TEST_AUTH, "Authentication credentials not provided.")
async def test_accounts(self):
keys = {'id', 'currency', 'balance', 'available', 'hold', 'profile_id'}
accounts = await self.auth_client.accounts()
self.assertIsInstance(accounts, list)
self.assertIsInstance(accounts[0], dict)
self.assertGreaterEqual(accounts[0].keys(), keys)
@skipUnless(TEST_AUTH and TEST_BTC_ACCOUNT, "Auth credentials and test BTC account ID required")
async def test_account(self):
keys = {'id', 'currency', 'balance', 'available', 'hold', 'profile_id'}
account = await self.auth_client.account(TEST_BTC_ACCOUNT)
self.assertIsInstance(account, dict)
self.assertEqual(account.keys(), keys)
self.assertEqual(account['id'], TEST_BTC_ACCOUNT)
self.assertEqual(account['currency'], 'BTC')
@skipUnless(TEST_AUTH and TEST_BTC_ACCOUNT, "Auth credentials and test BTC account ID required")
async def test_account_history(self):
# Assumes market_order works.
orders = []
for i in range(1,6):
size = 0.001 * i
order = await self.auth_client.market_order('buy', 'BTC-USD', size)
orders.append(order)
await asyncio.sleep(0.25)
history, before, after = await self.auth_client.account_history(
TEST_BTC_ACCOUNT, limit=3)
keys = {'amount', 'balance', 'created_at', 'details', 'id', 'type'}
self.assertIsInstance(history, list)
self.assertEqual(len(history), 3)
self.assertEqual(history[0].keys(), keys)
self.assertEqual(history[0]['type'], 'match')
self.assertEqual(history[0]['details']['order_id'], orders[4]['id'])
self.assertEqual(history[0]['details']['product_id'], 'BTC-USD')
after_history, after_before, after_after = await self.auth_client.account_history(TEST_BTC_ACCOUNT, after=after)
self.assertGreater(history[-1]['id'], after_history[0]['id'])
original_history, _, _ = await self.auth_client.account_history(TEST_BTC_ACCOUNT, before=after_before)
self.assertEqual(original_history, history)
@skipUnless(TEST_AUTH and TEST_BTC_ACCOUNT, "Auth credentials and test BTC account ID required")
async def test_holds(self):
# Assumes cancel, cancel_all and limit_order work
await self.auth_client.cancel_all(stop=True)
holds, _, _ = await self.auth_client.holds(TEST_BTC_ACCOUNT)
offset = len(holds)
orders = []
for i in range(1, 8):
size = .001 * i
price = 10000 + i * 1000
order = await self.auth_client.limit_order('sell', 'BTC-USD', price, size)
orders.append(order)
await asyncio.sleep(.25)
holds, _, _ = await self.auth_client.holds(TEST_BTC_ACCOUNT)
keys = {'amount', 'created_at', 'id', 'ref', 'type'}
self.assertEqual(len(holds), 7 + offset)
self.assertEqual(holds[0].keys(), keys)
self.assertEqual(float(holds[0]['amount']), .007)
self.assertEqual(orders[6]['id'], holds[0]['ref'])
holds, before, after = await self.auth_client.holds(TEST_BTC_ACCOUNT,
limit=5)
self.assertEqual(len(holds), 5)
after_holds, after_before, after_after = await self.auth_client.holds(
TEST_BTC_ACCOUNT, after=after)
self.assertEqual(len(after_holds), 2 + offset)
original_holds, _, _ = await self.auth_client.holds(TEST_BTC_ACCOUNT,
before=after_before, limit=5)
self.assertEqual(original_holds, holds)
for order in orders[4:]:
resp = await self.auth_client.cancel(order['id'])
self.assertEqual(resp[0], order['id'])
holds, _, _ = await self.auth_client.holds(TEST_BTC_ACCOUNT)
total = 0
for hold in holds:
if hold['type'] == 'order':
total += float(hold['amount'])
self.assertAlmostEqual(total, 0.01)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_limit_order(self):
# Assumes cancel works
for side, base_price in (('buy', 1), ('sell', 50000)):
# default time_in_force
price = base_price + (random.randint(1, 9) / 10)
size = random.randint(1, 10) / 1000
order = await self.auth_client.limit_order(side, 'BTC-USD',
price=price, size=size)
await self.auth_client.cancel(order['id'])
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'id', 'post_only', 'price', 'product_id', 'settled', 'side',
'size', 'status', 'stp', 'time_in_force', 'type'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), price)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'GTC')
# client_oid, explicit time_in_force
price = base_price + (random.randint(1, 9) / 10)
size = random.randint(1, 10) / 1000
client_oid = str(uuid4())
order = await self.auth_client.limit_order(side, 'BTC-USD',
price=price, size=size,
time_in_force='GTC',
client_oid=client_oid)
await self.auth_client.cancel(order['id'])
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), price)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'GTC')
# IOC time_in_force
price = base_price + (random.randint(1, 9) / 10)
size = random.randint(1, 10) / 1000
order = await self.auth_client.limit_order(side, 'BTC-USD',
price=price, size=size,
time_in_force='IOC')
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), price)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'IOC')
# FOK time_in_force
price = base_price + (random.randint(1, 9) / 10)
size = random.randint(1, 10) / 1000
order = await self.auth_client.limit_order(side, 'BTC-USD',
price=price, size=size,
time_in_force='FOK')
if 'reject_reason' in order:
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'id', 'post_only', 'price', 'product_id', 'reject_reason',
'settled', 'side', 'size', 'status', 'time_in_force',
'type'}
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), price)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'FOK')
# GTT time_in_force, iterate cancel_after
for ca_str, ca_int in [('min', 60), ('hour', 3600), ('day', 86400)]:
o_time = await self.client.server_time()
o_time = float(o_time['epoch'])
price = base_price + (random.randint(1, 9) / 10)
size = random.randint(1, 10) / 1000
order = await self.auth_client.limit_order(side, 'BTC-USD',
price=price, size=size,
time_in_force='GTT',
cancel_after=ca_str)
await self.auth_client.cancel(order['id'])
keys = {'created_at', 'executed_value', 'expire_time', 'fill_fees',
'filled_size', 'id', 'post_only', 'price', 'product_id', 'settled',
'side', 'size', 'status', 'stp', 'time_in_force', 'type'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), price)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'GTT')
e_time = parser.parse(order['expire_time']).timestamp()
self.assertLessEqual(e_time - o_time - ca_int, 1.0)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_limit_order_stop(self):
# Assumes cancel works
#stop loss
order = await self.auth_client.limit_order('sell', 'BTC-USD', 2.1, .001,
stop='loss', stop_price=2.5)
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'id', 'post_only', 'price', 'product_id', 'settled', 'side',
'size', 'status', 'stp', 'time_in_force', 'type', 'stop',
'stop_price'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), 2.1)
self.assertEqual(float(order['size']), .001)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], 'sell')
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'GTC')
self.assertEqual(order['stop'], 'loss')
self.assertEqual(float(order['stop_price']), 2.5)
#stop entry
order = await self.auth_client.limit_order('buy', 'BTC-USD', 9000, .001,
stop='entry', stop_price=9550)
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'id', 'post_only', 'price', 'product_id', 'settled', 'side',
'size', 'status', 'stp', 'time_in_force', 'type', 'stop',
'stop_price'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['price']), 9000)
self.assertEqual(float(order['size']), .001)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], 'buy')
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'limit')
self.assertEqual(order['time_in_force'], 'GTC')
self.assertEqual(order['stop'], 'entry')
self.assertEqual(float(order['stop_price']), 9550)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_market_order(self):
# Assumes cancel works
for side in ('buy', 'sell'):
# Size
size = random.randint(1, 10) / 1000
order = await self.auth_client.market_order(side, 'BTC-USD', size=size)
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'funds', 'id', 'post_only', 'product_id', 'settled', 'side',
'size', 'status', 'stp', 'type'}
if side == 'sell':
keys.remove('funds')
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['size']), size)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'market')
self.assertEqual(order['post_only'], False)
await asyncio.sleep(.5)
# Funds
funds = 100 + random.randint(1, 10)
order = await self.auth_client.market_order(side, 'BTC-USD', funds=funds)
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'funds', 'id', 'post_only', 'product_id', 'settled', 'side',
'specified_funds', 'status', 'stp', 'type'}
if side == 'sell':
keys.add('size')
self.assertEqual(order.keys(), keys)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(float(order['specified_funds']), funds)
self.assertEqual(order['type'], 'market')
self.assertEqual(order['post_only'], False)
await asyncio.sleep(.5)
#client_oid
client_oid = str(uuid4())
order = await self.auth_client.market_order('sell', 'BTC-USD', funds=100,
client_oid=client_oid, stp='dc')
self.assertEqual(order.keys(), keys)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], side)
self.assertEqual(order['stp'], 'dc')
self.assertEqual(float(order['funds']), 100)
self.assertEqual(order['type'], 'market')
self.assertEqual(order['post_only'], False)
await asyncio.sleep(.5)
# This really shouldn't raise an error, but as of 11/18, the Coinbase
# sandbox won't accept an stp other than dc even though the Coinbase API
# documentation claims otherwise.
with self.assertRaises(APIRequestError):
order = await self.auth_client.market_order('sell', 'BTC-USD',
funds=100, client_oid=client_oid, stp='cb')
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_market_order_stop(self):
# Assumes cancel works
# stop loss
order = await self.auth_client.market_order('sell', 'BTC-USD', .001,
stop='loss', stop_price=2.5)
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'id', 'post_only', 'product_id', 'settled', 'side', 'size',
'status', 'stop', 'stop_price', 'stp', 'type'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['size']), .001)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], 'sell')
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'market')
self.assertEqual(order['post_only'], False)
self.assertEqual(order['stop'], 'loss')
self.assertEqual(float(order['stop_price']), 2.5)
await asyncio.sleep(0.5)
# stop entry
order = await self.auth_client.market_order('buy', 'BTC-USD', .001,
stop='entry', stop_price=10000)
try:
await self.auth_client.cancel(order['id'])
except APIRequestError:
pass
keys = {'created_at', 'executed_value', 'fill_fees', 'filled_size',
'funds', 'id', 'post_only', 'product_id', 'settled', 'side',
'size', 'status', 'stop', 'stop_price', 'stp', 'type'}
self.assertEqual(order.keys(), keys)
self.assertEqual(float(order['size']), .001)
self.assertEqual(order['product_id'], 'BTC-USD')
self.assertEqual(order['side'], 'buy')
self.assertEqual(order['stp'], 'dc')
self.assertEqual(order['type'], 'market')
self.assertEqual(order['post_only'], False)
self.assertEqual(order['stop'], 'entry')
self.assertEqual(float(order['stop_price']), 10000)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_cancel(self):
# Assumes limit_order and market_order work.
l_order = await self.auth_client.limit_order('buy', 'BTC-USD',
price=1, size=1)
m_order = await self.auth_client.market_order('sell', 'BTC-USD', .001)
s_order = await self.auth_client.limit_order('sell', 'BTC-USD', 2, 5,
stop='loss', stop_price=10)
resp = await self.auth_client.cancel(l_order['id'])
self.assertEqual(len(resp), 1)
self.assertEqual(resp[0], l_order['id'])
with self.assertRaises(APIRequestError):
await self.auth_client.cancel(m_order['id'])
resp = await self.auth_client.cancel(s_order['id'])
self.assertEqual(len(resp), 1)
self.assertEqual(resp[0], s_order['id'])
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_cancel_all(self):
# Assumes market_order, limit_order, and orders work
await self.auth_client.cancel_all(stop=True)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 0)
await asyncio.sleep(0.5)
for price in (1, 2, 3):
order = await self.auth_client.limit_order('buy', 'BTC-USD',
price=price, size=1)
await asyncio.sleep(0.5)
for price in (20000, 30000, 40000):
order = await self.auth_client.limit_order('sell', 'LTC-USD',
price=price, size=0.01)
await asyncio.sleep(0.5)
order = await self.auth_client.limit_order('buy', 'ETH-USD', 1, .01)
order = await self.auth_client.market_order('sell', 'LTC-USD', .02,
stop='loss', stop_price=1)
order = await self.auth_client.limit_order('buy', 'LTC-USD', 8000, .01,
stop='entry', stop_price=6500)
order = await self.auth_client.market_order('buy', 'ETH-USD', .03,
stop='entry', stop_price=2000)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 10)
resp = await self.auth_client.cancel_all('BTC-USD')
self.assertEqual(len(resp), 3)
await asyncio.sleep(.5)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 7)
resp = await self.auth_client.cancel_all()
self.assertEqual(len(resp), 4)
await asyncio.sleep(.5)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 3)
resp = await self.auth_client.cancel_all(product_id='LTC-USD', stop=True)
self.assertEqual(len(resp), 2)
await asyncio.sleep(.5)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 1)
resp = await self.auth_client.cancel_all(stop=True)
self.assertEqual(len(resp), 1)
await asyncio.sleep(.5)
orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(orders, [])
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_orders(self):
# Assumes limit_order, market_order, and cancel_all work
await self.auth_client.cancel_all(stop=True)
orders, _, _, = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(orders), 0)
open_ids = []
for i in range(1, 4):
price = 1 + i / 10
size = .001 * i
order = await self.auth_client.limit_order('buy', 'BTC-USD',
price=price, size=size)
open_ids.append(order['id'])
open_orders, _, _ = await self.auth_client.orders('open')
self.assertEqual(len(open_orders), 3)
self.assertEqual(open_orders[0]['id'], open_ids[2])
self.assertEqual(open_orders[1]['id'], open_ids[1])
self.assertEqual(open_orders[2]['id'], open_ids[0])
active_ids = []
for i in range(1, 4):
price = i + 1
stop_price = i
size = .01 * i
order = await self.auth_client.limit_order('sell', 'LTC-USD',
price=price, size=size,
stop='loss', stop_price=stop_price)
active_ids.append(order['id'])
active_orders, _, _ = await self.auth_client.orders('active')
self.assertEqual(len(active_orders), 3)
self.assertEqual(active_orders[0]['id'], active_ids[2])
self.assertEqual(active_orders[1]['id'], active_ids[1])
self.assertEqual(active_orders[2]['id'], active_ids[0])
market_ids = []
for i in range(1, 4):
size = .001 * i
order = await self.auth_client.market_order('buy', 'BTC-USD',
size=size)
market_ids.append(order['id'])
await asyncio.sleep(0.25)
all_orders, _, _, = await self.auth_client.orders('all')
self.assertGreaterEqual(len(all_orders), 9)
self.assertEqual(all_orders[0]['id'], market_ids[2])
self.assertEqual(all_orders[1]['id'], market_ids[1])
self.assertEqual(all_orders[2]['id'], market_ids[0])
self.assertEqual(all_orders[3]['id'], active_ids[2])
oa_orders, _, _ = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(oa_orders), 6)
self.assertEqual(oa_orders[0]['id'], active_ids[2])
self.assertEqual(oa_orders[1]['id'], active_ids[1])
self.assertEqual(oa_orders[2]['id'], active_ids[0])
self.assertEqual(oa_orders[3]['id'], open_ids[2])
self.assertEqual(oa_orders[4]['id'], open_ids[1])
self.assertEqual(oa_orders[5]['id'], open_ids[0])
oa_btc_orders, _, _ = await self.auth_client.orders(['open', 'active'],
'BTC-USD')
self.assertEqual(oa_btc_orders[0]['id'], open_ids[2])
self.assertEqual(oa_btc_orders[1]['id'], open_ids[1])
self.assertEqual(oa_btc_orders[2]['id'], open_ids[0])
orders, before, after = await self.auth_client.orders('all', limit=5)
self.assertEqual(len(orders), 5)
self.assertEqual(orders[0]['id'], market_ids[2])
self.assertEqual(orders[4]['id'], active_ids[1])
after_orders, after_before, after_after = await self.auth_client.orders(
'all', after=after)
self.assertEqual(after_orders[0]['id'], active_ids[0])
original_orders, _, _ = await self.auth_client.orders('all', before=after_before)
self.assertEqual(original_orders, orders)
await self.auth_client.cancel_all(stop=True)
await asyncio.sleep(.5)
oa_orders, _, _, = await self.auth_client.orders(['open', 'active'])
self.assertEqual(len(oa_orders), 0)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_get_order(self):
# Assumes limit_order and market_order work
ids = []
for i in range(1, 4):
price = 1 + i/10
size = .001 * i
order = await self.auth_client.limit_order('buy', 'BTC-USD',
price=price, size=size)
ids.append(order['id'])
for i in range(1, 4):
size = .001 * i
order = await self.auth_client.market_order('sell', 'BTC-USD',
size=size)
ids.append(order['id'])
oid = random.choice(ids)
order = await self.auth_client.get_order(oid)
self.assertEqual(order['id'], oid)
oid = random.choice(ids)
order = await self.auth_client.get_order(oid)
self.assertEqual(order['id'], oid)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_fills(self):
# Assumes market_order works
orders = []
for i in range(1, 5):
btc_size = .001 * i
ltc_size = .01 * i
side = random.choice(['buy', 'sell'])
order = await self.auth_client.market_order(side, 'BTC-USD', size=btc_size)
orders.append(order)
await asyncio.sleep(.25)
order = await self.auth_client.market_order(side, 'LTC-USD', size=ltc_size)
orders.append(order)
await asyncio.sleep(.25)
fills, _, _ = await self.auth_client.fills(product_id='BTC-USD')
keys = {'created_at', 'fee', 'liquidity', 'order_id', 'price',
'product_id', 'profile_id', 'settled', 'side', 'size',
'trade_id', 'usd_volume', 'user_id'}
self.assertGreaterEqual(len(fills), 4)
self.assertEqual(fills[0]['order_id'], orders[6]['id'])
fills, before, after = await self.auth_client.fills(product_id='LTC-USD', limit=3)
self.assertEqual(len(fills), 3)
self.assertEqual(fills[0]['order_id'], orders[7]['id'])
after_fills, after_before, after_after = await self.auth_client.fills(
product_id='LTC-USD', after=after)
self.assertLess(after_fills[0]['trade_id'], fills[-1]['trade_id'])
original_fills, _, _ = await self.auth_client.fills(product_id='LTC-USD',
before=after_before)
self.assertEqual(original_fills, fills)
order = random.choice(orders)
fills, _, _ = await self.auth_client.fills(order_id=order['id'])
self.assertGreaterEqual(len(fills), 1)
total = 0
for fill in fills:
total += float(fill['size'])
self.assertAlmostEqual(total, float(order['size']))
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_payment_methods(self):
keys = {'id', 'type', 'name', 'currency', 'primary_buy', 'primary_sell',
'allow_buy', 'allow_sell', 'allow_deposit', 'allow_withdraw',
'limits'}
methods = await self.auth_client.payment_methods()
self.assertIsInstance(methods, list)
self.assertIsInstance(methods[0], dict)
self.assertGreaterEqual(methods[0].keys(), keys)
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_coinbase_accounts(self):
keys = {'id', 'name', 'balance', 'currency', 'type', 'primary', 'active'}
accounts = await self.auth_client.coinbase_accounts()
self.assertIsInstance(accounts, list)
self.assertIsInstance(accounts[0], dict)
self.assertGreaterEqual(accounts[0].keys(), keys)
@expectedFailure
@skipUnless(TEST_AUTH and TEST_USD_ACCOUNT and TEST_USD_PAYMENT_METHOD,
"Auth credentials, test USD account, and test USD payment method required.")
async def test_deposit_payment_method(self):
# As of 11/25/18 this call returns a 401 error:
# "refresh of oauth token failed"
resp = await self.auth_client.deposit_payment_method(1500, 'USD',
TEST_USD_PAYMENT_METHOD)
keys = {'amount', 'currency', 'id', 'payout_at'}
self.assertIsInstance(resp, dict)
self.assertEqual(resp.keys(), keys)
self.assertEqual(float(resp['amount']), 1500.0)
self.assertEqual(resp['currency'], 'USD')
@skipUnless(TEST_AUTH and TEST_USD_ACCOUNT and TEST_USD_COINBASE_ACCOUNT,
"Auth credentials, test USD account, and test USD Coinbase account required")
async def test_deposit_coinbase(self):
resp = await self.auth_client.deposit_coinbase(150, 'USD',
TEST_USD_COINBASE_ACCOUNT)
keys = {'amount', 'currency', 'id'}
self.assertIsInstance(resp, dict)
self.assertEqual(resp.keys(), keys)
self.assertEqual(resp['currency'], 'USD')
self.assertEqual(float(resp['amount']), 150.0)
@expectedFailure
@skipUnless(TEST_AUTH and TEST_USD_ACCOUNT and TEST_USD_PAYMENT_METHOD,
"Auth credentials, test USD account, and test USD payment method required.")
async def test_withdraw_payment_method(self):
# As of 11/25/18 this call returns a 401 error:
# "refresh of oauth token failed"
resp = await self.auth_client.withdraw_payment_method(1500, 'USD',
TEST_USD_PAYMENT_METHOD)
keys = {'amount', 'currency', 'id', 'payout_at'}
self.assertIsInstance(resp, dict)
self.assertEqual(resp.keys(), keys)
self.assertEqual(float(resp['amount']), 1500.0)
self.assertEqual(resp['currency'], 'USD')
@skipUnless(TEST_AUTH and TEST_USD_ACCOUNT and TEST_USD_COINBASE_ACCOUNT,
"Auth credentials, test USD account, and test USD Coinbase account required")
async def test_withdraw_coinbase(self):
resp = await self.auth_client.withdraw_coinbase(75, 'USD',
TEST_USD_COINBASE_ACCOUNT)
keys = {'amount', 'currency', 'id'}
self.assertIsInstance(resp, dict)
self.assertEqual(resp.keys(), keys)
self.assertEqual(resp['currency'], 'USD')
self.assertEqual(float(resp['amount']), 75.0)
@expectedFailure
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_withdraw_crypto(self):
# As of 11/25/18 this call returns a 401 error:
# "refresh of oauth token failed - The funds were transferred to
# Coinbase for processing, but failed to withdraw to
# 0x5ad5769cd04681FeD900BCE3DDc877B50E83d469. Please manually withdraw
# from Coinbase."
address = "0x5ad5769cd04681FeD900BCE3DDc877B50E83d469"
resp = await self.auth_client.withdraw_crypto(.001, 'LTC', address)
@expectedFailure
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_stablecoin_conversion(self):
# As of 11/25/18 this call returns a 400 error:
# "USDC is not enabled for your account"
resp = await self.auth_client.stablecoin_conversion('USD', 'USDC', 100)
keys = {'amount', 'id', 'from', 'from_account_id', 'to', 'to_account_id'}
self.assertIsInstance(resp, dict)
self.assertEqual(resp.keys(), keys)
self.assertEqual(float(resp['amount']), 100.0)
self.assertEqual(resp['from'], 'USD')
self.assertEqual(resp['to'], 'USDC')
@expectedFailure
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_fees(self):
# As of 10/15/19, the Sandbox server returns a 500 error:
# "Internal server error"
keys = {'maker_fee_rate', 'taker_fee_rate', 'usd_volume'}
fees = await self.auth_client.fees()
self.assertIsInstance(fees, dict)
self.assertEqual(fees.keys(), keys)
@skipUnless(TEST_AUTH and TEST_BTC_ACCOUNT, "Auth credentials and test BTC account ID required")
async def test_reports(self):
# Combines tests for create_report and report_status
orders = []
for i in range(1, 4):
size = .001 * i
side = random.choice(['buy', 'sell'])
order = await self.auth_client.market_order(side, 'BTC-USD', size=size)
orders.append(order)
await asyncio.sleep(.25)
keys = {'id', 'type', 'status'}
end = datetime.utcnow()
start = end - timedelta(days=1)
end = end.isoformat()
start = start.isoformat()
resp1 = await self.auth_client.create_report('account', start, end,
account_id=TEST_BTC_ACCOUNT)
self.assertIsInstance(resp1, dict)
self.assertEqual(resp1.keys(), keys)
self.assertEqual(resp1['type'], 'account')
resp2 = await self.auth_client.create_report('fills', start, end,
product_id='BTC-USD')
self.assertIsInstance(resp2, dict)
self.assertEqual(resp2.keys(), keys)
self.assertEqual(resp2['type'], 'fills')
resp3 = await self.auth_client.create_report('fills', start, end,
product_id='BTC-USD', report_format='csv',
email='<EMAIL>')
self.assertIsInstance(resp3, dict)
self.assertEqual(resp3.keys(), keys)
self.assertEqual(resp3['type'], 'fills')
await asyncio.sleep(10)
status1 = await self.auth_client.report_status(resp1['id'])
keys = {'completed_at', 'created_at', 'expires_at', 'file_url', 'id',
'params', 'status', 'type', 'user_id'}
statuses = {'pending', 'creating', 'ready'}
self.assertIsInstance(status1, dict)
self.assertEqual(status1.keys(), keys)
self.assertEqual(status1['id'], resp1['id'])
self.assertEqual(status1['type'], 'account')
self.assertIn(status1['status'], statuses)
self.assertEqual(status1['params']['start_date'], start)
self.assertEqual(status1['params']['end_date'], end)
self.assertEqual(status1['params']['format'], 'pdf')
self.assertEqual(status1['params']['account_id'], TEST_BTC_ACCOUNT)
status2 = await self.auth_client.report_status(resp2['id'])
self.assertIsInstance(status2, dict)
self.assertEqual(status2.keys(), keys)
self.assertEqual(status2['id'], resp2['id'])
self.assertEqual(status2['type'], 'fills')
self.assertIn(status2['status'], statuses)
self.assertEqual(status2['params']['start_date'], start)
self.assertEqual(status2['params']['end_date'], end)
self.assertEqual(status2['params']['format'], 'pdf')
self.assertEqual(status2['params']['product_id'], 'BTC-USD')
status3 = await self.auth_client.report_status(resp3['id'])
self.assertIsInstance(status3, dict)
self.assertEqual(status3.keys(), keys)
self.assertEqual(status3['id'], resp3['id'])
self.assertEqual(status3['type'], 'fills')
self.assertIn(status3['status'], statuses)
self.assertEqual(status3['params']['start_date'], start)
self.assertEqual(status3['params']['end_date'], end)
self.assertEqual(status3['params']['email'], '<EMAIL>')
self.assertEqual(status3['params']['format'], 'csv')
@skipUnless(TEST_AUTH, "Auth credentials required")
async def test_trailing_volume(self):
tv = await self.auth_client.trailing_volume()
keys = {'product_id', 'volume', 'exchange_volume', 'recorded_at'}
self.assertIsInstance(tv, list)
self.assertIsInstance(tv[0], dict)
self.assertEqual(tv[0].keys(), keys)
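The `before`/`after` cursor behaviour that the pagination assertions above rely on can be sketched with a toy in-memory model. This is an illustration of the cursor semantics only, not the real network-bound `auth_client`; the `paginate` helper and its id-based cursors are hypothetical:

```python
def paginate(items, limit=None, before=None, after=None):
    """Toy cursor pagination: items are newest-first, cursors are item ids."""
    ids = [it["id"] for it in items]
    if after is not None:
        items = items[ids.index(after) + 1:]   # strictly older than the cursor
    elif before is not None:
        items = items[:ids.index(before)]      # strictly newer than the cursor
    if limit is not None:
        items = items[:limit]
    new_before = items[0]["id"] if items else None
    new_after = items[-1]["id"] if items else None
    return items, new_before, new_after

orders = [{"id": i} for i in range(9, 0, -1)]             # newest first: ids 9..1
page, before, after = paginate(orders, limit=5)           # first page: ids 9..5
older, _, _ = paginate(orders, after=after)               # next page: ids 4..1
original, _, _ = paginate(orders, before=older[0]["id"])  # back to ids 9..5
```

Fetching with `after` walks toward older entries, and the `before` cursor of the following page leads back to the newer ones — the same round trip that `test_orders` and `test_fills` assert.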
| python |
161 Written Answers PHALGUNA 24, 1911 (SAKA)
Expansion of these capacities and also of the marketing infrastructure of the dairy cooperatives is under way to cope with the increasing inflow of milk under the programme.
Access to Safe Water and Hygiene in Rural Areas
557. SHRI SHANTILAL PURUSHOTTAMDAS PATEL: Will the Minister of AGRICULTURE be pleased to state:
(a) whether the rural water supply programmes have achieved the desired success; and
(b) if so, the details in this regard and if not, the steps proposed to be taken by the Union Government to solve the problem?
THE DEPUTY PRIME MINISTER AND MINISTER OF AGRICULTURE (SHRI DEVI LAL): (a) Yes, Sir.
(b) Out of 1,61,722 problem villages which were to be covered with safe drinking water facilities during the Seventh Plan, only about 6358 problem villages in the States of Assam, Jammu & Kashmir, Punjab, Haryana, Himachal Pradesh, Rajasthan, Uttar Pradesh, Meghalaya, Mizoram, Nagaland and Maharashtra are likely to spill over to the Eighth Plan. All other problem villages would have safe drinking water facilities by 31.3.1990. Apart from coverage of problem villages, action has been taken to set up desalination plants, defluoridation plants, iron removal plants, Solar Photo Voltaic Pumping Systems, eradication of guinea worm and setting up of stationary and mobile laboratories for the water quality surveillance programme.
Citizenship Rights to Migrants settled in Jammu & Kashmir
558. SHRI JANAK RAJ GUPTA: Will the Minister of HOME AFFAIRS be pleased to state:
(a) whether some people who migrated from West Pakistan during 1947 and settled on the borders of Jammu and Kathua districts of Jammu and Kashmir have not been given citizenship rights so far;
(b) whether the issue has been taken up with the state Government; and
(c) if so, the outcome thereof?
THE MINISTER OF HOME AFFAIRS (SHRI MUFTI MOHAMMAD SAYEED): (a) The persons who had migrated from West Pakistan have not been granted permanent resident certificates of the State of Jammu and Kashmir under the provisions of the Jammu and Kashmir Constitution. They, however, enjoy the right to vote in Parliamentary elections.
(b) and (c) On the issue being taken up with it, the Government of Jammu and Kashmir has informed that, except for the constitutional status as permanent residents of the State, there is no bar on these people running an industry, plying transport, obtaining agricultural loans or setting up self-employment units.
Appointment of Governors
SHRI PARASRAM BHARDWAJ:
SHRI HARISH RAWAT:
Will the Minister of HOME AFFAIRS be pleased to state:
(a) whether in the matter of recent appointments of Governors, the concerned
Chief Ministers were consulted; and
(b) if so, the names of such states?
THE MINISTER OF HOME AFFAIRS (SHRI MUFTI MOHAMMAD SAYEED): (a) Yes, Sir.
(b) Appointments of Governors for the States of Andhra Pradesh, Bihar, Haryana, Himachal Pradesh, Jammu & Kashmir, Kerala, Maharashtra, Madhya Pradesh, Mizoram, Orissa, Rajasthan, Sikkim, Tripura, Uttar Pradesh and West Bengal have been made recently.
Development of Telecommunication Service in Orissa
560. SHRI D. AMAT: Will the Minister of COMMUNICATIONS be pleased to state the details of the programmes pertaining to development of telecommunication services in Orissa during the year 1990-91?
THE MINISTER OF ENERGY AND MINISTER OF CIVIL AVIATION (SHRI ARIF MOHAMMAD KHAN): A statement giving details is given below.
The major development programmes for the development of Telecommunication Services in Orissa during the year 1990-91, subject to the availability of equipment, are as below:
Automatisation of 15 nos. of manual telephone exchanges.
Replacement of 12 nos. of automatic telephone exchanges with modern digital exchanges.
Augmentation of switching capacity of 10 nos. of existing telephone exchanges (increasing the switching capacity by 4600 lines).
Installation of 1 RLU (2000 Lines).
Replacement of Strowger Telex at Bhubaneswar, Rourkela and Cuttack by electronic telex.
Installation of National Telex at 6 places.
Upgradation of Long Distance Transmission media by installation of:
-Cuttack-Bhubaneswar 140 Megabits optical Fibre system.
-Bhubaneswar-Puri 140 Megabits Digital Microwave system.
-Sambalpur-Bargarh 34 Megabits Digital Coaxial system.
| english |
IPL 2022: Julian Wood saw it coming before anyone else and the pioneering power-hitting coach reckons the role of cricketers with brute power will only increase in the rapidly evolving Twenty20 format, forcing the “touch and skill” batters to reinvent their game.
Wood, a former first-class cricketer from England, is in India for his maiden IPL stint with Punjab Kings.
It was an accidental meeting with the head coach of American baseball club Texas Rangers 12 years ago that changed the way Wood looked at the game of cricket.
with ridiculous ease.
The fact that he is now part of the IPL also indicates that teams want their players to hone their big-hitting skills.
“The time is right now where instead of having general batting coaches, you will see more of them having specialised batting coaches like me. Franchisees too have decided that this is the way we need to go forward,” Wood told PTI in an exclusive interaction.
From baseball, Wood learnt how to generate power through the body and how science is at work when a batter makes the right contact with the ball to deposit into the stands.
“For lack of knowledge, former players and coaches feel that power-hitting is just about keeping your hands up and clearing your front leg (to whack it out of the park). It is a lot more than that.”
The Punjab squad is loaded with power-hitters like Liam Livingstone, Shahrukh Khan, Jonny Bairstow and Odean Smith with the more traditional Shikhar Dhawan and Mayank Agarwal expected to open the batting.
The modern game is meant for power but do proper batters have a chance?
“Oh, absolutely. I think the confusion comes to players when they try to hit from a normal batting position. You can’t hit from a batting position and you can’t bat from a hitting position; you must have that body awareness and understanding.
“The guys who generate power through rhythm, timing and balance, the key thing for them is really working on their rhythm, timing and balance. If they do that, they’ll get to the right place at the right time, which is the ball.
“You coach those guys differently than the bigger guys but the key thing is knowing what sort of player they are and adapting and working with them,” said the 53-year-old.
Wood feels during the first 10 overs, the focus should be on collecting fours and the next 10 has to be all about sixes. He also underlined the importance of minimising dot balls as “you can’t just hit sixes every ball”.
“If you have hit five sixes in 20 balls, you’ve done really well. You still got 15 balls to bat normally. I want the batter to be in a position where if the bowler doesn’t nail his skill, you can put him away, but if he does nail those skills, you get your singles and limit the dots which is a massive part of the game.
“You face three dots and then suddenly your whole mannerism changes because there’s pressure on you. You can only bat freely and flowingly when there is little tension in the body. When the tension starts to take over (hitting becomes tougher),” he said.
Wood has worked with the likes of Joe Root, Ben Stokes, Livingstone and even India’s Prithvi Shaw in his school days besides stints in England cricket and BBL.
Going forward, players will have to choose between red ball and white ball cricket and there will be very few who could play across formats, said Wood.
| english |
package ed.dci;
import java.util.ArrayList;
public class SetDemo {
public ArrayList<Integer>[] buckets;
private int tamanoActual = 0;
private double factorIndicador;
SetDemo(int tamano, double factorIndicador){
buckets = new ArrayList[tamano];
for(int i=0;i<tamano;i++){
buckets[i] = new ArrayList<Integer>();
}
this.factorIndicador = factorIndicador;
}
/**
* Hash function for this Set class.
* @param valor - Input value.
* @return - Hash value of the input.
*/
private int hashCode(int valor){
return valor;
}
/**
* Adds a value to the bucket it belongs to according to the hash function.
* @param valor - Value to add.
* @return - True if the value was not present and was added; false otherwise.
*/
public boolean add(int valor){
if (!contiene(valor)){
int i = Math.floorMod(hashCode(valor), buckets.length); // floorMod avoids a negative index for negative values
buckets[i].add(0,valor);
tamanoActual++;
double average = tamanoActual / (double) buckets.length;
if (average > factorIndicador){
reinsertarTodo();
}
return true;
}
return false; // modify if necessary
}
/**
* Uses the hash function to check whether the given number is in the table.
* @param valor - Number to look up.
* @return - True if found; false otherwise.
*/
public boolean contiene(int valor){
int i = Math.floorMod(hashCode(valor), buckets.length); // floorMod avoids a negative index for negative values
return buckets[i].contains(valor); // modify if necessary
}
/**
* Doubles the number of buckets and re-inserts every element to keep lookups efficient.
*/
private void reinsertarTodo(){
ArrayList<Integer>[] oldBuckets = buckets;
buckets = new ArrayList[oldBuckets.length * 2];
for(int i=0;i<(oldBuckets.length*2);i++){
buckets[i] = new ArrayList<Integer>();
}
for (ArrayList<Integer> arrayList : oldBuckets) {
for ( int element : arrayList ) {
int i = Math.floorMod(hashCode(element), buckets.length); // floorMod avoids a negative index for negative values
buckets[i].add(0,element);
}
}
}
}
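The resize rule in `add` — rehash once the average entries per bucket exceed `factorIndicador` — can be watched in isolation with a small standalone sketch. The class name and the capacity/threshold constants below are illustrative and not part of the `ed.dci` package:

```java
// Standalone illustration of the load-factor check used in SetDemo.add:
// when size / bucketCount exceeds the threshold, capacity doubles
// and (in SetDemo) every element is re-inserted via reinsertarTodo().
public class LoadFactorSketch {
    public static void main(String[] args) {
        int capacity = 4;          // initial bucket count (illustrative)
        double threshold = 0.75;   // plays the role of factorIndicador
        int size = 0;
        for (int v = 0; v < 10; v++) {
            size++;
            if ((double) size / capacity > threshold) {
                capacity *= 2;     // SetDemo doubles the bucket array here
                System.out.println("resize to " + capacity + " after insert #" + size);
            }
        }
        System.out.println("final capacity: " + capacity);
    }
}
```

With an initial capacity of 4 and a threshold of 0.75, inserts 4 and 7 trigger the two doublings, leaving 16 buckets for 10 elements — an average occupancy back below the threshold.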
| java |
{
"vorgangId": "137444",
"VORGANG": {
"WAHLPERIODE": "12",
"VORGANGSTYP": "Bericht, Gutachten, Programm",
"TITEL": "Prüfung der Wahlrechtsvorschriften und der Vorschriften über das Wahlprüfungsverfahren (G-SIG: 12000212)",
"INITIATIVE": "Bundesregierung",
"AKTUELLER_STAND": "Nicht abgeschlossen - Einzelheiten siehe Vorgangsablauf",
"SIGNATUR": "",
"GESTA_ORDNUNGSNUMMER": "",
"WICHTIGE_DRUCKSACHE": {
"DRS_HERAUSGEBER": "BT",
"DRS_NUMMER": "11/6435",
"DRS_TYP": "Unterrichtung",
"DRS_LINK": "http://dipbt.bundestag.de:80/dip21/btd/11/064/1106435.pdf"
},
"EU_DOK_NR": "",
"SACHGEBIET": "Bundestag",
"SCHLAGWORT": [
"Briefwahl",
"Bundestag",
"Bundestagswahl",
{
"_fundstelle": "true",
"__cdata": "Wahlprüfung"
}
],
"ABSTRAKT": "Entsprechung des Wunsches nach Erleichterung der Briefwahl für Briefwähler mit Wohnsitz im Ausland (Europawahlgesetz 30.3.1988, S.502; Bundeswahlgesetz 20.12.1988, S.2422), evtl. Überprüfung der Stimmzettel des Wahlbezirkes, für den eine Unregelmäßigkeit behauptet wird; Regelung der Fristen für einen Wahleinspruch oder für eine Wahlprüfungsbeschwerde, Verlängerung der Amtszeit des Wahlprüfungsausschusses bis zur Konstituierung seines Nachfolgers "
},
"VORGANGSABLAUF": {
"VORGANGSPOSITION": [
{
"ZUORDNUNG": "BT",
"URHEBER": "Unterrichtung, Urheber : Bundesregierung ",
"FUNDSTELLE": "13.02.1990 - BT-Drucksache 11/6435",
"FUNDSTELLE_LINK": "http://dipbt.bundestag.de:80/dip21/btd/11/064/1106435.pdf"
},
{
"ZUORDNUNG": "BT",
"URHEBER": "Antrag auf erneute Behandlung, Urheber : Fraktion der CDU/CSU, Fraktion der FDP, Fraktion der SPD ",
"FUNDSTELLE": "07.03.1991 - BT-Drucksache 12/210",
"FUNDSTELLE_LINK": "http://dipbt.bundestag.de:80/dip21/btd/12/002/1200210.pdf",
"ZUWEISUNG": [
{
"AUSSCHUSS_KLARTEXT": "Ausschuss für Wahlprüfung, Immunität und Geschäftsordnung",
"FEDERFUEHRUNG": "federführend"
},
{
"AUSSCHUSS_KLARTEXT": "Innenausschuss"
}
],
"VP_ABSTRAKT": "Antrag auf erneute Überweisung an die Ausschüsse"
},
{
"ZUORDNUNG": "BT",
"URHEBER": "Beratung über Antrag auf erneute Behandlung",
"FUNDSTELLE": "12.03.1991 - BT-Plenarprotokoll 12/13, S. 644B",
"FUNDSTELLE_LINK": "http://dipbt.bundestag.de:80/dip21/btp/12/12013.pdf#P.644",
"BESCHLUSS": {
"BESCHLUSSSEITE": "644B",
"BESCHLUSSTENOR": "einstimmige Annahme Drs 12/210"
}
}
]
}
}
| json |
India must abandon the soft approach and adopt an aggressive policy!
That a court order was needed to appoint a Hindu priest at a Hindu place of worship shows the plight of Hindus in a Hindu-majority country.
The police have taken the madrasa operator Fahimuddin into custody.
The attitude of the agitators clearly shows that the protest was pre-planned and that its purpose was to disrupt the Government and people’s lives.
Every Hindu should realise that Hindu Rashtra can be established not by taking Jal Samadhi, but only through active struggle.
It remains to be seen whether the Supreme Court will take cognisance of Deb’s comments, which reflect such grave disrespect.
// ============================================================================
//
// Copyright (C) 2006-2016 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
// You should have received a copy of the agreement
// along with this program; if not, write to Talend SA
// 9 rue Pages 92150 Suresnes, France
//
// ============================================================================
package org.talend.dataprofiler.core.ui.editor.analysis;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.eclipse.jface.viewers.TableViewer;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.ui.forms.events.ExpansionAdapter;
import org.eclipse.ui.forms.events.ExpansionEvent;
import org.eclipse.ui.forms.events.HyperlinkAdapter;
import org.eclipse.ui.forms.events.HyperlinkEvent;
import org.eclipse.ui.forms.widgets.ExpandableComposite;
import org.eclipse.ui.forms.widgets.ImageHyperlink;
import org.eclipse.ui.forms.widgets.ScrolledForm;
import org.talend.dataprofiler.common.ui.editor.preview.ICustomerDataset;
import org.talend.dataprofiler.core.ImageLib;
import org.talend.dataprofiler.core.i18n.internal.DefaultMessagesImpl;
import org.talend.dataprofiler.core.model.ModelElementIndicator;
import org.talend.dataprofiler.core.model.dynamic.DynamicIndicatorModel;
import org.talend.dataprofiler.core.ui.editor.composite.AnalysisColumnTreeViewer;
import org.talend.dataprofiler.core.ui.editor.preview.CompositeIndicator;
import org.talend.dataprofiler.core.ui.editor.preview.IndicatorUnit;
import org.talend.dataprofiler.core.ui.editor.preview.model.ChartTableFactory;
import org.talend.dataprofiler.core.ui.editor.preview.model.ChartTypeStatesFactory;
import org.talend.dataprofiler.core.ui.editor.preview.model.TableTypeStatesFactory;
import org.talend.dataprofiler.core.ui.editor.preview.model.TableWithData;
import org.talend.dataprofiler.core.ui.editor.preview.model.states.IChartTypeStates;
import org.talend.dataprofiler.core.ui.editor.preview.model.states.pattern.PatternStatisticsState;
import org.talend.dataprofiler.core.ui.editor.preview.model.states.table.ITableTypeStates;
import org.talend.dataprofiler.core.ui.events.DynamicChartEventReceiver;
import org.talend.dataprofiler.core.ui.events.EventEnum;
import org.talend.dataprofiler.core.ui.events.EventManager;
import org.talend.dataprofiler.core.ui.events.IEventReceiver;
import org.talend.dataprofiler.core.ui.pref.EditorPreferencePage;
import org.talend.dataprofiler.core.ui.utils.TOPChartUtils;
import org.talend.dataprofiler.core.ui.utils.WorkbenchUtils;
import org.talend.dataprofiler.core.ui.utils.pagination.UIPagination;
import org.talend.dataquality.analysis.Analysis;
import org.talend.dataquality.analysis.ExecutionLanguage;
import org.talend.dataquality.indicators.Indicator;
import org.talend.dq.analysis.explore.DataExplorer;
import org.talend.dq.helper.RepositoryNodeHelper;
import org.talend.dq.indicators.preview.EIndicatorChartType;
import org.talend.dq.indicators.preview.table.ChartDataEntity;
import org.talend.repository.model.IRepositoryNode;
/**
*
* DOC mzhao ResultPaginationInfo class global comment. Detailed comment
*/
public class ResultPaginationInfo extends IndicatorPaginationInfo {
private ColumnAnalysisDetailsPage masterPage;
// Added TDQ-9272 20140806, only use the Dynamic model for SQL mode.
private boolean isSQLMode = true;
public ResultPaginationInfo(ScrolledForm form, List<? extends ModelElementIndicator> modelElementIndicators,
ColumnAnalysisDetailsPage masterPage, UIPagination uiPagination) {
super(form, modelElementIndicators, uiPagination);
this.masterPage = masterPage;
if (masterPage.getTreeViewer() != null) {
ExecutionLanguage language = ((AnalysisColumnTreeViewer) masterPage.getTreeViewer()).getLanguage();
if (ExecutionLanguage.JAVA.equals(language)) {
isSQLMode = false;
}
}
}
/**
* create CollapseAll Link for column name.
*
* @param composite
* @param label: column name
*/
private void createCollapseAllLink(Composite composite, final String label) {
ImageHyperlink collapseAllImageLink = uiPagination.getToolkit().createImageHyperlink(composite, SWT.NONE);
collapseAllImageLink.setToolTipText(DefaultMessagesImpl.getString("CollapseColumn")); //$NON-NLS-1$
WorkbenchUtils.setHyperlinkImage(collapseAllImageLink, ImageLib.getImage(ImageLib.COLLAPSE_ALL));
collapseAllImageLink.addHyperlinkListener(new HyperlinkAdapter() {
@Override
public void linkActivated(HyperlinkEvent e) {
Control[] expandableComposite = getIndicatorExpandableCompositeList(label);
for (Control control : expandableComposite) {
if (control instanceof ExpandableComposite) {
((ExpandableComposite) control).setExpanded(false);
((ExpandableComposite) control).getParent().pack();
}
}
form.reflow(true);
}
});
}
/**
* create ExpandAll Link for column name.
*
* @param composite
* @param label: column name
*/
private void createExpandAllLink(Composite composite, final String label) {
ImageHyperlink expandAllImageLink = uiPagination.getToolkit().createImageHyperlink(composite, SWT.NONE);
expandAllImageLink.setToolTipText(DefaultMessagesImpl.getString("ExpandColumn")); //$NON-NLS-1$
WorkbenchUtils.setHyperlinkImage(expandAllImageLink, ImageLib.getImage(ImageLib.EXPAND_ALL));
expandAllImageLink.addHyperlinkListener(new HyperlinkAdapter() {
@Override
public void linkActivated(HyperlinkEvent e) {
Composite compositePar = columnCompositeMap.get(label);
if (compositePar.getParent() instanceof ExpandableComposite) {
((ExpandableComposite) compositePar.getParent()).setExpanded(true);
}
Control[] expandableComposite = getIndicatorExpandableCompositeList(label);
for (Control control : expandableComposite) {
if (control instanceof ExpandableComposite) {
((ExpandableComposite) control).setExpanded(true);
((ExpandableComposite) control).getParent().pack();
}
}
form.reflow(true);
}
});
}
/**
* Get all indicator subsections under the given column name.
*
* @param label the column name
* @return the controls of the indicator subsections
*/
protected Control[] getIndicatorExpandableCompositeList(String label) {
Composite composite = columnCompositeMap.get(label);
Control[] children = composite.getChildren();
return children;
}
@Override
protected void render() {
clearDynamicList();
allExpandableCompositeList.clear();
columnCompositeMap.clear();
for (ModelElementIndicator modelElementIndicator : modelElementIndicators) {
ExpandableComposite exComp = uiPagination.getToolkit().createExpandableComposite(
uiPagination.getChartComposite(),
ExpandableComposite.TWISTIE | ExpandableComposite.CLIENT_INDENT | ExpandableComposite.EXPANDED
| ExpandableComposite.LEFT_TEXT_CLIENT_ALIGNMENT);
needDispostWidgets.add(exComp);
allExpandableCompositeList.add(exComp);
// MOD klliu add more information about which table/view the column belongs to.
IRepositoryNode modelElementRepositoryNode = modelElementIndicator.getModelElementRepositoryNode();
IRepositoryNode parentNodeForColumnNode = RepositoryNodeHelper.getParentNodeForColumnNode(modelElementRepositoryNode);
String label = parentNodeForColumnNode.getObject().getLabel();
if (label != null && !label.equals("")) { //$NON-NLS-1$
label = label.concat(".").concat(modelElementIndicator.getElementName());//$NON-NLS-1$
} else {
label = modelElementIndicator.getElementName();
}
// ~
exComp.setText(DefaultMessagesImpl.getString("ColumnAnalysisResultPage.Column", label)); //$NON-NLS-1$
exComp.setLayout(new GridLayout());
exComp.setLayoutData(new GridData(GridData.FILL_BOTH));
// MOD xqliu 2009-06-23 bug 7481
exComp.setExpanded(EditorPreferencePage.isUnfoldingAnalyzedEelementsResultPage());
// ~
// TDQ-11525 msjian : Add "expand all" and "fold all" icon buttons in the "Analysis Results" section
Composite collapseExpandComposite = uiPagination.getToolkit().createComposite(exComp);
GridLayout gdLayout = new GridLayout();
gdLayout.numColumns = 2;
collapseExpandComposite.setLayout(gdLayout);
createCollapseAllLink(collapseExpandComposite, label);
createExpandAllLink(collapseExpandComposite, label);
exComp.setTextClient(collapseExpandComposite);
// TDQ-11525~
Composite comp = uiPagination.getToolkit().createComposite(exComp);
comp.setLayout(new GridLayout());
comp.setLayoutData(new GridData(GridData.FILL_BOTH));
exComp.setClient(comp);
createResultDataComposite(comp, modelElementIndicator);
columnCompositeMap.put(label, comp);
exComp.addExpansionListener(new ExpansionAdapter() {
@Override
public void expansionStateChanged(ExpansionEvent e) {
uiPagination.getChartComposite().layout();
form.reflow(true);
}
});
uiPagination.getChartComposite().layout();
masterPage.registerSection(exComp);
}
}
private void createResultDataComposite(Composite comp, ModelElementIndicator modelElementIndicator) {
if (modelElementIndicator.getIndicators().length != 0) {
Map<EIndicatorChartType, List<IndicatorUnit>> indicatorComposite = CompositeIndicator.getInstance()
.getIndicatorComposite(modelElementIndicator);
for (EIndicatorChartType chartType : indicatorComposite.keySet()) {
List<IndicatorUnit> units = indicatorComposite.get(chartType);
if (!units.isEmpty()) {
if (chartType == EIndicatorChartType.UDI_FREQUENCY) {
for (IndicatorUnit unit : units) {
List<IndicatorUnit> specialUnit = new ArrayList<IndicatorUnit>();
specialUnit.add(unit);
createChart(comp, chartType, specialUnit);
}
} else {
createChart(comp, chartType, units);
}
}
}
}
}
/**
* Create the chart and its data table for the given chart type and indicator units.
*
* @param comp the parent composite
* @param chartType the type of chart to create
* @param units the indicator units to display
*/
private void createChart(Composite comp, EIndicatorChartType chartType, List<IndicatorUnit> units) {
DynamicIndicatorModel dyModel = new DynamicIndicatorModel();
// MOD TDQ-8787 20140618 yyin: to let the chart and table use the same dataset
Object chart = null;
Object dataset = null;
// Added TDQ-8787 20140722 yyin: (when first switching from master to result) if there is a dynamic event for the
// current indicator, use its dataset directly (TDQ-9241)
IEventReceiver event = EventManager.getInstance().findRegisteredEvent(units.get(0).getIndicator(),
EventEnum.DQ_DYMANIC_CHART, 0);
// get the dataset from the event
if (event != null) {
dataset = ((DynamicChartEventReceiver) event).getDataset();
}// ~
// Added TDQ-8787 2014-06-18 yyin: add the current units and dataset into the list
List<Indicator> indicators = null;
dyModel.setChartType(chartType);
this.dynamicList.add(dyModel);
if (EIndicatorChartType.SUMMARY_STATISTICS.equals(chartType)) {
// for the summary indicators, the table shows 2 more indicators than the bar chart
dyModel.setSummaryIndicators(getIndicatorsForTable(units, true));
}
// create UI
ExpandableComposite subComp = uiPagination.getToolkit().createExpandableComposite(comp,
ExpandableComposite.TWISTIE | ExpandableComposite.CLIENT_INDENT | ExpandableComposite.EXPANDED);
subComp.setText(chartType.getLiteral());
subComp.setLayoutData(new GridData(GridData.FILL_BOTH));
// MOD xqliu 2009-06-23 bug 7481
subComp.setExpanded(EditorPreferencePage.isUnfoldingIndicatorsResultPage());
// ~
final Composite composite = uiPagination.getToolkit().createComposite(subComp, SWT.NULL);
composite.setLayout(new GridLayout(2, false));
composite.setLayoutData(new GridData(GridData.FILL_BOTH));
Analysis analysis = masterPage.getAnalysisHandler().getAnalysis();
// create table viewer firstly
ITableTypeStates tableTypeState = TableTypeStatesFactory.getInstance().getTableState(chartType, units);
ChartDataEntity[] dataEntities = tableTypeState.getDataEntity();
TableWithData chartData = new TableWithData(chartType, dataEntities);
TableViewer tableviewer = tableTypeState.getTableForm(composite);
tableviewer.setInput(chartData);
tableviewer.getTable().pack();
dyModel.setTableViewer(tableviewer);
DataExplorer dataExplorer = tableTypeState.getDataExplorer();
ChartTableFactory.addMenuAndTip(tableviewer, dataExplorer, analysis);
if (EIndicatorChartType.TEXT_STATISTICS.equals(chartType) && dataEntities != null && dataEntities.length > 0) {
// only text indicator need
indicators = getIndicators(dataEntities);
} else {
indicators = getIndicators(units);
}
dyModel.setIndicatorList(indicators);
// create chart
try {
if (!EditorPreferencePage.isHideGraphicsForResultPage() && TOPChartUtils.getInstance().isTOPChartInstalled()) {
IChartTypeStates chartTypeState = ChartTypeStatesFactory.getChartState(chartType, units);
boolean isPattern = chartTypeState instanceof PatternStatisticsState;
if (event == null) {
chart = chartTypeState.getChart();
if (chart != null && isSQLMode) {// chart is null in non-SQL mode; get the dataset this way for SQL mode
if (EIndicatorChartType.BENFORD_LAW_STATISTICS.equals(chartType)) {
dataset = TOPChartUtils.getInstance().getDatasetFromChart(chart, 2);
if (dataset == null) {
dataset = TOPChartUtils.getInstance().getDatasetFromChart(chart, 1);
}
dyModel.setSecondDataset(TOPChartUtils.getInstance().getDatasetFromChart(chart, 0));
} else {
dataset = TOPChartUtils.getInstance().getDatasetFromChart(chart, 1);
if (dataset == null) {
dataset = TOPChartUtils.getInstance().getDatasetFromChart(chart, -1);
}
}
}
} else {
chart = chartTypeState.getChart(dataset);
}
dyModel.setDataset(dataset);
if (chart != null) {
if (!isPattern) { // no need to decorate the chart for Pattern (Regex/Sql/UdiMatch)
TOPChartUtils.getInstance().decorateChart(chart, false);
} else {
TOPChartUtils.getInstance().decoratePatternMatching(chart);
}
Object chartComposite = TOPChartUtils.getInstance().createTalendChartComposite(composite, SWT.NONE, chart,
true);
dyModel.setBawParentChartComp(chartComposite);
Map<String, Object> menuMap = createMenuForAllDataEntity((Composite) chartComposite, dataExplorer, analysis,
((ICustomerDataset) chartTypeState.getDataset()).getDataEntities());
// call chart service to create related mouse listener
if (EIndicatorChartType.BENFORD_LAW_STATISTICS.equals(chartType)
|| EIndicatorChartType.FREQUENCE_STATISTICS.equals(chartType)) {
TOPChartUtils.getInstance().addMouseListenerForChart(chartComposite, menuMap, false);
} else {
TOPChartUtils.getInstance().addMouseListenerForChart(chartComposite, menuMap, true);
}
}
}
// TDQ-11886 add these 2 catches to keep working even if some problems are encountered.
} catch (Error e) {
log.error(DefaultMessagesImpl.getString("IndicatorPaginationInfo.FailToCreateChart"), e); //$NON-NLS-1$
} catch (Exception exp) {
log.error(DefaultMessagesImpl.getString("IndicatorPaginationInfo.FailToCreateChart"), exp); //$NON-NLS-1$
}
subComp.setClient(composite);
subComp.addExpansionListener(new ExpansionAdapter() {
@Override
public void expansionStateChanged(ExpansionEvent e) {
form.reflow(true);
}
});
masterPage.registerSection(subComp);
}
/**
* Get the indicators from the data entities, which may have been sorted so that their order differs from the units.
*
* @param dataEntities the (possibly sorted) chart data entities
* @return the indicators in the entities' order
*/
private List<Indicator> getIndicators(ChartDataEntity[] dataEntities) {
List<Indicator> indicators = new ArrayList<Indicator>();
for (ChartDataEntity entity : dataEntities) {
indicators.add(entity.getIndicator());
}
return indicators;
}
}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Using Queues — MathJax v1.1 documentation</title>
<link rel="stylesheet" href="_static/mj.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '',
VERSION: '1.1',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<!--<script type="text/javascript" src="../../MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>-->
<link rel="top" title="MathJax v1.1 documentation" href="index.html" />
<link rel="up" title="Synchronizing your code with MathJax" href="synchronize.html" />
<link rel="next" title="Using Signals" href="signals.html" />
<link rel="prev" title="Using Callbacks" href="callbacks.html" />
</head>
<body>
<div class="related">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="right" >
<a href="signals.html" title="Using Signals"
accesskey="N">next</a> |</li>
<li class="right" >
<a href="callbacks.html" title="Using Callbacks"
accesskey="P">previous</a> |</li>
<li><a href="index.html">MathJax v1.1 documentation</a> »</li>
<li><a href="synchronize.html" accesskey="U">Synchronizing your code with MathJax</a> »</li>
</ul>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body">
<div class="section" id="using-queues">
<span id="id1"></span><h1>Using Queues<a class="headerlink" href="#using-queues" title="Permalink to this headline">¶</a></h1>
<p>The <cite>callback queue</cite> is one of MathJax’s main tools for synchronizing
its actions, both internally, and with external programs, like
javascript code that you may write as part of dynamic web pages.
Because many actions in MathJax (like loading files) operate
asynchornously, MathJax needs a way to coordinate those actions so
that they occur in the right order. The
<cite>MathJax.Callback.Queue</cite> object provides that mechanism.</p>
<p>A <cite>callback queue</cite> is a list of commands that will be performed one at
a time, in order. If the return value of one of the commands is a
<cite>Callback</cite> object, processing is suspended until that callback is
called, and then processing of the commands is resumed. In this way,
if a command starts an asynchronous operation like loading a file, it
can return the callback for that file-load operation and the queue
will wait until the file has loaded before continuing. Thus a queue
can be used to guarantee that commands don’t get performed until other
ones are known to be finished, even if those commands usually operate
asynchronously.</p>
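The suspend-and-resume behavior described above can be sketched outside of MathJax in a few lines of plain JavaScript. Everything here (<tt class="docutils literal"><span class="pre">makeQueue</span></tt>, <tt class="docutils literal"><span class="pre">fakeLoad</span></tt>, the pending token) is a hypothetical illustration of the mechanism, not MathJax's actual <cite>Callback.Queue</cite> implementation:

```javascript
// Minimal sketch (NOT MathJax's implementation) of the queue semantics
// described above: commands run in order, and if a command returns a
// "pending" token, the queue suspends until that token is resumed.
function makeQueue() {
  var commands = [];
  var waiting = false;
  function next() {
    while (!waiting && commands.length) {
      var cmd = commands.shift();
      var result = cmd();
      if (result && result.pending) {           // asynchronous command
        waiting = true;                         // suspend the queue...
        result.onResume = function () {         // ...until resumed
          waiting = false;
          next();
        };
      }
    }
  }
  return { push: function (cmd) { commands.push(cmd); next(); } };
}

// A fake asynchronous operation: returns a pending token and hands the
// caller a function that "completes" the operation later.
function fakeLoad(log, registerResume) {
  var token = { pending: true };
  registerResume(function () { token.onResume(); });
  log.push("load started");
  return token;
}

var log = [];
var resumeLoad = null;
var queue = makeQueue();
queue.push(function () { log.push("f(1)"); });
queue.push(function () { return fakeLoad(log, function (r) { resumeLoad = r; }); });
queue.push(function () { log.push("f(2)"); });   // must wait for the "load"

console.log(JSON.stringify(log)); // ["f(1)","load started"] — f(2) is still waiting
resumeLoad();                     // the "file" finishes loading; the queue resumes
console.log(JSON.stringify(log)); // ["f(1)","load started","f(2)"]
```

Pushing <tt class="docutils literal"><span class="pre">f(2)</span></tt> while the fake load is pending only queues it; it runs exactly when the load signals completion, which is the guarantee the real queue provides.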
<div class="section" id="constructing-queues">
<h2>Constructing Queues<a class="headerlink" href="#constructing-queues" title="Permalink to this headline">¶</a></h2>
<p>A queue is created via the <tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Callback.Queue()</span></tt> command,
which returns a <cite>MathJax.Callback.Queue</cite> object. The queue
itself consists of a series of commands given as callback
specifications (see <a class="reference internal" href="callbacks.html#using-callbacks"><em>Using Callbacks</em></a> for
details on callbacks), which allow you to provide functions (together
with their arguments) to be executed. You can provide the collection
of callback specifications when the queue is created by passing them
as arguments to <tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Callback.Queue()</span></tt>, or you can create an
empty queue to which commands are added later. Once a
<cite>MathJax.Callback.Queue</cite> object is created, you can push
additional callbacks on the end of the queue; if the queue is empty,
the command will be performed immediately, while if the queue is
waiting for another command to complete, the new command will be
queued for later processing.</p>
<p>For example,</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="kd">function</span> <span class="nx">f</span><span class="p">(</span><span class="nx">x</span><span class="p">)</span> <span class="p">{</span><span class="nx">alert</span><span class="p">(</span><span class="nx">x</span><span class="p">)}</span>
<span class="kd">var</span> <span class="nx">queue</span> <span class="o">=</span> <span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">([</span><span class="nx">f</span><span class="p">,</span> <span class="mi">15</span><span class="p">],</span> <span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">10</span><span class="p">],</span> <span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">5</span><span class="p">]);</span>
<span class="nx">queue</span><span class="p">.</span><span class="nx">Push</span><span class="p">([</span><span class="nx">f</span><span class="p">,</span> <span class="mi">0</span><span class="p">]);</span>
</pre></div>
</div>
<p>would create a queue containing three commands, each calling the
function <tt class="docutils literal"><span class="pre">f</span></tt> with a different input, that are performed in order. A
fourth command is then added to the queue, to be performed after the
other three. In this case, the result will be four alerts, the first
with the number 15, the second with 10, the third with 5 and the
fourth with 0. Of course <tt class="docutils literal"><span class="pre">f</span></tt> is not a function that operates
asynchronously, so it would have been easier to just call <tt class="docutils literal"><span class="pre">f</span></tt> four
times directly. The power of the queue comes from calling commands
that could operate asynchronously. For example:</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="kd">function</span> <span class="nx">f</span><span class="p">(</span><span class="nx">x</span><span class="p">)</span> <span class="p">{</span><span class="nx">alert</span><span class="p">(</span><span class="nx">x</span><span class="p">)}</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="p">[</span><span class="s2">"Require"</span><span class="p">,</span> <span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">,</span> <span class="s2">"[MathJax]/extensions/AMSmath.js"</span><span class="p">],</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
<span class="p">);</span>
</pre></div>
</div>
<p>Here, the command <tt class="docutils literal"><span class="pre">MathJax.Ajax.require("extensions/AMSmath.js")</span></tt> is
queued between two calls to <tt class="docutils literal"><span class="pre">f</span></tt>. The first call to <tt class="docutils literal"><span class="pre">f(1)</span></tt> will be
made immediately, then the <tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Ajax.Require()</span></tt> statement will
be performed. Since the <tt class="docutils literal"><span class="pre">Require</span></tt> method loads a file, it operates
asynchronously, and its return value is a <cite>MathJax.Callback</cite>
object that will be called when the file is loaded. The call to
<tt class="docutils literal"><span class="pre">f(2)</span></tt> will not be made until that callback is performed,
effectively synchronizing the second call to <tt class="docutils literal"><span class="pre">f</span></tt> with the completion
of the file loading. This is equivalent to</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="nx">f</span><span class="p">(</span><span class="mi">1</span><span class="p">);</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"[MathJax]/extensions/AMSmath.js"</span><span class="p">,</span> <span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">2</span><span class="p">]);</span>
</pre></div>
</div>
<p>since the <tt class="docutils literal"><span class="pre">Require()</span></tt> command allows you to specify a (single)
callback to be performed on the completion of the file load. Note,
however, that the queue could be used to synchronize several file
loads along with multiple function calls, so is more flexible.</p>
<p>For example,</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span>
<span class="p">[</span><span class="s2">"Require"</span><span class="p">,</span> <span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">,</span> <span class="s2">"[MathJax]/extensions/AMSmath.js"</span><span class="p">],</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="p">[</span><span class="s2">"Require"</span><span class="p">,</span> <span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">,</span> <span class="s2">"[MathJax]/config/local/AMSmathAdditions.js"</span><span class="p">],</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
<span class="p">);</span>
</pre></div>
</div>
<p>would load the AMSmath extension, then call <tt class="docutils literal"><span class="pre">f(1)</span></tt> then load the
local AMSmath modifications, and then call <tt class="docutils literal"><span class="pre">f(2)</span></tt>, with each action
waiting for the previous one to complete before being performed
itself.</p>
</div>
<div class="section" id="callbacks-versus-callback-specifications">
<h2>Callbacks versus Callback Specifications<a class="headerlink" href="#callbacks-versus-callback-specifications" title="Permalink to this headline">¶</a></h2>
<p>If one of the callback specifications is an actual callback object
itself, then the queue will wait for that action to be performed
before proceeding. For example,</p>
<div class="highlight-javascript"><div class="highlight"><pre> <span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"[MathJax]/extensions/AMSmath.js"</span><span class="p">),</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
<span class="p">);</span>
</pre></div>
</div>
<p>starts the loading of the AMSmath extension before the queue is
created, and then creates the queue containing the call to <tt class="docutils literal"><span class="pre">f</span></tt>, the
callback for the file load, and the second call to <tt class="docutils literal"><span class="pre">f</span></tt>. The queue
performs <tt class="docutils literal"><span class="pre">f(1)</span></tt>, waits for the file load callback to be called, and
then calls <tt class="docutils literal"><span class="pre">f(2)</span></tt>. The difference between this and the second
example above is that, in this example the file load is started before
the queue is even created, so the file is potentially loaded and
executed before the call to <tt class="docutils literal"><span class="pre">f(1)</span></tt>, while in the example above, the
file load is guaranteed not to begin until after <tt class="docutils literal"><span class="pre">f(1)</span></tt> is executed.</p>
<p>As a further example, consider</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"[MathJax]/extensions/AMSmath.js"</span><span class="p">),</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"[MathJax]/config/local/AMSmathAdditions.js"</span><span class="p">),</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
<span class="p">);</span>
</pre></div>
</div>
<p>in comparison to the example above that uses <tt class="docutils literal"><span class="pre">["Require",</span>
<span class="pre">MathJax.Ajax,</span> <span class="pre">"[MathJax]/extensions/AMSmath.js"]</span></tt> and <tt class="docutils literal"><span class="pre">["Require",</span>
<span class="pre">MathJax.Ajax,</span> <span class="pre">"[MathJax]/config/local/AMSmathAdditions.js"]</span></tt> instead. In that
example, <tt class="docutils literal"><span class="pre">AMSmath.js</span></tt> is loaded, then <tt class="docutils literal"><span class="pre">f(1)</span></tt> is called, then the
local additions are loaded, then <tt class="docutils literal"><span class="pre">f(2)</span></tt> is called.</p>
<p>Here, however, both file loads are started before the queue is
created, and are operating in parallel (rather than sequentially as in
the earlier example). It is possible for the loading of the local
additions to complete before the AMSmath extension is loaded in this
case, which was guaranteed <strong>not</strong> to happen in the other example.
Note, however, that <tt class="docutils literal"><span class="pre">f(1)</span></tt> is guaranteed not to be performed until
after the AMSmath extensions load, and <tt class="docutils literal"><span class="pre">f(2)</span></tt> will not occur until
after both files are loaded.</p>
<p>In this way, it is possible to start asynchronous loading of several
files simultaneously, and wait until all of them are loaded (in
whatever order) to perform some command. For instance,</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="nx">MathJax</span><span class="p">.</span><span class="nx">Callback</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"file1.js"</span><span class="p">),</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"file2.js"</span><span class="p">),</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"file3.js"</span><span class="p">),</span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Ajax</span><span class="p">.</span><span class="nx">Require</span><span class="p">(</span><span class="s2">"file4.js"</span><span class="p">),</span>
<span class="p">[</span><span class="nx">f</span><span class="p">,</span> <span class="s2">"all done"</span><span class="p">]</span>
<span class="p">);</span>
</pre></div>
</div>
<p>starts four files loading all at once, and waits for all four to
complete before calling <tt class="docutils literal"><span class="pre">f("all</span> <span class="pre">done")</span></tt>. The order in which they
complete is immaterial, and they all are being requested
simultaneously.</p>
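The wait-for-all pattern in the last example reduces to a simple countdown. The sketch below is again a hypothetical illustration (<tt class="docutils literal"><span class="pre">afterAll</span></tt> is not a MathJax API); the real queue achieves the same effect by waiting on each file-load callback in turn:

```javascript
// Hypothetical helper (not a MathJax API): run a final command only after
// all N parallel operations have signalled completion, in any order.
function afterAll(count, done) {
  var remaining = count;
  return function signal() {
    remaining -= 1;
    if (remaining === 0) done();
  };
}

var log = [];
var signal = afterAll(4, function () { log.push("all done"); });

// The four "file loads" complete in an arbitrary order:
[3, 1, 4, 2].forEach(function (file) {
  log.push("file" + file + ".js loaded");
  signal();
});

console.log(log[log.length - 1]); // "all done" — only after all four signals
```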
</div>
<div class="section" id="the-mathjax-processing-queue">
<h2>The MathJax Processing Queue<a class="headerlink" href="#the-mathjax-processing-queue" title="Permalink to this headline">¶</a></h2>
<p>MathJax uses a queue stored as <tt class="docutils literal"><span class="pre">MathJax.Hub.queue</span></tt> to regulate its
own actions so that they operate in the right order even when some
of them include asynchronous operations. You can take advantage of
that queue when you make calls to MathJax methods that need to be
synchronized with the other actions taken by MathJax. It may not
always be apparent, however, which methods fall into that category.</p>
<p>The main source of asynchronous actions in MathJax is the loading of
external files, so any action that may cause a file to be loaded may
act asynchronously. Many important actions do so, including some that
you might not expect; e.g., typesetting mathematics can cause files to
be loaded. This is because some TeX commands, for example, are rare
enough that they are not included in the core TeX input processor, but
instead are defined in extensions that are loaded automatically when
needed. The typesetting of an expression containing one of these TeX
commands can cause the typesetting process to be suspended while the
file is loaded, and then restarted when the extension has become
available.</p>
<p>As a result, any call to <tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Hub.Typeset()</span></tt> (or
<tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Hub.Process()</span></tt>, or <tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Hub.Update()</span></tt>, etc.)
could return long before the mathematics is actually typeset, and the
rest of your code may run before the mathematics is available. If you
have code that relies on the mathematics being visible on screen, you
will need to break that out into a separate operation that is
synchronized with the typesetting via the MathJax queue.</p>
<p>Furthermore, your own typesetting calls may need to wait for file loading
to occur that is already underway, so even if you don’t need to access
the mathematics after it is typeset, you may still need to queue the
typeset command in order to make sure it is properly synchronized with
<em>previous</em> typeset calls. For instance, if an earlier call
started loading an extension and you start another typeset call before
that extension is fully loaded, MathJax’s internal state may be in
flux, and it may not be prepared to handle another typeset operation
yet. This is even more important if you are using other libraries
that may call MathJax, in which case your code may not be aware of the
state that MathJax is in.</p>
<p>For these reasons, it is always best to perform typesetting operations
through the MathJax queue, and the same goes for any other action
that could cause files to load. A good rule of thumb is that, if a
MathJax function includes a callback argument, that function may operate
asynchronously; you should use the MathJax queue to perform it and
any actions that rely on its results.</p>
<p>To place an action in the MathJax queue, use the
<tt class="xref py py-meth docutils literal"><span class="pre">MathJax.Hub.Queue()</span></tt> command. For example</p>
<div class="highlight-javascript"><div class="highlight"><pre><span class="nx">MathJax</span><span class="p">.</span><span class="nx">Hub</span><span class="p">.</span><span class="nx">Queue</span><span class="p">([</span><span class="s2">"Typeset"</span><span class="p">,</span><span class="nx">MathJax</span><span class="p">.</span><span class="nx">Hub</span><span class="p">,</span><span class="s2">"MathDiv"</span><span class="p">]);</span>
</pre></div>
</div>
<p>would queue the command <tt class="docutils literal"><span class="pre">MathJax.Hub.Typeset("MathDiv")</span></tt>, causing
the contents of the DOM element with <cite>id</cite> equal to <tt class="docutils literal"><span class="pre">MathDiv</span></tt> to be
typeset.</p>
<p>One of the uses of the MathJax queue is to allow you to synchronize an
action with the startup process for MathJax. If you want to have a
function performed after MathJax has become completely set up (and
performed its initial typesetting of the page), you can push it onto
the <tt class="docutils literal"><span class="pre">MathJax.Hub.queue</span></tt> so that it won’t be performed until MathJax
finishes everything it has queued when it was loaded. For example,</p>
<div class="highlight-html"><div class="highlight"><pre><span class="nt"><script </span><span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/MathJax/MathJax.js"</span><span class="nt">></script></span>
<span class="nt"><script></span>
<span class="nx">MathJax</span><span class="p">.</span><span class="nx">Hub</span><span class="p">.</span><span class="nx">Queue</span><span class="p">(</span><span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
<span class="c1">// ... your startup commands here ...</span>
<span class="p">});</span>
<span class="nt"></script></span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="sphinxsidebar">
<div class="sphinxsidebarwrapper">
<h3><a href="index.html">Table Of Contents</a></h3>
<ul>
<li><a class="reference internal" href="#">Using Queues</a><ul>
<li><a class="reference internal" href="#constructing-queues">Constructing Queues</a></li>
<li><a class="reference internal" href="#callbacks-versus-callback-specifications">Callbacks versus Callback Specifications</a></li>
<li><a class="reference internal" href="#the-mathjax-processing-queue">The MathJax Processing Queue</a></li>
</ul>
</li>
</ul>
<h4>Previous topic</h4>
<p class="topless"><a href="callbacks.html"
title="previous chapter">Using Callbacks</a></p>
<h4>Next topic</h4>
<p class="topless"><a href="signals.html"
title="next chapter">Using Signals</a></p>
<div id="searchbox" style="display: none">
<h3>Quick search</h3>
<form class="search" action="search.html" method="get">
<input type="text" name="q" size="18" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
<p class="searchtip" style="font-size: 90%">
Enter search terms or a module, class or function name.
</p>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="related">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
>index</a></li>
<li class="right" >
<a href="signals.html" title="Using Signals"
>next</a> |</li>
<li class="right" >
<a href="callbacks.html" title="Using Callbacks"
>previous</a> |</li>
<li><a href="index.html">MathJax v1.1 documentation</a> »</li>
<li><a href="synchronize.html" >Synchronizing your code with MathJax</a> »</li>
</ul>
</div>
<div class="footer">
© Copyright 2011 Design Science.
Created using <a href="http://sphinx.pocoo.org/">Sphinx</a> 1.0.7.
</div>
</body>
</html> | html |
Leaving millions of his fans heartbroken and shocked, Sandalwood Power Star Puneeth Rajkumar breathed his last due to a massive cardiac arrest on Friday morning (Oct 29). Puneeth’s last rites will be performed tomorrow at Sree Kanteerava Studio in Bengaluru.
Meanwhile, unable to accept Puneeth’s untimely demise, 3 of his die-hard fans lost their lives in various parts of Karnataka. A 30-year-old fan named Muniyappa, a resident of Maruru village, Hanur taluk in Chamrajnagar district wept uncontrollably in front of the TV and collapsed all of a sudden due to a cardiac arrest. Muniyappa is survived by his wife and two children.
In another tragic incident, a Puneeth fan named Parashuram Demannanavar, a resident of Shindolli village in Belagavi district, also passed away following a heart attack at 11 pm last night. Parashuram reportedly kept crying while watching the tragic news until his death.
In yet another tragic incident, a young fan named Rahul Gaadivaddara from Athani, Belagavi ended his life by suicide, heartbroken by Puneeth’s demise. Rahul reportedly decorated Puneeth’s photo with flowers before hanging himself.
Meanwhile, thousands of fans and celebrities from across the country are rushing to Kanteerva Stadium to see Puneeth one last time and pay their respects.
Articles that might interest you:
- Has Vaishnavi Chaitanya signed a three-film deal with Dil Raju?
| english |
New Delhi: Veteran India pacer Jhulan Goswami Thursday announced her retirement from T20Is with immediate effect, the BCCI announced in a statement. The 35-year-old Goswami represented India in 68 T20Is, picking up 56 wickets, including a five-wicket haul against Australia in 2012.
“India pacer Jhulan Goswami has decided to call it quits from the shortest format of the game. Goswami thanked the BCCI and her teammates for all the love and support she garnered during her stint with the T20I team and wished them luck going forward,” the statement read.
“BCCI and the entire women’s national team wishes her the best and looks forward to her valuable contributions when she represents India in other two formats,” the statement further added.
“This was something I had been thinking for a long time. I want to concentrate on ODIs now and I feel I was not able to give my best in both formats,” Goswami told this agency over phone.
Goswami, a veteran of 10 Tests and 169 ODIs, is currently the leading wicket-taker in the women’s ODI format and also the first woman cricketer to take 200 ODI wickets.
American athlete Justin Gatlin and Jamaica’s Usain Bolt, the ‘fastest man on earth’, have both progressed to the finals of the 100m event at the IAAF World Championships held at the Bird’s Nest stadium in Beijing, China today. Tyson Gay and Asafa Powell were among the others who qualified.
Bolt had a final time of 9.96 seconds, unable to break the 100m record which is currently held by him – 9.58 seconds, set in Berlin in 2009. He topped the first heats. Gatlin topped his own set of heats, bettering Bolt’s time by 0.19sec at 9.77, and was followed by compatriot Mike Rodgers, who had 9.86sec.
The third heat also saw iconic names – the top two spots were taken by Tyson Gay of the U.S.A. who, with 9.96 seconds, was tied with Bolt, followed by Jamaica’s Powell, a known 100m specialist, who was 0.01s behind, finishing at 9.97.
The United States of America dominated the 100m semis, with 4 of the 8 finalists coming from the country. Apart from icons Gatlin and Gay, Mike Rodgers and Trayvon Brommell also qualified for the finals, to be held later this evening. Canada’s Andre de Grasse and French athlete Jimmy Vicaut are the other two athletes who will round out the list of 8 finalists at the event.
from __future__ import print_function
from hams_admin import HamsConnection, DockerContainerManager
from hams_admin.deployers import python as python_deployer
import json
import requests
from datetime import datetime
import time
import numpy as np
import signal
import sys
def predict(addr, x, batch=False):
url = "http://%s/simple-example/predict" % addr
if batch:
req_json = json.dumps({'input_batch': x})
else:
req_json = json.dumps({'input': list(x)})
headers = {'Content-type': 'application/json'}
start = datetime.now()
r = requests.post(url, headers=headers, data=req_json)
end = datetime.now()
latency = (end - start).total_seconds() * 1000.0
print("'%s', %f ms" % (r.text, latency))
def feature_sum(xs):
return [str(sum(x)) for x in xs]
# Stop Hams on Ctrl-C
def signal_handler(sig, frame):
print("Stopping Hams...")
hams_conn = HamsConnection(DockerContainerManager())
hams_conn.stop_all()
sys.exit(0)
if __name__ == '__main__':
signal.signal(signal.SIGINT, signal_handler)
hams_conn = HamsConnection(DockerContainerManager())
hams_conn.start_hams()
# python_deployer.create_endpoint(hams_conn, "simple-example", "doubles",
# feature_sum)
time.sleep(2)
# For batch inputs set this number > 1
# batch_size = 1
# try:
# while True:
# if batch_size > 1:
# predict(
# hams_conn.get_query_addr(),
# [list(np.random.random(200)) for i in range(batch_size)],
# batch=True)
# else:
# predict(hams_conn.get_query_addr(), np.random.random(200))
# time.sleep(0.2)
# except Exception as e:
# hams_conn.stop_all() | python |
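# For reference, predict() above distinguishes single and batch inputs only
# by the JSON body it posts; a minimal sketch of the two payload shapes
# (the input values here are illustrative only):

```python
import json

# Single input: {'input': [...]}; batch input: {'input_batch': [[...], ...]},
# matching the two branches inside predict() above.
single = json.dumps({'input': [0.1, 0.2]})
batch = json.dumps({'input_batch': [[0.1, 0.2], [0.3, 0.4]]})

print(single)  # {"input": [0.1, 0.2]}
print(batch)   # {"input_batch": [[0.1, 0.2], [0.3, 0.4]]}
```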
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
(function testNonConfigurableProperty() {
function ownKeys(x) { return ["23", "length"]; }
var target = [];
var proxy = new Proxy(target, {ownKeys:ownKeys});
Object.defineProperty(target, "23", {value:true});
assertEquals(["23", "length"], Object.getOwnPropertyNames(proxy));
})();
(function testPreventedExtension() {
function ownKeys(x) { return ["42", "length"]; }
var target = [];
var proxy = new Proxy(target, {ownKeys:ownKeys});
target[42] = true;
Object.preventExtensions(target);
assertEquals(["42", "length"], Object.getOwnPropertyNames(proxy));
})();
| javascript |
import FileManagerComponent from "./FileManager.svelte";
import { setLang } from "./lang";
import type { Options } from "./types";
import config from "./config";
export class FileManager {
private fm: FileManagerComponent | null = null;
static registered = new Map();
private options: Options;
constructor(private element: HTMLElement, options: Partial<Options> = {}) {
this.options = { ...config, ...options };
}
static get observedAttributes() {
return ["hidden", "endpoint"];
}
connectedCallback() {
this.element.style.setProperty("display", "block");
const endpointAttr = this.element.getAttribute("endpoint");
if (endpointAttr) {
      this.options.endpoint = endpointAttr;
}
this.options.readOnly = this.element.hasAttribute("readonly");
if (!this.options.endpoint && !this.options.getFiles) {
throw new Error("You must define an endpoint for this custom element");
}
setLang(document.documentElement.getAttribute("lang") || "en");
this.fm = new FileManagerComponent({
target: this.element,
props: {
hidden: this.element.hidden,
layout: this.element.getAttribute("layout") || "grid",
lazyFolders: this.element.hasAttribute("lazy-folders"),
options: this.options,
},
});
}
attributeChangedCallback(name: string, oldValue: any, newValue: any) {
if (name === "hidden" && this.fm) {
this.fm.$set({ hidden: newValue !== null });
}
if (name === "endpoint") {
this.options.endpoint = newValue;
}
}
disconnectedCallback() {
this?.fm?.$destroy();
}
static register(name = "file-manager", options?: Partial<Options>) {
if (!this.registered.has(name)) {
      // A class cannot be used multiple times to declare a custom element,
      // so we need to create a fresh class for every "register" call
class AnonymousFileManager extends HTMLElement {
private decorated: FileManager;
constructor() {
super();
this.decorated = new FileManager(this, options);
}
static get observedAttributes() {
return FileManager.observedAttributes;
}
connectedCallback() {
return this.decorated.connectedCallback();
}
attributeChangedCallback(name: string, oldValue: any, newValue: any) {
return this.decorated.attributeChangedCallback(
name,
oldValue,
newValue
);
}
}
customElements.define(name, AnonymousFileManager);
this.registered.set(name, true);
}
}
}
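// The comment inside register() notes that a constructor can back only one
// custom-element definition. A minimal, self-contained sketch of that
// constraint (FakeRegistry is a hypothetical stand-in for customElements,
// not the real browser API):

```typescript
// Hypothetical stand-in for CustomElementRegistry: like the real one, it
// rejects reusing the same constructor for a second element name.
class FakeRegistry {
  private byName = new Map<string, Function>();
  private usedCtors = new Set<Function>();

  define(name: string, ctor: Function): void {
    if (this.byName.has(name)) throw new Error(`"${name}" already defined`);
    if (this.usedCtors.has(ctor)) throw new Error("constructor already used");
    this.byName.set(name, ctor);
    this.usedCtors.add(ctor);
  }
}

const registry = new FakeRegistry();

// Mirrors FileManager.register: a fresh anonymous class per call, so the
// same decorator logic can be registered under many element names.
function registerName(name: string): void {
  class AnonymousElement {}
  registry.define(name, AnonymousElement);
}

registerName("file-manager");
registerName("other-manager"); // fine: each call created a brand-new class

// Reusing one class for two names fails, which is why the fresh class exists.
const shared = class {};
registry.define("a-element", shared);
let reuseFailed = false;
try {
  registry.define("b-element", shared);
} catch {
  reuseFailed = true;
}
console.log(reuseFailed); // true
```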
| typescript |