| language | src_encoding | length_bytes | score | int_score | detected_licenses | license_type | text |
|---|---|---|---|---|---|---|---|
Markdown | UTF-8 | 1,336 | 2.78125 | 3 | [] | no_license | # Yireo ExampleAddressFieldComment
This module integrates a new field (`comment`) in various ways into the existing fieldsets of a shipment address. The `comment` field follows the pattern of a Custom Attribute (EAV value).
- Setup procedure
- Shipment Address step in the checkout
- Address form under the Customer Account
### Installation
```
composer require yireo-training/magento2-example-address-field-comment:dev-master
```
### Setup procedure
Through the file `Setup/InstallData.php`, the field `comment` is added to the database and to some forms in the backend. Right after this step, you can already enter and save values for the Custom Attribute `comment` in the backend.
### Address form under the Customer Account
This is admittedly bad practice: while the customer entity can be cleanly extended using a form API, the address form is not easy to extend, because its fields are hard-coded in PHTML. Therefore, a plugin was created (`etc/di.xml`) to inject the new field `comment` (`Block/Address/Edit/Field/Comment.php`) into the right place.
The block class fetches the `comment` value through the Custom Attribute code.
### Shipment Address step in the checkout
The `comment` value is added automatically to the checkout, because it is a Custom Attribute. This method is the recommended approach for simple values.
|
C# | UTF-8 | 2,118 | 3.296875 | 3 | [] | no_license | using System;
using System.Text;
namespace _04.MorseCodeTranslator
{
class Program
{
static void Main(string[] args)
{
var input = Console.ReadLine().Split(); // Morse tokens separated by spaces; "|" marks a word break
StringBuilder output = new StringBuilder();
for (int i = 0; i < input.Length; i++)
{
switch (input[i])
{
case ".-": output.Append('A'); break;
case "-...": output.Append('B'); break;
case "-.-.": output.Append('C'); break;
case "-..": output.Append('D'); break;
case ".": output.Append('E'); break;
case "..-.": output.Append('F'); break;
case "--.": output.Append('G'); break;
case "....": output.Append('H'); break;
case "..": output.Append('I'); break;
case ".---": output.Append('J'); break;
case "-.-": output.Append('K'); break;
case ".-..": output.Append('L'); break;
case "--": output.Append('M'); break;
case "-.": output.Append('N'); break;
case "---": output.Append('O'); break;
case ".--.": output.Append('P'); break;
case "--.-": output.Append('Q'); break;
case ".-.": output.Append('R'); break;
case "...": output.Append('S'); break;
case "-": output.Append('T'); break;
case "..-": output.Append('U'); break;
case "...-": output.Append('V'); break;
case ".--": output.Append('W'); break;
case "-..-": output.Append('X'); break;
case "-.--": output.Append('Y'); break;
case "--..": output.Append('Z'); break;
case "|": output.Append(' '); break;
default: // ignore tokens that are not valid Morse letters
break;
}
}
Console.WriteLine(output);
}
}
}
|
Java | WINDOWS-1252 | 737 | 3.1875 | 3 | [] | no_license | package singleton;
/**
* @author Emerson Pereira
*/
public class TrianguloEquilatero implements FigurasGeometricas{
private int base, altura;
private static TrianguloEquilatero equilatero = null;
private TrianguloEquilatero(){
}
// Lazy initialization; note this is not thread-safe without synchronization.
public static TrianguloEquilatero getTrianguloEqui(){
if (equilatero == null){
equilatero = new TrianguloEquilatero();
}
return equilatero;
}
public void calculaArea(){
System.out.println("The area of the instantiated equilateral triangle: " + (base*altura)/2);
}
public int getBase() {
return base;
}
public void setBase(int base) {
this.base = base;
}
public int getAltura() {
return altura;
}
public void setAltura(int altura) {
this.altura = altura;
}
}
|
Python | UTF-8 | 2,270 | 3.359375 | 3 | [
"BSD-2-Clause"
] | permissive | import numpy as np

# These exercise functions come from a numerical-tours notebook and rely on
# variables defined there (M, M0, n, k, w, pnoisy, perform_median_filtering,
# snr, clamp, imageplot).

def exo1():
    """
    A first way to denoise the image is to apply the local median filter
    implemented in the function |perform_median_filtering| to
    each channel |M[:, :, i]| of the image, to get a denoised image |Mindep| with SNR |pindep|.
    """
    Mindep = np.zeros((n, n, 3))
    for i in range(3):
        Mindep[:, :, i] = perform_median_filtering(M[:, :, i], k)
    pindep = snr(M0, Mindep)
    imageplot(clamp(M), 'Noisy, SNR = ' + str(pnoisy), 1, 2, 1)
    imageplot(clamp(Mindep), 'Denoised, SNR = ' + str(pindep), 1, 2, 2)

def exo2():
    """
    Compute the median |med| of the points in |X|
    using the iteratively reweighted least squares algorithm.
    The computed median |med| should be stored in the result as
    |Mmed[x, y, :]| (you need to reshape |med| so that its size is (1, 1, 3)).
    """
    med = np.mean(X, axis=1)
    niter = 8
    energy = []
    for i in range(niter):
        # compute the distances from med to the points
        dist = np.sqrt(np.sum((X - med[:, None]) ** 2, axis=0))
        # compute the weights, taking care not to divide by 0
        weight = 1.0 / np.maximum(dist, 1e-10)
        weight = weight / np.sum(weight)
        # compute the weighted mean
        med = np.sum(weight[None, :] * X, axis=1)
        energy.append(np.sum(dist))

def exo3():
    """
    Implement the 3D median filter by looping over all the pixels |(x, y)|.
    Display the results.
    """
    Mmed = np.zeros((n, n, 3))
    niter = 15
    for x in range(n):
        # patch row indices, with periodic boundary conditions
        selx = np.mod(np.arange(x - k, x + k + 1), n)
        for y in range(n):
            # extract patch
            sely = np.mod(np.arange(y - k, y + k + 1), n)
            X = np.reshape(M[np.ix_(selx, sely)], (w * w, 3)).T
            # compute the median by iterative reweighting
            med = np.mean(X, axis=1)
            for i in range(niter):
                dist = np.sqrt(np.sum((X - med[:, None]) ** 2, axis=0))
                weight = 1.0 / np.maximum(dist, 1e-10)
                weight = weight / np.sum(weight)
                med = np.sum(weight[None, :] * X, axis=1)
            # store result
            Mmed[x, y, :] = med
    pmed = snr(M0, Mmed)
    imageplot(clamp(Mindep), '1D median, SNR = ' + str(pindep), 1, 2, 1)
    imageplot(clamp(Mmed), '3D median, SNR = ' + str(pmed), 1, 2, 2)
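The inner reweighting loop used above is Weiszfeld's algorithm for the geometric median. As a self-contained illustration on a small 2-D point set (assuming only numpy; `geometric_median` is a name local to this sketch):

```python
import numpy as np

def geometric_median(X, niter=50):
    """Weiszfeld iteration: approximately minimizes the sum of
    Euclidean distances to the points given as columns of X."""
    med = X.mean(axis=1)
    for _ in range(niter):
        dist = np.sqrt(np.sum((X - med[:, None]) ** 2, axis=0))
        weight = 1.0 / np.maximum(dist, 1e-10)  # avoid division by zero
        weight = weight / weight.sum()
        med = X @ weight  # weighted mean of the points
    return med

# Three clustered points and one far outlier: unlike the mean,
# the median stays near the cluster.
pts = np.array([[0.0, 0.0, 0.0, 100.0],
                [0.0, 1.0, 2.0, 0.0]])
print(geometric_median(pts))  # x-coordinate stays close to 0
```

This robustness to outliers is exactly why the median (rather than the mean) is used for denoising in the exercises above.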
|
Java | UTF-8 | 125 | 1.84375 | 2 | [] | no_license | package id.go.bandung.salary.dao;
import id.go.bandung.salary.model.Opd;
public interface OpdDao {
public Opd getOpd();
}
|
Markdown | UTF-8 | 3,252 | 3.40625 | 3 | [] | no_license | # Problem
`AddTwoNumbers`
## code
```rust
// Definition for singly-linked list.
// #[derive(PartialEq, Eq, Clone, Debug)]
// pub struct ListNode {
// pub val: i32,
// pub next: Option<Box<ListNode>>
// }
//
// impl ListNode {
// #[inline]
// fn new(val: i32) -> Self {
// ListNode {
// next: None,
// val
// }
// }
// }
pub fn add_two_numbers_n(l1: Option<Box<ListNode>>, l2: Option<Box<ListNode>>, add: i32) -> Option<Box<ListNode>> {
if l1.is_none() {
if l2.is_none() {
if add != 0 {
return Some(Box::new(ListNode {
val: 1,
next: None,
}))
}
return None;
}
return add_two_numbers_n(l2, l1, add);
}
if l2.is_none() {
if add == 0 {
return l1;
}
    let rl1 = l1.unwrap();
    // `add` is only ever 0 or 1 here, and the 0 case returned above.
    let n_add = add + rl1.val;
    let r_add = n_add % 10;
    let add = n_add / 10;
    let k_node = add_two_numbers_n(rl1.next, None, add);
    return Some(Box::new(ListNode {
        val: r_add,
        next: k_node,
    }));
}
let rl1 = l1.unwrap();
let rl2 = l2.unwrap();
let n_add = add + rl1.val + rl2.val;
let x_add = n_add / 10;
let val = n_add % 10;
    let l_node = add_two_numbers_n(rl1.next, rl2.next, x_add);
    return Some(Box::new(ListNode {
        val,
        next: l_node,
    }))
}
impl Solution {
pub fn add_two_numbers(l1: Option<Box<ListNode>>, l2: Option<Box<ListNode>>) -> Option<Box<ListNode>> {
return add_two_numbers_n(l1, l2, 0);
}
}
#[macro_export]
macro_rules! node_list {
() => { None };
($key: expr) => { Some(Box::new(ListNode{next: None, val: $key})) };
($e: expr, $( $key: expr),+) => { Some(Box::new(ListNode{next: node_list!($($key),*), val: $e})) };
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_add_two_numbers() {
assert_eq!(None, Solution::add_two_numbers(None, None));
assert_eq!(node_list!(2,4,3), Solution::add_two_numbers(None, node_list!(2,4,3)));
assert_eq!(node_list!(4,7,7), Solution::add_two_numbers(node_list!(2,3,4), node_list!(2,4,3)));
assert_eq!(node_list!(4,7,0,1), Solution::add_two_numbers(node_list!(2,3,4), node_list!(2,4,6)));
}
}
```
## Test Results
Success
Details
Runtime: 4 ms, faster than 82.69% of Rust online submissions for Add Two Numbers.
Memory Usage: 2.4 MB, less than 33.33% of Rust online submissions for Add Two Numbers.

## Approach
The sum is implemented recursively over the two linked lists. The logic is simple, but it sets aside the C-style approach of modifying node values in place to solve this problem.
### Complexity
O(n)
### Difficulties
Mainly that the node values cannot be modified in place, which feels like it wastes extra allocations; I am not sure whether there is a better approach.
### Possible optimizations
Modify the list values directly, although this seems hard to do in Rust.
### Category
Linked list
### Similar problems
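Modifying node values in place is indeed awkward under Rust's ownership rules, but the recursion itself can be avoided with an iterative carry loop and a tail pointer. A sketch, repeating the `ListNode` definition so it is self-contained (`add_two_numbers_iter` is a name local to this example):

```rust
#[derive(PartialEq, Eq, Clone, Debug)]
pub struct ListNode {
    pub val: i32,
    pub next: Option<Box<ListNode>>,
}

// Iterative version: builds the result list front to back,
// so very long lists cannot overflow the call stack.
pub fn add_two_numbers_iter(
    mut l1: Option<Box<ListNode>>,
    mut l2: Option<Box<ListNode>>,
) -> Option<Box<ListNode>> {
    // Dummy head simplifies appending; the real result starts at dummy.next.
    let mut dummy = Box::new(ListNode { val: 0, next: None });
    let mut tail = &mut dummy;
    let mut carry = 0;
    while l1.is_some() || l2.is_some() || carry != 0 {
        let mut sum = carry;
        if let Some(node) = l1 {
            sum += node.val;
            l1 = node.next;
        }
        if let Some(node) = l2 {
            sum += node.val;
            l2 = node.next;
        }
        carry = sum / 10;
        tail.next = Some(Box::new(ListNode { val: sum % 10, next: None }));
        tail = tail.next.as_mut().unwrap();
    }
    dummy.next
}
```

The allocation per output node remains, but no extra stack frames are used.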
#### 1) Reverse Nodes in k-Group
#### 2) Partition List
#### 3) Reverse Linked List
|
TypeScript | UTF-8 | 712 | 2.984375 | 3 | [
"MIT"
] | permissive | import { File } from '../domain/file';
export class FileResponse {
private name: string;
private children?: Array<FileResponse>;
constructor(name: string, children?: Array<FileResponse>) {
this.name = name;
this.children = children;
}
public static fromAggregate(file: File): FileResponse {
if (file.getFileIsDirectory().getValue()) {
return new FileResponse(
file.getFileName().getValue(),
file.getFiles().map((f) => this.fromAggregate(f)),
);
} else {
return new FileResponse(file.getFileName().getValue());
}
}
public getName(): string {
return this.name;
}
public getChildren(): FileResponse[] | undefined {
return this.children;
}
}
|
Markdown | UTF-8 | 7,594 | 2.578125 | 3 | [
"MIT"
] | permissive | [](https://supertokens.io)
[](https://github.com/supertokens/auth-node-mysql-ref-jwt/blob/master/LICENSE)
<a href="https://supertokens.io/discord">
<img src="https://img.shields.io/discord/603466164219281420.svg?logo=discord"
alt="chat on Discord"></a>
[](https://join.slack.com/t/webmobilesecurity/shared_invite/enQtODM4MDM2MTQ1MDYyLTFiNmNhYzRlNGNjODhkNjc5MDRlYTBmZTBiNjFhOTFhYjI1MTc3ZWI2ZjY3Y2M3ZjY1MGJhZmRiNDFjNDNjOTM)
# Browser Tabs Lock
Using this package, you can easily get lock functionality across tabs on all modern browsers.
**This library was originally designed to be used as a part of our project - SuperTokens - an open source auth solution for web and mobile apps. Support us by checking it out [here](https://supertokens.io).**
We are also offering free, one-to-one implementation support:
- Schedule a short call with us on https://calendly.com/supertokens-rishabh.
## Some things to note:
- This is not a reentrant lock, so please do not attempt to re-acquire a lock using the same lock instance and key without first releasing it.
- Theoretically speaking, it is impossible to have foolproof locking built on top of javascript in the browser. One can only make it so that in all practical scenarios, it emulates locking.
## Installation using Node:
```bash
npm i --save browser-tabs-lock
```
### Usage in an async function:
```js
import SuperTokensLock from "browser-tabs-lock";
let superTokensLock = new SuperTokensLock()
async function lockingIsFun() {
if (await superTokensLock.acquireLock("hello", 5000)) {
// lock has been acquired... we can do anything we want now.
// ...
await superTokensLock.releaseLock("hello");
} else {
// failed to acquire lock after trying for 5 seconds.
}
}
```
### Usage with callbacks:
```js
import SuperTokensLock from "browser-tabs-lock";
let superTokensLock = new SuperTokensLock()
superTokensLock.acquireLock("hello", 5000).then((success) => {
if (success) {
// lock has been acquired... we can do anything we want now.
// ...
superTokensLock.releaseLock("hello").then(() => {
// lock released, continue
});
} else {
// failed to acquire lock after trying for 5 seconds.
}
});
```
## Installation using plain JS
As of version 1.2.0 of browser-tabs-lock, the package can also be used in a plain JavaScript script.
### Add the script
```html
<script
type="text/javascript"
src="https://cdn.jsdelivr.net/gh/supertokens/browser-tabs-lock@1.2/bundle/bundle.js">
</script>
```
### Creating and using the lock
```js
let lock = new supertokenslock.getNewInstance();
lock.acquireLock("hello")
.then(success => {
if (success) {
// lock has been acquired... we can do anything we want now.
...
lock.releaseLock("hello").then(() => {
// lock released, continue
});
} else {
// failed to acquire lock after trying for 5 seconds.
}
});
```
Also note, that if your web app only needs to work on google chrome, you can use the [Web Locks API](https://developer.mozilla.org/en-US/docs/Web/API/Lock) instead. This probably has true locking!
## Migrating from 1.1x to 1.2x
In some cases, version 1.1x did not entirely ensure mutual exclusion. To explain the problem:
Let's say you create two lock instances, L1 and L2. L1 acquires a lock with key K1 and performs some action that takes 20 seconds to finish.
Immediately after L1 acquires the lock, L2 tries to acquire a lock with the same key (K1). Normally, L2 would not be able to acquire the lock until L1 releases it (in this case after 20 seconds) or the tab that uses L1 is closed abruptly. However, it was seen that sometimes L2 was able to acquire the lock automatically after 10 seconds (note that L1 had still not released the lock) - thereby breaking mutual exclusion.
This bug has been fixed and released in version 1.2x of browser-tabs-lock. We highly recommend users to upgrade to the 1.2x versions.
After upgrading, the only change that requires attention is that `lock.releaseLock` is now an asynchronous function and needs to be handled accordingly.
#### Using async/await
Simply change calls to releaseLock from
```js
lock.releaseLock("hello");
```
to
```js
await lock.releaseLock("hello");
```
#### Using callbacks
Simply change calls to releaseLock from
```js
lock.releaseLock("hello");
```
to
```js
lock.releaseLock("hello")
.then(() => {
// continue
});
```
## Test coverage
In an effort to make this package as production ready as possible we use puppeteer to run browser-tabs-lock in a headless browser environment and perform the following action:
- Create 15 tabs in the browser. Each tab tries to acquire a lock with the same key (K1) and then updates a counter in local storage (C1) as well as a counter local to that tab (Ct). The local counter (Ct) serves as a way to know how many times that particular tab has updated the local storage counter (C1). This process repeats recursively for 20 seconds. After 20 seconds we signal all tabs to stop, and after all of them have stopped, we calculate the sum of the local counter values (sum(Ct...Cn)) across the tabs and compare it with the value in local storage (C1). If the two values are the same and the value in local storage matches an estimated value, then we know that all tabs use locking in a proper manner.
- Create a tab (T1) which acquires a lock with a key (K1). We then create another tab (T2) that tries to acquire a lock with the same key (K1), and after waiting for some time we verify that the second tab (T2) does not acquire the lock. We close both tabs; note, however, that tab 1 (T1) still had not released the lock. We create another tab (T3) that tries to acquire a lock with the same key (K1) and verify that this tab is able to acquire the lock. This way we can be sure that locks are released when the tab that holds them (in this case T1) is closed abruptly.
- Create a tab that creates two separate instances of the lock object, I1 and I2. I1 acquires a lock with a key (K1); immediately after, I2 tries to acquire a lock with the same key (K1). We verify that I2 cannot acquire the lock even after some time has passed. I1 then releases the lock, and immediately after, I2 tries to acquire it; we verify that I2 can now acquire the lock.
- Create a tab that holds 15 separate lock instances. Each instance tries to acquire the lock using the same key (K1), then updates a counter (C1) in local storage as well as a local counter specific to that instance (Ci). After incrementing the counters, the instance recursively repeats this process. We wait for 20 seconds, after which we signal each instance to stop and wait for all of them to stop. We then get the counter value from storage (C1), add up all the local counter values (sum(Ci...Cn)), and compare the two values. We verify that the values are the same and that the value in local storage (C1) matches an estimated value. This way we can be sure that, within a single tab, multiple lock instances using the same key work correctly.
## Support, questions and bugs
For now, we are most reachable via team@supertokens.io and via the GitHub issues feature
## Authors
Created with :heart: by the folks at [SuperTokens](https://supertokens.io). We are a startup passionate about security and solving software challenges in a way that's helpful for everyone! Please feel free to give us feedback at team@supertokens.io, until our website is ready :grinning:
|
Markdown | UTF-8 | 1,613 | 2.765625 | 3 | [
"MIT"
] | permissive | ---
layout: nodes.liquid
section: smartContract
date: Last Modified
title: 'Introduction to Chainlink Keepers (Beta)'
permalink: 'docs/chainlink-keepers/introduction/'
whatsnext:
{
'Making Keeper Compatible Contracts': '/docs/chainlink-keepers/compatible-contracts/',
}
---

{% include keepers-beta %}
A major limitation of smart contracts is that they can't trigger or initiate their own functions at arbitrary times or under arbitrary conditions. State change will only occur when a transaction is initiated by another account (such as a user, oracle, or contract).
Chainlink Keepers allow you to register Upkeep for your contract. When the conditions you specify are met, the Chainlink Keeper Network will execute a method on your contract. [Learn how the network works.](../overview)
An example Decentralized Finance (DeFi) use case would be to detect when a debt position in a smart contract is insufficiently collateralized. The contract could be triggered to liquidate the position automatically. Capabilities like this generally can't be automated on-chain and must be handled by an off-chain service due to smart contracts' inability to self-execute.
The Chainlink Keeper Network is a decentralized solution where independent Keeper nodes are incentivized to check and perform Upkeep correctly.
To use the Chainlink Keeper Network, you'll need to:
1. Write a compatible contract, or make an existing contract compatible
1. Register Upkeep for your contract on the Chainlink Keeper Network
1. Fund your Upkeep with LINK
|
Markdown | UTF-8 | 4,494 | 2.75 | 3 | [] | no_license | ---
title: Project Setup
tags:
- project setup
- new folder
- rocket league mods
---
# Project Setup
## Overview
UDK is from a different era of software project development, so it doesn’t have a lot of the nice features (or the look) that we have grown accustomed to. It can be a bit annoying to navigate and understand, so the next few sections will be a walkthrough of how you should set it up.
## Folder Setup
I use Windows 10 and the Steam version of Rocket League, so please keep that in mind through the entirety of this guide. Because Rocket League is no longer officially supported on Mac or Linux as of 2019, and installing UDK is also unlikely to work, you're almost certainly going to have to be on Windows also.
* My Rocket League install location is `C:\Games\SteamApps\common\rocketleague`
* All Rocket League assets live inside `\TAGame\CookedPCConsole`
* Several of my modded maps live inside `\mods`, a folder I created
`C:\Games\SteamApps\common\rocketleague\TAGame\CookedPCConsole` will be referred to as `{CookedPCConsole}` for simplicity.
* As mentioned in the [UDK install process](03_installing), everything will live inside `{UDK Folder}`
* **Each individual custom map should live inside a folder within `{UDK Folder}\UDKGame\Content\Maps\`**
* In addition to these folders, I highly highly recommend making two Windows File Explorer shortcuts, or bookmarking these locations
* One that points to `{CookedPCConsole}`
* One that points to the Steam Workshop folder for Rocket League. This may be found in the Steam install location (the first half of `{CookedPCConsole}`), but instead of `\rocketleague\` it will be `\workshop\content\252950\`. Each downloaded map has a custom identifier in here, and it can be incredibly valuable to open them up in UDK and see how things are made.
* Keep these within the `\Maps\` folder
* Within `C:\UDK\` I also have a folder named `Assets`. This is where I keep Blender projects, exported meshes, custom textures, screenshots, and whatever else (each within a categorized folder) that I might want to have easy access to.
* Also within `C:\UDK\` I keep a folder called `Workshop`, within which I have a folder for each map. See the section titled Publishing a Map for more information.
* All other programs and utilities, like [UE Viewer](../resources/downloads), [Bakkesmod](https://bakkesmod.com), and [miscellaneous downloads](../resources/downloads), live in their own folder outside of the UDK install. You can keep them here too if that makes more sense.
::: tip
I highly recommend creating folder shortcuts between all of these places, because it’s easy to get lost. It also gets tiresome navigating folders all day.
:::
## Batch Script For Quickly Testing Maps (BSFQTM) <Badge text="important" type="tip"/>
Create a new text file, then rename it something like UtopiaOverwrite.bat. To use this script, simply drag your map file onto it, and it will overwrite the Rocket Labs Utopia Retro (donut) map, which is not used in any online multiplayer playlist.
::: warning
Make a backup of this map (Labs_Utopia_P.upk) somewhere safe.
:::
```sh
@echo off
echo "%~1"
echo F|xcopy /y "%~1" "{CookedPCConsole}\Labs_Utopia_P.upk"
```
CookedPCConsole is the folder containing all of Rocket League’s assets, within the install folder:
**C:\Program Files (x86)\Steam\steamapps\common\rocketleague\TAGame\CookedPCConsole**
Meaning that the script for me is:
```sh
@echo off
echo "%~1"
echo F|xcopy /y "%~1" "C:\Program Files (x86)\Steam\steamapps\common\rocketleague\TAGame\CookedPCConsole\Labs_Utopia_P.upk"
```
For ease of use, I copy this script into the folder of each of my in-progress maps.
## Starting a New Project
When you run UDK Editor, you will be faced with a welcome splash screen and a few options.

If you want to have an animated skysphere around your world, use one of the Lighting templates. If you just want an entirely clean slate, go with the Blank Map option.
After making the project, the first thing you are going to want to do is save your new project with a better name than Untitled-3. I recommend putting it in a dedicated folder such as:
`{UDK Folder}\UDKGame\Content\Maps\MyNewMap`
It is no problem at all to change the name of a project after the fact, so don’t feel like you are locked in to a folder name or a project name. |
Python | UTF-8 | 15,675 | 2.6875 | 3 | [] | no_license | from .gamestate import GameState
from .player import Player
from .constants import LEAGUE, PLAYER_COUNT, MOVE_PATTERN, MOVETRAIN_PATTERN, \
BUILD_PATTERN, MAX_LEVEL, UNIT_COST, BUILDING_TYPE, MAX_TURNS
from .action import Action, ACTIONTYPE
from .building import Building
from .unit import Unit
import random
import time
import math
class Engine:
def __init__(self, league: LEAGUE = LEAGUE.WOOD3, sleep = 0, \
debug = False, strict = False, seed = None, auto_restart=False,\
silence=False, idle_limit = 25):
self.__players = []
self.current_player: Player = None
self.__state: GameState = None
self.__league: LEAGUE = None
self.__started = False
self.__turns = 0
self.__actions = []
self.__move_count = 0
self.__gameover = False
self.__sleep = sleep
self.__debug = debug
self.__strict = strict
self.__error_log = []
self.seed = seed if seed else random.randint(0, 2 ** 31)
self.set_league(league)
self.__auto_restart = auto_restart
self.__silence = silence
self.__idle_limit = idle_limit
self.__result = ""
def restart(self, new_seed=True):
if self.__started:
self.print("Force restarting")
[p.reset() for p in self.__players]
self.__started = True
self.__gameover = False
self.__turns = 0
self.__actions.clear()
self.__move_count = 0
self.__idle_count = [0, 0]
random.seed(time.thread_time_ns())
if not self.__state:
self.__state = GameState(random.randint(0, 2 ** 31) if new_seed else self.seed \
, self.__league)
self.__state.generate_map(self.__league)
else:
self.__state.reset()
self.__state.create_hq(PLAYER_COUNT)
self.current_player = random.choice(self.__players)
self.send_init_messages()
while not self.__gameover:
self.gameloop()
def get_map(self):
return self.__state.get_map()
def set_league(self, league: LEAGUE):
if self.__started:
return self.print("Can't change league when the game already started")
self.__league = league
[p.set_league(self.__league) for p in self.__players]
def add_player(self, player_class, *args, **kwargs):
if len(self.__players) >= PLAYER_COUNT:
return self.print(f'Current player capacity is {PLAYER_COUNT}')
new_player = player_class(len(self.__players), *args, **kwargs)
new_player.set_league(self.__league)
self.__players.append(new_player)
return new_player
def set_player(self, players):
assert len(players) == 2
assert isinstance(players[0], Player)
assert isinstance(players[1], Player)
self.__players = players
def get_players(self):
return self.__players
def start(self):
if len(self.__players) != PLAYER_COUNT:
return self.print("Not enough player to initiate the game")
self.restart(new_seed=False)
def send_init_messages(self):
for player in self.__players:
player.send_init_input(self.__state.get_mine_spots())
for row in self.__state.get_map():
for cell in row:
if cell.is_mine():
player.send_init_input(f'{cell.get_x()} {cell.get_y()}')
player.init()
def get_turns(self):
return self.__turns
def get_player(self, index: int):
return self.__players[index]
def get_income(self, index: int):
return self.__state.get_income(index)
def get_gold(self, index: int):
return self.__state.get_gold(index)
def is_debug(self):
return self.__debug
def is_strict(self):
return self.__strict
def game_over(self):
self.__gameover = True
self.__started = False
if self.__auto_restart:
self.restart()
def get_result(self):
return self.__result
def activate_players(self):
[p.activate() for p in self.__players]
def deactivate_players(self):
[p.deactivate() for p in self.__players]
def gameloop(self):
if self.__gameover:
self.game_over()
return
self.debug("\n\n\n")
if self.__turns // 2 >= MAX_TURNS:
scores = self.__state.get_scores()
self.print(f'Scores [ {self.__players[0]}: {scores[0]} --- {self.__players[1]}: {scores[1]} ]')
if scores[0] > scores[1]:
self.__result = f'{self.__players[0]} [{scores[0]} --- {scores[1]}] {self.__players[1]}'
self.kill_player(self.__players[1])
elif scores[1] > scores[0]:
self.__result = f'{self.__players[0]} [{scores[0]} --- {scores[1]}] {self.__players[1]}'
self.kill_player(self.__players[0])
else:
self.print("Wow a tie")
self.__result = f'Tie [{scores[0]} --- {scores[1]}]'
self.__gameover = True
self.game_over()
self.__players[0].add_score(scores[0])
self.__players[1].add_score(scores[1])
return
success = 0
self.__state.init_turn(self.current_player.get_index())
self.__state.send_state(self.current_player)
try:
self.current_player.update()
except Exception as e:
self.print(f'{self.current_player} crashed at turn {self.__turns // 2}')
self.print(e)
if self.__strict:
raise e
try:
self.parse_action()
success = sum([self.execute_action(action) for action in self.__actions])
if not success:
self.__idle_count[self.current_player.get_index()] += 1
else:
self.__idle_count[self.current_player.get_index()] = 0
self.check_idle()
except Exception as e:
self.print("Caught Exception while executing an action:")
self.print(e)
scores = self.__state.get_scores()
for player in self.__players:
if self.current_player is player:
player.add_score(scores[player.get_index()] - 100)
else:
player.add_score(scores[player.get_index()])
self.kill_player(self.current_player)
if success != len(self.__actions) and not self.__debug and not self.__strict:
self.debug("Some actions failed, enable strict mode to see more details")
self.__actions.clear()
self.__turns += 1
self.check_hq_capture()
self.current_player = self.__players[\
(self.__players.index(self.current_player) + 1)\
% len(self.__players)]
if self.__sleep > 0:
time.sleep(self.__sleep)
def check_idle(self):
if sum([idle_time >= self.__idle_limit for idle_time in self.__idle_count]) == PLAYER_COUNT:
scores = self.__state.get_scores()
self.__players[0].add_score(scores[0] - 50)
self.__players[1].add_score(scores[1] - 50)
self.__result = f'{self.__players[0]} [{scores[0]} --- {scores[1]}] {self.__players[1]} | idle({self.__idle_limit})'
self.kill_all()
def kill_all(self):
[p.lose() for p in self.__players]
self.__gameover = True
self.game_over()
def kill_player(self, player: Player):
player.lose()
[p.win() for p in self.__players if not p is player]
self.__gameover = True
self.game_over()
def parse_action(self):
# Referee.java readInput
msg = self.current_player.get_message()
self.debug(f'{self.current_player} Message: {msg}')
actions_str = msg.split(";")
for action_str in actions_str:
action_str = action_str.strip(" ")
if action_str == "WAIT":
continue
# Ignore Msg Command
if not self.match_move_train(self.current_player, action_str) and \
not self.match_build(self.current_player, action_str):
# self.throw(f'Message not matching any regex {action_str}')
self.print("Invalid Input: " + action_str)
def match_move_train(self, player: Player, msg: str):
if not MOVETRAIN_PATTERN.match(msg):
return False
args = msg.split()
action_type = ACTIONTYPE.MOVE if msg.startswith("MOVE") else ACTIONTYPE.TRAIN
try:
id_or_level = int(args[1])
x = int(args[2])
y = int(args[3])
except:
self.throw("Invalid Integer")
return True
if not self.__state.in_bound(x, y):
return self.throw("Coordinate not in map")
if action_type == ACTIONTYPE.TRAIN:
if self.__league == LEAGUE.WOOD3 and id_or_level != 1:
return self.throw("WOOD3 can't train unit level that's not 1")
self.create_train_action(player, id_or_level, x, y, msg)
else:
self.create_move_action(player, id_or_level, x, y, msg)
return True
def match_build(self, player: Player, msg: str):
if not BUILD_PATTERN.match(msg):
return False
args = msg.split()
try:
build_type = Building.convert_type(args[1])
x = int(args[2])
y = int(args[3])
except:
return self.throw("Invalid Integer")
# bound check moved out of the former `finally` block, which would also run
# when parsing failed and reference x/y before assignment
if not self.__state.in_bound(x, y):
return self.throw("Coordinate not in map")
action = Action(msg, ACTIONTYPE.BUILD, player.get_index(), \
self.__state.get_cell(x, y), build_type)
self.__actions.append(action)
return True
def create_train_action(self, player: Player, level: int,
x: int, y: int, action_str: str):
if level <= 0 or level > MAX_LEVEL:
return self.throw(f'Invalid Level: {level}')
action = Action(action_str, ACTIONTYPE.TRAIN, player.get_index(),
level, self.__state.get_cell(x, y))
self.__actions.append(action)
def create_move_action(self, player: Player, unit_id: int,
x: int, y: int, action_str: str):
if not self.__state.get_unit(unit_id):
return self.throw(f'Unknown unit ID: {unit_id}')
if player.get_index() != self.__state.get_unit(unit_id).get_owner():
return self.throw("Trying to move enemy unit")
action = Action(action_str, ACTIONTYPE.MOVE, player.get_index(),
unit_id, self.__state.get_cell(x, y))
self.__actions.append(action)
def execute_action(self, action: Action):
if action.get_type() != ACTIONTYPE.MOVE and \
not action.get_cell().is_playable(action.get_player()):
return self.throw(f'Cell is not playable at {action.get_cell()}')
if action.get_type() == ACTIONTYPE.MOVE:
return self.make_move_action(action)
elif action.get_type() == ACTIONTYPE.TRAIN:
return self.make_train_action(action)
else:
return self.make_build_action(action)
def make_move_action(self, action: Action):
if not self.__state.get_unit(action.get_unit_id()).can_play():
return self.throw("Invalid move (unit can't move)")
unit_id: int = action.get_unit_id()
unit: Unit = self.__state.get_unit(unit_id)
if not action.get_cell().is_capturable(action.get_player(), unit.get_level()) and \
abs(unit.get_x() - action.get_cell().get_x()) + abs(unit.get_y() - action.get_cell().get_y()) == 1:
return self.throw("Not capturable")
next_cell = self.__state.get_next_cell(unit, action.get_cell())
if next_cell.get_x() == unit.get_cell().get_x() and \
next_cell.get_y() == unit.get_cell().get_y():
return self.throw("Unit can't stay still")
self.__state.move_unit(unit, next_cell)
self.__state.compute_all_active_cells()
return True
def make_train_action(self, action: Action):
player: Player = self.__players[action.get_player()]
if self.__state.get_gold(player.get_index()) < UNIT_COST[action.get_level()]:
return self.throw("Not enough gold to train unit")
if not action.get_cell().is_capturable(action.get_player(), action.get_level()):
return self.throw("Can't capture cell" + str(action.get_cell()))
unit = Unit(action.get_cell(), action.get_player(), action.get_level())
self.__state.add_unit(unit)
self.__state.compute_all_active_cells()
return True
def make_build_action(self, action: Action):
if self.__league == LEAGUE.WOOD3:
return self.throw("No build in WOOD3")
if self.__league == LEAGUE.WOOD2 and \
action.get_build_type() == BUILDING_TYPE.TOWER:
return self.throw("No TOWER in WOOD2")
if not action.get_cell().is_free():
return self.throw("Cell is not free to build on")
        if action.get_cell().get_owner() != action.get_player():
return self.throw("Must be built on own territory")
if self.__state.get_gold(action.get_player()) < \
self.__state.get_building_cost(action.get_build_type(), action.get_player()):
return self.throw("Not enough gold to build")
if action.get_build_type() == BUILDING_TYPE.MINE and not action.get_cell().is_mine():
return self.throw("Must build mine on a cell with mine")
if action.get_build_type() == BUILDING_TYPE.TOWER and action.get_cell().is_mine():
return self.throw("Can't build tower on a mine")
# Finally
building = Building(action.get_cell(), action.get_player(), action.get_build_type())
self.__state.add_building(building)
return True
def check_hq_capture(self):
for hq in self.__state.get_HQs():
if hq.get_cell().get_owner() != hq.get_owner():
loser_index = hq.get_owner()
winner_index = hq.get_cell().get_owner()
scores = self.__state.get_scores()
reward = round(math.sqrt(max([(MAX_TURNS - self.__turns) // 2, 0]))) * 100 + 1000
winner_score = scores[winner_index] + reward
loser_score = scores[loser_index] + reward // 10
                self.__result = f'{self.__players[0]} [{winner_score} --- {loser_score}] {self.__players[1]} | {self.__players[winner_index]} captures HQ'
self.__players[winner_index].add_score(winner_score)
self.__players[loser_index].add_score(loser_score)
self.kill_player(self.__players[loser_index])
def log_errors(self):
self.print("\n".join(self.__error_log))
def clear_errors(self):
self.print(f'{len(self.__error_log)} error logs cleared')
self.__error_log.clear()
def throw(self, msg: str):
if self.__strict:
raise Exception(msg)
else:
self.debug(msg)
self.__error_log.append(msg)
return False
def debug(self, msg: str):
if self.__debug:
self.print(msg)
def print(self, *args):
if not self.__silence:
print(*args) |
C | UTF-8 | 394 | 3.953125 | 4 | [] | no_license |
#include <stdio.h>
void swapInt(int *a, int *b)
{
int temp;
temp = *a;
*a = *b;
*b = temp;
}
int main()
{
int i1 = 1;
int i2 = 2;
printf("before: i1: %d, i2: %d\n", i1, i2);
swapInt(&i1,&i2);
printf("after: i1: %d, i2: %d\n", i1, i2);
}
/* output of program:
before: i1: 1, i2: 2
after: i1: 2, i2: 1*/
|
Java | UTF-8 | 1,641 | 2.59375 | 3 | [] | no_license | package ec.europa.eu.testcentre.client.gui;
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class BetterGlassPane extends JPanel implements MouseListener {
private final JFrame frame;
private String message;
private Font font;
public BetterGlassPane(JFrame frame) {
super(null);
this.frame = frame;
setOpaque(false);
addMouseListener(this);
message=" loading ... ";
font = new Font("Serif", Font.PLAIN, 24);
}
protected void paintComponent(Graphics g) {
Graphics2D g2 = (Graphics2D) g;
g2.setColor(Color.BLACK);
g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.7f));
g2.setFont(font);
g2.fillRect(0, 0, getWidth(), getHeight());
g2.drawString(message, getWidth() / 2 , getHeight() / 2);
}
public void setMessage(String message) {
this.message = message;
repaint();
}
public void mouseClicked(MouseEvent e) {
e.consume();
}
public void mousePressed(MouseEvent e) {
e.consume();
}
public void mouseReleased(MouseEvent e) {
return;
}
public void mouseEntered(MouseEvent e) {
return;
}
public void mouseExited(MouseEvent e) {
return;
}
} |
Python | UTF-8 | 282 | 2.59375 | 3 | [] | no_license | # coding: utf-8
import os, sys
sys.path.append(os.getcwd())
import xml.etree.ElementTree as ET
if __name__ == '__main__':
tree = ET.parse('data/nlp.txt.xml')
root = tree.getroot()
elem_list = root.findall(".//word")
for elem in elem_list:
print(elem.text)
|
C++ | UTF-8 | 2,593 | 3.4375 | 3 | [] | no_license | class Solution {
public:
int trap(vector<int>& height) {
if (height.size() == 0) return 0;
int rightWall = 0, leftWall = height.size() - 1;
int hLen = height.size();
int water = 0;
for (int i = 1; i < hLen; i++){
if (height[i] >= height[rightWall]){
for (int j = rightWall; j < i; j++){
water += height[rightWall] - height[j];
}
rightWall = i;
}
}
if (rightWall == leftWall) return water;
for (int i = leftWall - 1; i >= 0; i--){
if (height[i] > height[leftWall]){
for (int j = leftWall; j > i; j--){
water += height[leftWall] - height[j];
}
leftWall = i;
}
}
return water;
}
};
/*
Stack-based approach
*/
class Solution {
public:
int trap(vector<int>& height) {
int hLen = height.size(), ans = 0, current = 0;
stack<int> st;
while (current < hLen){
while (!st.empty() && height[current] > height[st.top()]){
int top = st.top();
st.pop();
if (st.empty()) break;
int distance = current - st.top() - 1;
int boundedHeight = min(height[current], height[st.top()]) - height[top];
ans += distance * boundedHeight;
}
st.push(current++);
}
return ans;
}
};
/*
Two-pointer approach.
If one side has a taller bar (say the right end), the trapped water height depends on the running maximum in the current traversal direction (left to right).
Once the other side (the right) no longer holds the taller bar, we traverse from the opposite direction (right to left).
We must maintain left_max and right_max while traversing, and by advancing the two pointers alternately we can finish in a single pass.
*/
class Solution {
public:
int trap(vector<int>& height) {
int left = 0, right = height.size() - 1;
int ans = 0;
int leftMax = 0, rightMax = 0;
while (left < right){
if (height[left] < height[right]) {
height[left] >= leftMax ? (leftMax = height[left]) : ans += (leftMax - height[left]);
++left;
}
else {
height[right] >= rightMax ? (rightMax = height[right]) : ans += (rightMax - height[right]);
--right;
}
}
return ans;
}
}; |
Java | UTF-8 | 597 | 2.546875 | 3 | [] | no_license | package com.angel.mensajes.client;
import java.awt.Canvas;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
public class ClientCanvas extends Canvas{
ClientModel clientModel;
public ClientModel getClientModel() {
return clientModel;
}
public void setClientModel(ClientModel clientModel) {
this.clientModel = clientModel;
}
@Override
public void paint(Graphics graphics) {
super.paint(graphics);
Graphics2D g = (Graphics2D) graphics;
g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
}
}
|
JavaScript | UTF-8 | 785 | 3.125 | 3 | [] | no_license | const user = {
name: "Emilio Dias Mazzola",
nickname: 'mazzolinha',
time_range: {
from: 1942,
to: 1961
},
games: 856,
active: true,
birthday: 1921,
position: 'Meio campo',
times: ['Comandatuba', 'Fiorentina'],
gols: 102,
marcarGol (gol) {
return this.gols = this.gols + gol
},
toogleActivePlayer () {
this.active = !this.active;
},
getTeams () {
    return `${this.name}, aka ${this.nickname}. Played for ${ this.times.join(', ') } between ${ this.time_range.from } and ${ this.time_range.to }.
He started his career at ${this.times[0]} in ${this.time_range.from}`
}
}
user.marcarGol(5);
user.toogleActivePlayer()
user.getTeams()
console.log(user.getTeams())
console.log(user) |
Markdown | UTF-8 | 6,108 | 2.734375 | 3 | [
"MIT"
] | permissive | # PSID.jl
[](https://travis-ci.com/aaowens/PSID.jl)
[](https://codecov.io/gh/aaowens/PSID.jl)
The Panel Study of Income Dynamics (PSID) is a longitudinal public dataset which has been following a collection of families and their descendants since 1968. It provides a breadth of information about labor supply and life-cycle dynamics. More information is available at https://psidonline.isr.umich.edu/.
This package produces a labeled panel of individuals with a consistent individual ID across time. You provide a JSON file describing the variables you want. An example input file can be found at [examples/user_input.json.](https://github.com/aaowens/PSID.jl/blob/master/examples/user_input.json). Currently only variables in the family files can be added, but in the future it should be possible to support variables in the individual files or the supplements.
# Example
An example workflow can be found on my blog post [here](https://aaowens.github.io/julia/2020/02/11/Using-the-Panel-Study-of-Income-Dynamics.html).
# Instructions
To add this package, use
```
(@v1.6) pkg> add PSID
```
Next, download the PSID data files yourself. The package can't automatically fetch them because the PSID requires you to register for a free account before using the data.
The list of data files required to be in the current directory can be found [here](https://github.com/aaowens/PSID.jl/blob/master/src/allfiles_hash.json). These files are
1. The PSID codebook in XML format. You used to be able to download this from the PSID here https://simba.isr.umich.edu/downloads/PSIDCodebook.zip , but the link is broken. I put it in my Google Drive here https://drive.google.com/file/d/1CPwM5tsphdezi4RqlHGMkS1hiLLRZIT7/view .
2. The zipped PSID family files and cross-year individual file, which can be downloaded here https://simba.isr.umich.edu/Zips/ZipMain.aspx. Do not extract the files--leave them zipped. You need to download every family file from 1968 to 2019, and you also need to download the cross-year individual file.
3. The XLSX cross-year index for the variables, which can be downloaded here https://psidonline.isr.umich.edu/help/xyr/psid.xlsx.
After acquiring the data, run
```
julia> using PSID
julia> makePSID("user_input.json")
# to not code missings, makePSID("user_input.json", codemissings = false)
```
It will verify the required files exist and then construct the data. If successful, it will print `Finished constructing individual data, saved to output/allinds.csv` after about 5 minutes.
This will use about 15 GB of RAM. It may not work on machines without much memory.
## The input JSON file
The file passed to `makePSID` describes the variables you want.
```
{
"name_user": "hours",
"varID": "V465",
"unit": "head"
},
```
There are three fields, `name_user`, `varID`, and `unit`. `name_user` is a name chosen by you. `varID` is one of the codes assigned by the PSID to this variable. These can be looked up in the PSID [cross-year index](https://simba.isr.umich.edu/VS/i.aspx). For example, hours above can be found in the crosswalk at ` Family Public Data Index 01>WORK 02>Hours and Weeks 03>annual in prior year 04>head 05>total:`. Clicking on the variable info will show the the list of years and associated IDs when that variable is available. Choose any of the IDs for `varID`, it does not matter. `PSID.jl` will look up all available years for that variable in the crosswalk. You must also indicate the unit, which can be `head`, `spouse`, or `family`. This makes sure the variable is assigned to the correct individual.
# Features
This package provides the following features:
1. Automatically labels missing values by searching the value labels from the codebook for strings like "NA", "Inap.", or "Missing".
2. Tries to produce consistent value labels across years for categorical variables. This is difficult because the labels in the PSID sometimes change between years. This package uses an algorithm to try to harmonize the labels when possible by removing common subsets. For example, in one year race is labeled as "Asian" but in the next year it is "Asian, Pacific Islander". The first is a subset of the second, so the final label will be "Asian, Pacific Islander". When this is not possible, the final label will be "A or B or C" for however many incomparable labels were found.
3. Matches the individuals across time to produce a panel with consistent (ID, year) keys and their associated variables.
4. Produces consistent individual or spouse variables for individuals. In the input JSON file, you must indicate whether a variable is family level, household head level, or household spouse level. The final output will have variables of the form `VAR_family`, `VAR_ind`, or `VAR_spouse`. When the individual is a household head, `VAR_ind` will come from the household head version of that variable, and `VAR_spouse` will come from the household spouse version. If the individual is a household spouse, it is the reverse. Both individuals will get all family level variables.
5. It's easiest to track individuals, but this package also produces a consistent family ID by treating a family as a combination of head and spouse (if spouse exists). If you keep only household heads and drop years before 1970, (famid, year) should be an ID.
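
To make feature 2 concrete, here is a hypothetical Python sketch of the subset-based label harmonization idea (an illustration of the idea only, not PSID.jl's actual algorithm; the `harmonize` function name is invented):

```python
# Illustrative sketch: when one year's label is a substring of another's,
# keep the more complete label; otherwise join incomparable labels with " or ".
def harmonize(labels):
    result = []
    for lab in labels:
        # drop any kept label that is contained in the new one
        result = [r for r in result if r not in lab]
        # skip the new label if it is contained in one already kept
        if not any(lab in r for r in result):
            result.append(lab)
    return " or ".join(result)

print(harmonize(["Asian", "Asian, Pacific Islander"]))
print(harmonize(["White", "Black"]))
```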
# Notable Omissions
Certain variables are not in the family files. For example, the wealth data are in separate files, and there is some unique information in the individual file directly. In the future I plan to add support for these data, but you can manually add them by constructing the unique individual ID yourself as (ER30001 * 1000) + ER30002, and then joining your data on that ID with the dataset produced by PSID.jl.
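
As a sketch of the manual join described above (illustrative Python, since the package's output is a CSV; only the ID formula comes from the text — the field names and values below are made up):

```python
# The PSID's unique cross-year individual ID combines the 1968 family
# interview number (ER30001) and the person number (ER30002).
def unique_ind_id(er30001: int, er30002: int) -> int:
    return er30001 * 1000 + er30002

# Toy supplemental records keyed by (ER30001, ER30002):
wealth_rows = [
    {"ER30001": 4, "ER30002": 1, "wealth": 120_000},
    {"ER30001": 4, "ER30002": 2, "wealth": 95_000},
]

# Toy rows standing in for output/allinds.csv, keyed by the same ID:
allinds_rows = [
    {"id": 4001, "year": 2019, "hours": 2000},
    {"id": 4002, "year": 2019, "hours": 1500},
]

# Build a lookup on the constructed ID, then join it onto the panel rows.
wealth_by_id = {unique_ind_id(r["ER30001"], r["ER30002"]): r["wealth"]
                for r in wealth_rows}
merged = [{**row, "wealth": wealth_by_id.get(row["id"])} for row in allinds_rows]
```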
Please file issues if you find a bug.
# Donate your input JSON
If you've made an input JSON file containing variables useful for some topic, feel free to file an issue or make a PR to add your file to the examples.
|
Python | UTF-8 | 7,960 | 2.5625 | 3 | [
"Apache-2.0"
] | permissive | # Copyright 2021 NREL
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
# See https://floris.readthedocs.io for documentation
from __future__ import annotations
import attrs
from attrs import define, field
import numpy as np
from floris.type_dec import (
FromDictMixin,
NDArrayFloat,
floris_array_converter
)
from floris.simulation import Grid
@define
class FlowField(FromDictMixin):
wind_speeds: NDArrayFloat = field(converter=floris_array_converter)
wind_directions: NDArrayFloat = field(converter=floris_array_converter)
wind_veer: float = field(converter=float)
wind_shear: float = field(converter=float)
air_density: float = field(converter=float)
turbulence_intensity: float = field(converter=float)
reference_wind_height: float = field(converter=float)
time_series : bool = field(default=False)
n_wind_speeds: int = field(init=False)
n_wind_directions: int = field(init=False)
u_initial_sorted: NDArrayFloat = field(init=False, default=np.array([]))
v_initial_sorted: NDArrayFloat = field(init=False, default=np.array([]))
w_initial_sorted: NDArrayFloat = field(init=False, default=np.array([]))
u_sorted: NDArrayFloat = field(init=False, default=np.array([]))
v_sorted: NDArrayFloat = field(init=False, default=np.array([]))
w_sorted: NDArrayFloat = field(init=False, default=np.array([]))
u: NDArrayFloat = field(init=False, default=np.array([]))
v: NDArrayFloat = field(init=False, default=np.array([]))
w: NDArrayFloat = field(init=False, default=np.array([]))
het_map: list = field(init=False, default=None)
dudz_initial_sorted: NDArrayFloat = field(init=False, default=np.array([]))
turbulence_intensity_field: NDArrayFloat = field(init=False, default=np.array([]))
@wind_speeds.validator
def wind_speeds_validator(self, instance: attrs.Attribute, value: NDArrayFloat) -> None:
"""Using the validator method to keep the `n_wind_speeds` attribute up to date."""
if self.time_series:
self.n_wind_speeds = 1
else:
self.n_wind_speeds = value.size
@wind_directions.validator
def wind_directions_validator(self, instance: attrs.Attribute, value: NDArrayFloat) -> None:
"""Using the validator method to keep the `n_wind_directions` attribute up to date."""
self.n_wind_directions = value.size
def initialize_velocity_field(self, grid: Grid) -> None:
# Create an initial wind profile as a function of height. The values here will
# be multiplied with the wind speeds to give the initial wind field.
# Since we use grid.z, this is a vertical plane for each turbine
# Here, the profile is of shape (# turbines, N grid points, M grid points)
# This velocity profile is 1.0 at the reference wind height and then follows wind shear as an exponent.
# NOTE: the convention of which dimension on the TurbineGrid is vertical and horizontal is
# determined by this line. Since the right-most dimension on grid.z is storing the values
# for height, using it here to apply the shear law makes that dimension store the vertical
# wind profile.
wind_profile_plane = (grid.z_sorted / self.reference_wind_height) ** self.wind_shear
dwind_profile_plane = self.wind_shear * (1 / self.reference_wind_height) ** self.wind_shear * (grid.z_sorted) ** (self.wind_shear - 1)
        # If no heterogeneous inflow is defined, set all speed-ups to 1.0
if self.het_map is None:
speed_ups = 1.0
# If heterogeneous flow data is given, the speed ups at the defined
# grid locations are determined in either 2 or 3 dimensions.
else:
if len(self.het_map[0][0].points[0]) == 2:
speed_ups = self.calculate_speed_ups(self.het_map, grid.x_sorted, grid.y_sorted)
elif len(self.het_map[0][0].points[0]) == 3:
speed_ups = self.calculate_speed_ups(self.het_map, grid.x_sorted, grid.y_sorted, grid.z_sorted)
        # Create the shear-law wind profile
# This array is of shape (# wind directions, # wind speeds, grid.template_array)
# Since generally grid.template_array may be many different shapes, we use transposes
# here to do broadcasting from left to right (transposed), and then transpose back.
# The result is an array the wind speed and wind direction dimensions on the left side
# of the shape and the grid.template array on the right
if self.time_series:
self.u_initial_sorted = (self.wind_speeds[:].T * wind_profile_plane.T).T * speed_ups
self.dudz_initial_sorted = (self.wind_speeds[:].T * dwind_profile_plane.T).T * speed_ups
else:
self.u_initial_sorted = (self.wind_speeds[None, :].T * wind_profile_plane.T).T * speed_ups
self.dudz_initial_sorted = (self.wind_speeds[None, :].T * dwind_profile_plane.T).T * speed_ups
self.v_initial_sorted = np.zeros(np.shape(self.u_initial_sorted), dtype=self.u_initial_sorted.dtype)
self.w_initial_sorted = np.zeros(np.shape(self.u_initial_sorted), dtype=self.u_initial_sorted.dtype)
self.u_sorted = self.u_initial_sorted.copy()
self.v_sorted = self.v_initial_sorted.copy()
self.w_sorted = self.w_initial_sorted.copy()
def finalize(self, unsorted_indices):
self.u = np.take_along_axis(self.u_sorted, unsorted_indices, axis=2)
self.v = np.take_along_axis(self.v_sorted, unsorted_indices, axis=2)
self.w = np.take_along_axis(self.w_sorted, unsorted_indices, axis=2)
def calculate_speed_ups(self, het_map, x, y, z=None):
if z is not None:
# Calculate the 3-dimensional speed ups; reshape is needed as the generator adds an extra dimension
speed_ups = np.reshape(
[het_map[0][i](x[i:i+1,:,:,:,:], y[i:i+1,:,:,:,:], z[i:i+1,:,:,:,:]) for i in range(len(het_map[0]))],
np.shape(x)
)
            # If any requested points fall outside the user-defined area, use the
            # nearest-neighbor interpolant to determine those speed-up values
if np.isnan(speed_ups).any():
idx_nan = np.where(np.isnan(speed_ups))
speed_ups_out_of_region = np.reshape(
[het_map[1][i](x[i:i+1,:,:,:,:], y[i:i+1,:,:,:,:], z[i:i+1,:,:,:,:]) for i in range(len(het_map[1]))],
np.shape(x)
)
speed_ups[idx_nan] = speed_ups_out_of_region[idx_nan]
else:
# Calculate the 2-dimensional speed ups; reshape is needed as the generator adds an extra dimension
speed_ups = np.reshape(
[het_map[0][i](x[i:i+1,:,:,:,:], y[i:i+1,:,:,:,:]) for i in range(len(het_map[0]))],
np.shape(x)
)
            # If any requested points fall outside the user-defined area, use the
            # nearest-neighbor interpolant to determine those speed-up values
if np.isnan(speed_ups).any():
idx_nan = np.where(np.isnan(speed_ups))
speed_ups_out_of_region = np.reshape(
[het_map[1][i](x[i:i+1,:,:,:,:], y[i:i+1,:,:,:,:]) for i in range(len(het_map[1]))],
np.shape(x)
)
speed_ups[idx_nan] = speed_ups_out_of_region[idx_nan]
return speed_ups
|
PHP | UTF-8 | 1,200 | 2.859375 | 3 | [] | no_license | <?php
require_once('./checkSession.php'); // require the login-check mechanism
require_once('./db.inc.php'); // include the database connection
// SQL statement
$sql = "INSERT INTO `museum`
        (`musName`, `musId`,`musimg`)
        VALUES (?, ?, ?)";
$imgFileName = null; // default in case no image was uploaded
if ($_FILES["musImg"]["error"] === 0) {
    // build a name for the uploaded file
    $strDatetime = date("YmdHis");
    // extract the file extension
    $extension = pathinfo($_FILES["musImg"]["name"], PATHINFO_EXTENSION);
    // assemble the full file name
    $imgFileName = $strDatetime . "." . $extension;
    // move the temporary file to its final location
    $isSuccess = move_uploaded_file($_FILES["musImg"]["tmp_name"], "./images/" . $imgFileName);
    // if the upload failed, stop here and return to the form page
    if (!$isSuccess) {
        header("Refresh: 3; url=./addEvent.php");
        echo "Image upload failed";
        exit();
    }
}
// values for the bound parameters
$arr = [
    $_POST['musName'],
    $_POST['musId'],
    $imgFileName
];
$stmt = $pdo->prepare($sql);
$stmt->execute($arr);
if ($stmt->rowCount() > 0) {
    header("Location: ./musList.php");
    exit();
} else {
    header("Refresh: 3; url=./addMus.php");
    echo "Insert failed";
}
Markdown | UTF-8 | 3,032 | 3.671875 | 4 | [] | no_license |
# Brick Breaker
For my final project, I am going to recreate the game Brick Breaker.
This is a game that was developed by Ali Asaria and released in 1999. Because this
was one of the first games I remember playing religiously as a kid on my dad's Blackberry,
I thought it would be interesting to understand the code and creation of one of the
most iconic, well known games in the world.
In my version of Brick Breaker, there will be a paddle that moves horizontally
across the screen in response to keyboard input (arrow keys). There will also be a
ball that is given a random velocity at the start of the game. The game will be
held within a box on the screen that ball and paddle may not move outside of.
The game box will also contain different types of bricks in a specified layout.
The bricks that are easiest to break will be given a lower point value, and the bricks
that are harder to break will be given a higher point value. For each brick that is
broken, the player's score will be updated and displayed on the screen. Once all the
bricks are gone, the user will have won the game. The functionality of the paddle will
be made so that if the paddle is moved in the same direction as the one the ball
bounces in when it comes in contact with the paddle, the speed of the ball will slightly
increase. If the ball hits the paddle and the paddle and ball are moving
in opposite directions at the time of contact, the ball's speed will slightly decrease.
There will be an upper and lower bound on the ball's speed, however. The player will
be given 3 lives before the game ends. If the ball touches the ground of the game box
and is not caught by the paddle, the player will lose a life.
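
The paddle-contact speed rule described above can be sketched as follows (a minimal illustration in Python, not the project's actual Cinder/C++ code; the constants are invented):

```python
# Speed rises when paddle and ball move the same way on contact, falls when
# they oppose, and is always clamped to fixed bounds.
MIN_SPEED, MAX_SPEED = 2.0, 10.0
SPEED_STEP = 0.5

def speed_after_paddle_hit(ball_speed: float, ball_dx: float, paddle_dx: float) -> float:
    """Return the ball's new speed after it hits the paddle."""
    if ball_dx * paddle_dx > 0:      # moving in the same direction
        ball_speed += SPEED_STEP
    elif ball_dx * paddle_dx < 0:    # moving in opposite directions
        ball_speed -= SPEED_STEP
    # enforce the upper and lower bounds on the ball's speed
    return max(MIN_SPEED, min(MAX_SPEED, ball_speed))

print(speed_after_paddle_hit(5.0, 1.0, 2.0))   # same direction: speeds up
print(speed_after_paddle_hit(5.0, 1.0, -2.0))  # opposite: slows down
```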
_Weekly breakdown for what I will accomplish over the course of three weeks:_
**WEEK 1**
During the first week, I will create the game box, paddle, and ball. I will give
the paddle functionality with keyboard input, and will create movement for the ball.
I will also make it so that they are displayed on the game screen with Cinder.
**WEEK 2**
During the second week, I will add the different bricks to the game box. I will give them
different strengths, and will give them an associated point value. I will also implement
the speed changes based on the ball and paddle movement at the time of contact.
I will also make it that so they are displayed on the game screen with Cinder.
**WEEK 3**
During the third week, I will make the bricks so that they are breakable. Breaking the bricks
will change the score, so I will also create a game score display and add the 3 lives game component.
I will also make it so that they are displayed on the game screen with Cinder.
_**STRETCH GOALS**_
If I finish my project early, I will add mystery boxes that will appear on the screen for
a limited amount of time. If a mystery box is hit, the user will gain some ability that will
help them get rid of bricks faster while boosting the player's score. The mystery box's effects
will last for a limited amount of time.
|
JavaScript | UTF-8 | 1,783 | 4 | 4 | [] | no_license | // If life was easy, we could just do things the easy way:
// var getElementsByClassName = function (className) {
// return document.getElementsByClassName(className);
// };
// But instead we're going to implement it from scratch:
const getElementsByClassName = (targetClassName, node = document.body) => {
  // Traverse the DOM, inquiring one level down each time.
  // When we hit a node with the specified className, add it to foundNodeArr.
  let foundNodeArr = [];
  // Expand the node collection to one level lower;
  // the inquiry into the children is what drives the recursion.
  let nodesBelow = node.childNodes;
  // Guard against nodes without a classList (e.g. text nodes),
  // then check whether this node has the target class name.
  if (node.classList && node.classList.contains(targetClassName)) {
    foundNodeArr.push(node);
  }
  // Run everything at the next level down through the checker;
  // lower-level results get merged into foundNodeArr.
  nodesBelow.forEach(child => {
    foundNodeArr = [...foundNodeArr, ...getElementsByClassName(targetClassName, child)];
  });
  // Return everything found.
  return foundNodeArr;
};
C++ | UTF-8 | 1,892 | 2.78125 | 3 | [] | no_license | #pragma once
#include "hittable.h"
class Isotropic : public Material
{
Texture* albedo_;
public:
Isotropic(Texture* t)
: albedo_(t)
{
}
bool scatter(const Ray& r, const HitInfo& info, vec3& attenuation, Ray& scattered) const
{
scattered = Ray(info.point, rg_.getPointInUnitSphere());
attenuation = albedo_->value(info.uv, info.point);
return true;
}
};
class ConstantMedium : public Hittable
{
Hittable* me_;
Material* material_;
float density_;
mutable RandomGenerator rg_;
public:
ConstantMedium(Hittable* h, float density, Texture* albedo)
: me_(h)
, density_(density)
{
material_ = new Isotropic(albedo);
}
bool hit(const Ray& r, float tMin, float tMax, HitInfo& info) const
{
HitInfo hit1;
HitInfo hit2;
if (me_->hit(r, -FLT_MAX, FLT_MAX, hit1))
{
if (me_->hit(r, hit1.t + 0.0001f, FLT_MAX, hit2))
{
if (hit1.t < tMin) hit1.t = tMin;
if (hit2.t > tMax) hit2.t = tMax;
if (hit1.t >= hit2.t) return false;
if (hit1.t < 0.0f) hit1.t = 0.0f;
float rayLength = r.direction().length();
float dInBoundary = (hit2.t - hit1.t) * rayLength;
float hitDistance = -(1.0f / density_) * log(rg_.getZeroToOne());
if (hitDistance < dInBoundary)
{
info.t = hit1.t + hitDistance / rayLength;
info.point = r.pointAt(info.t);
info.normal = vec3(1.0f, 0.0f, 0.0f);
info.material = material_;
return true;
}
}
}
return false;
}
AABB getBounds(float time0, float time1) const
{
return me_->getBounds(time0, time1);
}
};
|
Python | UTF-8 | 985 | 4.125 | 4 | [] | no_license | """
7. Write a program that demonstrates or verifies that the equality
1+2+...+n = n(n+1)/2 holds for the set of natural numbers,
where n is any natural number.
Solve it with recursion. A loop-based solution will not be accepted.
For an "Excellent" grade in this block you must complete 5 of the 7 tasks.
"""
def left_sum(n, start, result):
if start > n:
return result
else:
result += start
return left_sum(n, start + 1, result)
def right_sum(n):
return n * (n + 1) / 2
def check_form(n):
left = left_sum(n, 1, 0)
right = right_sum(n)
if left == right:
        print('The left and right sides are equal')
else:
        print('The left and right sides are not equal')
check_form(10)
|
TypeScript | UTF-8 | 1,885 | 2.625 | 3 | [
"MIT"
] | permissive |
import { Typegoose, InstanceType } from "typegoose";
import { Model, Types } from "mongoose";
import { ObjectId } from "mongodb";
class UpdateResult {
ok: number;
n: number;
nModified: number;
}
export class RepositoryBase<T extends Typegoose> {
protected dbModel: Model<InstanceType<T>>;
constructor(modelType: new () => T) {
this.dbModel = new modelType().getModelForClass(modelType);
}
async create(item: T) {
return await this.dbModel.create(item);
}
async retrieve() {
return await this.dbModel.find({}).exec();
}
async update(_id: Types.ObjectId, item: Partial<T>): Promise<UpdateResult> {
try {
return await this.dbModel.update({ _id }, item).exec();
} catch (err) {
console.log(err);
throw err;
}
}
async delete(_id: string) {
return await this.dbModel.deleteOne({ _id: this.toObjectId(_id) }).exec();
}
async findById(_id: string) {
return await this.dbModel.findById(new ObjectId(_id)).exec();
}
protected async _find(criteria: any) {
return await this.dbModel.find(criteria).exec();
}
async find(criteria: Partial<T>) {
return await this.dbModel.find(criteria).exec();
}
async findLike(likeOptions: { [P in keyof T]?: string }, limit?: number, options?: Partial<T>) {
const filter = Object.keys(likeOptions).map((k: string) => ({ [k]: new RegExp((<any>likeOptions)[k], "i") }));
const query = this.dbModel.find({ $or: filter, ...options });
        query.limit(limit || 8); // default to 8 results when no limit is given
return await query.exec();
}
async findOne(conditions?: any) {
return await this.dbModel.findOne(conditions).exec();
}
private toObjectId(_id: string): Types.ObjectId {
return Types.ObjectId.createFromHexString(_id);
}
} |
Python | UTF-8 | 106 | 3.40625 | 3 | [] | no_license | # Write a Python program to remove duplicates from a list.
a=[1,2,1,5,3,2,3,4,5]
a = list(set(a))  # note: converting to a set does not preserve the original order
print(a) |
Markdown | UTF-8 | 1,420 | 3.1875 | 3 | [] | no_license | # Computer Instruction Set Architecture
"Instruction set architecture is the structure of a computer that a machine language programmer (or a compiler) must understand to write a correct (timing independent) program for that machine"
A good instruction set is:
- Implementable
- Programmable
- Compatible
Classification:
- Stack architectures
- Accumulator architectures
- General-purpose register architectures (register-memory and register-register)
## Instruction Set Design
- Control instructions
- Operand types and sizes
- Addressing modes
- Memory addressing scheme
- Code location (relocation) method
Byte-addressed memory with word access
Drawbacks:
- Wasted address information (the in-word offset wastes 2 bits for 32-bit words, 3 bits for 64-bit words)
- Endianness (byte-order) issues
- More complex read/write logic (read before write)
Any item of data no wider than the memory word must be stored within a single memory word and must not cross a word boundary.
## Instruction Set Optimization
- Balance amongst competing forces
- As many registers as possible
- Impact on instruction size and program size
- Simplicity of decoding
### Optimizing the Opcode
1. Fixed-length encoding
2. Huffman encoding: assign short codes to high-probability events
3. Expanding encoding: use a small number of fixed code lengths
Information redundancy = 1 - (theoretical optimal code length / average code length), where the theoretical optimal code length is
$$
H = -\sum_{i=1}^n p_i\times log_2p_i
$$
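As an illustration (the opcode set and probabilities below are invented for the example), the entropy bound and the redundancy of a Huffman opcode assignment can be computed as:

```python
import math

# Invented opcode probabilities and one valid Huffman assignment for them.
probs = {"LOAD": 0.5, "ADD": 0.25, "STORE": 0.15, "JMP": 0.10}
code_len = {"LOAD": 1, "ADD": 2, "STORE": 3, "JMP": 3}

# H is the theoretical optimal (entropy) code length.
H = -sum(p * math.log2(p) for p in probs.values())
# Average code length of the Huffman assignment.
avg_len = sum(probs[op] * code_len[op] for op in probs)
# Redundancy per the formula above.
redundancy = 1 - H / avg_len

print(H, avg_len, redundancy)
```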
### Optimizing the Address Code
Ways to shorten the address code: represent a large address space with a short address code
- Indirect addressing (reserve a dedicated area at the low end of main memory for storing indirect addresses)
- Indexed addressing
- Register-indirect addressing
TypeScript | UTF-8 | 1,016 | 2.640625 | 3 | [] | no_license | import { Schema, Document } from 'mongoose';
import { ApiProperty } from '@nestjs/swagger';
export const VerseSchema = new Schema({
  number: String,
  text: String,
  book: String,
  version: String,
  chapter: String
});
export class Verse extends Document {
@ApiProperty({
name: 'number',
description: 'Number for the scripture',
example: 23
})
number: number;
@ApiProperty({
name: 'text',
description: 'Scripture text',
example: 'The LORD is my shepherd; I have all that I need.'
})
text: string;
@ApiProperty({
name: 'book',
description: 'Book of the scripture',
example: 'psalm'
})
book: string;
@ApiProperty({
name: 'chapter',
description: 'Chapter of the scripture',
example: 1
})
chapter: number;
@ApiProperty({
name: 'version',
    description: 'Translation version of the scripture',
example: 'nlt'
})
version: string;
}
|
C++ | UTF-8 | 4,876 | 2.890625 | 3 | [] | no_license | #include "include/pipeline_scan.h"
#include "gtest/gtest.h"
#include <cstdlib>
#include <ctime>
namespace byteslice{
class PipelineScanTest: public ::testing::Test{
public:
virtual void SetUp(){
std::srand(std::time(0));
column1 = new Column(ColumnType::kByteSlicePadRight, bit_width_, num_);
column2 = new Column(ColumnType::kByteSlicePadRight, bit_width_, num_);
//Populate with random values
for(size_t i=0; i < num_; i++){
column1->SetTuple(i, std::rand() & mask_);
}
for(size_t i=0; i < num_; i++){
column2->SetTuple(i, std::rand() & mask_);
}
}
virtual void TearDown(){
delete column1;
delete column2;
}
protected:
size_t num_ = 13.5*kNumTuplesPerBlock;
Column* column1;
Column* column2;
const size_t bit_width_ = 16;
const WordUnit mask_ = (1ULL << bit_width_) - 1;
};
TEST_F(PipelineScanTest, Blockwise){
WordUnit lit1 = mask_ * 0.2;
WordUnit lit2 = mask_ * 0.5;
PipelineScan scan;
scan.AddPredicate(AtomPredicate(column1, Comparator::kLess, lit1));
scan.AddPredicate(AtomPredicate(column2, Comparator::kLess, lit2));
BitVector* bitvector = new BitVector(num_);
bitvector->SetOnes();
scan.ExecuteBlockwise(bitvector);
size_t result = bitvector->CountOnes();
std::cout << result << "\t";
//Verify
for(size_t i=0; i < num_; i++){
EXPECT_EQ(bitvector->GetBit(i), column1->GetTuple(i) < lit1 && column2->GetTuple(i) < lit2);
}
//Compare with columnwise
bitvector->SetOnes();
column1->Scan(Comparator::kLess, lit1, bitvector, Bitwise::kSet);
std::cout << bitvector->CountOnes() << "\t";
column2->Scan(Comparator::kLess, lit2, bitvector, Bitwise::kAnd);
std::cout << bitvector->CountOnes() << std::endl;
EXPECT_EQ(result, bitvector->CountOnes());
delete bitvector;
}
TEST_F(PipelineScanTest, Columnwise){
WordUnit lit1 = mask_ * 0.2;
WordUnit lit2 = mask_ * 0.5;
PipelineScan scan;
scan.AddPredicate(AtomPredicate(column1, Comparator::kLess, lit1));
scan.AddPredicate(AtomPredicate(column2, Comparator::kLess, lit2));
BitVector* bitvector = new BitVector(num_);
bitvector->SetOnes();
scan.ExecuteColumnwise(bitvector);
size_t result = bitvector->CountOnes();
std::cout << result << "\t";
//Verify
for(size_t i=0; i < num_; i++){
EXPECT_EQ(bitvector->GetBit(i), column1->GetTuple(i) < lit1 && column2->GetTuple(i) < lit2);
}
//Compare with columnwise
bitvector->SetOnes();
column1->Scan(Comparator::kLess, lit1, bitvector, Bitwise::kSet);
std::cout << bitvector->CountOnes() << "\t";
column2->Scan(Comparator::kLess, lit2, bitvector, Bitwise::kAnd);
std::cout << bitvector->CountOnes() << std::endl;
EXPECT_EQ(result, bitvector->CountOnes());
delete bitvector;
}
TEST_F(PipelineScanTest, Standard){
WordUnit lit1 = mask_ * 0.2;
WordUnit lit2 = mask_ * 0.5;
PipelineScan scan;
scan.AddPredicate(AtomPredicate(column1, Comparator::kLess, lit1));
scan.AddPredicate(AtomPredicate(column2, Comparator::kLess, lit2));
BitVector* bitvector = new BitVector(num_);
bitvector->SetOnes();
scan.ExecuteStandard(bitvector);
size_t result = bitvector->CountOnes();
std::cout << result << "\t";
//Verify
for(size_t i=0; i < num_; i++){
EXPECT_EQ(bitvector->GetBit(i), column1->GetTuple(i) < lit1 && column2->GetTuple(i) < lit2);
}
//Compare with columnwise
bitvector->SetOnes();
column1->Scan(Comparator::kLess, lit1, bitvector, Bitwise::kSet);
std::cout << bitvector->CountOnes() << "\t";
column2->Scan(Comparator::kLess, lit2, bitvector, Bitwise::kAnd);
std::cout << bitvector->CountOnes() << std::endl;
EXPECT_EQ(result, bitvector->CountOnes());
delete bitvector;
}
TEST_F(PipelineScanTest, Naive){
WordUnit lit1 = mask_ * 0.2;
WordUnit lit2 = mask_ * 0.5;
PipelineScan scan;
scan.AddPredicate(AtomPredicate(column1, Comparator::kLess, lit1));
scan.AddPredicate(AtomPredicate(column2, Comparator::kLess, lit2));
BitVector* bitvector = new BitVector(num_);
bitvector->SetOnes();
scan.ExecuteNaive(bitvector);
size_t result = bitvector->CountOnes();
std::cout << result << "\t";
//Verify
for(size_t i=0; i < num_; i++){
EXPECT_EQ(bitvector->GetBit(i), column1->GetTuple(i) < lit1 && column2->GetTuple(i) < lit2);
}
//Compare with columnwise
bitvector->SetOnes();
column1->Scan(Comparator::kLess, lit1, bitvector, Bitwise::kSet);
std::cout << bitvector->CountOnes() << "\t";
column2->Scan(Comparator::kLess, lit2, bitvector, Bitwise::kAnd);
std::cout << bitvector->CountOnes() << std::endl;
EXPECT_EQ(result, bitvector->CountOnes());
delete bitvector;
}
}
|
C# | UTF-8 | 661 | 2.65625 | 3 | [] | no_license | using System;
namespace Ditto
{
[Serializable]
public class DittoConfigurationException : Exception
{
public DittoConfigurationException(string msg, params object[] args)
: base(string.Format(msg, args))
{
}
// Constructor needed for serialization
// when exception propagates from a remoting server to the client.
        protected DittoConfigurationException(System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingContext context)
            : base(info, context) { }
public override string ToString()
{
return this.Message + this.StackTrace;
}
}
} |
Python | UTF-8 | 997 | 4.5 | 4 | [
"CC-BY-4.0"
] | permissive | # Supporting class for testing purposes.
class Node:
def __init__(self,data):
self.data = data
self.next = None
# This function will reverse a linked list.
# ie: Given the list A -> B -> C, the list should be modified to be C -> B -> A
def reverse(head):
    if head is None: return None
current = head
newHead = reverseHelp(current, None)
return newHead
def reverseHelp(current, last):
    if current.next is None:
current.next = last
return current
else:
h = reverseHelp(current.next, current)
current.next = last
return h
# This will print the linked list in order.
def printList(head):
current = head
s = ""
    while current is not None:
        s += str(current.data)
        if current.next is not None: s += " -> "
current = current.next
print(s)
# Create the linked list A -> B -> C
L = Node('A')
L.next = Node('B')
L.next.next = Node('C')
printList(L)
R = reverse(L)
print()
printList(R)
|
C++ | UTF-8 | 1,102 | 2.640625 | 3 | [] | no_license | /**
* @file caret.cpp
* @brief
* @author Frederic SCHERMA (frederic.scherma@dreamoverflow.org)
* @date 2001-12-25
* @copyright Copyright (c) 2001-2017 Dream Overflow. All rights reserved.
* @details
*/
#include "o3d/gui/precompiled.h"
#include "o3d/gui/caret.h"
using namespace o3d;
// default constructor. take an TextZone object
Caret::Caret(TextZone *textZone) :
m_textZone(textZone),
m_isVisible(True),
m_blindTime(500)
{
O3D_ASSERT(m_textZone != nullptr);
}
void Caret::resetVisibilityState()
{
m_blindTime.reset();
m_isVisible = True;
}
void Caret::draw(const Vector2i &pos)
{
m_blindTime.update();
if (m_blindTime.check())
m_isVisible = !m_isVisible;
if (m_isVisible && m_textZone && m_textZone->font())
{
ABCFont *font = m_textZone->font();
Int32 textHeight = m_textZone->fontHeight();
font->setTextHeight(textHeight);
font->write(Vector2i(
m_textZone->pos().x(),
m_textZone->pos().y() + textHeight) + pos + m_pixelPos,
"",0);
}
}
void Caret::setPixelPosition(const Vector2i &pos)
{
m_pixelPos = pos;
}
|
Java | UTF-8 | 815 | 2.609375 | 3 | [] | no_license | package ru.iandreyshev.parserrss.models.web;
import org.junit.Test;
import static org.junit.Assert.*;
public class UrlTest {
private static final String VALID_URL = "http://domain.com/";
private static final String URL_WITHOUT_PROTOCOL = "domain.com/";
private static final String URL_WITH_PORT = "http://domain.com:8080/";
private static final String URL_WITH_SUB_DOMAIN = "http://domain.com.ru/";
@Test
public void returnNullIfParseNullString() {
assertNull(Url.parse(null));
}
@Test
public void returnNullIfParseEmptyString() {
assertNull(Url.parse(""));
}
@Test
public void returnUrlStringInToStringMethod() {
final Url url = Url.parse(VALID_URL);
assertNotNull(url);
assertEquals(VALID_URL, url.toString());
}
} |
Java | UTF-8 | 5,681 | 1.703125 | 2 | [
"Apache-2.0"
] | permissive | /*
* Copyright (c) 2018 - Frank Hossfeld
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*
*/
package de.gishmo.gwtbootstartermvp4g2.server.resource.generator.impl.gxt;
import com.google.gwt.user.client.ui.Widget;
import com.sencha.gxt.core.client.util.Margins;
import com.sencha.gxt.widget.core.client.button.TextButton;
import com.sencha.gxt.widget.core.client.container.VerticalLayoutContainer;
import com.sencha.gxt.widget.core.client.event.SelectEvent;
import com.squareup.javapoet.FieldSpec;
import com.squareup.javapoet.MethodSpec;
import com.squareup.javapoet.TypeName;
import com.squareup.javapoet.TypeSpec;
import de.gishmo.gwt.gwtbootstartermvp4g2.shared.model.Mvp4g2GeneraterParms;
import de.gishmo.gwtbootstartermvp4g2.server.resource.generator.impl.AbstractNavigationSourceGenerator;
import javax.lang.model.element.Modifier;
import java.io.File;
public class NavigationGxtSourceGenerator
extends AbstractNavigationSourceGenerator {
private NavigationGxtSourceGenerator(Builder builder) {
super();
this.mvp4g2GeneraterParms = builder.mvp4g2GeneraterParms;
this.directoryJava = builder.directoryJava;
this.clientPackageJavaConform = builder.clientPackageJavaConform;
}
public static Builder builder() {
return new Builder();
}
@Override
protected TypeName getBaseElement() {
return TypeName.get(Widget.class);
}
@Override
protected void createViewMethod(TypeSpec.Builder typeSpec) {
MethodSpec.Builder method = MethodSpec.methodBuilder("createView")
.addAnnotation(Override.class)
.addModifiers(Modifier.PUBLIC)
.addStatement("container = new $T()",
VerticalLayoutContainer.class);
this.mvp4g2GeneraterParms.getPresenters()
.forEach(presenterData -> {
TypeSpec selectHandler = TypeSpec.anonymousClassBuilder("")
.addSuperinterface(SelectEvent.SelectHandler.class)
.addMethod(MethodSpec.methodBuilder("onSelect")
.addAnnotation(Override.class)
.addModifiers(Modifier.PUBLIC)
.addParameter(SelectEvent.class,
"event")
.addStatement("getPresenter().doNavigateTo($S)",
presenterData.getName())
.build())
.build();
method.addStatement("$T textButton$L = new $T($S)",
TextButton.class,
presenterData.getName(),
TextButton.class,
presenterData.getName())
.addStatement("textButton$L.addSelectHandler($L)",
presenterData.getName(),
selectHandler)
.addStatement("container.add(textButton$L, new $T(1, -1, new $T(12)))",
presenterData.getName(),
VerticalLayoutContainer.VerticalLayoutData.class,
Margins.class);
});
typeSpec.addMethod(method.build());
}
@Override
protected FieldSpec getContainerFieldSpec() {
return FieldSpec.builder(VerticalLayoutContainer.class,
"container",
Modifier.PRIVATE)
.build();
}
public static class Builder {
Mvp4g2GeneraterParms mvp4g2GeneraterParms;
File directoryJava;
String clientPackageJavaConform;
public Builder mvp4g2GeneraterParms(Mvp4g2GeneraterParms mvp4g2GeneraterParms) {
this.mvp4g2GeneraterParms = mvp4g2GeneraterParms;
return this;
}
public Builder directoryJava(File directoryJava) {
this.directoryJava = directoryJava;
return this;
}
public Builder clientPackageJavaConform(String clientPackageJavaConform) {
this.clientPackageJavaConform = clientPackageJavaConform;
return this;
}
public NavigationGxtSourceGenerator build() {
return new NavigationGxtSourceGenerator(this);
}
}
}
|
JavaScript | UTF-8 | 2,336 | 2.546875 | 3 | [] | no_license | import {
DELETE_PHOTO_FAIL,
DELETE_PHOTO_REQUEST,
DELETE_PHOTO_SUCCESS,
GET_ALL_PHOTO_FAIL,
GET_ALL_PHOTO_REQUEST,
GET_ALL_PHOTO_SUCCESS,
GET_PHOTO_BY_LABEL_FAIL,
GET_PHOTO_BY_LABEL_REQUEST,
GET_PHOTO_BY_LABEL_SUCCESS,
UPLOAD_PHOTO_BY_URL_FAIL,
UPLOAD_PHOTO_BY_URL_REQUEST,
UPLOAD_PHOTO_BY_URL_SUCCESS,
} from './constants';
import axios from 'axios';
export const getPhotos = () => async (dispatch) => {
try {
dispatch({ type: GET_ALL_PHOTO_REQUEST });
const { data } = await axios.get('/api/unsplash-app/photo/all');
dispatch({
type: GET_ALL_PHOTO_SUCCESS,
payload: data,
});
} catch (error) {
dispatch({
type: GET_ALL_PHOTO_FAIL,
payload: error.response.data,
});
}
};
export const getPhotoByLabel = (labelName) => async (dispatch) => {
try {
dispatch({ type: GET_PHOTO_BY_LABEL_REQUEST });
const { data } = await axios.get(
`/api/unsplash-app/photo/label/${labelName}`
);
dispatch({
type: GET_PHOTO_BY_LABEL_SUCCESS,
payload: data,
});
} catch (error) {
dispatch({
type: GET_PHOTO_BY_LABEL_FAIL,
payload: error.response.data,
});
}
};
export const uploadPhotoByUrl = (labelValue, urlValue) => (dispatch) => {
const formUrlEncoded = (x) =>
Object.keys(x).reduce(
(p, c) => p + `&${c}=${encodeURIComponent(x[c])}`,
''
);
dispatch({ type: UPLOAD_PHOTO_BY_URL_REQUEST });
axios({
method: 'POST',
url: '/api/unsplash-app/photo/byUrl',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
data: formUrlEncoded({
label: labelValue,
photoUrl: urlValue,
}),
})
.then((response) =>
dispatch({
type: UPLOAD_PHOTO_BY_URL_SUCCESS,
payload: response.data,
})
)
.catch((error) =>
dispatch({
type: UPLOAD_PHOTO_BY_URL_FAIL,
payload: error.response.data,
})
);
};
export const deletePhoto = (id) => async (dispatch) => {
try {
dispatch({ type: DELETE_PHOTO_REQUEST });
const { data } = await axios.delete(`/api/unsplash-app/photo/remove/${id}`);
dispatch({
type: DELETE_PHOTO_SUCCESS,
payload: data,
});
} catch (error) {
dispatch({
type: DELETE_PHOTO_FAIL,
payload: error.response.data,
});
}
};
|
Markdown | UTF-8 | 7,188 | 3.3125 | 3 | [] | no_license | # Exercise: Augmented Reality {#exercise-augmented-reality status=draft}
## Skills learned
* Understanding of all the steps in the image pipeline.
* Writing markers on images to aid in debugging.
## Introduction
During the lectures, we have explained one direction of the image pipeline:
image -> [feature extraction] -> 2D features -> [ground projection] -> 3D world coordinates
In this exercise, we are going to look at the pipeline in the opposite direction.
It is often said that:
> "The inverse of computer vision is computer graphics."
The inverse pipeline looks like this:
3D world coordinates -> [image projection] -> 2D features -> [rendering] -> image
## Instructions
* Do intrinsics/extrinsics camera calibration of your robot as per the instructions.
* Write the program `dt-augmented-reality` as specified below in [](#exercise-augmented-reality-spec).
Then verify the results in the following 3 situations.
### Calibration pattern
* Put the robot in the middle of the calibration pattern.
* Run the program `dt-augmented-reality` with map file `calibration_pattern.yaml`.
(Adjust the position until you get a perfect match between reality and augmented reality.)
### Lane
* Put the robot in the middle of a lane.
* Run the program `dt-augmented-reality` with map file `lane.yaml`.
(Adjust the position until you get perfect match of reality and augmented reality.)
### Intersection
* Put the robot at a stop line at a 4-way intersection in Duckietown.
* Run the program `dt-augmented-reality` with map file `intersection_4way.yaml`.
(Adjust the position until you get perfect match of reality and augmented reality.)
### Submission
Submit the images according to location-specific instructions.
## Specification of `dt-augmented-reality` {#exercise-augmented-reality-spec}
The program is invoked with this syntax:
$ dt-augmented-reality ![map file] [![robot name]]
where `![map file]` is a YAML file containing the map (specified in [](#exercise-augmented-reality-map)).
If [![robot name]] is not given, it defaults to the hostname.
The program does the following:
1. It loads the intrinsic / extrinsic calibration parameters for the given robot.
2. It reads the map file.
3. It listens to the image topic `/![robot name]/camera_node/image/compressed`.
4. It reads each image, projects the map features onto the image, and then writes the resulting image to the topic
/![robot name]/AR/![map file basename]
where `![map file basename]` is the basename of the file without the extension.
## Specification of the map {#exercise-augmented-reality-map}
The map file contains a 3D polygon, defined as a list of points and a list of segments
that join those points.
The format is a plain YAML data structure, with a few conventions:
1. Points are referred to by name.
2. It is possible to specify which reference frame each point is. (This will help make this into
a general tool for debugging various types of problems).
Here is an example of the file contents (hopefully self-explanatory).
This describes 3 points, and two lines.
points:
# define three named points: center, left, right
center: [axle, [0, 0, 0]] # [reference frame, coordinates]
left: [axle, [0.5, 0.1, 0]]
right: [axle, [0.5, -0.1, 0]]
segments:
- points: [center, left]
color: [rgb, [1, 0, 0]]
- points: [center, right]
color: [rgb, [1, 0, 0]]
### Reference frame specification
The reference frames are defined as follows:
- `axle`: center of the axle; coordinates are 3D.
- `camera`: camera frame; coordinates are 3D.
- `image01`: a reference frame in which 0,0 is top left, and 1,1 is bottom right of the image; coordinates are 2D.
(Other image frames will be introduced later, such as the `world` and `tile` reference frames, which
require knowledge of the robot's location.)
### Color specification
RGB colors are written as:
[rgb, [![R], ![G], ![B]]]
where the RGB values are between 0 and 1.
Moreover, we support the following strings:
- `red` is equivalent to `[rgb, [1,0,0]]`
- `green` is equivalent to `[rgb, [0,1,0]]`
- `blue` is equivalent to `[rgb, [0,0,1]]`
- `yellow` is equivalent to `[rgb, [1,1,0]]`
- `magenta` is equivalent to `[rgb, [1,0,1]]`
- `cyan` is equivalent to `[rgb, [0,1,1]]`
- `white` is equivalent to `[rgb, [1,1,1]`
- `black` is equivalent to `[rgb, [0,0,0]]`
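As an illustrative sketch (the helper names below are hypothetical; the dictionary mirrors what parsing a small map file would produce), resolving segment endpoints and color names could look like:

```python
# Hypothetical result of loading a small map YAML file.
map_data = {
    "points": {
        "center": ["axle", [0, 0, 0]],
        "left":   ["axle", [0.5, 0.1, 0]],
    },
    "segments": [
        {"points": ["center", "left"], "color": "red"},
    ],
}

# Named colors from the specification above (subset shown).
COLOR_NAMES = {"red": [1, 0, 0], "green": [0, 1, 0], "blue": [0, 0, 1]}

def resolve_segments(data):
    """Turn named segment endpoints into (frame, coords) pairs plus an RGB color."""
    out = []
    for seg in data["segments"]:
        pts = [tuple(data["points"][name]) for name in seg["points"]]
        color = seg["color"]
        if isinstance(color, str):  # named color -> RGB triple
            color = COLOR_NAMES[color]
        out.append((pts, color))
    return out

print(resolve_segments(map_data))
```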
## "Map" files
### `hud.yaml`
This pattern serves as a simple test that we can draw lines in image coordinates:
points:
TL: [image01, [0, 0]]
TR: [image01, [0, 1]]
BR: [image01, [1, 1]]
BL: [image01, [1, 0]]
segments:
- points: [TL, TR]
color: red
- points: [TR, BR]
color: green
- points: [BR, BL]
color: blue
- points: [BL, TL]
color: yellow
The expected result is to put a border around the image:
red on the top, green on the right, blue on the bottom, yellow on the left.
### `calibration_pattern.yaml`
TODO: to write
### `lane.yaml`
We want something like this:
0
| | | . | | |
| | | . | | |
| | | . | | |
| | | . | | |
| | | . | | |
| | | . | | |
WW L WY L WW
1 2 3 4 5 6
Then we have:
points:
p1: [axle, [0, 0.254, 0]]
q1: [axle, [D, 0.254, 0]]
p2: [axle, [0, 0.2286, 0]]
q2: [axle, [D, 0.2286, 0]]
p3: [axle, [0, 0.0127, 0]]
q3: [axle, [D, 0.0127, 0]]
p4: [axle, [0, -0.0127, 0]]
q4: [axle, [D, -0.0127, 0]]
p5: [axle, [0, -0.2286, 0]]
q5: [axle, [D, -0.2286, 0]]
p6: [axle, [0, -0.254, 0]]
q6: [axle, [D, -0.254, 0]]
segments:
- points: [p1, q1]
color: white
- points: [p2, q2]
color: white
- points: [p3, q3]
color: yellow
- points: [p4, q4]
color: yellow
- points: [p5, q5]
color: white
- points: [p6, q6]
color: white
### `intersection_4way.yaml`
TODO: to write
## Suggestions
Start by using the file `hud.yaml`. To visualize it, you do not need the
calibration data. It will be helpful to make sure that you can do the easy
parts of the exercise: loading the map, and drawing the lines.
## Useful APIs
### Loading a YAML file
To load a YAML file, use the function `yaml_load` from `duckietown_utils`:
from duckietown_utils import yaml_load
with open(filename) as f:
contents = f.read()
data = yaml_load(contents)
### Reading the calibration data for a robot
TODO: Ask Liam / Andrea for this part
### Path name manipulation
From a file name like `"/path/to/map1.yaml"`, you can obtain the basename without extension `map1` using the following code:
filename = "/path/to/map1.yaml"
basename = os.path.basename(filename) # => map1.yaml
root, ext = os.path.splitext(basename)
# root = 'map1'
# ext = '.yaml'
### Drawing primitives
TODO: add here the OpenCV primitives
|
C# | UTF-8 | 2,164 | 2.625 | 3 | [
"WTFPL"
] | permissive | using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using CozyNote.Database;
using CozyNote.Model.ObjectModel;
using Nancy;
namespace CozyNote.ServerCore
{
public static class ModuleHelper
{
public static User GetUser(string username, string password)
{
if (DbHolding.User.IsExist(username))
{
                var user = DbHolding.User.Get(username);
                if (user.pass == password)
                {
                    return user;
}
}
return null;
}
public static Tuple<string, int> GetNotebookInfo(int id)
{
if (DbHolding.Notebook.IsExist(id))
{
var notebook = DbHolding.Notebook.Get(id);
return Tuple.Create(notebook.name, notebook.note_list.Count);
}
return null;
}
public static Notebook GetNotebook(int id, string password)
{
if (DbHolding.Notebook.IsExist(id))
{
var notebook = DbHolding.Notebook.Get(id);
if (notebook.pass == password)
{
return notebook;
}
}
return null;
}
public static Note GetNote(int notebookid, string notebookpass, int noteid)
{
var notebook = GetNotebook(notebookid, notebookpass);
if (notebook != null)
{
if (notebook.note_list.Contains(noteid) && DbHolding.Note.IsExist(noteid))
{
var note = DbHolding.Note.Get(noteid);
return note;
}
}
return null;
}
public static string ReadBodyData(this NancyModule module)
{
var body = module.Request.Body;
body.Seek(0, SeekOrigin.Begin);
using (var reader = new StreamReader(body))
{
var result = reader.ReadToEnd();
return result;
}
}
}
}
|
Shell | UTF-8 | 635 | 3.171875 | 3 | [] | no_license | #!/bin/bash
OS=linux
ARCH=amd64
COMPONENT=backend
BUILD_OUT_DIR=~/houyi/${COMPONENT}
WORK_DIR=../
mkdir -p ${BUILD_OUT_DIR}
CGO_ENABLED=0 GOOS=${OS} GOARCH=${ARCH} go build -tags netgo -o ${BUILD_OUT_DIR}/${COMPONENT} -v ${WORK_DIR}/main.go
RUN_SHELL=run.sh
chmod u+x ${RUN_SHELL}
cp ${RUN_SHELL} ${BUILD_OUT_DIR}/
cat <<EOF > Dockerfile
FROM alpine:3.7
COPY ${COMPONENT} /opt/ms/
COPY ${RUN_SHELL} /opt/ms/
EXPOSE 80
WORKDIR /opt/ms/
ENTRYPOINT ["/opt/ms/${RUN_SHELL}"]
EOF
mv Dockerfile ${BUILD_OUT_DIR}/
BUILD_RUN_SHELL_DOCKER=build-docker.sh
chmod u+x ${BUILD_RUN_SHELL_DOCKER}
cp ${BUILD_RUN_SHELL_DOCKER} ${BUILD_OUT_DIR}/
|
Markdown | UTF-8 | 427 | 2.5625 | 3 | [] | no_license | # PHP RPC CLIENT
## Introduction
A PHP RPC client that uses JSON to encode and decode payloads.
## Usage
```php
use Tourscool\RpcClient\Client;
$client = new Client([
'endpoints'=>[
'tcp://192.168.1.251:20182' ,
'tcp://192.168.1.252:2010'
]
]);
try {
$result = $client->service('Greeting')->sayHello('Vincent');
print_r($result);
}catch(RpcException $e){
echo 'RPC fail';
}
``` |
Markdown | UTF-8 | 2,388 | 2.984375 | 3 | [
"MIT"
] | permissive | # CliToolkit [](https://dev.azure.com/phil-harmoniq/devops/_build/latest?definitionId=7) [](https://www.nuget.org/packages/CliToolkit)
## Installing
Install using the .NET CLI or your IDE's package manager:
```bash
dotnet add package CliToolkit
```
## Usage
Begin by inheriting `CliApp` in your main class and overriding the default `OnExecute()` method. Public properties will have their values injected from command-line arguments detected using either `--PascalCase` or `--kebab-case`. Public properties whose types derive from `CliCommand` will automatically be registered as sub-command routes.
## Example
```c#
using CliToolkit;
public class Program : CliApp
{
static int Main(string[] args)
{
var app = new CliAppBuilder<Program>()
.Start(args);
return app.ExitCode;
}
[CliOptions(Description = "Simulate a long-running process")]
public TimerCommand Timer { get; set; }
[CliOptions(Description = "Display help menu")]
public bool Help { get; set; }
public override void OnExecute(string[] args)
{
if (Help)
{
PrintHelpMenu();
return;
}
throw new CliException("Please specify a sub-command.");
}
}
public class TimerCommand : CliCommand
{
public string Title { get; set; } = "Default";
public int Seconds { get; set; } = 5;
public bool TimeStamp { get; set; }
public override void OnExecute(string[] args)
{
Console.WriteLine($"{Title} timer start");
for (var i = 1; i <= Seconds; i++)
{
Thread.Sleep(TimeSpan.FromSeconds(1));
Console.WriteLine(i);
}
if (TimeStamp)
{
Console.WriteLine(DateTime.Now);
}
}
}
```
```bash
$ dotnet run timer --title Custom --seconds 3 --time-stamp true
Custom timer start
1
2
3
8/9/2020 8:44:19 PM
```
This is a small portion of the example in [`CliToolkit.TestApp`](CliToolkit.TestApp). Go check it out for the full implementation.
## Supporting Links
- Check out the [CliToolkit wiki](https://github.com/phil-harmoniq/CliToolkit/wiki) for more usage details
- Check out some of my other projects on my [personal site](http://phil.harmoniq.dev/)
|
Python | UTF-8 | 598 | 3.328125 | 3 | [] | no_license | class ArgumentError(Exception):
"""
Exception raised for errors in the input arguments.
"""
def __init__(self, argument, value, argument2=None, value2=None, message="Argument Exception error"):
self.argument = argument
self.value = value
self.argument2 = argument2
self.value2 = value2
self.message = message
if argument2 is None and value2 is None:
super().__init__(self.message, self.argument, self.value)
else:
super().__init__(self.message, self.argument, self.value, self.argument2, self.value2)
|
C++ | UTF-8 | 4,505 | 2.984375 | 3 | [] | no_license | #include "../Rendering/Shader.h"
#include "../Debug/Log.h"
#include <fstream>
#define DEBUG_PRINTING 0
Shader::Shader() :
m_shaderProgram(0)
{
}
Shader::Shader(const std::string& in_name, const std::string& in_vertexShaderSrc, const std::string& in_fragShaderSrc) :
m_shaderProgram(0),
m_name(in_name)
{
CreateShaderProgram(in_vertexShaderSrc, in_fragShaderSrc);
}
Shader::~Shader()
{
if(m_shaderProgram)
{
glDeleteProgram(m_shaderProgram);
}
}
Uniform* Shader::operator[](const char* in_uniformName)
{
return m_uniforms[in_uniformName].get();
}
GLint Shader::GetAttributeLocation(const char* in_attributeName)
{
return glGetAttribLocation(GetProgramID(), in_attributeName);
}
void Shader::Use()
{
glUseProgram(GetProgramID());
}
GLint Shader::GetUniformLocation(const char* in_uniformName) const
{
return glGetUniformLocation(GetProgramID(), in_uniformName);
}
void Shader::AddUniform(const char* in_uniformName)
{
Uniform* newUniform = new Uniform(GetUniformLocation(in_uniformName));
std::unique_ptr<Uniform> uniquePtrUniform(newUniform);
m_uniforms[in_uniformName] = std::move(uniquePtrUniform);
}
//-------------------------------------------------------------------------------------
// CreateShaderProgram
//
// @param vertexShaderSource - The vertex shader we want to use
// @param fragShaderSource - The fragment shader we want to use
//
// @return - The index for the shader program we've just created
//-------------------------------------------------------------------------------------
void Shader::CreateShaderProgram(const std::string& in_vertexShaderSrc, const std::string& in_fragShaderSrc)
{
GLuint vertexShader = CreateShaderFromFile(in_vertexShaderSrc, GL_VERTEX_SHADER);
GLuint fragShader = CreateShaderFromFile(in_fragShaderSrc, GL_FRAGMENT_SHADER);
if(!ShaderCompilationCheck(vertexShader, fragShader, m_name))
{
if(vertexShader != 0)
DeleteShader(vertexShader);
if(fragShader != 0)
DeleteShader(fragShader);
return;
}
// Actually create the shader program
m_shaderProgram = glCreateProgram();
glAttachShader(m_shaderProgram, vertexShader);
glAttachShader(m_shaderProgram, fragShader);
// Telling the program which buffer the fragment shader is writing to
glBindFragDataLocation(m_shaderProgram, 0, "outColor");
glLinkProgram(m_shaderProgram);
// Detach and delete the shaders
DeleteShader(vertexShader);
DeleteShader(fragShader);
#if DEBUG_PRINTING == 1
if(glIsProgram(m_shaderProgram) == GL_TRUE)
Log().Get() << "isProgram success";
else
Log().Get() << "isProgram fail";
#endif
return;
}
bool Shader::ShaderCompilationCheck(const GLuint in_vertexShader, const GLuint in_fragmentShader, const std::string in_shaderName) const
{
GLint vertexStatus, fragmentStatus;
glGetShaderiv(in_vertexShader, GL_COMPILE_STATUS, &vertexStatus);
glGetShaderiv(in_fragmentShader, GL_COMPILE_STATUS, &fragmentStatus);
if(vertexStatus == GL_TRUE && fragmentStatus == GL_TRUE)
{
#if DEBUG_PRINTING == 1
Log().Get() << in_shaderName + " shader compilation SUCCESS";
#endif
return true;
}
else
{
if(vertexStatus == GL_FALSE)
{
#if DEBUG_PRINTING == 1
Log().Get() << "Vertex shader compilation FAILED";
#endif
}
if(fragmentStatus == GL_FALSE)
{
#if DEBUG_PRINTING == 1
Log().Get() << "Fragment shader compilation FAILED";
#endif
}
#if DEBUG_PRINTING == 1
Log().Get() << "Invalid shaders";
#endif
return false;
}
}
GLuint Shader::CreateShaderFromFile(const std::string& in_path, const GLenum& in_shaderType)
{
// Get the shader
std::string shaderSrcString = LoadShaderFromFile(in_path);
if(shaderSrcString.empty())
{
return 0;
}
const GLchar* shaderSrc = shaderSrcString.c_str();
// Loading Vertex Shader
GLuint shader = glCreateShader(in_shaderType);
glShaderSource(shader, 1, &shaderSrc, NULL);
glCompileShader(shader);
return shader;
}
std::string Shader::LoadShaderFromFile(const std::string& in_path) const
{
std::string shaderSrc = "";
std::ifstream myFile;
myFile.open(in_path);
if(myFile.is_open() && !myFile.bad())
{
return shaderSrc.assign(std::istreambuf_iterator<char>(myFile), std::istreambuf_iterator<char>());
}
return "";
}
void Shader::DeleteShader(GLuint in_shader)
{
glDetachShader(m_shaderProgram, in_shader);
glDeleteShader(in_shader);
} |
TypeScript | UTF-8 | 3,348 | 2.546875 | 3 | [
"MIT"
] | permissive | import {mock, instance, when, verify, deepEqual} from 'ts-mockito';
import {UserRepository} from 'src/Infrastructure/User/Repository/UserRepository';
import {EncryptionAdapter} from 'src/Infrastructure/Adapter/EncryptionAdapter';
import {CreateUserCommand} from 'src/Application/User/Command/CreateUserCommand';
import {CreateUserCommandHandler} from 'src/Application/User/Command/CreateUserCommandHandler';
import {IsEmailAlreadyExist} from 'src/Domain/User/Specification/IsEmailAlreadyExist';
import {EmailAlreadyExistException} from 'src/Domain/User/Exception/EmailAlreadyExistException';
import {User} from 'src/Domain/User/User.entity';
describe('CreatUserCommandHandler', () => {
const email = 'mathieu@fairness.coop';
const command = new CreateUserCommand(
'Mathieu',
'MARCHOIS',
'mathieu@FAIRNESS.coop',
'plainPassword'
);
let userRepository: UserRepository;
let encryption: EncryptionAdapter;
let isEmailAlreadyExist: IsEmailAlreadyExist;
let commandHandler: CreateUserCommandHandler;
beforeEach(() => {
userRepository = mock(UserRepository);
encryption = mock(EncryptionAdapter);
isEmailAlreadyExist = mock(IsEmailAlreadyExist);
commandHandler = new CreateUserCommandHandler(
instance(userRepository),
instance(encryption),
instance(isEmailAlreadyExist)
);
});
it('testEmailAlreadyExist', async () => {
when(isEmailAlreadyExist.isSatisfiedBy(email)).thenResolve(true);
try {
await commandHandler.execute(command);
} catch (e) {
expect(e).toBeInstanceOf(EmailAlreadyExistException);
expect(e.message).toBe('user.errors.email_already_exist');
verify(isEmailAlreadyExist.isSatisfiedBy(email)).once();
verify(encryption.hash('plainPassword')).never();
verify(encryption.hash('mathieu@fairness.coopplainPassword')).never();
verify(
userRepository.save(
deepEqual(
new User(
'Mathieu',
'MARCHOIS',
'mathieu@fairness.coop',
'hashToken',
'hashPassword'
)
)
)
).never();
}
});
it('testRegisterSuccess', async () => {
const createdUser: User = mock(User);
when(createdUser.getId()).thenReturn(
'fcf9a99f-0c7b-45ca-b68a-bfd79d73a49f'
);
when(isEmailAlreadyExist.isSatisfiedBy(email)).thenResolve(false);
when(encryption.hash(command.password)).thenResolve('hashPassword');
when(
userRepository.save(
deepEqual(
new User(
'Mathieu',
'MARCHOIS',
'mathieu@fairness.coop',
'hashToken',
'hashPassword'
)
)
)
).thenResolve(instance(createdUser));
when(encryption.hash(email + command.password)).thenResolve('hashToken');
expect(await commandHandler.execute(command)).toBe(
'fcf9a99f-0c7b-45ca-b68a-bfd79d73a49f'
);
verify(isEmailAlreadyExist.isSatisfiedBy(email)).once();
verify(encryption.hash('plainPassword')).once();
verify(encryption.hash('mathieu@fairness.coopplainPassword')).once();
verify(
userRepository.save(
deepEqual(
new User(
'Mathieu',
'MARCHOIS',
'mathieu@fairness.coop',
'hashToken',
'hashPassword'
)
)
)
).once();
verify(createdUser.getId()).once();
});
});
|
JavaScript | UTF-8 | 2,538 | 2.8125 | 3 | [] | no_license |
const queryString = window.location.search;
const params = queryString.split("=")[1];
const url = "http://localhost:3000/products";
const urlOrders = "http://localhost:3000/orders";
const productPlace = document.getElementById("output");
const getProduct = async (url, params) => {
try {
const response = await fetch(`${url}/${params}`).then((res) => res.json());
return response;
} catch (e) {
return console.error(e);
}
};
getProduct(url, params).then((data) => productRender(data, productPlace));
const productRender = async (data, element) => {
if (data.length === 0) {
return console.log("No data");
}
const outputData = await data.filter((item) => item._id === params);
const div = document.createElement("div");
const img = document.createElement("img");
const pricePlace = document.createElement("p");
const prTitle = document.createElement("p");
const about = document.createElement("p");
div.style.margin = "3rem 1rem 3rem 2rem";
div.style.padding = "2rem";
div.style.textAlign = "center";
div.style.border = "1px solid black";
div.style.borderRadius = "2rem";
prTitle.style.fontWeight = "bold";
prTitle.style.fontSize = "2rem";
prTitle.style.margin = "0 0 1rem 0";
about.style.textAlign = "justify";
outputData.forEach((checkedPrd) => {
prTitle.textContent = checkedPrd.title;
about.textContent = checkedPrd.description;
pricePlace.textContent = `€ ${checkedPrd.price}`;
img.src = checkedPrd.image;
img.alt = checkedPrd.title;
img.style.width = "20rem";
div.append(prTitle, img, about, pricePlace);
element.appendChild(div);
document.getElementById("orderForm").addEventListener("submit", (e) => {
e.preventDefault();
const client = e.target.elements[0].value.trim();
const clEmail = e.target.elements[1].value.trim();
fetch(urlOrders, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
name: client,
email: clEmail,
product_id: params,
price: checkedPrd.price,
}),
})
.then((res) => res.json())
.then((data) => {
if (data.length > 0) {
alert(`Jūsų užsakyta prekė yra ${checkedPrd.title}`);
setTimeout(() => {
window.location.href = "index.html";
}, 1000);
} else {
alert(`Negalime užsakyti šios prekės. Pamėginkite vėliau`);
}
});
});
});
};
|
Go | UTF-8 | 5,446 | 2.6875 | 3 | [] | no_license |
package main
import (
"bytes"
"errors"
"fmt"
"io"
"net"
"strconv"
"strings"
"time"
"github.com/jhwbarlow/tcp-audit-common/pkg/event"
"github.com/jhwbarlow/tcp-audit-common/pkg/tcpstate"
)
const (
familyInet = "AF_INET"
protocolTCP = "IPPROTO_TCP"
)
// errIrrelevantEvent is an error returned if the event read from
// the provided byte stream is not a TCPv4 event.
var errIrrelevantEvent error = errors.New("irrelevant event")
// eventParser is an interface which describes objects which convert a byte
// slice/"stream" containing a TCP state-change event into an event object.
type eventParser interface {
toEvent(str []byte) (*event.Event, error)
}
// traceFSEventParser is a parser of tracefs TCP state-change events.
type traceFSEventParser struct {
fieldParser fieldParser
}
func newTraceFSEventParser(fieldParser fieldParser) *traceFSEventParser {
return &traceFSEventParser{fieldParser}
}
// toEvent creates a TCP state-change event object from the supplied byte
// slice/"stream".
func (ep *traceFSEventParser) toEvent(str []byte) (*event.Event, error) {
time := time.Now().UTC()
command, err := parseCommand(&str)
if err != nil {
return nil, fmt.Errorf("parsing command from event: %w", err)
}
pidStr, err := ep.fieldParser.nextField(&str, spaceBytes, true)
if err != nil {
return nil, fmt.Errorf("parsing PID from event: %w", err)
}
pid, err := strconv.ParseInt(pidStr, 10, 64)
if err != nil {
return nil, fmt.Errorf("converting PID to integer: %w", err)
}
if _, err := ep.fieldParser.nextField(&str, colonSpaceBytes, true); err != nil {
return nil, fmt.Errorf("skipping metadata from event: %w", err)
}
if _, err := ep.fieldParser.nextField(&str, colonSpaceBytes, true); err != nil {
return nil, fmt.Errorf("skipping tracepoint from event: %w", err)
}
// Begin tagged data
tags, err := ep.fieldParser.getTaggedFields(&str)
if err != nil {
return nil, fmt.Errorf("parsing tagged fields: %w", err)
}
family, ok := tags["family"]
if ok { // Family will not be present if using tcp_set_state
if family != familyInet {
return nil, errIrrelevantEvent
}
}
protocol, ok := tags["protocol"]
if ok { // Protocol will not be present if using tcp_set_state
if protocol != protocolTCP {
return nil, errIrrelevantEvent
}
}
sPort, ok := tags["sport"]
if !ok {
return nil, errors.New("source port not present in event")
}
sourcePort, err := strconv.ParseUint(sPort, 10, 16)
if err != nil {
return nil, fmt.Errorf("converting source port to integer: %w", err)
}
dPort, ok := tags["dport"]
if !ok {
return nil, errors.New("destination port not present in event")
}
destPort, err := strconv.ParseUint(dPort, 10, 16)
if err != nil {
return nil, fmt.Errorf("converting destination port to integer: %w", err)
}
sAddr, ok := tags["saddr"]
if !ok {
return nil, errors.New("source address not present in event")
}
sourceIP := net.ParseIP(sAddr)
if sourceIP == nil {
return nil, errors.New("could not parse source address")
}
dAddr, ok := tags["daddr"]
if !ok {
return nil, errors.New("destination address not present in event")
}
destIP := net.ParseIP(dAddr)
if destIP == nil {
return nil, errors.New("could not parse destination address")
}
/* sAddrV6, ok := tags["saddrv6"]
if !ok {
return nil, errors.New("source IPv6 address not present in event")
}
dAddrV6, ok := tags["daddrv6"]
if !ok {
return nil, errors.New("destination IPv6 address not present in event")
} */
oldState, ok := tags["oldstate"]
if !ok {
return nil, errors.New("old state not present in event")
}
canonicalOldState, err := canonicaliseState(oldState)
if err != nil {
return nil, fmt.Errorf("canonicalising old state: %w", err)
}
newState, ok := tags["newstate"]
if !ok {
return nil, errors.New("new state not present in event")
}
canonicalNewState, err := canonicaliseState(newState)
if err != nil {
return nil, fmt.Errorf("canonicalising new state: %w", err)
}
return &event.Event{
Time: time,
CommandOnCPU: command,
PIDOnCPU: int(pid),
SourceIP: sourceIP,
DestIP: destIP,
SourcePort: uint16(sourcePort),
DestPort: uint16(destPort),
OldState: canonicalOldState,
NewState: canonicalNewState,
}, nil
}
func canonicaliseState(state string) (tcpstate.State, error) {
switch state {
case "TCP_CLOSE":
state = "CLOSED"
case "TCP_FIN_WAIT1":
state = "FIN-WAIT-1"
case "TCP_FIN_WAIT2":
state = "FIN-WAIT-2"
case "TCP_SYN_RECV":
state = "SYN-RECEIVED"
default:
state = strings.TrimPrefix(state, "TCP_")
state = strings.ReplaceAll(state, "_", "-")
}
return tcpstate.FromString(state)
}
func parseCommand(str *[]byte) (command string, err error) {
defer panicToErr("parsing next field", &err) // Catch any unexpected slicing errors without panicking
// Get index of colon, then work backwards to the last dash.
// This is needed as the command is delimited by a dash, but may contain a dash itself!
idx := bytes.Index(*str, colonSpaceBytes)
if idx == -1 { // No ': ' present
return "", io.ErrUnexpectedEOF
}
for ; (*str)[idx] != byte('-') && idx > 0; idx-- {
}
if idx == 0 { // No command present
return "", io.ErrUnexpectedEOF
}
cmd := (*str)[:idx]
*str = (*str)[idx+1:]
// Strip leading padding spaces
for idx = 0; cmd[idx] == byte(' '); idx++ {
}
command = string(cmd[idx:])
return command, nil
}
|
Java | UTF-8 | 1,612 | 2.671875 | 3 | [] | no_license |
package edu.curso.java.spring.dao;
import java.sql.*;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.core.simple.*;
import org.springframework.stereotype.*;
import edu.curso.java.spring.bo.Producto;
@Repository
public class ProductoDAOImp implements ProductoDAO {
private SimpleJdbcTemplate jdbcTemplate;
@Autowired
public void setDataSource(DataSource dataSource) {
this.jdbcTemplate = new SimpleJdbcTemplate(dataSource);
}
@Override
public void insertaProducto(Producto producto) {
String sql = "insert into producto(id, nombre, precio) values "
+ "(:id,:nombre, :precio)";
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("id", producto.getId());
parameters.put("nombre", producto.getNombre());
parameters.put("precio", producto.getPrecio());
jdbcTemplate.update(sql, parameters);
}
@Override
public List<Producto> listarProductos() {
List<Producto> productos = jdbcTemplate.query(
"select id, nombre, precio from producto",
new RowMapper<Producto>() {
public Producto mapRow(ResultSet rs, int rowNum)
throws SQLException {
Producto producto = new Producto();
producto.setId(rs.getLong("id"));
producto.setNombre(rs.getString("nombre"));
producto.setPrecio(rs.getDouble("precio"));
return producto;
}
});
return productos;
}
}
|
Java | UTF-8 | 840 | 2.171875 | 2 | [] | no_license |
package model;
/**
* Created by David Lu on 2016-05-15.
*/
public class Version {
public final String[] versions = {
"3.0.0",
"2.2.0",
"2.1.0",
"2.0.0",
"1.4.0",
"1.3.0",
"1.2.1",
"1.2.0",
"1.1.0",
"1.0.0",
"0.7.1",
"0.7.0",
"0.6.0",
"0.5.0",
};
public final String CurrentVersion = versions[0];
private String buildNumber;
private String buildDate;
private String buildHash;
private String buildEnterpriseReady;
private String[] versionsWithoutHotFixes;
public Integer[] splitVersion(String version) {
// "." is a regex metacharacter, so it must be escaped for String.split;
// version.split(".") would return an empty array.
String[] split = version.split("\\.");
Integer[] parts = new Integer[split.length];
for (int i = 0; i < split.length; i++) {
parts[i] = Integer.valueOf(split[i]);
}
return parts;
}
public Version() {
}
}
|
Java | UTF-8 | 12,663 | 1.796875 | 2 | [] | no_license |
package com.lyriad.e_commerce.Activities;
import android.content.Intent;
import android.net.Uri;
import android.util.Log;
import android.view.View;
import android.widget.EditText;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.Toast;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import com.bumptech.glide.Glide;
import com.eyalbira.loadingdots.LoadingDots;
import com.google.android.material.checkbox.MaterialCheckBox;
import com.google.android.material.dialog.MaterialAlertDialogBuilder;
import com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton;
import com.google.android.material.textfield.TextInputLayout;
import com.google.firebase.storage.FirebaseStorage;
import com.google.firebase.storage.StorageReference;
import com.lyriad.e_commerce.R;
import com.lyriad.e_commerce.Sessions.UserSession;
import com.lyriad.e_commerce.Tasks.SendDataTask;
import com.lyriad.e_commerce.Utils.Constants;
import com.lyriad.e_commerce.Utils.Response;
import com.lyriad.e_commerce.Utils.Validator;
import com.mikhaellopez.circularimageview.CircularImageView;
import com.theartofdev.edmodo.cropper.CropImage;
import com.theartofdev.edmodo.cropper.CropImageView;
import org.json.JSONException;
import org.json.JSONObject;
import java.util.Objects;
import java.util.UUID;
public class RegisterUserActivity extends AppCompatActivity implements View.OnClickListener {
private UserSession session;
private CircularImageView image;
private EditText name, username, email, phone, password, confirmPassword;
private MaterialCheckBox provider;
private ExtendedFloatingActionButton regButton;
private LoadingDots progress;
private StorageReference fireStorage;
private Uri imageUri;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_register_user);
fireStorage = FirebaseStorage.getInstance().getReference();
imageUri = null;
ImageView backButton = findViewById(R.id.reg_user_back_button);
RelativeLayout imageLayout = findViewById(R.id.reg_user_image_layout);
image = findViewById(R.id.reg_user_image);
name = findViewById(R.id.reg_user_name);
username = findViewById(R.id.reg_user_username);
email = findViewById(R.id.reg_user_email);
phone = findViewById(R.id.reg_user_phone);
password = findViewById(R.id.reg_user_password);
confirmPassword = findViewById(R.id.reg_user_confirm_password);
provider = findViewById(R.id.reg_user_provider);
regButton = findViewById(R.id.reg_user_button);
progress = findViewById(R.id.reg_user_progress);
if (Objects.equals(getIntent().getStringExtra("action"), "update")) {
session = new UserSession(RegisterUserActivity.this);
Glide.with(RegisterUserActivity.this).load(session.getImage()).into(image);
name.setText(session.getName());
username.setText(session.getUsername());
email.setText(session.getEmail());
phone.setText(session.getPhone());
provider.setChecked(session.isProvider());
TextInputLayout passwordLayout = findViewById(R.id.reg_user_password_layout);
TextInputLayout confirmPasswordLayout = findViewById(R.id.reg_user_confirm_password_layout);
username.setEnabled(false);
email.setEnabled(false);
passwordLayout.setVisibility(View.GONE);
confirmPasswordLayout.setVisibility(View.GONE);
regButton.setText("Update user");
}
backButton.setOnClickListener(this);
imageLayout.setOnClickListener(this);
regButton.setOnClickListener(this);
}
@Override
public void onClick(View v) {
switch (v.getId()) {
case R.id.reg_user_back_button:
finish();
break;
case R.id.reg_user_image_layout:
CropImage.activity()
.setGuidelines(CropImageView.Guidelines.ON)
.setAspectRatio(1, 1)
.start(this);
break;
case R.id.reg_user_button:
if (Objects.equals(getIntent().getStringExtra("action"), "update")) {
updateUser();
} else {
registerUser();
}
break;
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE &&
resultCode == RESULT_OK && data != null){
CropImage.ActivityResult result = CropImage.getActivityResult(data);
imageUri = result.getUri();
image.setImageURI(imageUri);
}
}
private void registerUser() {
if (!Validator.isInternetConnected(RegisterUserActivity.this)) {
new MaterialAlertDialogBuilder(RegisterUserActivity.this)
.setTitle("Network Error")
.setMessage("You are not connected to the internet")
.setPositiveButton("Try again", (dialog, which) -> registerUser())
.setNegativeButton("Exit", (dialog, which) -> finish())
.show();
return;
}
// Abort when any field is invalid, the passwords do not match, or no image was chosen
if (!allFieldsValid()
|| !Validator.isPasswordValid(password)
|| !password.getText().toString().equals(confirmPassword.getText().toString())
|| imageUri == null) {
return;
}
regButton.setVisibility(View.GONE);
progress.setVisibility(View.VISIBLE);
String ext = imageUri.toString().substring(imageUri.toString().lastIndexOf('.'));
StorageReference filePath = fireStorage
.child("profile")
.child(UUID.randomUUID().toString() + ext);
final SendDataTask regPhotoTask = new SendDataTask(Constants.API_UPDATE_USER, "PUT", response -> {
Log.i("EVENT", "Photo saved successfully in api");
finish();
}, error -> {
Log.i("WARNING", "Photo couldn't be saved in api");
finish();
});
final SendDataTask regUserTask = new SendDataTask(Constants.API_REGISTER_USER, "POST", (Response.Listener<JSONObject>) response -> {
try {
if (response.getBoolean("success")) {
filePath.putFile(imageUri).addOnSuccessListener(taskSnapshot -> filePath.getDownloadUrl().addOnSuccessListener(uri -> {
try {
JSONObject updateData = new JSONObject();
updateData.put("contact", response.getString("contact"));
updateData.put("isProvider", response.getBoolean("isProvider"));
updateData.put("name", response.getString("name"));
updateData.put("photo", uri.toString());
updateData.put("id", response.getLong("ID"));
regPhotoTask.execute(updateData);
} catch (JSONException e) {
e.printStackTrace();
Toast.makeText(RegisterUserActivity.this, "There was an error", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
}
})).addOnFailureListener(e -> {
Toast.makeText(RegisterUserActivity.this, "There was an error", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
});
} else {
Toast.makeText(RegisterUserActivity.this, response.getString("message"), Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
}
} catch (JSONException e) {
e.printStackTrace();
Toast.makeText(RegisterUserActivity.this, "There was an error", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
}
}, error -> {
Toast.makeText(RegisterUserActivity.this, error.getMessage(), Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
});
try {
JSONObject registerData = new JSONObject();
registerData.put("contact", phone.getText().toString().trim());
registerData.put("email", email.getText().toString().trim());
registerData.put("isProvider", provider.isChecked());
registerData.put("name", name.getText().toString().trim());
registerData.put("photo", JSONObject.NULL);
registerData.put("user", username.getText().toString().trim());
registerData.put("password", password.getText().toString());
regUserTask.execute(registerData);
} catch (JSONException e) {
e.printStackTrace();
Toast.makeText(RegisterUserActivity.this, "There was an error", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
}
}
private void updateUser () {
final String[] imgLink = new String[1];
if (!Validator.isInternetConnected(RegisterUserActivity.this)) {
new MaterialAlertDialogBuilder(RegisterUserActivity.this)
.setTitle("Network Error")
.setMessage("You are not connected to the internet")
.setPositiveButton("Try again", (dialog, which) -> updateUser())
.setNegativeButton("Exit", (dialog, which) -> finish())
.show();
return;
}
if (!allFieldsValid()) {
return;
}
regButton.setVisibility(View.GONE);
progress.setVisibility(View.VISIBLE);
if (imageUri != null) {
String ext = imageUri.toString().substring(imageUri.toString().lastIndexOf('.'));
StorageReference filePath = fireStorage
.child("profile")
.child(UUID.randomUUID().toString() + ext);
filePath.putFile(imageUri).addOnSuccessListener(taskSnapshot -> filePath.getDownloadUrl().addOnSuccessListener(uri -> imgLink[0] = uri.toString()));
} else {
imgLink[0] = session.getImage().toString();
}
final SendDataTask updateUserTask = new SendDataTask(Constants.API_UPDATE_USER, "PUT", response -> {
session.setName(name.getText().toString().trim());
session.setImage(imgLink[0]);
session.setPhone(phone.getText().toString().trim());
session.setProvider(provider.isChecked());
Log.i("EVENT", "User updated successfully");
finish();
}, error -> {
Log.i("ERROR", "Error updating user in api");
Toast.makeText(RegisterUserActivity.this, "Error updating user", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
});
try {
JSONObject updateData = new JSONObject();
updateData.put("contact", phone.getText().toString().trim());
updateData.put("isProvider", provider.isChecked());
updateData.put("name", name.getText().toString().trim());
updateData.put("photo", imgLink[0]);
updateData.put("id", session.getUserId());
updateUserTask.execute(updateData);
} catch (JSONException e) {
e.printStackTrace();
Toast.makeText(RegisterUserActivity.this, "There was an error", Toast.LENGTH_LONG).show();
regButton.setVisibility(View.VISIBLE);
progress.setVisibility(View.GONE);
}
}
private boolean allFieldsValid() {
// True only when every shared field passes validation
// (assumes Validator.isEmpty is true for empty fields and the isXValid checks are true for valid input)
return !Validator.isEmpty(name, username)
&& Validator.isEmailValid(email)
&& Validator.isPhoneValid(phone);
}
}
|
JavaScript | UTF-8 | 480 | 2.515625 | 3 | [] | no_license |
import React, { useEffect, useRef } from 'react';
const Search = ({ search, setSearch}) => {
const inputRef = useRef();
useEffect(() => {
inputRef.current.focus()
}, [])
const handleSearch = event => {
setSearch(event.target.value)
}
return (
<header>
<input ref={inputRef} type="text" placeholder="Search" value={search} onChange={handleSearch} />
</header>
)
}
export default Search;
|
C# | UTF-8 | 18,743 | 2.828125 | 3 | [] | no_license |
using System;
using System.Text;
using System.Windows.Forms;
using System.Collections;
using System.Xml;
namespace Unity
{
/// <summary>
/// This class is used to store presence information about a user
/// </summary>
public class OutOfOfficeAssistantStatus : ListViewItem
{
//This is used when listing statuses in case we want to delete one.
public string ID = "";
public string CurrentStatus="";
//If there is no start date/time and no end date/time, we know this is for today
//If there is not end date, we know this status is applied indefinately
//public string Timezone = "";
public string StartDay="";
public string StartMonth="";
public string StartYear="";
public string StartHour="";
public string StartMinute="";
public string EndDay="";
public string EndMonth="";
public string EndYear="";
public string EndHour="";
public string EndMinute="";
public OutOfOfficeAssistantStatus()
{}
public OutOfOfficeAssistantStatus(string currentStatus)
{
try
{
LoadValues(currentStatus.Split(';'));
}
catch (GeneralError ge)
{
throw ge;
}
catch (Exception ex)
{
throw new GeneralError(ex, "OutOfOfficeAssistantStatus", "OutOfOfficeAssistantStatus", ex.ToString());
}
}
public XmlDocument GetXmlValues()
{
try
{
//<UnityConnect version='1.0'>
//<Command Action='GetPresence'>
//<Id></Id>
//<CurrentStatus></CurrentStatus>
//<Timezone></Timezone>
//<StartDay></StartDay>
//<StartMonth></StartMonth>
//<StartYear></StartYear>
//<StartHour></StartHour>
//<StartMinute></StartMinute>
//<EndDay></EndDay>
//<EndMonth></EndMonth>
//<EndYear></EndYear>
//<EndHour></EndHour>
//<EndMinute></EndMinute>
//</Command>
//</UnityConnect>
XmlDocument doc = new XmlDocument();
doc.LoadXml("<UnityConnect version='1.0'><Command Action='GetPresence'><Id></Id><CurrentStatus></CurrentStatus><Timezone></Timezone><StartDay></StartDay><StartMonth></StartMonth><StartYear></StartYear><StartHour></StartHour><StartMinute></StartMinute><EndDay></EndDay><EndMonth></EndMonth><EndYear></EndYear><EndHour></EndHour><EndMinute></EndMinute></Command></UnityConnect>");
XmlHelper.SetElementValue(doc, "Id", ID, false);
XmlHelper.SetElementValue(doc, "CurrentStatus", CurrentStatus, false);
//XmlHelper.SetElementValue(doc, "Timezone", Timezone, false);
XmlHelper.SetElementValue(doc, "StartDay", StartDay, false);
XmlHelper.SetElementValue(doc, "StartMonth", StartMonth, false);
XmlHelper.SetElementValue(doc, "StartYear", StartYear, false);
XmlHelper.SetElementValue(doc, "StartHour", StartHour, false);
XmlHelper.SetElementValue(doc, "StartMinute", StartMinute, false);
XmlHelper.SetElementValue(doc, "EndDay", EndDay, false);
XmlHelper.SetElementValue(doc, "EndMonth", EndMonth, false);
XmlHelper.SetElementValue(doc, "EndYear", EndYear, false);
XmlHelper.SetElementValue(doc, "EndHour", EndHour, false);
XmlHelper.SetElementValue(doc, "EndMinute", EndMinute, false);
return doc;
}
catch (GeneralError ge)
{
throw ge;
}
catch (Exception ex)
{
throw new GeneralError(ex, "OutOfOfficeAssistantStatus", "GetXmlValues", ex.ToString());
}
}
public string GetValues()
{
try
{
StringBuilder sb = new StringBuilder();
sb.Append("ID=" + ID + ";");
sb.Append("CurrentStatus=" + CurrentStatus + ";");
//sb.Append("Timezone=" + Timezone + ";");
if(StartDay.Length==1)
sb.Append("StartDay=0" + StartDay + ";");
else
sb.Append("StartDay=" + StartDay + ";");
if(StartMonth.Length==1)
sb.Append("StartMonth=0" + StartMonth + ";");
else
sb.Append("StartMonth=" + StartMonth + ";");
sb.Append("StartYear=" + StartYear + ";");
sb.Append("StartHour=" + StartHour + ";");
if(StartMinute.Length==1)
sb.Append("StartMinute=0" + StartMinute + ";");
else
sb.Append("StartMinute=" + StartMinute + ";");
if(EndDay.Length==1)
sb.Append("EndDay=0" + EndDay + ";");
else
sb.Append("EndDay=" + EndDay + ";");
if(EndMonth.Length==1)
sb.Append("EndMonth=0" + EndMonth + ";");
else
sb.Append("EndMonth=" + EndMonth + ";");
sb.Append("EndYear=" + EndYear + ";");
sb.Append("EndHour=" + EndHour + ";");
if (EndMinute.Length == 1)
sb.Append("EndMinute=0" + EndMinute + ";");
else
sb.Append("EndMinute=" + EndMinute + ";");
return sb.ToString();
}
catch (GeneralError ge)
{
throw ge;
}
catch (Exception ex)
{
throw new GeneralError(ex, "OutOfOfficeAssistantStatus", "ToValues", ex.ToString());
}
}
private void LoadValues(string[] values)
{
try
{
//Use the details to fill in the fields of this update
foreach (string field in values)
{
if (field.Contains("="))
{
string fieldname = field.Split('=')[0].ToString();
string fieldvalue = field.Split('=')[1].ToString();
switch (fieldname)
{
case "ID":
ID = fieldvalue;
break;
case "CurrentStatus":
CurrentStatus = fieldvalue;
break;
//case "Timezone":
// Timezone = fieldvalue;
// break;
case "StartHour":
StartHour = fieldvalue;
break;
case "StartMinute":
StartMinute = fieldvalue;
break;
case "StartDay":
StartDay = fieldvalue;
break;
case "StartMonth":
StartMonth = fieldvalue;
break;
case "StartYear":
StartYear = fieldvalue;
break;
case "EndHour":
EndHour = fieldvalue;
break;
case "EndMinute":
EndMinute = fieldvalue;
break;
case "EndDay":
EndDay = fieldvalue;
break;
case "EndMonth":
EndMonth = fieldvalue;
break;
case "EndYear":
EndYear = fieldvalue;
break;
default:
UnityMessageBox.LogException("OutOfOfficeAssistantStatus", "OutOfOfficeAssistantStatus", "The fieldname " + fieldname + " was not recognised in an OutOfOfficeUpdate [presence] message.");
break;
}
}
}
}
catch (GeneralError ge)
{
throw ge;
}
catch (Exception ex)
{
throw new GeneralError(ex, "OutOfOfficeAssistantStatus", "LoadValues", ex.ToString());
}
}
public OutOfOfficeAssistantStatus(OutOfOfficeUpdateReceivedArgs.UserPresence presence)
{
try
{
LoadValues(presence.UpdateDetails.Split(';'));
}
catch (GeneralError ge)
{
UnityMessageBox.Show(ge);
}
catch (Exception ex)
{
UnityMessageBox.Show("OutOfOfficeAssistantStatus", "OutOfOfficeAssistantStatus", ex);
}
}
public string GetStatusDescription(bool includeTimezone)
{
try
{
if(CurrentStatus.Length==0)
return ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Unknown", "Unknown status");
else
{
StringBuilder sb = new StringBuilder(ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.IAm", "I am") + " ");
sb.Append(CurrentStatus.ToLower());
if((StartDay==EndDay) && (StartMonth==EndMonth) && (StartYear==EndYear))
{
#region today
if(StartDay.Length!=0)
{
if((DateTime.Now.Year==Convert.ToInt32(StartYear)) && (DateTime.Now.Month==Convert.ToInt32(StartMonth)) && (DateTime.Now.Day==Convert.ToInt32(StartDay)))
{
if((StartHour.Length==0) && (EndHour.Length==0))
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.AllDayToday", "all day today."));
else
{
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Today", "today"));
if(StartHour.Length!=0)
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Start", "from") + " " + StartHour + ":" + StartMinute);
if(EndHour.Length!=0)
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.End", "until") + " " + EndHour + ":" + EndMinute + ".");
else
sb.Append(".");
}
}
else
{
if((StartHour.Length==0) && (EndHour.Length==0))
{
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.AllDayOn", "all day on") + " ");
sb.Append(StartDay + " " + GetMonthName(StartMonth) + ".");
}
else
{
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.On", "on") + " ");
sb.Append(StartDay + " " + GetMonthName(StartMonth));
if(StartHour.Length!=0)
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Start", "from") + " " + StartHour + ":" + StartMinute);
if(EndHour.Length!=0)
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.End", "until") + " " + EndHour + ":" + EndMinute + ".");
else
sb.Append(".");
}
}
}
else
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.AllDayToday", "all day today."));
#endregion
}
else
{
#region multiple days
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Start", "from") + " ");
DateTime tom = DateTime.Now.AddDays(1);
if ((StartDay.Length == 0) || ((Convert.ToInt32(StartDay) == DateTime.Now.Day) && (Convert.ToInt32(StartMonth) == DateTime.Now.Month) && (Convert.ToInt32(StartYear) == DateTime.Now.Year)))
{
if (StartHour.Length != 0)
sb.Append(StartHour + ":" + StartMinute + " " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Today", "today") + " ");
else
sb.Append(ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Today", "today") + " ");
}
else
{
//Is the start date tomorrow?
if ((tom.Year == Convert.ToInt32(StartYear)) && (tom.Month == Convert.ToInt32(StartMonth)) && (tom.Day == Convert.ToInt32(StartDay)))
{
if (StartHour.Length != 0)
sb.Append(StartHour + ":" + StartMinute);
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Tomorrow", "tomorrow") + " ");
}
else
{
sb.Append(StartDay + " " + GetMonthName(StartMonth) + " ");
if (StartHour.Length != 0)
sb.Append(StartHour + ":" + StartMinute + " ");
}
}
sb.Append(ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.End", "until") + " ");
if(EndDay.Length==0)
sb.Append(ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.FurtherNotice", "further notice."));
else
{
//If the end date is tomorrow, say "tomorrow"
if((tom.Year==Convert.ToInt32(EndYear)) && (tom.Month==Convert.ToInt32(EndMonth)) && (tom.Day==Convert.ToInt32(EndDay)))
{
if(EndHour.Length!=0)
sb.Append(EndHour + ":" + EndMinute + " " + ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Tomorrow", "tomorrow."));
else
sb.Append(ApplicationBase.CurrentLanguage.GetPhrase("OutOfOfficeAssistant.Tomorrow", "tomorrow."));
}
else
{
sb.Append(EndDay + " " + GetMonthName(EndMonth));
if(EndHour.Length!=0)
sb.Append(" " + ApplicationBase.CurrentLanguage.GetPhrase("Misc.At", "at") + " " + EndHour + ":" + EndMinute + ".");
else
sb.Append(".");
}
}
#endregion
}
return sb.ToString();
}
}
catch (Exception ex)
{
throw new GeneralError(ex, "OutOfOfficeAssistant", "GetStatusDescription", ex.ToString());
}
}
private string GetMonthName(string monthNumber)
{
switch(monthNumber)
{
	// Grouped case labels replace the duplicated "01"/"1" pairs; behavior is unchanged
	case "01": case "1":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.1", "January");
	case "02": case "2":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.2", "February");
	case "03": case "3":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.3", "March");
	case "04": case "4":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.4", "April");
	case "05": case "5":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.5", "May");
	case "06": case "6":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.6", "June");
	case "07": case "7":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.7", "July");
	case "08": case "8":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.8", "August");
	case "09": case "9":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.9", "September");
	case "10":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.10", "October");
	case "11":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.11", "November");
	case "12":
		return ApplicationBase.CurrentLanguage.GetPhrase("Month.12", "December");
	default:
		return ApplicationBase.CurrentLanguage.GetPhrase("Misc.Unknown", "Unknown");
}
}
}
public class OutOfOfficeAssistantStatusList : ArrayList
{
public OutOfOfficeAssistantStatus Add(OutOfOfficeAssistantStatus obj)
{
base.Add(obj);
return obj;
}
public void Insert(int index, OutOfOfficeAssistantStatus obj)
{
base.Insert(index, obj);
}
public void Remove(OutOfOfficeAssistantStatus obj)
{
base.Remove(obj);
}
new public OutOfOfficeAssistantStatus this[int index]
{
get { return (OutOfOfficeAssistantStatus)base[index]; }
set { base[index] = value; }
}
}
}
|
Markdown | UTF-8 | 7,273 | 2.671875 | 3 | [] | no_license | # AGTH User Manual

This edition is based on the text of AGTH's built-in help option. A few non-standard computer terms in the [original](./agth-help.md) have been rendered here as standard terminology according to the translator's understanding.

- The word *hook* is kept as "hook" when used as a noun (a hook function) and rendered as "hooking/capturing text" when used as a verb.
- The word *context* has a different meaning in captured text than it does in Windows instructions.

## Loading Options

> Syntax: `agth.exe <options> exe_file_to_load command_line`

Translator's note: values for 'locale_id' can be looked up on [this page](https://ss64.com/locale.html).

- `/L[locale_id]` - use AppLocale to set the application locale to 'locale_id'. (default: 411)
- `/R[locale_id]` - set the application locale to 'locale_id'. (default: 411)
- `/P[{process_id|Nprocess_name}]` - attach to a running process; this option causes the `/L` and `/R` options and the `exe_file_to_load`/`command_line` arguments to be ignored.
## Interface Options

Translator's note: in the default values below, colons separate multiple parameter values in order.

- `/B[split_mul][:[min_time][:unconditional_split_time]]` - configure sentence splitting. (default: 4:24:1000)
- `/C[time]` - copy captured text to the clipboard after 'time' milliseconds. (default: 150)
- `/Fnew_name@[context][:subcontext][;new_name2@...]` - rename 'context' to 'new_name' and drop all matching 'subcontext' values. (default: 0:any) ??
- `/KF[len_base][:len_mul]` - remove repeated phrases, treating up to 'len_base' characters as one phrase with at most 'len_mul' repetitions. (e.g. no no no no no => no) (default: 32:16)
- `/KS[number]` - collapse characters repeated 'number' times. (e.g. Nooooo => No) (default: 1)
- `/NA` - less strict access control for text transfer. ??
- `/NF` - disable filtering of certain characters. ??
- `/NX` - exit AGTH when all attached processes have terminated.
- `/T` - keep the AGTH window always on top.
- `/W[context][:subcontext]` - select the context automatically. (default: 0:any) ??
## Hooking Options

> `/H[X]{A|B|W|S|Q}[N][data_offset[*drdo]][:sub_offset[*drso]]@addr[:module[:{name|#ordinal}]]`

- `/NC` - do not capture text from sub-threads.
- `/NH` - disable the default hooks (e.g. GetWindowTextA).
- `/NJ` - for non-Unicode text, use the thread's code page instead of Shift-JIS. (for non-Japanese text)
- `/NS` - do not use the sub-thread context ([CONTEXT](https://docs.microsoft.com/zh-cn/windows/desktop/api/winnt/ns-winnt-_arm64_nt_context)).
- `/S[IP_address]` - send the text to a user-specified computer. (default: local computer)
- `/V` - use the system-level context for text threads.
- `/X[sets_mask]` - use extended hooking methods. (default: 1; maximum: 2)

> All numbers in the options `/L`, `/R`, `/F`, `/W`, `/X`, `/H` (except the ordinal `#ordinal`) are hexadecimal without any prefix.

## Setting Additional Custom Hooks

> `/H[X]{A|B|W|S|Q}[N][data_offset[*drdo]][:sub_offset[*drso]]@addr[:[module[:{name|#ordinal}]]]`

### Hook Types

- `A` - double-byte character set character ([DBCS](https://en.wikipedia.org/wiki/DBCS))
- `B` - double-byte character set character (big-endian)
- `W` - Universal Coded Character Set character ([UCS](https://en.wikipedia.org/wiki/Universal_Coded_Character_Set))
- `S` - multi-byte (variable-width) string ([MBCS](https://en.wikipedia.org/wiki/Variable-width_encoding#MBCS))
- `Q` - UTF-16 string

### Parameters

- `X` - use hardware breakpoints.
- `N` - do not use context.
- `data_offset` - stack offset of the character, or of the string pointer.
- `drdo` - adds a level of indirection to `data_offset`.
- `sub_offset` - stack offset of the sub-context.
- `drso` - adds a level of indirection to `sub_offset`.
- `addr` - address of the hook.
- `module` - name of the module the hook address belongs to.
- `name` - name exported by 'module'.
- `ordinal` - export ordinal of 'name'.

> Negative values of 'data_offset' and 'sub_offset' refer to register values instead:

- `-4` uses the value of EAX.
- `-8` uses the value of ECX.
- `-C` uses the value of EDX.
- `-10` uses the value of EBX.
- `-14` uses the value of ESP.
- `-18` uses the value of EBP.
- `-1C` uses the value of ESI.
- `-20` uses the value of EDI.

> "Add a level of indirection" means, in C/C++ style: `(*(ESP+data_offset)+drdo)` instead of `(ESP+data_offset)`.
> All numbers except the ordinal are hexadecimal without any prefixes.
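As a worked example of the `/H` syntax (the address `401280` and the executable name `game.exe` are hypothetical, chosen only to illustrate the format): the command below hooks a DBCS character taken from the ECX register (`-8` in the register table above) at address `401280`, and copies captured text to the clipboard after 300 ms:

```
agth.exe /C300 /HA-8@401280 game.exe
```

To attach the same hook to an already-running process instead of launching one, the `/P` option replaces the command line, e.g. `agth.exe /C300 /HA-8@401280 /PNgame.exe`.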
## Setting Additional Hooks with a User-Supplied Callback

> `/H[X]C<code>@addr[:[module[:{name|#ordinal}]]]`

- `<code>` - BASE64-encoded raw position-independent x86 code, with a checksum as the final character.

> `<code>` starts with a callback function, which may use the following values:

- `eax`, `ecx`, `edx`, `ebp` - the original register values.
- `ebx` - the start address of the callback function.
- `esi` - the value of the `esi` register.
- `edi` - a pointer to the table of helper functions provided by AGTH.
- `[esp+04h]` - the address of the hooked instruction.
- `[esp+08h]` - the value of the `ebx` register.
- `[esp+0Ch]` - the value of the `esi` register.
- `[esp+10h]` - the value of the `edi` register.

> Your function must end with a `ret` instruction and must not modify global state, including the call stack.
> You do not need to preserve any general-purpose register except `esp`.
> Note: all callback memory is mapped RWE (read, write, execute), so it can also hold global variables, but take care of mutual exclusion and synchronization between threads.

## Helper Function Pointers Provided by AGTH (all follow the [stdcall calling convention](https://docs.microsoft.com/en-us/cpp/cpp/stdcall?view=vs-2019))

- `[edi+00h]` - `void SendText(char *Name, DWORD Context, DWORD SubContext, char/wchar_t *Text, DWORD TextLenLimit, DWORD UnicodeFlag)` - sends text; `Name` is a UTF-8 string and may be NULL; `Text` is a multi-byte ([MBCS](https://docs.microsoft.com/en-us/cpp/text/unicode-and-mbcs?view=vs-2019)) string when `UnicodeFlag` is 0 and a UTF-16 string when it is 1.
- `[edi+04h]` - `BOOL SetHook(void *Handler, void *Location, DWORD HardwareFlag)` - installs a callback at address `Location`; at most 4 callbacks may currently be set; setting `HardwareFlag` to 1 forces a hardware breakpoint; returns 0 on failure.
- `[edi+08h]` - `BOOL RemoveHook(void *Handler, void *Location)` - removes callbacks; if `Handler` or `Location` is 0, all callbacks are removed; returns 0 on failure.
- `[edi+0Ch]` - `BOOL IsBadReadPtr(void *Ptr, DWORD Len)` - returns 0 if the `Len` bytes of memory starting at `Ptr` are all readable.
- `[edi+10h]` - `void *memmem(void *Haystack, DWORD HaystackLen, void *Needle, DWORD NeedleLen)` - performs a [string search](https://en.wikipedia.org/wiki/String-searching_algorithm) for the data pointed to by `Needle` within `Haystack`; returns the address of the first match, or 0 if there is no match within the given range.
- `[edi+14h]` - `DWORD:HMODULE GetModuleHandle(wchar_t *ModuleName)` - returns the base address of module `ModuleName` in eax and its size in edx; NULL may be used for the main executable; returns 0:0 if the module is not found.
- `[edi+18h]` - `void *GetProcAddress(HMODULE Module, char *ProcName)` - returns the address of function `ProcName` in `Module`; returns 0 if it is not found.
|
Java | UTF-8 | 1,441 | 1.820313 | 2 | [
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | /*
* This class is distributed as part of the Botania Mod.
* Get the Source Code in github:
* https://github.com/Vazkii/Botania
*
* Botania is Open Source and distributed under the
* Botania License: http://botaniamod.net/license.php
*/
package vazkii.botania.client.integration.jei.orechid;
import mezz.jei.api.helpers.IGuiHelper;
import net.minecraft.network.chat.Component;
import net.minecraft.world.item.ItemStack;
import net.minecraft.world.item.crafting.RecipeType;
import org.jetbrains.annotations.NotNull;
import vazkii.botania.common.block.BotaniaFlowerBlocks;
import vazkii.botania.common.crafting.BotaniaRecipeTypes;
import vazkii.botania.common.crafting.OrechidIgnemRecipe;
import vazkii.botania.common.lib.LibMisc;
public class OrechidIgnemRecipeCategory extends OrechidRecipeCategoryBase<OrechidIgnemRecipe> {
public static final mezz.jei.api.recipe.RecipeType<OrechidIgnemRecipe> TYPE =
mezz.jei.api.recipe.RecipeType.create(LibMisc.MOD_ID, "orechid_ignem", OrechidIgnemRecipe.class);
public OrechidIgnemRecipeCategory(IGuiHelper guiHelper) {
super(guiHelper, new ItemStack(BotaniaFlowerBlocks.orechidIgnem), Component.translatable("botania.nei.orechidIgnem"));
}
@NotNull
@Override
public mezz.jei.api.recipe.RecipeType<OrechidIgnemRecipe> getRecipeType() {
return TYPE;
}
@Override
protected RecipeType<OrechidIgnemRecipe> recipeType() {
return BotaniaRecipeTypes.ORECHID_IGNEM_TYPE;
}
}
|
Java | ISO-8859-1 | 4,926 | 3.046875 | 3 | [] | no_license | import java.util.ArrayList;
import java.util.List;
public class Map {
private List<Pigeon> pigeonList;
private List<Nourriture> nourritureList;
private List<Nourriture> mauvaisList;
private InterfaceUtilisateur ui;
private boolean starting;
// size 10/10
public Map(InterfaceUtilisateur ui) {
this.pigeonList = new ArrayList<>();
this.nourritureList = new ArrayList<>();
this.mauvaisList = new ArrayList<>();
this.ui = ui;
this.starting = true;
this.init();
this.ui.init(this);
this.ui.run();
this.run();
}
public void init() {
this.addPigeon(new Pigeon(new Case(150, 0), this));
this.addPigeon(new Pigeon(new Case(0, 200), this));
this.addPigeon(new Pigeon(new Case(490, 0), this));
this.addPigeon(new Pigeon(new Case(250, 350), this));
this.addPigeon(new Pigeon(new Case(360, 500), this));
this.addPigeon(new Pigeon(new Case(490, 300), this));
}
public void run() {
int peur = 0;
int effrayer = 0;
int probEffrayer = 10000; // every ~20secs
System.out.println("---StartThread---");
System.out.println(this.toString());
System.out.println("---Starting the pigeon threads---");
for (Pigeon p : pigeonList) {
	p.start();
	try {
		Thread.sleep(50);
	} catch (InterruptedException e) {
		e.printStackTrace();
	}
}
while (this.starting) {
	try {
		// Reposition the pigeons at a random moment, roughly every 20 seconds
		peur = (int) (Math.random() * (probEffrayer - effrayer));
		if (peur == 0) { // scare the pigeons so they scatter
			effrayer = 0;
			for (Pigeon p : pigeonList) {
				if (p.isInterrupted()) { // has the thread been interrupted?
					synchronized (p) {
						p.notify(); // wake the waiting pigeon thread; notify() requires owning the monitor
					}
				}
				p.dispertion(); // keep the pigeons from stacking on the same spot
			}
		}
		this.ui.reDraw();
		Thread.sleep(1);
		effrayer++;
	} catch (InterruptedException e) {
		e.printStackTrace();
	}
}
pigeonList.forEach(pigeon -> pigeon.kill()); // free the pigeon threads
System.out.println(this.toString());
}
public void addPigeon(Pigeon p) {
this.pigeonList.add(p);
}
/*
 * ADD FOOD
 */
public void addNourriture(Nourriture n) {
for (Nourriture nourriture : nourritureList) {
if (nourriture.Comestible()) { // if the food is still edible
	nourriture.rot(); // let the existing food rot
	mauvaisList.add(nourriture); // move it from the edible list to the spoiled list
}
}
this.nourritureList.add(n);
}
/*
 * NEAREST FOOD
 */
public Nourriture nourritureProxi(Entite e) {
	Nourriture proxi = null;
	int minDist = Integer.MAX_VALUE;
	Case p = e.getPosition();
	for (Nourriture n : nourritureList) {
		if (n.Comestible()) {
			int distance = p.distanceTo(n.getPosition());
			if (distance < minDist) {
				minDist = distance; // remember the new minimum; without this, the last edible item always "won"
				proxi = n;
			}
		}
	}
	return proxi;
}
/*
 * REMOVE FOOD
 */
public void deleteNourriture(Nourriture n) {
this.ui.mangeNourriture(n.getDessin());
this.nourritureList.remove(n);
}
@Override
public String toString(){
	return "Map{\n\tNourriture: " + this.list(this.nourritureList) + ",\n\tPigeon: " + this.list(this.pigeonList) + "\n}";
}
// A single generic helper replaces the two identical listB/listF methods
private String list(List<?> objects) {
	StringBuilder str = new StringBuilder("[");
	for (Object o : objects) {
		str.append(o).append(", ");
	}
	str.append("]");
	return str.toString();
}
public List<Pigeon> getPigeonList() {
return pigeonList;
}
public List<Nourriture> getNourritureList() {
return nourritureList;
}
}
|
Java | IBM852 | 1,533 | 3.859375 | 4 | [] | no_license | package com.castgroup.test;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
public class CalculateMaxDifference {
public static void main(String[] args) {
System.out.println("Enter the size of your list of integers:");
Scanner scan = new Scanner(System.in);
int totalOfNumbers = scan.nextInt();
while(totalOfNumbers > 210 || totalOfNumbers < 1)
{
System.out.println("The number of elements must be between 1 and 210.");
totalOfNumbers = scan.nextInt();
}
System.out.println("Enter the numbers:\n");
List<Integer> listOfNumbers = new ArrayList<Integer>();
for (int i=0; i<totalOfNumbers; i++)
{
int value = scan.nextInt();
while(value > 106 || value < -106)
{
System.out.println("The number must be between -106 and 106.");
value = scan.nextInt();
}
listOfNumbers.add(value);
}
System.out.println("The largest difference is: " + maxDifference(listOfNumbers));
}
private static int maxDifference(List<Integer> listOfNumbers) {
	// Seed with MIN_VALUE: the original seeded this with the first element,
	// which is not a difference at all and gives a wrong answer when no
	// increasing pair exists.
	int maxDifference = Integer.MIN_VALUE;
for (int i=1; i<listOfNumbers.size(); i++)
{
for (int j=0; j<i; j++) {
if((listOfNumbers.get(i) > listOfNumbers.get(j))) {
int difference = listOfNumbers.get(i) - listOfNumbers.get(j);
if(difference > maxDifference)
{
maxDifference = difference;
}
}
}
}
return maxDifference;
}
}
|
Markdown | UTF-8 | 6,051 | 2.796875 | 3 | [
"LicenseRef-scancode-dco-1.1"
] | no_license | <!-- badges: start -->
[](https://CRAN.R-project.org/package=Pade)
[](https://cran.r-project.org/package=Pade)
[](https://doi.org/10.5281/zenodo.4270254)
[](https://bestpractices.coreinfrastructure.org/projects/2033)
[](https://github.com/aadler/Pade/actions)
[](https://app.codecov.io/gh/aadler/Pade?branch=master)
<!-- badges: end -->
# Padé
## Mathematics
### Introduction
For a given function, its
[Taylor series](https://en.wikipedia.org/wiki/Taylor_series) is the "best"
polynomial representations of that function. If the function is being evaluated
at 0, the Taylor series representation is also called the Maclaurin series. The
error is proportional to the first "left-off" term. Also, the series is only a
good estimate in a small radius around the point for which it is calculated
(e.g. 0 for a Maclaurin series).
Padé approximants estimate functions as the quotient of two polynomials.
Specifically, given a Taylor series expansion of a function \(T(x)\) of order
\(L + M\), there are two polynomials, \(P_L(x)\) of order \(L\) and \(Q_M(x)\)
of order \(M\), such that \(\frac{P_L(x)}{Q_M(x)}\), called the Padé approximant
of order \([L/M]\), "agrees" with the original function in order \(L + M\).
More precisely, given
\begin{equation}
A(x) = \sum_{j=0}^\infty a_j x^j
\end{equation}
the Padé approximant of order \([L/M]\) to \(A(x)\) has the property that
\begin{equation}
A(x) - \frac{P_L(x)}{Q_M(x)} = \mathcal{O}\left(x^{L + M + 1}\right)
\end{equation}
The Padé approximant consistently has a wider radius of convergence than its
parent Taylor series, often converging where the Taylor series does not. This
makes it very suitable for numerical computation.
### Calculation
With the normalization that the first term of \(Q(x)\) is always 1, there is a
set of linear equations which will generate the unique Padé approximant
coefficients. Letting \(a_n\) be the coefficients for the Taylor series,
one can solve:
\[
\begin{align}
&a_0 &= p_0\\
&a_1 + a_0q_1 &= p_1\\
&a_2 + a_1q_1 + a_0q_2 &= p_2\\
&a_3 + a_2q_1 + a_1q_2 + a_0q_3 &= p_3\\
&a_4 + a_3q_1 + a_2q_2 + a_1q_3 + a_0q_4 &= p_4\\
&\vdots&\vdots\\
&a_{L+M} + a_{L+M-1}q_1 + \ldots + a_0q_{L+M} &= p_{L+M}
\end{align}
\]
remembering that all \(p_k, k > L\) and \(q_k, k > M\) are 0.
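Solving this system is mechanical. Below is an illustrative, self-contained Python sketch (the package itself is written in R; this is not its source) that solves the \(M \times M\) system in exact rational arithmetic, assuming the coefficient matrix is nonsingular so the \([L/M]\) approximant exists:

```python
from fractions import Fraction

def pade(L, M, A):
    """Padé approximant of order [L/M] from Taylor coefficients A
    (lowest order first, length >= L + M + 1).

    Returns (P, Q) as lists of Fractions with Q[0] == 1. Assumes the
    M x M coefficient matrix is nonsingular (the approximant exists).
    """
    a = [Fraction(x) for x in A]
    # Equations k = L+1 .. L+M, where p_k = 0 for k > L:
    #   a_k + a_{k-1} q_1 + ... + a_{k-M} q_M = 0
    rows = [[a[k - j] if k >= j else Fraction(0) for j in range(1, M + 1)]
            + [-a[k]] for k in range(L + 1, L + M + 1)]
    # Gauss-Jordan elimination with partial pivoting, exact arithmetic
    for col in range(M):
        piv = max(range(col, M), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(M):
            if r != col and rows[r][col] != 0:
                f = rows[r][col] / rows[col][col]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[col])]
    q = [Fraction(1)] + [rows[i][M] / rows[i][i] for i in range(M)]
    # Numerator follows directly: p_k = a_k + sum_j a_{k-j} q_j
    p = [a[k] + sum(a[k - j] * q[j] for j in range(1, min(k, M) + 1))
         for k in range(L + 1)]
    return p, q
```

For example, `pade(2, 2, [1, 1, Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)])` — the Maclaurin coefficients of \(e^x\) — yields \(P(x) = 1 + x/2 + x^2/12\) and \(Q(x) = 1 - x/2 + x^2/12\), the classical \([2/2]\) Padé approximant of the exponential.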
### Function Input and Output
Given integers `L` and `M`, and vector `A`, a vector of Taylor series
coefficients, in increasing order and length at least `L + M + 1`, the `Pade`
function returns a list of two elements, `Px` and `Qx`, which are the
coefficients of the Padé approximant numerator and denominator respectively,
in increasing order.
## Citation
If you use the package, please cite it as:
Avraham Adler (2015). Pade: Padé Approximant Coefficients.
R package version 1.0.5. https://CRAN.R-project.org/package=Pade
doi: 10.5281/zenodo.4270254
A BibTeX entry for LaTeX users is:
```
@Manual{,
title = {Pade: Padé Approximant Coefficients},
author = {Avraham Adler},
year = {2015},
note = {R package version 1.0.5},
url = {https://CRAN.R-project.org/package=Pade},
doi = {10.5281/zenodo.4270254},
}
```
## Contributions
Please ensure that all contributions comply with both
[R and CRAN standards for packages](https://cran.r-project.org/doc/manuals/r-release/R-exts.html).
### Versioning
This project attempts to follow [Semantic Versioning](https://semver.org/)
### Changelog
This project attempts to follow the changelog system at
[Keep a CHANGELOG](https://keepachangelog.com/)
### Dependencies
This project intends to have as few dependencies as possible. Please consider
that when writing code.
### Style
Please conform to this
[coding style guide](https://www.avrahamadler.com/coding-style-guide/) as best
possible.
### Documentation
Please provide valid .Rd files and **not** roxygen-style documentation.
### Tests
Please review the current test suite and supply similar `tinytest`-compatible
unit tests for all added functionality.
### Submission
If you would like to contribute to the project, it may be prudent to first
contact the maintainer via email. A request or suggestion may be raised as an
issue as well. To supply a pull request (PR), please:
1. Fork the project and then clone into your own local repository
2. Create a branch in your repository in which you will make your changes
3. Ideally use -s to sign-off on commits under the
[Developer Certificate of Origin](https://developercertificate.org/).
4. If possible, sign commits using a GPG key.
5. Push that branch and then create a pull request
At this point, the PR will be discussed and eventually accepted or rejected by
the lead maintainer.
## Roadmap
### Major
* There are no plans for major changes at this time
### Minor
* There are no plans for minor changes at this time
## Security
### Expectations
This package is a calculation engine and requires no secrets or private
information. It also has no compiled code. Dissemination is handled by CRAN.
Bugs are reported via the tracker and handled as soon as possible.
### Assurance
The threat model is that a malicious actor would "poison" the package code by
adding in elements having nothing to do with the package's purpose but which
would be used for malicious purposes. This is protected against by having the
email account of the maintainer—used for verification by CRAN—protected by a
physical 2FA device (Yubikey) which is carried by the lead maintainer.
|
Python | UTF-8 | 470 | 2.734375 | 3 | [] | no_license | from gtts import gTTS
import os
from reddit import generate_text
def generate_audio():
text_title = generate_text()
text = text_title["text"]
title = text_title["title"]
language = "en"
print("Generating Speech")
output = gTTS(text=text, lang=language, slow=False)
print("Saving in file")
audio_file_name = "op.mp3"
output.save(audio_file_name)
print("Audio Generated")
return {"audio": audio_file_name, "title": title} |
Python | UTF-8 | 5,391 | 2.71875 | 3 | [
"Apache-2.0"
] | permissive | #!/usr/bin/python
import numpy as np
import random
import seaborn as sns; sns.set()
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import AutoMinorLocator
plt.rcParams['ps.useafm'] = True
plt.rcParams['pdf.use14corefonts'] = True
plt.rcParams['text.usetex'] = True #Let TeX do the typsetting
plt.rcParams['font.family'] = 'Times New Roman' # ... for regular text
def generate_color():
color = '#{:02x}{:02x}{:02x}'.format(*map(lambda x: random.randint(0, 255), range(3)))
return color
def plot_square(ax, jobid, start_time, end_time, gpulist, pattern, color, legend=True):
""" position moving forward """
""" (x_start, y_start), (x, y) """
lower_gpu = np.array(gpulist).min() + 0.05
num_gpus = len(gpulist) - 0.1
end_time = end_time - start_time
label = ""
if legend:
label = "J"+str(jobid)
for p in [
patches.Rectangle(
(start_time, lower_gpu), end_time, num_gpus,
hatch=pattern,
facecolor=color,
alpha=0.4,
label=label,
),
]:
ax.add_patch(p)
# label = "J" + str(jobid)
# ax.text(start_time + (end_time/2), lower_gpu + (num_gpus/2), label, fontsize=14,
# bbox={'facecolor':'white', 'alpha':0.5, 'pad':1})
def add_text(ax, xposition, yposition, label, fontsize=14, background=False):  # fontsize defaulted: callers below omit it
if background:
ax.text(xposition, yposition, label, fontsize=fontsize,
# bbox={'facecolor':'white', 'alpha':0.2, 'pad':1})
bbox={'facecolor':'white', 'pad':1})
else:
ax.text(xposition, yposition, label, fontsize=fontsize)
return ax
def add_job_square(ax, id, job, pattern, color):
start_time = job['start_time']
end_time = job['end_time']
gpus = job['gpus']
print "Job: ", id, " start: ", start_time, " end: ", end_time, " gpus: ", gpus
label = True
for gpu in gpus:
plot_square(ax, id, int(start_time), int(end_time), [int(gpu)], pattern, color, label)
label = False
return ax
if __name__ == '__main__':
# fig = plt.figure()
# ax = fig.add_subplot(111)
fig_size = (40, 10)
fig, axs = plt.subplots(2, 1, figsize=fig_size)
num_gpus = 4
ax = axs[0]
ax.set_ylim([0, num_gpus])
ax.set_xlim([0, 450])
jobs = dict()
jobs[0] = dict()
jobs[0]['submitted_time'] = 1.0021739006
jobs[0]['start_time'] = 1.0021739006
jobs[0]['end_time'] = 65.4728679657
jobs[0]['gpus'] = ["2"]
jobs[1] = dict()
jobs[1]['submitted_time'] = 16.0388100147
jobs[1]['start_time'] = 16.0388100147
jobs[1]['end_time'] = 119.978269815
jobs[1]['gpus'] = ["0"]
jobs[2] = dict()
jobs[2]['submitted_time'] = 25.0712928772
jobs[2]['start_time'] = 25.0712928772
jobs[2]['end_time'] = 124.009707928
jobs[2]['gpus'] = ["3"]
jobs[3] = dict()
jobs[3]['submitted_time'] = 26.084225893
jobs[3]['start_time'] = 66.4836959839
jobs[3]['end_time'] = 258.143490791
jobs[3]['gpus'] = ["1", "2"]
jobs[4] = dict()
jobs[4]['submitted_time'] = 30.1170678139
jobs[4]['start_time'] = 125.017673016
jobs[4]['end_time'] = 288.233713865
jobs[4]['gpus'] = ["0", "3"]
jobs[5] = dict()
jobs[5]['submitted_time'] = 31.125289917
jobs[5]['start_time'] = 259.151671886
jobs[5]['end_time'] = 417.476558924
jobs[5]['gpus'] = ["1", "2"]
patterns = ['/', 'o', 'x', '-', '+', 'O', '.', '*'] # more patterns
colors = ['#f2b1bc', '#02e0bd', '#7cc8f0', '#9083de', '#07a998', '#5a71ff', '#224fc2', '#19f2fb', '#8e9e1f', '#3266c8',
'#2b2c08', '#975ce0', '#e1c295', '#95e4c9', '#5d160e', '#4b5241', '#7a55f8', '#ac3320', '#58aa2d', '#953164']
"""start_time, gpulist, end_time, pattern, color"""
job = sorted(jobs.iteritems(), key=lambda t: t[1]['submitted_time'])
previous_submitted_time = -1
maxy = 0
for id, job in jobs.iteritems():
label = "J" + str(id)
yposition = num_gpus
if (job['submitted_time'] - previous_submitted_time) <= 2:
yposition += 0.2
maxy += 0.2
ax = add_job_square(ax, id, job, patterns[id], colors[id])
ax = add_text(ax, job['submitted_time'] - 2, yposition, label)
previous_submitted_time = job['submitted_time']
ax = add_text(ax, previous_submitted_time + 10, (num_gpus + (maxy/3)), "} Job Arrivals", background=False)
ax.xaxis.set_major_locator(matplotlib.ticker.MultipleLocator(6))
# ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=90)
ax.xaxis.set_ticklabels([], minor=False)
ax.yaxis.set_major_locator(matplotlib.ticker.MultipleLocator(1))
ax.yaxis.set_minor_locator(AutoMinorLocator(2))
ax.yaxis.set_ticklabels([0,1,2,3], minor=True)
ax.yaxis.set_ticklabels([], minor=False)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=10, fancybox=True, shadow=True)
ax.set_ylabel('GPU IDs')
ax.xaxis.grid(True, which='minor')
# plt.grid(which='minor')
axs[1].set_ylim([0, num_gpus])
axs[1].set_xlim([0, 450])
axs[1].xaxis.set_major_locator(matplotlib.ticker.MultipleLocator(6))
for tick in axs[1].get_xticklabels():
tick.set_rotation(45)
plt.subplots_adjust(wspace=0, hspace=0.05)
sns.set_style("ticks")
plt.show() |
Markdown | UTF-8 | 13,667 | 2.796875 | 3 | [
"Apache-2.0"
] | permissive | ## Introduction to Core Spring
**Objective:** In this lab you'll come to understand the basic workings of the _Reward Network_ reference
application and you'll be introduced to the tools you'll use throughout the course. Once you will have familarized yourself with the tools and the application domain, you will implement and test the [rewards application](https://virtuant.github.io/spring-core/reference-domain.html) using Plain Old Java objects (POJOs).
**Successful outcome:** At the end of the lab you will see that the application logic will be clean and not coupled with infrastructure APIs. You'll understand that you can develop and unit test your business logic without using Spring. Furthermore, what you develop in this lab will be directly runnable in a Spring environment without change.
**File location:** `~/labs`
>**Note:** In every lab, read to the end of eachnumbered section before doing anything. There are often tips and notes to help you, but they may be just over the next page or off the bottom of the screen. If you get stuck, don't hesitate to ask for help!
##### What you will learn:
1. Basic features of the Spring Tool Suite
2. Core _RewardNetwork_ Domain and API
3. Basic interaction of the key components within the domain
---
### Steps
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>1. Getting Started with the Spring Tool Suite</h2>
The Spring Tool Suite (STS) is a free IDE built on the Eclipse Platform. In this section, you will become familiar with the Tool Suite. You will also understand how the lab projects have been structured.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>2. Launch the Tool Suite</h2>
Launch the Spring Tool Suite by using the shortcut link on your desktop.

After double-clicking the shortcut, you will see the STS splash image appear.

You will be asked to select a workspace. You should accept the default location offered. You can optionally check the box labeled _use this as the default and do not ask again._
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>3. Understanding the Eclipse/STS project structure</h2>
If you've just opened STS, it may still be starting up. Wait several moments until the progress indicator on the bottom right finishes. When complete, you should have no red error markers within the _Package Explorer_ or _Problems_ views.
Now that STS is up and running, you'll notice that, within the _Package Explorer_ view on the
left, projects are organized by _Working Sets_. Working Sets are essentially folders that contain a group of Eclipse projects. These working sets represent the various labs you will work through during this course. Notice that they all begin with a number so that the labs appear in the order they occur in this lab guide.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>4. Browse Working Sets and projects</h2>
If it is not already open, expand the _01-spring-intro_ Working Set.
Within you'll find two projects: _spring-intro_ and _spring-intro-solution_. _spring-intro_ corresponds to the
start project. This pair of _start_ and _solution_ projects is a common
pattern throughout the labs in this course.
Open the _spring-intro_ project and expand its _ReferencedLibraries_ node. Here you'll see a number of dependencies similar to the screenshot below:

>**Tip:** This screenshot uses the "Hierarchical" Package Presentation view instead of the "Flat" view (the default). See the [Eclipse](eclipse.html#package-explorer-tip "E.2. Package Explorer View") tips section on how to toggle between the two views.
For the most part, these dependencies are straightforward and probably similar to what you're used to in your own projects. For example, there are several dependencies on Spring Framework jars, on Hibernate, DOM4J, etc.
In addition to having dependencies on a number of libraries, all lab projects also have a dependency on a common project called _rewards-common_.

This project is specific to Spring training courseware, and contains a number of types such as _MonetaryAmount_, _SimpleDate_, etc. You'll make use of these types throughout the course. Take a moment now to explore the contents of that jar and notice that if you double-click on the classes, the sources are available for you to browse.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>5. Configure the TODOs in STS</h2>
In the next labs, you will often be asked to work with TODO instructions. They are displayed in the `Tasks` view in Eclipse/STS. If not already displayed, click on `Window -> Show View -> Tasks` (be careful, _not `Task List`_). If you can't
see the `Tasks` view, try clicking `Other ...` and looking under `General`.
By default, you see the TODOs for all the active projects in
Eclipse/STS. To limit the TODOs for a specific project, execute the
steps summarized in the following screenshots:

>**Caution:** It is possible, you might not be able to see the TODOs defined within the XML files. In this case, you can check the configuration in `Preferences -> General -> Editors -> Structured Text Editor -> Task Tags` pane. Make sure `Enable searching for Task Tags` is selected.

On the `Filters` tab, verify if XML content type is selected. In case of refresh issues, you may have to uncheck it and then check it again.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>6. Understanding the 'Reward Network' Application Domain and API</h2>
Before you begin to use Spring to configure an application, the pieces of the application must be understood. If you haven't
already done so, take a moment to review [Reward Dining: The Course Reference Domain](reference-domain.html "Reward Dining: The Course Reference Domain") in the preface to this Lab Guide.
This overview will guide you through understanding the background of the Reward Network application domain and thus provide context for the rest of the course.
The rewards application consists of several pieces that work together to reward accounts for dining at restaurants. In this lab, most of these pieces have been implemented for you. However, the central piece, the `RewardNetwork`, has not.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>7. Review the `RewardNetwork` implementation class</h2>
The `RewardNetwork` is responsible for carrying out `rewardAccountFor(Dining)` operations. In this step you'll be working in a class that implements this interface. See the implementation class below:

Take a look at your `spring-intro` project in STS. Navigate into the `src/main/java` source folder and you'll see
the root `rewards` package. Within that package you'll find the `RewardNetwork` Java interface definition:

The classes inside the root `rewards` package fully define the public interface for the application, with `RewardNetwork` being the central element. Open `RewardNetwork.java` and review it.
Now expand the `rewards.internal` package and open the implementation class `RewardNetworkImpl.java`.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>8. Review the `RewardNetworkImpl` configuration logic</h2>
`RewardNetworkImpl` should rely on three supporting data access services called 'Repositories' to do
its job. These include:
1\. An `AccountRepository` to load `Account` objects to make benefit contributions to.
2\. A `RestaurantRepository` to load `Restaurant` objects to calculate how much benefit to reward an account for dining.
3\. A `RewardRepository` to track confirmed reward transactions for accounting and reporting purposes.
This relationship is shown graphically below:

Locate the single constructor and notice all three dependencies are injected when the `RewardNetworkImpl` is constructed.
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>9. Implement the `RewardNetworkImpl` application logic</h2>
In this step you'll implement the application logic necessary to complete a `rewardAccountFor(Dining)` operation, delegating to your dependents as you go.
Start by reviewing your existing RewardNetworkImpl `rewardAccountFor(Dining)` implementation.
As you will see, it doesn't do anything at the moment.
Inside the task view in Eclipse/STS, complete all the TODOs.
Implement them as shown in
[Figure 1.10](spring-intro-lab.html#fig-reward-account-for-dining-sequence-si "Figure 1.10. The RewardNetworkImpl.rewardAccountFor(Dining) sequence")

>**Tip:** Use Eclipse's [autocomplete](eclipse.html#field-autocomplete-tip "E.4. Field Auto-Completion") to help you as you define each method call and variable assignment. You should not need to use operator `new` in your code. Everything you need is returned by the methods you use. The interaction diagram doesn't show what each call returns, but most of them return something. You get the credit card and merchant numbers from the `Dining` object.
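The sequence in Figure 1.10 can also be read as plain pseudocode. The sketch below (Python, not the lab's Java) mirrors the delegation flow; the method names follow the lab's interfaces, but the exact signatures, the stub data, and the 8% benefit rate are assumptions made purely for illustration:

```python
# A language-agnostic sketch of the rewardAccountFor(Dining) sequence.
# Names mirror the lab's Java interfaces; the stub values and the 8%
# benefit rate are invented for illustration only.

class RewardNetworkImpl:
    def __init__(self, account_repo, restaurant_repo, reward_repo):
        # All three repositories arrive through the constructor, as in the
        # Java class under review -- no object construction ('new') needed.
        self.account_repo = account_repo
        self.restaurant_repo = restaurant_repo
        self.reward_repo = reward_repo

    def reward_account_for(self, dining):
        # 1. Load the account from the dining's credit card number.
        account = self.account_repo.find_by_credit_card(dining["credit_card"])
        # 2. Load the restaurant from the dining's merchant number.
        restaurant = self.restaurant_repo.find_by_merchant_number(dining["merchant"])
        # 3. Ask the restaurant how much benefit this dining earns.
        contribution = restaurant.calculate_benefit_for(account, dining)
        # 4. Record the confirmed reward and return the confirmation.
        return self.reward_repo.confirm_reward(contribution, account, dining)


# Stub collaborators, in the spirit of the stub repositories used by the test:
class StubRestaurant:
    def calculate_benefit_for(self, account, dining):
        return round(dining["amount"] * 0.08, 2)  # invented 8% benefit rate

class StubAccountRepo:
    def find_by_credit_card(self, cc_number):
        return {"number": "123456789"}

class StubRestaurantRepo:
    def find_by_merchant_number(self, merchant_number):
        return StubRestaurant()

class StubRewardRepo:
    def confirm_reward(self, contribution, account, dining):
        return {"confirmation": "CONF-1", "amount": contribution}

network = RewardNetworkImpl(StubAccountRepo(), StubRestaurantRepo(), StubRewardRepo())
confirmation = network.reward_account_for(
    {"credit_card": "1234123412341234", "merchant": "1234567890", "amount": 100.0})
print(confirmation["amount"])  # 8.0
```

In the lab itself you express the same four steps in Java inside `RewardNetworkImpl.rewardAccountFor(Dining)`, delegating to the repositories the constructor injected.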
<!--STEP-->
<img src="https://user-images.githubusercontent.com/558905/40613898-7a6c70d6-624e-11e8-9178-7bde851ac7bd.png" align="left" width="50" height="50" title="ToDo Logo" />
<h2>10. Unit test the `RewardNetworkImpl` application logic</h2>
How do you know the application logic you just wrote actually works? You don't, not without a test that proves it. In this step you'll review and run an automated JUnit test to verify what you just coded is correct.
Navigate into the `src/test/java` source folder and you'll see the root `rewards` package. All tests for the rewards application reside within this tree at the same level as the source they exercise. Drill down into the `rewards.internal` package and you'll see `RewardNetworkImplTests`, the JUnit test for your `RewardNetworkImpl` class.

Inside `RewardNetworkImplTests` you can notice that in the `setUp()` method, 'stub' repositories have been created and injected into the `RewardNetworkImpl` class using the constructor.
>**Note:** All the tests in this course use JUnit 5 instead of JUnit 4. Hence the `setUp()` method is annotated with `@BeforeEach` instead of `@Before`. An overview of JUnit 5 is in this [Appendix C, ***JUnit 5***](appendix_junit5.html "Appendix C. JUnit 5").
Review the only test method in the class. It calls `rewardNetwork.rewardAccountFor(Dining)` and then makes assertions to evaluate the result of calling this method. In this way the unit test is able to construct an instance of `RewardNetworkImpl` using the stub objects as dependencies and verify that the logic you implemented functions as expected.
Once you have reviewed the test logic, run the test. To run, right-click on `RewardNetworkImplTests` and select _Run As -> JUnit Test_.
### Results
When you have the green bar, congratulations! You've completed this lab. You have just developed and unit tested a component of a realistic Java application, exercising application behavior successfully in a test environment inside your IDE. You used stubs to test your application logic in isolation, without involving external dependencies such as a database or Spring. And your application logic is clean and decoupled from infrastructure APIs.
You are finished!
<button type="button"><a href="https://virtuant.github.io/hadoop-overview-spark-hwx/">Go Back</a></button>
<br>
<br>
# Lifted from: https://stackoverflow.com/a/40338475
import pygame
from car import Car, Passenger
def main():
pygame.init()
clock = pygame.time.Clock()
background = pygame.image.load("Atlanta_1_base.png")
size =[1584, 1584]
# Making the window the size of the background image
screen = pygame.display.set_mode(size)
# Where the player starts
player = Car([300, 300])
# Defining player keypresses as movement
player.move = [pygame.K_LEFT, pygame.K_RIGHT, pygame.K_UP, pygame.K_DOWN]
    # Player movement speed in px per frame; the same value doubles as the frame rate
    fps = player.vy = player.vx = 24
passenger = Passenger([100, 60])
passenger_group = pygame.sprite.Group()
passenger_group.add(passenger)
player_group = pygame.sprite.Group()
player_group.add(player)
# Start the game
while True:
clock.tick(fps)
for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                return
key = pygame.key.get_pressed()
# Left/Right travel
for i in range(2):
if key[player.move[i]]:
player.rect.x += player.vx * [-1, 1][i]
if key[player.move[1]]:
player.current_view = player.car_right
if key[player.move[0]]:
player.current_view = player.car_left
# Up/Down travel
for i in range(2):
if key[player.move[2:4][i]]:
player.rect.y += player.vy * [-1, 1][i]
if key[player.move[2]]:
player.current_view = player.car_back
if key[player.move[3]]:
player.current_view = player.car_front
        # pygame.sprite.spritecollide(sprite, group, dokill) returns a list of
        # the sprites in `group` that collide with `sprite`; with dokill=True
        # every colliding sprite is also removed from the group.
        hit = pygame.sprite.spritecollide(player, passenger_group, True)
        if hit:
            # The passenger was picked up (dokill=True already removed it from
            # passenger_group). End the game cleanly rather than drawing again
            # after pygame.quit().
            pygame.quit()
            return
# Put
screen.blit(background, [0,0])
screen.blit(player.current_view, (player.rect.x, player.rect.y))
        # The group draw also covers the player sprite; the explicit blit above
        # is kept so the directional image is the one shown.
player_group.draw(screen)
        # Draw the passenger while it is still in the group
passenger_group.draw(screen)
pygame.display.update()
pygame.quit()
if __name__ == '__main__':
    main()
import os
import sys
from collections import OrderedDict
import numpy as np
__author__ = 'Yanning Li and Juan Carlos Martinez'
"""
This script generates the configuration file for estimation. The configuration file consists of blocks separated by a
blank line. Each block includes a specific sensor network configuration and algorithms to be used.
"""
# =========================================================================================== #
# ======================== Generate I57 Configuration ======================================= #
# =========================================================================================== #
# ===================================================
# Configure the I57 network
start_loc = 353.1
end_loc = 6753.1
cell_length = 200
# the order of main freeway sections
fwy_secs = [51413, 51427, 51426]
# the length of sections
len_secs = [6939.5, 13.6, 303.14]
# the offset of each freeway section, i.e. the absolute distance of the section start from the freeway entrance
offset_secs = np.concatenate([[0], np.cumsum(np.array(len_secs))])
update_file = True
if update_file is True:
    directory = os.path.dirname(os.path.realpath(__file__))
    if sys.platform == 'win32':
        f = open(directory + '\\..\\I57_configurations_input.txt', 'w+')
    else:
        # darwin / linux
        f = open(directory + '/../I57_configurations_input.txt', 'w+')
# ================================================================ #
# ======================= A few utility functions ================ #
def __get_relative_loc(dist):
    """
    This function returns the relative location on the freeway using dist to the entrance
    :param dist: meters, to the entrance
    :return: sect, rel_dist (m)
    """
    sect = 0
    rel_dist = 0.0
    for i, offset in enumerate(offset_secs):
        if dist < offset:
            # found
            return sect, rel_dist
        elif i < len(fwy_secs):
            # keep updating; guard the index so an out-of-range dist falls
            # through to the exception below instead of raising IndexError
            sect = fwy_secs[i]
            rel_dist = dist - offset
    # if not found
    raise Exception('Error: location {0} is out of the network'.format(dist))
# ================================================================ #
# =============== Generate I57 deployment ============= #
# ================================================================ #
if False:
I57 = OrderedDict()
# The location of sensors mapped to the discretized grid.
I57['RADARSB1'] = [353.1, 'radar']
I57['RADARSB2'] = [1153.1, 'radar']
I57['RADARSB3'] = [1953.1, 'radar']
I57['RADARSB4'] = [2753.1, 'radar']
I57['RADARSB5'] = [3553.1, 'radar']
I57['RADARSB6'] = [4353.1, 'radar']
I57['RTMSSB7'] = [5153.1, 'rtms']
I57['RADARSB8'] = [5953.1, 'radar']
I57['RADARSB9'] = [6753.1, 'radar']
flow_std = {'radar': 0.3, 'rtms': 0.2}
speed_std = {'radar': 5.0, 'rtms': 5.0}
f.write('config_id:configI57\n')
for sensor in I57.keys():
abs_loc = I57[sensor][0]
sensor_type = I57[sensor][1]
if sensor_type == 'radar':
sensor_model = 'RADAR'
elif sensor_type == 'rtms':
sensor_model = 'RTMS'
sec, rel_dist = __get_relative_loc(abs_loc)
cus_line = 'sensor;id:{0};type:{1};section:{2};distance:{3};flow_std_specs:{4};speed_std_specs:{5};aggregation_sec:30'.format(
sensor, sensor_type, sec, rel_dist, flow_std[sensor_type], speed_std[sensor_type])
f.write(cus_line + '\n')
algorithms = ['enkfANupdated']
num_ensembles = 200
for algorithm in algorithms:
# write the algorithm
if algorithm == 'constNAN':
f.write(
'algorithm;id:constNAN;type:interpolation;interpolation_option:constant;queue_threshold:17.88;missing_data:blank\n')
elif algorithm == 'constNU':
f.write(
'algorithm;id:constNU;type:interpolation;interpolation_option:constant;queue_threshold:17.88;missing_data:no_update\n')
elif algorithm == 'constFILL':
f.write(
'algorithm;id:constFILL;type:interpolation;interpolation_option:constant;queue_threshold:17.88;missing_data:fill\n')
elif algorithm == 'linearNAN':
f.write(
'algorithm;id:linearNAN;type:interpolation;interpolation_option:linear;queue_threshold:17.88;missing_data:blank\n')
elif algorithm == 'linearNU':
f.write(
'algorithm;id:linearNU;type:interpolation;interpolation_option:linear;queue_threshold:17.88;missing_data:no_update\n')
elif algorithm == 'linearFILL':
f.write(
'algorithm;id:linearFILL;type:interpolation;interpolation_option:linear;queue_threshold:17.88;missing_data:fill\n')
elif 'enkf' in algorithm:
f.write('algorithm;id:enkfANupdated;type:enkf_AN;queue_threshold:17.88;num_ensembles:{0};'.format(
num_ensembles) +
'vm:23.56;beta:0.6214;rhoc:0.0578;wc:-4.88;' +
'std_cell:0.01;std_oncell:0.015;std_offcell:0.012;std_qin:0.1;std_qout:0.2;' +
'init_rho:0;init_qin:0.5;init_qout:0.0\n')
elif algorithm == 'fisb':
f.write('algorithm;id:fisb;type:fisb;queue_threshold:17.88;' +
'omg_c:-17.57;omg_f:84.83;sigma_fac:0.75;tau_fac:0.75;v_crit:76.94;delta_v:20;lag:0;' +
'time_diff_cutoff:150\n')
else:
raise Exception('unrecognized algorithm')
# ================================================================ #
# ================================================================ #
# ================= Exploring all configurations ================= #
# ================================================================ #
if False:
# ================================================
# the complete set of algorithms to evaluate
algorithms = ['linearFILL', 'fisb', 'enkfANupdated']
# ================================================
# The complete set of num_sensors to generate
num_sensors_scenarios = [2, 3, 4, 5, 6, 7, 9, 12, 17, 33]
# ================================================
# the complete set of types of sensors to generate
sensor_types = ['IDEAL', 'RADAR', 'RADARx2', 'RADARx4', 'RADARx8',
'RTMS', 'RTMSx2', 'RTMSx4', 'RTMSx8', 'ICONE', 'ICONEx2']
# errors, in veh/s and m/s
sensor_errors = OrderedDict()
sensor_errors['IDEAL'] = [0.2, 5.0, 'vol_gt'] # changed the variance
sensor_errors['RADAR'] = [0.3, 5.0, 'radar']
sensor_errors['RADARx2'] = [0.3, 5.0, 'radar']
sensor_errors['RADARx4'] = [0.3, 5.0, 'radar']
sensor_errors['RADARx8'] = [0.3, 5.0, 'radar']
sensor_errors['RTMS'] = [0.2, 5.0, 'rtms']
sensor_errors['RTMSx2'] = [0.2, 5.0, 'rtms']
sensor_errors['RTMSx4'] = [0.2, 5.0, 'rtms']
sensor_errors['RTMSx8'] = [0.2, 5.0, 'rtms']
sensor_errors['ICONE'] = [0.3, 3.0, 'radar']
sensor_errors['ICONEx2'] = [0.3, 3.0, 'radar']
# get the location of each sensor
num_cells = int((end_loc - start_loc) / cell_length)
sensor_locations = OrderedDict()
for num_sensors in num_sensors_scenarios:
tmp_spacing = np.linspace(0, num_cells, num_sensors)
# round the location of each sensor to its closest cell boundary
sensor_locations[num_sensors] = np.array([round(i) for i in tmp_spacing]) * 200.0 + start_loc
uneven_spacing = sensor_locations[num_sensors][1:] - sensor_locations[num_sensors][:-1]
print('{0} sensors, avg {1}, {2}'.format(num_sensors,
np.mean(uneven_spacing), uneven_spacing))
# write the file
if update_file is True:
# for all different spacing
for num_sensors in sensor_locations.keys():
# for all different sensor types
for sensor_type in sensor_types:
config_name = 'configHOMO_{0}_{1}'.format(num_sensors, sensor_type)
f.write('config_id:{0}\n'.format(config_name))
# icone sensors uses 400 ensembles
if 'ICONE' in sensor_type:
num_ensembles = 400
agg_interval = 60
else:
num_ensembles = 200
agg_interval = 30
for abs_loc in sensor_locations[num_sensors]:
sec, rel_dist = __get_relative_loc(abs_loc)
cus_line = 'sensor;id:{0}Loc{1}m;type:{2};section:{3};distance:{4};'.format(sensor_type,
100 * int(round(abs_loc / 100.0)),
sensor_errors[sensor_type][2],
sec, rel_dist,) \
+ 'flow_std_specs:{0};speed_std_specs:{1};aggregation_sec:{2}\n'.format(
sensor_errors[sensor_type][0],
sensor_errors[sensor_type][1],
agg_interval)
f.write(cus_line)
# each algorithm will run on the sensor grid
for algorithm in algorithms:
# write the algorithm
if algorithm == 'constNAN':
f.write(
'algorithm;id:constNAN;type:interpolation;interpolation_option:constant;' +
'queue_threshold:17.88;missing_data:blank\n')
elif algorithm == 'constNU':
f.write(
'algorithm;id:constNU;type:interpolation;interpolation_option:constant;' +
'queue_threshold:17.88;missing_data:no_update\n')
elif algorithm == 'constFILL':
f.write(
'algorithm;id:constFILL;type:interpolation;interpolation_option:constant;' +
'queue_threshold:17.88;missing_data:fill\n')
elif algorithm == 'linearNAN':
f.write(
'algorithm;id:linearNAN;type:interpolation;interpolation_option:linear;' +
'queue_threshold:17.88;missing_data:blank\n')
elif algorithm == 'linearNU':
f.write(
'algorithm;id:linearNU;type:interpolation;interpolation_option:linear;' +
'queue_threshold:17.88;missing_data:no_update\n')
elif algorithm == 'linearFILL':
f.write(
'algorithm;id:linearFILL;type:interpolation;interpolation_option:linear;' +
'queue_threshold:17.88;missing_data:fill\n')
elif algorithm == 'enkfANupdated':
f.write(
'algorithm;id:enkfANupdated;type:enkf_AN;queue_threshold:17.88;num_ensembles:{0};'.format(
num_ensembles) +
'vm:23.56;beta:0.6214;rhoc:0.0578;wc:-4.88;' +
'std_cell:0.01;std_oncell:0.015;std_offcell:0.012;std_qin:0.1;std_qout:0.2;' +
'init_rho:0;init_qin:0.5;init_qout:0.0\n')
elif algorithm == 'fisb':
f.write('algorithm;id:fisb;type:fisb;queue_threshold:17.88;' +
'omg_c:-17.57;omg_f:84.83;sigma_fac:0.75;tau_fac:0.75;v_crit:76.94;delta_v:20;lag:0;' +
'time_diff_cutoff:150\n')
else:
raise Exception('unrecognized algorithm')
f.write('\n')
f.close()
//
// BoardCreator.cpp
// Sudoku
//
// Created by Aidan Blant on 9/10/20.
// Copyright © 2020 Aidan Blant. All rights reserved.
//
#include "BoardCreator.hpp"
#include "Board.cpp"
#include <vector>
#include <unistd.h>
#define GetCurrentDir getcwd
class BoardCreator{
public:
vector<string> boardList;
BoardCreator(int numBoards){
        while( (int) boardList.size() < numBoards ){
addBoard(makeBoard());
}
}
// Initialize from file of completeBoards
BoardCreator(string inputFile){
string line;
ifstream myfile (inputFile);
if (myfile.is_open()){
while ( getline (myfile,line) ){
// Make board with the line, then add it to boardList
boardList.push_back( line );
}
myfile.close();
}
else cout << "Unable to open file\n";
}
private:
// MARK: - Board Transformation Operations
// Swap two rows in a board string rows 0-8
void swapRows(string& board, int a, int b){
// so any given row will start on a/b * 9, and go for 9 characters
string tempA = board.substr(a*9, 9);
string tempB = board.substr(b*9, 9);
board.replace(a*9, 9, tempB);
board.replace(b*9, 9, tempA);
}
// Swap two columns in a board string rows 0-8
void swapCols(string& board, int a, int b){
for( int i = 0; i < 9; i++){
// i is row, so i*9 + a or b
int tempA = board[i*9+a];
board[i*9+a] = board[i*9+b];
board[i*9+b] = tempA;
}
}
void moveRight(string& board){
for( int i = 0; i < 9; i++ ){
string firstSix = board.substr(i*9,6);
board.replace((i*9),3,board.substr(i*9+6,3));
board.replace(i*9+3,6,firstSix);
}
}
void moveLeft(string& board){
// Go row by row moving all tiles 3 to the left
for( int i = 0; i < 9; i++ ){
string firstThree = board.substr(i*9,3);
board.replace((i*9),6,board.substr(i*9+3,6));
board.replace((i*9+6),3,firstThree);
}
}
// Rotate Right/Left based on the transpose + horizontal flip formula
void rotateRight(string& board){
// Transpose over diagonal
for( int i = 0; i < 9; i++ ){
for( int j = i; j < 9; j++ ){
swap(board[ i*9 + j ],board[ j*9 + i ]);
}
}
//Flip horizontally
for( int i = 0; i < 9; i++ ){ // row
for( int j = 0; j < 9/2; j++){ // col
swap( board[i*9+j], board[ i*9 + (8-j) ] );
}
}
}
void rotateLeft(string& board){
// Transpose over diagonal
for( int i = 0; i < 9; i++ ){
for( int j = i; j < 9; j++ ){
swap(board[ i*9 + j ],board[ j*9 + i ]);
}
}
//Flip vertically
for( int i = 0; i < 9/2; i++ ){ // row
for( int j = 0; j < 9; j++){ // col
swap( board[i*9+j], board[ (8-i)*9 + j]);
}
}
}
void rotate180(string& board){
// Transpose over diagonal
for( int i = 0; i < 9; i++ ){
for( int j = i; j < 9; j++ ){
swap(board[ i*9 + j ],board[ j*9 + i ]);
}
}
// Transpose over opposite diagonal
for( int i = 0; i < 9; i++){
for( int j = 0; j < (9-i); j++){
swap(board[ i*9 + j ],board[ (8-i) + ((8-j)*9) ]);
}
}
}
// Return a board created from transformations on seedboard
string makeBoard(){
string seedBoard = "123456789456789123789123456234567891567891234891234567345678912678912345912345678";
for( int j = 0; j < 1000; j++ ){
int op = rand() % 26;
// TODO: These are preset based on following scheme
// 0 = rows01, 1=row12, 2=swap02 | 3=rows34, 4=rows45, 5=rows35
// 9 = col01, 10=col12, etc...
// To maintain repeatability from seedString
int rowOrCol2 = rand() % 3;
int rowOrCol3 = rand() % 3;
int rowOrCol4 = rand() % 3;
// pick board row or col 0-3
// swap two rows or columns in it
if( op < 9){
swapRows(seedBoard, rowOrCol2*3 + rowOrCol3, rowOrCol2*3+rowOrCol4);
}else if( op < 18 ){
swapCols(seedBoard, rowOrCol2*3 + rowOrCol3, rowOrCol2*3+rowOrCol4);
}else if( op < 19 ){
moveLeft(seedBoard);
}else if( op < 20 ){
moveRight(seedBoard);
}else if( op < 21 ){
rotateLeft(seedBoard);
}else if(op < 22 ){
rotateRight(seedBoard);
}else{
rotate180(seedBoard);
}
}
return seedBoard;
}
// Add board if not a duplicate
void addBoard(string newBoard){
if( !existsInBoardList(newBoard) ){
boardList.push_back(newBoard);
return;
}
return;
}
bool existsInBoardList(string toCheck){
if (std::find(boardList.begin(), boardList.end(), toCheck) != boardList.end())
{
cout << "Already exists in board" << std::endl;
return true;
}
return false;
}
public:
    /* Display */
void printIntArray(int board[81]){
for( int i = 0; i < 81; i++ ){
if( board[i] == 0 ){
cout << endl;
return;
}
cout << board[i];
}
cout << endl;
}
void printFill(bool fill[3][9][9]){
cout << "___________________________________" << endl;
cout << "Rows: Cols: Boxes: |" << endl;
for(int j = 0; j < 9; j++){
for( int i = 0; i < 3; i++ ){
for( int k = 0; k < 9; k++){
if( fill[i][j][k] == true ){
cout << 1;
}else {
cout << 0;
}
}
cout << " | ";
}
cout << endl;
}
cout << "___________________________________" << endl;
}
// I/O
int exportFilledBoardList(string fileName){
if( boardList.size() == 0 ){
cout << "No Boards in BoardList to export" << endl;
return 1; // Error, No Boards in BoardList to Export
}
ofstream myfile;
myfile.open (fileName+".txt");
for( int i = 0; i < boardList.size(); i++ ){
myfile << boardList[i] << "\n";
}
myfile.close();
return 0;
}
};
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - Assented to 8 November 1967 (HISTACT CHAP 2033 #DATE 08:11:1967)
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - TABLE OF PROVISIONS
## TABLE
DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967

TABLE OF PROVISIONS

Section

1. Short title
2. Commencement
3. Interpretation
4. Increases in pensions of certain eligible pensioners
5. Application of increases to commuted pensions, &c.
6. Adjustment in relation to previous increase
7. Increased rates of pensions payable to certain eligible pensioners
8. Increase in certain widows' pensions
9. Increased rates of pensions payable to certain widows
10. Application of increases to suspended pensions
11. Re-engagement of pensioners
12. Rate of invalidity pension payable on reclassification
13. Increases not payable to certain superannuation pensioners
14. Payment of pension increases
15. Application
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 1. Short title.
## SECT
DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967

An Act to provide for Increases in certain Defence Forces Retirement pensions.

1. This Act may be cited as the Defence Forces Retirement Benefits (Pension Increases) Act 1967.*
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 2. Commencement.
## SECT
2. This Act shall come into operation on the day on which it receives the Royal Assent.*
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 3. Interpretation.
## SECT
3. (1) In this Act, unless the contrary intention appears-

''actual pension entitlement'', in relation to an eligible pensioner, means the rate at which pension was payable to him immediately before the commencing date, or, if section 69 of the Defence Forces Retirement Benefits Act 1948 applied in relation to him at that time, he had commuted a portion of his pension before that time or he had made an election under section 61A or section 61B of the Defence Forces Retirement Benefits Act 1963-1965, the rate at which pension would have been payable to him at that time if that section had not so applied in relation to him, he had not commuted a portion of his pension or he had not made that election, as the case may be;

''basic pension entitlement'', in relation to an eligible pensioner, means-

(a) in the case of a pensioner who retired before the commencement of the Defence Forces Retirement Benefits (Pension Increases) Act 1961-

(i) the rate at which pension was payable to him immediately before the commencement of that Act; or

(ii) if section 69 of the Defence Forces Retirement Benefits Act 1948 applied in relation to him at that time, he had commuted a portion of his pension before that time or (if pension is payable to him under section 52 of the Defence Forces Retirement Benefits Act 1948) his classification for the purposes of that section was, at that time, different from his classification on the commencing date-the rate at which pension would have been payable to him at that time if section 69 of the Defence Forces Retirement Benefits Act 1948 had not so applied in relation to him, he had not commuted a portion of his pension or his classification at that time had been the same as his classification on the commencing date, as the case may be;

(b) in the case of a pensioner who retired after the commencement of the Defence Forces Retirement Benefits (Pension Increases) Act 1961 but before the commencement of the Defence Forces Retirement Benefits Act 1963-

(i) the rate at which pension was payable to him immediately before the commencement of the Defence Forces Retirement Benefits Act 1963; or

(ii) if section 69 of the Defence Forces Retirement Benefits Act 1948 applied in relation to him at that time, he had commuted a portion of his pension before that time or (if pension is payable to him under section 52 of the Defence Forces Retirement Benefits Act 1948) his classification for the purposes of that section was, at that time, different from his classification on the commencing date-the rate at which pension would have been payable to him at that time if section 69 of the Defence Forces Retirement Benefits Act 1948 had not so applied in relation to him, he had not commuted a portion of his pension or his classification at that time had been the same as his classification on the commencing date, as the case may be; and

(c) in any other case-his actual pension entitlement;

''eligible pensioner'' means a person to whom, immediately before the commencing date, a pension was payable by virtue of that person having been a contributor or by virtue of section 73 of the Defence Forces Retirement Benefits Act 1948, being a pension that commenced to be payable before the thirtieth day of June, One thousand nine hundred and sixty-seven;

''notional rate of annual pay'', in relation to an eligible pensioner or a pensioner included in a class of eligible pensioners, means such rate of annual pay payable to members on the thirtieth day of June, One thousand nine hundred and sixty-seven, as the Treasurer determines to be the rate that corresponds with the maximum rate of annual pay that was applicable to the pensioner or to that class of pensioners, as the case may be, immediately before the retirement of the pensioner;

''pension at current rate'', in relation to an eligible pensioner, means-

(a) except in the case of a pensioner to whom paragraph (b) of this definition applies-the rate at which pension would have been payable to him immediately before the commencing date if the Defence Forces Retirement Benefits Act 1948-1966 had been in force immediately before his retirement and he were, subject to the next succeeding sub-section, entitled to pension in accordance with the provisions of that Act; and

(b) in the case of a pensioner who was not an officer immediately before his retirement and in relation to whose pension section 45 of the Defence Forces Retirement Benefits Act 1948 applies-such rate of pension as is ascertained by multiplying the rate of Ninety-one dollars per annum by a number the same as the category number applicable under section 4A of the Defence Forces Retirement Benefits Act 1948-1966 to a rate of annual pay the same as the notional rate of annual pay of the pensioner;

''the commencing date'' means the date of commencement of this Act.

(2) In the application of the provisions of the Defence Forces Retirement Benefits Act 1948-1966 in relation to an eligible pensioner for the purposes of paragraph (a) of the definition of ''pension at current rate'' in the last preceding sub-section-

(a) the category of the member at his retirement shall be deemed to be the category applicable under section 4A of that Act to a rate of annual pay the same as the notional rate of annual pay of the pensioner;

(b) section 69 of that Act, and any commutation of a portion of his pension by the eligible pensioner, shall be disregarded;

(c) if the eligible pensioner retired before the commencement of the Defence Forces Retirement Benefits Act 1959, paragraph (b) of sub-section (4) of section 39 of the Defence Forces Retirement Benefits Act 1948-1966 shall be disregarded;

(d) if the eligible pensioner was an officer immediately before his retirement and retired-

(i) before the commencement of the Defence Forces Retirement Benefits Act 1959;

(ii) after attaining the retiring age for the rank held by him immediately before his retirement; and

(iii) before attaining the age of sixty years,

his age on retirement shall be deemed to be the retiring age for the rank held by him immediately before his retirement; and

(e) if the eligible pensioner was not an officer immediately before his retirement and retired-

(i) before the commencement of the Defence Forces Retirement Benefits Act 1959;

(ii) after attaining the retiring age for the rank held by him immediately before his retirement;

(iii) before attaining the age of sixty years; and

(iv) after completing twenty years' service for pension,

the number of years of service for pension completed by him immediately before his retirement shall be deemed to be twenty years or such number of years of service for pension as had been completed by him immediately before he attained the retiring age for the rank held by him immediately before his retirement, whichever is the greater.

(3) A reference in this Act to the Defence Forces Retirement Benefits Act 1948 shall be read as a reference to that Act as amended and in force from time to time.

(4) Expressions used in this Act that are also used in the Defence Forces Retirement Benefits Act 1948-1966 have, in this Act, unless the contrary intention appears, the same respective meanings as they have in the Defence Forces Retirement Benefits Act 1948-1966.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 4. Increases in pensions of certain eligible pensioners.
## SECT
4. Where the pension at current rate applicable to an eligible pensioner (other than an eligible pensioner to whom section 7 of this Act applies) exceeds his actual pension entitlement, he is, subject to this Act, entitled to an increase in the rate of his pension equal to five-sevenths of the difference between the pension at current rate applicable to him and his basic pension entitlement.
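The section 4 arithmetic can be checked with a small worked example (the dollar figures below are invented for illustration; the Act defines the three inputs, not these values):

```python
# Section 4: the increase equals 5/7 of (pension at current rate - basic
# pension entitlement), and is payable only where the pension at current
# rate exceeds the actual pension entitlement. Dollar figures invented.
from fractions import Fraction

def section_4_increase(current_rate, actual_entitlement, basic_entitlement):
    if current_rate <= actual_entitlement:
        return Fraction(0)  # no increase arises under section 4
    return Fraction(5, 7) * (Fraction(current_rate) - Fraction(basic_entitlement))

# e.g. pension at current rate $2,100 p.a., actual entitlement $1,500,
# basic entitlement $1,400:
increase = section_4_increase(2100, 1500, 1400)
print(increase)  # 500, i.e. 5/7 of the $700 difference
```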
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 5. Application of increases to commuted pensions, &c.
## SECT
5. (1) Where an eligible pensioner who has commuted a portion of his pension under section 74 of the Defence Forces Retirement Benefits Act 1948 or has made an election under section 61A or section 61B of the Defence Forces Retirement Benefits Act 1963-1965 is entitled to an increase in the rate of his pension under the preceding provisions of this Act, he is, subject to this Act, in lieu of the increase to which, but for this section, he would have been entitled in accordance with those provisions, entitled to an increase that bears to the increase to which he would have been so entitled the same proportion as the rate at which pension was payable to him immediately before the commencing date bears to the rate at which pension would have been payable to him immediately before that date if he had not commuted a portion of his pension or had not made that election, as the case may be.

(2) The operation of section 69 of the Defence Forces Retirement Benefits Act 1948 shall be disregarded for the purposes of the last preceding sub-section.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 6\. Adjustment in relation to previous increase.

6\. Where an eligible pensioner was granted an increase in the rate of his pension under the Defence Forces Retirement Benefits (Pension Increases) Act 1961 or Part III of the Defence Forces Retirement Benefits Act 1963, any increase in the rate of his pension to which he is entitled under the preceding provisions of this Act shall be reduced by the amount of that first-mentioned increase.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 7\. Increased rates of pensions payable to certain eligible pensioners.

7\. (1) This section applies to an eligible pensioner-

(a) the rate of whose pension has been reduced under section 58 or section 79A, or under an agreement entered into in pursuance of section 78 or section 79, of the Defence Forces Retirement Benefits Act 1959 or of that Act as amended;

(b) the rate of whose pension is, by virtue of the operation of section 77 or section 78 of the Defence Forces Retirement Benefits Act 1948, less than it would otherwise be;

(c) who retired after the commencement of the Defence Forces Retirement Benefits Act 1959 and, immediately before his retirement, was not a contributor for maximum additional basic pension for the purposes of Part III of that Act;

(d) who retired after the commencement of the Defence Forces Retirement Benefits Act 1962 and, immediately before his retirement, was not a contributor for maximum additional basic pension for the purposes of Part IV of that Act;

(e) who retired after the commencement of the Defence Forces Retirement Benefits Act 1963 and, immediately before his retirement, was not a contributor for maximum additional basic pension for the purposes of Part IV of that Act; or

(f) who is a person to whom the Defence Forces Special Retirement Benefits Act 1960 applies.

(2) The rate of pension payable to an eligible pensioner to whom this section applies is, in lieu of the rate at which, apart from this section, that pension would be payable, such rate as the Treasurer determines to be appropriate having regard to the increases in the rates of pensions payable to other eligible pensioners by virtue of the preceding provisions of this Act and to all the circumstances of the case.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 8\. Increase in certain widows' pensions.

8\. (1) Where, immediately before the commencing date-

(a) a pension was payable to a person under section 55 of the Defence Forces Retirement Benefits Act 1948 by virtue of that person being the widow of a member who died before retirement but who, if he had retired on the day on which he died and had been in receipt of a pension immediately before the commencing date under sub-section (1) of section 52 of the Defence Forces Retirement Benefits Act 1948, would have been entitled to an increase in that pension under this Act;

(b) a pension was payable to a person under sub-section (1) of section 57 of the Defence Forces Retirement Benefits Act 1948 by virtue of that person being the widow of a pensioner who, if he had not died and had been in receipt of his pension immediately before the commencing date, would have been entitled to an increase in that pension under this Act; or

(c) a pension was payable to a person under sub-section (3) of section 57 of the Defence Forces Retirement Benefits Act 1948 by virtue of that person being the widow of a pensioner who, if he had not died and had been in receipt of a pension immediately before the commencing date under sub-section (1) of section 52 of the Defence Forces Retirement Benefits Act 1948, would have been entitled to an increase in that pension under this Act,

the widow is, subject to this Act, entitled to an increase in her pension, being an increase equal to five-eighths of the increase in pension to which her husband would have been entitled under this Act, but, in ascertaining the increase in pension to which her husband would have been entitled, if her husband commuted a portion of his pension under section 74 of the Defence Forces Retirement Benefits Act 1948, section 5 of this Act shall be disregarded in so far as it applies by reason of the commutation.

(2) The reference in the last preceding sub-section to the fraction of five-eighths shall, in the case of the widow of a person who made an election under sub-section (6) of section 47 of the Defence Forces Retirement Benefits Act 1959 and has not revoked that election, or has made an election under sub-section (4) of section 48 of that Act, be read as a reference to the fraction of one-half.

(3) This section does not apply to a widow to whom the next succeeding section applies.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 9\. Increased rates of pensions payable to certain widows.

9\. (1) This section applies to a person who is-

(a) the widow of a pensioner the rate of whose pension was reduced under section 58 or section 79A, or under an agreement entered into in pursuance of section 78 or section 79, of the Defence Forces Retirement Benefits Act 1959 or of that Act as amended;

(b) the widow of a member, being a widow the rate of whose pension has been reduced under a section or an agreement referred to in the last preceding paragraph;

(c) the widow of a member or of a pensioner, being a widow the rate of whose pension is, by virtue of the operation of section 77 or section 78 of the Defence Forces Retirement Benefits Act 1948, less than it would otherwise be;

(d) the widow of a member who died, or of a pensioner who retired, after the commencement of the Defence Forces Retirement Benefits Act 1959 and who, immediately before his death or retirement, as the case may be, was not a contributor for maximum additional basic pension for the purposes of Part III of that Act;

(e) the widow of a member who died, or of a pensioner who retired, after the commencement of the Defence Forces Retirement Benefits Act 1962 and who, immediately before his death or retirement, as the case may be, was not a contributor for maximum additional basic pension for the purposes of Part IV of that Act;

(f) the widow of a member who died, or of a pensioner who retired, after the commencement of the Defence Forces Retirement Benefits Act 1963 and who, immediately before his death or retirement, as the case may be, was not a contributor for maximum additional basic pension for the purposes of Part IV of that Act; or

(g) the widow of a pensioner who was a person to whom the Defence Forces Special Retirement Benefits Act 1960 applied.

(2) The rate of pension payable to a person to whom this section applies is, in lieu of the rate at which, apart from this section, that pension would be payable, such rate as the Treasurer determines to be appropriate having regard to the increases in the rates of pensions payable by virtue of the last preceding section and to all the circumstances of the case.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 10\. Application of increases to suspended pensions.

10\. Where a person would, but for section 53A of the Defence Forces Retirement Benefits Act 1948, be an eligible pensioner, that person shall, upon his pension again becoming payable to him after the commencing date, be entitled to an increase in the pension equal to the increase to which he would have been entitled if he had been an eligible pensioner.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 11\. Re-engagement of pensioners.

11\. Any increase in pension to which a person is entitled under this Act is subject to the operation of section 69 of the Defence Forces Retirement Benefits Act 1948.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 12\. Rate of invalidity pension payable on reclassification.

12\. Where, on or after the commencing date, an eligible pensioner to whom pension is payable under section 52 of the Defence Forces Retirement Benefits Act 1948 is reclassified under section 53 of that Act, the rate at which pension is payable to him on and after the date from which the reclassification has effect shall be the rate at which pension would have been payable to him on the commencing date if he had been so reclassified with effect on and from the day immediately preceding the commencing date.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 13\. Increases not payable to certain superannuation pensioners.

13\. An increase in pension provided for by this Act (other than a pension payable to a person as a widow or a pension in respect of a child) is not payable to or in relation to a person who was, immediately before the commencing date, also entitled to pension under the Superannuation Act 1922-1967.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 14\. Payment of pension increases.

14\. (1) The reference in sub-section (1) of section 15B of the Defence Forces Retirement Benefits Act 1948-1966 to benefits under that Act shall be read as including a reference to increases in pensions payable under this Act.

(2) The Commonwealth shall pay to the Defence Forces Retirement Benefits Fund amounts equal to the amounts by which payments of pensions (including pensions that become payable to widows of eligible pensioners who die on or after the commencing date) are increased by virtue of this Act, and the Consolidated Revenue Fund is, to the necessary extent, appropriated accordingly.
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - SECT. 15\. Application.

15\. Increases in pensions payable by virtue of this Act have effect from and including the first fortnightly payment of pensions made after the commencing date.

------------------------------------------------------------------------------
### DEFENCE FORCES RETIREMENT BENEFITS (PENSION INCREASES) ACT 1967 - NOTE

NOTE

1\. Act No. 91, 1967; assented to 8 November 1967.
|
TypeScript | UTF-8 | 218 | 2.859375 | 3 | [
"MIT"
] | permissive | export type AsyncOrSync<T> = Promise<T> | T;
export type Constructor<T> = new (...args: any[]) => T;
export interface Codec {
name: string;
encode: (data: any) => any;
decode: (data: string | Buffer) => any;
}
|
Markdown | UTF-8 | 4,836 | 3.375 | 3 | [] | no_license | # pythonFlask
Create Web APIs using Python and Flask
====================================
<h1>Prerequisites</h1>
Python 3, the <a href="https://flask.palletsprojects.com/en/1.1.x/">Flask Web Framework</a> and a browser are required for this "tutorial". If you do not have Python installed, I suggest installing it via the <a href="https://www.anaconda.com/">Anaconda</a> distribution. To write Python code you can use any of several development IDEs, such as Sublime, VS Code, NetBeans or PyCharm, or even a plain text editor with no IDE at all. For this tutorial and other work, I use <a href="https://www.jetbrains.com/pycharm/">PyCharm</a>.
<h1>Introduction to APIs</h1>
A web API allows information or functionality to be manipulated by other programs over the internet. For example, with the Twitter web API you can write code that performs tasks such as displaying favorite tweets or collecting tweet metadata.
The term API, short for Application Programming Interface, refers to a part of a computer program designed to be used or manipulated by another program, as opposed to an interface designed to be used or manipulated by a human. Computer programs frequently need to communicate with each other or with the underlying operating system, and APIs are one way to do so. In this tutorial, however, we will use the term API to refer specifically to web APIs.
<h1>When to create an API</h1>
It is generally necessary if:
<ul>
<li>You have large datasets that are resource-intensive to download</li>
<li>You need access to data in real time</li>
<li>Your data changes frequently (high volatility)</li>
<li>You need to restrict access to data while keeping it highly available</li>
<li>More than one operation is involved, i.e. besides retrieving data there is a need to insert, update and/or delete data</li>
</ul>
An API is a great way to share data and/or functionality with other people or programs. However, if the data is relatively small, APIs are not always the best way to share it. In that case, it may be more practical to provide a "data dump" in the form of JSON, XML or CSV files.
<h1>API terminology</h1>
When building or using APIs, we frequently encounter the following terms:
<ul>
<li><b>HTTP (Hypertext Transfer Protocol)</b> is the main means of communicating data on the web. HTTP implements several methods that indicate in which direction data flows and what should happen to it. The two most common methods are GET and POST, but there are others, such as PUT and DELETE.</li>
<li><b>URL (Uniform Resource Locator)</b> is the address of a resource on the web (https://www.jetbrains.com/pycharm/). It consists of a protocol (http://), a domain (jetbrains.com) and an optional path (/pycharm). A URL describes the location of a specific resource, such as a web page. When reading about APIs you may see the terms URL, request, URI and endpoint used to describe similar or closely related ideas. To keep things simple, this tutorial prefers the term URL and GET requests, so no special software is needed to make the requests.</li>
<li><b>JSON (JavaScript Object Notation)</b> is a text-based data storage format designed to be easy for both humans and machines to read. JSON is the most common return format for APIs, with XML being the second most common.</li>
<li><b>REST (REpresentational State Transfer)</b> is a standard that describes some best practices for implementing APIs. Designing and implementing APIs following some or all of these practices is called a REST API. Although it uses some REST standards, what this tutorial builds is better described as a Web API or HTTP API.</li>
</ul>
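As a quick illustration of the JSON format mentioned above, Python's built-in `json` module can serialize a dictionary to the JSON string that travels over the wire and parse it back (the payload fields here are made up for the example):

```python
import json

# A payload of the kind a web API might return
payload = {"id": 1, "nome": "Maria", "ativo": True}

# Serialize to a JSON string (what goes over the wire)
texto = json.dumps(payload)

# Parse it back into a Python dictionary
dados = json.loads(texto)

print(texto)          # {"id": 1, "nome": "Maria", "ativo": true}
print(dados["nome"])  # Maria
```

Note that Python's `True` becomes JSON's lowercase `true` on the way out and is converted back on the way in.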
<h1>What was done...</h1>
Using the MVC (model, view, controller) concept, the following classes were created:
- DAO (Data Access Object): class that manages access to sqLite;
- CiclistaDAO (Ciclista Data Access Object): class that manages which operations are performed on which sqLite objects;
- CiclistaDTO (Ciclista Data Transfer Object): class that holds the model of the ciclista object;
- CiclistaDLO (Ciclista Logical Object): interface class that defines the logic, that is, the class that implements the functionality or use cases.
Finally, the api.py file uses the Flask framework to define the routes and their access methods.
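The DAO/DTO layering described above can be sketched with Python's built-in `sqlite3` module. This is only an illustrative sketch: the table and field names (`ciclista`, `id`, `nome`) are assumptions, not this project's actual schema.

```python
import sqlite3

class CiclistaDTO:
    """Data Transfer Object: holds one ciclista record (fields assumed for illustration)."""
    def __init__(self, id, nome):
        self.id = id
        self.nome = nome

class CiclistaDAO:
    """Data Access Object: manages reads and writes of ciclista rows in SQLite."""
    def __init__(self, conexao):
        self.conexao = conexao
        self.conexao.execute(
            "CREATE TABLE IF NOT EXISTS ciclista (id INTEGER PRIMARY KEY, nome TEXT)")

    def inserir(self, dto):
        cur = self.conexao.execute(
            "INSERT INTO ciclista (nome) VALUES (?)", (dto.nome,))
        self.conexao.commit()
        return cur.lastrowid

    def buscar(self, id):
        linha = self.conexao.execute(
            "SELECT id, nome FROM ciclista WHERE id = ?", (id,)).fetchone()
        return CiclistaDTO(*linha) if linha else None

# An in-memory database keeps the example self-contained
dao = CiclistaDAO(sqlite3.connect(":memory:"))
novo_id = dao.inserir(CiclistaDTO(None, "Ana"))
print(dao.buscar(novo_id).nome)  # Ana
```

In the real project, a Flask route in api.py would call the DLO, which in turn uses a DAO like this one to touch the database.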
The tutorial makes it possible to understand how the CRUD process (create, read, update and delete) works through a Web API.
Combining an API developed with Python+Flask and a React.js frontend can result in a high-performance app...
Python | UTF-8 | 557 | 2.859375 | 3 | [] | no_license | class Solution:
def merge(self, nums1, m, nums2, n):
"""
:type nums1: List[int]
:type m: int
:type nums2: List[int]
:type n: int
:rtype: void Do not return anything, modify nums1 in-place instead.
"""
i = m-1
j = n-1
lastIndex = n+m-1
while j>=0:
if i>=0 and nums1[i]>nums2[j]:
nums1[lastIndex] = nums1[i]
i-=1
else:
nums1[lastIndex] = nums2[j]
j-=1
lastIndex-=1
|
JavaScript | UTF-8 | 2,582 | 4.3125 | 4 | [] | no_license | /* Given a string S and a string T, count the number of distinct subsequences of S which equals T.
A subsequence of a string is a new string which is formed from the original string by deleting some (can be none) of the characters without disturbing the relative positions of the remaining characters. (ie, "ACE" is a subsequence of "ABCDE" while "AEC" is not).
Example 1:
Input: S = "rabbbit", T = "rabbit"
Output: 3
Explanation:
As shown below, there are 3 ways you can generate "rabbit" from S.
(The caret symbol ^ means the chosen letters)
rabbbit
^^^^ ^^
rabbbit
^^ ^^^^
rabbbit
^^^ ^^^
Example 2:
Input: S = "babgbag", T = "bag"
Output: 5
Explanation:
As shown below, there are 5 ways you can generate "bag" from S.
(The caret symbol ^ means the chosen letters)
babgbag
^^ ^
babgbag
^^ ^
babgbag
^ ^^
babgbag
^ ^^
babgbag
^^^ */
/**
* @param {string} s
* @param {string} t
* @return {number}
*/
// DP. The last character always gives a clue:
// if the last char of s differs from the last char of t, that char of s is useless.
// Convention: dp[i][j] counts the distinct subsequences of the first i chars of s
// that equal the first j chars of t (length 0 means the empty prefix).
// If s.charAt(i - 1) !== t.charAt(j - 1): dp[i][j] = dp[i - 1][j].
// If they are equal, there are two choices: (i) use s[i - 1] as the final match,
// contributing dp[i - 1][j - 1]; or (ii) skip it, contributing dp[i - 1][j]
// exactly as in the unequal case.
// Recurrence: dp[i][j] = s.charAt(i - 1) === t.charAt(j - 1) ? dp[i - 1][j - 1] + dp[i - 1][j] : dp[i - 1][j]
// Base cases: dp[0][0] = 1 (the empty string matches the empty string once),
// dp[i][0] = 1 (the empty t matches any prefix of s once), dp[0][j] = 0 for j > 0.
// Fill bottom-up: first row and first column of the matrix, then everything else.
var numDistinct = function(s, t) {
    const dp = [...Array(s.length + 1)].map(() => Array(t.length + 1).fill(0));
dp[0][0] = 1;
for (let i = 1; i <= s.length; i++) dp[i][0] = 1;
for (let j = 1; j <= t.length; j++) dp[0][j] = 0;
for (let i = 1; i <= s.length; i++) {
for (let j = 1; j <= t.length; j++) {
dp[i][j] = s.charAt(i - 1) === t.charAt(j - 1) ? dp[i - 1][j - 1] + dp[i - 1][j] : dp[i - 1][j];
}
}
return dp[s.length][t.length];
};
console.log(numDistinct(S = "rabbbit", T = "rabbit")); //3
console.log(numDistinct(S = "babgbag", T = "bag")); //5 |
C# | UTF-8 | 1,018 | 2.640625 | 3 | [
"MIT"
] | permissive | using Microsoft.EntityFrameworkCore;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Cofoundry.Domain.Data
{
public static class DbContextConfigurationHelper
{
/// <summary>
        /// Turns off LazyLoading and adds a console debug logger (both underlying calls are currently commented out).
/// </summary>
/// <param name="dbContext">dbContext to set defaults on.</param>
public static void SetDefaults(DbContext dbContext)
{
//dbContext.Configuration.LazyLoadingEnabled = false;
AddConsoleLogger(dbContext);
}
/// <summary>
        /// Logs db output to the console if the debugger is attached (the logging hook is currently commented out).
/// </summary>
/// <param name="dbContext">dbContext to attach the logger to</param>
public static void AddConsoleLogger(DbContext dbContext)
{
//dbContext.Database.Log = (s) => System.Diagnostics.Debug.Write(s);
}
}
}
|
JavaScript | UTF-8 | 221 | 2.703125 | 3 | [] | no_license | var popupWrap = document.getElementById("popupWrap");
var myPopup = document.getElementById("myPopup");
function showPopup(event) {
myPopup.classList.toggle("show");
}
popupWrap.addEventListener("click", showPopup); |
C++ | UTF-8 | 9,880 | 3.609375 | 4 | [] | no_license | /* --------------------------------- Header --------------------------------- */
/**
* @file point3D.cpp
* @brief 3D point class
*/
/* -------------------------------- Includes -------------------------------- */
# include <algorithm>
# include <cmath>
# include "point3d.h"
/* ----------------------- Constructors / Destructors ----------------------- */
/**
 * @brief   Creates a 3D point at coordinates (0, 0, 0)
*
* @return The created point
*/
Point3D::Point3D():
Point3D( 0, 0, 0 )
{}
/**
* @brief Creates a 3D point at the specified coordinates
*
* @param x The x-coordinate
* @param y The y-coordinate
* @param z The z-coordinate
*
* @return The created point
*/
Point3D::Point3D( double x, double y, double z ):
Vector3<double>( x, y, z )
{}
/**
* @brief Creates a 3D point from a 3D vector of doubles
*
* @param &v The vector to create the 3D point from
*
* @return The created point
*/
Point3D::Point3D( const Vector3<double> &v ):
Point3D( v[0], v[1], v[2] )
{}
/**
 * @brief   Creates a 3D point from a 3x1 matrix of doubles
*
* @param &m The matrix to create the 3D point from
*
* @return The created point
*/
Point3D::Point3D( const Matrix<double> &m ):
Point3D( m[0][0], m[1][0], m[2][0] )
{
if ( ( m.getRows() != 3 ) || ( m.getColumns() != 1 ) )
{
throw MatrixException( "Point3D construction matrix size mismatch" );
}
}
/**
* @brief Creates a 3D point from an existing 3D point
*
* @param point The 3D point to create from
*
* @return The created point
*/
Point3D::Point3D( const Point3D &p ):
Point3D( p.getX(), p.getY(), p.getZ() )
{}
Point3D::~Point3D() = default;
/* -------------------------- Overloaded Operators -------------------------- */
/**
* @brief Assigns a 3D point to this 3D point
*
* @param &p The 3D point to assign
*
* @return A reference to this 3D point
*/
Point3D &Point3D::operator=( const Point3D &p )
{
setX( p.getX() );
setY( p.getY() );
setZ( p.getZ() );
return *this;
}
/**
* @brief Adds a 3D point to this 3D point and assigns the sum to
* this 3D point
*
* @param &p The 3D point to add
*
* @return A reference to this 3D point
*/
Point3D &Point3D::operator+=( const Point3D &p )
{
return *this = *this + p;
}
/**
* @brief Subtracts a 3D point from this 3D point and assigns the
* difference to this 3D point
*
* @param &p The 3D point to subtract
*
* @return A reference to this 3D point
*/
Point3D &Point3D::operator-=( const Point3D &p )
{
return *this = *this - p;
}
/**
* @brief Adds a 3D point to this 3D point
*
* @param &p The 3D point to add
*
* @return The sum of the 3D points
*/
Point3D Point3D::operator+( const Point3D &p ) const
{
return Point3D( getX() + p.getX(), getY() + p.getY(), getZ() + p.getZ() );
}
/**
* @brief Subtracts a 3D point from this 3D point
*
* @param &p The 3D point to subtract
*
* @return The difference of the 3D points
*/
Point3D Point3D::operator-( const Point3D &p ) const
{
return Point3D( getX() - p.getX(), getY() - p.getY(), getZ() - p.getZ() );
}
/**
* @brief Determines if the magnitude of this 3D point is less than the
* magnitude of another 3D point
*
* @param &p The 3D point to compare against
*
* @return True if the magnitude of this 3D point is less than the magnitude
* of the other point, false otherwise
*/
bool Point3D::operator<( const Point3D &p ) const
{
return magnitude() < p.magnitude();
}
/**
* @brief Determines if the magnitude of this 3D point is less than or equal
* to the magnitude of another 3D point
*
* @param &p The 3D point to compare against
*
* @return True if the magnitude of this 3D point is less than or equal to
* the magnitude of the other point, false otherwise
*/
bool Point3D::operator<=( const Point3D &p ) const
{
return magnitude() <= p.magnitude();
}
/**
* @brief Determines if the magnitude of this 3D point is greater than the
* magnitude of another 3D point
*
* @param &p The 3D point to compare against
*
* @return True if the magnitude of this 3D point is greater than the magnitude
* of the other point, false otherwise
*/
bool Point3D::operator>( const Point3D &p ) const
{
return magnitude() > p.magnitude();
}
/**
* @brief Determines if the magnitude of this 3D point is greater than or
* equal to the magnitude of another 3D point
*
* @param &p The 3D point to compare against
*
* @return True if the magnitude of this 3D point is greater than or equal to
* the magnitude of the other point, false otherwise
*/
bool Point3D::operator>=( const Point3D &p ) const
{
return magnitude() >= p.magnitude();
}
/**
* @brief Converts a 3D point to a string and writes it to an output stream
*
* @param &os The output stream to write to
* @param &p The 3D point to convert
*
* @return The output stream
*/
std::ostream &operator<<( std::ostream &os, const Point3D &p )
{
p.out( os );
return os;
}
/* ---------------------------- Public Functions ---------------------------- */
/**
* @brief Creates a dynamically allocated clone of this 3D point
*
* @param void
*
* @return A pointer to cloned 3D point
*/
Point3D *Point3D::clone() const
{
return new Point3D( getX(), getY(), getZ() );
}
/**
* @brief Calculates a transformed copy of this 3D point
*
 * @param   &m  The 4x4 (3D homogeneous) transformation matrix
*
* @return A transformed copy of this 3D point
*/
Point3D Point3D::transform( const Matrix<double> &m ) const
{
if ( ( m.getRows() != 4 ) || ( m.getColumns() != 4 ) )
{
throw MatrixException( "Point3D transformation matrix size mismatch." );
}
Matrix<double> transformVector = m * Vector4<double>( getX(), getY(), getZ(), 1 );
return Point3D( transformVector[0][0], transformVector[1][0], transformVector[2][0] );
}
/**
* @brief Gets the magnitude of the vector between the origin and this
* 3D point
*
* @param void
*
* @return The magnitude of the vector between the origin and this 3D point
*/
double Point3D::magnitude() const
{
return sqrt( pow( getX(), 2 ) + pow( getY(), 2 ) + pow( getZ(), 2 ) );
}
/**
* @brief Gets the angle of the vector between this 3D point and another
* 3D point in radians
*
 * @param   &p  The 3D point to compute the angle with
 *
 * @return  The angle of the vector between this 3D point and another 3D
 *          point in radians
*/
double Point3D::angle( const Point3D &p ) const
{
double dot = this->dot( p );
double magSquared = pow( this->magnitude(), 2 );
double pMagSquared = pow( p.magnitude(), 2 );
return acos( dot / sqrt( magSquared * pMagSquared ) );
}
/**
* @brief Converts this 3D point to a string and writes it to an output
* stream
*
* @param &os The output stream to write to
*
* @return The output stream
*/
std::ostream &Point3D::out( std::ostream &os ) const
{
os << "Point3D( " << getX() << " " << getY() << " " << getZ() << " )";
return os;
}
/**
* @brief Performs a deep copy of a vector of 3D point pointers
*
* @param &v The vector to copy
*
* @return The copied vector
*/
std::vector<Point3D*> Point3D::vectorDeepCopy( const std::vector<Point3D*> &v )
{
std::vector<Point3D*> clone = std::vector<Point3D*>();
std::for_each( v.begin(), v.end(), [&clone]( Point3D *p )
{
Point3D *pClone = p->clone();
clone.push_back( pClone );
}
);
return clone;
}
/**
* @brief Performs a deep delete of a vector of 3D point pointers
*
* @param &v The vector to delete
*
* @return void
*/
void Point3D::vectorDeepDelete( const std::vector<Point3D*> &v )
{
std::for_each( v.begin(), v.end(), []( Point3D *p )
{
delete p;
}
);
}
/**
 * @brief   Performs a deep copy of a 2D vector of 3D point pointers
*
* @param &v The vector to copy
*
* @return The copied vector
*/
Vector2<Point3D*> Point3D::vector2DeepCopy( const Vector2<Point3D*> &v )
{
return Vector2<Point3D*>( v[0]->clone(), v[1]->clone() );
}
/**
 * @brief   Performs a deep deletion of a 2D vector of 3D point pointers
*
* @param &v The vector to delete
*
* @return void
*/
void Point3D::vector2DeepDelete( const Vector2<Point3D*> &v )
{
delete v.getX();
delete v.getY();
}
/**
 * @brief   Performs a deep copy of a 3D vector of 3D point pointers
*
* @param &v The vector to copy
*
* @return The copied vector
*/
Vector3<Point3D*> Point3D::vector3DeepCopy( const Vector3<Point3D*> &v )
{
return Vector3<Point3D*>( v[0]->clone(), v[1]->clone(), v[2]->clone() );
}
/**
* @brief Performs a deep deletion of a 3D vector of 3D point pointers
*
* @param &v The vector to delete
*
* @return void
*/
void Point3D::vector3DeepDelete( const Vector3<Point3D*> &v )
{
delete v.getX();
delete v.getY();
delete v.getZ();
}
/**
 * @brief   Performs a deep copy of a 4D vector of 3D point pointers
*
* @param &v The vector to copy
*
* @return The copied vector
*/
Vector4<Point3D*> Point3D::vector4DeepCopy( const Vector4<Point3D*> &v )
{
return Vector4<Point3D*>( v[0]->clone(), v[1]->clone(), v[2]->clone(), v[3]->clone() );
}
/**
 * @brief   Performs a deep deletion of a 4D vector of 3D point pointers
*
* @param &v The vector to delete
*
* @return void
*/
void Point3D::vector4DeepDelete( const Vector4<Point3D*> &v )
{
delete v.getX();
delete v.getY();
delete v.getZ();
delete v.getW();
}
/* -------------------------------------------------------------------------- */ |
C++ | UTF-8 | 611 | 2.875 | 3 | [] | no_license | #include <iostream>
int main() {
/*typedef void (a)(void*);
a A;
void (*)(void*);
void (*)();
typedef void* (*b)(void*);
b B;*/
    // "\xc3" is the x86 "ret" opcode: each line below reinterprets the string
    // literal's address as a function pointer and calls it (undefined behaviour;
    // "works" only on x86 when the string data happens to be executable).
    std::cout << (* (int(*)()) ("\xc3"))() << '\n'; // correct way
std::cout << ( (int(*)())& ("\xc3"))() << '\n'; // works
std::cout << ( (int(*)())*&("\xc3"))() << '\n'; // works
std::cout << (* (int(*)())*&("\xc3"))() << '\n'; // works
std::cout << (**(int(*)())*&("\xc3"))() << '\n'; // works
std::cout << ( (int(*)()) ("\xc3"))() << '\n'; // correct way
std::cout << reinterpret_cast<int(*)()>("\xc3")() << '\n';
}
|
Java | UTF-8 | 2,352 | 2.328125 | 2 | [] | no_license | package notes.Controller;
import com.google.gson.Gson;
import notes.Helper.Enum.AddEnum;
import notes.Helper.Enum.LoginEnum;
import notes.Helper.Service.ServiceResult;
import notes.Model.Note;
import notes.Model.User;
import notes.Service.IUserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.servlet.ModelAndView;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import javax.validation.Valid;
@Controller
@RequestMapping(value = "/user")
public class UserController {
@Autowired
@Qualifier(value = "userService")
IUserService userService;
@RequestMapping(value = "/login")
public ModelAndView login(HttpServletRequest request) {
        HttpSession session = request.getSession(true);
        session.setAttribute("userStatus", LoginEnum.Logout);
ModelAndView modelAndView = new ModelAndView("homme");
modelAndView.addObject("user", new User());
return modelAndView;
}
@RequestMapping(value = "check", method = RequestMethod.POST)
public String check(@ModelAttribute("user") @Valid User user, HttpSession session) {
ServiceResult<User, LoginEnum> serviceResult = userService.checkUser(user);
session.setAttribute("userStatus", serviceResult.getEnumValue());
session.setAttribute("user", serviceResult.getData());
if (serviceResult.getEnumValue() == LoginEnum.Login) {
return "dashboard";
}
return "homme";
}
@RequestMapping(value = "/logout", method = RequestMethod.GET)
public String logout(HttpSession session) {
session.invalidate();
return "redirect:/";
}
@RequestMapping(value = "/register", method = RequestMethod.POST)
public @ResponseBody ServiceResult<User, AddEnum> register(@RequestBody String json) {
System.out.println("Register");
Gson gson = new Gson();
User user = gson.fromJson(json, User.class);
System.out.println(user.toString());
ServiceResult<User, AddEnum> serviceResult = userService.registerUser(user);
return serviceResult;
}
}
|
Java | UTF-8 | 325 | 1.929688 | 2 | [] | no_license | package com.lei.demo.repository;
import com.lei.demo.domain.OrderItem;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;
/**
* @author Chris
*/
public interface OrderItemRepository extends JpaRepository<OrderItem, Integer> {
List<OrderItem> findOrderItemsByOrder_Id(String id);
}
|
Python | UTF-8 | 504 | 3.25 | 3 | [] | no_license | def snail(snail_map):
    """Return the elements of a square matrix in clockwise spiral order."""
    if snail_map == [[]]:
        return []
    i = 0
    j = -1
    step = 1  # +1 while moving right/down, -1 while moving left/up
    counter = len(snail_map)
    answer = []
    while counter != 0:
        # Walk `counter` cells horizontally, then `counter - 1` vertically,
        # then reverse direction for the next, shorter pass.
        for k in range(counter):
            j += step
            answer.append(snail_map[i][j])
        counter -= 1
        for k in range(counter):
            i += step
            answer.append(snail_map[i][j])
        step *= -1
    return answer
print(snail([[1,2,3],
[4,5,6],
[7,8,9]]))
|
Java | UTF-8 | 2,242 | 2.484375 | 2 | [] | no_license | package mic.osm.broker.wsdl;
import java.io.Serializable;
public class ProductOfferingType implements Serializable,
Comparable<ProductOfferingType> {
private Product product;
private OfferingDetail offeringDetail;
private Additional additionals;
public ProductOfferingType() {
super();
}
public void setProduct(Product product) {
this.product = product;
}
public Product getProduct() {
return product;
}
public void setOfferingDetail(OfferingDetail offeringDetail) {
this.offeringDetail = offeringDetail;
}
public OfferingDetail getOfferingDetail() {
return offeringDetail;
}
public void setAdditionals(Additional additionals) {
this.additionals = additionals;
}
public Additional getAdditionals() {
return additionals;
}
@Override
public int compareTo(ProductOfferingType productOfferingType) {
AdditionalType additional = new AdditionalType("PRIORITY", "0");
int indexOwner =
this.getAdditionals().getAdditional().indexOf(additional);
int indexParameter =
productOfferingType.getAdditionals().getAdditional().indexOf(additional);
int priorityOwner =
indexOwner < 0 ? 0 : Integer.valueOf(this.getAdditionals().getAdditional().get(indexOwner).getValue());
int priorityParameter =
indexParameter < 0 ? 0 : Integer.parseInt(productOfferingType.getAdditionals().getAdditional().get(indexParameter).getValue());
        // Higher PRIORITY values sort first (this preserves the original descending comparison).
        return Integer.compare(priorityParameter, priorityOwner);
    }
    @Override
    public boolean equals(Object object) {
        if (!(object instanceof ProductOfferingType)) {
            return false;
        }
        final ProductOfferingType other = (ProductOfferingType) object;
        return product == null ? other.product == null : product.equals(other.product);
    }

    @Override
    public int hashCode() {
        // hashCode must stay consistent with equals, which compares only the product.
        return product == null ? 0 : product.hashCode();
    }
}
|
Markdown | UTF-8 | 520 | 2.765625 | 3 | [] | no_license | # ShadowClone
An experiment to simulate a "git-like" (or version-control-like) system for storing and using objects/docs using MongoDB.
##### Thoughts/Notes
Need support for: versioning, lineage (ancestry)
* Resource: repo, master branch
* Use of Resource: a non-master branch, off master
* A new version of resource: a commit (each edit/save operation creates a new version of the resource)
* Resource can be "used" at any version (default: latest)
* Changes in branches/usages cannot be "pushed" back to master
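A minimal in-memory sketch of the versioning model described above (all names here are hypothetical — a real implementation would persist these documents in MongoDB rather than a dict):

```python
import uuid


class VersionStore:
    """Toy version store: each save creates a new immutable version that
    points at its parent, giving every resource a lineage (ancestry)."""

    def __init__(self):
        self.versions = {}   # version_id -> {"data": ..., "parent": ...}
        self.branches = {}   # branch_name -> latest version_id

    def commit(self, branch, data):
        """Each edit/save operation creates a new version of the resource."""
        parent = self.branches.get(branch)
        version_id = str(uuid.uuid4())
        self.versions[version_id] = {"data": data, "parent": parent}
        self.branches[branch] = version_id
        return version_id

    def latest(self, branch):
        """A resource is 'used' at its latest version by default."""
        return self.versions[self.branches[branch]]["data"]

    def lineage(self, branch):
        """Walk parent pointers back to the root (newest first)."""
        chain, vid = [], self.branches.get(branch)
        while vid is not None:
            chain.append(vid)
            vid = self.versions[vid]["parent"]
        return chain


store = VersionStore()
v1 = store.commit("master", {"title": "doc", "body": "first draft"})
v2 = store.commit("master", {"title": "doc", "body": "second draft"})
print(store.latest("master"))   # newest version is served by default
print(store.lineage("master"))  # full ancestry, newest first
```

Branching off master would just mean seeding a new branch name with master's latest version id as the parent.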
|
Markdown | UTF-8 | 514 | 2.859375 | 3 | [] | no_license | # if&for&switch
#### range
* A range expression is evaluated only once, when the for statement begins executing
* The result of a range expression is copied; the object being iterated over is a copy of that result, not the original value
#### Relationship between switch and case
* Selection among the case clauses of a switch must be unambiguous: no sub-expression in one case may evaluate to the same value as a sub-expression in another case
* A case clause is selected as soon as the switch expression's value equals the value of any one of that case expression's sub-expressions
Java | UTF-8 | 4,896 | 1.875 | 2 | [
"Apache-2.0"
] | permissive | /*******************************************************************************
* Copyright 2011-2014 Sergey Tarasevich
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*******************************************************************************/
package com.nostra13.universalimageloader.sample.fragment;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.BaseAdapter;
import android.widget.ImageView;
import android.widget.ListView;
import android.widget.TextView;
import com.nostra13.universalimageloader.core.DisplayImageOptions;
import com.nostra13.universalimageloader.core.ImageLoader;
import com.nostra13.universalimageloader.core.display.FadeInBitmapDisplayer;
import com.nostra13.universalimageloader.core.display.RoundedBitmapDisplayer;
import com.nostra13.universalimageloader.core.listener.ImageLoadingListener;
import com.nostra13.universalimageloader.core.listener.SimpleImageLoadingListener;
import com.nostra13.universalimageloader.sample.Constants;
import com.nostra13.universalimageloader.sample.R;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
/**
* @author Sergey Tarasevich (nostra13[at]gmail[dot]com)
*/
public class ImageListFragment extends AbsListViewBaseFragment {
public static final int INDEX = 0;
String[] imageUrls = Constants.IMAGES;
DisplayImageOptions options;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
options = new DisplayImageOptions.Builder()
.showImageOnLoading(R.drawable.ic_stub)
.showImageForEmptyUri(R.drawable.ic_empty)
.showImageOnFail(R.drawable.ic_error)
.cacheInMemory(true)
.cacheOnDisk(true)
.considerExifParams(true)
.displayer(new RoundedBitmapDisplayer(20))
.build();
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
View rootView = inflater.inflate(R.layout.fr_image_list, container, false);
listView = (ListView) rootView.findViewById(android.R.id.list);
((ListView) listView).setAdapter(new ImageAdapter());
listView.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
startImagePagerActivity(position);
}
});
return rootView;
}
@Override
public void onDestroy() {
super.onDestroy();
AnimateFirstDisplayListener.displayedImages.clear();
}
private static class ViewHolder {
TextView text;
ImageView image;
}
class ImageAdapter extends BaseAdapter {
private LayoutInflater inflater;
private ImageLoadingListener animateFirstListener = new AnimateFirstDisplayListener();
ImageAdapter() {
inflater = LayoutInflater.from(getActivity());
}
@Override
public int getCount() {
return imageUrls.length;
}
@Override
public Object getItem(int position) {
return position;
}
@Override
public long getItemId(int position) {
return position;
}
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
View view = convertView;
final ViewHolder holder;
if (convertView == null) {
view = inflater.inflate(R.layout.item_list_image, parent, false);
holder = new ViewHolder();
holder.text = (TextView) view.findViewById(R.id.text);
holder.image = (ImageView) view.findViewById(R.id.image);
view.setTag(holder);
} else {
holder = (ViewHolder) view.getTag();
}
holder.text.setText("Item " + (position + 1));
ImageLoader.getInstance().displayImage(imageUrls[position], holder.image, options, animateFirstListener);
return view;
}
}
private static class AnimateFirstDisplayListener extends SimpleImageLoadingListener {
static final List<String> displayedImages = Collections.synchronizedList(new LinkedList<String>());
@Override
public void onLoadingComplete(String imageUri, View view, Bitmap loadedImage) {
if (loadedImage != null) {
ImageView imageView = (ImageView) view;
boolean firstDisplay = !displayedImages.contains(imageUri);
if (firstDisplay) {
FadeInBitmapDisplayer.animate(imageView, 500);
displayedImages.add(imageUri);
}
}
}
}
} |
JavaScript | UTF-8 | 316 | 2.59375 | 3 | [] | no_license | const randomInt = (max, min = 0) => Math.floor(Math.random() * (max - min + 1)) + min;
const getUID = () => Math.random().toString(36).substr(2, 9);
const updateObject = (oldObject, updatedProperties) => {
return {
...oldObject,
...updatedProperties,
};
};
export { randomInt, getUID, updateObject };
|
Python | UTF-8 | 658 | 4.0625 | 4 | [] | no_license | """
This code uses the last value of the list `rainbow` to print the last color of
the rainbow. Edit the third line below so that the last value of `rainbow` is
stored in `last_color`.

rainbow = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
# Store a value in last_color using rainbow
last_color =
print('The last color of the rainbow is {}'.format(last_color))
"""
rainbow = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
# Store a value in last_color using rainbow
last_color = rainbow[-1]
print('The last color of the rainbow is {}'.format(last_color)) |
Markdown | UTF-8 | 16,418 | 2.53125 | 3 | [] | no_license | # Loan example amount quarkus
The purpose of the project is to take a list of loan offers with rates and amounts, and then validate that a loan
request can be fulfilled based on that list of offers.
This could of course be done using a simple command line interface, and in a few hundred lines of code, but that would
not be enterprisey! On a more serious note it would also not be useable as a service deployed to a company's
infrastructure.
The other purpose of this project is to demonstrate and test out a bunch of modern java based tooling and see how that
might create a better workflow. These tools include [quarkus](https://quarkus.io/) and its integration
with [kafka](http://kafka.apache.org/). The project makes quite heavy use of reactive programming (which is offered by
quarkus) including [microprofile](https://microprofile.io/) and [mutiny](https://smallrye.io/smallrye-mutiny).
The project has been heavily influenced
by [red hats coffee shop demo - quarkus-cafe-demo](https://github.com/jeremyrdavis/quarkus-cafe-demo) and draws on much
of the [quarkus documentation](quarkus.io/guides/).
## Project structure
This project contains a number of services each of which performs a specific purpose. A depiction of the full
architecture can be seen below:

The project is made up of 5 modules: 2 services, a client, a shared domain library, and a set of test utilities.
There is also a set of three docker containers, 2 for running kafka (kafka and zookeeper) and 1 for a mongodb database.
### `core-service`
- receives loan offer requests on a `/loan-offer` endpoint, parses them and puts them on a kafka topic, `loan-offers-in`
- receives loan request requests on a `/loan-request` endpoint, parses them and puts them on a kafka
topic, `loan-requests-in`
- receives `loan-available-events` from the kafka topic `loans-available` and stores them in an in-memory map
- receives requests to get loans, either as a list of those that currently exist, or as a stream of server-sent events
> NB: This project uses the convention of "in" for commands. If there were events that were going explicitly for user
> consumption then the stream would be called out. It is not called that because `loans-available` would actually be an
> internal stream in a real world application, and instead you'd have something like `loans-notifications-out` for updating
> the user.
### `loan-offers-service`
- receives loan offer commands from a kafka topic `loan-offers-in` and stores them in the mongodb database
- receives loan request commands from a kafka topic `loan-requests-in`. Upon receiving this command the service reads
the loans database, calculates availability and puts an `loan-available-event` on the `loans-available` kafka topic.
- The service finds the lowest offers by querying the database ordered by rate and picking the first n loan offers
  needed to cover the requested balance.
- receives delete-record requests on a `/delete-records` endpoint. This deletes all records in the database, thus
setting it back to fresh.
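The offer-selection step described above can be sketched as follows. This is a simplified, in-memory illustration (the names and data shapes are hypothetical) — the real service gets this ordering from a rate-sorted MongoDB query rather than sorting in application code:

```python
from decimal import Decimal


def select_offers(offers, requested):
    """Pick the lowest-rate offers until the requested amount is covered.

    `offers` is a list of (rate, amount) tuples; Decimal is used for all
    money values, as the README recommends for precision. Returns the
    selected (rate, amount_used) tuples, or None if the pool of offers
    cannot cover the request.
    """
    selected = []
    remaining = Decimal(requested)
    # Cheapest rates first, mirroring the rate-ordered database query.
    for rate, amount in sorted(offers, key=lambda o: o[0]):
        if remaining <= 0:
            break
        used = min(amount, remaining)  # only draw what is still needed
        selected.append((rate, used))
        remaining -= used
    return selected if remaining <= 0 else None


offers = [(Decimal("0.075"), Decimal("640")),
          (Decimal("0.069"), Decimal("480")),
          (Decimal("0.071"), Decimal("520"))]
print(select_offers(offers, "1000"))
# -> [(Decimal('0.069'), Decimal('480')), (Decimal('0.071'), Decimal('520'))]
print(select_offers(offers, "5000"))  # -> None (pool too small)
```

If the pool cannot cover the request, the service would put a "loan unavailable" result on the `loans-available` topic instead.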
### `loan-client`
- Contains a `loan-client` that can be used to send requests to the services. See
the [Operation of loan client CLI](#Operation-of-loan-client-CLI) section below for more information on how to use
this.
### `loan-amount-domain`
- Contains models that would otherwise be duplicated across the other modules
### `test-utils`
- Some utilities used for starting kafka and getting producers and consumers for it for use in tests
## Developing
### Requirements
There are a number of requirements for developing the project. These include:
- java 11+ (used AdoptOpenJDK 11.0.11+9; note that there is an error with the spotless plugin when using java
  16, [see here](https://github.com/diffplug/spotless/issues/834))
- docker with docker-compose (used docker-desktop 3.3.3)
- maven 3+ (optional - can use the wrapper instead, used 3.8.3)
- mongosh (optional - only for validating operation within the mongo database, used 0.12.1)
### Building
The project can be built using the following command from the root of the project
```shell
./mvnw clean install
```
This will build the project and run all unit tests.
> NB: Please be aware that for the services in this project the jar is not an uber-jar (except for the `loan-client`)
> and so is not runnable by itself, and instead must be run using the quarkus run jar.
Jacoco coverage reports are available in each of the modules that have tests, in the `target/jacoco-report` folder.
#### Integration tests
You can also run the integration tests during the build using:
```shell
./mvnw clean install -Pintegration
```
Or run afterwards separately:
```shell
./mvnw verify -Pintegration
```
> NB: The integration tests use testcontainers which 1) take a while to pull from dockerhub,
> and 2) cause a problem if you have to use an internal mirror. Plus the fact they take 40s each means they are too slow
> to be in every build
#### Linting
Linting is provided by `spotless` ([link](https://github.com/diffplug/spotless)). This is not built into the ordinary
build process as a step. Rather it must be running using:
```shell
./mvnw spotless:apply
```
### Running the application in prod mode locally
To run application using docker run:
```shell
docker-compose -f setup/docker-compose.yaml up --remove-orphans --force-recreate --build
```
This will build and start the docker containers. Note that you must have built the jars beforehand
using `mvn clean install`!
When you're done you can clean up the containers. For cleaning up the containers afterwards use:
```shell
docker-compose -f setup/docker-compose.yaml down --remove-orphans
```
### Running the application in dev mode locally
To run the application in dev mode you first need to start the kafka stubs:
```shell
docker-compose -f setup/docker-compose-dev.yaml up --remove-orphans --force-recreate --build
```
You then need to open new terminals/shells and start the services, one in each terminal:
**Core service:**
```shell
cd ./core-service
../mvnw compile quarkus:dev
```
**Loan offers service**
```shell
cd ./loan-offers-service
../mvnw compile quarkus:dev
```
Alternatively, intellij has functionality for running quarkus configurations from 2020.3 onwards.
> **_NOTE:_** Quarkus has a Dev UI, which is available in dev mode only. This can be found at
> http://localhost:8080/q/dev/ for the `core-service` and http://localhost:8081/q/dev/ for the `loan-offers-service`
When you're done, you can stop the app processes and clean up the containers. For cleaning up the containers afterwards
use:
```shell
docker-compose -f setup/docker-compose-dev.yaml down --remove-orphans
```
### Sending requests and interacting with the services
You can use the loan client to make requests to the services.
To run the client, first `cd` into the `loan-client` directory. Command line instructions and help are available when
you run `./zopa-rate`. The options presented are as follows:
```
Usage: zopa-rate [-hV] [COMMAND]
-h, --help Show this help message and exit.
-V, --version Print version information and exit.
Commands:
send-offers Send a loan to the service
loan-request Create a request for a loan
list-available-loans List loans that are available and have been processed
on the service
reset-records Make a request to reset the currently stored records
```
#### Operation of loan client CLI
To send a csv file of loan offers, use the following command:
```shell
./zopa-rate send-offers -f offers.csv
```
A file of example loan offers is provided, called `offers.csv`
To send a loan-request run:
```shell
./zopa-rate loan-request -a 1700
```
This command will wait until getting a request back from the services.
> NB: You may have to exit this command with CTRL-C as it sets up a client to listen to a stream of server-sent events, and may take up to about 15 seconds to disconnect.
You can view all the loan-available-events recorded on the service using the `list-available-loans` command:
```shell
./zopa-rate list-available-loans
```
**_IMPORTANT:_** You must run the `reset-records` command in order to reset the loan-offers stored by the mongodb
database. This is so that you can run a different set of loan offers through the service.
#### Using the HTTP request file
There is also an HTTP request file that can be used directly in the `intellij` IDE to run requests. This is
the `./requests/loan_amount_example-endpoints.http` file.
#### Viewing received messages from server sent events in the browser
There is also a web page available at `http://localhost:8080` which listens for server sent events - loan available
events.
To see this in operation, open the web page and then send a few requests using the `loan-client`, you should see the
requests come in, albeit unformatted.
#### Viewing messages as they get sent over kafka
It is possible to view the logs for the containers using `docker logs <container_name>`, or by looking at the services'
stdout when running in dev mode. (Each request is logged.)
However, what if we want to see messages as they go across kafka? To do this, we first need to exec into the kafka
container:
```shell
docker exec -it setup_kafka_1 /bin/bash
```
> NB: This should be the name of the kafka container if you have used the commands above. If not find the name of the
> container with the strimzi/kafka image, using `docker ps` and replace the name above.
We can then `cd` into the `bin` folder and run the `kafka-console-consumer` as shown below:
```shell
cd bin
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic $TOPIC_NAME --from-beginning
```
Replace $TOPIC_NAME with any of the below:
- loans-available
- loan-requests-in
- loan-offers-in
> NB: Descriptions of each of the topics and what they are used for is above in the
> [Project structure](#Project-structure) section.
#### Viewing records in the database
You may also want to see what is going in the database. When the database is running, login with:
```shell
mongosh -u admin -p some-pw
```
Then use the following commands to inspect the objects that are stored in there.
```shell
use loans
db.auth("db-user", "some-password")
show collections
db.LoanOffer.find()
```
## Caveats and considerations
There are a great many caveats and things that could have been done better in the construction of this project. These
will be explained here.
### Architecture and design
There a number of points to be made on the architecture and design of this project:
- Obviously this is overly complicated for what it does, using kafka and a database and a server and all. However, I
  believe this is justified because:
- It shows the way that a real service might be structured, using real modern enterprise components - you wouldn't
have a CLI alone to do your business logic - how would your customers use that? You'd want it on a service so you
can easily connect to it from multiple locations and so that you have the application state all in one place.
- It shows how you might go about doing request/response with kafka using an id (borrowerId/requesterId in this
case, though this would be on a per-request basis in a real app)
    - It hints at CQRS a little - with commands and queries being separate, albeit with only an in-memory aggregate/view
and only for loan-available events
- In terms of design, I would probably do proper event storming to come up with the events.
- For doing a command line app, you'd probably want more validation of the file, the data, and the file types.
- The current way of deleting all records from the table is obviously a bit of a cop out because in reality you probably
wouldn't want to do this delete all at once, rather you'd delete single records when a member revokes a loan-offer,
and you'd want to have events for that too.
- Strings have been used almost throughout for currencies. The reason is that they can store an arbitrary degree of
  precision accurately (compared with floating point), and they are easier to convert to and from kafka messages.
  `BigDecimal` is used for anything to do with calculating money; alternatively it might be viable to use the
  java currency types, but `BigDecimal` allows for a higher degree of precision.
- With more time I'd probably put more builders on classes, and also make some of the accesses private and final. In
this case this wasn't done because it makes json serialisation more faffy.
### Deployments
- The way that the project is currently structured means that it is only really deployable locally. This would not be
the case in reality. Rather you'd have something like a helm chart, jenkinsfile or something else to deploy it with,
most likely to kubernetes using gitops with something like argocd.
- Secrets wouldn't just be stored in the properties files like that, they'd be stored in a secrets manager, either cloud
based or something like hashicorp vault. Those secrets would then be pulled on app startup.
### Testing rationale
- The level of testing is not appropriate for a project of this type. The testing that has been done is somewhat
indicative of what would be done, in terms of showing a bit of everything (including using various standard '
  enterprisey' libraries such as `assertj`, `equalsVerifier` and `mockito`), but not in terms of scope and coverage of the
whole project.
- Basically the only module that has been tested significantly is the `loan-offers-service` and, for unit tests,
the `loan-amount-domain`.
- For unit testing on the numeric functions the main routes through the functions have been tested, but I haven't done
tests for things like nulls, divides by zeros or various other kind of edge cases/exceptions.
- In a real corporate context this would be probably done by an in house library and you wouldn't be doing all of
these calculations yourself.
- For integration tests I've tried to show some of quarkus' functionality with quarkus test and test resources, as well
  as using test containers. This allows each module to be spun up in isolation for testing.
- There is an additional type of testing, which might be termed "acceptance" testing, "end-user" testing, or "system"
testing. This kind of test would use something like rest-assured, or cucumber. This type of testing has not been done,
though it might be argued that it is partially done by the fact that the `loan-client` exists.
## Additional future directions
There are a number of additional directions that this project could be taken in.
- You'd want to do way more validation on the balances and accounts if you were going to do loans properly. You might
  want to have some kind of double-entry bookkeeping system when taking payments from one member.
- On a more simple note you'd probably want to do hibernate bean validation for certain input and output
requirements.
- Maybe you'd have libraries for things like the domain model, and not put it in the same repo as the actual
services. You'd most likely have libraries to do any sort of calculation and validation with money - both for
consistency across the company and for accuracy in implementation.
- The rest of the updates would likely be adding additional functionality which would likely invole more services.
- You might want to do something with the fulfilment of those loans offered, which would most likely involve
accounts and payments services. You'd have to have a mechanism for a customer making a request, and then have a
fulfilment flow where the borrower takes up the offer, and the lender who accepts as well. You'd also have to
figure out other things like whether you offer out the same loans to multiple people at the same time, or when
they have been offered out do those loans go into some kind of "offer-pending" state? There are many more
considerations for doing actual fulfilment. You might want to have a separate service itself for doing loan
fulfilment.
- The other thing that seems missing is the notion of a member, and all the stuff that surrounds that, like
authentication. If there was a member service, then you'd be able to say that a specific member has requested a
loan, and do things like only return loans to members that they have requested (not all loans available as now).
You'd be able to have sessions and do more on the front end in this regard.
|
Java | UTF-8 | 753 | 3.0625 | 3 | [] | no_license | package com.ar.puzzles.tc;
import java.util.StringTokenizer;
/**
* SRM 172 DIV 2
*
* @author Alan Ross
*/
public class BadClock
{
public double nextAgreement( String trueTime, String skewTime, int hourlyGain )
{
int t0 = parseTime( trueTime );
int t1 = parseTime( skewTime );
int h12 = 12 * 60 * 60;
if( hourlyGain < 0 )
{
hourlyGain *= -1;
int tmp = t0;
t0 = t1;
t1 = tmp;
}
while( t0 < t1 )
{
t0 += h12;
}
return ( double ) ( t0 - t1 ) / hourlyGain;
}
private int parseTime( String s )
{
StringTokenizer st = new StringTokenizer( s, ":" );
return (
Integer.parseInt( st.nextToken() ) * 60 * 60 +
Integer.parseInt( st.nextToken() ) * 60 +
Integer.parseInt( st.nextToken() )
);
}
} |
Shell | UTF-8 | 8,201 | 3.9375 | 4 | [] | no_license | #!/bin/bash
#
# Copyright (C) 2017 Intel Corporation
# Author: 2017 Lv Zheng <lv.zheng@intel.com>
#
# NAME:
# acpimt.sh - launch multiple threads running Linux kernel ACPI
# transactions.
#
# SYNOPSIS:
# acpimt.sh [-c amlfile]
# [-e acpiname]
# [-f acpifile]
# [-m kernel_module]
# [-d acpidbg] [-t seconds]
#
# DESCRIPTION:
# This program is used as a test facility validating multi-thread
# support in Linux ACPI subsystem. It can launch ACPI transactions
# as test cases using child processes and executes them in parallel
# endlessly. Test processes can be terminated by Ctrl-C.
#
# The launchable tests can be:
# -c: Use /sys/kernel/debug/acpi/custom_method to customize
# control methods (CONFIG_ACPI_CUSTOM_METHOD) repeatedly.
# -e: Use acpidbg to evaluate a named object w/o arguments
# (CONFIG_ACPI_DEBUGGER_USER) repeatedly.
# -f: Use cat to read a sysfs/procfs file repeatedly.
# -m: Use modprobe to load/unload a module repeatedly.
#
# Options:
# -d: Full path to acpidbg (in case it cannot be reached via
# %PATH%.
# -t: Seconds to sleep between test operations.
########################################################################
# Global Settings
########################################################################
SLEEPSEC=10
TEMPFILE=`mktemp`
ACPIDBG=acpidbg
ACPINAMES=
ACPIFILES=
AMLFILES=
KERNMODS=
########################################################################
# Usages
########################################################################
usage() {
echo "Usage: `basename $0` [-d acpidbg] [-t second]"
echo " [-c amlfile]"
echo " [-e acpiname]"
echo " [-f acpifile]"
echo " [-m kernel_module]"
echo ""
echo "This program is used as a test facility validating multi-thread"
echo "support in Linux ACPI subsystem. It can launch ACPI transactions"
echo "as test cases using child processes and executes them in parallel"
echo "endlessly. Test processes can be terminated by Ctrl-C."
echo ""
echo "Test cases are:"
echo "-c: Use /sys/kernel/debug/acpi/custom_method to customize"
echo " control methods (CONFIG_ACPI_CUSTOM_METHOD) repeatedly."
echo "-e: Use acpidbg to evaluate a named object w/o arguments"
echo " (CONFIG_ACPI_DEBUGGER_USER) repeatedly."
echo "-f: Use cat to read a sysfs/procfs file repeatedly."
echo "-m: Use modprobe to load/unload a module repeatedly."
echo ""
echo "Options are:"
echo "-d: Full path to acpidbg (in case it cannot be reached via"
echo " %PATH%."
echo "-t: Seconds to sleep between test operations."
}
fatal_usage() {
usage
exit 1
}
########################################################################
# Loadable Module
########################################################################
find_module() {
curr_modules=`lsmod | cut -d " " -f1`
for m in $curr_modules; do
if [ "x$m" = "x$1" ]; then
return 0
fi
done
return 1
}
remove_module() {
find_module $1
if [ $? -eq 0 ]; then
echo "Removing $1 ..."
modprobe -r $1
if [ $? -ne 0 ]; then
echo "Failed to rmmod $1."
return 1
fi
fi
return 0
}
insert_module() {
find_module $1
if [ $? -ne 0 ]; then
echo "Inserting $1 ..."
modprobe $1
if [ $? -ne 0 ]; then
echo "Failed to insmod $1."
return 1
fi
fi
return 0
}
########################################################################
# Endless Test Control
########################################################################
endless_term1() {
echo "Terminating parent process..."
echo "stop" > $TEMPFILE
}
endless_term2() {
echo "stopped" > $TEMPFILE
}
endless_stop1() {
if [ ! -f $TEMPFILE ]; then
return 0
fi
cat $TEMPFILE | grep "stop" > /dev/null
}
endless_stop2() {
if [ ! -f $TEMPFILE ]; then
return 0
fi
cat $TEMPFILE | grep "stopped" > /dev/null
}
endless_exit() {
wait
remove_module acpi_dbg
rm -f $TEMPFILE
}
endless_init() {
echo > $TEMPFILE
if [ ! -d /sys/kernel/debug/acpi ]; then
mount -t debugfs none /sys/kernel/debug
fi
if [ ! -x $ACPIDBG ]; then
echo "$ACPIDBG is not executable."
return 1
fi
if [ ! -f /sys/kernel/debug/acpi/custom_method ]; then
echo "ACPI_CUSTOM_METHOD is not configured."
return 1
fi
insert_module acpi_dbg || return 1
trap endless_term1 2 3 15
}
########################################################################
# Test Facility - Namespace Object Evaluation
########################################################################
acpieval() {
while :
do
endless_stop1
if [ $? -eq 0 ]; then
echo "Terminating child process - acpieval $1..."
break
fi
echo "-----------------------------------"
echo "evaluate $1"
$ACPIDBG -b "ex $1"
echo "-----------------------------------"
sleep $SLEEPSEC
done
endless_term2
}
########################################################################
# Test Facility - Method Customization
########################################################################
acpicust() {
while :
do
endless_stop1
if [ $? -eq 0 ]; then
echo "Terminating child process - acpicust $1..."
break
fi
echo "==================================="
echo "customize $1"
cat $1 > /sys/kernel/debug/acpi/custom_method
echo "==================================="
sleep $SLEEPSEC
done
endless_term2
}
########################################################################
# Test Facility - Kernel Exported Files
########################################################################
acpicat() {
while :
do
endless_stop1
if [ $? -eq 0 ]; then
echo "Terminating child process - acpicat $1..."
break
fi
echo "+++++++++++++++++++++++++++++++++++"
echo "concatenate $1"
cat $1
echo "+++++++++++++++++++++++++++++++++++"
sleep $SLEEPSEC
done
endless_term2
}
########################################################################
# Test Facility - Dynamic Module Load/Unload
########################################################################
acpimod() {
res=0
while :
do
endless_stop1
if [ $? -eq 0 ]; then
echo "Terminating child process - acpimod $1..."
break
fi
find_module $1
if [ $? -eq 0 ]; then
echo "***********************************"
echo "remove $1"
remove_module $1
res=$?
echo "***********************************"
if [ $res -ne 0 ]; then
echo "Terminated child process - acpimod $1 (rmmod)."
exit
fi
else
echo "***********************************"
echo "insert $1"
insert_module $1
res=$?
echo "***********************************"
if [ $res -ne 0 ]; then
echo "Terminated child process - acpimod $1 (insmod)."
exit
fi
fi
sleep $SLEEPSEC
done
endless_term2
}
########################################################################
# Script Entry Point
########################################################################
while getopts "c:d:e:f:hm:t:" opt
do
case $opt in
c) AMLFILES="$AMLFILES $OPTARG";;
e) ACPINAMES="$ACPINAMES $OPTARG";;
d) ACPIDBG=$OPTARG;;
f) ACPIFILES="$ACPIFILES $OPTARG";;
m) KERNMODS="$KERNMODS $OPTARG";;
t) SLEEPSEC=$OPTARG;;
h) usage;;
?) fatal_usage;;
esac
done
shift $(($OPTIND - 1))
# Startup
endless_init || exit 2
# Perform sanity checks
for amlfile in $AMLFILES; do
if [ ! -f $amlfile ]; then
echo "$amlfile is missing."
exit 1
fi
done
for acpifile in $ACPIFILES; do
if [ ! -f $acpifile ]; then
echo "$acpifile is missing."
exit 1
fi
done
# Launch test cases
for amlfile in $AMLFILES; do
acpicust $amlfile &
done
for acpiname in $ACPINAMES; do
acpieval $acpiname &
done
for acpifile in $ACPIFILES; do
acpicat $acpifile &
done
for kmod in $KERNMODS; do
acpimod $kmod &
done
# Wait for child processes
while :
do
endless_stop2
if [ $? -eq 0 ]; then
echo "Terminated parent process."
break
fi
endless_stop1
if [ $? -eq 0 ]; then
sleep $SLEEPSEC
echo "Force terminating parent process..."
endless_term2
else
sleep $SLEEPSEC
fi
done
# Cleanup
endless_exit
|
Python | UTF-8 | 4,990 | 2.6875 | 3 | [] | no_license | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
GET /v2.0/subnets/{subnet_id}
Show subnet
Displays information for the specified subnet.
"""
"""
Example run
bash-4.4$ ./k5-show-subnet.py 38701f66-4610-493f-9c15-78f81917f362
GET /v2.0/subnets/{subnet_id}
=========== ====================================
name iida-subnet-1
id 38701f66-4610-493f-9c15-78f81917f362
az jp-east-1a
cidr 192.168.0.0/24
gateway_ip 192.168.0.1
tenant_id a5001a8b9c4a4712985c11377bd6d4fe
network_id 93a83e0e-424e-4e7d-8299-4bdea906354e
enable_dhcp True
=========== ====================================
"""
import json
import logging
import os
import sys
def here(path=''):
  """Converts a relative path to an absolute path and returns it."""
if getattr(sys, 'frozen', False):
    # When frozen with cx_Freeze, the path is relative to the executable
return os.path.abspath(os.path.join(os.path.dirname(sys.executable), path))
else:
    # Normally the path is relative to this file's location
return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
# Make the Python scripts placed in the lib folder importable
if not here("../lib") in sys.path:
sys.path.append(here("../lib"))
if not here("../lib/site-packages") in sys.path:
sys.path.append(here("../lib/site-packages"))
try:
from k5c import k5c
except ImportError as e:
  logging.exception("Failed to import the k5c module: %s", e)
sys.exit(1)
try:
from tabulate import tabulate
except ImportError as e:
  logging.exception("Failed to import the tabulate module: %s", e)
sys.exit(1)
#
# Access the API
#
def access_api(subnet_id=""):
  """Accesses the REST API."""
  # Endpoint
url = k5c.EP_NETWORK + "/v2.0/subnets/" + subnet_id
  # Instantiate the Client class
c = k5c.Client()
  # Issue a GET request and obtain the result object
r = c.get(url=url)
return r
#
# Display the result
#
def print_result(result):
  """Displays the result."""
  # The status code is stored under the 'status_code' key
  status_code = result.get('status_code', -1)
  # If the status code is abnormal
if status_code < 0 or status_code >= 400:
print(json.dumps(result, indent=2))
return
  # The payload is stored under the 'data' key
data = result.get('data', None)
if not data:
logging.error("no data found")
return
  # The subnet information is stored as an object under the 'subnet' key of the data object
#"data": {
# "subnet": {
# "cidr": "192.168.0.0/24",
# "name": "iida-subnet-1",
# "availability_zone": "jp-east-1a",
# "allocation_pools": [
# {
# "start": "192.168.0.2",
# "end": "192.168.0.254"
# }
# ],
# "ip_version": 4,
# "tenant_id": "a5001a8b9c4a4712985c11377bd6d4fe",
# "gateway_ip": "192.168.0.1",
# "dns_nameservers": [],
# "network_id": "ce5ae176-3478-45c0-9a8f-59975e4ba28d",
# "enable_dhcp": true,
# "host_routes": [],
# "id": "8ed6dd7b-2ae3-4f68-81c9-e5d9e074b67a"
# }
#},
sn = data.get('subnet', {})
  # Convert to an array for display
subnets = []
subnets.append(['name', sn.get('name', '')])
subnets.append(['id', sn.get('id', '')])
subnets.append(['az', sn.get('availability_zone', '')])
subnets.append(['cidr', sn.get('cidr', '')])
subnets.append(['gateway_ip', sn.get('gateway_ip', '')])
subnets.append(['tenant_id', sn.get('tenant_id', '')])
subnets.append(['network_id', sn.get('network_id', '')])
subnets.append(['enable_dhcp', sn.get('enable_dhcp', '')])
  # Display the subnet information
print("GET /v2.0/subnets/{subnet_id}")
print(tabulate(subnets, tablefmt='rst'))
if __name__ == '__main__':
import argparse
def main():
    """Main function."""
parser = argparse.ArgumentParser(description='Shows information for a specified subnet.')
parser.add_argument('subnet_id', metavar='subnet-id', help='Subnet id.')
parser.add_argument('--dump', action='store_true', default=False, help='Dump json result and exit.')
args = parser.parse_args()
subnet_id = args.subnet_id
dump = args.dump
if subnet_id == '-':
import re
regex = re.compile('^([a-f0-9]{8}-?[a-f0-9]{4}-?4[a-f0-9]{3}-?[89ab][a-f0-9]{3}-?[a-f0-9]{12}).*', re.I)
for line in sys.stdin:
match = regex.match(line)
if match:
uuid = match.group(1)
result = access_api(subnet_id=uuid)
print("subnet_id: {}".format(uuid))
print_result(result)
print("")
sys.stdout.flush()
return 0
    # Execute
result = access_api(subnet_id=subnet_id)
    # Inspect the contents
if dump:
print(json.dumps(result, indent=2))
return 0
    # Display
print_result(result)
return 0
  # Run
sys.exit(main())
|
Python | UTF-8 | 1,516 | 2.75 | 3 | [] | no_license | from race.hashes import *
import numpy as np
import argparse
import sys
import os
''' Tool to evaluate ground-truth KDEs
'''
parser = argparse.ArgumentParser(description = "Evaluate ground truth KDEs. Produces a results file data.gtruth")
parser.add_argument("data", help="npy file with (n x d) data entries")
parser.add_argument("queries",help="npy file with (m x d) queries")
parser.add_argument("kernel_id", type=int, help="0: L2 LSH kernel, 1: Angular kernel")
parser.add_argument("bandwidth", type=float, help="density estimate bandwidth")
args = parser.parse_args()
# Gaussian kernel:
# kernel = lambda x,y,w : np.exp(-1.0/(2*w) * np.linalg.norm(x - y)**2)
if args.kernel_id == 0:
kernel = lambda x,y,w : P_L2(np.linalg.norm(x-y),w)
elif args.kernel_id == 1:
kernel = lambda x,y,w : P_SRP(x,y)**(int(w))
else:
print("Unsupported kernel id.")
sys.exit()
dataset = np.load(args.data)
queries = np.load(args.queries)
NQ,d = queries.shape
N,d = dataset.shape
print("Processing ground truth for",NQ," queries and",N," dataset vectors")
sys.stdout.flush()
results = np.zeros(NQ)
for j,data in enumerate(dataset):
for i,query in enumerate(queries):
results[i] += kernel(data,query,args.bandwidth)
if j % 100 == 0:
sys.stdout.write('\r')
sys.stdout.write('Progress: {0:.4f}'.format(j/N * 100)+' %')
sys.stdout.flush()
results = results / N
output_filename = os.path.splitext(args.data)[0]+'.gtruth'
np.savetxt(output_filename, results, delimiter=',')
|
Python | UTF-8 | 1,696 | 3.015625 | 3 | [] | no_license | import numpy as np
import matplotlib.pyplot as pt
iter = 50000
learning_rate = 0.2
data = np.array([[3,1.5,1],
[2,1,0],
[4,1.5,1],
[3,1,0],
[3.5,0.5,1],
[2,0.5,0],
[5.5,1,1],
[1,1,0]],dtype=np.float64)
#Separate inputs from outputs
x = data[:,[0,1]]
y = data[:,2]
unknown = np.array([4.5, 1],dtype=np.float64)
w1 = np.random.random()
w2 = np.random.random()
b = np.random.random()
def sigmoid(x):
    return 1/(1+np.exp(-x))
def sigmoid_deriv(x):
return sigmoid(x) - (sigmoid(x) **2)
#plot scatter
for i in range(0,len(x),1):
color = ''
if y[i] == 0:
color = 'b'
if y[i] == 1:
color = 'r'
pt.scatter(x[i,0],x[i,1],c=color)
pt.grid()
pt.axis([0,6,0,2])
#Training loop
#Iterate over timestep
N = float(len(data))
for i in range(0,iter,1):
    #iterate over data series
    SSE = 0
    #reset the gradient accumulators at the start of each pass,
    #otherwise gradients from all previous iterations keep accumulating
    derivSSE_w1 = 0
    derivSSE_w2 = 0
    derivSSE_b = 0
    for j in range (0,len(data),1):
z = x[j,0]*w1 +x[j,1]*w2 + b
y_mod = sigmoid(z)
SSE += (y[j] - y_mod)**2
#print('At point x(0)=',x[j,0],'SSE=',SSE)
derivSSE_w1 += (-2/N) * (y[j]-y_mod) * x[j,0]
derivSSE_w2 += (-2/N) * (y[j]-y_mod) * x[j,1]
derivSSE_b += (-2/N) * (y[j]-y_mod)
w1 = w1 - (derivSSE_w1*learning_rate)
w2 = w2 - (derivSSE_w2*learning_rate)
b = b - (derivSSE_b*learning_rate)
# if i%100 == 0:
# print('At point x(0)=',x[j,0],'SSE=',SSE)
if i%1000 ==0:
print('MSE at i of ',i,'is =',SSE/N)
print('w1=',w1,'w2=',w2,'b=',b)
|
PHP | UTF-8 | 450 | 2.859375 | 3 | [] | no_license | <?php
class DatabaseFactory {
public static function getDatabaseInstance() {
global $db;
$dbConfigName = $db['dbserver'];
switch($dbConfigName) {
case 'mysql' :
return Mysql::getInstance($db);
case 'pgsql' :
return Pssql::getInstance($db);
default :
die('Provide database server name in database_config.php');
}
}
} |
TypeScript | UTF-8 | 2,379 | 2.828125 | 3 | [
"MIT"
] | permissive | import { Type } from "schema-verify";
import {
integerVerify,
pageVerify,
naturalVerify,
limitInfoVerify,
} from "../verify/builder/index";
import ErrMsg from "../error/builder/index";
interface LimitInfo {
offset: number;
step: number;
}
class Limit {
protected limitInfo: LimitInfo = {} as LimitInfo;
limitBuild(query: string): string {
const limitInfo: LimitInfo = this.limitInfo;
if (!limitInfoVerify(limitInfo)) {
return query;
}
const offset: number = limitInfo.offset;
const step: number = limitInfo.step;
if (offset === 0) {
return `${query} LIMIT ${step}`;
}
if (step === -1) {
return `${query} OFFSET ${offset}`;
}
return `${query} LIMIT ${step} OFFSET ${offset}`;
}
limit(offset: number, step?: number): void {
if (!integerVerify(offset)) {
throw new Error(ErrMsg.errorOffset);
}
if (Type.undefined.isNot(step) && !integerVerify(step)) {
throw new Error(ErrMsg.errorStep);
}
let limitInfo: LimitInfo = {} as LimitInfo;
if (Type.number.is(offset) && Type.number.is(step)) {
step = step || 0;
limitInfo = {
offset,
step,
};
}
if (Type.number.is(offset) && !Type.number.is(step)) {
limitInfo = {
offset: 0,
step: offset,
};
}
this.limitInfo = limitInfo;
}
offset(offset: number): void {
if (!integerVerify(offset)) {
throw new Error(ErrMsg.errorOffset);
}
this.limitInfo = {
offset: offset,
step: -1,
};
}
step(step: number): void {
if (!integerVerify(step)) {
throw new Error(ErrMsg.errorStep);
}
this.limitInfo = {
offset: 0,
step,
};
}
paging(page: number, size: number): void {
if (!pageVerify(page)) {
throw new Error(ErrMsg.errorPage);
}
if (!naturalVerify(size)) {
throw new Error(ErrMsg.errorSize);
}
const offset = (page - 1) * size;
this.limitInfo = {
offset,
step: size,
};
}
}
export default Limit;
|
JavaScript | UTF-8 | 3,996 | 2.515625 | 3 | [] | no_license | import React, { Component, Fragment } from 'react';
import RandomTop from './RandomTop';
import RandomBottom from './RandomBottom';
import { bottomFilterData, topFilterData } from '../config/properties';
import { filterItemsByFilters } from '../util/filterUtils';
const getRandom = (min, max) => Math.floor(Math.random() * (max - min + 1)) + min;
const midi = `Midi Skirt`;
const maxi = `Maxi Skirt`;
const pants = `Pants`;
const leggings = `Leggings`;
export default class GenerateOutfit extends Component {
constructor() {
super();
this.state = {
generatedTop: {},
generatedBottom: {},
keepTop: false,
keepBottom: false,
bottomFilters: bottomFilterData.map(item => item.name),
topFilters: topFilterData.map(item => item.name),
};
}
clickHandler = (randomTop, randomBottom) => {
const { keepTop, keepBottom } = this.state;
if (keepBottom === true && keepTop === true) this.setState({});
else if (keepTop) this.setState({ generatedBottom: randomBottom });
else if (keepBottom) this.setState({ generatedTop: randomTop });
else this.setState({ generatedTop: randomTop, generatedBottom: randomBottom });
}
handleTop = () => {
this.setState(prevState => ({ keepTop: !prevState.keepTop }));
}
handleBottom = () => {
this.setState(prevState => ({ keepBottom: !prevState.keepBottom }));
}
handleFilter = ({ target: { name } }) => {
this.setState((previousState) => {
let bottomFilters = [...previousState.bottomFilters];
if (bottomFilters.includes(name)) {
bottomFilters = bottomFilters.filter(item => item !== name);
} else {
bottomFilters.push(name);
}
return { bottomFilters };
});
}
handleFilterTop = ({ target: { name } }) => {
this.setState((previousState) => {
let topFilters = [...previousState.topFilters];
if (topFilters.includes(name)) {
topFilters = topFilters.filter(item => item !== name);
} else {
topFilters.push(name);
}
return { topFilters };
});
}
render() {
const { tops, bottoms } = this.props;
const { bottomFilters, topFilters } = this.state;
// filter tops & bottoms depending on checkboxes
const filteredTops = filterItemsByFilters(topFilters, tops);
const filteredBottoms = filterItemsByFilters(bottomFilters, bottoms);
const {
generatedTop, generatedBottom, keepTop, keepBottom,
} = this.state;
const randomTop = filteredTops[getRandom(0, filteredTops.length - 1)];
const randomBottom = filteredBottoms[getRandom(0, filteredBottoms.length - 1)];
return (
<div style={{ marginBottom: `3rem` }}>
<h2
style={{ marginBottom: `1.5rem`, marginTop: `2rem` }}
className="grey ui header"
>Generate An Outfit
</h2>
<div>
<button
type="button"
className="ui yellow button"
style={{ marginBottom: `1.5rem` }}
onClick={() => this.clickHandler(randomTop, randomBottom)}
>{generatedTop.id ? `Refresh` : `Go`}
</button>
<div className="ui center middle aligned centered grid">
{generatedTop.id
&& (
<RandomTop
{...this.state}
handleTop={this.handleTop}
handleFilterTop={this.handleFilterTop}
/>
)}
{generatedBottom.id
&& (
<div className="one wide centered column">
<div className="centered column">
<i className="grey huge plus icon" />
</div>
</div>
)}
{generatedBottom.id
&& (
<RandomBottom
{...this.state}
handleBottom={this.handleBottom}
handleFilter={this.handleFilter}
/>
)}
</div>
</div>
</div>
);
}
}
|
JavaScript | UTF-8 | 764 | 2.53125 | 3 | [] | no_license | var colors = require('colors');
var mongoose = require('mongoose');
var Employee = mongoose.model('Employee');
exports.getEmployees = getEmployees;
exports.getEmployee = getEmployee;
function getEmployees (callback) {
Employee.find().sort('name.last').exec(callback);
}
function getEmployee (employeeId ,callback){
Employee.findOne({
id:employeeId
}).populate('team').exec(callback);
}
// Legacy in-memory implementation, disabled: it shadowed the
// mongoose-backed getEmployee above, and the stray setTimeout referenced
// variables (callback, employeeDb) that are undefined at module scope.
// setTimeout(function(){
//   callback(null,employeeDb);
// },500);
// function getEmployee (employeeId,callback){
//   getEmployees(function(error,data){
//     if(error){
//       return callback(error);
//     }
//     var result = data.find(function(item){
//       return item.id === employeeId;
//     });
//     callback(null,result);
//   });
// }
Java | UTF-8 | 879 | 2.078125 | 2 | [] | no_license | package com.lyq.bean;
public class Type {
private int typeid;
private String typename;
private String rhythm;
private String origin;
private String rep;
private String artist;
public int getTypeid() {
return typeid;
}
public void setTypeid(int typeid) {
this.typeid = typeid;
}
public String getTypename() {
return typename;
}
public void setTypename(String typename) {
this.typename = typename;
}
public String getRhythm() {
return rhythm;
}
public void setRhythm(String rhythm) {
this.rhythm = rhythm;
}
public String getOrigin() {
return origin;
}
public void setOrigin(String origin) {
this.origin = origin;
}
public String getRep() {
return rep;
}
public void setRep(String rep) {
this.rep = rep;
}
public String getArtist() {
return artist;
}
public void setArtist(String artist) {
this.artist = artist;
}
} |
Java | UTF-8 | 939 | 2.71875 | 3 | [] | no_license | package com.goit.goitonline.module5ex1;
import junit.framework.TestCase;
import org.junit.Test;
/**
* Created by GRSV on 20.04.2016.
*/
public class GetMinOrMaxElementsInIntegerArrayTest extends TestCase {
@Test
public void testGetMinElementsInIntegerArray() throws Exception {
GetMinOrMaxElementsInIntegerArray minMaxInIntArray = new GetMinOrMaxElementsInIntegerArray();
int[] numbersArray = {-5,70,21,23,220,-46,33};
int result = -46;
assertEquals(result, minMaxInIntArray.getMinElementsInIntegerArray(numbersArray));
}
@Test
public void testGetMaxElementsInIntegerArray() throws Exception {
GetMinOrMaxElementsInIntegerArray minMaxInIntArray = new GetMinOrMaxElementsInIntegerArray();
int[] numbersArray = {-5,70,21,23,220,-46,33};
int result = 220;
assertEquals(result, minMaxInIntArray.getMaxElementsInIntegerArray(numbersArray));
}
}
|
Shell | UTF-8 | 5,131 | 3.609375 | 4 | [
"Apache-2.0"
] | permissive | #!/bin/bash
VERBOSE=0
TIME=
msg()
{
if [ "$VERBOSE" -ne 0 ] ; then
echo "$*"
fi
}
#default location to start searching for files.
BASE_DIR=$PWD
PRUNE_FILES='
-name *~ -prune -o
-name .git* -prune -o
-name .svn -prune -o
-name *.o -prune -o
-name *.gif -prune -o
-name *.jpg -prune -o
-name *.png -prune -o
-name *.svn -prune -o
-name *.ttf -prune -o
-name *.woff* -prune -o
-name *.jar -prune -o
-name *.css -prune -o
-name *.tgz -prune -o
-name *.zip -prune -o
-name *.gz -prune -o
-name *.tar.gz -prune -o
-name *.pyc -prune -o
-name cscope.* -prune'
# -path */target -prune
# -name *.d -prune'
# possible prunes
# resources/node/node_modules/versionator/node_modules/lodash/vendor/benchmark.js
INCLUDE_FILES='
-name *.[cshSylxi] -o
-name *.cc -o
-name Makefile* -o
-name GNUmakefile* -o
-name config.* -o
-name files.* -o
-name *.js -o
-name *.asp -o
-name *.java -o
-name *.cpp -o
-name *.conf -o
-name *_feature_def -o
-name *.xml -o
-name *.xsd -o
-name *.dict -o
-name *.sh -o
-name *.pl -o
-name *.py -o
-name *.ft -o
-name *.ks -o
-name *.exp -o
-name *.post -o
-name *.php -o
-name *.txt
'
INCLUDE_FILES=' -type f '
# add all files that match find filter starting at BASE_DIR.
find_all_cscope_files()
{
echo "Creating full index of files."
set -f
find $BASE_DIR -type f $PRUNE_FILES -o \
\( \
$INCLUDE_FILES \
\) \
-exec echo {} \; >> $FIND_FILES
if [ "$?" -ne 0 ] ; then
exit 1
fi
set +f
FILE_CNT=`sed -n '$=' $FIND_FILES`
echo "find cscope files: $FILE_CNT"
}
list_all_files()
{
echo "Creating list of all files."
set -f
find $BASE_DIR -type f $PRUNE_FILES -o -print >> $ALL_FILES
set +f
FILE_CNT=`sed -n '$=' $ALL_FILES`
echo "all files: $FILE_CNT"
}
add_missing_files()
{
diff -u0 $ALL_FILES $FIND_FILES | grep "^-" | cut -c 2- | tr -d '"' | grep -v "^-" > $MISSED_FILES
ignored_file_cnt=$(sed -n '$=' $MISSED_FILES)
echo "diff all/cscope files: $ignored_file_cnt"
FILES=$(cat $MISSED_FILES)
for file in $FILES; do
file "$file" >> $MISSED_FILES_TYPES
done
grep ASCII $MISSED_FILES_TYPES > $SUGGESTED_FILES.tmp
ascii_file_cnt=$(sed -n '$=' $SUGGESTED_FILES.tmp)
echo "ascii files: $ascii_file_cnt"
grep -v ASCII $MISSED_FILES_TYPES > $IGNORED_FILES_TYPES
non_ascii_file_cnt=$(sed -n '$=' $IGNORED_FILES_TYPES)
echo "non-ascii files: $non_ascii_file_cnt"
rm -f $SUGGESTED_FILES
while read file; do
tmp=$(echo "$file" | sed -e 's/\(.*\):.*/\1/')
echo $tmp >> $SUGGESTED_FILES
done < $SUGGESTED_FILES.tmp
cat $FIND_FILES | sort > $CSCOPE_FILES
cat $SUGGESTED_FILES | sort >> $CSCOPE_FILES
cscope_file_cnt=$(sed -n '$=' $CSCOPE_FILES)
echo "cscope files: $cscope_file_cnt"
}
build_db()
{
cd $BASE_DIR && eval $TIME cscope -b -q -i $CSCOPE_FILES
}
usage()
{
prog=$(basename $0)
echo "$prog [-h] | [-d <BASE DIR>] [-v]"
echo
echo " -h print this help message"
echo " -d option allows user to select which dir to start indexing files."
echo " Default is to start from $BASE_DIR"
echo " -v print extra msgs while indexing files."
echo ""
}
parseargs()
{
while getopts "vhd:" flag
do
#echo "flag [$flag] optind [$OPTIND] optarg [$OPTARG] "
case "$flag" in
v)
VERBOSE=1
TIME=time
;;
d) # trim white space before assigning
BASE_DIR=$(echo $OPTARG | sed -e 's/^[ \t]*//;s/[ \t]*$//')
;;
:) echo "Option $OPTARG missing value"
usage
exit 1
;;
h) usage
exit 0
;;
*) echo "Unknown option $OPTARG"
usage
exit 1
;;
esac
done
}
# #########################################################################
# MAIN
#
ALL_FILES=$BASE_DIR/cscope.all-files.txt
FIND_FILES=$BASE_DIR/cscope.find-files.txt
MISSED_FILES=$BASE_DIR/cscope.missed-files.txt
MISSED_FILES_TYPES=$BASE_DIR/cscope.missed-files-types.txt
IGNORED_FILES_TYPES=$BASE_DIR/cscope.ignored-files-types.txt
SUGGESTED_FILES=$BASE_DIR/cscope.suggested-files.txt
CSCOPE_FILES=$BASE_DIR/cscope.files
CSCOPE_TMP_FILES=$BASE_DIR/cscope.tmp
parseargs $*
rm -f $BASE_DIR/cscope.*
echo "Building file list starting at: [$BASE_DIR]"
eval $TIME list_all_files
eval $TIME find_all_cscope_files
eval $TIME add_missing_files
grep -v cscope $CSCOPE_FILES > $CSCOPE_TMP_FILES
mv $CSCOPE_TMP_FILES $CSCOPE_FILES
echo "Running cscope to build database"
build_db
echo "Created files:"
ls -hs $BASE_DIR/cscope*
|
C++ | UTF-8 | 427 | 2.984375 | 3 | [] | no_license | #include <iostream>
#include <cstdlib>
using namespace std;
int main(void) {
int n;
cin >> n;
int neven, nodd;
neven = nodd = 0;
for (int i = 0; i < n; i++) {
int ai;
cin >> ai;
if (ai % 2 == 0) {
neven++;
} else {
nodd++;
}
}
  if ((nodd == 1 and neven == 0) or
      (nodd % 2 == 0)) {
cout << "YES" << endl;
} else {
cout << "NO" << endl;
}
return EXIT_SUCCESS;
}
|
Markdown | UTF-8 | 2,364 | 3.296875 | 3 | [] | no_license | # Topics in Object-Oriented Programming
## 20 Apr: Graphics using Swing
Components, layouts, and application design in Swing. We'll do a simple GUI this week,
and something more advanced later.
Students may also be interested in JavaFX, where component-based interfaces are defined in XML (fxml files) instead of Java code. This is the way UIs are defined in Android, iOS, and other frameworks.
JavaFX is included with the Java SDK. To visually create user interfaces, also download the
SceneBuilder application.
## 27 Apr: Intro to Design Patterns, Review of UML
Design Patterns describe useful solutions to common design problems. We will learn how to
identify situations where a design pattern can help, and some of the most common patterns.
Every pattern has an associated *Context*, *Motivation* or "forces" involved,
*Applicability*, and, of course, *Solution* (the design itself).
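As a concrete illustration (my own sketch, not taken from the course materials), here is a minimal Strategy pattern in Java: the sorting behavior is supplied as a `Comparator`, so the caller chooses the interchangeable "strategy" object at run time.

```java
import java.util.Arrays;
import java.util.Comparator;

// Minimal Strategy pattern sketch: the Comparator encapsulates the
// comparison behavior, and sortedCopy delegates to whichever strategy
// the caller passes in.
public class StrategyDemo {
    public static String[] sortedCopy(String[] words, Comparator<String> strategy) {
        String[] copy = Arrays.copyOf(words, words.length);
        Arrays.sort(copy, strategy);   // delegate the comparison to the strategy
        return copy;
    }

    public static void main(String[] args) {
        String[] words = {"pear", "fig", "banana"};
        // Strategy 1: alphabetical order
        System.out.println(Arrays.toString(sortedCopy(words, Comparator.naturalOrder())));
        // Strategy 2: shortest word first
        System.out.println(Arrays.toString(sortedCopy(words, Comparator.comparingInt(String::length))));
    }
}
```

Swapping the second argument changes the behavior without touching `sortedCopy` — that separation of the varying part from the stable part is the essence of the pattern.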
## 04 May: Principles and Design Patterns, Java 8 New Features
1. Review of Lab 10 in review directory.
2. More Design Patterns. Slides in patterns directory.
3. Some principles or guidance useful in design and coding (misc directory).
4. Anonymous Classes.
5. Java 8 new features: Lambdas, changes to Interfaces, and Streams (java8 directory).
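A small, self-contained sketch (again my own example, not from the course slides) of lambdas and the Stream API working together:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Demo {
    // Keep the even numbers and square them, using lambdas in a stream pipeline.
    public static List<Integer> squaresOfEvens(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0)   // lambda as a Predicate<Integer>
                .map(n -> n * n)           // lambda as a Function<Integer, Integer>
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(Arrays.asList(1, 2, 3, 4, 5, 6))); // prints [4, 16, 36]
    }
}
```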
## Course TAs
* Natpapas Kraiwichchanaporn (Fahrung) frfahrung95@gmail.com
* Tunchanok Ngamsaowaros (Pan) pan_tunchanok@hotmail.com
* Kanyakorn Jewmaidang (Pun) punzjang@hotmail.com
* Orphitcha Lertpringkop (Aim) aimpitcha@gmail.com
* Khochapak Sunsaksawat (Nap) khochapak@gmail.com
* Chawin Kasikitpongpan (Ball) ballpor98@gmail.com
Senior SKEs offering additional feedback on code and program design:
* Sarun (Map) mapfaps@gmail.com
* Atit (Grief) grief.d.lament@gmail.com
## References
[BIGJ] Horstmann, *Big Java*, 4E or 5E. The 4th and 5th editions are almost identical. I think some chapters in the 4th edition are better organized. 5E has better page layout.
[JTUT] *The Java Tutorial* from Oracle. Recommend you install this on your computer.
[UMLD] *UML Distilled*, 3E, by Martin Fowler. A good, concise book about UML. Chapter 2 is very good intro to the software development process. In this course we will use class, sequence, and state machine diagrams.
[OODP] Horstmann, *Object-Oriented Design and Patterns*, 2E (2006). Condensed from "Big Java" with more emphasis on OO concepts and design. Many SKE graduates say this was the best book they read.
[JavaDoc] The Java Documentation. You are **required** to install this on your computer. This is a great source of knowledge about the Java platform.
|
Swift | UTF-8 | 1,572 | 2.75 | 3 | [] | no_license | //
// TrafficLightCell.swift
// TrafficLightMVP
//
// Created by Eugene Kireichev on 01/06/2020.
// Copyright © 2020 Eugene Kireichev. All rights reserved.
//
import UIKit
class TrafficLightCell: UICollectionViewCell {
weak var delegate: InterfaceActionDelegate?
@IBOutlet weak var greenLightButton: UIButton!
@IBOutlet weak var yellowLightButton: UIButton!
@IBOutlet weak var redLightButton: UIButton!
@IBOutlet weak var trafficLightDescriptionLabel: UILabel!
@IBOutlet weak var trafficLightSegmentedControl: UISegmentedControl!
@IBAction func tapLightButton(_ sender: UIButton) {
delegate?.tapActionForElement(with: sender.tag)
}
@IBAction func changeTrafficLightSegmentedControl(_ sender: UISegmentedControl) {
delegate?.tapActionForElement(with: sender.selectedSegmentIndex)
}
override func prepareForReuse() {
super.prepareForReuse()
roundTheButtons()
}
func updateView(with trafficLight: TrafficLights) {
[greenLightButton, yellowLightButton, redLightButton].forEach { button in
button.backgroundColor = (button.tag == trafficLight.rawValue) ? trafficLight.color : .darkGray
}
trafficLightDescriptionLabel.text = trafficLight.description
trafficLightSegmentedControl.selectedSegmentIndex = trafficLight.rawValue
}
func roundTheButtons() {
[greenLightButton, yellowLightButton, redLightButton].forEach { button in
button.layer.cornerRadius = button.bounds.width / 2
}
}
}
|
Java | UTF-8 | 1,607 | 2.734375 | 3 | [] | no_license | //Santiago Yeomans
//A01251000
import java.util.HashMap;
import java.util.Map;
import org.newdawn.slick.Music;
import org.newdawn.slick.SlickException;
import org.newdawn.slick.Sound;
/*
* Clase para reproducir sonidos
* Tutorial visto para reproducir sonido:
* https://www.youtube.com/watch?v=HRaJXVuZjRM
*/
public class AudioPlayer{
public static Map<String, Sound> soundMap = new HashMap<String, Sound>();
public static Map<String, Music> musicMap = new HashMap<String, Music>();
public static void load() {
try {
//Agregar los sonidos
soundMap.put("click", new Sound("click.wav"));
soundMap.put("arrancar", new Sound("MotorArrancar.wav"));
soundMap.put("claxon", new Sound("claxon.wav"));
soundMap.put("gameover", new Sound("gameover.wav"));
soundMap.put("lamboStart", new Sound("lamboStart.ogg"));
soundMap.put("audiStart", new Sound("audistart.ogg"));
soundMap.put("bugattiStart", new Sound("bugattiStart.wav"));
soundMap.put("viperStart", new Sound("viperStart.wav"));
soundMap.put("lotusStart", new Sound("lotusStart.wav"));
soundMap.put("koenisStart", new Sound("koenisStart.wav"));
soundMap.put("mercedesStart", new Sound("mercedesStart.wav"));
			//Add an engine sound for while the car is moving
			//Add the music
musicMap.put("music", new Music("intro.wav"));
			//Possibly add extra music
} catch (SlickException e) {
e.printStackTrace();
}
}
public static Music getMusic(String key) {
return musicMap.get(key);
}
public static Sound getSound(String key) {
return soundMap.get(key);
}
} |
Java | UTF-8 | 1,974 | 2.28125 | 2 | [] | no_license | package com.park61.moduel.firsthead.adapter;
import android.content.Context;
import android.support.v7.widget.RecyclerView;
import android.support.v7.widget.RecyclerView.ViewHolder;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.TextView;
import com.park61.R;
import com.park61.common.tool.ImageManager;
import com.park61.moduel.firsthead.bean.SignBabyBean;
import java.util.List;
public class SignDataAdapter extends RecyclerView.Adapter<SignDataAdapter.SignViewHolder> {
private Context context;
private List<SignBabyBean> datas;
public SignDataAdapter(Context c, List<SignBabyBean> list) {
this.context = c;
this.datas = list;
}
@Override
public SignViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View v = LayoutInflater.from(context).inflate(R.layout.item_join_people, parent, false);
return new SignViewHolder(v);
}
@Override
public void onBindViewHolder(SignViewHolder holder, int position) {
SignBabyBean b = datas.get(position);
ImageManager.getInstance().displayCircleImg(holder.imgHead, b.getUserPic());
holder.babyName.setText(b.getChildName());
holder.babyClass.setText(b.getClassName());
holder.signTime.setText(b.getShowApplyDate());
}
@Override
public int getItemCount() {
return datas.size();
}
class SignViewHolder extends ViewHolder{
ImageView imgHead;
TextView babyName;
TextView babyClass;
TextView signTime;
public SignViewHolder(View v) {
super(v);
imgHead = (ImageView) v.findViewById(R.id.head_img);
babyName = (TextView) v.findViewById(R.id.baby_name);
babyClass = (TextView) v.findViewById(R.id.baby_class);
signTime = (TextView) v.findViewById(R.id.sign_time);
}
}
}
|
Python | UTF-8 | 4,021 | 2.640625 | 3 | [] | no_license | from flask import Blueprint, make_response, request, jsonify
from app.models.task import Task
from app import db
from datetime import date
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
import os
# Blueprints
task_bp = Blueprint("task_bp", __name__, url_prefix="/tasks")
# Helper Functions
def get_task_with_task_id(task_id):
return Task.query.get_or_404(task_id, description={"details": "Invalid data"})
def post_slack_message(task):
slack_client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
try:
response = slack_client.chat_postMessage(channel="#task-notifications",
text=f"Someone just completed the task {task.title}")
except SlackApiError as e:
assert e.response["error"]
# Routes
@task_bp.route("", methods = ["POST"])
def add_tasks():
request_body = request.get_json()
if request_body is None:
return make_response({"details": "Invalid data"}, 400)
if "title" not in request_body or "description" not in request_body or "completed_at" not in request_body:
return make_response({"details": "Invalid data"}, 400)
new_task = Task(
title=request_body["title"],
description=request_body["description"],
completed_at=request_body['completed_at']
)
db.session.add(new_task)
db.session.commit()
return jsonify({"task": new_task.to_dict()}), 201
@task_bp.route("", methods = ["GET"])
def read_all_tasks():
sort_query = request.args.get("sort")
if sort_query == "asc":
tasks = Task.query.order_by(Task.title.asc())
elif sort_query == "desc":
tasks = Task.query.order_by(Task.title.desc())
else:
tasks = Task.query.all()
task_response = []
for task in tasks:
task_response.append(task.to_dict())
return jsonify(task_response), 200
@task_bp.route("/<task_id>", methods = ["GET"])
def read_one_task(task_id):
task = get_task_with_task_id(task_id)
return jsonify({"task": task.to_dict()})
@task_bp.route("/<task_id>", methods = ["PUT"])
def update_all_task_info(task_id):
task = get_task_with_task_id(task_id)
request_body = request.get_json()
if "id" in request_body:
task.id = request_body["id"]
if "completed_at" in request_body:
task.completed_at = request_body["completed_at"]
task.title = request_body["title"]
task.description = request_body["description"]
db.session.commit()
return make_response({"task": task.to_dict()}, 200)
@task_bp.route("/<task_id>", methods = ["PATCH"])
def update_some_task_info(task_id):
request_body = request.get_json()
task = get_task_with_task_id(task_id)
if "id" in request_body:
task.id = request_body["id"]
if "title" in request_body:
task.title = request_body["title"]
if "description" in request_body:
task.description = request_body["description"]
if "completed_at" in request_body:
task.completed_at = request_body["completed_at"]
db.session.commit()
return make_response(f"Task {task.title} has been updated.", 201)
@task_bp.route("/<task_id>", methods = ["DELETE"])
def delete_task(task_id):
task = get_task_with_task_id(task_id)
db.session.delete(task)
db.session.commit()
return jsonify({'details': f'Task {task.id} "{task.title}" successfully deleted'})
@task_bp.route("<task_id>/mark_complete", methods = ["PATCH"])
def update_as_completion(task_id):
task = get_task_with_task_id(task_id)
if task.completed_at == None:
task.completed_at = date.today()
db.session.commit()
post_slack_message(task)
return make_response({"task": task.to_dict()}, 200)


@task_bp.route("/<task_id>/mark_incomplete", methods=["PATCH"])
def update_as_incompletion(task_id):
    task = get_task_with_task_id(task_id)
    if task.completed_at is not None:
        task.completed_at = None
        db.session.commit()
    return make_response({"task": task.to_dict()}, 200)
#!/usr/bin/env python
from argparse import ArgumentParser
import sys
import yaml
from comp import SRComp
ARENAS = ['A', 'B']
TEAMS_PER_GAME = 4
TEAMS_PER_MATCH = TEAMS_PER_GAME * len(ARENAS)
parser = ArgumentParser(description="SR Competition Schedule Converter")
parser.add_argument("schedule", help="Newline- and pipe-separated schedule.")
parser.add_argument("compstate", help="Competition state git repository path")
parser.add_argument("-o", "--output", required=False,
                    help="Output location, defaulting to stdout.")
args = parser.parse_args()


def tidy(lines):
    "Strip comments and trailing whitespace"
    for line in lines:
        line = line.strip()
        if not line.startswith('#'):
            yield line


def chunks_of_size(list_, size):
    list_ = list_[:]
    assert len(list_) % size == 0
    while list_:
        chunk = []
        for i in range(size):
            chunk.append(list_.pop(0))
        yield chunk


def numbers_to_tlas(numbers, all_tlas):
    teams = []
    for num in numbers:
        num = int(num)
        if num >= len(all_tlas):
            continue
        tla = all_tlas[num]  # use the parameter, not the module-level `tlas`
        teams.append(tla)
    return teams


def build_matches(schedule, tlas):
    matches = {}
    # use a distinct name for the match counter so it is not shadowed by
    # the inner loop's `i`
    for match_num, line in enumerate(tidy(schedule)):
        match_teams = line.split('|')
        assert len(match_teams) == TEAMS_PER_MATCH
        matches[match_num] = match = {}
        for i, game_teams in enumerate(chunks_of_size(match_teams, TEAMS_PER_GAME)):
            arena = ARENAS[i]
            match[arena] = numbers_to_tlas(game_teams, tlas)
    return matches


output = sys.stdout
if args.output is not None:
    output = open(args.output, 'w')

with open(args.schedule, 'r') as schedule_file:
    schedule = schedule_file.readlines()

comp = SRComp(args.compstate)
tlas = list(comp.teams.keys())  # list() so numeric indexing works on Python 3

matches = build_matches(schedule, tlas)

wrapper = dict(matches=matches)
yaml.dump(wrapper, output)
# JSON-Video-Feed
## Overview
- [Description](#description)
- [API](#api)
- [Work Process](#work-process)
- [Architecture](#architecture)
- [Testing](#testing)
- [Conclusion](#conclusion)
## Description
**This project was created as an interview task.**
Your task is to create an application that will serve a feed with posts that are based around
videos. An API specification will be provided as an Apiary document where you also have
mock servers (so you can ping them as an existing API). Also, some examples are
connected already so you can make dynamic calls for athletes and posts.
API is quite big, but you shouldn’t implement all routes. At least implement fetching and
displaying of the feed. After that, you can have as much fun as you want: make the video
play/pause in fullscreen, animate actions, add share, prepare for deep linking, make rating
fluid and animated, etc. These are not requirements, but are good places to showcase your
skill set.
Design is not provided, but that doesn’t mean you are required to demonstrate excellent
design skills. We don’t really care much about the design of technical task applications, as
long as you can demonstrate your knowledge and experience with the platform. On the other
hand, if you invest some time into design, it will not go unnoticed. 😁
You don’t have to spend a whole week on this task. In our opinion, you can showcase your
skill set within a couple of hours on working on your application. Everything else is your good
will of leaving a really good impression.
### Requirements
There are no restrictions. You can use as many existing libraries as you find fit.
If you prefer, Swift, do it in Swift. If you like Objective C, do it in Objective C.
The point is to showcase your knowledge, but don’t go too deep: write all assumptions in
your code (for some specific cases, it is ok to go with the simple path and write that
assumption in a code block).
Good places to showcase your knowledge:
- Good setup documentation
- Dependency injection
- Tests
- Good scalability
- Clean VCS history
- Having VCS
- Robustness of input and output parameters
- CLEAN, layer architecture, MV***
- General simplicity
- Getting an invite for test application
## API
<a href="https://technicaltaskapi.docs.apiary.io/#introduction/accepted-headers/accept-language">Async API</a> is the API I received from Async. It generates many different properties and values for a video feed.
Below is a sample of the data you can get from the API (for more, please check the website).
```
[
    {
        "id": 19,
        "createdAt": "2019-08-22T12:22:22+00:00",
        "createdBefore": "1 year ago",
        "author": {
            "id": 12,
            "name": "RakiticFan4"
        },
        "sportGroup": {
            "id": 8,
            "name": "Other"
        },
        "video": {
            "handler": "aslkfjsad",
            "url": "https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/1080/Big_Buck_Bunny_1080_10s_5MB.mp4",
            "poster": "https://content.jwplatform.com/thumbs/JxwtpJDu-720.jpg",
            "type": "video/mp3",
            "length": 125
        },
        "description": "Winning goal against Russia securing Semi-finals in WorldCup Russia 2018.🇭🇷🇷🇺",
        "athlete": {
            "id": 21,
            "age": 27,
            "name": "Ivan Rakitić",
            "avatar": "https://drive.google.com/file/d/1ptgaw3aNkgot5PWP_AnlOJ_zNfxUkcto/view",
            "club": "FC Barcelona",
            "isCelebrity": true,
            "country": {
                "id": 8,
                "name": "Croatia",
                "slug": "croatia",
                "icon": "https://cdn.countryflags.com/thumbs/croatia/flag-round-250.png"
            },
            "sport": {
                "id": 2,
                "slug": "football",
                "name": "Football",
                "icon": "https://img.icons8.com/ios-filled/48/000000/football2.png"
            }
        },
        "bookmarked": false,
        "views": "Ivan Rakitić, Neymar Jr. + 46.8k others"
    }
]
```
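
To make the mapping from this payload to model objects concrete, here is a hedged sketch. The app itself does this in Swift with `Codable`; the Python below is only an illustration, and the type names and field subset (`Post`, `Author`, `Video`, `parse_post`) are my own choices, not the app's actual types.

```python
# Illustrative sketch (not the app's real Swift models): decode one feed
# post into plain model objects. Field names follow the API sample above.
from dataclasses import dataclass


@dataclass
class Author:
    id: int
    name: str


@dataclass
class Video:
    url: str
    poster: str
    type: str
    length: int


@dataclass
class Post:
    id: int
    created_before: str
    author: Author
    video: Video
    description: str


def parse_post(raw: dict) -> Post:
    """Build a Post from one JSON object of the feed array."""
    return Post(
        id=raw["id"],
        created_before=raw["createdBefore"],
        author=Author(**raw["author"]),
        video=Video(
            url=raw["video"]["url"],
            poster=raw["video"]["poster"],
            type=raw["video"]["type"],
            length=raw["video"]["length"],
        ),
        description=raw["description"],
    )
```

Unknown keys such as `handler` are simply ignored by picking fields explicitly, which mirrors how `Codable` skips properties you don't declare.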
## Work Process
### API Analysis and Model Definition
How did I start working?
First, I analyzed the API: what kind of data does it offer, and how does it work?
Then, based on the requirements, I analyzed what data the feed needs. After separating the required data from the rest, I knew which models I had to create, and I visualized them using `UML diagrams`.
<img src="UML-Diagram.png">
Thanks to the structure of the JSON API, I could easily achieve high cohesion in my project, which is really important for reducing module complexity and easing software maintenance.
## Architecture
<img src="mvc-mvp-mvvm.png">
The architecture of the software is really important. Once you define the requirements of the project, it's best to define the architecture too. It lets you understand what it would take to make a particular change, and how the software will communicate with the user and with the server or API provider.
The most common architectures for iOS are `MVC` and `MVVM`.<br>
For this simple task, MVC is enough.
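
As a minimal illustration of the MVC split (a Python sketch with invented names — `PostModel`, `FeedView`, `FeedController` — not the app's actual Swift classes): the model holds data, the view only formats it, and the controller mediates between the two.

```python
# Minimal MVC sketch: each layer has one job, so changes stay local.
class PostModel:
    """Model: plain data, no presentation logic."""
    def __init__(self, author, description):
        self.author = author
        self.description = description


class FeedView:
    """View: turns model objects into something displayable."""
    def render(self, posts):
        return [f"{p.author}: {p.description}" for p in posts]


class FeedController:
    """Controller: fetches models and hands them to the view."""
    def __init__(self, posts, view):
        self.posts = posts
        self.view = view

    def show_feed(self):
        return self.view.render(self.posts)
```

In the actual app the view layer is `UIKit` views and the controller is a `UIViewController`, but the division of responsibilities is the same.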
## Testing
As I understood the requirements, the main purpose was parsing the `JSON` API, and unit testing was optional. So I created some unit tests with small coverage, just to demonstrate my familiarity with writing test cases.
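
For illustration, this is the shape of such a parsing test, sketched here in Python's `unittest` rather than the app's actual XCTest; the `parse_author` helper and test names are invented for this example.

```python
import json
import unittest


def parse_author(raw: dict) -> dict:
    """Toy parser under test: pull the author out of one feed post."""
    return {"id": raw["author"]["id"], "name": raw["author"]["name"]}


class ParsingTests(unittest.TestCase):
    def test_author_is_parsed(self):
        post = json.loads('{"author": {"id": 12, "name": "RakiticFan4"}}')
        author = parse_author(post)
        self.assertEqual(author["name"], "RakiticFan4")

    def test_missing_author_raises(self):
        # a malformed post should fail loudly, not silently
        with self.assertRaises(KeyError):
            parse_author({})
```

Run with `python -m unittest`; the XCTest equivalents assert the same two things — a good payload decodes, a bad one fails visibly.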
## Conclusion
To conclude, I would like to show visually how the user communicates with the app and other services. I presented this in a `Deployment Diagram`.
<img src="Depoyment-Diagram.png">
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.