| text (string, 1–1.04M chars) | language (25 classes) |
|---|---|
{"help": "https://data.gov.sg/api/3/action/help_show?name=datastore_search", "success": true, "result": {"resource_id": "223afc36-43b4-4dec-8ee0-a552eb1b8256", "fields": [{"type": "int4", "id": "_id"}, {"type": "numeric", "id": "year"}, {"type": "text", "id": "facility"}, {"type": "numeric", "id": "number"}], "records": [{"facility": "Pedestrian Overhead Bridges", "_id": 1, "number": "336", "year": "1994"}, {"facility": "Pedestrian Underpasses", "_id": 2, "number": "48", "year": "1994"}, {"facility": "Footbridges", "_id": 3, "number": "15", "year": "1994"}, {"facility": "Covered Linkways", "_id": 4, "number": "0", "year": "1994"}, {"facility": "Pedestrian Overhead Bridges", "_id": 5, "number": "350", "year": "1995"}], "_links": {"start": "/api/action/datastore_search?limit=5&resource_id=223afc36-43b4-4dec-8ee0-a552eb1b8256", "next": "/api/action/datastore_search?offset=5&limit=5&resource_id=223afc36-43b4-4dec-8ee0-a552eb1b8256"}, "limit": 5, "total": 92}}
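The row above is a CKAN `datastore_search` response from data.gov.sg. As a rough sketch (not an official client), the string-typed records can be converted to native Python values, and the `limit`/`offset`/`total` fields drive pagination the same way the `_links.next` URL does; field names are taken from this sample response:

```python
def parse_records(result):
    """Convert datastore_search records into rows with native int fields.
    Numeric fields ("year", "number") arrive serialized as strings."""
    rows = []
    for rec in result["records"]:
        rows.append({
            "facility": rec["facility"],
            "year": int(rec["year"]),
            "number": int(rec["number"]),
        })
    return rows

def next_offset(result):
    """Return the offset for the next page, or None when all records
    have been fetched. Mirrors the `_links.next` link in the response."""
    offset = result.get("offset", 0)
    nxt = offset + result["limit"]
    return nxt if nxt < result["total"] else None
```

With the sample response above (`limit` 5, `total` 92, no `offset`), `next_offset` yields 5, matching the `offset=5` in `_links.next`.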
|
json
|
Genshin Impact's 4.0 Special Program recently premiered, revealing new information about the upcoming Fontaine 4.0 update. The officials revealed Lyney alongside three other 5-star rerun characters who will appear on the Fontaine banners. This implies that players will get an opportunity to summon the 5-star signature weapons of these characters.
Hence, the 4.0 weapon banners are expected to feature three 5-star bows and a 5-star polearm. Keep in mind that officials have only confirmed Lyney's signature weapon, while the remaining weapons are speculated based on previous Genshin Impact banners. Here is everything players need to know.
In Genshin Impact's recent 4.0 Special Program, As the Light Rain Falls Without Reason, Lyney and his signature weapon were officially revealed. The First Great Magic is an upcoming 5-star weapon that will appear alongside Lyney's debut banner. As his signature weapon, the 5-star bow will be tailored to complement Lyney's abilities and passive.
While officials have yet to reveal the weapon's stats and effects, reliable sources have leaked this information to the community.
Based on the leaks, The First Great Magic is a CRIT-based bow with the following stats:
The weapon's passive is called Parsifal the Great, where the wielder gains 16% additional damage on their charged attacks. The passive also grants Gimmick and Theatrics stacks that can provide additional ATK or movement speed depending on the party setup.
Officials have yet to reveal any relevant information about the weapon banner. However, players can reasonably predict its lineup and order from the character banners officially revealed in the recent 4.0 Special Program.
Here is a quick overview:
- Aqua Simulacra (Crit-DMG)
- The First Great Magic (Crit-DMG)
- Polar Star (Crit-Rate)
- Vortex Vanquisher (ATK%)
Along with Lyney's 5-star bow, we will see the signature weapons of Yelan, Zhongli, and Tartaglia. All these signature weapons are excellent choices for damage-dealing units in Genshin Impact.
Overall, this is a fantastic opportunity for players to summon a Crit-based 5-star bow weapon. The new version update primarily benefits bow enthusiasts, so they should make the most of it.
|
english
|
Braun Strowman hasn't been seen on WWE TV for several months after suffering a neck injury back in the summer. However, he could be teasing a potential return with his most recent Instagram story.
The former Universal Champion shared a video from Survivor Series back in 2017 when Triple H tried to turn on him, but he was able to fight off a Pedigree and deliver two Running Powerslams.
Strowman captioned the clip with, "I haven't forgotten about you and what you did!!"
The two men were on the same side as part of that year's Survivor Series, but The Game won't be part of the show this year, so it's unclear how Braun will slot himself back in.
Triple H was forced to retire from in-ring competition because of a heart condition and has since become one of the main men pushing WWE forward from backstage. Strowman clearly has an issue with his actions from 2017, but it's very unlikely he will be able to get a match with The Game out of it.
Will Braun Strowman return at WWE Survivor Series?
Strowman's neck injury was expected to keep him out of action until 2024, but the most recent update on his health was that he had received good news from his doctor. This has led many fans to believe that his return could be imminent.
Strowman was teaming with Ricochet when he was last seen on TV, but Ricochet has since been sidelined himself after suffering a concussion in a fatal four-way match a few weeks ago on RAW.
As of writing, it's unclear if the duo will reunite when Strowman is able to make his WWE return.
Do you think Braun Strowman will make his return at Survivor Series? Share your thoughts in the comments section below.
|
english
|
Saint Petersburg State University, coeducational state institution of higher learning in St. Petersburg, founded in 1819 as the University of St. Petersburg. During World War II the university was evacuated to Saratov. The university’s buildings were severely damaged during the Siege of Leningrad but were later completely restored. The university’s curriculum places greatest emphasis on training teachers and researchers in the physical and social sciences. Tuition is free, and most students receive a government stipend for living expenses. However, admission is very competitive. The university has one of the largest libraries in Russia.
St. Petersburg State University is perhaps the second most important university in Russia, after Moscow State University. It has long been a leading centre for Russian scholarship. Among its better-known professors have been the physicist Aleksandr S. Popov, the chemist Dmitry I. Mendeleyev, and the astronomer Viktor A. Ambartsumian. Among the university’s students have been the physiologist Ivan P. Pavlov and the writer Ivan Turgenev. Vladimir I. Lenin received a degree in law from the university in 1891.
|
english
|
Everyone within Team Australia will be aware of our biggest benefactor, the long-suffering @ausbitbank. He's certainly one of the nicest and most generous people on this platform, and he's always happy to help anyone who asks.
This morning, I noticed that he was sitting in the number 21 position, just outside the coveted top 20. I think as a group we can fix this, and get our mate up into the top 20.
So here's what you need to do:
Go HERE and give your vote to @ausbitbank. Then leave a comment here so I know you've done it, and resteem this post so all your friends can do the same. We need this circulating to as many people as possible to make this happen for our friend.
Before you say you're too small to make a difference, remember: every single vote counts.
Let's rally the Australian team spirit and get @ausbitbank into the top 20 witnesses.
Thank you for your support!
|
english
|
table {
width: 100%;
overflow-x: auto;
overflow-y: hidden;
min-width: 500px;
}
th.mat-header-cell {
text-align: left;
max-width: 300px;
font-weight: bold;
}
|
css
|
{"data":[
{
"Name":"Stamina Points",
"Description":"Stamina Points represent the ability to turn a serious blow into a less serious one or to shrug off some attacks through sheer toughness. They act as a buffer that absorbs damage before it starts to deplete your Hit Points. When you take damage, you lose Stamina Points first, then you subtract any leftover damage from your Hit Points. If a creature doesn’t have Stamina Points, damage is subtracted directly from its Hit Points"
},
{
"Name":"Hit Points",
"Description":"Hit Points measure your ability to take physical punishment and keep going. Running out of Hit Points can be deadly."
},
{
"Name":"Effects of Hit Point Damage",
"Description":"Damage doesn’t affect you until your current Hit Points reach 0. If you take damage to your Hit Points equal to or greater than the Hit Points you have remaining, you are reduced to 0 HP, and you’re knocked unconscious and dying (see below). It doesn’t matter how many Stamina Points you later regain (see Recovering Stamina Points on page 251) if you’re out of Hit Points. You can’t be reduced to fewer than 0 HP (however, see Massive Damage below).<br>For example, suppose Navasi has 17 HP and 1 SP. She takes 12 damage, is now at 6 HP and 0 SP, and can function normally. On the next enemy’s turn, that enemy deals 15 damage to her, reducing Navasi to 0 HP. Navasi falls unconscious and is dying."
},
{
"Name":"Massive Damage",
"Description":"If you take damage from a single attack that reduces you to 0 HP and there is damage remaining, you die instantly if the remaining damage is equal to or greater than your maximum Hit Points. If you take damage from a single attack equal to or greater than your maximum Hit Points while you have 0 current HP, you die. <br>Suppose Navasi has a maximum of 22 HP, but she currently has 5 HP and 0 SP. She takes 30 damage from an enemy. Navasi is reduced to 0 HP, with 25 damage remaining. Since this damage is greater than her maximum Hit Points, Navasi dies"
}
]}
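The Stamina Point / Hit Point rules above can be sketched as a small damage routine. This is an illustrative reading of the rules text, not code from any official SRD; the function name and return convention are my own:

```python
def apply_damage(sp, hp, max_hp, damage):
    """Apply damage per the rules above: Stamina Points absorb damage
    first, then Hit Points. Returns (sp, hp, status) where status is
    'ok', 'dying' (reduced to 0 HP), or 'dead' (massive damage)."""
    # Stamina acts as a buffer before Hit Points.
    absorbed = min(sp, damage)
    sp -= absorbed
    remaining = damage - absorbed
    # Damage left over after the target hits 0 HP.
    overflow = remaining - hp
    hp = max(hp - remaining, 0)
    if hp > 0:
        return sp, hp, "ok"
    # Massive Damage: leftover damage >= maximum HP kills instantly.
    if overflow >= max_hp:
        return sp, hp, "dead"
    return sp, hp, "dying"
```

This reproduces the Navasi examples: 12 damage against 1 SP / 17 HP leaves her at 0 SP / 6 HP and functioning; 30 damage against 0 SP / 5 HP with a 22 HP maximum leaves 25 overflow, which exceeds her maximum, so she dies.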
|
json
|
import math
import torch
import torch.nn as nn
from torch.distributions import Normal
from torch.nn import init
FixedNormal = Normal
log_prob_normal = FixedNormal.log_prob
FixedNormal.log_probs = lambda self, actions: log_prob_normal(self, actions).sum(-1, keepdim=True)
entropy = FixedNormal.entropy
FixedNormal.entropy = lambda self: entropy(self).sum(-1)
FixedNormal.mode = lambda self: self.mean
class SiLU(nn.Module):
def __init__(self):
super().__init__()
def silu(self, input):
return input * torch.sigmoid(input)
def forward(self, input):
return self.silu(input)
class GuaussianAction(nn.Module):
def __init__(self, size_in, size_out):
super().__init__()
self.fc_mean = nn.Linear(size_in, size_out)
# ====== INITIALIZATION ======
self.fc_mean.weight.data.mul_(0.1)
self.fc_mean.bias.data.mul_(0.0)
# Register as a parameter so it is learnable and moves with the module's device
self.logstd = nn.Parameter(torch.zeros(1, size_out))
def forward(self, x):
action_mean = self.fc_mean(x)
# print(action_mean.shape, self.logstd.shape)
return FixedNormal(action_mean, self.logstd.exp())
class NoisyLinear(nn.Module):
"""Factorised Gaussian NoisyNet"""
def __init__(self, in_features, out_features, sigma0=0.5):
super().__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
self.bias = nn.Parameter(torch.Tensor(out_features))
self.noisy_weight = nn.Parameter(
torch.Tensor(out_features, in_features))
self.noisy_bias = nn.Parameter(torch.Tensor(out_features))
self.noise_std = sigma0 / math.sqrt(self.in_features)
self.reset_parameters()
self.register_noise()
def register_noise(self):
in_noise = torch.FloatTensor(self.in_features)
out_noise = torch.FloatTensor(self.out_features)
noise = torch.FloatTensor(self.out_features, self.in_features)
self.register_buffer('in_noise', in_noise)
self.register_buffer('out_noise', out_noise)
self.register_buffer('noise', noise)
def sample_noise(self):
self.in_noise.normal_(0, self.noise_std)
self.out_noise.normal_(0, self.noise_std)
self.noise = torch.mm(
self.out_noise.view(-1, 1), self.in_noise.view(1, -1))
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
self.noisy_weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
self.noisy_bias.data.uniform_(-stdv, stdv)
def forward(self, x):
"""
Note: fresh factorised noise is sampled on every forward pass while training.
"""
normal_y = nn.functional.linear(x, self.weight, self.bias)
if self.training:
# update the noise once per update
self.sample_noise()
noisy_weight = self.noisy_weight * self.noise
noisy_bias = self.noisy_bias * self.out_noise
noisy_y = nn.functional.linear(x, noisy_weight, noisy_bias)
return noisy_y + normal_y
def __repr__(self):
return self.__class__.__name__ + '(' \
+ 'in_features=' + str(self.in_features) \
+ ', out_features=' + str(self.out_features) + ')'
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
class BaseActorCriticNetwork(nn.Module):
def __init__(self, input_size, output_size, use_noisy_net=False, use_continuous=False):
super(BaseActorCriticNetwork, self).__init__()
if use_noisy_net:
linear = NoisyLinear
else:
linear = nn.Linear
self.use_continuous = use_continuous
# self.feature = nn.Sequential(
# linear(input_size, 128),
# nn.ReLU(),
# linear(128, 128),
# nn.ReLU()
# )
self.actor = nn.Sequential(
linear(input_size, 128),
nn.ReLU(),
linear(128, 64),
nn.ReLU(),
GuaussianAction(64, output_size) if use_continuous else linear(64, output_size)
)
self.critic = nn.Sequential(
linear(input_size, 128),
nn.ReLU(),
linear(128, 64),
nn.ReLU(),
linear(64, 1)
)
for p in self.modules():
if isinstance(p, nn.Conv2d):
init.xavier_normal_(p.weight)
p.bias.data.zero_()
if isinstance(p, nn.Linear):
init.xavier_normal_(p.weight)
p.bias.data.zero_()
def forward(self, state):
# x = self.feature(state)
policy = self.actor(state)
value = self.critic(state)
return policy, value
class DeepCnnActorCriticNetwork(nn.Module):
def __init__(self, input_size, output_size, use_noisy_net=False):
super(DeepCnnActorCriticNetwork, self).__init__()
if use_noisy_net:
print('use NoisyNet')
linear = NoisyLinear
else:
linear = nn.Linear
self.feature = nn.Sequential(
nn.Conv2d(in_channels=4, out_channels=32, kernel_size=4, stride=1),
nn.ReLU(),
nn.Conv2d(
in_channels=32,
out_channels=64,
kernel_size=5,
stride=2),
nn.ReLU(),
nn.Conv2d(
in_channels=64,
out_channels=128,
kernel_size=4,
stride=1),
nn.ReLU(),
nn.Conv2d(
in_channels=128,
out_channels=256,
kernel_size=4,
stride=2),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=4),
nn.ReLU(),
Flatten(),
linear(50176, 512),
nn.ReLU()
)
self.actor = linear(512, output_size)
self.critic = linear(512, 1)
for p in self.modules():
if isinstance(p, nn.Conv2d):
init.kaiming_uniform_(p.weight)
p.bias.data.zero_()
if isinstance(p, nn.Linear):
init.kaiming_uniform_(p.weight, a=1.0)
p.bias.data.zero_()
def forward(self, state):
x = self.feature(state)
policy = self.actor(x)
value = self.critic(x)
return policy, value
class CnnActorCriticNetwork(nn.Module):
def __init__(self, input_size, output_size, use_noisy_net=False):
super(CnnActorCriticNetwork, self).__init__()
if use_noisy_net:
print('use NoisyNet')
linear = NoisyLinear
else:
linear = nn.Linear
self.feature = nn.Sequential(
nn.Conv2d(
in_channels=4,
out_channels=32,
kernel_size=8,
stride=4),
nn.LeakyReLU(),
nn.Conv2d(
in_channels=32,
out_channels=64,
kernel_size=4,
stride=2),
nn.LeakyReLU(),
nn.Conv2d(
in_channels=64,
out_channels=64,
kernel_size=3,
stride=1),
nn.LeakyReLU(),
Flatten(),
linear(
7 * 7 * 64,
512),
nn.LeakyReLU(),
)
self.actor = linear(512, output_size)
self.critic = linear(512, 1)
for p in self.modules():
if isinstance(p, nn.Conv2d):
init.kaiming_uniform_(p.weight)
p.bias.data.zero_()
if isinstance(p, nn.Linear):
init.kaiming_uniform_(p.weight, a=1.0)
p.bias.data.zero_()
def forward(self, state):
x = self.feature(state)
policy = self.actor(x)
value = self.critic(x)
return policy, value
class CuriosityModel(nn.Module):
def __init__(self, input_size, output_size):
super(CuriosityModel, self).__init__()
self.input_size = input_size
self.output_size = output_size
feature_output = 7 * 7 * 64
self.feature = nn.Sequential(
nn.Conv2d(
in_channels=4,
out_channels=32,
kernel_size=8,
stride=4),
nn.LeakyReLU(),
nn.Conv2d(
in_channels=32,
out_channels=64,
kernel_size=4,
stride=2),
nn.LeakyReLU(),
nn.Conv2d(
in_channels=64,
out_channels=64,
kernel_size=3,
stride=1),
nn.LeakyReLU(),
Flatten(),
)
self.inverse_net = nn.Sequential(
nn.Linear(feature_output * 2, 512),
nn.LeakyReLU(),
nn.Linear(512, output_size)
)
self.forward_net = nn.Sequential(
nn.Linear(output_size + feature_output, 512),
nn.LeakyReLU(),
nn.Linear(512, feature_output)
)
for p in self.modules():
if isinstance(p, nn.Conv2d):
init.kaiming_uniform_(p.weight)
p.bias.data.zero_()
if isinstance(p, nn.Linear):
init.kaiming_uniform_(p.weight, a=1.0)
p.bias.data.zero_()
def forward(self, inputs):
state, next_state, action = inputs
encode_state = self.feature(state)
# get pred action
pred_action = torch.cat((encode_state, self.feature(next_state)), 1)
pred_action = self.inverse_net(pred_action)
# ---------------------
# get pred next state
pred_next_state_feature = torch.cat((encode_state, action), 1)
pred_next_state_feature = self.forward_net(pred_next_state_feature)
real_next_state_feature = self.feature(next_state)
return real_next_state_feature, pred_next_state_feature, pred_action
|
python
|
@font-face {
font-family: inconsolata;
src: url(Inconsolata-Regular.ttf)
}
#text {
background-color: gray;
color: white;
padding: 20px 30px 5px 30px;
margin: 10px 0px;
border-radius: 10px;
font-family: inconsolata;
font-size: larger;
}
|
css
|
<gh_stars>0
/*
* Copyright 2020 OPPO ESA Stack Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package esa.httpclient.core.netty;
import esa.commons.http.HttpHeaderNames;
import esa.commons.http.HttpHeaderValues;
import esa.commons.http.HttpHeaders;
import esa.commons.http.HttpVersion;
import esa.httpclient.core.HttpRequest;
import esa.httpclient.core.Scheme;
import esa.httpclient.core.exec.ExecContext;
import esa.httpclient.core.util.LoggerUtils;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.Http2ConnectionEncoder;
import io.netty.handler.codec.http2.Http2Headers;
import io.netty.handler.codec.http2.HttpConversionUtil;
import java.io.IOException;
import java.net.ConnectException;
import java.net.URI;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.function.ToLongFunction;
import static esa.httpclient.core.netty.Utils.CONNECT_INACTIVE;
abstract class RequestWriterImpl implements RequestWriter {
private static final Predicate<HttpRequest> CONTENT_LENGTH_ABSENT = request -> {
HttpHeaders headers0 = request.headers();
return !headers0.contains(HttpHeaderNames.CONTENT_LENGTH) &&
!headers0.contains(HttpHeaderNames.TRANSFER_ENCODING);
};
private static final Predicate<HttpRequest> CONTENT_TYPE_ABSENT = request ->
!request.headers().contains(HttpHeaderNames.CONTENT_TYPE);
private static final Predicate<HttpRequest> HOST_ABSENT = request
-> !request.headers().contains(HttpHeaderNames.HOST);
@Override
public ChannelFuture writeAndFlush(HttpRequest request,
Channel channel,
ExecContext execCtx,
ChannelPromise headFuture,
boolean useUriEncode,
io.netty.handler.codec.http.HttpVersion version,
boolean http2) throws IOException {
addHostIfAbsent(request, () -> computeHost(request.uri().netURI()));
if (http2) {
Http2ConnectionHandler handler = getH2Handler(channel);
int streamId = request.headers().getInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text());
return writeAndFlush2(request,
channel,
execCtx,
headFuture,
handler,
streamId,
useUriEncode);
} else {
return writeAndFlush1(request,
channel,
execCtx,
headFuture,
version,
useUriEncode);
}
}
/**
* Decorates {@link Http2ConnectionEncoder#writeHeaders(ChannelHandlerContext, int, Http2Headers, int,
* boolean, ChannelPromise)}, handling exhaustion of the streamId space.
*
* @param channel channel
* @param streamId streamId
* @param handler handler
* @param headers headers
* @param endOfStream endOfStream or not
* @param headFuture headFuture
* @return future of write
*/
final ChannelFuture checkAndWriteH2Headers(Channel channel,
Http2ConnectionHandler handler,
Http2Headers headers,
int streamId,
boolean endOfStream,
ChannelPromise headFuture) {
if (streamId < 0) {
headFuture.setFailure(new StreamIdExhaustedException("No more streams can be created on connection: "
+ channel + "(local), and current connection will close gracefully"));
// Simulate a GOAWAY being received due to stream exhaustion on this connection. We use the maximum
// valid stream ID for the current peer.
handler.writeGoAwayOnExhaustion(channel.newPromise());
return headFuture;
}
if (LoggerUtils.logger().isDebugEnabled()) {
LoggerUtils.logger().debug("Send Request:\n" +
headers);
}
return handler.writeHeaders(streamId, headers, endOfStream, headFuture);
}
/**
* Do write and flush using {@link HttpVersion#HTTP_2}.
*
* @param request request
* @param channel channel
* @param execCtx context
* @param headFuture headFuture
* @param streamId streamId
* @param handler handler
* @param uriEncodeEnabled uriEncode or not
*
* @return future
* @throws IOException ex
*/
abstract ChannelFuture writeAndFlush2(HttpRequest request,
Channel channel,
ExecContext execCtx,
ChannelPromise headFuture,
Http2ConnectionHandler handler,
int streamId,
boolean uriEncodeEnabled) throws IOException;
/**
* Do write and flush using {@link HttpVersion#HTTP_1_1} or {@link HttpVersion#HTTP_1_0}.
*
* @param request request
* @param channel channel
* @param execCtx context
* @param headFuture headFuture
* @param version version
* @param uriEncodeEnabled enable uriEncode or not
* @return future
* @throws IOException ex
*/
abstract ChannelFuture writeAndFlush1(HttpRequest request,
Channel channel,
ExecContext execCtx,
ChannelPromise headFuture,
io.netty.handler.codec.http.HttpVersion version,
boolean uriEncodeEnabled) throws IOException;
/**
* Adds content-length to request's headers if absent
*
* @param request request
* @param contentLength content length
*/
static void addContentLengthIfAbsent(HttpRequest request, ToLongFunction<HttpRequest> contentLength) {
if (!CONTENT_LENGTH_ABSENT.test(request)) {
return;
}
final long contentLengthVal = contentLength.applyAsLong(request);
if (LoggerUtils.logger().isDebugEnabled()) {
LoggerUtils.logger().debug("content-length is absent, try to set default value: {}, uri: {}",
contentLengthVal, request.uri().toString());
}
request.headers().set(HttpHeaderNames.CONTENT_LENGTH, contentLengthVal);
}
/**
* Adds content-type to request's headers if absent
*
* @param request request
* @param contentType content type
*/
static void addContentTypeIfAbsent(HttpRequest request, Supplier<CharSequence> contentType) {
if (!CONTENT_TYPE_ABSENT.test(request)) {
return;
}
final CharSequence contentTypeVal = contentType.get();
if (LoggerUtils.logger().isDebugEnabled()) {
LoggerUtils.logger().debug("content-type is absent, try to set default value: {}, uri: {}",
contentTypeVal, request.uri().toString());
}
request.headers().set(HttpHeaderNames.CONTENT_TYPE, contentTypeVal);
}
static boolean writeContentNow(ExecContext context, HttpRequest request) {
return !request.headers().contains(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE, true);
}
static String computeHost(URI uri) {
int port = uri.getPort();
if (port <= 0) {
return uri.getHost();
}
if (uri.getScheme().equalsIgnoreCase(Scheme.HTTP.name0())
&& uri.getPort() == Scheme.HTTP.port()) {
return uri.getHost();
}
if (uri.getScheme().equalsIgnoreCase(Scheme.HTTPS.name0())
&& uri.getPort() == Scheme.HTTPS.port()) {
return uri.getHost();
}
return uri.getHost() + ":" + uri.getPort();
}
private static void addHostIfAbsent(HttpRequest request, Supplier<String> host) {
if (!HOST_ABSENT.test(request)) {
return;
}
final String hostVal = host.get();
if (LoggerUtils.logger().isDebugEnabled()) {
LoggerUtils.logger().debug("host is absent, try to set default value: {}, uri: {}",
hostVal, request.uri().toString());
}
// Reuse hostVal instead of invoking the supplier a second time
request.headers().set(HttpHeaderNames.HOST, hostVal);
}
private static Http2ConnectionHandler getH2Handler(Channel channel) throws ConnectException {
Http2ConnectionHandler handler;
if ((handler = channel.pipeline().get(Http2ConnectionHandler.class)) != null) {
return handler;
}
throw CONNECT_INACTIVE;
}
private static class StreamIdExhaustedException extends RuntimeException {
private static final long serialVersionUID = 6638917105569802492L;
private StreamIdExhaustedException(String msg) {
super(msg);
}
}
}
|
java
|
<filename>src/providers/app-identity/app-identity.ts
import { Injectable } from '@angular/core';
import { Logger } from '../../providers/logger/logger';
// providers
import { PersistenceProvider } from '../persistence/persistence';
import * as bitauthService from 'bitauth';
import * as _ from 'lodash';
@Injectable()
export class AppIdentityProvider {
constructor(
private logger: Logger,
private persistenceProvider: PersistenceProvider
) {
this.logger.debug('AppIdentityProvider initialized');
}
public getIdentity(network, cb) {
let pubkey;
let isNew;
this.persistenceProvider.getAppIdentity(network).then(data => {
let appIdentity = data || {};
if (_.isEmpty(appIdentity) || (appIdentity && !appIdentity.priv)) {
isNew = true;
appIdentity = bitauthService.generateSin();
}
try {
pubkey = bitauthService.getPublicKeyFromPrivateKey(appIdentity.priv);
bitauthService.getSinFromPublicKey(pubkey);
if (isNew)
this.persistenceProvider.setAppIdentity(network, appIdentity);
} catch (e) {
return cb(e);
}
return cb(null, appIdentity);
});
}
}
|
typescript
|
{
"iisSettings": {
"windowsAuthentication": false,
"anonymousAuthentication": true,
"iisExpress": {
"applicationUrl": "http://localhost:1843/",
"sslPort": 0
}
},
"profiles": {
"IIS Express": {
"commandName": "IISExpress",
"launchBrowser": true,
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"neo3-gui": {
"commandName": "Project",
"launchBrowser": true,
"environmentVariables": {
"NEO_NETWORK": "private",
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"neo3-gui-testnet": {
"commandName": "Project",
"launchBrowser": true,
"environmentVariables": {
"NEO_NETWORK": "testnet",
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"neo3-gui-mainnet": {
"commandName": "Project",
"launchBrowser": true,
"environmentVariables": {
"NEO_NETWORK": "mainnet",
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}
}
|
json
|
ITIAZZA 2K16 is a national event organized by the IT department of K.K.W.I.E.E.R. The event consists of many technical and non-technical competitions such as Assemble-It, PC-Hunt, and games. I was an active volunteer at this event and worked as part of a team, which also demonstrates my teamwork skills.
|
english
|
#!/usr/bin/env python
r"""Aggregate, create, and save 1D and 2D plots.
"""
import pdb # noqa: F401
from matplotlib import pyplot as plt
from . import base
class Scatter(base.PlotWithZdata, base.CbarMaker):
r"""Create a scatter plot.
Properties
----------
Methods
-------
Abstract Properties
-------------------
Abstract Methods
----------------
Notes
-----
"""
def __init__(self, x, y, z=None, clip_data=False):
r"""
Parameters
----------
x, y: pd.Series
Data defining (x, y) coordinates.
z: pd.Series, optional
If not None, used to specify the color for each point.
clip_data: bool
If True, remove extreme values at the 0.001 and 0.999 percentiles.
"""
super(Scatter, self).__init__()
self.set_data(x, y, z, clip_data)
self._labels = base.AxesLabels(x="x", y="y", z="z" if z is not None else None)
self._log = base.LogAxes(x=False, y=False)
self.set_path(None)
def _format_axis(self, ax, collection):
super()._format_axis(ax)
x = self.data.loc[:, "x"]
minx, maxx = x.min(), x.max()
y = self.data.loc[:, "y"]
miny, maxy = y.min(), y.max()
# Pulled from the end of `ax.pcolormesh`.
collection.sticky_edges.x[:] = [minx, maxx]
collection.sticky_edges.y[:] = [miny, maxy]
corners = (minx, miny), (maxx, maxy)
ax.update_datalim(corners)
ax.autoscale_view()
def make_plot(self, ax=None, cbar=True, cbar_kwargs=None, **kwargs):
r"""
Make a scatter plot on `ax` using `ax.scatter`.
Parameters
----------
ax: mpl.axes.Axes, None
If None, create an `Axes` instance from `plt.subplots`.
cbar: bool
If True, create color bar with `labels.z`.
cbar_kwargs: dict, None
If not None, kwargs passed to `self._make_cbar`.
kwargs:
Passed to `ax.scatter`.
"""
if ax is None:
fig, ax = plt.subplots()
data = self.data
if self.clip:
data = self.clip_data(data, self.clip)
if data.loc[:, "z"].unique().size > 1:
zkey = "z"
else:
zkey = None
collection = ax.scatter(x="x", y="y", c=zkey, data=data, **kwargs)
if cbar and zkey is not None:
if cbar_kwargs is None:
cbar_kwargs = dict()
if "cax" not in cbar_kwargs.keys() and "ax" not in cbar_kwargs.keys():
cbar_kwargs["ax"] = ax
cbar = self._make_cbar(collection, **cbar_kwargs)
else:
cbar = None
self._format_axis(ax, collection)
return ax, cbar
|
python
|
<reponame>mvune/links-overviewer
{
"manifest_version": 2,
"name": "Links Overviewer",
"version": "0.1.2",
"description": "Generates a slick overview of user selected links.",
"background": {
"scripts": ["background.js", "data/image-request.js"]
},
"permissions": [
"contextMenus",
"tabs",
"activeTab"
]
}
|
json
|
<filename>src/main/java/io/github/opencubicchunks/cubicchunks/chunk/util/CCWorldGenUtils.java
package io.github.opencubicchunks.cubicchunks.chunk.util;
import io.github.opencubicchunks.cubicchunks.chunk.IBigCube;
import io.github.opencubicchunks.cubicchunks.utils.Coords;
import net.minecraft.world.level.ChunkPos;
import net.minecraft.world.level.chunk.LevelChunkSection;
public class CCWorldGenUtils {
public static boolean areSectionsEmpty(int cubeY, ChunkPos pos, IBigCube cube) {
int emptySections = 0;
for (int yScan = 0; yScan < IBigCube.DIAMETER_IN_SECTIONS; yScan++) {
int sectionY = Coords.cubeToSection(cubeY, yScan);
int sectionIndex = Coords.sectionToIndex(pos.x, sectionY, pos.z);
LevelChunkSection cubeSection = cube.getCubeSections()[sectionIndex];
if (LevelChunkSection.isEmpty(cubeSection)) {
emptySections++;
}
if (emptySections == IBigCube.DIAMETER_IN_SECTIONS) {
return true;
}
}
return false;
}
}
|
java
|
date: 2017-02-13
featuredimage:
featured: True
author: <NAME>
excerpt: Part 1 of this article series focused on the visual (since that’s usually what folks are looking for help with when it comes to design), but it’s important to remember that usability trumps beauty. The most gorgeous website in the world is useless if folks using that website can’t achieve what they want to do.
title: Design for non-designers (part 2)
external: https://blog.prototypr.io/design-for-non-designers-part-2-74d7ab3124f6
category: design, instruction
{% extends "post.html" %}
{% block body %}
{% filter markdown %}
{% endfilter %}
{% endblock body %}
|
html
|
<reponame>chaburkland/class-only-design<gh_stars>1-10
# -*- coding: utf-8 -*-
"""Top-level package for Class Only."""
__author__ = """<NAME>"""
__email__ = "<EMAIL>"
__version__ = "0.3.0"
from class_only_design.api import ClassOnly
from class_only_design.api import Namespace
from class_only_design.api import constant
from class_only_design.constants import autoname
|
python
|
Pragati Vihar resident Akash Singh planned the kidnapping with his two friends in order to get money from his family.
By India Today Web Desk: A 20-year-old man was arrested by Ghaziabad police on Tuesday for allegedly faking his own kidnapping in a ploy to buy a car.
According to police, Pragati Vihar resident Akash Singh planned the kidnapping with his two friends in order to get money from his family.
As part of the plan, he allegedly rented a room in a Noida hotel from where he rehearsed the plot and even made the ransom calls, said a report in Hindustan Times.
On Monday, Singh left his home at around 8 am telling his mother that his friend had called him and that he would be back soon.
“I kept waiting for him but he did not return till late evening and even searched for him, but could not find him. At around 11pm on Monday, we received a call from an unknown person who said my son was with him and demanded Rs 2 lakh in ransom. He threatened to kill my son if I revealed the information to anyone,” Kiran Singh, Akash’s mother, said in her police complaint.
Akash’s parents, however, went to the police and registered a complaint. The caller had used the same mobile number and made four calls to the family till Tuesday afternoon.
Using electronic surveillance, police zeroed in on the hotel in Noida Sector 22 and busted the plot.
According to police, Akash, who had studied up to Class 8, had planned his own kidnapping along with his friends Ankit Kumar and Karan Kumar.
“The prime suspect had been putting pressure on his family for a car. He got a bike that belonged to his elder brother, but he was not satisfied and planned the plot,” said Anshu Jain, circle officer Indirapuram.
Both Akash and Ankit were arrested by police. The police said that the third suspect is absconding and will be arrested soon.
|
english
|
{
"type": "origins:modify_exhaustion",
"modifier": {
"value": 0.8,
"operation": "multiply_base"
},
"name": "Fast Metabolism",
"description": "A small body performing great movements requires a large amount of food!"
}
|
json
|
// NewCoder/src/Algorithm/CD158.java
package Algorithm;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.StreamTokenizer;
// Insert a new node into a sorted circular singly linked list
public class CD158 {
private static StreamTokenizer st = new StreamTokenizer(new BufferedReader(new InputStreamReader(System.in)));
public static int nextInt() {
try {
st.nextToken();
return (int) st.nval;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static Node read(int n) {
Node dummyNode = new Node(-1);
Node cur = dummyNode;
for (int i = 0; i < n; i++) {
cur.next = new Node(nextInt());
cur = cur.next;
}
cur.next = dummyNode.next;
return dummyNode.next;
}
public static class Node {
public int val;
public Node next;
public Node(int val) {
this.val = val;
}
}
public static Node insertNode(Node head, int num) {
// If the list is empty, the new node points to itself and becomes the head
if (head == null) {
Node node = new Node(num);
node.next = node;
return node;
}
// If the list is not empty, walk the circle looking for the insertion point
Node node = new Node(num);
Node cur = head.next;
Node pre = head;
while (cur != head) {
if (node.val >= pre.val && node.val <= cur.val) {
pre.next = node;
node.next = cur;
return head;
}
cur = cur.next;
pre = pre.next;
}
// If we traverse a full circle without inserting, there are two cases:
// 1. num is smaller than head's value: insert before head; the new node becomes the new head.
if (node.val < head.val) {
node.next = head;
pre.next = node;
return node;
}
// 2. num is greater than or equal to the tail's value: insert after the tail.
// (">=" here also covers the case where num equals every existing value,
// e.g. a single-node list with num == head.val, which previously fell
// through to "return null" and caused a NullPointerException in main.)
pre.next = node;
node.next = head;
return head;
}
public static void main(String[] args) {
int n = nextInt();
Node head = read(n);
int num = nextInt();
Node newHead = insertNode(head, num);
n++;
while(n-->0){
System.out.print(newHead.val+" ");
newHead = newHead.next;
}
}
}
|
java
|
Aljamain Sterling believes his fellow bantamweight stars Petr Yan and Sean O'Malley are benefitting from preferential treatment from the UFC boss, aka "Dana White privilege."
The reigning 135-pound champion blasted his rivals in a recent appearance on The Residency Podcast. Asked who he believes gets special treatment, Sterling said:
"Petr Yan got a privilege. He got a handout of a title fight, fighting a guy who's coming off of a loss in the division. That's the major one right there. You can see it here and there. I'm not saying this is not for everybody but you get even [Sean] O'Malley, who's getting these fights that are tailor-made for him to look like a superstar."
Sterling specifically mentioned former champion Yan and rising star O'Malley. According to 'Funk Master', the fact that Yan was booked in an interim title fight against Cory Sandhagen, who lost to TJ Dillashaw in his last outing, proves that the UFC matchmakers are doing him favors. Meanwhile, Sterling believes the UFC has been giving O'Malley easy assignments as far as opponents are concerned.
However, it's important to note that Sandhagen is the logical choice as he's the highest-ranked fighter available for a short-notice bout. Dillashaw, who returned from a two-year suspension this year, is unavailable as he's currently recovering from knee surgery.
Aljamain Sterling was originally scheduled to defend his title against Yan in a rematch of their controversial first bout. However, the American was forced to pull out as he wasn't cleared by doctors to compete.
Watch Aljamain Sterling's interview below:
Aljamain Sterling claims he wasn't given 'Dana White privilege'
Aljamain Sterling said that unlike his fellow stars, he did not benefit from special favors. The 32-year-old mentioned:
"I never got those privileges when I came up. I've been thrown into the fire. I've been tested when you've got some of these guys, like I said, Dana White privileges are getting in the way of certain matchups and things like that. I wanna fight everybody. So at the end of the day, it is what it is. As long as you work hard and you truly believe in yourself, anything can happen."
The term "Dana White privilege" was coined by lightweight star Tony Ferguson. It became a fixture in MMA jargon when 'El Cucuy' accused former Bellator lightweight champ Michael Chandler of having "Dana White privilege" as he was awarded a title shot after just one fight in the UFC.
|
english
|
# Repository: nishio/atcoder
if 1:  # read input from stdin; set to 0 to use the synthetic workload below
N = int(input())
Q = int(input())
queries = []
for i in range(Q):
queries.append([int(x) for x in input().split()])
else:  # synthetic stress-test input for local benchmarking
N = 100000
queries = [
[2, 1, 2],
[4, 1, 2]
] * 10000
queries.append([4, 1, 2])
Q = len(queries)
# Lazy simulation: track row/column permutations and a transpose flag
# instead of physically moving matrix data.
isTransposed = False
xs = list(range(N + 1))  # xs[i] = original row index currently at row i (1-indexed)
ys = list(range(N + 1))  # ys[j] = original column index currently at column j
for q in queries:
f = q[0]
if f == 4:
i, j = q[1:]
if isTransposed:
i, j = j, i
print(N * (xs[i] - 1) + ys[j] - 1)
elif f == 3:
isTransposed = not isTransposed
else:
i, j = q[1:]
if (f == 1) ^ isTransposed:
xs[i], xs[j] = xs[j], xs[i]
else:
ys[i], ys[j] = ys[j], ys[i]
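The lazy-permutation trick above can be sanity-checked against a naive simulation that physically swaps rows and columns and transposes. This is a sketch: the value formula `N*(xs[i]-1)+ys[j]-1` mirrors the code above, and it assumes a type-3 (transpose) query line carries no arguments.

```python
# Cross-check the lazy approach (permutations + transpose flag) against a
# naive matrix that really performs every swap and transpose.
def lazy_answers(N, queries):
    isTransposed = False
    xs = list(range(N + 1))  # original row now sitting at row i
    ys = list(range(N + 1))  # original column now sitting at column j
    out = []
    for q in queries:
        f = q[0]
        if f == 4:  # value query at (i, j)
            i, j = q[1:]
            if isTransposed:
                i, j = j, i
            out.append(N * (xs[i] - 1) + ys[j] - 1)
        elif f == 3:  # transpose toggles the flag only
            isTransposed = not isTransposed
        else:  # f == 1: swap rows i and j, f == 2: swap columns i and j
            i, j = q[1:]
            if (f == 1) ^ isTransposed:
                xs[i], xs[j] = xs[j], xs[i]
            else:
                ys[i], ys[j] = ys[j], ys[i]
    return out

def naive_answers(N, queries):
    # Cell (i, j), 1-indexed, initially holds N*(i-1) + (j-1), matching
    # the value formula in lazy_answers.
    A = [[N * i + j for j in range(N)] for i in range(N)]
    out = []
    for q in queries:
        f = q[0]
        if f == 1:  # swap two rows
            i, j = q[1] - 1, q[2] - 1
            A[i], A[j] = A[j], A[i]
        elif f == 2:  # swap two columns
            i, j = q[1] - 1, q[2] - 1
            for row in A:
                row[i], row[j] = row[j], row[i]
        elif f == 3:  # transpose the whole matrix
            A = [list(row) for row in zip(*A)]
        else:  # value query
            out.append(A[q[1] - 1][q[2] - 1])
    return out

queries = [[1, 1, 2], [2, 2, 3], [4, 1, 1], [3], [4, 1, 2], [2, 1, 3], [4, 3, 3]]
print(lazy_answers(3, queries))   # [3, 0, 4]
print(naive_answers(3, queries))  # [3, 0, 4]
```

The lazy version answers each query in O(1) instead of O(N) or O(N^2), which is what makes the 10^5-query workload above feasible.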
|
python
|
import { Constants, Feature } from 'alpheios-data-models'
import LatinVerbIrregularBaseView from '@views/lang/latin/verb/irregular/latin-verb-irregular-base-view.js'
import LatinVerbIrregularView from '@views/lang/latin/verb/irregular/latin-verb-irregular-view.js'
import LatinVerbIrregularVoiceView from '@views/lang/latin/verb/irregular/latin-verb-irregular-voice-view.js'
import LatinVerbParticipleIrregularView from '@views/lang/latin/verb/irregular/latin-verb-participle-irregular-view.js'
import Table from '@views/lib/table'
export default class LatinVerbSupineIrregularView extends LatinVerbIrregularBaseView {
constructor (homonym, inflectionData) {
super(homonym, inflectionData)
this.id = 'verbSupineConjugationIrregular'
this.name = 'verb-supine-irregular'
this.title = 'Verb Supine Conjugation (Irregular)'
if (this.isImplemented) {
this.createTable()
}
}
static get viewID () {
return 'latin_verb_supine_irregular_view'
}
static get partsOfSpeech () {
return [Constants.POFS_SUPINE]
}
createTable () {
this.table = new Table([this.features.cases])
let features = this.table.features // eslint-disable-line prefer-const
features.columns = []
features.rows = [this.features.cases]
features.columnRowTitles = [this.features.cases]
features.fullWidthRowTitles = []
}
static matchFilter (languageID, inflections) {
return Boolean(
this.languageID === languageID &&
inflections.some(i => this.enabledForInflection(i)))
}
/**
* Gets inflection data for a homonym. For this view we need to use irregular verb inflections only.
* @param {Homonym} homonym - A homonym for which inflection data needs to be retrieved
* @param {Object} options - Options forwarded to the dataset when creating the inflection set.
* @return {InflectionSet} Resulting inflection set.
*/
static getInflectionsData (homonym, options) {
// Select only those inflections that are required for this view
const inflections = homonym.inflections.filter(
i => i[Feature.types.part].value === this.mainPartOfSpeech &&
i.constraints && i.constraints.irregular
)
return this.dataset.createInflectionSet(this.mainPartOfSpeech, inflections, options)
}
/**
* A list of constructors of linked views.
* @param {Homonym} homonym - A homonym for which linked views will be created.
* @return {View[]}
*/
static linkedViewConstructors (homonym) {
return [LatinVerbIrregularView, LatinVerbIrregularVoiceView, LatinVerbParticipleIrregularView]
}
}
|
javascript
|
/*
* All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
* its licensors.
*
* For complete copyright and license terms please see the LICENSE at the root of this
* distribution (the "License"). All use of this software is governed by the License,
* or, if provided, by the license below or the license accompanying this file. Do not
* remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
// Implementation of Twitch ChatPlay feature
#include "ChatPlay_precompiled.h"
#include <AzCore/std/containers/set.h>
#include <AzCore/std/sort.h>
#include <AzCore/std/smart_ptr/enable_shared_from_this.h>
#include <AzCore/std/smart_ptr/make_shared.h>
#include <AzCore/std/smart_ptr/shared_ptr.h>
#include <AzCore/std/smart_ptr/unique_ptr.h>
#include <AzCore/std/smart_ptr/weak_ptr.h>
#include <AzCore/std/string/conversions.h>
#include <AzCore/std/string/regex.h>
#include <HttpRequestor/HttpRequestorBus.h>
#include "ChatPlay.h"
#include "ChatPlaySystemComponent.h"
#include "ChatPlayCVars.h"
#include "IRCStream.h"
#include "LibDyad.h"
#include "StringUtils.h"
#include "WebSocketStream.h"
namespace ChatPlay
{
class ChatPlayImpl;
/******************************************************************************/
// Implementation of ChatChannel
//
class ChatChannelImpl
: public ChatChannel
, public AZStd::enable_shared_from_this < ChatChannelImpl >
{
public:
ChatChannelImpl(const AZStd::string& channelId, ChatPlayImpl* chatPlay);
~ChatChannelImpl() override;
const AZStd::string& GetChannelId() const;
void Connect() override;
void Disconnect() override;
ConnectionState GetConnectionState() override;
CallbackToken RegisterConnectionStateChange(const StateCallback& callback) override;
void UnregisterConnectionStateChange(CallbackToken token) override;
CallbackToken RegisterKeyword(const AZStd::string& keyword, const KeywordCallback& callback) override;
void UnregisterKeyword(CallbackToken token) override;
private:
static CallbackToken ms_callbackToken;
// Channel Id can only be set on construction; makes life more predictable
const AZStd::string m_channelId;
// Reference back to ChatPlay instance
ChatPlayImpl* m_chatPlay;
// Maps tokens to state change callbacks (Used on Dispatch Event Thread)
AZStd::map<CallbackToken, StateCallback> m_stateCallbacks;
// Maps tokens to keyword callbacks (Used on Dispatch Event Thread)
AZStd::map<CallbackToken, KeywordCallback> m_keywordCallbacks;
// Maps tokens for keyword callbacks to the keyword they represent (Used on Dispatch Event Thread)
AZStd::map<CallbackToken, AZStd::string> m_tokenToKeyword;
// Maps keywords to the callback token; reverse of m_keywordCallbacks (Used on Dispatch Event Thread)
AZStd::multimap<AZStd::string, CallbackToken> m_keywordTokens;
// Requires synchronized access (with m_keywordLock)
AZStd::unordered_map<AZStd::string, AZStd::regex> m_keywords;
// Protects the keyword list (see m_keywords)
AZStd::mutex m_keywordLock;
// May only be changed by the DISPATCH EVENT THREAD
ConnectionState m_connectionState;
// The epoch is used to tell which instance of a chatbot we're receiving messages from
dyad::StreamId m_epoch;
// List of IRC server hosts and ports associated with this channel
HostInfoList m_hostInfoList;
// Index of host we are connected to
int m_connectedHostIndex;
// Tracks whether we managed a successful connection
bool m_successfulConnection;
// Message stream handler (IRC or WebSocket)
AZStd::unique_ptr<IStreamHandler> m_streamHandler;
// Sends a notification to the Dispatch Event thread about state changes
void PostConnectionState(dyad::StreamId epoch, ConnectionState state);
// Changes the connection state (for use on Dispatch Event Thread)
void ChangeConnectionState(dyad::StreamId, ConnectionState state);
void OnChatbotReceived(dyad::StreamId epoch, const AZStd::string& msg);
// Event to be raised when a keyword is detected
void KeywordEvent(dyad::StreamId epoch, const AZStd::string& keyword, const AZStd::string& match, const AZStd::string& username);
void OnStreamCreate(dyad::CDyadStream&);
void OnStreamEvent(dyad::CDyadEvent&);
// Processes the data returned by the Http request and stores the list of server hosts and ports in m_hostAndPortList
void ProcessHostList(const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode);
// Helper function for processing returned host list
bool PopulateHostInfoList(HostInfoList& hostInfoList, const Aws::Utils::Json::JsonValue& jsonValue, bool isWebsocket);
// Construct the API URL for the server list request
AZStd::string MakeServerListURL();
};
/******************************************************************************/
// Implementation of ChatPlay
//
class ChatPlayImpl
: public ChatPlay
, public AZStd::enable_shared_from_this < ChatPlayImpl >
{
public:
ChatPlayImpl();
~ChatPlayImpl() override;
const AZStd::shared_ptr<ChatPlayCVars>& GetVars();
const std::shared_ptr<dyad::IDyad>& GetDyad();
AZStd::weak_ptr<ChatChannel> GetChatChannel(const AZStd::string& channelId) override;
void DestroyChatChannel(const AZStd::string& channelId) override;
void DisconnectAll() override;
AZStd::size_t DispatchEvents() override;
typedef AZStd::function<void()> ChatPlayEvent;
void RegisterEvent(const ChatPlayEvent& event);
void RegisterCredentials(const AZStd::string& username, const AZStd::string& oauthToken) override;
void UnregisterCredentials(const AZStd::string& username) override;
void UnregisterAllCredentials() override;
// Returns the registered oauth token associated with the given username
// Returns null if no credentials registered for given username
const char* GetOAuthToken(const AZStd::string& username);
void SendWhisper(
const AZStd::string& sender,
const AZStd::string& recipient,
const AZStd::string& message,
const WhisperCallback& callback) override;
ChatPlayVoteManager* GetVoteManager() const override;
private:
// Reference to CVar manager
AZStd::shared_ptr<ChatPlayCVars> m_vars;
// Dictionary of pointers to ChatChannels
AZStd::map<AZStd::string, AZStd::shared_ptr<ChatChannel>> m_channelMap;
// Protects access to the channels map
AZStd::mutex m_channelLock;
// Protects access to the event queue
AZStd::mutex m_eventLock;
// Placeholder: Events as callbacks
AZStd::vector<ChatPlayEvent> m_events;
// Reference to dyad instance
std::shared_ptr<dyad::IDyad> m_dyad;
// Container for Twitch IRC credentials
AZStd::map<AZStd::string, AZStd::string> m_credentialMap;
// Vote manager
AZStd::unique_ptr<ChatPlayVoteManager> m_voteManager;
};
/******************************************************************************/
// Implementation of ChatPlayVote
//
class ChatPlayVoteImpl
: public ChatPlayVote
{
public:
ChatPlayVoteImpl(const AZStd::string& name, ChatPlayImpl* chatplay);
~ChatPlayVoteImpl();
const AZStd::string& GetName() const override;
bool AddOption(const AZStd::string& name) override;
void RemoveOption(const AZStd::string& name) override;
void ConfigureOption(const AZStd::string& optionName, int count, bool enabled) override;
bool OptionExists(const AZStd::string& name) override;
int GetOptionCount(const AZStd::string& optionName) override;
void SetOptionCount(const AZStd::string& optionName, int count) override;
bool GetOptionEnabled(const AZStd::string& optionName) override;
void SetOptionEnabled(const AZStd::string& optionName, bool enabled) override;
bool SetChannel(const AZStd::string& name) override;
void ClearChannel() override;
void Visit(const AZStd::function<void(VoteOption& option)>& visitor) override;
void SetEnableStateAll(bool state) override;
void SetCountAll(int count) override;
void SetVoterLimiting(bool limit) override;
void ResetVotedList() override;
private:
ChatPlayImpl* m_chatplay;
const AZStd::string m_name;
AZStd::map<AZStd::string, VoteOption> m_options;
AZStd::weak_ptr<ChatChannel> m_channel;
AZStd::map<AZStd::string, CallbackToken> m_callbacks;
AZStd::mutex m_optionLock;
bool m_voterLimiting;
AZStd::set<AZStd::string> m_votedList;
void OnKeywordSignal(const AZStd::string& option, const AZStd::string& match, const AZStd::string& username);
void RegisterOptions();
void UnregisterOptions();
};
/******************************************************************************/
// Implementation of ChatPlayVoteManager
//
class ChatPlayVoteManagerImpl
: public ChatPlayVoteManager
{
public:
explicit ChatPlayVoteManagerImpl(ChatPlayImpl* chatplay);
~ChatPlayVoteManagerImpl() override = default;
AZStd::weak_ptr<ChatPlayVote> GetVote(const AZStd::string& voteId) override;
void DestroyVote(const AZStd::string& voteId) override;
private:
AZStd::map<AZStd::string, AZStd::shared_ptr<ChatPlayVote>> m_votes;
AZStd::mutex m_votesLock;
ChatPlayImpl* m_chatplay;
};
/******************************************************************************/
// Helper class for sending one-shot whispers
//
class Whisperer
: public AZStd::enable_shared_from_this < Whisperer >
{
public:
Whisperer(const AZStd::weak_ptr<ChatPlayImpl>& chatPlay,
const AZStd::string& sender, const AZStd::string& recipient, const AZStd::string& message,
const WhisperCallback& callback);
void CreateStream();
private:
void OnStreamCreate(dyad::CDyadStream&);
void OnStreamEvent(dyad::CDyadEvent&);
AZStd::string MakeGroupServerListURL();
void QueueCallback(WhisperResult result);
void ProcessHostList(const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode);
bool PopulateHostInfoList(HostInfoList& hostInfoList, const Aws::Utils::Json::JsonValue& jsonValue, bool isWebsocket);
// Reference back to ChatPlay instance
AZStd::weak_ptr<ChatPlayImpl> m_chatPlay;
// List of group IRC server hosts and ports
HostInfoList m_hostInfoList;
// Index of host we are connected to
int m_connectedHostIndex;
// Tracks whether we managed a successful connection
bool m_successfulConnection;
// State
bool m_queuedCallback;
// Message stream handler (IRC or WebSocket)
AZStd::unique_ptr<IStreamHandler> m_streamHandler;
// Parameters
AZStd::string m_sender;
AZStd::string m_oauthToken;
AZStd::string m_recipient;
AZStd::string m_message;
// Callback
WhisperCallback m_callback;
};
/******************************************************************************/
CallbackToken ChatChannelImpl::ms_callbackToken = 0;
ChatChannelImpl::ChatChannelImpl(const AZStd::string& channelId, ChatPlayImpl* chatPlay)
: m_channelId(channelId)
, m_chatPlay(chatPlay)
, m_connectionState(ConnectionState::Disconnected)
, m_epoch(dyad::InvalidStreamId)
, m_connectedHostIndex(-1)
, m_successfulConnection(false)
{
ChatPlayChannelRequestBus::Handler::BusConnect(m_channelId);
}
ChatChannelImpl::~ChatChannelImpl()
{
ChatPlayChannelRequestBus::Handler::BusDisconnect(m_channelId);
// Disconnect is idempotent, so we can call this unconditionally
Disconnect();
}
const AZStd::string& ChatChannelImpl::GetChannelId() const
{
return m_channelId;
}
void ChatChannelImpl::Connect()
{
/* DISPATCH EVENT THREAD */
switch (m_connectionState)
{
case ConnectionState::Connected:
case ConnectionState::Connecting:
// Connection already established or in progress; do nothing
break;
case ConnectionState::Disconnected:
case ConnectionState::Error:
{
// Prepare the API request for the list of IRC servers
AZStd::string requestUrl = MakeServerListURL().c_str();
HttpRequestor::Callback cb = [=](const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode)
{
/* HTTP REQUEST MANAGER THREAD */
ProcessHostList(jsonValue, responseCode);
// Create a weak reference for later in case the channel is discarded
// while the stream is still open
auto weak = AZStd::weak_ptr<ChatChannelImpl>(shared_from_this());
// Create a handler that will dispatch the event if the channel is still live
auto eventHandler = [=](dyad::CDyadEvent& event)
{
if (auto shared = weak.lock())
{
shared->OnStreamEvent(event);
}
};
// Create a handler that will dispatch the event if the channel is still live
auto createHandler = [=](dyad::CDyadStream& stream)
{
if (auto shared = weak.lock())
{
shared->OnStreamCreate(stream);
}
};
// Request a stream from Dyad and update the connection state
m_epoch = m_chatPlay->GetDyad()->CreateStream(eventHandler, createHandler);
};
HttpRequestor::Headers headers;
headers["Client-ID"] = m_chatPlay->GetVars()->GetClientID();
EBUS_EVENT(HttpRequestor::HttpRequestorRequestBus, AddRequestWithHeaders, requestUrl, Aws::Http::HttpMethod::HTTP_GET, headers, cb);
ChangeConnectionState(m_epoch, ConnectionState::Connecting);
break;
}
}
}
void ChatChannelImpl::Disconnect()
{
m_chatPlay->GetDyad()->CloseStream(m_epoch);
}
ConnectionState ChatChannelImpl::GetConnectionState()
{
return m_connectionState;
}
CallbackToken ChatChannelImpl::RegisterConnectionStateChange(const StateCallback& callback)
{
CallbackToken token = ++ms_callbackToken;
m_stateCallbacks.emplace(token, callback);
return token;
}
void ChatChannelImpl::UnregisterConnectionStateChange(CallbackToken token)
{
m_stateCallbacks.erase(token);
}
CallbackToken ChatChannelImpl::RegisterKeyword(const AZStd::string& keyword, const KeywordCallback& callback)
{
CallbackToken token = ++ms_callbackToken;
m_keywordCallbacks.emplace(token, callback);
m_tokenToKeyword.emplace(token, keyword);
m_keywordTokens.emplace(keyword, token);
if (m_keywordTokens.count(keyword) == 1)
{
// If the count is 1, then the keyword must have been just added.
// Therefore update the synchronized map with the new keyword
AZStd::regex re(keyword.c_str(), AZStd::regex_constants::icase | AZStd::regex_constants::optimize);
AZStd::lock_guard<AZStd::mutex> lock(m_keywordLock);
m_keywords.emplace(keyword, AZStd::move(re));
}
return token;
}
void ChatChannelImpl::UnregisterKeyword(CallbackToken token)
{
auto i = m_tokenToKeyword.find(token);
if (i != m_tokenToKeyword.end())
{
const AZStd::string& keyword = i->second;
auto lower = m_keywordTokens.lower_bound(keyword);
auto upper = m_keywordTokens.upper_bound(keyword);
for (auto j = lower; j != upper; ++j)
{
if (j->second == token)
{
m_keywordTokens.erase(j);
break;
}
}
if (!m_keywordTokens.count(keyword))
{
AZStd::lock_guard<AZStd::mutex> lock(m_keywordLock);
m_keywords.erase(keyword);
}
}
m_keywordCallbacks.erase(token);
m_tokenToKeyword.erase(token);
}
void ChatChannelImpl::PostConnectionState(dyad::StreamId epoch, ConnectionState state)
{
/* ANY THREAD */
m_chatPlay->RegisterEvent(AZStd::bind(&ChatChannelImpl::ChangeConnectionState, this, epoch, state));
}
void ChatChannelImpl::ChangeConnectionState(dyad::StreamId epoch, ConnectionState state)
{
/* DISPATCH EVENT THREAD */
if (epoch != m_epoch)
{
// Discard old events
return;
}
if (state == ConnectionState::Disconnected && m_connectionState == ConnectionState::Error)
{
// Error state persists until connection
return;
}
m_connectionState = state;
ChatPlayChannelNotificationBus::Event(m_channelId, &ChatPlayChannelNotificationBus::Events::OnConnectionStateChanged, m_connectionState);
if (!m_stateCallbacks.empty())
{
// Extracting the tokens to a local map prevents modifications to the map during one
// of the callbacks from invalidating our iterator
AZStd::vector<CallbackToken> tokens;
tokens.reserve(m_stateCallbacks.size());
for (const auto& callback : m_stateCallbacks)
{
tokens.push_back(callback.first);
}
// Loop through tokens
for (CallbackToken t : tokens)
{
// Check that the token is still valid (i.e. that another callback didn't remove it)
auto it = m_stateCallbacks.find(t);
if (it != m_stateCallbacks.end())
{
if (it->second)
{
it->second(m_connectionState);
}
}
}
}
}
void ChatChannelImpl::OnChatbotReceived(dyad::StreamId epoch, const AZStd::string& msg)
{
/*DYAD THREAD*/
// message = [":" prefix SPACE] command [SPACE params] crlf
// e.g. ":nick!user@host PRIVMSG #channel :hello" -> prefix = "nick!user@host", command = "PRIVMSG"
AZStd::string prefix;
AZStd::string command;
AZStd::string::size_type marker = 0;
if (!msg.empty() && msg[0] == ':')
{
marker = msg.find(' ');
prefix = msg.substr(1, marker - 1);
}
if (marker != AZStd::string::npos)
{
// +1 to skip ' '
auto start = marker + 1;
marker = msg.find(' ', start);
command = msg.substr(start, marker - start);
}
if (command == "PRIVMSG")
{
// params: <msgtarget> SPACE <text to be sent>
AZStd::string target;
AZStd::string params;
if (marker != AZStd::string::npos)
{
// +1 to skip ' '
auto start = marker + 1;
marker = msg.find(' ', start);
target = msg.substr(start, marker - start);
}
if (marker != AZStd::string::npos)
{
// +2 to skip " :"
if (msg.size() > marker + 2)
{
params = msg.substr(marker + 2); // message
// :sender!<EMAIL> PRIVMSG #recipient :hello
marker = prefix.find('!');
AZStd::string username = prefix.substr(0, marker);
AZStd::lock_guard<AZStd::mutex> lock(m_keywordLock);
for (auto& kvp : m_keywords)
{
const AZStd::string& keyword = kvp.first;
const AZStd::regex& regex = kvp.second;
AZStd::cmatch match;
if (AZStd::regex_search(params.c_str(), match, regex))
{
AZStd::string matched(AZStd::string(match[0]).c_str());
m_chatPlay->RegisterEvent(AZStd::bind(&ChatChannelImpl::KeywordEvent, this, epoch, keyword, matched, username));
}
}
}
}
}
}
void ChatChannelImpl::KeywordEvent(dyad::StreamId epoch, const AZStd::string& keyword, const AZStd::string& match, const AZStd::string& username)
{
/*DISPATCH EVENT THREAD*/
if (epoch != m_epoch)
{
// Discard old events
return;
}
ChatPlayChannelNotificationBus::Event(m_channelId, &ChatPlayChannelNotificationBus::Events::OnKeywordMatched, keyword, match, username);
auto lower = m_keywordTokens.lower_bound(keyword);
auto upper = m_keywordTokens.upper_bound(keyword);
for (auto i = lower; i != upper; ++i)
{
auto callback = m_keywordCallbacks[i->second];
if (callback)
{
callback(match, username);
}
}
}
void ChatChannelImpl::OnStreamCreate(dyad::CDyadStream& stream)
{
/* DYAD THREAD */
if (stream.GetId() != m_epoch)
{
// Discard old events
return;
}
bool connected = false;
for (int i = 0; i < m_hostInfoList.size(); i++)
{
HostInfo& hostInfo = m_hostInfoList[i];
if (hostInfo.connectionFailed) {
// This hostinfo failed already during this connection attempt, try next one
continue;
}
AZ_TracePrintf("ChatPlay", "Connecting to %s:%d (%s)...", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
if (stream.Connect(hostInfo.address.c_str(), hostInfo.port))
{
m_connectedHostIndex = i;
dyad::StreamId id = stream.GetId();
AZStd::weak_ptr<ChatChannelImpl> channel = shared_from_this();
auto messageCallback = AZStd::bind(&ChatChannelImpl::OnChatbotReceived, this, id, AZStd::placeholders::_1);
auto rawSend = [=](const char* message, size_t messageLength)
{
if (auto ptr = channel.lock())
{
if (auto chatPlay = ptr->m_chatPlay)
{
AZStd::string toCopy(message, messageLength);
chatPlay->GetDyad()->PostStreamAction(id, [=](dyad::CDyadStream& stream){
stream.Write(toCopy.c_str(), toCopy.size());
});
}
}
};
if (hostInfo.websocket)
{
// Make IRC handler and use WebSocket handler as translation layer
auto ircHandler = AZStd::make_shared<IRCStream>(m_chatPlay->GetVars()->GetUser(), m_chatPlay->GetVars()->GetPassword(), m_channelId.c_str());
m_streamHandler = AZStd::unique_ptr<WebSocketStream>(new WebSocketStream(hostInfo.address.c_str(), ircHandler));
m_streamHandler->SetSendFunction(rawSend);
ircHandler->SetMessageFunction(messageCallback);
}
else
{
// Make IRC handler only
auto ircStream = AZStd::unique_ptr<IRCStream>(new IRCStream(m_chatPlay->GetVars()->GetUser(), m_chatPlay->GetVars()->GetPassword(), m_channelId.c_str()));
ircStream->SetMessageFunction(messageCallback);
m_streamHandler = AZStd::move(ircStream);
m_streamHandler->SetSendFunction(rawSend);
}
connected = true;
break;
}
else
{
// Flag connection as failed
hostInfo.connectionFailed = true;
AZ_Warning("ChatPlay", false, "Failed to connect to %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
}
}
// If connected is still false here, all connections failed
if (!connected)
{
AZ_Warning("ChatPlay", false, "Failed to connect to the chat server for the channel \"%s\": all connection configurations failed.", m_channelId.c_str());
PostConnectionState(m_epoch, ConnectionState::Error);
// Reset HostInfo flags
ChatPlayCVars::ResetHostInfoFlags(m_hostInfoList);
// Reset connected host
m_connectedHostIndex = -1;
}
}
void ChatChannelImpl::ProcessHostList(const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode)
{
/* HTTP REQUEST MANAGER THREAD */
if (responseCode == Aws::Http::HttpResponseCode::OK)
{
HostInfoList hostInfoList;
// add IRC hosts to list
if (!PopulateHostInfoList(hostInfoList, jsonValue, false))
{
AZ_Warning("ChatPlay", false, "Error parsing IRC host list for the channel \"%s\".", m_channelId.c_str());
PostConnectionState(m_epoch, ConnectionState::Error);
}
// add websocket hosts to list
if (!PopulateHostInfoList(hostInfoList, jsonValue, true))
{
AZ_Warning("ChatPlay", false, "Error parsing IRC websocket host list for the channel \"%s\".", m_channelId.c_str());
PostConnectionState(m_epoch, ConnectionState::Error);
}
AZStd::sort(hostInfoList.begin(), hostInfoList.end(), [](const HostInfo& a, const HostInfo& b)
{
return a.priority < b.priority;
});
m_hostInfoList.swap(hostInfoList);
}
else
{
// Could not get list of chat server IPs
// TODO: handle this better
AZ_Warning("ChatPlay", false, "Error retrieving IRC host list for the channel \"%s\".", m_channelId.c_str());
PostConnectionState(m_epoch, ConnectionState::Error);
}
}
bool ChatChannelImpl::PopulateHostInfoList(HostInfoList& hostInfoList, const Aws::Utils::Json::JsonValue& jsonValue, bool isWebsocket)
{
const char* jsonNodeName = isWebsocket ? "websockets_servers" : "servers";
auto serverList = jsonValue.GetArray(jsonNodeName);
for (int i = 0; i < serverList.GetLength(); i++)
{
AZStd::string oneHostAndOnePort(serverList.GetItem(i).AsString().c_str());
AZStd::size_t portStartPosition = oneHostAndOnePort.find(':');
if (portStartPosition == AZStd::string::npos)
{
// Returned data is not what we expected
return false;
}
HostInfo hostInfo;
hostInfo.address = oneHostAndOnePort.substr(0, portStartPosition);
AZStd::string stringPort = oneHostAndOnePort.substr(portStartPosition + 1);
hostInfo.port = ::atoi(stringPort.c_str());
hostInfo.websocket = isWebsocket;
hostInfo.ssl = m_chatPlay->GetVars()->IsPortSSL(hostInfo.port, isWebsocket);
hostInfo.priority = m_chatPlay->GetVars()->GetPortPriority(hostInfo.port, isWebsocket);
if (hostInfo.IsValid())
{
hostInfoList.push_back(hostInfo);
}
}
return true;
}
void ChatChannelImpl::OnStreamEvent(dyad::CDyadEvent& event)
{
/* DYAD THREAD */
dyad::StreamId epoch = event.GetStream().GetId();
if (epoch != m_epoch)
{
// Discard old events
return;
}
switch (event.GetType())
{
case dyad::EventType::Accept: // we don't accept
case dyad::EventType::Listen: // we don't listen
break;
case dyad::EventType::Tick: // no action needed
case dyad::EventType::Timeout: // no action needed (timeout leads to closure)
break;
case dyad::EventType::Close:
if (m_successfulConnection)
{
PostConnectionState(epoch, ConnectionState::Disconnected);
// Reset so we try all hosts again next time we try to connect
// Reset HostInfo flags
ChatPlayCVars::ResetHostInfoFlags(m_hostInfoList);
// Reset connected host
m_connectedHostIndex = -1;
// Reset successful connection
m_successfulConnection = false;
}
else
{
if (m_connectedHostIndex >= 0)
{
    HostInfo& hostInfo = m_hostInfoList[m_connectedHostIndex];
    AZ_Warning("ChatPlay", false, "Failed to connect to %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
    // Flag current connection as unsuccessful; the guard avoids indexing
    // the host list with -1 if a close arrives before any host was attempted
    hostInfo.connectionFailed = true;
}
// Try next connection
auto stream = event.GetStream();
OnStreamCreate(stream);
}
break;
case dyad::EventType::Connect:
{
auto handlerState = m_streamHandler->OnConnect();
if (handlerState == IStreamHandler::HandlerState::HANDLER_ERROR)
{
// Handler encountered an error
PostConnectionState(epoch, ConnectionState::Error);
event.GetStream().Close();
}
break;
}
case dyad::EventType::Line:
//Enable if debugging; but not really helpful for everyone in production ...
//CryLog("%s\n", event.GetData());
break;
case dyad::EventType::Error:
if (m_successfulConnection)
{
PostConnectionState(epoch, ConnectionState::Error);
}
break;
case dyad::EventType::Destroy:
break;
case dyad::EventType::Data:
{
auto handlerState = m_streamHandler->OnMessage(event.GetData(), event.GetDataLength());
if (handlerState == IStreamHandler::HandlerState::HANDLER_ERROR)
{
// Handler encountered an error
PostConnectionState(epoch, ConnectionState::Error);
event.GetStream().Close();
}
else if (handlerState == IStreamHandler::HandlerState::CONNECTED)
{
// Successfully connected
PostConnectionState(epoch, ConnectionState::Connected);
m_successfulConnection = true;
const HostInfo& hostInfo = m_hostInfoList[m_connectedHostIndex];
AZ_TracePrintf("ChatPlay", "Successfully connected to %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
}
break;
}
case dyad::EventType::Ready:
break;
}
}
AZStd::string ChatChannelImpl::MakeServerListURL()
{
AZStd::string requestUrl = AZStd::string::format("https://%s/servers?channel=%s", m_chatPlay->GetVars()->GetAPIServerAddress(), m_channelId.c_str());
return requestUrl;
}
/******************************************************************************/
ChatPlayImpl::ChatPlayImpl()
{
m_dyad = dyad::IDyad::GetInstance();
AZ_Assert(m_dyad, "ChatPlay was unable to get the Dyad instance"); // This is unexpected, GetInstance should always work; we assert since exceptions are disabled ...
m_vars = ChatPlayCVars::GetInstance();
AZ_Assert(m_vars, "ChatPlay was unable to get the CVars instance"); // This is unexpected, GetInstance should always work; we assert since exceptions are disabled ...
m_voteManager = AZStd::make_unique<ChatPlayVoteManagerImpl>(this);
}
ChatPlayImpl::~ChatPlayImpl()
{
DispatchEvents();
// Very bad; references still exist!
// (*this) is still in use for callbacks ...
CRY_ASSERT(m_channelMap.empty());
}
AZStd::weak_ptr<ChatChannel> ChatPlayImpl::GetChatChannel(const AZStd::string& _channelId)
{
AZStd::shared_ptr<ChatChannel> ptr;
AZStd::string channelId = _channelId;
AZStd::to_lower(channelId.begin(), channelId.end());
// Ensure that the map contains an entry for this channel
auto entry = m_channelMap.find(channelId);
if (entry != m_channelMap.end())
{
// Map entry found; attempt to get the shared_ptr
ptr = entry->second;
}
else
{
// Map entry not found; create one and assign a nullptr
AZStd::lock_guard<AZStd::mutex> lock(m_channelLock);
auto i = m_channelMap.emplace(channelId, AZStd::shared_ptr<ChatChannel>());
if (!i.second)
{
// C++ runtime error; element not found and not insertable!?
CRY_ASSERT(i.second);
return AZStd::weak_ptr<ChatChannel>();
}
entry = i.first;
}
// Ensure that whatever is assigned to the entry is valid; create a new channel if not
if (!ptr)
{
ptr = AZStd::make_shared<ChatChannelImpl>(channelId, this);
entry->second = ptr;
}
return ptr;
}
void ChatPlayImpl::DestroyChatChannel(const AZStd::string& _channelId)
{
AZStd::string channelId = _channelId;
AZStd::to_lower(channelId.begin(), channelId.end());
// Try and find the entry for this channel
auto entry = m_channelMap.find(channelId);
if (entry != m_channelMap.end())
{
// Disconnect and remove the channel
entry->second->Disconnect();
AZStd::lock_guard<AZStd::mutex> lock(m_channelLock);
m_channelMap.erase(entry);
}
}
void ChatPlayImpl::DisconnectAll()
{
// iterate through map of ChatChannels and disconnect them all.
for (auto i : m_channelMap)
{
if (auto ptr = i.second)
{
ptr->Disconnect();
}
}
}
size_t ChatPlayImpl::DispatchEvents()
{
AZStd::vector<ChatPlayEvent> events;
{
AZStd::lock_guard<AZStd::mutex> lock(m_eventLock);
AZStd::swap(m_events, events);
}
for (const auto& event : events)
{
if (event)
{
event();
}
}
// Cleans up after dead channels
//
// TODO: Make the channel post an event to ChatPlay on destruction to
// avoid needing to loop over everything...
for (auto iter = m_channelMap.begin(); iter != m_channelMap.end();)
{
if (iter->second)
{
++iter;
}
else
{
AZStd::lock_guard<AZStd::mutex> lock(m_channelLock);
iter = m_channelMap.erase(iter);
}
}
return events.size();
}
void ChatPlayImpl::RegisterEvent(const ChatPlayEvent& event)
{
/*ANY THREAD*/
AZStd::lock_guard<AZStd::mutex> lock(m_eventLock);
m_events.push_back(event);
}
const AZStd::shared_ptr<ChatPlayCVars>& ChatPlayImpl::GetVars()
{
return m_vars;
}
const std::shared_ptr<dyad::IDyad>& ChatPlayImpl::GetDyad()
{
return m_dyad;
}
AZStd::shared_ptr<ChatPlay> ChatPlay::CreateInstance()
{
AZStd::shared_ptr<ChatPlay> chatplay = AZStd::static_pointer_cast<ChatPlay>(AZStd::make_shared<ChatPlayImpl>());
return chatplay;
}
void ChatPlayImpl::RegisterCredentials(const AZStd::string& _username, const AZStd::string& oauthToken)
{
AZStd::string username = _username;
AZStd::to_lower(username.begin(), username.end());
m_credentialMap[username] = oauthToken;
}
void ChatPlayImpl::UnregisterCredentials(const AZStd::string& _username)
{
AZStd::string username = _username;
AZStd::to_lower(username.begin(), username.end());
m_credentialMap.erase(username);
}
void ChatPlayImpl::UnregisterAllCredentials()
{
m_credentialMap.clear();
}
const char* ChatPlayImpl::GetOAuthToken(const AZStd::string& _username)
{
AZStd::string username = _username;
AZStd::to_lower(username.begin(), username.end());
auto token = m_credentialMap.find(username);
if (token != m_credentialMap.end())
{
return token->second.c_str();
}
// no matching token found
return nullptr;
}
void ChatPlayImpl::SendWhisper(const AZStd::string& _sender, const AZStd::string& _recipient, const AZStd::string& message,
const WhisperCallback& callback)
{
AZStd::string sender = _sender;
AZStd::to_lower(sender.begin(), sender.end());
AZStd::string recipient = _recipient;
AZStd::to_lower(recipient.begin(), recipient.end());
AZStd::make_shared<Whisperer>(shared_from_this(), sender, recipient, message, callback)->CreateStream();
}
ChatPlayVoteManager* ChatPlayImpl::GetVoteManager() const
{
return m_voteManager.get();
}
/******************************************************************************/
ChatPlayVoteImpl::ChatPlayVoteImpl(const AZStd::string& name, ChatPlayImpl* chatplay)
: m_name(name)
, m_chatplay(chatplay)
, m_voterLimiting(false)
{
ChatPlayVoteRequestBus::Handler::BusConnect(m_name);
}
ChatPlayVoteImpl::~ChatPlayVoteImpl()
{
ChatPlayVoteRequestBus::Handler::BusDisconnect(m_name);
ClearChannel();
}
const AZStd::string& ChatPlayVoteImpl::GetName() const
{
return m_name;
}
bool ChatPlayVoteImpl::AddOption(const AZStd::string& name)
{
if (name.empty())
{
return false;
}
if (m_options.find(name) == m_options.end())
{
AZStd::lock_guard<AZStd::mutex> lock(m_optionLock);
m_options.emplace(name, VoteOption(name));
RegisterOptions();
return true;
}
// The option probably already exists
return false;
}
void ChatPlayVoteImpl::RemoveOption(const AZStd::string& name)
{
const auto& registered = m_options.find(name);
if (registered != m_options.end())
{
// Unregister the option from the callback to be safe
auto it = m_callbacks.find(name);
// Try and lock the channel
auto channel = m_channel.lock();
if (channel && it != m_callbacks.end())
{
channel->UnregisterKeyword(it->second);
}
AZStd::lock_guard<AZStd::mutex> lock(m_optionLock);
m_options.erase(registered);
}
}
void ChatPlayVoteImpl::ConfigureOption(const AZStd::string& optionName, int count, bool enabled)
{
auto it = m_options.find(optionName);
if (it != m_options.end())
{
VoteOption& option = it->second;
option.SetCount(count);
option.SetEnabled(enabled);
}
}
bool ChatPlayVoteImpl::OptionExists(const AZStd::string& name)
{
return m_options.find(name) != m_options.end();
}
int ChatPlayVoteImpl::GetOptionCount(const AZStd::string& optionName)
{
auto it = m_options.find(optionName);
if (it != m_options.end())
{
const VoteOption& option = it->second;
return option.GetCount();
}
return 0;
}
void ChatPlayVoteImpl::SetOptionCount(const AZStd::string& optionName, int count)
{
auto it = m_options.find(optionName);
if (it != m_options.end())
{
VoteOption& option = it->second;
option.SetCount(count);
}
}
bool ChatPlayVoteImpl::GetOptionEnabled(const AZStd::string& optionName)
{
auto it = m_options.find(optionName);
if (it != m_options.end())
{
const VoteOption& option = it->second;
return option.GetEnabled();
}
return false;
}
void ChatPlayVoteImpl::SetOptionEnabled(const AZStd::string& optionName, bool enabled)
{
auto it = m_options.find(optionName);
if (it != m_options.end())
{
VoteOption& option = it->second;
option.SetEnabled(enabled);
}
}
bool ChatPlayVoteImpl::SetChannel(const AZStd::string& _name)
{
AZStd::string name = _name;
AZStd::to_lower(name.begin(), name.end());
auto channel = m_channel.lock();
if (channel && channel->GetChannelId() != name.c_str())
{
ClearChannel();
}
if (!name.empty())
{
// Check if ChatPlay is available and enabled
if (m_chatplay)
{
m_channel = m_chatplay->GetChatChannel(name.c_str());
RegisterOptions();
}
}
// Report success only if a valid channel is now set
return !m_channel.expired();
}
void ChatPlayVoteImpl::Visit(const AZStd::function<void(VoteOption& option)>& visitor)
{
for (auto& registered : m_options)
{
visitor(registered.second);
}
}
void ChatPlayVoteImpl::SetEnableStateAll(bool state)
{
Visit([state](VoteOption& option){
option.SetEnabled(state);
});
}
void ChatPlayVoteImpl::SetCountAll(int count)
{
Visit([count](VoteOption& option){
option.SetCount(count);
});
}
void ChatPlayVoteImpl::SetVoterLimiting(bool limiting)
{
m_voterLimiting = limiting;
}
void ChatPlayVoteImpl::ResetVotedList()
{
m_votedList.clear();
}
void ChatPlayVoteImpl::ClearChannel()
{
UnregisterOptions();
m_channel.reset();
}
void ChatPlayVoteImpl::OnKeywordSignal(const AZStd::string& option, const AZStd::string& /*match*/, const AZStd::string& _username)
{
const auto& registered = m_options.find(option);
if (registered != m_options.end())
{
// Named voteOption to avoid shadowing the 'option' parameter
VoteOption& voteOption = registered->second;
if (voteOption.GetEnabled())
{
if (m_voterLimiting)
{
AZStd::string username = _username;
AZStd::to_lower(username.begin(), username.end());
auto voter = m_votedList.find(username);
if (voter != m_votedList.end())
{
// This user has already voted
return;
}
else
{
// Add user to voted list and proceed to increase vote count
m_votedList.insert(username);
}
}
// Assume one count per signal ...
voteOption.SetCount(voteOption.GetCount() + 1);
}
}
}
void ChatPlayVoteImpl::RegisterOptions()
{
if (auto channel = m_channel.lock())
{
for (auto& option : m_options)
{
const AZStd::string& optionName = option.first;
if (m_callbacks.find(optionName) != m_callbacks.end())
{
continue;
}
auto callback = AZStd::bind(&ChatPlayVoteImpl::OnKeywordSignal, this, optionName, AZStd::placeholders::_1, AZStd::placeholders::_2);
auto token = channel->RegisterKeyword(optionName.c_str(), AZStd::move(callback));
m_callbacks[optionName] = token;
}
}
}
void ChatPlayVoteImpl::UnregisterOptions()
{
if (auto channel = m_channel.lock())
{
for (auto& registered : m_callbacks)
{
channel->UnregisterKeyword(registered.second);
}
}
m_callbacks.clear();
}
/******************************************************************************/
ChatPlayVoteManagerImpl::ChatPlayVoteManagerImpl(ChatPlayImpl* chatplay)
: m_chatplay(chatplay)
{
}
AZStd::weak_ptr<ChatPlayVote> ChatPlayVoteManagerImpl::GetVote(const AZStd::string& voteId)
{
AZStd::shared_ptr<ChatPlayVote> ptr;
// Ensure that the map contains an entry for this channel
auto entry = m_votes.find(voteId);
if (entry != m_votes.end())
{
// Map entry found; attempt to get the shared_ptr
ptr = entry->second;
}
else
{
// Map entry not found; create one and assign a nullptr
AZStd::lock_guard<AZStd::mutex> lock(m_votesLock);
auto i = m_votes.emplace(voteId, AZStd::shared_ptr<ChatPlayVote>());
if (!i.second)
{
// C++ runtime error; element not found and not insertable!?
CRY_ASSERT(i.second);
return AZStd::weak_ptr<ChatPlayVote>();
}
entry = i.first;
}
// Ensure that whatever is assigned to the entry is valid; create a new vote if not
if (!ptr)
{
ptr = AZStd::make_shared<ChatPlayVoteImpl>(voteId, m_chatplay);
entry->second = ptr;
}
return ptr;
}
void ChatPlayVoteManagerImpl::DestroyVote(const AZStd::string& voteId)
{
// Try and find the entry for this vote
auto entry = m_votes.find(voteId);
if (entry != m_votes.end())
{
// Removing the entry will cause the vote to destroy
AZStd::lock_guard<AZStd::mutex> lock(m_votesLock);
m_votes.erase(entry);
}
}
/******************************************************************************/
Whisperer::Whisperer(const AZStd::weak_ptr<ChatPlayImpl>& chatPlay,
const AZStd::string& sender, const AZStd::string& recipient, const AZStd::string& message,
const WhisperCallback& callback)
: m_chatPlay(chatPlay)
, m_sender(sender)
, m_connectedHostIndex(-1)
, m_successfulConnection(false)
, m_queuedCallback(false)
, m_recipient(recipient)
, m_message(message)
, m_callback(callback)
{
}
void Whisperer::CreateStream()
{
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
return;
}
// GetOAuthToken returns nullptr when no token is registered; assigning a null
// pointer to the string is undefined behavior, so check before assigning
const char* oauthToken = chatPlay->GetOAuthToken(m_sender);
if (!oauthToken || oauthToken[0] == '\0')
{
// No registered oauth token, queue error callback and return
QueueCallback(WhisperResult::MissingOAuthToken);
return;
}
m_oauthToken = oauthToken;
// Prepare the API request for the list of IRC servers
AZStd::string requestUrl = MakeGroupServerListURL().c_str();
// Note: shared_from_this() can't be used in the class constructor
AZStd::shared_ptr<Whisperer> whisperer = shared_from_this();
HttpRequestor::Callback cb = [=](const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode) {
/* HTTP REQUEST MANAGER THREAD */
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
return;
}
ProcessHostList(jsonValue, responseCode);
// Create a handler that will dispatch the event if the channel is still live
auto eventHandler = [=](dyad::CDyadEvent& event){
whisperer->OnStreamEvent(event);
};
// Create a handler that will dispatch the event if the channel is still live
auto createHandler = [=](dyad::CDyadStream& stream) {
whisperer->OnStreamCreate(stream);
};
// Request a stream from Dyad
chatPlay->GetDyad()->CreateStream(eventHandler, createHandler);
};
EBUS_EVENT(HttpRequestor::HttpRequestorRequestBus, AddRequest, requestUrl, Aws::Http::HttpMethod::HTTP_GET, cb);
}
void Whisperer::OnStreamCreate(dyad::CDyadStream& stream)
{
/* DYAD THREAD */
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
stream.Close();
return;
}
bool connected = false;
for (int i = 0; i < m_hostInfoList.size(); i++)
{
HostInfo& hostInfo = m_hostInfoList[i];
if (hostInfo.connectionFailed) {
// This hostinfo failed already during this connection attempt, try next one
continue;
}
AZ_TracePrintf("Whisper", "Connecting to %s:%d (%s)...", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
if (stream.Connect(hostInfo.address.c_str(), hostInfo.port))
{
m_connectedHostIndex = i;
dyad::StreamId id = stream.GetId();
AZStd::weak_ptr<Whisperer> whisperer = shared_from_this();
auto rawSend = [=](const char* message, size_t messageLength)
{
if (auto ptr = whisperer.lock())
{
if (auto chatPlay = ptr->m_chatPlay.lock())
{
AZStd::string toCopy(message, messageLength);
chatPlay->GetDyad()->PostStreamAction(id, [=](dyad::CDyadStream& stream){
stream.Write(toCopy.c_str(), toCopy.size());
});
}
}
};
if (hostInfo.websocket)
{
auto ircHandler = AZStd::make_shared<IRCStream>(m_sender.c_str(), m_oauthToken.c_str(), nullptr);
m_streamHandler = AZStd::unique_ptr<WebSocketStream>(new WebSocketStream(hostInfo.address.c_str(), ircHandler));
m_streamHandler->SetSendFunction(rawSend);
}
else
{
m_streamHandler = AZStd::unique_ptr<IRCStream>(new IRCStream(m_sender.c_str(), m_oauthToken.c_str(), nullptr));
m_streamHandler->SetSendFunction(rawSend);
}
//CryLog("[ChatPlay] Connecting to %s:%d (%s)", m_connectedHost.address.c_str(), m_connectedHost.port, m_connectedHost.websocket ? "websocket" : "irc");
connected = true;
break;
}
else
{
hostInfo.connectionFailed = true;
AZ_Warning("Whisper", false, "Failed to connect to %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
}
}
// If connected is still false here, all connections failed
if (!connected && !m_queuedCallback)
{
AZ_Warning("Whisper", false, "Failed to connect to the chat server, all connection configurations failed.");
QueueCallback(WhisperResult::ConnectionError);
stream.Close();
// Reset HostInfo flags
ChatPlayCVars::ResetHostInfoFlags(m_hostInfoList);
// Reset connected host
m_connectedHostIndex = -1;
}
}
void Whisperer::OnStreamEvent(dyad::CDyadEvent& event)
{
/* DYAD THREAD */
dyad::StreamId epoch = event.GetStream().GetId();
switch (event.GetType())
{
case dyad::EventType::Accept: // we don't accept
case dyad::EventType::Listen: // we don't listen
break;
case dyad::EventType::Tick: // no action needed
case dyad::EventType::Timeout: // no action needed (timeout leads to closure)
break;
case dyad::EventType::Close:
if (m_successfulConnection)
{
// Reset so we try all hosts again next time we try to connect
// Reset HostInfo flags
ChatPlayCVars::ResetHostInfoFlags(m_hostInfoList);
// Reset connected host
m_connectedHostIndex = -1;
// Reset successful connection
m_successfulConnection = false;
}
else
{
if (m_connectedHostIndex >= 0 && m_connectedHostIndex < m_hostInfoList.size())
{
HostInfo& hostInfo = m_hostInfoList[m_connectedHostIndex];
AZ_Warning("Whisper", false, "Failed to connect to %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
if (!m_queuedCallback)
{
// Flag current connection as unsuccessful
hostInfo.connectionFailed = true;
// Try next connection
auto stream = event.GetStream();
OnStreamCreate(stream);
}
}
else
{
// Shouldn't happen
AZ_Warning("Whisper", false, "A whisper's connected host index was out of bounds");
QueueCallback(WhisperResult::ConnectionError);
event.GetStream().Close();
}
}
break;
case dyad::EventType::Connect:
{
auto handlerState = m_streamHandler->OnConnect();
if (handlerState == IStreamHandler::HandlerState::HANDLER_ERROR)
{
// Handler encountered an error (null callback?)
// Close and try next connection method
event.GetStream().Close();
}
break;
}
case dyad::EventType::Line:
//CryLog("%s\n", event.GetData());
break;
case dyad::EventType::Error:
break;
case dyad::EventType::Destroy:
if (!m_queuedCallback)
{
// Destroyed unexpectedly, queue error callback and close stream
// Close and try next connection method
event.GetStream().Close();
}
break;
case dyad::EventType::Data:
{
auto handlerState = m_streamHandler->OnMessage(event.GetData(), event.GetDataLength());
if (handlerState == IStreamHandler::HandlerState::CONNECTED)
{
// Successfully connected
m_successfulConnection = true;
// Prepare and send the whisper command
AZStd::string message = AZStd::string::format("PRIVMSG #%s :/w %s %s\r\n", m_recipient.c_str(), m_recipient.c_str(), m_message.c_str());
if (!m_streamHandler->SendMessage(message.c_str(), message.length()))
{
// Handler was not able to send message
QueueCallback(WhisperResult::ConnectionError);
event.GetStream().Close();
}
}
else if (handlerState == IStreamHandler::HandlerState::MESSAGE_SENT)
{
if (m_connectedHostIndex >= 0 && m_connectedHostIndex < m_hostInfoList.size())
{
// Message sent, queue up the success callback
const HostInfo& hostInfo = m_hostInfoList[m_connectedHostIndex];
AZ_TracePrintf("Whisper", "Successfully sent whisper on %s:%d (%s)", hostInfo.address.c_str(), hostInfo.port, hostInfo.websocket ? "WebSocket" : "IRC");
QueueCallback(WhisperResult::Success);
// Done, we can close the stream
event.GetStream().Close();
}
else
{
// Shouldn't happen
AZ_Warning("Whisper", false, "A whisper's connected host index was out of bounds");
QueueCallback(WhisperResult::ConnectionError);
event.GetStream().Close();
}
}
else if (handlerState == IStreamHandler::HandlerState::HANDLER_ERROR)
{
// Handler encountered an error
// Close, which will trigger next connection attempt
event.GetStream().Close();
}
break;
}
case dyad::EventType::Ready:
break;
}
}
AZStd::string Whisperer::MakeGroupServerListURL()
{
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
return AZStd::string();
}
AZStd::string requestUrl = AZStd::string::format("https://%s/servers?cluster=group", chatPlay->GetVars()->GetAPIServerAddress());
return requestUrl;
}
void Whisperer::QueueCallback(WhisperResult result)
{
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
return;
}
chatPlay->RegisterEvent(AZStd::bind(m_callback, result));
m_queuedCallback = true;
}
void Whisperer::ProcessHostList(const Aws::Utils::Json::JsonValue& jsonValue, Aws::Http::HttpResponseCode responseCode)
{
/* HTTP REQUEST MANAGER THREAD */
if (responseCode == Aws::Http::HttpResponseCode::OK) // already an HttpResponseCode; no cast needed
{
HostInfoList hostInfoList;
// add IRC hosts to list
if (!PopulateHostInfoList(hostInfoList, jsonValue, false))
{
AZ_Warning("Whisper", false, "Error parsing group IRC host list.");
QueueCallback(WhisperResult::ConnectionError);
}
// add websocket hosts to list
if (!PopulateHostInfoList(hostInfoList, jsonValue, true))
{
AZ_Warning("Whisper", false, "Error parsing group IRC websocket host list.");
QueueCallback(WhisperResult::ConnectionError);
}
AZStd::sort(hostInfoList.begin(), hostInfoList.end(), [](const HostInfo& a, const HostInfo& b)
{
return a.priority < b.priority;
});
m_hostInfoList.swap(hostInfoList);
}
else
{
// Could not get list of chat server IPs
// TODO: handle this better
AZ_Warning("Whisper", false, "Error retrieving group IRC host list.");
QueueCallback(WhisperResult::ConnectionError);
}
}
bool Whisperer::PopulateHostInfoList(HostInfoList& hostInfoList, const Aws::Utils::Json::JsonValue& jsonValue, bool isWebsocket)
{
AZStd::shared_ptr<ChatPlayImpl> chatPlay = m_chatPlay.lock();
if (!chatPlay)
{
return false;
}
const char* jsonNodeName = isWebsocket ? "websockets_servers" : "servers";
auto serverList = jsonValue.GetArray(jsonNodeName);
for (int i = 0; i < serverList.GetLength(); i++)
{
AZStd::string oneHostAndOnePort(serverList.GetItem(i).AsString().c_str());
AZStd::size_t portStartPosition = oneHostAndOnePort.find(':');
if (portStartPosition == AZStd::string::npos)
{
// Returned data is not what we expected
return false;
}
HostInfo hostInfo;
hostInfo.address = oneHostAndOnePort.substr(0, portStartPosition);
AZStd::string stringPort = oneHostAndOnePort.substr(portStartPosition + 1);
hostInfo.port = ::atoi(stringPort.c_str());
hostInfo.websocket = isWebsocket;
hostInfo.ssl = chatPlay->GetVars()->IsPortSSL(hostInfo.port, isWebsocket);
hostInfo.priority = chatPlay->GetVars()->GetPortPriority(hostInfo.port, isWebsocket);
// Ignore priorities < 0
if (hostInfo.priority >= 0)
{
hostInfoList.push_back(hostInfo);
}
}
return true;
}
}
|
cpp
|
Did you know that there was a moment in 2018 when Zendaya unexpectedly found herself in the spotlight simply because of her expressions, and it involved Emily Blunt and Blake Lively? Read on!
Zendaya, the renowned actress and style icon, found herself inadvertently thrust into the spotlight during the 2018 Michael Kors fashion show, as per a report by Teen Vogue. Seated in the front row alongside the equally stunning Blake Lively and Emily Blunt, Zendaya's expression unintentionally became a focal point of the event.
Back in 2018, as anticipation for the show grew, something interesting happened that quickly became a viral sensation on the internet. To Zendaya's left, her friend Emily Blunt was having a lively, happy chat with Blake Lively. They were all in good spirits, sharing laughter and enjoying each other's company while waiting for the fashion show to start. Zendaya, however, seemed less involved in their conversation, as her focus appeared to be on the runway.
This particular moment was perfect for people to come up with their own interpretations on the internet. Viewers quickly jumped to conclusions, suggesting that Zendaya was giving a subtle "side eye" to Blake and Emily. Fans flooded Zendaya's social media with questions, asking, "What did they do?"
The Euphoria star wasted no time in shutting down the rumor mill. She took to Twitter (now X) to dispel any suggestion of tension, humorously stating, "Whoa whoa whoa y'all not bout to have me out here lookin shady (Laughing emoji). I was looking at the runway and asking Law when the show was gonna start. Don't do me. They were super nice."
Did the rumor chain stop despite Zendaya’s clarification?
Despite her clarifications, the internet had already embraced the moment and transformed her expression into a meme. Frustration was evident in Zendaya's subsequent tweets as she emphatically stated, "I WAS LOOKING AT THE RUNWAY (Stickers in between)" and added, "Y'all messy that's all lmao."
Furthermore, to provide more context, someone shared a picture of Zendaya laughing with Emily and Blake, showcasing the friendly and warm atmosphere among them. Zendaya couldn't resist getting in on the fun and asked, "Where's this video lmao?"
FAQs
What is Zendaya's full name? Zendaya's full name is Zendaya Maree Stoermer Coleman.
When did Tom Holland fall in love with Zendaya? They started seeing each other while they were filming Spider-Man.
How did Zendaya become famous? The actress became a household name after she appeared on Shake It Up in 2010.
Net worth: ~$41.7 million (Rs 345 crore)
|
english
|
{"title": "Optimal elephant flow detection.", "fields": ["asymptotically optimal algorithm", "network packet", "byte", "elephant flow", "speedup"], "abstract": "Monitoring the traffic volumes of elephant flows, including the total byte count per flow, is a fundamental capability for online network measurements. We present an asymptotically optimal algorithm for solving this problem in terms of both space and time complexity. This improves on previous approaches, which can only count the number of packets in constant time. We evaluate our work on real packet traces, demonstrating an up to X2.5 speedup compared to the best alternative.", "citation": "Citations (6)", "departments": ["Technion \u2013 Israel Institute of Technology", "Bell Labs", "Technion \u2013 Israel Institute of Technology", "Technion \u2013 Israel Institute of Technology"], "authors": ["<NAME>.....http://dblp.org/pers/hd/b/Ben=Basat:Ran", "<NAME>.....http://dblp.org/pers/hd/e/Einziger:Gil", "<NAME>.....http://dblp.org/pers/hd/f/Friedman:Roy", "<NAME>.....http://dblp.org/pers/hd/k/Kassner:Yaron"], "conf": "infocom", "year": "2017", "pages": 9}
|
json
|
The Federation Internationale de Football Association (FIFA) on Monday unveiled Ibha as the official mascot of FIFA U-17 Women’s World Cup 2022 in India.
New Delhi: Ibha, an Asiatic lioness, was unveiled by FIFA on Monday as the official mascot of the FIFA U-17 Women's World Cup 2022 in India, on a day coinciding with the International Day of the Girl Child and exactly one year to go until the tournament kicks off.
"Ibha is a really exciting and inspiring character, one that young fans across India and around the world will have huge fun enjoying and interacting with in the lead-up to the FIFA U-17 Women's World Cup in India next year.
"2022 is on course to be a hugely significant year for women's football, with future stars of the game set to showcase their skills in India — just nine months before the FIFA Women's World Cup kicks off in Australia and New Zealand in 2023," said Sarai Bareman, FIFA Chief Women's Football Officer in a release.
|
english
|
package com.google.sps.servlets;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Entity;
import com.google.gson.Gson;
import com.google.appengine.api.datastore.EntityNotFoundException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
@WebServlet("/event")
public class EventServlet extends HttpServlet {
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
ArrayList<Event> events = new ArrayList<>();
Key eventKey = KeyFactory.createKey("Event",Long.parseLong(request.getParameter("id")));
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
findEvent(datastore, eventKey, events);
String jsonEvents = convertToJsonUsingGson(events);
response.setContentType("application/json;");
response.getWriter().println(jsonEvents);
}
// Looks up a single event entity by key and adds it to the given list.
private void findEvent(DatastoreService datastore, Key eventKey, ArrayList<Event> events) {
try {
Entity entity = datastore.get(eventKey);
String eventName = (String) entity.getProperty("eventName");
String organizer = (String) entity.getProperty("organizer");
String location = (String) entity.getProperty("location");
String description = (String) entity.getProperty("description");
Date start = (Date) entity.getProperty("start");
Date end = (Date) entity.getProperty("end");
String centerCoord = (String) entity.getProperty("centerCoord");
String pathCoords = (String) entity.getProperty("pathCoords");
String hashtag = (String) entity.getProperty("hashtag");
String phone = (String) entity.getProperty("phone");
String email = (String) entity.getProperty("email");
Event event = new Event(organizer,eventName,location,description,start,end,eventKey,centerCoord,pathCoords, hashtag, phone, email);
event.setPassword((String) entity.getProperty("password"));
events.add(event);
}
catch (EntityNotFoundException enfe) {
throw new RuntimeException("Event not found for key " + eventKey, enfe);
}
}
private String convertToJsonUsingGson(ArrayList<Event> aList) {
Gson gson = new Gson();
String json = gson.toJson(aList);
return json;
}
}
|
java
|
package ph.hatch.ddd.domain.annotations;
public @interface DomainPolicyImpl {
}
|
java
|
Born on 2 June 1987, Sonakshi Sinha is among the few actresses in the Bollywood industry whose style evolution has left us totally speechless. The actress loves to experiment with her style and flaunt different fashionable avatars. From sarees to body-hugging dresses to pantsuits, we have seen her acing different types of outfits gracefully and stylishly. However, it is her pantsuit looks that make her stand out completely. The way she pulls them off with confidence, style, and sophistication truly inspires us. So, as Sonakshi celebrates her birthday today, take a look at three of her boss-lady looks in different pantsuits.
Sonakshi Sinha sported a black pantsuit and looked super stunning. Her suit consisted of a full-sleeved, V-neckline, single-breasted black blazer and high-waist flared pants. A black leather statement belt cinched her waist and added structure to her attire. The Dabangg 3 actress completed her look with shiny black shoes and upped it with a black choker. She let loose her mid-parted, straight, highlighted tresses and spruced up her look with filled brows, kohled eyes, light eyeshadow, and a pink lip shade.
Sonakshi Sinha exuded peace and cheerful vibes in her white pantsuit. Her suit consisted of a full-sleeved open-front knee-length long blazer and high-waist wide leg white pants. She layered her blazer with a strapless black corset top and completed her look with pointed heels. Styled by Mohit Rai, the actress accessorised her look with silver-toned chain neck pieces and rings. Sonakshi let loose her mid-parted straight tresses and rounded out her look with filled brows, kohled eyes, mascara, pink eyeshadow, soft blush, and pink lip shade.
Sonaksi Sinha gave formal fashion goals in her orange pantsuit, that consisted of a cuff-sleeved one-buttoned blazer and matching pants. Styled by Mohit Rai, she layered her blazer with nude-coloured bralette and notched up her look with hoop earrings. Sonakshi let loose her side-parted heavy curled locks and elevated her look with filled brows, pink metallic eyeshadow, contoured cheekbones, and nude pink lip shade.
So, what do you think about these pantsuits of Sonakshi Sinha? Let us know that in the comment section.
Happy birthday, Sonakshi Sinha!
|
english
|
<gh_stars>0
{"id": "GASD53H3S20201", "code": "GASD53H3S", "name": "Africa and Asia in the First World War", "description": "This seminar course examines the First World War in its imperial and colonial context in Africa and Asia. Topics include forgotten fronts in Africa, the Middle East, Asia and the Pacific, colonial armies and civilians, imperial economies and resources, the collapse of empires and the remaking of the colonial world. Same as AFSD53H3 and HISD53H3", "division": "University of Toronto Scarborough", "department": "Dept. of Historical & Cultural Studies (UTSC)", "prerequisites": "8.0 credits, including: 1.0 credit in AFS, GAS or Africa and Asia area HIS courses", "exclusions": "AFSD53H3, HISD53H3", "level": 400, "campus": "UTSC", "term": "2020 Winter", "breadths": [2], "meeting_sections": [{"code": "L01", "instructors": ["S Rockel"], "times": [{"day": "THURSDAY", "start": 54000, "end": 61200, "duration": 7200, "location": "AA 207"}], "size": 5, "enrolment": 5}]}
|
json
|
<filename>hidden-layout-view/src/main/java/com/psx/hiddenlinearlayoutview/Utilities/Constants.java
package com.psx.hiddenlinearlayoutview.Utilities;
public class Constants {
public static int CLICK_DURATION_IN_MILLIS = 500;
public static int MOVE_THRESHOLD_IN_DP = 10;
}
|
java
|
{"name":"The Unkown Tipping Token","symbol":"TUTT","logoURI":"https://raw.githubusercontent.com/TUT-Token/sol_token/main/tut.png","decimals":0,"address":"5APccmJuY2hnmXqN8yjMA97k3obB6xH5Xah1eTiLa5eG","chainId":101,"tags":["social-token"]}
|
json
|
<filename>georiviere/watershed/tests/test_models.py
from django.test import TestCase
from georiviere.watershed.tests import factories
class StationTest(TestCase):
@classmethod
def setUpTestData(cls):
cls.watershed_type = factories.WatershedTypeFactory(name="Toto")
cls.watershed = factories.WatershedFactory(name="Tata", watershed_type=cls.watershed_type)
def test_watershed_str(self):
self.assertEqual(str(self.watershed), 'Toto - Tata')
def test_watershed_type_str(self):
self.assertEqual(str(self.watershed_type), 'Toto')
|
python
|
import { Component } from '@angular/core';
import { NavController, NavParams } from 'ionic-angular';
declare var google;
@Component({
selector: 'page-mapa',
templateUrl: 'mapa.html',
})
export class MapaPage {
map: any;
constructor(public navCtrl: NavController, public navParams: NavParams) {
}
ionViewDidLoad() {
const position = new google.maps.LatLng(-8.0108174, -34.8547737);
const mapOptions = {
zoom: 15,
center: position,
disableDefaultUI: false
}
this.map = new google.maps.Map(document.getElementById('map'), mapOptions);
const marker = new google.maps.Marker({
position: position,
map: this.map,
// Title
//title: 'My position',
// Animation
animation: google.maps.Animation.BOUNCE, // or DROP
// Icon
//icon: 'assets/imgs/pessoa.png'
});
}
}
|
typescript
|
"""A notebook manager that uses S3 storage. (based on the Azure manager)
http://ipython.org/ipython-doc/dev/interactive/htmlnotebook.html#using-a-different-notebook-store
Drop this file in IPython/frontend/html/notebook
1. Create a new notebook profile - ipython profile create nbserver
2. edit ~/.ipython/profile_nbserver/ipython_notebook_config.py
3. Add these lines:
c.NotebookApp.notebook_manager_class = 'IPython.frontend.html.notebook.s3nbmanager.S3NotebookManager'
c.S3NotebookManager.aws_access_key_id = u""
c.S3NotebookManager.aws_secret_access_key = u""
c.S3NotebookManager.bucket = u'<unique bucket name>'
4. Start with `ipython notebook --profile=nbserver`
Authors:
* <NAME>
* <NAME>
"""
#-----------------------------------------------------------------------------
# Copyright (C) 2012 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
import datetime
import boto
from boto.s3.key import Key
from boto.s3.connection import S3Connection
from tornado import web
from .nbmanager import NotebookManager
from IPython.nbformat import current
from IPython.utils.traitlets import Unicode, Instance
#-----------------------------------------------------------------------------
# Classes
#-----------------------------------------------------------------------------
class S3NotebookManager(NotebookManager):
aws_access_key_id = Unicode('', config=True, help='AWS access key.')
aws_secret_access_key = Unicode('', config=True, help='AWS S3 storage account key.')
bucket = Unicode('', config=True, help='Bucket name for notebooks.')
meta_nbname = "nbname"
def __init__(self, **kwargs):
super(S3NotebookManager, self).__init__(**kwargs)
self.__s3_conn = None
self.log_info()
# Form unique bucket using access key + bucket name
# Wanted to add access_key to this but lower() isn't working
self.__bucket_name = self.bucket
self.__create_container()
@property
def s3_conn(self):
"""Lazy initialize"""
if not self.__s3_conn:
self.__s3_conn = S3Connection(aws_access_key_id = self.aws_access_key_id,
aws_secret_access_key = self.aws_secret_access_key)
return self.__s3_conn
def __create_container(self):
if not self.s3_conn.lookup(self.__bucket_name):
self.s3_conn.create_bucket(self.__bucket_name)
def load_notebook_names(self):
"""On startup load the notebook ids and names from S3
"""
self.mapping = {}
bucket = self.s3_conn.get_bucket(self.__bucket_name)
for item in bucket:
id_ = item.name
# bug in boto doesn't load metadata
# Force metadata load with get_key
item = bucket.get_key(id_)
name = item.get_metadata(self.meta_nbname)
if name:
self.mapping[id_] = name
else:
self.log.info(name)
self.log.info(item.metadata)
self.log.info("Skipping over S3 file with no ipython name: %s" % (id_,))
def list_notebooks(self):
"""List all notebooks in the container.
This version uses `self.mapping` as the authoritative notebook list.
"""
try:
data = [dict(notebook_id=item[0],name=item[1]) for item in self.mapping.items()]
data = sorted(data, key=lambda item: item['name'])
except Exception as e:
self.log.info("Problem sorting, this is the mapping: %s" % (self.mapping.items()))
raise
return data
def read_notebook_object(self, notebook_id):
"""Get the object representation of a notebook by notebook_id."""
if not self.notebook_exists(notebook_id):
raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
try:
# v1 and v2 and json in the .ipynb files.
bucket = self.s3_conn.get_bucket(self.__bucket_name)
k = Key(bucket)
k.key = notebook_id
data = k.get_contents_as_string()
#self.log.info("downloaded contents: %s" % (data,))
except Exception:
raise web.HTTPError(500, u'Couldn\'t pull out of s3.')
try:
nb = current.reads(data, u'json')
except Exception:
raise web.HTTPError(500, u'Unreadable JSON notebook.')
# Todo: The last modified should actually be saved in the notebook document.
# We are just using the current datetime until that is implemented.
last_modified = datetime.datetime.utcnow()
return last_modified, nb
def write_notebook_object(self, nb, notebook_id=None):
"""Save an existing notebook object by notebook_id."""
try:
new_name = nb.metadata.name
except AttributeError:
raise web.HTTPError(400, u'Missing notebook name')
if notebook_id is None:
notebook_id = self.new_notebook_id(new_name)
if notebook_id not in self.mapping:
raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
try:
data = current.writes(nb, u'json')
except Exception as e:
raise web.HTTPError(400, u'Unexpected error while saving notebook: %s' % e)
try:
bucket = self.s3_conn.get_bucket(self.__bucket_name)
key = Key(bucket)
key.key = notebook_id
key.set_metadata(self.meta_nbname, new_name)
#self.log.info("Setting contents to: %s" % (data,))
key.set_contents_from_string(data)
except Exception as e:
raise web.HTTPError(400, u'Unexpected error while saving notebook: %s' % e)
self.mapping[notebook_id] = new_name
return notebook_id
def delete_notebook(self, notebook_id):
"""Delete notebook by notebook_id."""
if not self.notebook_exists(notebook_id):
raise web.HTTPError(404, u'Notebook does not exist: %s' % notebook_id)
try:
bucket = self.s3_conn.get_bucket(self.__bucket_name)
k = Key(bucket)
k.key = notebook_id
k.delete()
except Exception as e:
raise web.HTTPError(400, u'Unexpected error while deleting notebook: %s' % e)
else:
self.delete_notebook_id(notebook_id)
def log_info(self):
self.log.info("Serving notebooks from S3 storage: %s, %s", self.aws_access_key_id, self.bucket)
|
python
|
<reponame>nohupped/chef-sentry<gh_stars>1-10
{
"id": "sentry_encrypted",
"admin_username": {
"encrypted_data": "<KEY>",
"iv": "HhJA20bauflHV05+6bC0yg==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"admin_password": {
"encrypted_data": "<KEY>",
"iv": "NXyrczG2E/bK4QRbFKLAiQ==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"admin_email": {
"encrypted_data": "<KEY>",
"iv": "chOchwLgqIfXl8zMZ30YGw==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"database_name": {
"encrypted_data": "<KEY>",
"iv": "RM89xReCZZeJW29Ax0pkyg==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"database_user": {
"encrypted_data": "<KEY>",
"iv": "ceHlnHG5H+COzdVMOvafdA==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"database_password": {
"encrypted_data": "<KEY>",
"iv": "kOgusVKw/YkJK20GZoWubA==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"database_host": {
"encrypted_data": "<KEY>",
"iv": "FHxpxzsY3HZQwSE7YYD4Ig==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"database_port": {
"encrypted_data": "<KEY>",
"iv": "28g1wGdCC+BXM2G3G2agpQ==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"signing_token": {
"encrypted_data": "<KEY>",
"iv": "XXn+tkyBfR9Ye1dTN13YeQ==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"email_host_user": {
"encrypted_data": "<KEY>",
"iv": "UehE2z1wqjHwTj/176QSHA==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"email_host_password": {
"encrypted_data": "<KEY>",
"iv": "QufPfMNS0buKX/ARDoDIpw==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"additional_env_vars": {
"encrypted_data": "<KEY>",
"iv": "lqDKQutZopbjjMl0yaS9Uw==\n",
"version": 1,
"cipher": "aes-256-cbc"
}
}
|
json
|
Sebastian Vettel is one of the most successful and popular drivers in the history of F1. The German has raced in single-seater motorsport for 15 long years. He recently retired from F1 at the end of 2022. In his long career, he earned a fair share for himself and his family.
As of 2023, Sebastian Vettel's net worth is said to be around $140 million. Of course, this does not reflect how much the driver earned throughout his career, since he has also spent some of it and given several millions to charity and other causes.
In the latter stages of his F1 career, he was with the Aston Martin team, where he earned around $15 million per year, excluding bonuses and sponsorships. His annual salary was even higher when he was with Ferrari. According to Forbes, his annual salary at the Italian team was around $36.3 million in 2020. Back in 2017, he was also ranked 18th by Forbes on their list of the highest-paid athletes.
It was quite surprising to learn that when Sebastian Vettel was dominating the sport with Red Bull, he was not earning the most. Rather, it was during his gradual decline at Ferrari that he earned the most in his entire career.
As of now, Sebastian Vettel is not participating in any kind of official motorsport. He will be seen in the F1 sphere again at the Formula Nurburgring event hosted by Red Bull Racing. He has also been offered a job as a 'Sustainability Manager' by F1 CEO Stefano Domenicali. The German driver has not yet officially commented on the offer.
Only Sebastian Vettel and Max Verstappen have won the world championship with Red Bull. Since team principal Christian Horner has managed both drivers, he is well aware of their traits, specialties, and personalities.
In a recent episode of the Extraordinary Tales with Seb Coe podcast, Horner stated that while every driver has a different style of racing and managing their race, every great driver has that extra capacity to push further. He stated how Vettel had that trait back when he was winning championships, and he has experienced the same with Verstappen. He said:
Max Verstappen recently surpassed Sebastian Vettel's record of most wins with Red Bull (38) after winning the 2023 F1 Monaco GP.
|
english
|
Paddy Considine is confirmed to return to the screen this August with the Game of Thrones prequel House of the Dragon. HBO's fantasy epic, Game of Thrones, which graced our screens from 2011 to 2019, is back but with a spin-off.
After a successful run for nearly a decade, HBO concluded the story of the seven kingdoms with a satisfying end. Although the conclusion was not quite what fans had anticipated, given the quick and unforeseen twist towards the end, the HBO series remains one of the most watched shows ever. Now, it is back again with a prequel that will look into the story of one of our beloved dynasties, House of the Dragon.
Paddy Considine is an English actor, director, and screenwriter who started his career in acting with the director Shane Meadows, who cast him in his first role in the 1999 feature A Room for Romeo Brass. Considine's impressive performance landed him a role in Pawel Pawlikowski's Last Resort the following year.
Paddy Considine started working in the industry with several small features where he was cast in plenty of scene-stealing supporting roles. The actor has been cast in movies like 24 Hour Party People (2002), Born Romantic (2000), and The Martins (2001).
However, fame and recognition came for Considine for his performance as Richard in the 2004 film Dead Man's Shoes, which he co-wrote with his friend and director Shane Meadows. He is also noticeable for his role in Pawlikowski's My Summer of Love (2004).
Paddy Considine's other notable films include 2005's Cinderella Man, Hot Fuzz (2007), and The Bourne Ultimatum (2007), where he can be seen starring in supporting roles. His main features include films like Red Riding: The Year of Our Lord 1980 (2009) and Submarine (2010).
Considine also released his feature-length directorial debut, Tyrannosaur (2011), which won him a BAFTA Award for Outstanding Debut by a British Writer, Director, or Producer.
Paddy Considine plays the role of King Viserys, who would succeed the old King Jaehaerys Targaryen in House of the Dragon. Perceived as a good and decent man, chances of his survival look bleak in the Game of Thrones prequel. As the brutal forces of Westeros plot against the kind-hearted Viserys, what lies in his fate?
Based on George R.R. Martin's 2018 book Fire and Blood, House of the Dragon will specifically look into the legacy of the Targaryen family. Set around 200 years before the events of Game of Thrones, the prequel series will primarily focus on the underlying fortunes and politics of the Targaryen family.
The series will unfold against the backdrop of the ghastly civil war that ensued among the Targaryens, known as the Dance of the Dragons. The series will premiere on the HBO streaming platform on August 21, 2022. Watch the trailer here:
Catch Paddy Considine in the much-anticipated Game of Thrones spin-off, House of the Dragon, coming to the HBO streaming platform soon.
|
english
|
<reponame>thanhtinh030291/mobilef<filename>mobile/.expo/web/cache/development/babel-loader/4adafc1a2f2d58cefbda6ed43448cdd8.json<gh_stars>0
{"ast":null,"code":"import Icon from '@expo/vector-icons';\nexport default Icon;","map":{"version":3,"sources":["../src/Icon.ts"],"names":[],"mappings":"AAAA,OAAO,IAAP,MAAiB,oBAAjB;AACA,eAAe,IAAf","sourcesContent":["import Icon from '@expo/vector-icons';\nexport default Icon;\n"],"sourceRoot":""},"metadata":{},"sourceType":"module"}
|
json
|
<gh_stars>0
package top.clydezhou.lab.demo.spring.test.common;
/**
* @author clyde
* @date 2020-07-13 22:44
*/
public class MapperTestBase {
}
|
java
|
<gh_stars>0
import casadi as cs
# noinspection PyUnresolvedReferences
from casadi.casadi import SX
import os
import warnings
import numpy as np
import symbtools as st
from sympy.printing.lambdarepr import lambdarepr, LambdaPrinter
class CassadiPrinter(LambdaPrinter):
"""
This subclass serves to convert scalar sympy expressions to casadi functions.
It is strongly inspired by sympy.printing.lambdarepr.NumExprPrinter
"""
_default_settings = {'fully_qualified_modules': False, 'inline': True,
'allow_unknown_functions': True, 'order': None,
'human': True,
'full_prec': True,
'user_functions': {}}
printmethod = "_numexprcode"
_numexpr_functions = {
'sin': 'sin',
'cos': 'cos',
'tan': 'tan',
'asin': 'arcsin',
'acos': 'arccos',
'atan': 'arctan',
'atan2': 'arctan2',
'sinh': 'sinh',
'cosh': 'cosh',
'tanh': 'tanh',
'asinh': 'arcsinh',
'acosh': 'arccosh',
'atanh': 'arctanh',
'log': 'log',
'exp': 'exp',
'sqrt': 'sqrt',
'Abs': 'fabs',
}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# a few functions are called differently in sympy and casadi
# here we create the mapping {a: b, ...} where a is the name of the sympy func and b the name of cs func
self.cs_func_keys = dict([(key, value) for key, value in self._numexpr_functions.items()])
# some functions simply do not exist in casadi
self.unsupported_funcs = ["conjugate", "im", "re", "where", "complex", "contains"]
for k in self.unsupported_funcs:
pass
# del self.cs_func_keys[k]
self.cs_funcs = dict([(value, getattr(cs, value)) for key, value in self.cs_func_keys.items()])
for k in self._numexpr_functions:
setattr(self, '_print_%s' % k, self._print_Function)
def _print_Function(self, e):
func_name = e.func.__name__
nstr = self._numexpr_functions.get(func_name, None)
if nstr is None:
raise TypeError("numexpr does not support function '%s'" %
func_name)
return "%s(%s)" % (nstr, self._print_seq(e.args))
def _print_seq(self, seq, delimiter=', '):
s = [self._print(item) for item in seq]
if s:
return delimiter.join(s)
else:
return ""
def casidify(expr, state_vect, input_vect):
# source: https://gist.github.com/cklb/60362e1f49ef65f5212fb5eb5904b3fd
"""
Convert a sympy-expression into a casadi expression. This is used by create_casadi_func(...).
:param expr: symbolic expression which is to convert to casadi
:param state_vect: symbolic state vector
:param input_vect: symbolic input vector
"""
syms = []
res = ["rhs = vertcat("]
state_str = ["x = vertcat("]
input_str = ["u = vertcat("]
# extract symbols
for _s in state_vect:
syms.append("{0} = SX.sym('{0}')".format(str(_s)))
state_str.append(str(_s) + ", ")
for _s in input_vect:
syms.append("{0} = SX.sym('{0}')".format(str(_s)))
input_str.append(str(_s) + ", ")
state_str.append(")")
input_str.append(")")
# convert expression
CP = CassadiPrinter()
for entry in expr:
# handle expr
_expr = CP.doprint(entry)
res.append(_expr + ", ")
res.append(")")
ode_str = os.linesep.join(syms
+ res
+ state_str
+ input_str)
scope = dict(SX=cs.SX, MX=cs.MX, vertcat=cs.vertcat, **CP.cs_funcs)
exec(ode_str, scope)
return scope["rhs"], scope["x"], scope["u"]
def create_casadi_func(sp_expr, sp_vars, sp_uu=None, name="cs_from_sp"):
"""
:param sp_expr: sympy expression which should be converted
:param sp_vars: sequence of sympy vars e.g. ( x1, x2, x3, x4, u1, lmd1 ) for a Lagrangian system model with
state_dim = 4, input_dim = 1, constraint_dim = 1
:param sp_uu: sequence of input variables (deprecated) put inputs into `sp_vars`
:param name:
:return: callable casadi function
"""
multiple_args = True
if sp_uu is None:
sp_uu = []
multiple_args = False
else:
msg = "passing parameter `sp_uu` is deprecated and will not work in future symbtools releases anymore. "\
"Please see the docstring of `create_casadi_func()` on how to pass input (and other variables)"
warnings.warn(msg, DeprecationWarning)
expr_cs, xx_cs, uu_cs = casidify(sp_expr, sp_vars, sp_uu)
if multiple_args:
func_cs = cs.Function(name, (xx_cs, uu_cs), (expr_cs,))
else:
func_cs = cs.Function(name, (xx_cs,), (expr_cs,))
func_cs.xx = xx_cs
func_cs.uu = uu_cs
return func_cs
# convenience functions (maybe there is a more elegant way)
# noinspection PyPep8Naming
def seq_to_SX_matrix(seq):
"""
In many cases this is equivalent to cs.vertcat.
"""
n = len(seq)
# leading element:
e0 = SX(seq[0])
if e0.shape == (1, 1):
# we have a sequence of scalars and create a column vector
res = SX(n, 1)
for i, elt in enumerate(seq):
res[i, 0] = elt
return res
else:
# we assume we have a sequence of vectors and want to concatenate them (colstack)
n1, n2 = e0.shape
res = SX(n1, n2*n)
for i, elt in enumerate(seq):
res[:, i] = elt
return res
# noinspection PyPep8Naming
def SX_diag_matrix(seq):
n = len(seq)
res = SX.zeros(n, n)
for i, elt in enumerate(seq):
res[i, i] = elt
return res
def unpack(sx_matrix):
"""
convert SX matrix (vector) to list
"""
n1, n2 = sx_matrix.shape
assert n2 == 1
res = [sx_matrix[i, 0] for i in range(n1)]
return res
def distribute(in_data, *shapes):
"""
Return sequence of arrays which have shapes as in `shapes` and together contain the data in `in_data`.
Call like so: distribute(arr, (1, 2), (7,), (100, 7))
# NOTE: casadi has a different reshape behavior than numpy.
This is useful for easy access to the optimization results of e.g. casadi.
:param in_data: (almost) flat array
:param shapes: sequence of shapes
:return:
"""
assert isinstance(shapes[0], (tuple, list, np.ndarray))
len_list = [np.prod(s) for s in shapes]
# assert that in_data is almost flat (all but one dims are 1)
assert np.count_nonzero(np.array(in_data.shape) - 1) in (1, 0)
if isinstance(in_data, np.ndarray):
in_data = np.array(in_data).squeeze()
order = "C"
else:
order = "F"
assert sum(len_list) == np.prod(in_data.shape)
start = 0
res = []
for s, l in zip(shapes, len_list):
d = in_data[start:start+l]
if np.prod(d.shape) == 1:
d = np.array(d)
res.append(np.array(d).reshape(s, order=order))
start += l
return res
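The bookkeeping inside `distribute` (splitting one flat buffer into chunks whose lengths are the products of the requested shapes) can be illustrated with a dependency-free sketch. This is an illustration only, not the actual function: `split_flat` is a hypothetical name, and plain Python lists stand in for numpy/casadi arrays, so the final reshape step is omitted.

```python
from math import prod

def split_flat(flat, *shapes):
    """Split a flat list into chunks whose lengths are the products of `shapes`.

    A pure-Python illustration of the slicing done by distribute(); the real
    code additionally reshapes each chunk to the requested shape.
    """
    lengths = [prod(s) for s in shapes]
    assert sum(lengths) == len(flat), "shapes must account for every element"
    chunks, start = [], 0
    for n in lengths:
        chunks.append(flat[start:start + n])
        start += n
    return chunks
```

For example, nine flat values split against shapes `(1, 2)` and `(7,)` yield one chunk of length 2 and one of length 7, mirroring how an optimizer's flat result vector is carved back into its logical pieces.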
|
python
|
Due to health issues, SHINee’s Onew will not be participating in the group’s upcoming full-length album, three-day concert, and other promotional activities. SM Entertainment released an official statement on June 9 explaining the 33-year-old singer’s hiatus. Fans, previously worried about his extreme weight loss, grew more concerned after reading the news.
Fans sent supportive messages and expressed their unparalleled love for the singer, otherwise lovingly called by his birth name, Lee Jin-ki. The agency’s statement did not explicitly mention the SHINee member’s health issues but shared that he was advised to rest under medical care.
On June 26, SHINee will celebrate their 15th anniversary with a three-day concert titled SHINee WORLD VI [PERFECT ILLUMINATION] and release their eighth full-length album, HARD. The upcoming album release holds more significance for fans since it marks their first group comeback in two years. The promotions will now only be attended by Key, Minho, and Taemin.
SHINee’s Onew, known for being one of the most talented vocalists in the K-pop industry, recently worried fans during his solo comeback in March. Fans noticed a considerable weight loss, which made him almost unrecognizable to some. Many grew increasingly concerned while watching him perform his regular activities.
Fans’ worries increased on June 9 when SM Entertainment announced that the 33-year-old leader of SHINee was “experiencing health issues” and “will be needing medical care and rest.” It mentioned that the leader will be unable to participate in the group’s upcoming 15th-anniversary celebrations, including full group album promotions.
Fans immediately began posting photos and videos with heartfelt messages to showcase their support for SHINee’s Onew. Take a look at their messages below:
After the news was released, SHINee’s Onew posted a letter to fans with the image of the group’s 15th-anniversary cake on his Instagram account. He apologized for worrying people but asked them to think of his hiatus as a break that would help him continue to be with the group for a long time.
As per translation via Koreaboo, the caption reads as follows:
He ended the letter by assuring fans that he will return healthy.
Meanwhile, Key, Minho, and Taemin will greet fans in the upcoming three-night concert at the KSPO Dome on June 23, 24, and 25. SHINee will also first release a track titled The Feeling from their album HARD on June 10. The album will then be released on June 26, 2023.
|
english
|
<reponame>sonsongithub/sonsongithub.github.io
---
layout: post
title: Pitfalls of the Evernote SDK for iOS
categories:
- blog
tags:
- Blog
status: publish
type: post
published: true
meta:
_oembed_14ef72493943f1b49ed96cd91ca8a4bf: "{{unknown}}"
_oembed_93a7a70b5935f06d97497024cd7d98cc: "{{unknown}}"
_edit_last: '1'
_wp_old_slug: ''
_oembed_8c8ffd6e5c539bc5be0014137a670cb6: "{{unknown}}"
_oembed_9d019e539b8279f97625d7a4377e2ba4: "{{unknown}}"
_oembed_c74d7a7259bed52b380cc33fd8fe8d35: "{{unknown}}"
_oembed_bf36b4b63089adf03782324dfd2a2317: "{{unknown}}"
author:
login: sonson
email: <EMAIL>
display_name: sonson
first_name: ''
last_name: ''
---
<p>Since Evernote is migrating to OAuth, I was finally getting around to the migration work when I stepped on a landmine for the first time in a while, so here is a report.</p>
<p>The Evernote SDK can be downloaded from GitHub: <a href="https://github.com/evernote/evernote-sdk-ios" target="_blank">https://github.com/evernote/evernote-sdk-ios</a>.<br />
It is well made, and it takes care of the tedious OAuth and web-related plumbing for you.<br />
The OAuth authentication flow is as follows.</p>
<ol>
<li>Initialize EvernoteSession via a class method</li>
<li>Start authentication</li>
<li>If not yet authenticated, pre-authenticate(?) with the consumer key and secret key and obtain the URL of the OAuth authorization page</li>
<li>Open a WebView</li>
<li>Open the Evernote login page</li>
<li>The user logs in</li>
<li>The user authorizes access</li>
<li>The WebView extracts the token from the URLs exchanged with the server</li>
<li>Done</li>
</ol>
<p>Implementing this yourself would be tedious, but looking at the internals, the processing is fairly simple. And yet there is a bug here, or rather a mysterious behavior.<br />
In my environment, the token could not be obtained after authorizing access.<br />
I was stuck. It was not even filed in the GitHub issues...!<br />
So it turned into a debugging marathon. Thanks to <a href="http://twitter.com/setoh2000" target="_blank">@setoh2000</a> for the help.</p>
<p><strong>The conclusion: if the account name contains an underscore, the server behaves differently, so the current SDK does not work, and the Evernote SDK for iOS needs a fix.</strong></p>
<p>This has already been filed as an <a href="https://github.com/evernote/evernote-sdk-ios/issues" target="_blank">issue</a>.</p>
<p>What the Evernote SDK does is this: after OAuth authentication, it watches for navigation to URLs with a particular scheme, extracts that URL, and takes the OAuth token from it. That particular scheme is</p>
<p><code><br />
en-CONSUMERKEY://responsehogehoge...<br />
</code></p>
<p>However, this URL never arrives. What arrives instead is a URL like</p>
<p><code><br />
https://sandbox.evernote.com/Home.action?en-CONSUMERKEY://responsehogehoge...<br />
</code></p>
<p>So I modified ENOAuthViewController, which watches for navigation to URLs with that scheme, so that it also detects https://sandbox.evernote.com/Home.action?en-CONSUMERKEY://, and then it worked.</p>
<p>Apparently this problem is already known, but Evernote does not seem to have addressed it. → <a href="http://kawairi.jp/weblog/vita/201206266455" target="_blank">reference link</a></p>
<p>Evernote, please fix this soon.</p>
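The workaround described above — accepting the callback whether it arrives as the raw en-CONSUMERKEY:// scheme or wrapped inside a Home.action URL — can be sketched language-agnostically. This is an illustrative Python sketch, not the actual Objective-C SDK code; the function and constant names are made up for the example.

```python
# Hypothetical sketch of the fix: the OAuth callback may arrive either as the
# raw app scheme ("en-CONSUMERKEY://response...") or wrapped inside a
# Home.action URL ("https://sandbox.evernote.com/Home.action?en-CONSUMERKEY://...").
CALLBACK_SCHEME = "en-CONSUMERKEY://"
HOME_ACTION = "https://sandbox.evernote.com/Home.action?"

def extract_callback(url):
    """Return the raw callback URL, or None if this navigation is not the callback."""
    if url.startswith(CALLBACK_SCHEME):
        return url
    if url.startswith(HOME_ACTION + CALLBACK_SCHEME):
        # Strip the Home.action wrapper so the rest of the flow
        # sees the raw scheme it was originally written to expect.
        return url[len(HOME_ACTION):]
    return None
```

The patched ENOAuthViewController effectively performs the second check in its URL-loading delegate, where the stock SDK only performs the first.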
|
html
|
{
"id": 12100,
"cites": 32,
"cited_by": 20,
"reference": [
"<NAME>., (2003), Why not a Political Coase Theorem? Social Conflict, Commitment and Politics, Journal of Comparative Economics, 31(4), 620-652.",
"<NAME>., <NAME>, <NAME> and <NAME>, (2002), Optimal Taxation without State-Contingent Debt, Journal of Political Economy, 110, 1220-1254.",
"<NAME>., (2000), The Political Economy of the Budget Surplus in the United States, Journal of Economic Perspectives, 14(3), 3-19.",
"<NAME>. and <NAME>, (1995), The Political Economy of Budget Deficits, IMF Staff Papers, 1-31.",
"<NAME>. and <NAME>, (1990), A Positive Theory of Fiscal Deficits and Government Debt, Review of Economic Studies, 57, 403-414.",
"<NAME>. and <NAME>, (2005), Why is Fiscal Policy often Procyclical? NBER Working Paper 11600.",
"<NAME>. and <NAME>, (2005), Positive Political Theory II: Strategy and Structure, Ann Arbor, MI: University of Michigan Press.",
"<NAME>., (1991), Majoritarian Incentives, Pork Barrel Programs, and Procedural Control, American Journal of Political Science, 35, 57-90.",
"<NAME>. and <NAME>, (1989), Bargaining in Legislatures, American Political Science Review, 83, 1181-1206.",
"<NAME>., (1979), On the Determination of the Public Debt, Journal of Political Economy, 87, 940-971.",
"<NAME>., (1986), U.S. Deficits since World War I, Scandinavian Journal of Economics, 88(1), 195-222.",
"<NAME>. and <NAME>, (2005), Inefficiency in Legislative Policy-Making: A Dynamic Analysis, NBER Working Paper 11495.",
"<NAME>. and <NAME>, (1998), Sources of Inefficiency in a Representative Democracy: A Dynamic Analysis, American Economic Review, 88(1), 139-156.",
"<NAME>., (1998), The Behavior of U.S. Public Debt and Deficits, Quarterly Journal of Economics, 113(3), 949-963.",
"<NAME>. and <NAME>, (1996), Balanced Budget Rules and Public Deficits: Evidence from the U.S. States, Carnegie-Rochester Conference Series on Public Policy, 45, 13-76.",
"<NAME>. and <NAME>, (1980), The Power to Tax: Analytical Foundations of a Fiscal Constitution, Cambridge: Cambridge University Press.",
"<NAME>. and <NAME>, (1999), Policy Persistence, American Economic Review, 89, 13271336.",
"<NAME>., <NAME>., <NAME>. and <NAME>, (2003), The Survival of the Welfare State, American Economic Review, 93(1), 87-112.",
"<NAME>., (1990), Public Debts and Fiscal Politics: How to Decide? American Economic Review, 80(2), 81-85.",
"<NAME>., <NAME> and <NAME>, (2000), Majority-Rule Bargaining and the Under Provision of Public Investment Goods, Journal of Public Economics, 75, 21-47.",
"<NAME>., (1999), Budget Deficits and Redistributive Politics, Review of Economic Studies, 66(4), 909-928.",
"<NAME>. and <NAME>, (1983), Optimal Fiscal and Monetary Policy in an Economy without Capital, Journal of Monetary Economics, 12, 55-93.",
"<NAME>., (1992), The Case for a New Fiscal Constitution, Journal of Economic Perspectives, 6(2), 13-24.",
"<NAME>. and <NAME>, (2000), Political Economics: Explaining Economic Policy, Cambridge, MA: MIT Press.",
"<NAME>., (1994), State Responses to Fiscal Crises: The Effects of Budgetary Institutions and Politics, Journal of Political Economy, 102, 799-821.",
"<NAME>., (1995), Balanced Budget Rules and Fiscal Policy: Evidence from the States, National Tax Journal, 48(3), 329-336.",
"<NAME>. and <NAME>, eds., (1999), Fiscal Rules and Fiscal Performance, Chicago: University of Chicago Press.",
"<NAME>., (1970), Convex Analysis, Princeton, NJ: Princeton University Press.",
"<NAME>., <NAME> and <NAME>, (1989), Recursive Methods in Economic Dynamics, Cambridge, MA: Harvard University Press.",
"<NAME>., (2000), Debts and Deficits with Fragmented Fiscal Policymaking, Journal of Public Economics, 76, 105-125.",
"Economic and Social Review, 33(3), 263-284. <NAME>. and <NAME>, (1995), Budget Processes and Commitment to Fiscal Discipline, European Economic Review, 39, 771-779.",
"<NAME> and <NAME>, (1981), The Political Economy of Benefits and Costs: A Neoclassical Approach to Distributive Politics, Journal of Political Economy, 89, 642-664."
]
}
|
json
|
<reponame>cb0s/GNPS
package com.gnps.lib.protocol.definition;
import java.net.InetSocketAddress;
import com.gnps.lib.event.EventBus;
import com.gnps.lib.event.protocol.HeartbeatEvent;
/**
* Represents the {@link MessageType#ALIVE Life}-Command.<br>
* This command does not come with payload.
*
* @author <NAME>
* @version 1.0
*/
public final class Life extends Command {
/**
* See {@link Command#Command(InetSocketAddress, byte, String)} for more details.
*/
public Life(InetSocketAddress ip, byte id, String payload) {
super(ip, id, payload);
checkPayloadUpperBound(0);
}
@Override
public boolean handle(EventBus bus) {
HeartbeatEvent he = new HeartbeatEvent(ip);
bus.publish(he);
return true;
}
}
|
java
|
<filename>ScrumPokerTablePy/ScrumPokerTablePy/static/app/directives/deskview.js
(function(){
angular
.module("ScrumPokerTable")
.directive("deskView",[function(){
return {
restrict: "E",
scope: {
desk: "=",
player: "="
},
templateUrl: "app/directives/deskview.html",
link: function link(scope, element, attr) {
scope.$watch("desk", function(desk, oldDesk) {
if(!desk) return;
scope.desk_id = desk.desk_id;
scope.max = desk.players
.map(function(p){ return p.card })
.filter(function(c){ return c!=null && c!="?" })
.map(function(c){ return parseInt(c) })
.reduce(function(a,c){ return c > a ? c : a }, 0);
scope.min = desk.players
.map(function(p){ return p.card })
.filter(function(c){ return c!=null && c!="?" })
.map(function(c){ return parseInt(c) })
.reduce(function(a,c){ return c < a ? c : a }, 99);
scope.stateName = {
1: "Waiting for players...",
2: "Vote in progress...",
3: "Vote result"
}[desk.state];
scope.players = desk.players;
scope.complete = desk.state == 3;
}, true);
scope.show = function(player){
return scope.complete || (scope.player && player.name.toLowerCase() === scope.player.toLowerCase());
}
scope.getCardStyle = function(player){
if(!scope.complete){
return "";
}
if(parseInt(player.card) === scope.min){
return { "background-color": "#8f8" };
}
if(parseInt(player.card) === scope.max){
return { "background-color": "#f88" };
}
}
}
};
}])
;
})();
|
javascript
|
<gh_stars>10-100
import React from 'react';
import { shallow } from 'enzyme';
import TitleizedField from './TitleizedField';
describe('TitleizedField', () => {
it('transforms text properly', () => {
const record = { issuer: 'coast-guard' };
const field = shallow(<TitleizedField source="issuer" record={record} />);
expect(field.text()).toEqual('Coast Guard');
});
});
|
javascript
|
<filename>plugins/weex-svg/android/dev/app/build/intermediates/blame/res/debug/single/xml.json
[
{
"merged": "/Users/budao/weex-svg/android/dev/app/build/intermediates/res/merged/debug/xml/app_config.xml",
"source": "/Users/budao/weex-svg/android/dev/app/src/main/res/xml/app_config.xml"
},
{
"merged": "/Users/budao/weex-svg/android/dev/app/build/intermediates/res/merged/debug/xml/config.xml",
"source": "/Users/budao/weex-svg/android/dev/app/build/intermediates/exploded-aar/dev/weexplugin/unspecified/res/xml/config.xml"
}
]
|
json
|
$(function(){
suma_montos();
var nacional = $('#nacionales').val();
var extranjero = $('#extranjeros').val();
if ( nacional == 1 && extranjero == 0) {
$('#tabla_nacional').show();
$('#nacional').show();
$('#tabla_extranjero').hide();
$('#extranjero').hide();
}
if (extranjero == 1 && nacional == 0) {
$('#tabla_extranjero').show();
$('#extranjero').show();
$('#tabla_nacional').hide();
$('#nacional').hide();
}
if (extranjero == 1 && nacional == 1) {
$('#tabla_extranjero').show();
$('#extranjero').show();
$('#tabla_nacional').show();
$('#nacional').show();
}
});
/**
 * Function to authorize the generated request
 * @return void
 */
function autorizaciones(object){
var url = domain('autorizacion/enviar');
var montos_autorizados_nacional = [];
var montos_autorizados_extranjero = [];
var estatus = $(object).attr('estatus');
var inputs = [1,2,3,4,5];
for (var i = 0; i < inputs.length; i++) {
montos_autorizados_nacional[i] = $('#tr_autorizado_monto_nacional_'+inputs[i]).val();
montos_autorizados_extranjero[i] = $('#tr_autorizado_monto_extranjero_'+inputs[i]).val();
}
var fields = {
'id_solicitud' : $('#id_solicitud').val()
,'montos_autorizados_nacional' : montos_autorizados_nacional
,'total_nacional' : $('#total_nacional').text()
,'montos_autorizados_extranjero' : montos_autorizados_extranjero
,'total_extranjero' : $('#total_extranjero').text()
,'estatus' : estatus
}
create_register(url,fields,function(json){
console.log(json);
},function(json){
});
}
/**
 * Function to sum the authorized amounts and display the final totals
 * @return void
 */
function suma_montos(){
var inputs = [1,2,3,4,5];
var total_nacional = 0;
var total_extranjero = 0;
for (var i = 0; i < inputs.length; i++) {
// Treat empty or non-numeric inputs as 0 to avoid NaN totals
total_nacional += parseFloat( $('#tr_autorizado_monto_nacional_'+inputs[i]).val() ) || 0;
total_extranjero += parseFloat( $('#tr_autorizado_monto_extranjero_'+inputs[i]).val() ) || 0;
}
$('#total_nacional').text(number_format(total_nacional,2));
$('#total_extranjero').text( number_format(total_extranjero,2) );
$('.total_importe_autorizado').text( number_format(total_nacional,2) );
}
|
javascript
|
Thousands of acres of agricultural land where paddy had already been sown were submerged under two feet of rainwater, even as rescue operations continued in several flood-hit areas of the state; people living along the rivers, in particular, were being evacuated to safer places.
Until July 3, all the arrived crop had been procured by private players, with 68,130 quintals being bought below the MSP.
Rajya Sabha MP and environmentalist Baba Balbir Singh Seechewal has been actively cleaning the Sutlej river using his own resources, particularly near the Gidderpindi railway bridge, as rains lash the state.
At least 3,000 buses have been parked across several depots and bus stands in Punjab since midnight as contractual drivers and conductors pressed the government to address their long-pending demands, such as regularisation and parity of salaries, among others.
The protestors, including permanent and contractual drivers and conductors, are also planning to gherao the residence of Punjab Chief Minister Bhagwant Mann on Wednesday.
The scam came to light in mid-March this year when hundreds of Indian students, primarily Punjabi, reported that they were facing deportation from Canada due to the fake offer letters provided to them by a Jalandhar-based travel agent, Brijesh Mishra, running a firm called M/S Education Migration Services.
Khaira said, “It is extremely astonishing that now the AAP, which claims to be the party of the common people, is trying to protect an MLA who was illegally living in the house of an NRI.
The victim, who is the daughter of the travel agent's maternal aunt, has revealed that she was subjected to extreme cruelty by her captors in Dubai.
Sahney revealed that the Ministry of External Affairs has issued a list of illegal recruiting agents who have been found to be involved in such fraudulent activities and has imposed a ban on them. This list has 170 recruiting agents from Punjab, he said.
The alleged gas leak on Friday night occurred at a cold storage factory in Dashmesh Nagar, Ladowali Road. Following the incident, residents began experiencing breathing difficulties and a burning sensation in their eyes.
|
english
|
Key Highlights:
The Indian Navy signed a contract with the Navratna Defence PSU Bharat Electronics Limited (BEL) for the supply of the first indigenous comprehensive Naval Anti Drone System (NADS), with both hard-kill and soft-kill capabilities, in New Delhi on August 31, 2021.
The contract was signed in the presence of senior Naval officers and DRDO representatives. The Indian Navy has provided consistent support and has taken the lead in the joint development of the anti-drone system with the Defence Research and Development Organisation (DRDO) and BEL.
The NADS, developed by DRDO and manufactured by BEL, is the first indigenously developed anti-drone system to be inducted into the Indian Armed Forces. Multiple units of BEL, namely Bengaluru, Hyderabad, Pune and Machilipatnam, and DRDO labs, namely the Electronics & Radar Development Establishment (LRDE), Bengaluru; the Defence Electronics Research Laboratory (DLRL) and the Centre for High Energy Systems and Sciences (CHESS), Hyderabad; and the Instruments Research & Development Establishment (IRDE), Dehradun, worked in close collaboration with the Indian Navy on this fully indigenous system, as part of the Atmanirbhar Bharat initiative to counter drone threats from adversaries.
The NADS can instantly detect and jam micro drones and use a laser-based kill mechanism to terminate targets. It will be an effective all-encompassing counter to the increased drone threat to strategic naval installations.
The anti-drone system was first deployed to provide security cover for the Republic Day Parade this year and later during the Prime Minister's Independence Day Address to the Nation from the ramparts of the Red Fort. The system, which offers 360-degree coverage, was also deployed in Ahmedabad for the Modi-Trump roadshow.
The NADS uses Radar, Electro-Optical/Infrared (EO/IR) sensors and Radio Frequency (RF) detectors to detect and jam micro drones. The DRDO's RF/Global Navigation Satellite System (GNSS) detector identifies the frequency being used by the drone's controller, and the signals are then jammed. DRDO's anti-drone technology provides both 'soft kill' and 'hard kill' options to the Indian Armed Forces to tackle fast-emerging aerial threats. Both the static and mobile versions of the NADS will be supplied to the Indian Navy within a short time of the signing of the contract.
Senior civil and military officials of the Ministry of Defence and BEL were present on the occasion. BEL is expected to sign similar contracts with the Army and the Air Force as well.
|
english
|
import { useCallback, useEffect, useState } from "react";
import { TopicArea } from "../models";
import BackendService from "../services/BackendService";
type UseTopicAreasHook = {
loading: boolean;
topicareas: Array<TopicArea>;
reloadTopicAreas: Function;
};
export function useTopicAreas(): UseTopicAreasHook {
const [loading, setLoading] = useState(false);
const [topicareas, setTopicAreas] = useState<TopicArea[]>([]);
const fetchData = useCallback(async () => {
setLoading(true);
const data = await BackendService.fetchTopicAreas();
setTopicAreas(data);
setLoading(false);
}, []);
useEffect(() => {
fetchData();
}, [fetchData]);
return {
loading,
topicareas,
reloadTopicAreas: fetchData,
};
}
type UseTopicAreaHook = {
topicarea: TopicArea | undefined;
loading: boolean;
};
export function useTopicArea(topicAreaId: string): UseTopicAreaHook {
const [loading, setLoading] = useState(false);
const [topicarea, setTopicArea] = useState<TopicArea | undefined>(undefined);
const fetchData = useCallback(async () => {
setLoading(true);
const data = await BackendService.fetchTopicAreaById(topicAreaId);
setTopicArea(data);
setLoading(false);
}, [topicAreaId]);
useEffect(() => {
fetchData();
}, [fetchData]);
return {
loading,
topicarea,
};
}
|
typescript
|
<filename>src/Aspects/Value/Engine/MultiplexWaitConnection.js
var utils = require('utils');
var debug = require('console');
var Pollymer = require('Pollymer');
var ValueResource = require('Aspects/Value/Engine/ValueResource');
var Connection = require('Engine/Connection');
class MultiplexWaitConnection extends Connection {
constructor(engine, endpoint, engineUnit) {
super(engine);
this.uri = endpoint.endpointUri;
this.request = new Pollymer.Request();
this.resItems = endpoint.items.slice();
this.isActive = false;
this.request.on("finished", (code, result, headers) => {
this.isActive = false;
if (code >= 200 && code < 300) {
utils.forEachOwnKeyValue(result, (uri, item) => {
debug.info(`got data for uri: ${uri}`);
var absoluteUri = utils.toAbsoluteUri(this.uri, uri);
ValueResource.updateValueItemMultiplex(engineUnit._resources, absoluteUri, item.headers, item.body);
});
}
this._engine.update();
});
}
hasChanged(endpoint) {
var removedOrChanged = false;
if (endpoint.items.length != this.resItems.length) {
removedOrChanged = true;
} else {
var preferredEndpointItemUris = [];
var i;
for (i = 0; i < endpoint.items.length; i++) {
preferredEndpointItemUris.push(endpoint.items[i].uri);
}
preferredEndpointItemUris.sort();
var pollResourceItemUris = [];
for (i = 0; i < this.resItems.length; i++) {
pollResourceItemUris.push(this.resItems[i].uri);
}
pollResourceItemUris.sort();
for (i = 0; i < preferredEndpointItemUris.length; i++) {
if (preferredEndpointItemUris[i] != pollResourceItemUris[i]) {
removedOrChanged = true;
break;
}
}
}
return removedOrChanged;
}
abort() {
this.request.abort();
}
refresh(endpoint) {
if (!this.isActive) {
var urlSegments = [];
for (var i = 0; i < this.resItems.length; i++) {
var res = this.resItems[i];
var uri = res.uri;
urlSegments.push(`u=${encodeURIComponent(uri)}&inm=${encodeURIComponent(res.etag)}`);
}
var requestUri = `${this.uri}?${urlSegments.join('&')}`;
debug.info(`Multiplex Wait Request URI: ${requestUri}`);
this.request.start('GET', requestUri, {
'Wait': 55
});
this.isActive = true;
}
}
}
module.exports = MultiplexWaitConnection;
|
javascript
|
From October 29 to November 5, Paramount Pictures is celebrating some of the greatest films from its catalog of the past five decades. The promotion, called "Paramount Decades", lets the public find a selection of legendary films from the Paramount catalog in video-on-demand stores.
From The Godfather to Top Gun, by way of The Truman Show, Ghost and, more recently, A Quiet Place, Shutter Island and Rocketman, relive five decades of cinema and discover a selection of films that marked their time.
Find these feature films to rent or buy, with no subscription commitment, on your VOD store. Digital purchase from €5.99, and rental from October 29 to November 5 on Orange, Rakuten TV, Canal VOD, PlayStation Video and VideoFutur.
|
english
|
GUWAHATI, Oct 20: A fashion show showcasing the possibilities of golden muga thread was organised in the city today by Pride-East Entertainment Private Limited. The event was held to promote the preservation of Antheraea assamensis, the silkmoth whose cocoon yields the golden thread.
“The golden thread yielding cocoon is on the brink of extinction due to various reasons including climate change. It is the prime responsibility of the people of the state to strive for its preservation,” CMD of Pride-East Entertainment Ltd, Riniki Bhuyan Sharma said.
“The dreams that the weavers of Sualkuchi weave on the fabric is gradually becoming a style statement and a trend among the new generation. The Muga wear has every potentiality to be famous in the whole world,” Sharma added.
Showstopper of the evening Sol Chauhan walked the ramp along with other models wearing the Muga attires designed by Samant Chauhan. The show was directed by Ketan Bhatia.
|
english
|
import {
Directive,
HostBinding,
ElementRef,
Input,
Renderer2
} from '@angular/core';
import { BehaviorSubject } from 'rxjs';
import { toBoolean, uniqueId } from 'ng-xotb/utility';
@Directive({
selector: 'select[xotb]',
host: {
'[class.xotb-select]': 'true'
}
})
export class XotbSelectInput {
requiredSubject = new BehaviorSubject<boolean>(false);
@HostBinding('attr.aria-describedby') describedBy: string;
@Input() set required(required: any) {
this.requiredSubject.next(toBoolean(required));
}
constructor(private el: ElementRef, private renderer: Renderer2) {
if (!this.el.nativeElement.id) {
this.renderer.setAttribute(
this.el.nativeElement,
'id',
uniqueId('select')
);
}
}
get id() {
return this.el.nativeElement.id;
}
}
|
typescript
|
<filename>src/set1/p4.rs<gh_stars>1-10
use std::io::prelude::*;
use std::io::BufReader;
use std::fs::File;
use serialize::hex::FromHex;
use freq::{english_freq_vec, dict, dict_englishness, relative_englishness, most_english};
use util::xor_bytes;
#[test]
fn run() {
let mut en_freq_sorted = english_freq_vec();
en_freq_sorted.sort_by(|a, b| b.partial_cmp(a).unwrap());
let f = File::open("./data/4.txt").unwrap();
let reader = BufReader::new(f);
let ciphers_bytes: Vec<Vec<u8>> = reader.lines()
.map(|line| line.unwrap().from_hex().unwrap())
.collect();
let mut englishness_map: Vec<(usize, f32)> = ciphers_bytes.iter()
.enumerate()
.map(|(i, bytes)| (i, relative_englishness(bytes, &en_freq_sorted)))
.collect();
englishness_map.sort_by(|a, b| b.1.partial_cmp(&(a.1)).unwrap());
let en_dict = dict("/usr/share/dict/american-english").unwrap();
let (i, k, _) = englishness_map.iter()
.take(5)
.map(|&(i, _)| {
let (k, d_e) = most_english(&ciphers_bytes[i], |m| dict_englishness(m, &en_dict));
(i, k, d_e)
})
.max_by_key(|&(_, _, d_e)| (d_e * 100000.0) as i32)
.unwrap();
let xor = vec![k];
let m = xor_bytes(&ciphers_bytes[i], &xor);
let string = String::from_utf8_lossy(&m);
assert_eq!("Now that the party is jumping", string.trim());
}
|
rust
|
{"title": "Pivot: Fast, Synchronous Mashup Isolation Using Generator Chains.", "fields": ["rewriting", "mashup", "javascript", "thread", "eval"], "abstract": "Pivot is a new JavaScript isolation framework for web applications. Pivot uses iframes as its low-level isolation containers, but it uses code rewriting to implement synchronous cross-domain interfaces atop the asynchronous cross-frame postMessage() primitive. Pivot layers a distributed scheduling abstraction across the frames, essentially treating each frame as a thread which can invoke RPCs that are serviced by external threads. By rewriting JavaScript call sites, Pivot can detect RPC invocations; Pivot exchanges RPC requests and responses via postMessage(), and it pauses and restarts frames using a novel rewriting technique that translates each frame's JavaScript code into a restartable generator function. By leveraging both iframes and rewriting, Pivot does not need to rewrite all code, providing an order-of-magnitude performance improvement over rewriting-only solutions. Compared to iframe-only approaches, Pivot provides synchronous RPC semantics, which developers typically prefer over asynchronous RPCs. Pivot also allows developers to use the full, unrestricted JavaScript language, including powerful statements like eval().", "citation": "Citations (10)", "departments": ["Microsoft"], "authors": ["<NAME>.....http://dblp.org/pers/hd/m/Mickens:James"], "conf": "sp", "year": "2014", "pages": 15}
|
json
|
<filename>coreimagefilterattributes/JSONFiles/CIColorControls.json
{
"inputBrightness" : {
"CIAttributeClass" : "NSNumber",
"CIAttributeDescription" : "The amount of brightness to apply. The larger the value, the brighter the result.",
"CIAttributeSliderMax" : 1,
"CIAttributeDisplayName" : "Brightness",
"CIAttributeDefault" : 0,
"CIAttributeMin" : -1,
"CIAttributeIdentity" : 0,
"CIAttributeType" : "CIAttributeTypeScalar",
"CIAttributeSliderMin" : -1
},
"CIAttributeFilterDisplayName" : "Color Controls",
"inputSaturation" : {
"CIAttributeClass" : "NSNumber",
"CIAttributeDescription" : "The amount of saturation to apply. The larger the value, the more saturated the result.",
"CIAttributeSliderMax" : 2,
"CIAttributeDisplayName" : "Saturation",
"CIAttributeDefault" : 1,
"CIAttributeMin" : 0,
"CIAttributeIdentity" : 1,
"CIAttributeType" : "CIAttributeTypeScalar",
"CIAttributeSliderMin" : 0
},
"inputImage" : {
"CIAttributeDisplayName" : "Image",
"CIAttributeDescription" : "The image to use as an input image. For filters that also use a background image, this is the foreground image.",
"CIAttributeClass" : "CIImage",
"CIAttributeType" : "CIAttributeTypeImage"
},
"CIAttributeFilterName" : "CIColorControls",
"CIAttributeFilterCategories" : [
"CICategoryColorAdjustment",
"CICategoryVideo",
"CICategoryStillImage",
"CICategoryInterlaced",
"CICategoryNonSquarePixels",
"CICategoryBuiltIn"
],
"CIAttributeReferenceDocumentation" : "http:\/\/developer.apple.com\/cgi-bin\/apple_ref.cgi?apple_ref=\/\/apple_ref\/doc\/filter\/ci\/CIColorControls",
"inputContrast" : {
"CIAttributeClass" : "NSNumber",
"CIAttributeDescription" : "The amount of contrast to apply. The larger the value, the more contrast in the resulting image.",
"CIAttributeSliderMax" : 4,
"CIAttributeDisplayName" : "Contrast",
"CIAttributeDefault" : 1,
"CIAttributeMin" : 0,
"CIAttributeIdentity" : 1,
"CIAttributeType" : "CIAttributeTypeScalar",
"CIAttributeSliderMin" : 0.25
},
"CIAttributeFilterAvailable_iOS" : "5",
"CIAttributeFilterAvailable_Mac" : "10.4"
}
|
json
|
<!DOCTYPE html>
<html>
<head>
<!-- Bootstrap Core CSS -->
<link href="../vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<title>Pitchfest - ECell IIT Madras</title>
</head>
<body style="font-family: Arial; padding: 60px 0; text-align: justify;">
<div>
<img src="../bootcamp/ecell.png" width="30%" style="padding-top: 5%;">
<img src="../bootcamp/esummit.png" width="15%" align="center" style="margin-left: 6%; transform: translate(0px, -30%);">
<img src="../bootcamp/iitm.png" width="10%" align="right" style="padding-top: 3%;">
</div>
<div class="container">
<h1 align="center" style="font-family: 'Montserrat', sans-serif;">Pitchfest</h1>
<div class="row">
<big><p><strong>The Pitching Milestone</strong></p></big>
<p>Pitchfest'17 brings to you a golden opportunity to impress an elite panel of investors and executives with your innovative start-up ideas, and bag a handsome funding from them.</p>
<big><p><strong>Who is it for?</strong></p></big>
<p>The competition has <strong>two different verticals</strong> catering to start-ups in the following phases-</p>
<p><strong>(Start-ups under both categories will be judged by separate independent panels.)</strong></p>
<ol>
<li><strong>Young start-ups</strong> struggling with initial funds – usually student start-ups from educational institutes who might be looking for <strong>mentoring</strong>, <strong>incubation</strong> and <strong>small funding</strong>.</li>
<li><strong>Operational start-ups</strong> – Start-ups which have started operations (even if at a basic level), which are looking for <strong>funding</strong>. Start-ups which have already been incubated can also apply under this category.</li>
</ol>
<p><strong>Note that start-ups applying under both categories should be well past the Ideation stage.</strong></p>
<big><p><strong>What is at stake</strong>?</p></big>
<ul>
<li>The most promising start-ups from both categories can hope to get <strong>invaluable mentoring</strong> and <strong>generous funding</strong> from our elite panel of investors.</li>
<li> The <strong>Early Stage start-ups</strong> can also look for the opportunity of being incubated at the <strong>Incubation Cell</strong> , the <strong>Business Incubator of IIT Madras</strong>.</li>
<li> The <strong>Operational start-ups</strong> can look for an <strong>attractive funding</strong> from the investors.</li>
<li> Prizes worth <strong>75,000 INR</strong> will be awarded to the best team across both categories.</li>
</ul>
<style type="text/css">
@import url('https://fonts.googleapis.com/css?family=Montserrat:700');
span
{
font-weight: 600;
color: red;
}
img
{
margin: 0 30px;
}
.linkbutton
{
padding: 25px;
border-radius: 10px;
border-width: 0px;
}
.linkbutton:hover
{
background-color: lightblue;
}
</style>
<div align="center">
<button class="linkbutton" style="" onclick="window.location.href='agenda.html';"><b> Agenda </b></button>
<button class="linkbutton" style="" onclick="window.location.href='details.html';"><b>Registration Details</b></button>
</div>
</div>
</div>
</body>
</html>
|
html
|
<reponame>leun4m/stochasta
use crate::{CardDeck, CardDrawSequence, Probability, PROBABILITY_ONE, PROBABILITY_ZERO};
use itertools::Itertools;
use std::{collections::HashMap, fmt::Display, hash::Hash};
/// Prefix used for graphviz ids
const GRAPHVIZ_PREFIX: &str = "_";
/// A representation of a card drawing process.
///
/// # Type Parameters
/// - `C`: The type of a single card
#[derive(Clone, Eq, Debug)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct CardDrawTree<C>
where
C: Eq + Hash,
{
probability: Probability,
probability_in_tree: Probability,
nodes: HashMap<C, CardDrawTree<C>>,
}
impl<C> Default for CardDrawTree<C>
where
C: Eq + Hash + Clone,
{
fn default() -> Self {
Self::new()
}
}
impl<C> PartialEq for CardDrawTree<C>
where
C: Eq + Hash,
{
fn eq(&self, rhs: &Self) -> bool {
self.probability == rhs.probability
&& self.probability_in_tree == rhs.probability_in_tree
&& self.nodes == rhs.nodes
}
}
impl<C> CardDrawTree<C>
where
C: Eq + Hash + Clone,
{
/// Creates a new empty tree.
///
/// # Example
///
/// ```
/// use stochasta::CardDrawTree;
///
/// let tree: CardDrawTree<i32> = CardDrawTree::new();
/// assert!(tree.is_empty());
/// ```
#[must_use]
pub fn new() -> Self {
Self {
probability: PROBABILITY_ONE,
probability_in_tree: PROBABILITY_ONE,
nodes: HashMap::new(),
}
}
/// Creates a new empty tree node.
#[must_use]
fn new_node(probability: Probability, parent_probability: Probability) -> Self {
Self {
probability,
probability_in_tree: parent_probability * probability,
nodes: HashMap::<C, CardDrawTree<C>>::new(),
}
}
/// Creates a tree with one level from the given card deck.
///
/// # Example
///
/// ```
/// use stochasta::{CardDeck, CardDrawTree};
///
/// let deck = CardDeck::from(vec!["heads", "tails"]);
/// let tree = CardDrawTree::create_from(&deck);
/// ```
#[must_use]
pub fn create_from(card_deck: &CardDeck<C>) -> Self {
let mut tree = Self::new_node(PROBABILITY_ONE, PROBABILITY_ONE);
for (card, probability) in card_deck.probabilities() {
tree.nodes
.insert(card.clone(), Self::new_node(probability, PROBABILITY_ONE));
}
tree
}
/// Creates a new tree with the number of `draws` with an unshrinking stack.
///
/// The stack won't shrink from drawing cards;
/// instead every drawn card is put back to the stack.
///
/// For a shrinking deck, see [`Self::shrinking()`].
#[must_use]
pub fn without_shrinking(card_deck: &CardDeck<C>, draws: u32) -> Self {
Self::without_shrinking_root_probability(card_deck, draws, PROBABILITY_ONE, PROBABILITY_ONE)
}
/// Creates a new tree with the number of `draws` with a shrinking stack.
///
/// The stack will shrink from drawing cards;
/// once a card is drawn it is no longer part of the stack.
///
/// For a non-shrinking deck, see [`Self::without_shrinking()`].
#[must_use]
pub fn shrinking(card_deck: &CardDeck<C>, draws: u32) -> Self {
Self::shrinking_root_probability(card_deck, draws, PROBABILITY_ONE, PROBABILITY_ONE)
}
fn without_shrinking_root_probability(
card_deck: &CardDeck<C>,
draws: u32,
probability: Probability,
parent_probability: Probability,
) -> Self {
let mut tree = Self::new_node(probability, parent_probability);
if 0 < draws {
for (card, card_probability) in card_deck.probabilities() {
tree.nodes.insert(
card.clone(),
Self::without_shrinking_root_probability(
card_deck,
draws - 1,
card_probability,
probability,
),
);
}
}
tree
}
fn shrinking_root_probability(
card_deck: &CardDeck<C>,
draws: u32,
probability: Probability,
parent_probability: Probability,
) -> Self {
let mut tree = Self::new_node(probability, parent_probability);
if 0 < draws {
for (card, card_probability) in card_deck.probabilities() {
let new_stack = card_deck.draw(card.clone());
tree.nodes.insert(
card.clone(),
Self::shrinking_root_probability(
&new_stack,
draws - 1,
card_probability,
probability,
),
);
}
}
tree
}
/// Returns the probability of a certain sequence in the tree.
///
/// The order is important as well as the position - the first entry will be searched among the
/// root nodes.
///
/// # Example
///
/// ```
/// use stochasta::{CardDeck, CardDrawTree, Probability, PROBABILITY_ONE, PROBABILITY_ZERO};
///
/// let coin = CardDeck::from(vec!["H", "T"]);
/// let tree = CardDrawTree::without_shrinking(&coin, 2);
///
/// assert_eq!(tree.probability_of(&[]), PROBABILITY_ONE);
/// assert_eq!(tree.probability_of(&["H"]), Probability::new(1, 2));
/// assert_eq!(tree.probability_of(&["H", "H"]), Probability::new(1, 4));
/// // 3x heads is impossible when only throwing 2x
/// assert_eq!(tree.probability_of(&["H", "H", "H"]), PROBABILITY_ZERO);
/// ```
#[must_use]
pub fn probability_of(&self, sequence: &[C]) -> Probability {
if sequence.is_empty() {
PROBABILITY_ONE
} else if let Some(node) = self.nodes.get(&sequence[0]) {
node.probability * node.probability_of(&sequence[1..])
} else {
PROBABILITY_ZERO
}
}
/// Returns `true` if the tree has no nodes.
#[must_use]
pub fn is_empty(&self) -> bool {
self.nodes.is_empty()
}
/// Returns all paths.
///
/// # Example
///
/// ```
/// use stochasta::{CardDeck, CardDrawTree, CardDrawSequence, Probability};
///
/// let coin = CardDeck::from(vec!["H", "T"]);
/// let tree = CardDrawTree::without_shrinking(&coin, 2);
///
/// let result = tree.paths();
/// let one_quarter = Probability::new(1, 4);
///
/// assert_eq!(result.len(), 4);
/// assert!(result.contains(&CardDrawSequence::new(vec!["H", "H"], one_quarter)));
/// assert!(result.contains(&CardDrawSequence::new(vec!["H", "T"], one_quarter)));
/// assert!(result.contains(&CardDrawSequence::new(vec!["T", "H"], one_quarter)));
/// assert!(result.contains(&CardDrawSequence::new(vec!["T", "T"], one_quarter)));
/// ```
#[must_use]
pub fn paths(&self) -> Vec<CardDrawSequence<C>> {
self.create_paths(&[])
}
fn create_paths(&self, sequence: &[C]) -> Vec<CardDrawSequence<C>> {
let mut result = Vec::new();
if self.is_empty() {
result.push(CardDrawSequence::new(
Vec::from(sequence),
self.probability_in_tree,
));
} else {
for (card, tree) in self.nodes.iter() {
let mut s = Vec::new();
s.extend(sequence.iter().cloned());
s.push(card.clone());
result.extend(tree.create_paths(&s));
}
}
result
}
}
impl<C> CardDrawTree<C>
where
C: Eq + Hash + Ord + Display,
{
/// Creates a [Graphviz](https://www.graphviz.org/)-graph from the decision tree.
///
/// # Example
///
/// For a more interesting graph this example covers an oddly weighted coin where *heads* is
/// twice as likely to be thrown as *tails*.
///
/// ```
/// use stochasta::{CardDeck, CardDrawTree};
///
/// let odd_coin = CardDeck::from(vec!["heads", "heads", "tails"]);
/// let tree = CardDrawTree::without_shrinking(&odd_coin, 2);
/// let output = r#"digraph {
/// _root[label="", shape="circle"];
/// _root->_heads_2[label="2/3"];
/// _heads_2[label="heads (2/3)"];
/// _heads_2->_heads_3[label="2/3"];
/// _heads_3[label="heads (4/9)"];
/// _heads_2->_tails_4[label="1/3"];
/// _tails_4[label="tails (2/9)"];
/// _root->_tails_5[label="1/3"];
/// _tails_5[label="tails (1/3)"];
/// _tails_5->_heads_6[label="2/3"];
/// _heads_6[label="heads (2/9)"];
/// _tails_5->_tails_7[label="1/3"];
/// _tails_7[label="tails (1/9)"];
/// }"#;
/// assert_eq!(tree.to_graphviz(), output);
/// ```
///
/// # ASCII Visualisation
///
/// This will result in the following graph (here sideways for better visualisation):
///
/// ```plain
/// 2/3
/// +-----[ heads (4/9) ]
/// 2/3 /
/// +-----[ heads (2/3) ]
/// / \ 1/3
/// / +-----[ tails (2/9) ]
/// /
/// O
/// \ 2/3
/// \ +-----[ heads (2/9) ]
/// \ 1/3 /
/// +-----[ tails (1/3) ]
/// \ 1/3
/// +-----[ tails (1/9) ]
///```
///
/// # Output
///
/// - the paths have the probability from their parent node
/// - the cards have additionally the total probability to reach it from the root node in
/// brackets
#[must_use]
pub fn to_graphviz(&self) -> String {
let mut result = String::from("digraph {\n");
let root = "root";
result.push_str(&format!(
"{}{}[label=\"\", shape=\"circle\"];\n",
GRAPHVIZ_PREFIX, root
));
let (subtree, _) = self.to_graphviz_iter(root, 1);
result.push_str(&subtree);
result.push('}');
result
}
fn to_graphviz_sub(&self, root: &str, card: &str, id: u32) -> (String, u32) {
let mut result = String::new();
let new_root = format!("{}_{}", card, id);
result.push_str(&format!(
"{}{}->{}{}[label=\"{}\"];\n",
GRAPHVIZ_PREFIX, root, GRAPHVIZ_PREFIX, new_root, self.probability
));
result.push_str(&format!(
"{}{}[label=\"{} ({})\"];\n",
GRAPHVIZ_PREFIX, new_root, card, self.probability_in_tree
));
let (subtree, new_id) = self.to_graphviz_iter(&new_root, id);
result.push_str(&subtree);
(result, new_id)
}
fn to_graphviz_iter(&self, root: &str, id: u32) -> (String, u32) {
let mut result = String::new();
let mut new_id = id;
for (card, subtree) in self.nodes.iter().sorted_by_key(|&(c, _)| c) {
let (graphviz, last_id) = subtree.to_graphviz_sub(root, &card.to_string(), new_id + 1);
new_id = last_id;
result.push_str(&graphviz);
}
(result, new_id)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn probability_of_empty() {
let tree: CardDrawTree<i32> = CardDrawTree::default();
assert_eq!(tree.probability_of(&[]), PROBABILITY_ONE);
}
#[test]
fn probability_of_coin() {
let coin = CardDeck::from(vec!["heads", "tails"]);
let tree = CardDrawTree::create_from(&coin);
assert_eq!(tree.probability_of(&["heads"]), Probability::new(1, 2));
assert_eq!(tree.probability_of(&["tails"]), Probability::new(1, 2));
assert_eq!(tree.probability_of(&["side"]), PROBABILITY_ZERO);
assert_eq!(tree.probability_of(&["heads", "tails"]), PROBABILITY_ZERO);
}
#[test]
fn to_graphviz_empty() {
let deck = CardDeck::<String>::new();
let tree = CardDrawTree::without_shrinking(&deck, 1);
let output = r#"digraph {
_root[label="", shape="circle"];
}"#;
assert_eq!(tree.to_graphviz(), output);
}
#[test]
fn to_graphviz_number() {
let odd_coin = CardDeck::from(vec![7, 42]);
let tree = CardDrawTree::without_shrinking(&odd_coin, 1);
let output = r#"digraph {
_root[label="", shape="circle"];
_root->_7_2[label="1/2"];
_7_2[label="7 (1/2)"];
_root->_42_3[label="1/2"];
_42_3[label="42 (1/2)"];
}"#;
assert_eq!(output, tree.to_graphviz());
}
#[test]
fn shrinking_empty() {
let deck: CardDeck<i32> = CardDeck::new();
let tree = CardDrawTree::shrinking(&deck, 1);
assert!(tree.is_empty());
}
#[test]
fn shrinking_multiple_draws() {
let deck = CardDeck::from(vec![1, 2, 3]);
let tree = CardDrawTree::shrinking(&deck, 3);
assert_eq!(tree.probability_of(&[1, 2, 1]), PROBABILITY_ZERO);
assert_eq!(tree.probability_of(&[1, 2, 2]), PROBABILITY_ZERO);
assert_eq!(tree.probability_of(&[1, 2, 3]), Probability::new(1, 6));
}
#[test]
fn without_shrinking_multiple_draws() {
let deck = CardDeck::from(vec![1, 2, 3]);
let tree = CardDrawTree::without_shrinking(&deck, 3);
assert_eq!(tree.probability_of(&[1, 2, 1]), Probability::new(1, 27));
assert_eq!(tree.probability_of(&[1, 2, 2]), Probability::new(1, 27));
assert_eq!(tree.probability_of(&[1, 2, 3]), Probability::new(1, 27));
}
}
|
rust
|
<filename>clients/android/NewsBlur/src/com/newsblur/fragment/FeedIntelTrainerFragment.java
package com.newsblur.fragment;
import java.util.List;
import java.util.Map;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.Dialog;
import android.content.DialogInterface;
import android.os.Bundle;
import android.support.v4.app.DialogFragment;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.View;
import android.widget.LinearLayout;
import android.widget.TextView;
import butterknife.ButterKnife;
import butterknife.Bind;
import com.newsblur.R;
import com.newsblur.domain.Classifier;
import com.newsblur.domain.Feed;
import com.newsblur.util.FeedSet;
import com.newsblur.util.FeedUtils;
import com.newsblur.util.UIUtils;
public class FeedIntelTrainerFragment extends DialogFragment {
private Feed feed;
private FeedSet fs;
private Classifier classifier;
@Bind(R.id.intel_title_header) TextView headerTitles;
@Bind(R.id.intel_tag_header) TextView headerTags;
@Bind(R.id.intel_author_header) TextView headerAuthor;
@Bind(R.id.existing_title_intel_container) LinearLayout titleRowsContainer;
@Bind(R.id.existing_tag_intel_container) LinearLayout tagRowsContainer;
@Bind(R.id.existing_author_intel_container) LinearLayout authorRowsContainer;
@Bind(R.id.existing_feed_intel_container) LinearLayout feedRowsContainer;
public static FeedIntelTrainerFragment newInstance(Feed feed, FeedSet fs) {
FeedIntelTrainerFragment fragment = new FeedIntelTrainerFragment();
Bundle args = new Bundle();
args.putSerializable("feed", feed);
args.putSerializable("feedset", fs);
fragment.setArguments(args);
return fragment;
}
@Override
public Dialog onCreateDialog(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
feed = (Feed) getArguments().getSerializable("feed");
fs = (FeedSet) getArguments().getSerializable("feedset");
classifier = FeedUtils.dbHelper.getClassifierForFeed(feed.feedId);
final Activity activity = getActivity();
LayoutInflater inflater = LayoutInflater.from(activity);
View v = inflater.inflate(R.layout.dialog_trainfeed, null);
ButterKnife.bind(this, v);
// display known title classifiers
for (Map.Entry<String, Integer> rule : classifier.title.entrySet()) {
View row = inflater.inflate(R.layout.include_intel_row, null);
TextView label = (TextView) row.findViewById(R.id.intel_row_label);
label.setText(rule.getKey());
UIUtils.setupIntelDialogRow(row, classifier.title, rule.getKey());
titleRowsContainer.addView(row);
}
if (classifier.title.size() < 1) headerTitles.setVisibility(View.GONE);
// get the list of suggested tags
List<String> allTags = FeedUtils.dbHelper.getTagsForFeed(feed.feedId);
// augment that list with known trained tags
for (Map.Entry<String, Integer> rule : classifier.tags.entrySet()) {
if (!allTags.contains(rule.getKey())) {
allTags.add(rule.getKey());
}
}
for (String tag : allTags) {
View row = inflater.inflate(R.layout.include_intel_row, null);
TextView label = (TextView) row.findViewById(R.id.intel_row_label);
label.setText(tag);
UIUtils.setupIntelDialogRow(row, classifier.tags, tag);
tagRowsContainer.addView(row);
}
if (allTags.size() < 1) headerTags.setVisibility(View.GONE);
// get the list of suggested authors
List<String> allAuthors = FeedUtils.dbHelper.getAuthorsForFeed(feed.feedId);
// augment that list with known trained authors
for (Map.Entry<String, Integer> rule : classifier.authors.entrySet()) {
if (!allAuthors.contains(rule.getKey())) {
allAuthors.add(rule.getKey());
}
}
for (String author : allAuthors) {
View rowAuthor = inflater.inflate(R.layout.include_intel_row, null);
TextView labelAuthor = (TextView) rowAuthor.findViewById(R.id.intel_row_label);
labelAuthor.setText(author);
UIUtils.setupIntelDialogRow(rowAuthor, classifier.authors, author);
authorRowsContainer.addView(rowAuthor);
}
if (allAuthors.size() < 1) headerAuthor.setVisibility(View.GONE);
        // for feed-level intel, the label is the title and the intel identifier is the feed ID
View rowFeed = inflater.inflate(R.layout.include_intel_row, null);
TextView labelFeed = (TextView) rowFeed.findViewById(R.id.intel_row_label);
labelFeed.setText(feed.title);
UIUtils.setupIntelDialogRow(rowFeed, classifier.feeds, feed.feedId);
feedRowsContainer.addView(rowFeed);
AlertDialog.Builder builder = new AlertDialog.Builder(activity);
builder.setTitle(R.string.feed_intel_dialog_title);
builder.setView(v);
builder.setNegativeButton(R.string.alert_dialog_cancel, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int i) {
FeedIntelTrainerFragment.this.dismiss();
}
});
builder.setPositiveButton(R.string.dialog_story_intel_save, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int i) {
FeedUtils.updateClassifier(feed.feedId, classifier, fs, activity);
FeedIntelTrainerFragment.this.dismiss();
}
});
Dialog dialog = builder.create();
dialog.getWindow().getAttributes().gravity = Gravity.BOTTOM;
return dialog;
}
}
|
java
|
<gh_stars>0
{
"name": "purescript-concur-starter",
"version": "0.1.0",
"description": "A Starter kit for Purescript-Concur. Uses Spago and Parcel.",
"main": "index.js",
"scripts": {
"test": "spago test",
"clean": "rimraf .cache .spago .psci_modules output .pulp-cache prod dist",
"build": "spago build",
"start": "spago build && cp index.dev.js index.js && parcel index.html",
"watch": "spago build && parcel watch index.html",
"_1": "THESE BUILD MODES WORK ----------------------------------------------",
"parcel": "rimraf output dist && spago build && cp index.dev.js index.js && cross-env NODE_ENV=development parcel build --public-url ./ index.html && rm index.js",
"zephyr": "rimraf output dist && spago build --purs-args '-g corefn' && zephyr -f Main.main && cp index.dce.js index.js && parcel build --public-url ./ index.html && rm index.js",
"parcel-closure": "rimraf output dist && spago build && cp index.dev.js index.js && parcel build --public-url ./ index.html && java -jar closure-compiler/closure-compiler-v20200517.jar --isolation_mode IIFE --js dist/purescript*.js --js_output_file dist/index.js && cp index.html dist/index.html && rm index.js dist/purescript*",
"zephyr-parcel-closure": "rimraf output dist parcel && spago build --purs-args '-g corefn' && zephyr -f Main.main && cp index.dce.js index.js && parcel build --public-url ./ index.html && java -jar closure-compiler/closure-compiler-v20200517.jar --isolation_mode IIFE --js dist/purescript*.js --js_output_file dist/index.js && cp index.html dist/index.html && rm index.js dist/purescript*",
"spago": "rimraf dist output && mkdir dist && spago bundle-app --main Main --to dist/index.js && cp index.html dist/index.html",
"spago-closure": "rimraf dist output && mkdir dist && spago bundle-app --main Main --to index.js && java -jar closure-compiler/closure-compiler-v20200517.jar --js index.js --js_output_file dist/index.js && cp index.html dist/index.html",
"spago-closure-iife": "rimraf dist output && mkdir dist && spago bundle-app --main Main --to index.js && java -jar closure-compiler/closure-compiler-v20200517.jar --js index.js --isolation_mode IIFE --js_output_file dist/index.js && cp index.html dist/index.html",
"_2": "THESE BUILD MODES DON'T WORK ----------------------------------------------",
"zephyr-closure": "echo 'THIS ONE FAILS. CLOSURE GENERATES INVALID CODE. EXIT.' && false && rimraf dist && mkdir dist && spago build --purs-args '-g corefn' && zephyr -f Main.main && java -jar closure-compiler/closure-compiler-v20200517.jar --js index.dce.js --js 'dce-output/**.js' --js_output_file dist/index.js && cp index.html dist/index.html",
"closure-parcel": "echo 'THIS ONE FAILS. CLOSURE GENERATES INVALID CODE. EXIT.' && false && rimraf output dist && spago build && java -jar closure-compiler/closure-compiler-v20200517.jar --isolation_mode IIFE --js index.dev.js --js 'output/**.js' --js_output_file index.js && parcel build --public-url ./ index.html && rm index.js",
"spago-closure-advanced": "echo \"CLOSURE ADVANCED MODE DOESN'T WORK\" && false && rimraf dist output && mkdir dist && spago bundle-app --main Main --to index.js && java -jar closure-compiler/closure-compiler-v20200517.jar --compilation_level ADVANCED_OPTIMIZATIONS --js index.js --js_output_file dist/index.js && cp index.html dist/index.html",
"spago-closure-iife-advanced": "echo \"CLOSURE ADVANCED MODE DOESN'T WORK\" && false && rimraf dist output && mkdir dist && spago bundle-app --main Main --to index.js && java -jar closure-compiler/closure-compiler-v20200517.jar --isolation_mode IIFE --compilation_level ADVANCED_OPTIMIZATIONS --js index.js --js_output_file dist/index.js && cp index.html dist/index.html"
},
"author": "<NAME> <<EMAIL>> (https://github.com/ajnsit)",
"license": "MIT",
"devDependencies": {
"@rollup/plugin-commonjs": "^13.0.0",
"cross-env": "^7.0.2",
"parcel-bundler": "^1.12.4",
"purescript": "^0.13.8",
"rimraf": "^3.0.2",
"rollup": "^2.16.1",
"sass": "^1.26.8",
"spago": "^0.15.2"
},
"dependencies": {}
}
|
json
|
Rashtrapati Bhavan played a perfect host to the dignitaries from various countries, who attended the swearing-in ceremony of the new government on May 30. All the eminent guests were treated to delicious recipes both vegetarian and non-vegetarian.
However, one dish that stood out as an instant favourite with all the guests was Dal Raisina. Invented by Chef Machindra Kasture in 2010, Dal Raisina is termed an in-house culinary innovation of Rashtrapati Bhavan. Popular for its velvety texture and mild flavours, Dal Raisina has found an indispensable spot in all functions hosted by Rashtrapati Bhavan and was served to the Obamas in 2015, when they visited India for the Republic Day celebrations.
However, many chefs differ on how long it takes to cook Dal Raisina. It is traditionally cooked for 6 to 8 hours on a slow flame, while according to the current chef of Rashtrapati Bhavan, it takes around 48 hours to cook this delicacy.
In this article, we will tell you a quicker way of cooking Dal Raisina. It is served hot with rice.
Soak whole black urad dal and rajma separately overnight. Pressure cook them separately along with bay leaves until soft.
In a deep kadai, add butter and oil together. Add jeera, garlic, ginger, onions. Cook until the raw flavour goes away.
Add finely chopped tomatoes, salt, sauté well. Add 1⁄4 cup water and cook till the ingredients turn soft.
Add 1⁄4 cup tomato puree, turmeric, coriander and cumin powders. Stir in garam masala and green chillies.
Cook for 6 to 8 minutes. Once the oil starts releasing and leaving the sides, add the dal along with the bay leaves.
Add water, simmer for 10 minutes with the lid on.
Add crushed kasuri methi leaves, coriander leaves. Garnish with finely chopped ginger.
Add cream, stir well. Cook for 30 minutes with the lid on.
Serve hot with rice.
Nutritional Facts:
Dal Raisina is a powerhouse of proteins. Urad dal is loaded with potassium, protein, calcium, iron, and vitamins B1, B2 and B3, while rajma contains vitamin C, iron, magnesium, calcium and vitamin B6. Tomatoes are a rich source of lycopene, while onions, garlic and ginger aid in digestion.
Coriander, turmeric, cumin are known for anti-inflammatory properties and Kasuri Methi controls blood sugars and promotes digestion. Adding ample amounts of cream provides the dish with good healthy fats.
|
english
|
Along with the concerns already expressed, discrimination around mental health NEEDS TO BE ADDRESSED as well. As I said, the issues mentioned are definitely valid, but mental health and the stigma being put on that issue is also an area of great concern. We are, or are supposed to be, living in an era of taking the stigma away from mental health, not adding to it. I can say with 100% certainty that the way departments are dealing with mental health issues is only creating a much bigger problem. People who have children are AFRAID TO SEEK HELP because if they do, their children will be removed. There are professionals diagnosing where they are not qualified to do so. For example, true story: a woman lost 3 kids because CYFS said that with everything she's been through she has GOT TO HAVE a mental illness, despite the numerous assessments and 2 years on the wait list for psychiatry because she is of no harm to herself or anyone else. Her little brother died and she tried to break free from their abusive, controlling father - I think she is strong. Can one not grieve over the loss of a loved one? And if she did have a mental illness, is removing her children a good approach? If a parent seeks help and the children are removed from the home, especially when there are no indicators of emotional harm... that's punishment for parent and child. Or should a parent struggling with daily activities, wanting to reach out but, due to fear, staying sad, in bed, in pain, wanting to become a better parent - is this helping their issue or adding to it? Back to a parent first finding out or inquiring whether they have a mental illness: automatic removal, possibly causing suicide.
|
english
|
BLACKPINK is coming to our area for real. Yes, it is finally happening. On July 6, a representative of YG Entertainment (the girl group's agency) revealed that they are currently recording their new album.
BLACKPINK will start shooting their music video in July and make a comeback with new music in August.
The official source stated that much of the group's signature music has long been explored. To turn over a new leaf in their discography and establish a stronger emotional connection with BLINKs (BLACKPINK's fans), they will be expanding their group activities in the future.
The official source further stated that the girl group will embark on a grand-scale world tour in K-pop girl group history along with their comeback and will work on more solo projects, befitting their queen status.
The source continued by stating that it is "really happening." BLINKs are still processing this thrilling piece of news.
Fans have been waiting for the girl group's return for about two years. Over the years, they have collectively written to YG Entertainment, demanding the group's comeback and a global concert, and have even taken to social media platforms to let their voices be known. It seems like their prayers have been heard.
As soon as YG Entertainment announced BLACKPINK's comeback and the concert tour, the company's stocks rose 10 percent, marking the beginning of happier news.
What have the BLACKPINK members been up to recently?
The members have been working on their solo projects and brand commitments. After Jennie's solo debut in November 2019 with the single SOLO, Rosé became the second member of the girl group to make her solo debut with her single album R, which was released in March 2021, and included two tracks, On the Ground (lead single) and Gone (b-side track).
Next in line was the group's diamond maknae Lisa, who debuted her solo album LALISA, including the title track LALISA and the b-side track MONEY. Meanwhile, Jisoo made her acting debut in JTBC's melo-romance drama Snowdrop alongside Jung Hae-in.
The girls have also been fulfilling their commitment to the luxury brands and are working on exciting collaborations with other artists.
This will mark the group's first comeback in approximately a year and 10 months since they released their first studio-length album, THE ALBUM, in 2020. It had the title track How You Like That and the special collab track Ice Cream with pop queen Selena Gomez.
This is also a memorable comeback for more reasons; besides the happiness of finally meeting BLINKs worldwide, they can also hold large-scale promotions for the first time since the outbreak of COVID-19.
During the promotions for THE ALBUM, they could not participate in several activities due to the ongoing COVID-19 pandemic and the restrictions in place.
|
english
|
Our power rankings for the NBA Defensive Player of the Year award continue to fluctuate, with just over a fortnight left in the regular season. As the Philadelphia 76ers and the Utah Jazz rank among the three stingiest defenses this season, it is unsurprising that the runaway leaders for the award are from these teams.
The race for the DPOY award has been a close one to call. Both Rudy Gobert and Ben Simmons are making strong cases for themselves. However, since each player has different attributes on the floor, it may well come down to which areas of their play voters feel most accurately represent defensive prowess.
With about ten games left for most teams in the regular season, there is little time remaining for the various candidates to strengthen their cases for the NBA Defensive Player of the Year award.
It has been a fascinating battle up until now between two players who are the anchors of their respective team's defenses.
Our power rankings have changed this week, partly due to Myles Turner's unfortunate injury. Bam Adebayo, helped by a stream of social media support from the Miami Heat fanbase, has made a surge up the rankings.
On that note, let's take a look at our latest DPOY rankings in ascending order:
MP - 31, RPG - 6.5, BPG - 3.4, SPG - 0.9, DWS - 2.2, DBPM - 1.7.
Previous NBA DPOY Power Ranking - 3rd (↓2).
Myles Turner was having a good season before he sustained the foot injury that has sidelined him indefinitely.
The Indiana Pacers center was able to focus on his defense while Sabonis took care of the offensive end. Post his injury, his chances of winning his first NBA Defensive Player of the Year title have gone down, despite leading the league for the second time in blocks per game.
Turner is probably the best rim protector on our list. His 3.4 blocks a night is the most by any player since 2016 and the second-most since 2008. Compared to the rest of our top five, the 25-year-old ranks second behind Gobert for opposition field-goal percentage at 45.3%, which also puts him in the top 16% of all players.
Previous NBA DPOY Power Ranking - 4th (-).
Despite winning the NBA Defensive Player of the Year award last season, Giannis Antetokounmpo could lose out on the award this campaign. That is not to say the Greek has not been an elite defender; his defensive box +/- is the best in our list and is only 1.4 short of his total last year.
However, as he now has less defensive responsibility in the Milwaukee Bucks' team, Antetokounmpo's defensive win share could be at its lowest since 2016. His efficiency has been below 100 in the last two campaigns, while it is at 106 this year.
Nevertheless, there are very few players in the league who can effectively match up against Antetokounmpo's size and athleticism.
Although he has defensive stalwarts in Brook Lopez and Jrue Holiday alongside him, the 26-year-old remains the leader of the Milwaukee Bucks. Antetokounmpo has a team-leading net rating of 10 this season, which ranks him seventh in the league for players who have played more than 45 games.
MP - 33.5, RPG - 9.1, BPG - 1.1, SPG - 1.1, DWS - 2.9, DBPM - 1.9.
Previous NBA DPOY Power Ranking - 5th (↑2).
Though his team has had an inconsistent campaign this season, Bam Adebayo has been the Miami Heat's most consistent performer at both ends of the floor.
He is currently the closest challenger to the top two in our list for the NBA Defensive Player of the Year award. Thanks to his exploits, the Miami Heat's defense has improved and is now ranked fifth-best for efficiency.
Adebayo is an elite defender who can guard all positions from one through five. When he switches, the Miami Heat allow only 0.92 points per possession, which would give them the No. 1 defense in the league.
Moreover, he has only allowed 60 points on 76 isolation situations against All-Stars, which equates to 0.82 points per iso. That includes his stellar performance against Kyrie Irving on the 17th of April.
After Antetokounmpo, Adebayo is the only other player in our list for the NBA Defensive Player of the Year award who has grabbed one block and one steal per game this season.
MP - 33.0, RPG - 7.6, BPG - 0.7, SPG - 1.6, DWS - 2.8, DBPM - 1.9.
Previous NBA DPOY Power Ranking - 1st (↓1).
Ben Simmons' case for the NBA Defensive Player of the Year was made even stronger recently while he missed four games for the Philadelphia 76ers. In his absence, the 76ers lost all four matches, while their opponents scored 119.8 points per matchup, 14 more than the team's season average.
On his return against the lowly OKC Thunder, Simmons helped the 76ers contain their opposition to just 90 points while grabbing three steals in 23 minutes.
The 24-year-old has an elite net rating of 7.5 and is second in our list for least opposition points allowed. The point forward can guard any opponent and often takes on their best player. He has been key to the Philadelphia 76ers maintaining a top-3 defense for most of the season and could lead the franchise to their first NBA title since the 1980s.
Previous NBA DPOY Power Ranking - 2nd (↑1).
Rudy Gobert remains the bookmakers' favorite for this year's NBA Defensive Player of the Year award.
That is despite the Frenchman's momentary brain-freeze on Monday night that cost the Utah Jazz a win against the Minnesota Timberwolves. Gobert was quick to hold his hands up after the game and take the blame for calling a switch with Mike Conley Jr., only to leave his man, D'Angelo Russell, completely open for the winning lay-up.
The Utah Jazz were the first team this year to lock in their playoff place, thanks to Gobert's defensive efforts. The 28-year-old is their anchor in defense and is exceptional in providing cover for his teammates, ranking second in the league for blocked shots.
His defensive win share of 4.4 is the best among all players, while his 10.1 defensive rebounds per game rank him at No. 1 as well.
Nevertheless, the race for the NBA Defensive Player of the Year award could come down to the wire. That's because both Utah and Philadelphia have an almost identical defensive efficiency, but Rudy Gobert could get the nod due to his superior rim protection.
|
english
|
Bellator heavyweight Fedor Emelianenko has answered whether or not he'll return to MMA for Francis Ngannou whilst he prepares for his retirement bout with Ryan Bader.
Emelianenko is currently set to end his glittering career whether he wins, loses, or draws against 'Darth' at Bellator 290 tomorrow night. The 46-year-old will walk away from MMA with almost 50 professional fights (40-6) and with many fans regarding the Russian fighter as one of the best heavyweights in MMA history.
Whilst at the Bellator 290 presser, 'The Last Emperor' was posed with an interesting question by journalist 'Mini Khabib.' Emelianenko was asked whether or not he'd consider postponing his retirement if the now free-agent Francis Ngannou was signed by Bellator.
The Russian fighter, with respect to his opponent Ryan Bader, said:
"It is not correct to answer that question with Ryan Bader sat beside me and he's the champion at the moment. This is my last fight and I only think of this as my last fight. I want to have this fight with the champion of Bellator and be done."
'The Predator' was released by the UFC last month after failing to agree on a new contract with the organization. Ngannou had been pushing for their help to co-promote a boxing bout with Tyson Fury, but Dana White continued to refuse. The Cameroonian, therefore, vacated the title upon his release but is expected to face 'The Gypsy King' in a super fight later this year.
Catch the interview here:
Michael Bisping recently stated that Francis Ngannou has a puncher's chance should the Cameroonian face boxing heavyweight Anthony Joshua.
Rumors of a potential clash between the pair were recently confirmed by Joshua's manager Eddie Hearn. The Matchroom Sports chairman appeared on Ariel Helwani's The MMA Hour, where he claimed a bout between the British boxer and Francis Ngannou could be one of the biggest in history:
"The Anthony Joshua fight is just probably one of the biggest fights that could be made across any kind of sport... The value really is the unknown. What happens when two titans from the world of fight sports collide?...We know he punches extremely hard. If he lands one on Joshua, is it the greatest upset of all time? Or will AJ completely steamroll him?...That’s the attraction of that fight, the unknown."
Michael Bisping then weighed in on his YouTube channel. 'The Count' believes the Cameroonian would open up as the underdog but added that it's possible to see him walk away victorious:
"Joshua, potentially, will be there to get caught off a gigantic shot of Francis Ngannou and could you imagine it? Francis Ngannou knocks out Anthony Joshua, that is not that unlikely. He would be the underdog. Of course, he would. Anthony Joshua would be a massive favorite in that fight. But that is not beyond the realms of possibility."
Catch the video here:
|
english
|
Annesha Borah has been awarded the Degree of Doctor of Philosophy (Ph.D.) in March 2021 by North-Eastern Hill University (NEHU)
GUWAHATI: Annesha Borah has been awarded the Degree of Doctor of Philosophy (Ph.D.) in March 2021 by North-Eastern Hill University (NEHU), Shillong for her thesis entitled 'Heritage tourism in Sonitpur District, Assam'. She conducted her research work under Dr PK Ryngnga, Associate Professor, Department of Geography, NEHU. She is the daughter of the late Kalipada Deb and Joona Borah of Tezpur. She is the wife of Madhurjya Das, a resident of Sewali Path, Hatigaon.
|
english
|
Ousmane Dembele scored a 90th-minute winner as Barcelona defeated 10-man Valladolid 1-0 Monday to move within a point of Spanish league leader Atletico Madrid.
Dembele scored with a left-footed shot from inside the area to make sure Barcelona can control its own fate in the title race, as it will host Atletico in one of the final rounds in May.
Barcelona had been struggling to break through the tight defensive scheme of Valladolid, which played a man down from the 80th after Óscar Plano was sent off for a foul from behind to stop a dangerous run by Dembele.
Atletico lost 1-0 at Sevilla on Sunday to see its once-comfortable cushion vanish. Seeking its first league title since 2014, Atletico has been gradually losing ground at the top of the standings. Its lead over Barcelona was more than 10 points a few weeks ago.
Atletico’s gap to third-place Real Madrid, which defeated Eibar 2-0 on Saturday, is three points with nine rounds left.
Barcelona next visits Real Madrid in the last Clasico of the season on Saturday. Atletico hosts Eibar on Sunday.
Before the match, Lionel Messi received an award for breaking Xavi Hernandez’s record of 767 appearances for the Catalan club. He broke the record in the game against Real Sociedad in the previous round.
Barcelona has won six straight league games and extended its unbeaten streak in the competition to 19 matches. It has won 12 times since a 2-1 loss at Cadiz in December.
Valladolid, which saw Kenan Kodro hit the crossbar with a header less than 10 minutes into the match at the Camp Nou Stadium, stayed three points from the relegation zone. It has won one of its last 12 league games.
(This story has not been edited by News18 staff and is published from a syndicated news agency feed - Associated Press)
|
english
|
<filename>package.json<gh_stars>0
{
"name": "@kherock/oidc-client",
"version": "2.0.0-beta.3",
"description": "OpenID Connect (OIDC) & OAuth2 client library",
"repository": {
"type": "git",
"url": "git+https://github.com/kherock/oidc-client.git"
},
"homepage": "https://github.com/kherock/oidc-client#readme",
"license": "Apache-2.0",
"main": "dist/oidc-client.cjs",
"module": "dist/oidc-client.mjs",
"types": "dist/oidc-client.d.ts",
"workspaces": [
"samples/*",
"website"
],
"files": [
"dist"
],
"keywords": [
"authentication",
"oauth2",
"oidc",
"openid",
"OpenID Connect"
],
"scripts": {
"build": "npm run build-esm && npm run build-node && npm run build-browser && npm run build-browser-min && npm run build-types",
"build-esm": "esbuild src/index.ts --bundle --outfile=dist/oidc-client.mjs --format=esm",
"build-node": "esbuild src/index.ts --bundle --outfile=dist/oidc-client.cjs --platform=node",
"build-browser": "esbuild src/index.ts --bundle --outfile=dist/oidc-client.js --platform=browser --global-name=oidc",
"build-browser-min": "npm run build-browser -- --minify --outfile=dist/oidc-client.min.js",
"build-types": "tsc --emitDeclarationOnly && api-extractor run",
"prepack": "npm run build",
"test": "tsc -p test/unit/tsconfig.json && jest",
"lint": "eslint --cache .",
"prepare": "husky install",
"release": "standard-version"
},
"dependencies": {
"jsrsasign": "^10.3.0"
},
"devDependencies": {
"@microsoft/api-extractor": "^7.18.10",
"@testing-library/jest-dom": "^5.5.0",
"@types/jest": "^27.0.2",
"@types/jsrsasign": "^8.0.13",
"@typescript-eslint/eslint-plugin": "^4.31.1",
"@typescript-eslint/parser": "^4.31.1",
"esbuild": "^0.13.2",
"eslint": "^7.32.0",
"eslint-plugin-testing-library": "^4.10.1",
"http-proxy-middleware": "^2.0.1",
"husky": "^7.0.2",
"jest": "^27.2.0",
"lint-staged": "^11.1.2",
"standard-version": "^9.3.1",
"ts-jest": "^27.0.5",
"typescript": "~4.4.3"
},
"engines": {
"node": ">=12.13.0"
},
"lint-staged": {
"*.{js,jsx,ts,tsx}": "eslint --cache --fix"
}
}
|
json
|
/* global URLSearchParams */
import {ReactNode} from 'react';
import {Link as RouterLink, useSearchParams} from 'react-router-dom';
export type NavigationLinkPropsType = {
children?: ReactNode;
className?: string;
isSaveQueries?: boolean;
queries?: Record<string, string>;
title?: string;
to: string;
};
export function NavigationLink(props: NavigationLinkPropsType): JSX.Element {
const {className, to, children, isSaveQueries = true, title, queries: passedQueries = {}} = props;
const [search] = useSearchParams();
const currentQueries: Record<string, string> = Object.fromEntries<string>(search.entries());
const resultQueries: Record<string, string> = isSaveQueries ? {...currentQueries, ...passedQueries} : passedQueries;
const queriesAsString: string = new URLSearchParams(resultQueries).toString();
const queriesAsPartUrl = queriesAsString && `?${queriesAsString}`;
return (
<RouterLink className={className} title={title} to={to + queriesAsPartUrl}>
{children}
</RouterLink>
);
}
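The component above keeps the current URL's query string (when `isSaveQueries` is true) and lets explicitly passed `queries` win on key collisions. A minimal standalone sketch of that merge logic, written in Python for brevity (the `merge_queries` helper is illustrative, not part of the component):

```python
from urllib.parse import parse_qsl, urlencode

def merge_queries(current_search, passed, keep_current=True):
    # Mirror of the component's behaviour: keep the current query params
    # (unless keep_current is False) and let explicitly passed params
    # override on key collisions, then prepend '?' only when non-empty.
    current = dict(parse_qsl(current_search.lstrip('?')))
    merged = {**current, **passed} if keep_current else dict(passed)
    qs = urlencode(merged)
    return ('?' + qs) if qs else ''

link_query = merge_queries('?page=2&sort=asc', {'sort': 'desc'})
```

Because the passed params are spread last, `sort` is overridden while `page` survives, matching `{...currentQueries, ...passedQueries}` above.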
|
typescript
|
//
// Created by <NAME> (EI) on 15/09/2017.
//
#ifndef SEQSORTER_FILEREADER_H
#define SEQSORTER_FILEREADER_H
#include <cstring>
#include <fstream>
#include <iostream>
#include <fcntl.h>
#include <sdglib/readers/Common.hpp>
#include <sdglib/utilities/OutputLog.hpp>
#include "kseq.hpp"
struct FastxReaderParams {
uint32_t min_length=0;
};
struct FastaRecord {
int32_t id;
std::string name,seq;
};
template<typename FileRecord>
class FastaReader {
public:
/**
* @brief
* Initialises the FastaReader, opens the file based on the format and instantiates a reader (plain, gzip or bzip2)
* @param params
* Parameters for filtering the records (i.e. min_size, max_size)
* @param filepath
* Relative or absolute path to the file that is going to be read.
*/
explicit FastaReader(FastxReaderParams params, const std::string &filepath) : params(params), numRecords(0) {
sdglib::OutputLog(sdglib::LogLevels::INFO) << "Opening: " << filepath << "\n";
gz_file = gzopen(filepath.c_str(), "r");
if (gz_file == Z_NULL) {
sdglib::OutputLog(sdglib::LogLevels::WARN) << "Error opening FASTA " << filepath << ": " << std::strerror(errno) << std::endl;
throw std::runtime_error("Error opening " + filepath + ": " + std::strerror(errno));
}
ks = new kstream<gzFile, FunctorZlib>(gz_file, gzr);
}
~FastaReader() {
delete ks;
gzclose(gz_file);
}
/**
* @brief
* Calls the file reader and places the fields from the file onto the FileRecord, the ID is set to the
* number of records seen so far.
* @param rec
* Input/Output parameter where the file fields will be stored.
* @return
* Whether the function will generate another object or not
*/
bool next_record(FileRecord& rec) {
int l;
do {
l=(ks -> readFasta(seq));
std::swap(rec.seq, seq.seq);
std::swap(rec.name, seq.name);
rec.id = numRecords;
numRecords++;
stats.totalLength+=rec.seq.size();
} while(rec.seq.size() < params.min_length && l >= 0);
if (l >= 0) {
stats.filteredRecords++;
stats.filteredLength+=rec.seq.size();
}
return (l >= 0);
}
ReaderStats getSummaryStatistics() {
stats.totalRecords = numRecords;
return stats;
}
private:
kstream<gzFile, FunctorZlib> *ks;
kstream<BZFILE, FunctorBZlib2> *bzKS;
kseq seq;
uint32_t numRecords = 0;
gzFile gz_file;
BZFILE * bz_File{};
int fq_File{};
FunctorZlib gzr;
FunctorRead rr;
FastxReaderParams params;
ReaderStats stats;
};
struct FastqRecord{
int64_t id;
std::string name, comment, seq, qual;
};
template<typename FileRecord>
class FastqReader {
public:
/**
* @brief
* Initialises the FastqReader, opens the file based on the format and instantiates a reader (plain, gzip or bzip2)
* @param params
* Parameters for filtering the records (i.e. min_size, max_size)
* @param filepath
* Relative or absolute path to the file that is going to be read.
*/
explicit FastqReader(FastxReaderParams params, const std::string &filepath) : params(params), numRecords(0),eof_flag(false) {
sdglib::OutputLog() << "Opening: " << filepath << "\n";
gz_file = gzopen(filepath.c_str(), "r");
if (gz_file == Z_NULL) {
std::cout << "Error opening FASTQ " << filepath << ": " << std::strerror(errno) << std::endl;
throw std::runtime_error("Error opening " + filepath + ": " + std::strerror(errno));
}
ks = new kstream<gzFile, FunctorZlib>(gz_file, gzr);
}
/**
* @brief
* Calls the file reader and places the fields from the file onto the FileRecord, the ID is set to the
* number of records seen so far.
* @param rec
* Input/Output parameter where the file fields will be stored.
* @return
* Whether the function will generate another object or not
*/
bool next_record(FileRecord& rec) {
int l;
if ( eof_flag) return false;
{
do {
l = (ks->readFastq(seq));
std::swap(rec.seq, seq.seq);
std::swap(rec.qual, seq.qual);
std::swap(rec.name, seq.name);
std::swap(rec.comment, seq.comment);
rec.id = numRecords;
numRecords++;
stats.totalLength += rec.seq.size();
} while (rec.seq.size() < params.min_length && l >= 0);
}
if (l<0) eof_flag=true;
else {
stats.filteredRecords++;
stats.filteredLength += rec.seq.size();
}
return (l >= 0);
}
ReaderStats getSummaryStatistics() {
stats.totalRecords=numRecords;
return stats;
}
~FastqReader() {
gzclose(gz_file);
delete ks;
}
private:
kstream<gzFile, FunctorZlib> *ks;
kseq seq;
uint64_t numRecords=0;
gzFile gz_file;
BZFILE * bz_File{};
int fq_File{};
FunctorZlib gzr;
FastxReaderParams params;
ReaderStats stats;
bool eof_flag;
};
#endif //SEQSORTER_FILEREADER_H
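For readers outside C++, the record loop in `next_record` above (read a record, then skip any record shorter than `min_length`) can be sketched in a few lines of Python; `read_fasta` below is an illustrative helper, not part of this header:

```python
import io

def read_fasta(handle, min_length=0):
    # Minimal FASTA reader mirroring the header's filtering behaviour:
    # records shorter than min_length are silently skipped.
    name, seq = None, []
    for line in handle:
        line = line.strip()
        if line.startswith('>'):
            if name is not None and len(''.join(seq)) >= min_length:
                yield name, ''.join(seq)
            name, seq = line[1:], []
        elif line:
            seq.append(line)
    if name is not None and len(''.join(seq)) >= min_length:
        yield name, ''.join(seq)

data = io.StringIO(">r1\nACGT\n>r2\nAC\n>r3\nACGTACGT\n")
records = list(read_fasta(data, min_length=4))
```

With `min_length=4`, record `r2` (length 2) is dropped, just as the `do ... while(rec.seq.size() < params.min_length ...)` loop above keeps reading past too-short records.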
|
cpp
|
Australia, a day before the clash, confirmed an unchanged Playing XI with skipper Pat Cummins admitting that Boland was unlucky to miss out.
Joe Root made his intentions clear to Australia, and skipper Pat Cummins had to adjust his field to stop the former England skipper from doing further damage.
Ashes 2023: Scott Boland bowled a fuller ball outside off stump and Root sent it over the third man fence for a six.
The former Pakistan cricketer had a clear answer when asked about Shubman Gill's dismissal in WTC Final.
India had a forgettable outing with the bat on Sunday as they were bundled out for 234 in the chase of 444 against Australia in the World Test Championship final.
Virat Kohli's Instagram story just hours after India's WTC Final loss has left several fans quite confused.
After the match ended, India skipper Rohit Sharma opened up on the controversial dismissal of opening batter Shubman Gill off Scott Boland's delivery on Day 4.
Rohit Sharma gave an honest answer on how India ended up losing the WTC Final against Australia.
|
english
|
Oracle spokeswoman Deborah Hellinger said she was confirming remarks made by US Treasury Secretary Steven Mnuchin, who told CNBC on Monday that TikTok’s parent company, ByteDance, submitted its proposal to the US government for approval.
“We did get a proposal over the weekend that includes Oracle as the trusted technology partner with Oracle making many representations for national security issues,” Mnuchin said.
Mnuchin said there’s also a commitment to restructure TikTok’s global operations as a U.S.-headquartered company with 20,000 new jobs.
President Donald Trump’s administration has threatened to ban TikTok by Sept. 20 and ordered owner ByteDance to sell its U.S. business, claiming national security risks due to its Chinese ownership. The government worries about user data being funneled to Chinese authorities. TikTok denies it is a national security risk and is suing to stop the administration from enacting the threatened ban.
Much remains unclear about the proposed deal with Oracle, which is pointedly not referring to it as a sale or acquisition.
Any deal must still be reviewed by the Committee on Foreign Investment in the United States, known as CFIUS, a U.S. government group chaired by the Treasury Secretary that studies mergers for national security reasons. Mnuchin said he expects the group to review the proposal this week and later make a recommendation to the president.
The president can approve or deny a transaction recommended by the panel, though Trump has already voiced support for Oracle as a “great company” that could handle the acquisition.
Proposals to acquire TikTok’s U.S. business raised questions among outside observers about how it would be split from the rest of TikTok’s social media platform, which is popular worldwide. ByteDance also owns a similar video app, Douyin, for the Chinese market.
Walmart, which had planned to partner with Microsoft on the acquisition, said Sunday it “continues to have an interest in a TikTok investment” and is talking about it with ByteDance and other parties.
TikTok, which says it has 100 million U.S. users and about 700 million globally, is known for its fun, goofy videos of dancing, lip-syncing, pranks and jokes. It’s recently become home to more political content such as the comedian Sarah Cooper, who drew a large audience by lip-syncing Trump’s often-disjointed statements from public appearances.
But the app has also raised concerns because of its Chinese ownership. The White House has cracked down on a range of Chinese businesses, including telecom equipment makers Huawei and ZTE and messaging app WeChat, over worries that they would enable Chinese authorities to access U.S. user data. Republican and Democratic lawmakers have raised concerns about censorship and children’s privacy.
TikTok denies that it has shared user data with the Chinese government or that it would do so if asked. The company says it has not censored videos at the request of Chinese authorities and insists it is not a national-security threat.
TikTok has sued to stop the ban, but not the sale order. The negotiations have been complicated by several factors, including Trump’s repeated demands that the U.S. government should get a “cut” of any deal, a stipulation and role for the president that experts say is unprecedented.
In addition, the Chinese government in late August unveiled new regulations that restrict exports of technology, likely including the artificial intelligence system TikTok uses to choose which videos to spool up to its users. That means ByteDance would have to obtain a license from China to export such technology to a foreign company.
“The Chinese government has implied it may block export of TikTok’s AI systems, so that might complicate a direct sale,” said Tiffany Li, a visiting professor at the Boston University School of Law.
She said that TikTok’s AI-backed video recommendation system is one of the app’s competitive advantages.
Whether the Oracle-TikTok deal will allow the sidestepping of Chinese export restrictions depends on which entity retains control of TikTok in the U.S., said Paul Haswell, a Hong Kong-based partner at law firm Pinsent Masons.
Both Microsoft and Oracle are known more for their business software offerings than for those intended for consumers.
Oracle primarily makes database software. It competes with tech giants such as Microsoft and Amazon that provide cloud services as well as business-software specialists like Salesforce.
Some analysts see Oracle’s interest in a consumer business as misguided. Oracle should focus on enterprise-market acquisitions and not invest in a consumer app like TikTok that doesn’t fit with the rest of its business, said Jefferies analyst Brent Thill, who compares the idea to Delta Airlines buying a motorcycle company. “It doesn’t make any sense,” he said.
Oracle co-founder Larry Ellison is unusual among tech executives for his public support of Trump, hosting a fundraiser for him in February at his Rancho Mirage, California, estate. The company also hired a former top aide to Vice President Mike Pence; its CEO, Safra Catz, also served on Trump’s transition team.
The president said on Aug. 18 that Oracle was “a great company” that “could handle” buying TikTok. He declined to state his preference between Oracle and Microsoft as buyers.
|
english
|
# Copyright 2008-2012 Nokia Siemens Networks Oyj
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def get_core_plugins():
from robotide.run import RunAnything
from robotide.recentfiles import RecentFilesPlugin
from robotide.ui.preview import PreviewPlugin
from robotide.ui.keywordsearch import KeywordSearch
from robotide.editor import EditorPlugin
from robotide.editor.texteditor import TextEditorPlugin
from robotide.log import LogPlugin
from robotide.searchtests.searchtests import TestSearchPlugin
from robotide.spec.specimporter import SpecImporterPlugin
return [RunAnything, RecentFilesPlugin, PreviewPlugin, SpecImporterPlugin,
EditorPlugin, TextEditorPlugin, KeywordSearch, LogPlugin, TestSearchPlugin]
|
python
|
All hell broke loose after Richa Chadha reacted to Lt General Dwivedi's statement about the Indian Army being ready to execute any action needed to take back Pakistan Occupied Kashmir. From celebrities to politicians, many condemned the actress for her tweet and a section of the internet alleged that she disrespected the Indian army with her tweet.
Veteran actor Prakash Raj has now reacted to Akshay Kumar's tweet and criticized him for reacting to Richa Chadha's tweet. He wrote, ''Didn’t expect this from you @akshaykumar..having said that @RichaChadha is more relevant to our country than you sir. #justasking''. Actress Swara Bhaskar also came in support of Chadha as she tweeted, ''strength and love to you!''.
Although it is unclear what Prakash Raj is alluding to in his last statement, the actor seems to be pointing at Akshay Kumar's much-discussed Canadian citizenship. The veteran actor is not the only one who seems to think so. A section of the internet started trending 'Canadian Kumar' after Akshay Kumar reacted to Richa Chadha's tweet.
Moreover, Anupam Kher also took to his Twitter to slam Richa Chadha. He, just like Akshay Kumar, shared a screenshot of Richa Chadha's tweet and wrote, ''Trying to become popular among some people by doing evil to the country is the work of cowards and small people. And putting the honor of the army at stake.... What can be more shameful than this.'' It was rather questionable how the seasoned actors used the same screenshot to criticize Richa Chadha.
ALSO SEE: After Vivek Agnihotri, Akshay Kumar Slams Richa Chadha's Galwan Tweet; Says 'Hurts To See This'
|
english
|
Confusion over US secretary of state’s call to Pakistan PM; and a dollar garland for minister.
When an army chief met a prince; and a railway minister accused of assaulting an elderly woman.
Pakistan government can now track overseas Pakistanis’ accounts, and Sharif’s mother defends her son.
Bilawal Bhutto’s first speech in National Assembly widely appreciated; 18-year-old completes toughest horse race.
Here’s what’s happening across the border: Pakistan pays tribute to Atal Bihari Vajpayee, a bureaucratic reshuffle in the offing.
Here’s what’s happening across the border: Sidhu gets a visa to attend Imran’s swearing-in; Pakistani film cast appeals for Indo-Pak peace.
Former PM Nawaz Sharif celebrates I-Day in jail; Pakistani band ‘Junoon’ reunites after 13 years.
Here's what's happening across the border: Six Mercedes Maybachs delivered to Prime Minister's House; Sikh community celebrates Independence Day.
Launched in 2017 as Xi’s pet urban project, Xiong’an has faced delays and limited progress despite the talk of turning the area into the next Shenzhen or Shanghai.
The government had in February estimated GDP growth for 2022-23 to be 7%. The higher-than-expected actual performance is largely due to a strong cross-sector showing in Q4.
Transfer of Technology (ToT) for jet engines was the main thrust of National Security Advisor Ajit Doval's talks with his American counterpart Jake Sullivan in February.
Copyright © 2023 Printline Media Pvt. Ltd. All rights reserved.
|
english
|
Washington, D.C., March 30, 2020 — Swedish authorities should make every effort to locate missing journalist Sajid Hussain Baloch and ensure his safety, the Committee to Protect Journalists said today.
Pakistani Prime Minister Nawaz Sharif made a series of commitments to safeguard press freedom during a meeting with a CPJ delegation last week. Among them was a pledge to speak out in support of media freedom and against attacks on journalists, particularly in high-conflict areas like Baluchistan.
|
english
|
import collections
import os
import dvc.prompt as prompt
import dvc.logger as logger
from dvc.exceptions import (DvcException,
MoveNotDataSourceError,
NotDvcProjectError)
class InitError(DvcException):
def __init__(self, msg):
super(InitError, self).__init__(msg)
class ReproductionError(DvcException):
def __init__(self, dvc_file_name, ex):
self.path = dvc_file_name
msg = "failed to reproduce '{}'".format(dvc_file_name)
super(ReproductionError, self).__init__(msg, cause=ex)
class Project(object):
DVC_DIR = '.dvc'
def __init__(self, root_dir=None):
from dvc.config import Config
from dvc.state import State
from dvc.lock import Lock
from dvc.scm import SCM
from dvc.cache import Cache
from dvc.data_cloud import DataCloud
from dvc.updater import Updater
root_dir = self.find_root(root_dir)
self.root_dir = os.path.abspath(os.path.realpath(root_dir))
self.dvc_dir = os.path.join(self.root_dir, self.DVC_DIR)
self.config = Config(self.dvc_dir)
self.scm = SCM(self.root_dir, project=self)
self.lock = Lock(self.dvc_dir)
# NOTE: storing state and link_state in the repository itself to avoid
# any possible state corruption in 'shared cache dir' scenario.
self.state = State(self, self.config.config)
core = self.config.config[Config.SECTION_CORE]
logger.set_level(core.get(Config.SECTION_CORE_LOGLEVEL))
self.cache = Cache(self)
self.cloud = DataCloud(self, config=self.config.config)
self.updater = Updater(self.dvc_dir)
self._files_to_git_add = []
self._ignore()
self.updater.check()
def __repr__(self):
return "Project: '{root_dir}'".format(root_dir=self.root_dir)
@staticmethod
def find_root(root=None):
if root is None:
root = os.getcwd()
else:
root = os.path.abspath(os.path.realpath(root))
while True:
dvc_dir = os.path.join(root, Project.DVC_DIR)
if os.path.isdir(dvc_dir):
return root
if os.path.ismount(root):
break
root = os.path.dirname(root)
raise NotDvcProjectError(root)
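`find_root` walks upward from the starting directory until it finds a directory containing `.dvc`, with a mount point bounding the search. The same walk as a self-contained sketch (a standalone function under a hypothetical error type, not the static method itself):

```python
import os
import tempfile

def find_root(start):
    # Walk upward from `start` until a directory containing '.dvc'
    # is found; a mount point bounds the search, as in Project.find_root.
    root = os.path.abspath(os.path.realpath(start))
    while True:
        if os.path.isdir(os.path.join(root, '.dvc')):
            return root
        if os.path.ismount(root):
            raise RuntimeError("not inside a DVC project: %s" % start)
        root = os.path.dirname(root)

# Demo: create tmp/.dvc and start the search two levels below it.
with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, '.dvc'))
    nested = os.path.join(tmp, 'a', 'b')
    os.makedirs(nested)
    expected = os.path.abspath(os.path.realpath(tmp))
    found = find_root(nested)
```

Starting from any subdirectory of the project resolves to the same root, which is why `Project(root_dir=None)` works from anywhere inside the repository.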
@staticmethod
def find_dvc_dir(root=None):
root_dir = Project.find_root(root)
return os.path.join(root_dir, Project.DVC_DIR)
def _remind_to_git_add(self):
if not self._files_to_git_add:
return
logger.info('\n'
'To track the changes with git run:\n'
'\n'
'\tgit add {files}'
.format(files=' '.join(self._files_to_git_add)))
@staticmethod
def init(root_dir=os.curdir, no_scm=False, force=False):
"""
Creates an empty project on the given directory -- basically a
`.dvc` directory with subdirectories for configuration and cache.
It should be tracked by a SCM or use the `--no-scm` flag.
If the given directory is not empty, you must use the `--force`
flag to override it.
Args:
root_dir: Path to project's root directory.
Returns:
Project instance.
Raises:
KeyError: Raises an exception.
"""
import shutil
from dvc.scm import SCM, Base
from dvc.config import Config
root_dir = os.path.abspath(root_dir)
dvc_dir = os.path.join(root_dir, Project.DVC_DIR)
scm = SCM(root_dir)
if type(scm) == Base and not no_scm:
raise InitError(
"{project} is not tracked by any supported scm tool"
" (e.g. git). Use '--no-scm' if you don't want to use any scm."
.format(project=root_dir)
)
if os.path.isdir(dvc_dir):
if not force:
raise InitError(
"'{project}' exists. Use '-f' to force."
.format(project=os.path.relpath(dvc_dir))
)
shutil.rmtree(dvc_dir)
os.mkdir(dvc_dir)
config = Config.init(dvc_dir)
proj = Project(root_dir)
scm.add([config.config_file])
if scm.ignore_file():
scm.add([os.path.join(dvc_dir, scm.ignore_file())])
logger.info('\nYou can now commit the changes to git.\n')
proj._welcome_message()
return proj
def destroy(self):
import shutil
for stage in self.stages():
stage.remove()
shutil.rmtree(self.dvc_dir)
def _ignore(self):
flist = [
self.state.state_file,
self.lock.lock_file,
self.config.config_local_file,
self.updater.updater_file,
self.updater.lock.lock_file,
] + self.state.temp_files
if self.cache.local.cache_dir.startswith(self.root_dir):
flist += [self.cache.local.cache_dir]
self.scm.ignore_list(flist)
def install(self):
self.scm.install()
def _check_cwd_specified_as_output(self, cwd, stages):
from dvc.exceptions import WorkingDirectoryAsOutputError
cwd_path = os.path.abspath(os.path.normpath(cwd))
for stage in stages:
for output in stage.outs:
if os.path.isdir(output.path) and output.path == cwd_path:
raise WorkingDirectoryAsOutputError(cwd, stage.relpath)
def _check_output_duplication(self, outs, stages):
from dvc.exceptions import OutputDuplicationError
for stage in stages:
for o in stage.outs:
for out in outs:
if o.path == out.path and o.stage.path != out.stage.path:
stages = [o.stage.relpath, out.stage.relpath]
raise OutputDuplicationError(o.path, stages)
def add(self, fname, recursive=False):
from dvc.stage import Stage
fnames = []
if recursive and os.path.isdir(fname):
fnames = []
for root, dirs, files in os.walk(fname):
for f in files:
path = os.path.join(root, f)
if Stage.is_stage_file(path):
continue
if os.path.basename(path) == self.scm.ignore_file():
continue
if self.scm.is_tracked(path):
continue
fnames.append(path)
else:
fnames = [fname]
all_stages = self.stages()
stages = []
self._files_to_git_add = []
with self.state:
for f in fnames:
stage = Stage.create(project=self,
outs=[f],
add=True)
if stage is None:
stages.append(stage)
continue
self._check_output_duplication(stage.outs, all_stages)
stage.save()
stage.dump()
stages.append(stage)
self._remind_to_git_add()
return stages
def remove(self, target, outs_only=False):
from dvc.stage import Stage
stage = Stage.load(self, target)
if outs_only:
stage.remove_outs()
else:
stage.remove()
return stage
def lock_stage(self, target, unlock=False):
from dvc.stage import Stage
stage = Stage.load(self, target)
stage.locked = not unlock
stage.dump()
return stage
def move(self, from_path, to_path):
"""
Renames an output file and modifies the stage associated
to reflect the change on the pipeline.
If the output has the same name as its stage, it would
also rename the corresponding stage file.
E.g.
Having: (hello, hello.dvc)
$ dvc move hello greetings
Result: (greetings, greetings.dvc)
It only works with outputs generated by `add` or `import`,
also known as data sources.
"""
import dvc.output as Output
from dvc.stage import Stage
from_out = Output.loads_from(Stage(self, cwd=os.curdir),
[from_path])[0]
to_path = self._expand_target_path(from_path, to_path)
try:
stage, out = next((stage, out)
for stage in self.stages()
for out in stage.outs
if from_out.path == out.path)
except StopIteration:
raise DvcException("unable to find stage file with output '{path}'"
.format(path=from_path))
if not stage.is_data_source:
raise MoveNotDataSourceError(stage.relpath)
stage_name = os.path.splitext(os.path.basename(stage.path))[0]
from_name = os.path.basename(from_out.path)
if stage_name == from_name:
os.unlink(stage.path)
stage.path = os.path.join(
os.path.dirname(to_path),
os.path.basename(to_path) + Stage.STAGE_FILE_SUFFIX
)
stage.cwd = os.path.join(self.root_dir, os.path.dirname(to_path))
to_out = Output.loads_from(stage,
[os.path.basename(to_path)],
out.cache,
out.metric)[0]
with self.state:
out.move(to_out)
stage.dump()
self._remind_to_git_add()
def _unprotect_file(self, path):
import stat
import uuid
from dvc.system import System
from dvc.utils import copyfile, move, remove
if System.is_symlink(path) or System.is_hardlink(path):
logger.debug("Unprotecting '{}'".format(path))
tmp = os.path.join(os.path.dirname(path), '.' + str(uuid.uuid4()))
move(path, tmp)
copyfile(tmp, path)
remove(tmp)
else:
logger.debug("Skipping copying for '{}', since it is not "
"a symlink or a hardlink.".format(path))
os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)
def _unprotect_dir(self, path):
for root, dirs, files in os.walk(path):
for f in files:
path = os.path.join(root, f)
self._unprotect_file(path)
def unprotect(self, path):
if not os.path.exists(path):
raise DvcException(
"can't unprotect non-existing data '{}'"
.format(path)
)
if os.path.isdir(path):
self._unprotect_dir(path)
else:
self._unprotect_file(path)
def run(self,
cmd=None,
deps=[],
outs=[],
outs_no_cache=[],
metrics_no_cache=[],
fname=None,
cwd=os.curdir,
no_exec=False,
overwrite=False,
ignore_build_cache=False,
remove_outs=False):
from dvc.stage import Stage
with self.state:
stage = Stage.create(project=self,
fname=fname,
cmd=cmd,
cwd=cwd,
outs=outs,
outs_no_cache=outs_no_cache,
metrics_no_cache=metrics_no_cache,
deps=deps,
overwrite=overwrite,
ignore_build_cache=ignore_build_cache,
remove_outs=remove_outs)
if stage is None:
return None
all_stages = self.stages()
self._check_cwd_specified_as_output(cwd, all_stages)
self._check_output_duplication(stage.outs, all_stages)
self._files_to_git_add = []
with self.state:
if not no_exec:
stage.run()
stage.dump()
self._remind_to_git_add()
return stage
def imp(self, url, out):
from dvc.stage import Stage
stage = Stage.create(project=self,
cmd=None,
deps=[url],
outs=[out])
if stage is None:
return None
self._check_output_duplication(stage.outs, self.stages())
self._files_to_git_add = []
with self.state:
stage.run()
stage.dump()
self._remind_to_git_add()
return stage
def _reproduce_stage(self, stages, node, force, dry, interactive):
stage = stages[node]
if stage.locked:
logger.warning(
"DVC file '{path}' is locked. Its dependencies are"
" not going to be reproduced."
.format(path=stage.relpath)
)
stage = stage.reproduce(force=force, dry=dry, interactive=interactive)
if not stage:
return []
if not dry:
stage.dump()
return [stage]
def reproduce(self,
target=None,
recursive=True,
force=False,
dry=False,
interactive=False,
pipeline=False,
all_pipelines=False,
ignore_build_cache=False):
from dvc.stage import Stage
if not target and not all_pipelines:
raise ValueError()
if not interactive:
config = self.config
core = config.config[config.SECTION_CORE]
interactive = core.get(config.SECTION_CORE_INTERACTIVE, False)
targets = []
if pipeline or all_pipelines:
if pipeline:
stage = Stage.load(self, target)
node = os.path.relpath(stage.path, self.root_dir)
pipelines = [self._get_pipeline(node)]
else:
pipelines = self.pipelines()
for G in pipelines:
for node in G.nodes():
if G.in_degree(node) == 0:
targets.append(os.path.join(self.root_dir, node))
else:
targets.append(target)
self._files_to_git_add = []
ret = []
with self.state:
for target in targets:
stages = self._reproduce(target,
recursive=recursive,
force=force,
dry=dry,
interactive=interactive,
ignore_build_cache=ignore_build_cache)
ret.extend(stages)
self._remind_to_git_add()
return ret
def _reproduce(self,
target,
recursive=True,
force=False,
dry=False,
interactive=False,
ignore_build_cache=False):
import networkx as nx
from dvc.stage import Stage
stage = Stage.load(self, target)
G = self.graph()[1]
stages = nx.get_node_attributes(G, 'stage')
node = os.path.relpath(stage.path, self.root_dir)
if recursive:
ret = self._reproduce_stages(G,
stages,
node,
force,
dry,
interactive,
ignore_build_cache)
else:
ret = self._reproduce_stage(stages,
node,
force,
dry,
interactive)
return ret
def _reproduce_stages(self,
G,
stages,
node,
force,
dry,
interactive,
ignore_build_cache):
import networkx as nx
result = []
for n in nx.dfs_postorder_nodes(G, node):
try:
ret = self._reproduce_stage(stages,
n,
force,
dry,
interactive)
if len(ret) == 0 and ignore_build_cache:
# NOTE: we are walking our pipeline from the top to the
# bottom. If one stage is changed, it will be reproduced,
# which tells us that we should force reproducing all of
# the other stages down below, even if their direct
# dependencies didn't change.
force = True
result += ret
except Exception as ex:
raise ReproductionError(stages[n].relpath, ex)
return result
def _cleanup_unused_links(self, all_stages):
used = []
for stage in all_stages:
for out in stage.outs:
used.append(out.path)
self.state.remove_unused_links(used)
def checkout(self,
target=None,
with_deps=False,
force=False,
recursive=False):
if target and not recursive:
all_stages = self.active_stages()
stages = self._collect(target, with_deps=with_deps)
else:
all_stages = self.active_stages(target)
stages = all_stages
with self.state:
self._cleanup_unused_links(all_stages)
for stage in stages:
if stage.locked:
logger.warning(
"DVC file '{path}' is locked. Its dependencies are"
" not going to be checked out."
.format(path=stage.relpath)
)
stage.checkout(force=force)
def _get_pipeline(self, node):
pipelines = list(filter(lambda g: node in g.nodes(),
self.pipelines()))
assert len(pipelines) == 1
return pipelines[0]
def _collect(self, target, with_deps=False):
import networkx as nx
from dvc.stage import Stage
stage = Stage.load(self, target)
if not with_deps:
return [stage]
node = os.path.relpath(stage.path, self.root_dir)
G = self._get_pipeline(node)
stages = nx.get_node_attributes(G, 'stage')
ret = [stage]
for n in nx.dfs_postorder_nodes(G, node):
ret.append(stages[n])
return ret
def _collect_dir_cache(self,
out,
branch=None,
remote=None,
force=False,
jobs=None):
info = out.dumpd()
ret = [info]
r = out.remote
md5 = info[r.PARAM_CHECKSUM]
if self.cache.local.changed_cache_file(md5):
try:
self.cloud.pull(ret,
jobs=jobs,
remote=remote,
show_checksums=False)
except DvcException as exc:
msg = "Failed to pull cache for '{}': {}"
logger.debug(msg.format(out, exc))
if self.cache.local.changed_cache_file(md5):
msg = "Missing cache for directory '{}'. " \
"Cache for files inside will be lost. " \
"Would you like to continue? Use '-f' to force. "
if not force and not prompt.confirm(msg):
raise DvcException(
"unable to fully collect used cache"
" without cache for directory '{}'"
.format(out)
)
else:
return ret
for i in self.cache.local.load_dir_cache(md5):
i['branch'] = branch
i[r.PARAM_PATH] = os.path.join(info[r.PARAM_PATH],
i[r.PARAM_RELPATH])
ret.append(i)
return ret
def _collect_used_cache(self,
out,
branch=None,
remote=None,
force=False,
jobs=None):
if not out.use_cache or not out.info:
if not out.info:
logger.warning("Output '{}'({}) is missing version "
"info. Cache for it will not be collected. "
"Use dvc repro to get your pipeline up to "
"date.".format(out, out.stage))
return []
info = out.dumpd()
info['branch'] = branch
ret = [info]
if out.scheme != 'local':
return ret
md5 = info[out.remote.PARAM_CHECKSUM]
cache = self.cache.local.get(md5)
if not out.remote.is_dir_cache(cache):
return ret
return self._collect_dir_cache(out,
branch=branch,
remote=remote,
force=force,
jobs=jobs)
def _used_cache(self,
target=None,
all_branches=False,
active=True,
with_deps=False,
all_tags=False,
remote=None,
force=False,
jobs=None,
recursive=False):
cache = {}
cache['local'] = []
cache['s3'] = []
cache['gs'] = []
cache['hdfs'] = []
cache['ssh'] = []
cache['azure'] = []
for branch in self.scm.brancher(all_branches=all_branches,
all_tags=all_tags):
if target:
if recursive:
stages = self.stages(target)
else:
stages = self._collect(target,
with_deps=with_deps)
elif active:
stages = self.active_stages()
else:
stages = self.stages()
for stage in stages:
if active and not target and stage.locked:
logger.warning(
"DVC file '{path}' is locked. Its dependencies are"
" not going to be pushed/pulled/fetched."
.format(path=stage.relpath)
)
for out in stage.outs:
scheme = out.path_info['scheme']
cache[scheme] += self._collect_used_cache(out,
branch=branch,
remote=remote,
force=force,
jobs=jobs)
return cache
@staticmethod
def merge_cache_lists(clists):
merged_cache = collections.defaultdict(list)
for cache_list in clists:
for scheme, cache in cache_list.items():
for item in cache:
if item not in merged_cache[scheme]:
merged_cache[scheme].append(item)
return merged_cache
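Extracted as a standalone sketch for clarity, the merge above deduplicates per-scheme cache entries across projects while preserving first-seen order:

```python
import collections

def merge_cache_lists(clists):
    # Deduplicate cache entries per scheme across several projects,
    # keeping first-seen order, as in Project.merge_cache_lists.
    merged = collections.defaultdict(list)
    for cache_list in clists:
        for scheme, cache in cache_list.items():
            for item in cache:
                if item not in merged[scheme]:
                    merged[scheme].append(item)
    return merged

a = {'local': [{'md5': '111'}, {'md5': '222'}]}
b = {'local': [{'md5': '222'}], 's3': [{'md5': '333'}]}
merged = merge_cache_lists([a, b])
```

The duplicate `222` entry appears once in the result; note the membership test is O(n) per item, which is acceptable for the modest cache lists `gc` collects.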
@staticmethod
def load_all_used_cache(projects,
target=None,
all_branches=False,
active=True,
with_deps=False,
all_tags=False,
remote=None,
force=False,
jobs=None):
clists = []
for project in projects:
with project.state:
project_clist = project._used_cache(target=None,
all_branches=all_branches,
active=False,
with_deps=with_deps,
all_tags=all_tags,
remote=remote,
force=force,
jobs=jobs)
clists.append(project_clist)
return clists
def _do_gc(self, typ, func, clist):
removed = func(clist)
if not removed:
logger.info("No unused {} cache to remove.".format(typ))
def gc(self,
all_branches=False,
cloud=False,
remote=None,
with_deps=False,
all_tags=False,
force=False,
jobs=None,
projects=None):
all_projects = [self]
if projects:
all_projects.extend(Project(path) for path in projects)
all_clists = Project.load_all_used_cache(all_projects,
target=None,
all_branches=all_branches,
active=False,
with_deps=with_deps,
all_tags=all_tags,
remote=remote,
force=force,
jobs=jobs)
if len(all_clists) > 1:
clist = Project.merge_cache_lists(all_clists)
else:
clist = all_clists[0]
with self.state:
self._do_gc('local', self.cache.local.gc, clist)
if self.cache.s3:
self._do_gc('s3', self.cache.s3.gc, clist)
if self.cache.gs:
self._do_gc('gs', self.cache.gs.gc, clist)
if self.cache.ssh:
self._do_gc('ssh', self.cache.ssh.gc, clist)
if self.cache.hdfs:
self._do_gc('hdfs', self.cache.hdfs.gc, clist)
if self.cache.azure:
self._do_gc('azure', self.cache.azure.gc, clist)
if cloud:
self._do_gc('remote', self.cloud._get_cloud(remote,
'gc -c').gc, clist)
def push(self,
target=None,
jobs=1,
remote=None,
all_branches=False,
show_checksums=False,
with_deps=False,
all_tags=False,
recursive=False):
with self.state:
used = self._used_cache(target,
all_branches=all_branches,
all_tags=all_tags,
with_deps=with_deps,
force=True,
remote=remote,
jobs=jobs,
recursive=recursive)['local']
self.cloud.push(used,
jobs,
remote=remote,
show_checksums=show_checksums)
def fetch(self,
target=None,
jobs=1,
remote=None,
all_branches=False,
show_checksums=False,
with_deps=False,
all_tags=False,
recursive=False):
with self.state:
used = self._used_cache(target,
all_branches=all_branches,
all_tags=all_tags,
with_deps=with_deps,
force=True,
remote=remote,
jobs=jobs,
recursive=recursive)['local']
self.cloud.pull(used,
jobs,
remote=remote,
show_checksums=show_checksums)
def pull(self,
target=None,
jobs=1,
remote=None,
all_branches=False,
show_checksums=False,
with_deps=False,
all_tags=False,
force=False,
recursive=False):
self.fetch(target,
jobs,
remote=remote,
all_branches=all_branches,
all_tags=all_tags,
show_checksums=show_checksums,
with_deps=with_deps,
recursive=recursive)
self.checkout(target=target,
with_deps=with_deps,
force=force,
recursive=recursive)
def _local_status(self, target=None, with_deps=False):
status = {}
if target:
stages = self._collect(target,
with_deps=with_deps)
else:
stages = self.active_stages()
for stage in stages:
if stage.locked:
logger.warning(
"DVC file '{path}' is locked. Its dependencies are"
" not going to be shown in the status output."
.format(path=stage.relpath)
)
status.update(stage.status())
return status
def _cloud_status(self,
target=None,
jobs=1,
remote=None,
show_checksums=False,
all_branches=False,
with_deps=False,
all_tags=False):
import dvc.remote.base as cloud
used = self._used_cache(target,
all_branches=all_branches,
all_tags=all_tags,
with_deps=with_deps,
force=True,
remote=remote,
jobs=jobs)['local']
ret = {}
status_info = self.cloud.status(used,
jobs,
remote=remote,
show_checksums=show_checksums)
for md5, info in status_info.items():
name = info['name']
status = info['status']
if status == cloud.STATUS_OK:
continue
prefix_map = {
cloud.STATUS_DELETED: 'deleted',
cloud.STATUS_NEW: 'new',
}
ret[name] = prefix_map[status]
return ret
def status(self,
target=None,
jobs=1,
cloud=False,
remote=None,
show_checksums=False,
all_branches=False,
with_deps=False,
all_tags=False):
with self.state:
if cloud:
return self._cloud_status(target,
jobs,
remote=remote,
show_checksums=show_checksums,
all_branches=all_branches,
with_deps=with_deps,
all_tags=all_tags)
return self._local_status(target,
with_deps=with_deps)
def _read_metric_json(self, fd, json_path):
import json
from jsonpath_rw import parse
parser = parse(json_path)
return [x.value for x in parser.find(json.load(fd))]
def _do_read_metric_xsv(self, reader, row, col):
if col is not None and row is not None:
return [reader[row][col]]
elif col is not None:
return [r[col] for r in reader]
elif row is not None:
return reader[row]
return None
def _read_metric_hxsv(self, fd, hxsv_path, delimiter):
import csv
col, row = hxsv_path.split(',')
row = int(row)
reader = list(csv.DictReader(fd, delimiter=delimiter))
return self._do_read_metric_xsv(reader, row, col)
def _read_metric_xsv(self, fd, xsv_path, delimiter):
import csv
col, row = xsv_path.split(',')
row = int(row)
col = int(col)
reader = list(csv.reader(fd, delimiter=delimiter))
return self._do_read_metric_xsv(reader, row, col)
def _read_metric(self, path, typ=None, xpath=None):
ret = None
if not os.path.exists(path):
return ret
try:
with open(path, 'r') as fd:
if typ == 'json':
ret = self._read_metric_json(fd, xpath)
elif typ == 'csv':
ret = self._read_metric_xsv(fd, xpath, ',')
elif typ == 'tsv':
ret = self._read_metric_xsv(fd, xpath, '\t')
elif typ == 'hcsv':
ret = self._read_metric_hxsv(fd, xpath, ',')
elif typ == 'htsv':
ret = self._read_metric_hxsv(fd, xpath, '\t')
else:
ret = fd.read()
except Exception:
logger.error("unable to read metric in '{}'".format(path))
return ret
def _find_output_by_path(self, path, outs=None):
from dvc.exceptions import OutputDuplicationError
if not outs:
astages = self.active_stages()
outs = [out for stage in astages for out in stage.outs]
abs_path = os.path.abspath(path)
matched = [out for out in outs if out.path == abs_path]
stages = [out.stage.relpath for out in matched]
if len(stages) > 1:
raise OutputDuplicationError(path, stages)
return matched[0] if matched else None
def metrics_show(self,
path=None,
typ=None,
xpath=None,
all_branches=False,
all_tags=False):
res = {}
for branch in self.scm.brancher(all_branches=all_branches,
all_tags=all_tags):
astages = self.active_stages()
outs = [out for stage in astages for out in stage.outs]
if path:
out = self._find_output_by_path(path, outs=outs)
stage = out.stage.path if out else None
if out and all([out.metric,
not typ,
isinstance(out.metric, dict)]):
entries = [(path,
out.metric.get(out.PARAM_METRIC_TYPE, None),
out.metric.get(out.PARAM_METRIC_XPATH, None))]
else:
entries = [(path, typ, xpath)]
else:
metrics = filter(lambda o: o.metric, outs)
stage = None
entries = []
for o in metrics:
if not typ and isinstance(o.metric, dict):
t = o.metric.get(o.PARAM_METRIC_TYPE, typ)
x = o.metric.get(o.PARAM_METRIC_XPATH, xpath)
else:
t = typ
x = xpath
entries.append((o.path, t, x))
for fname, t, x in entries:
if stage:
self.checkout(stage, force=True)
rel = os.path.relpath(fname)
metric = self._read_metric(fname,
typ=t,
xpath=x)
if not metric:
continue
if branch not in res:
res[branch] = {}
res[branch][rel] = metric
for branch, val in res.items():
if all_branches or all_tags:
logger.info('{}:'.format(branch))
for fname, metric in val.items():
logger.info('\t{}: {}'.format(fname, metric))
if res:
return res
if path:
msg = "file '{}' does not exist".format(path)
else:
msg = (
"no metric files in this repository."
" use 'dvc metrics add' to add a metric file to track."
)
raise DvcException(msg)
def _metrics_modify(self, path, typ=None, xpath=None, delete=False):
out = self._find_output_by_path(path)
if not out:
msg = "unable to find file '{}' in the pipeline".format(path)
raise DvcException(msg)
if out.scheme != 'local':
msg = "output '{}' scheme '{}' is not supported for metrics"
raise DvcException(msg.format(out.path, out.path_info['scheme']))
if out.use_cache:
msg = "cached output '{}' is not supported for metrics"
raise DvcException(msg.format(out.rel_path))
if typ:
if not isinstance(out.metric, dict):
out.metric = {}
out.metric[out.PARAM_METRIC_TYPE] = typ
if xpath:
if not isinstance(out.metric, dict):
out.metric = {}
out.metric[out.PARAM_METRIC_XPATH] = xpath
if delete:
out.metric = None
out._verify_metric()
out.stage.dump()
def metrics_modify(self, path=None, typ=None, xpath=None):
self._metrics_modify(path, typ, xpath)
def metrics_add(self, path, typ=None, xpath=None):
if not typ:
typ = 'raw'
self._metrics_modify(path, typ, xpath)
def metrics_remove(self, path):
self._metrics_modify(path, delete=True)
def graph(self, from_directory=None):
import networkx as nx
from dvc.exceptions import OutputDuplicationError
G = nx.DiGraph()
G_active = nx.DiGraph()
stages = self.stages(from_directory)
outs = []
outs_by_path = {}
for stage in stages:
for o in stage.outs:
existing = outs_by_path.get(o.path, None)
if existing is not None:
stages = [o.stage.relpath, existing.stage.relpath]
raise OutputDuplicationError(o.path, stages)
outs.append(o)
outs_by_path[o.path] = o
# collect the whole DAG
for stage in stages:
node = os.path.relpath(stage.path, self.root_dir)
G.add_node(node, stage=stage)
G_active.add_node(node, stage=stage)
for dep in stage.deps:
for out in outs:
if out.path != dep.path \
and not dep.path.startswith(out.path + out.sep) \
and not out.path.startswith(dep.path + dep.sep):
continue
dep_stage = out.stage
dep_node = os.path.relpath(dep_stage.path, self.root_dir)
G.add_node(dep_node, stage=dep_stage)
G.add_edge(node, dep_node)
if not stage.locked:
G_active.add_node(dep_node, stage=dep_stage)
G_active.add_edge(node, dep_node)
return G, G_active
def pipelines(self, from_directory=None):
import networkx as nx
G, G_active = self.graph(from_directory)
return [
G.subgraph(c).copy()
for c in nx.weakly_connected_components(G)
]
def stages(self, from_directory=None):
"""
Walks down the root directory looking for Dvcfiles,
        skipping directories related to
        any SCM (e.g. `.git`), DVC itself (`.dvc`), or directories
        tracked by DVC (e.g. `dvc add data` would skip `data/`).
NOTE: For large projects, this could be an expensive
operation. Consider using some memoization.
"""
from dvc.stage import Stage
if not from_directory:
from_directory = self.root_dir
stages = []
outs = []
for root, dirs, files in os.walk(from_directory):
for fname in files:
path = os.path.join(root, fname)
if not Stage.is_stage_file(path):
continue
stage = Stage.load(self, path)
for out in stage.outs:
outs.append(out.path + out.sep)
stages.append(stage)
def filter_dirs(dname, root=root):
path = os.path.join(root, dname)
if path in (self.dvc_dir, self.scm.dir):
return False
for out in outs:
if path == os.path.normpath(out) or path.startswith(out):
return False
return True
dirs[:] = list(filter(filter_dirs, dirs))
return stages
def active_stages(self, from_directory=None):
import networkx as nx
stages = []
for G in self.pipelines(from_directory):
stages.extend(list(nx.get_node_attributes(G, 'stage').values()))
return stages
def _welcome_message(self):
import colorama
logger.box(
"DVC has enabled anonymous aggregate usage analytics.\n"
"Read the analytics documentation (and how to opt-out) here:\n"
"{blue}https://dvc.org/doc/user-guide/analytics{nc}"
.format(
blue=colorama.Fore.BLUE,
nc=colorama.Fore.RESET
),
border_color='red'
)
logger.info(
"{yellow}What's next?{nc}\n"
"{yellow}------------{nc}\n"
"- Check out the documentation: {blue}https://dvc.org/doc{nc}\n"
"- Get help and share ideas: {blue}https://dvc.org/chat{nc}\n"
"- Star us on GitHub: {blue}https://github.com/iterative/dvc{nc}"
.format(yellow=colorama.Fore.YELLOW,
blue=colorama.Fore.BLUE,
nc=colorama.Fore.RESET)
)
def _expand_target_path(self, from_path, to_path):
if os.path.isdir(to_path) and not os.path.isdir(from_path):
return os.path.join(to_path, os.path.basename(from_path))
return to_path
|
python
|
<filename>00.WorkShops/workshop-forms/src/app/register-form-reactive/register-form-reactive.component.spec.ts
import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { RegisterFormReactiveComponent } from './register-form-reactive.component';
describe('RegisterFormReactiveComponent', () => {
let component: RegisterFormReactiveComponent;
let fixture: ComponentFixture<RegisterFormReactiveComponent>;
beforeEach(async(() => {
TestBed.configureTestingModule({
declarations: [ RegisterFormReactiveComponent ]
})
.compileComponents();
}));
beforeEach(() => {
fixture = TestBed.createComponent(RegisterFormReactiveComponent);
component = fixture.componentInstance;
fixture.detectChanges();
});
it('should create', () => {
expect(component).toBeTruthy();
});
});
|
typescript
|
England bowling star Stuart Broad could not resist taking a sly dig at David Warner after the Australia opener won the Allan Border medal, Australian cricket’s highest individual honour. David Warner beat Steve Smith and Pat Cummins to claim his third Allan Border medal.
David Warner had a stunning World Cup campaign after returning to action following the completion of his one-year ban. He finished the World Cup as the second-highest run-scorer with 647 runs at an average of 71.88, just one run behind Indian opener Rohit Sharma, the tournament's leading scorer.
David Warner had then endured a disastrous campaign in the Ashes where he scored just 95 runs in five games. However, he bounced back in style in the home season. In six T20Is against Sri Lanka and Pakistan, the left-hander was dismissed just once and recorded a stunning average of 287 while also scoring his first T20I century.
He started the home Test season by scoring 154 against Pakistan at the Gabba before scoring a career-high 335 not out in the following Test at Adelaide Oval. Last month, he also scored an ODI ton in India.
Barring the Ashes, David Warner delivered consistently for Australia. However, it did not stop Stuart Broad from taking a dig at the southpaw. Stuart Broad was Warner’s nemesis in the series, dismissing the Australian on seven out of ten occasions. He removed Warner for three straight ducks across the Headingley and Old Trafford Tests during the series.
And so when David Warner won the medal, Stuart Broad took to Twitter to cheekily comment on an old tweet from ECB. In the tweet, the ECB had posted a clip of Broad dismissing Warner in the last Ashes.
“Why is this suddenly getting retweeted more today?! ?,” wrote Stuart Broad.
|
english
|
<reponame>iamjack996/Vue-SPA
var config = {
type: Phaser.AUTO,
parent: 'phaser-example',
width: 800,
height: 600,
pixelArt: true,
scene: {
preload: preload,
create: create
}
};
var game = new Phaser.Game(config);
function preload ()
{
this.load.image('poo', 'assets/sprites/poo.png');
this.load.spritesheet('mummy', 'assets/animations/mummy37x45.png', { frameWidth: 37, frameHeight: 45 });
}
function create ()
{
var mummyAnimation = this.anims.create({
key: 'walk',
frames: this.anims.generateFrameNumbers('mummy'),
frameRate: 16,
repeat: 0
});
var sprite = this.add.sprite(50, 300, 'mummy').setScale(4);
sprite.play('walk');
sprite.anims.setRepeat(7);
this.tweens.add({
targets: sprite,
x: 750,
duration: 8800,
ease: 'Linear'
});
sprite.on('animationrepeat-walk', function () {
var poop = this.add.image(sprite.x - 32, 300, 'poo').setScale(0.5);
this.tweens.add({
targets: poop,
props: {
x: {
value: '-=64', ease: 'Power1'
},
y: {
value: '+=50', ease: 'Bounce.easeOut'
}
},
duration: 750
});
}, this);
}
|
javascript
|
<filename>zadanie/templates/django_registration/registration_complete.html<gh_stars>0
{% extends "base.html" %}
{% block main %}
<h1>Almost done.</h1>
<p>You've just registered your account, but it is not activated yet.</p>
<p>We've sent you an email with a link you can click to confirm your registration and activate your account.</p>
<p>Please check your email.</p>
{% endblock %}
|
html
|
"""
_logging module (imdb package).
"""
import logging
LEVELS = {'debug': logging.DEBUG,
'info': logging.INFO,
'warn': logging.WARNING,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL}
imdbpyLogger = logging.getLogger('media_browser')
imdbpyStreamHandler = logging.StreamHandler()
imdbpyFormatter = logging.Formatter('%(asctime)s %(levelname)s [%(name)s]' \
' %(pathname)s:%(lineno)d: %(message)s')
imdbpyStreamHandler.setFormatter(imdbpyFormatter)
imdbpyLogger.addHandler(imdbpyStreamHandler)
def setLevel(level):
"""Set logging level for the main logger."""
level = level.lower().strip()
imdbpyLogger.setLevel(LEVELS.get(level, logging.NOTSET))
imdbpyLogger.log(imdbpyLogger.level, 'set logging threshold to "%s"',
logging.getLevelName(imdbpyLogger.level))
#imdbpyLogger.setLevel(logging.DEBUG)
# It can be an idea to have a single function to log and warn:
#import warnings
#def log_and_warn(msg, args=None, logger=None, level=None):
# """Log the message and issue a warning."""
# if logger is None:
# logger = imdbpyLogger
# if level is None:
# level = logging.WARNING
# if args is None:
# args = ()
# #warnings.warn(msg % args, stacklevel=0)
# logger.log(level, msg % args)
|
python
|
<gh_stars>0
"""Hex Grid, by <NAME> <EMAIL>
Displays a simple tessellation of a hexagon grid.
This and other games are available at https://nostarch.com/XX
Tags: tiny, beginner, artistic"""
__version__ = 0
# Set up the constants:
# (!) Try changing these values to other numbers:
X_REPEAT = 19 # How many times to tessellate horizontally.
Y_REPEAT = 12 # How many times to tessellate vertically.
for y in range(Y_REPEAT):
# Display the top half of the hexagon:
for x in range(X_REPEAT):
print(r'/ \_', end='')
print()
# Display the bottom half of the hexagon:
for x in range(X_REPEAT):
print(r'\_/ ', end='')
print()
|
python
|
/*---------------------------------------------------------------------------*
* Copyright (c) 2019 McAfee, LLC - All Rights Reserved. *
*---------------------------------------------------------------------------*/
package com.opendxl.databus.producer;
import com.opendxl.databus.producer.internal.ProducerDefaultConfiguration;
import org.junit.Assert;
import org.junit.Test;
public class ProducerDefaultConfigurationTest {
@Test
public void shouldReturnNullValue() {
String v = ProducerDefaultConfiguration.get(ProducerConfig.BATCH_SIZE_CONFIG);
Assert.assertNull(v);
}
@Test
public void shouldReturnAValidValue() {
String value = ProducerDefaultConfiguration.get(ProducerDefaultConfiguration.MAX_BLOCK_MS_CONFIG_KEY);
Assert.assertEquals(ProducerDefaultConfiguration.MAX_BLOCK_MS_CONFIG_DEFAULT_VALUE, value);
}
}
|
java
|
<gh_stars>0
/*reset*/
html,body,div,section,article,ul,ol,li{padding:0;margin:0;}
ul,ol,li{list-style:none;}
a,a:hover{text-decoration:none;}
body{height:100%;font-size:14px;line-height:1.2;font-family:'Arial','Helvetica','STHeiti','Microsoft YaHei';}
html,body{overflow:hidden;background-color: rgba(0,0,0,0.5);}
html{font-size: 16px;}
/* Page styles */
.wrapper{position:absolute;top:0;right:0;bottom:0;left:0;background:#fefefe;overflow:hidden;margin: 0 auto;}
.page{display:none;position:absolute;top:0;right:0;bottom:0;left:0;background:#fff;z-index:1;-webkit-animation-duration:1s;-webkit-animation-timing-function:ease;}
.page.active{display: block;}
.page span{font-size: 36px;}
.page:nth-child(1){background-color: #c25151;}
.page:nth-child(2){background-color: #3aade8;}
.page:nth-child(3){background-color: #fa811e;}
.page:nth-child(4){background-color: #85f488;}
.operate{}
.operate .btn-prev,
.operate .btn-next{position: absolute;width: 5em;height: 5em;line-height: 5em;background-color: rgba(0,0,0,0.8);color: #fff;text-align: center;right: 0;z-index: 99;}
.operate .btn-prev{top: 0;}
.operate .btn-next{bottom: 0;}
.operate .btn-prev::after{content: "<";font-size: 2em;}
.operate .btn-next::after{content: ">";font-size: 2em;}
.operate .trigger-list{position: absolute;width: 1em;right: 2em;top: 50%;-webkit-transform: translateY(-50%);z-index: 99;}
.operate .trigger-list .trigger{display: inline-block;width: 1em;height: 1em;background: rgba(0,0,0,0.7);border-radius: 50%;}
.operate .trigger-list .active{background: rgba(255,255,255,0.7);}
/* Animation styles */
.toIn,
.toOut{-webkit-transform-origin:50% 100%;}
.toIn{display:block;z-index:2;-webkit-animation-name:phoneCoverIn;}
@-webkit-keyframes phoneCoverIn{
0%{-webkit-transform:translateY(100%);}
100%{-webkit-transform:translateY(0);}
}
.toOut{display:block;z-index:1;-webkit-animation-name:phoneCoverOut;}
@-webkit-keyframes phoneCoverOut{
0%{-webkit-transform:translateY(0);}
100%{-webkit-transform:translateY(-30%);}
}
.backIn{display:block;z-index:2;-webkit-animation-name:phoneCoverBackIn;}
@-webkit-keyframes phoneCoverBackIn{
0%{-webkit-transform:translateY(-100%);}
100%{-webkit-transform:translateY(0);}
}
.backOut{display:block;z-index:1;-webkit-animation-name:phoneCoverOutBack;}
@-webkit-keyframes phoneCoverOutBack{
0%{-webkit-transform:translateY(0);}
100%{-webkit-transform:translateY(30%);}
}
|
css
|
Goa: Ramnathi Ashram in Bandora, Goa, doesn’t quite fit the image of an austere ashram. Situated between the brightly painted Shantadurga Temple and Ramnathi Temple, the ashram flaunts its own bright yellows and whites. It rests on top of a hill. The façade and the sides, all that outsiders can see from the road, give the impression of a gated resort or an apartment complex. This benign setting is the home of the Sanatan Sanstha, a right-wing organization that is at the heart of a multi-state investigation into the killings of four rationalists.
|
english
|
Zoom Video Communications is launching Zoom Phone Appliances, a combination of hardware from Poly and Yealink with Zoom video meetings, phone and collaboration software.
The all-in-one desk phone has an integrated touch display to start and schedule meetings, take phone calls and collaborate.
Video collaboration players are launching new products and revamping platforms to prepare for hybrid work arrangements.
With its video collaboration footprint, Zoom is aiming to blur the lines between video and audio and to make it easier for enterprises to procure and manage hardware with minimal integration.
Zoom Phone Appliances include:
|
english
|
package com.hemika.model.rm;
import com.hemika.model.UserData;
import org.springframework.jdbc.core.RowMapper;
//import org.springframework.stereotype.Service;
import java.sql.ResultSet;
import java.sql.SQLException;
public class UserTypeRM implements RowMapper<UserData> {
@Override
public UserData mapRow(ResultSet resultSet, int i) throws SQLException {
UserData userData = new UserData();
userData.setId(resultSet.getInt("id"));
userData.setLabel_en(resultSet.getString("label_en"));
return userData;
}
}
|
java
|
Typically, mods are associated with Minecraft Java Edition. That's because, in technical terms, "mods" cannot be applied to Bedrock Edition. However, Bedrock players are not entirely left out because there are plenty of addons that can change the game.
Mods are often used to spice up the game or change how things look or act. For Bedrock players, two kinds of addons can do the same: behavior packs and resource packs.
These can be used to add furniture, which is a popular addon for all crafters since it's one thing the game is sorely lacking. Here are some furniture addons built for Pocket Edition to try out.
Note: This article is subjective and reflects the views of the author.
This furniture addon adds well over 20 pieces of furniture to the game. Furniture is one thing most crafters wish was a more significant part of the game, and this addon suitably addresses that issue. However, instead of being in the Creative menu or craftable, furniture items here are for purchase from a new, specific villager.
Players can trade what is called a Bit-Emerald to get furniture; the trade is activated when a player uses that emerald on the villager.
Bzf Furniture's Addon introduces over 100 new furniture items. Minecraft has very few in the vanilla version of the game, so this mod sets out to remedy that.
Clocks, air conditioners, telephones, televisions, showers, working appliances, and more have all been added thanks to this beautiful addon.
50 new furniture items are added with this addon. The neat part for this one is that most of the items are based on mobs in the game.
It's a great way to introduce new features while keeping it familiar and true to the original game.
This addon changes the game by adding furniture, but it also retextures a major part of the game. It makes Minecraft feel fresh and new, which can be helpful for a game going on 13 years of activity.
There are 25 new items with over 80 variations, including a tv, benches, music players, and more.
This mod was recently updated and works with Minecraft version 1.18, which not every addon can say. It also boasts 180 new furniture items, another thing most addons can't claim. The recent update added several interactions with furniture items, three channels for the television, and more.
They've even added realistic sound effects for the television, the air conditioner, any flowing water, and three songs for the music player.
|
english
|
#include <bits/stdc++.h>
typedef long long ll;
const int mod = 1e9 + 7;
const int MAX = 1e5 + 7;
using namespace std;
typedef vector<int> vi;
vector<int> g[MAX];
int color[MAX];
int n, m, u, v;
queue<int> l;
// BFS step: color node i white (1) and queue its uncolored neighbors as black (0).
void pwhite(int i) {
color[i] = 1;
for (auto j : g[i]) {
if (color[j] == -1 ) {
color[j] = 0;
l.push(j);
}
}
}
int main() {
#ifndef ONLINE_JUDGE
freopen("/Users/seeva92/Workspace/Contests/1.txt", "r", stdin);
freopen("/Users/seeva92/Workspace/Contests/2.txt", "w", stdout);
#endif
ios::sync_with_stdio(false);
cin.tie(0);
cin >> n >> m;
for (int i = 0; i < m; i++) {
cin >> u >> v;
g[u].push_back(v);
g[v].push_back(u);
}
memset(color, -1, sizeof color);
pwhite(1);
while (!l.empty()) {
int i = l.front(); l.pop();
for (auto j : g[i]) {
if (color[j] == -1) {
pwhite(j);
}
}
}
for (int i = 1; i <= n; i++) cout << color[i] << " ";
}
|
cpp
|
There have been serious clashes in Belgrade, #Serbia. Protesters stormed the parliament building and have been clashing with police.
The clashes are being framed as “anti-Coronavirus lockdown” protests, but from what I understand, speaking to Serbian friends, it’s a bit more complicated than that. It seems everyone has long been sick of the government’s soft dictator-like ways and has now had enough.
People are essentially stretching their legs and asserting their anger at the state. I’m told everyone from far-left to far-right is involved in the clashes. The police response has been very heavy.
The moment protesters in #Serbia pushed past police and stormed the parliament building last night.
Police beating the shit out of people in Belgrade, #Serbia again tonight. Second night of anti-government protests.
After the lads sitting on a bench got beaten by police for sitting down (see further up the thread), they went back to the bench to sit down again lol. Legends.
|
english
|
<filename>public/css/account.css
section.account{
min-width: 200px;
min-height:180px;
position:relative;
}
section.account .body{
padding-top: 10px;
padding-bottom:40px;
}
section.account .amount{
padding: 10px 0;
text-align:center;
font-size: 1.2em;
margin:0 auto;
}
section.account .comments{
font-size: 0.8em;
color:#9299a2;
}
section.account .nav{
position:absolute;
bottom:0;
height:30px;
padding:5px 0;
right:5px;
}
.account_add{
float:right;
clear:both;
margin:0;
}
.account_add .body{
padding: 10px 30px;
}
|
css
|
Dear Sir,
As you must already be aware, the Indian football team has been doing consistently well for the last couple of years. They have risen in the FIFA rankings and stand within the top 15 in Asia. We should encourage them more and give them more exposure. As a former sportsperson who made the country proud, I hope you will agree that the team will only get better by playing higher-ranked teams. They have qualified for the AFC Asian Cup after a long time, and participation in the Asian Games will only improve their chances.
Please impress upon the IOA the need to send the team to the Games. With the World Cup under way and a huge number of Indians following the game, this could generate interest in Indian football and help get the younger generation hooked on the game.
We sincerely request you on behalf of all sports loving Indians to send our Indian National Football team to the Asian Games.
|
english
|
<reponame>curtins/horizon
{"status":{"code":200,"http":"Fetched (selfPing) 200 600 and parsed 2/47 entries","nextFetch":1505338,"entriesCountSinceLastMaintenance":5,"velocity":5.1,"popularity":2.379648696655326,"generatedIds":true,"period":600,"lastFetch":1505337,"lastParse":1505337,"lastMaintenanceAt":1505322,"feed":"http://rss.nytimes.com/services/xml/rss/nyt/Movies.xml"},"permalinkUrl":"https://www.nytimes.com/section/movies?partner=rss&emc=rss","standardLinks":{"alternate":[{"title":"NYT > Movies","rel":"alternate","href":"https://www.nytimes.com/section/movies?partner=rss&emc=rss","type":"text/html"}],"self":[{"title":"NYT > Movies","rel":"self","href":"http://www.nytimes.com/services/xml/rss/nyt/Movies.xml","type":"application/rss+xml"}],"image":[{"title":"NYT > Movies","rel":"image","href":"https://static01.nyt.com/images/misc/NYT_logo_rss_250x40.png","type":"image/png"}]},"image":"https://static01.nyt.com/images/misc/NYT_logo_rss_250x40.png","title":"NYT > Movies","updated":1505337370,"id":"nyt-movies-2017-9-13-21","items":[{"id":"https://www.nytimes.com/2017/09/13/movies/the-future-perfect-review.html","published":1505336618,"updated":1505336618,"title":"Review: ‘The Future Perfect’ Gives Voice to a Newcomer in Argentina","summary":"A young Chinese immigrant in Buenos Aires picks up more than grammar when she learns Spanish.","permalinkUrl":"https://www.nytimes.com/2017/09/13/movies/the-future-perfect-review.html?partner=rss&emc=rss","standardLinks":{"alternate":[{"title":"Review: ‘The Future Perfect’ Gives Voice to a Newcomer in Argentina","rel":"alternate","href":"https://www.nytimes.com/2017/09/13/movies/the-future-perfect-review.html?partner=rss&emc=rss","type":"text/html"}]},"actor":{"displayName":"<NAME>","id":"<NAME>"},"categories":["Movies","The Future Perfect (Movie)","<NAME>","<NAME>"],"language":"en"},{"id":"https://www.nytimes.com/2017/09/13/movies/rebel-wilson-awarded-3-6-million-in-defamation-case.html","published":1505336059,"updated":1505336059,"title":"<NAME> Awarded $3.6 Million in Defamation Case","summary":"A justice said the publisher Bauer Media was guilty of “recklessness” for articles in two of its magazines, Woman’s Day and Women’s Weekly.","permalinkUrl":"https://www.nytimes.com/2017/09/13/movies/rebel-wilson-awarded-3-6-million-in-defamation-case.html?partner=rss&emc=rss","standardLinks":{"alternate":[{"title":"<NAME> Awarded $3.6 Million in Defamation Case","rel":"alternate","href":"https://www.nytimes.com/2017/09/13/movies/rebel-wilson-awarded-3-6-million-in-defamation-case.html?partner=rss&emc=rss","type":"text/html"}],"enclosure":[{"title":"<NAME> Awarded $3.6 Million in Defamation Case","rel":"enclosure","href":"https://static01.nyt.com/images/2017/09/14/arts/14wilson-item/14wilson-item-moth.jpg","type":"image/jpeg"}]},"actor":{"displayName":"<NAME>","id":"<NAME>"},"categories":["Movies","Libel and Slander","Wilson, Rebel (1980- )","Bauer Media","Woman's Day","Women's Weekly"],"language":"en"}]}
|
json
|
/*
Copyright 2021 <NAME>.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"context"
"fmt"
"time"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/log"
"github.com/prometheus/client_golang/prometheus"
steniciov1alpha1 "github.com/stenic/sql-operator/api/v1alpha1"
"github.com/stenic/sql-operator/drivers"
)
// SqlHostReconciler reconciles a SqlHost object
type SqlHostReconciler struct {
client.Client
Scheme *runtime.Scheme
Recorder record.EventRecorder
RefreshRate time.Duration
}
//+kubebuilder:rbac:groups=stenic.io,resources=sqlhosts,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=stenic.io,resources=sqlhosts/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=stenic.io,resources=sqlhosts/finalizers,verbs=update
//+kubebuilder:rbac:groups="",resources=events,verbs=create;patch
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile
func (r *SqlHostReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
promLabels := prometheus.Labels{
"crd": "sqlHost",
"namespace": req.Namespace,
"name": req.Name,
}
sqlOperatorActions.With(promLabels).Inc()
var host steniciov1alpha1.SqlHost
if err := r.Get(ctx, req.NamespacedName, &host); err != nil {
// log.Error(err, "unable to fetch SqlHost")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
driver, err := drivers.GetDriver(host)
if err != nil {
return ctrl.Result{}, err
}
if err := driver.InitOwnerSchema(ctx); err != nil {
r.Recorder.Event(&host, "Warning", "Error", err.Error())
return ctrl.Result{RequeueAfter: r.RefreshRate * 10}, err
}
scheduledResult := ctrl.Result{RequeueAfter: r.RefreshRate}
finalizerName := "stenic.io/sqlhost-deletion"
// examine DeletionTimestamp to determine if object is under deletion
if host.ObjectMeta.DeletionTimestamp.IsZero() {
// The object is not being deleted, so if it does not have our finalizer,
// then let's add the finalizer and update the object. This is equivalent
// to registering our finalizer.
if !controllerutil.ContainsFinalizer(&host, finalizerName) {
controllerutil.AddFinalizer(&host, finalizerName)
if err := r.Update(ctx, &host); err != nil {
return ctrl.Result{}, err
}
}
} else {
// The object is being deleted
if controllerutil.ContainsFinalizer(&host, finalizerName) {
// our finalizer is present, so let's handle any external dependency
var userChildren steniciov1alpha1.SqlUserList
if err := isReferenced(ctx, r.Client, &userChildren, referencedHostKey, &host); err != nil {
r.Recorder.Event(&host, "Warning", "Error", err.Error())
sqlOperatorActionsFailures.With(promLabels).Inc()
return ctrl.Result{}, err
}
if len(userChildren.Items) > 0 {
err := fmt.Errorf(
"%s - [%s/%s] ...",
"Can't delete, found other objects referencing this object",
userChildren.Items[0].Namespace,
userChildren.Items[0].Name,
)
r.Recorder.Event(&host, "Warning", "Error", err.Error())
sqlOperatorActionsFailures.With(promLabels).Inc()
// this delete might have been processed before the referencing objects were removed; reschedule.
return scheduledResult, err
}
var databaseChildren steniciov1alpha1.SqlDatabaseList
if err := isReferenced(ctx, r.Client, &databaseChildren, referencedHostKey, &host); err != nil {
r.Recorder.Event(&host, "Warning", "Error", err.Error())
sqlOperatorActionsFailures.With(promLabels).Inc()
return ctrl.Result{}, err
}
if len(databaseChildren.Items) > 0 {
err := fmt.Errorf(
"%s - [%s/%s] ...",
"Can't delete, found other objects referencing this object",
databaseChildren.Items[0].Namespace,
databaseChildren.Items[0].Name,
)
r.Recorder.Event(&host, "Warning", "Error", err.Error())
sqlOperatorActionsFailures.With(promLabels).Inc()
// this delete might have been processed before the referencing objects were removed; reschedule.
return scheduledResult, err
}
// remove our finalizer from the list and update it.
controllerutil.RemoveFinalizer(&host, finalizerName)
if err := r.Update(ctx, &host); err != nil {
return ctrl.Result{}, err
}
}
// Stop reconciliation as the item is being deleted
return ctrl.Result{}, nil
}
return ctrl.Result{}, nil
}
// SetupWithManager sets up the controller with the Manager.
func (r *SqlHostReconciler) SetupWithManager(mgr ctrl.Manager) error {
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &steniciov1alpha1.SqlUser{}, referencedHostKey, func(rawObj client.Object) []string {
object := rawObj.(*steniciov1alpha1.SqlUser)
ns := object.Spec.HostRef.Namespace
if ns == "" {
ns = object.Namespace
}
return []string{ns + "/" + object.Spec.HostRef.Name}
}); err != nil {
return err
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &steniciov1alpha1.SqlDatabase{}, referencedHostKey, func(rawObj client.Object) []string {
object := rawObj.(*steniciov1alpha1.SqlDatabase)
ns := object.Spec.HostRef.Namespace
if ns == "" {
ns = object.Namespace
}
return []string{ns + "/" + object.Spec.HostRef.Name}
}); err != nil {
return err
}
return ctrl.NewControllerManagedBy(mgr).
For(&steniciov1alpha1.SqlHost{}).
Complete(r)
}