| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string) | text (string) |
|---|---|---|---|---|---|
2025-04-01T04:10:28.456512 | 2022-10-08T15:24:54 | 1401979147 | {
"authors": [
"3vcloud",
"BearsMan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14443",
"repo": "HasKha/GWToolboxpp",
"url": "https://github.com/HasKha/GWToolboxpp/issues/824"
} | gharchive/issue | Crash when trying to talk to any NPC
6.0-20221008-104424-6776-1368.zip
When trying to talk with an NPC (say, to sell stuff you no longer need), the game throws a dialog info crash and writes a DMP. I checked the debugger, which read:
GWToolboxdll.dll GWToolboxdll.dll C:\Users\dronk\Downloads\GWLauncher\plugins\GWToolboxdll.dll N/A Yes Cannot find or open the PDB file. 36 <IP_ADDRESS> 9/25/2022 5:45 PM 7B330000-7B77F000 6.0-20221008-104424-6776-1368.dmp
I highly doubt I'm the only one hitting this, e.g. during speedruns. Other than that I am fine.
Can't copy this issue and crash dump isn't helping much, might need a video or something
I'll try to make one when I get back for you @3vcloud
https://youtu.be/lN09Ha0MeZs
Video link for you @3vcloud
Note: I should have added the other crash I had in Marketplace near KC, I can add a video for that too if needed
https://www.youtube.com/watch?v=9TX0w0ax5I4
@3vcloud this one does not have the crash dump on it
|
2025-04-01T04:10:28.500316 | 2022-04-18T12:50:26 | 1207018571 | {
"authors": [
"Alxcph",
"Lowpolypig",
"TheGreatAxios",
"VBeingLK",
"illjewminati",
"seantrapani",
"zhoraog"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14444",
"repo": "HashLips/hashlips_art_engine_app",
"url": "https://github.com/HashLips/hashlips_art_engine_app/issues/4"
} | gharchive/issue | Crashing after a certain number of NFT generation
The app works well until a certain number of generations, then it randomly crashes. I set up my output value to 3500, but it crashed somewhere around 1,200 generations. How do we work around it?
Having the same issue, except it's at around 700-800 generations.
I'm trying to generate 33,333 and it crashes around 10,000 every time.
Getting the same issue around 1,200 on a 2,200-piece collection.
Having the same issue. On a 5000 collection, I haven't been able to make it past 224
Any update on why this may happen?
Guys, the problem is that the names of some of your layers might be wrong. Open the dev console and check the error; it will tell you which layer has a problem <3
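The layer-name diagnosis above can be sanity-checked offline. Below is an unofficial Python sketch (not part of the engine) that flags layer filenames missing the `Name#weight.png` rarity convention the hashlips engine expects; the directory layout and regex here are assumptions for illustration.

```python
import os
import re

# Filenames are assumed to follow the hashlips convention "Trait#<weight>.png",
# e.g. "Blue#50.png". Anything else is reported as a likely culprit.
VALID_NAME = re.compile(r"^[^#]+#\d+\.(png|jpg|jpeg)$", re.IGNORECASE)

def check_layer_filenames(layers_dir):
    """Return a list of layer files that do not match the expected pattern."""
    problems = []
    for layer in sorted(os.listdir(layers_dir)):
        layer_path = os.path.join(layers_dir, layer)
        if not os.path.isdir(layer_path):
            continue
        for name in sorted(os.listdir(layer_path)):
            if not VALID_NAME.match(name):
                problems.append(os.path.join(layer, name))
    return problems
```

Running this over the layers folder before a long generation run points directly at the offending file instead of crashing thousands of images in.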
|
2025-04-01T04:10:28.545758 | 2016-05-10T08:02:10 | 153945811 | {
"authors": [
"bransbury",
"somus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14445",
"repo": "Hashnode/mern-starter",
"url": "https://github.com/Hashnode/mern-starter/pull/137"
} | gharchive/pull-request | Bump dependencies
Bumping out-of-date dependencies
@somus - peer dependencies added, however there are some lint issues now. Nothing critical and some that will need to be ignored to avoid a refactor, e.g. no-underscore-dangle for const store = configureStore(window.__INITIAL_STATE__);. Might be best to handle these in a new issue.
@bransbury Thanks for the PR. :) Those lint errors can be handled in a separate issue.
Also, we have started the development of MERN v2. Check out the discussion thread at #146.
|
2025-04-01T04:10:28.575283 | 2023-08-16T05:02:38 | 1852516979 | {
"authors": [
"Leroy231",
"archimonde1111"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14446",
"repo": "HaywireInteractive/OnAllFronts-Public",
"url": "https://github.com/HaywireInteractive/OnAllFronts-Public/issues/540"
} | gharchive/issue | Fix crash when going to map while in tank
Issue is that we're calling GetDroneForSoldier but passing in the tank actor instead of player's soldier actor. I believe we have a helper method somewhere that will always get us the player's soldier actor even when it's not possessed, maybe on game mode.
DevNote:
Blocked until https://github.com/HaywireInteractive/OnAllFronts/pull/163 PR is merged.
@archimonde1111 https://github.com/HaywireInteractive/OnAllFronts/pull/163 is now merged
|
2025-04-01T04:10:28.591128 | 2022-10-05T14:46:16 | 1397917697 | {
"authors": [
"cccntu",
"geekinglcq",
"tridao"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14447",
"repo": "HazyResearch/flash-attention",
"url": "https://github.com/HazyResearch/flash-attention/issues/54"
} | gharchive/issue | FlashAttention returns all zeros when device is 'cuda:1'
testing code:
import numpy as np
import torch
from flash_attn.flash_attention import FlashAttention
def test(device):
    flash = FlashAttention()
    d_head = 64
    n_heads = 32
    flash.softmax_scale = 1  # / (d_head ** 0.5)
    batch_size = 4
    seq_len = 16
    qkv = torch.ones(batch_size, seq_len, 3, n_heads, d_head, dtype=torch.float16)
    flash = flash.to(device)
    qkv = qkv.to(device)
    out, _ = flash(qkv)
    print(out.shape, torch.abs(out).type(torch.float32).sum())
    return out
if I add the following code and run it,
out = test("cuda:0")
the output is
torch.Size([4, 16, 32, 64]) tensor(131072., device='cuda:0')
if I run this instead
out = test("cuda:1")
the output is
torch.Size([4, 16, 32, 64]) tensor(0., device='cuda:1')
I've verified it's not the hardware issue by adding
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
at the top and it works with cuda:0
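As a CPU-side cross-check (a stdlib Python sketch added here for illustration, not part of the original report), the all-ones input with softmax_scale = 1 should sum to 131072, matching the healthy cuda:0 output, so the zeros from cuda:1 are unambiguously wrong:

```python
import math

def reference_attention_sum(batch_size=4, seq_len=16, n_heads=32, d_head=64):
    # With q = k = v = all-ones and softmax_scale = 1, every attention score
    # equals q.k = d_head, so the softmax over the sequence axis is uniform
    # (1/seq_len per position) and each output element is exactly 1.
    scores = [float(d_head)] * seq_len          # one row of the score matrix
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    probs = [e / sum(exps) for e in exps]       # uniform 1/seq_len
    out_elem = sum(p * 1.0 for p in probs)      # == 1.0 for the all-ones v
    return batch_size * seq_len * n_heads * d_head * out_elem

print(reference_attention_sum())  # -> 131072.0
```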
Found the same problem when I tried to apply the flash attention to stable diffusion.
I hit the problem when using cuda:6, and it cost me about one whole day to debug 💔.
Sorry about that, I've just reproduced it as well.
Must be because we're not setting the right device somewhere.
Will try to figure out.
I've just pushed a commit that fixed this. Let me know if it works on your side.
Hello, thank you for your attention. However, I tried the newest commit and found the problem is still there, even when I use the cuda:0 device. The former version of the code (commit ID: 8166063a556e17e03e4a0697ba604def1eeb6a99) works well when using cuda:0.
Some information that may be helpful:
I use flash_attention to speed up diffusers and replaced diffusers/models/attention.py with the following code, as suggested in https://www.reddit.com/r/StableDiffusion/comments/xmr3ic/speed_up_stable_diffusion_by_50_using_flash/ and https://nn.labml.ai/diffusion/stable_diffusion/model/unet_attention.html#section-45
"""
---
title: Transformer for Stable Diffusion U-Net
summary: >
Annotated PyTorch implementation/tutorial of the transformer
for U-Net in stable diffusion.
---
# Transformer for Stable Diffusion [U-Net](unet.html)
This implements the transformer module used in [U-Net](unet.html) that
gives $\epsilon_\text{cond}(x_t, c)$
We have kept to the model definition and naming unchanged from
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
so that we can load the checkpoints directly.
"""
from typing import Optional
import os
import math
import torch
import torch.nn.functional as F
from torch import nn
class AttentionBlock(nn.Module):
"""
An attention block that allows spatial positions to attend to each other. Originally ported from here, but adapted
to the N-d case.
https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
Uses three q, k, v linear layers to compute attention.
Parameters:
channels (:obj:`int`): The number of channels in the input and output.
num_head_channels (:obj:`int`, *optional*):
The number of channels in each head. If None, then `num_heads` = 1.
num_groups (:obj:`int`, *optional*, defaults to 32): The number of groups to use for group norm.
rescale_output_factor (:obj:`float`, *optional*, defaults to 1.0): The factor to rescale the output by.
eps (:obj:`float`, *optional*, defaults to 1e-5): The epsilon value to use for group norm.
"""
def __init__(
self,
channels: int,
num_head_channels: Optional[int] = None,
num_groups: int = 32,
rescale_output_factor: float = 1.0,
eps: float = 1e-5,
):
super().__init__()
self.channels = channels
self.num_heads = channels // num_head_channels if num_head_channels is not None else 1
self.num_head_size = num_head_channels
self.group_norm = nn.GroupNorm(num_channels=channels, num_groups=num_groups, eps=eps, affine=True)
# define q,k,v as linear layers
self.query = nn.Linear(channels, channels)
self.key = nn.Linear(channels, channels)
self.value = nn.Linear(channels, channels)
self.rescale_output_factor = rescale_output_factor
self.proj_attn = nn.Linear(channels, channels, 1)
def transpose_for_scores(self, projection: torch.Tensor) -> torch.Tensor:
new_projection_shape = projection.size()[:-1] + (self.num_heads, -1)
# move heads to 2nd position (B, T, H * D) -> (B, T, H, D) -> (B, H, T, D)
new_projection = projection.view(new_projection_shape).permute(0, 2, 1, 3)
return new_projection
def forward(self, hidden_states):
residual = hidden_states
batch, channel, height, width = hidden_states.shape
# norm
hidden_states = self.group_norm(hidden_states)
hidden_states = hidden_states.view(batch, channel, height * width).transpose(1, 2)
# proj to q, k, v
query_proj = self.query(hidden_states)
key_proj = self.key(hidden_states)
value_proj = self.value(hidden_states)
# transpose
query_states = self.transpose_for_scores(query_proj)
key_states = self.transpose_for_scores(key_proj)
value_states = self.transpose_for_scores(value_proj)
# get scores
scale = 1 / math.sqrt(math.sqrt(self.channels / self.num_heads))
attention_scores = torch.matmul(query_states * scale, key_states.transpose(-1, -2) * scale)
attention_probs = torch.softmax(attention_scores.float(), dim=-1).type(attention_scores.dtype)
# compute attention output
hidden_states = torch.matmul(attention_probs, value_states)
hidden_states = hidden_states.permute(0, 2, 1, 3).contiguous()
new_hidden_states_shape = hidden_states.size()[:-2] + (self.channels,)
hidden_states = hidden_states.view(new_hidden_states_shape)
# compute next hidden_states
hidden_states = self.proj_attn(hidden_states)
hidden_states = hidden_states.transpose(-1, -2).reshape(batch, channel, height, width)
# res connect and rescale
hidden_states = (hidden_states + residual) / self.rescale_output_factor
return hidden_states
class SpatialTransformer(nn.Module):
"""
## Spatial Transformer
"""
def __init__(self, channels: int, n_heads: int, d_cond: int, depth: int, num_groups=32, context_dim=None):
"""
:param channels: is the number of channels in the feature map
:param n_heads: is the number of attention heads
:param n_layers: is the number of transformer layers
:param d_cond: is the size of the conditional embedding
"""
super().__init__()
n_layers = depth
d_cond = context_dim
# Initial group normalization
self.norm = torch.nn.GroupNorm(num_groups=32, num_channels=channels, eps=1e-6, affine=True)
# Initial $1 \times 1$ convolution
self.proj_in = nn.Conv2d(channels, channels, kernel_size=1, stride=1, padding=0)
# Transformer layers
self.transformer_blocks = nn.ModuleList(
[BasicTransformerBlock(channels, n_heads, channels // n_heads, d_cond=d_cond) for _ in range(n_layers)]
)
# Final $1 \times 1$ convolution
self.proj_out = nn.Conv2d(channels, channels, kernel_size=1, stride=1, padding=0)
def forward(self, hidden_states: torch.Tensor, context: torch.Tensor):
"""
:param x: is the feature map of shape `[batch_size, channels, height, width]`
:param cond: is the conditional embeddings of shape `[batch_size, n_cond, d_cond]`
"""
# Get shape `[batch_size, channels, height, width]`
x = hidden_states
cond = context
b, c, h, w = x.shape
# For residual connection
x_in = x
# Normalize
x = self.norm(x)
# Initial $1 \times 1$ convolution
x = self.proj_in(x)
# Transpose and reshape from `[batch_size, channels, height, width]`
# to `[batch_size, height * width, channels]`
x = x.permute(0, 2, 3, 1).view(b, h * w, c)
# Apply the transformer layers
for block in self.transformer_blocks:
x = block(x, cond)
# Reshape and transpose from `[batch_size, height * width, channels]`
# to `[batch_size, channels, height, width]`
x = x.view(b, h, w, c).permute(0, 3, 1, 2)
# Final $1 \times 1$ convolution
x = self.proj_out(x)
# Add residual
return x + x_in
class BasicTransformerBlock(nn.Module):
"""
### Transformer Layer
"""
def __init__(self, d_model: int, n_heads: int, d_head: int, d_cond: int):
"""
:param d_model: is the input embedding size
:param n_heads: is the number of attention heads
:param d_head: is the size of an attention head
:param d_cond: is the size of the conditional embeddings
"""
super().__init__()
# Self-attention layer and pre-norm layer
self.attn1 = CrossAttention(d_model, d_model, n_heads, d_head)
self.norm1 = nn.LayerNorm(d_model)
# Cross attention layer and pre-norm layer
self.attn2 = CrossAttention(d_model, d_cond, n_heads, d_head)
self.norm2 = nn.LayerNorm(d_model)
# Feed-forward network and pre-norm layer
self.ff = FeedForward(d_model)
self.norm3 = nn.LayerNorm(d_model)
def forward(self, x: torch.Tensor, cond: torch.Tensor):
"""
:param x: are the input embeddings of shape `[batch_size, height * width, d_model]`
:param cond: is the conditional embeddings of shape `[batch_size, n_cond, d_cond]`
"""
# Self attention
x = self.attn1(self.norm1(x)) + x
# Cross-attention with conditioning
x = self.attn2(self.norm2(x), cond=cond) + x
# Feed-forward network
x = self.ff(self.norm3(x)) + x
#
return x
class CrossAttention(nn.Module):
"""
### Cross Attention Layer
This falls back to self-attention when conditional embeddings are not specified.
"""
use_flash_attention: bool = int(os.environ.get("USE_FLASH_ATTENTION", 0))==1
use_flash_attention = True
def __init__(self, d_model: int, d_cond: int, n_heads: int, d_head: int, is_inplace: bool = True):
"""
:param d_model: is the input embedding size
:param n_heads: is the number of attention heads
:param d_head: is the size of an attention head
:param d_cond: is the size of the conditional embeddings
:param is_inplace: specifies whether to perform the attention softmax computation inplace to
save memory
"""
super().__init__()
self.is_inplace = is_inplace
self.n_heads = n_heads
self.d_head = d_head
# Attention scaling factor
self.scale = d_head ** -0.5
# Query, key and value mappings
d_attn = d_head * n_heads
self.to_q = nn.Linear(d_model, d_attn, bias=False)
self.to_k = nn.Linear(d_cond, d_attn, bias=False)
self.to_v = nn.Linear(d_cond, d_attn, bias=False)
# Final linear layer
self.to_out = nn.Sequential(nn.Linear(d_attn, d_model))
# Setup [flash attention](https://github.com/HazyResearch/flash-attention).
# Flash attention is only used if it's installed
# and `CrossAttention.use_flash_attention` is set to `True`.
try:
# You can install flash attention by cloning their Github repo,
# [https://github.com/HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)
# and then running `python setup.py install`
from flash_attn.flash_attention import FlashAttention
self.flash = FlashAttention()
# Set the scale for scaled dot-product attention.
self.flash.softmax_scale = self.scale
# Set to `None` if it's not installed
except ImportError:
self.flash = None
def forward(self, x: torch.Tensor, cond: Optional[torch.Tensor] = None):
"""
:param x: are the input embeddings of shape `[batch_size, height * width, d_model]`
:param cond: is the conditional embeddings of shape `[batch_size, n_cond, d_cond]`
"""
# If `cond` is `None` we perform self attention
has_cond = cond is not None
if not has_cond:
cond = x
# Get query, key and value vectors
q = self.to_q(x)
k = self.to_k(cond)
v = self.to_v(cond)
# Use flash attention if it's available and the head size is less than or equal to `128`
if CrossAttention.use_flash_attention and self.flash is not None and not has_cond and self.d_head <= 128:
return self.flash_attention(q, k, v)
# Otherwise, fallback to normal attention
else:
return self.normal_attention(q, k, v)
def flash_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
"""
#### Flash Attention
:param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
:param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
:param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
"""
# Get batch size and number of elements along sequence axis (`width * height`)
batch_size, seq_len, _ = q.shape
# Stack `q`, `k`, `v` vectors for flash attention, to get a single tensor of
# shape `[batch_size, seq_len, 3, n_heads * d_head]`
qkv = torch.stack((q, k, v), dim=2)
# Split the heads
qkv = qkv.view(batch_size, seq_len, 3, self.n_heads, self.d_head)
# Flash attention works for head sizes `32`, `64` and `128`, so we have to pad the heads to
# fit this size.
if self.d_head <= 32:
pad = 32 - self.d_head
elif self.d_head <= 64:
pad = 64 - self.d_head
elif self.d_head <= 128:
pad = 128 - self.d_head
else:
raise ValueError(f'Head size {self.d_head} too large for Flash Attention')
# Pad the heads
if pad:
qkv = torch.cat((qkv, qkv.new_zeros(batch_size, seq_len, 3, self.n_heads, pad)), dim=-1)
# Compute attention
# $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$
# This gives a tensor of shape `[batch_size, seq_len, n_heads, d_padded]`
out, _ = self.flash(qkv)
# Truncate the extra head size
out = out[:, :, :, :self.d_head]
# Reshape to `[batch_size, seq_len, n_heads * d_head]`
out = out.reshape(batch_size, seq_len, self.n_heads * self.d_head)
# Map to `[batch_size, height * width, d_model]` with a linear layer
return self.to_out(out)
def normal_attention(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
"""
#### Normal Attention
:param q: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
:param k: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
:param v: are the query vectors before splitting heads, of shape `[batch_size, seq, d_attn]`
"""
# Split them to heads of shape `[batch_size, seq_len, n_heads, d_head]`
q = q.view(*q.shape[:2], self.n_heads, -1)
k = k.view(*k.shape[:2], self.n_heads, -1)
v = v.view(*v.shape[:2], self.n_heads, -1)
# Calculate attention $\frac{Q K^\top}{\sqrt{d_{key}}}$
attn = torch.einsum('bihd,bjhd->bhij', q, k) * self.scale
# Compute softmax
# $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)$$
if self.is_inplace:
half = attn.shape[0] // 2
attn[half:] = attn[half:].softmax(dim=-1)
attn[:half] = attn[:half].softmax(dim=-1)
else:
attn = attn.softmax(dim=-1)
# Compute attention output
# $$\underset{seq}{softmax}\Bigg(\frac{Q K^\top}{\sqrt{d_{key}}}\Bigg)V$$
out = torch.einsum('bhij,bjhd->bihd', attn, v)
# Reshape to `[batch_size, height * width, n_heads * d_head]`
out = out.reshape(*out.shape[:2], -1)
# Map to `[batch_size, height * width, d_model]` with a linear layer
return self.to_out(out)
class FeedForward(nn.Module):
"""
### Feed-Forward Network
"""
def __init__(self, d_model: int, d_mult: int = 4):
"""
:param d_model: is the input embedding size
:param d_mult: is multiplicative factor for the hidden layer size
"""
super().__init__()
self.net = nn.Sequential(
GeGLU(d_model, d_model * d_mult),
nn.Dropout(0.),
nn.Linear(d_model * d_mult, d_model)
)
def forward(self, x: torch.Tensor):
return self.net(x)
class GeGLU(nn.Module):
"""
### GeGLU Activation
$$\text{GeGLU}(x) = (xW + b) * \text{GELU}(xV + c)$$
"""
def __init__(self, d_in: int, d_out: int):
super().__init__()
# Combined linear projections $xW + b$ and $xV + c$
self.proj = nn.Linear(d_in, d_out * 2)
def forward(self, x: torch.Tensor):
# Get $xW + b$ and $xV + c$
x, gate = self.proj(x).chunk(2, dim=-1)
# $\text{GeGLU}(x) = (xW + b) * \text{GELU}(xV + c)$
return x * F.gelu(gate)
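As a side note on the pasted flash_attention wrapper above: the head-size padding branch (pad to 32, 64, or 128) can be isolated into a small helper. This is just an illustrative sketch of the same logic, not code from either repository.

```python
def flash_pad_amount(d_head: int) -> int:
    """How many zero channels to pad so the head size becomes 32, 64, or 128."""
    for supported in (32, 64, 128):
        if d_head <= supported:
            return supported - d_head
    raise ValueError(f"Head size {d_head} too large for Flash Attention")
```

For example, flash_pad_amount(40) returns 24 (padding up to 64) and flash_pad_amount(64) returns 0.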
Let's try to figure this out.
Can you try recompiling FlashAttention?
pip uninstall -y flash_attn && rm -rf build && python setup.py install.
Another thing to try is to put torch.cuda.set_device(qkv.device) before calling self.flash.
Lmk if any of those help.
@geekinglcq are you on the cutlass branch of FlashAttention? I've also just ported the (same) fix to that branch.
It works now! Thank you~
Great!
@geekinglcq Did recompilation fix it or did you need to set torch.cuda.set_device(qkv.device)?
I just recompiled it.
|
2025-04-01T04:10:28.597153 | 2015-05-04T21:56:23 | 73138240 | {
"authors": [
"ChrisMissal",
"hulahomer",
"pedroreys"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14448",
"repo": "HeadspringLabs/gulp-xmlpoke",
"url": "https://github.com/HeadspringLabs/gulp-xmlpoke/issues/4"
} | gharchive/issue | Update package.json to reflect Ownership
Since @pedroreys transferred ownership to HSLabs, update the package.json file to reflect this.
[x] Update file(s)
[ ] Update NPM
@ChrisMissal what email should we put for the author?
I think it's fine to keep it as @pedroreys as long as he's cool with it. I believe it will have to be his or mine to publish to npm as well. I'm not 100% on that though.
I don't really care for the email, so I'm cool keeping it or using some other one. Whatever is the easiest to make it work with npm
@ChrisMissal NPM has now been updated as well.
|
2025-04-01T04:10:28.600276 | 2021-11-24T21:15:39 | 1062915696 | {
"authors": [
"HeadstormOps",
"aVileBroker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14449",
"repo": "Headstorm/foundry-ui",
"url": "https://github.com/Headstorm/foundry-ui/issues/360"
} | gharchive/issue | Spotlight | Reactions to scrolling/window resize shouldn't smoothly animate
Currently, when scrolling while the spotlight is targeting an element, the spotlight smoothly animates to the new position, which makes it feel like it's lagging behind. Ideally it should just scroll with the page contents. The same happens when the screen size changes.
Dev hint: The useMemo for the spotlight rectangle bounds can update as usual, but the useSpring config needs to be set() with immediate: true when it is triggered by a scroll.
:tada: This issue has been resolved in version 1.16.1 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:10:28.607584 | 2018-04-09T15:54:47 | 312587570 | {
"authors": [
"michaelvidal24",
"petermcclanahan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14450",
"repo": "HealthCatalyst/Fabric.Identity",
"url": "https://github.com/HealthCatalyst/Fabric.Identity/pull/189"
} | gharchive/pull-request | Adding script to upgrade admin roles from EDWAdmin to Authorization
Adding an upgrade script to insert EDW Admin roles from IdentityRoleBASE into Authorization's RoleUsers table. This will be called from the end of the Register.ps1 script, at the end of the installer.
This change is
We changed approach on this item, so closing this PR.
|
2025-04-01T04:10:28.613379 | 2017-09-25T18:15:23 | 260368184 | {
"authors": [
"Kileahh",
"primetoxinz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14451",
"repo": "HearthProject/OneClient",
"url": "https://github.com/HearthProject/OneClient/issues/65"
} | gharchive/issue | OneClient crash
I was testing OneClient and it crashed. I don't really remember exactly what I was doing, either trying to import from Twitch or trying to get the mod list from Curse.
Log file
You were on a network that doesn't allow access to Curse, or your internet went down; either way this shouldn't be directly our fault.
I think it's because I'm in China; the internet has some "problems"
|
2025-04-01T04:10:28.776105 | 2018-09-27T00:09:03 | 364257702 | {
"authors": [
"Hermann-SW"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14455",
"repo": "Hermann-SW/webrepl",
"url": "https://github.com/Hermann-SW/webrepl/issues/1"
} | gharchive/issue | Changing any of the leftmost 5 characters of command from history scrambles command
Caused by do_input(''), so raw_input()/input() do not know about the REPL prompt characters currently displayed.
The REPL prompt needs to be presented via do_input(prompt), and on_message() needs to suppress its own output of the prompt. Synchronization between prompt detection in on_message() and do_input(prompt) is needed.
Find the technical details of the fix in this commit.
|
2025-04-01T04:10:28.779164 | 2016-12-31T16:00:47 | 198237431 | {
"authors": [
"egomadking",
"nriley"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14456",
"repo": "HermesApp/Hermes",
"url": "https://github.com/HermesApp/Hermes/issues/286"
} | gharchive/issue | Cannot paste password w/keyboard
Cannot paste password w/keyboard. I can right-click paste though. I am using Mac OSX Sierra.
Command-V worked for me in the password field on Sierra (and I pasted my Pandora password a lot while testing the login process…). Can you provide steps to reproduce this issue? Thanks.
I logged out and quit Hermes. Started it back up and Cmd-V does not work; it gives me an OS X alert sound. I even quit LaunchBar (I use their pasteboard feature), re-copied my password, then tried Cmd-V with the same result. Is there anything else that you want me to try?
I get the same result with the username field as well.
Tested some more and still couldn't reproduce. Can you paste in other ways (from the Edit menu, if you have the menu bar visible, or by using the contextual menu)? Are you able to paste into other text fields in Hermes after logging in?
The only other fields that I found were in the network preferences. These too gave me the same result. I even ensured that I was copy/pasting an int.
Right-click works, as does the menu bar Edit > Paste.
Do you see ⌘V in Hermes’ Edit menu next to Paste? Only thing I can think of at this point is that somehow this is getting intercepted or remapped…
That's got it! I've got it mapped to "Paste and Match Style". Sorry...
|
2025-04-01T04:10:28.797628 | 2024-02-03T22:18:41 | 2116696810 | {
"authors": [
"HerrKnarz",
"Jeshibu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14458",
"repo": "HerrKnarz/Playnite-Extensions",
"url": "https://github.com/HerrKnarz/Playnite-Extensions/issues/87"
} | gharchive/issue | [Enhancement][Metadata Utilities] Add items when (combination of?) other values exist
Consider you've got the tags First-person shooter and Third-person shooter, and you want those to automatically add the genre Shooter to a game.
Another use case for combined requirements: you've got genres Perspective: Third Person and Shooter and you want that to add the genre Third-person shooter
I thought about how to implement that. Here's a rough idea:
support three condition types:
All conditions are met
None of the conditions are met
At least one of the conditions is met
a rule can have any number of conditions. A condition can be the following:
field xy = abc (also support fields like library here)
field xy is empty
field xy != abc
a rule can have any number of actions to take. An action can be the following:
add value ABC to field XY
remove value ABC from field XY
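The condition/action model sketched above could look like the following (a hypothetical Python sketch for illustration only; the actual extension is a C# Playnite plugin, and these class and field names are assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Condition:
    field: str                   # e.g. "tags" or "genres"
    value: Optional[str] = None  # None means "field is empty"
    negate: bool = False         # True turns "=" into "!="

    def met(self, game: dict) -> bool:
        values = game.get(self.field, [])
        hit = (not values) if self.value is None else (self.value in values)
        return hit != self.negate

@dataclass
class Rule:
    mode: str                            # "all", "none", or "any"
    conditions: List[Condition]
    actions: List[Tuple[str, str, str]]  # ("add"/"remove", field, value)

    def applies(self, game: dict) -> bool:
        hits = [c.met(game) for c in self.conditions]
        return {"all": all(hits), "none": not any(hits), "any": any(hits)}[self.mode]

    def apply(self, game: dict) -> None:
        if not self.applies(game):
            return
        for op, field, value in self.actions:
            values = game.setdefault(field, [])
            if op == "add" and value not in values:
                values.append(value)
            elif op == "remove" and value in values:
                values.remove(value)
```

With this shape, the first use case is a single "any" rule over the two shooter tags whose action adds the Shooter genre, and the second is an "all" rule over the two genres.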
Added in the next release!
|
2025-04-01T04:10:28.802323 | 2024-04-14T07:03:20 | 2242011269 | {
"authors": [
"Hessuew"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14459",
"repo": "Hessuew/flamethefreeze",
"url": "https://github.com/Hessuew/flamethefreeze/pull/105"
} | gharchive/pull-request | chore(main): release 3.6.0
:robot: I have created a release beep boop
3.6.0 (2024-04-14)
Features
events: JESUS FEST KUOPIO, ready (af51830)
events: JESUS FEST KUOPIO, ready (#103) (4d9c34c)
JESUS FEST KUOPIO (92536ad)
JESUS FEST KUOPIO (#102) (cd4a4df)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/Hessuew/flamethefreeze/releases/tag/v3.6.0 :sunflower:
|
2025-04-01T04:10:28.806421 | 2019-02-06T21:41:07 | 407443474 | {
"authors": [
"jonesrick",
"leitwolf7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14460",
"repo": "HewlettPackard/netperf",
"url": "https://github.com/HewlettPackard/netperf/issues/31"
} | gharchive/issue | This tracks down to the following line
https://github.com/HewlettPackard/netperf/blob/bcb868bde7f0203bbab69609f65d4088ba7398db/src/netlib.c#L2609
where the code expects the send() call to always return the exact amount of data that was requested (sizeof(netperf_response)). In my opinion this assumption is wrong; at least on Linux, a send() call may transfer fewer bytes than requested.
From https://linux.die.net/man/2/send
The only difference between send() and write(2) is the presence of flags. With a zero flags argument, send() is equivalent to write(2).
Looking on the man page of write() https://linux.die.net/man/2/write
The number of bytes written may be less than count if, for example, there is insufficient space on the underlying physical medium, or the RLIMIT_FSIZE resource limit is encountered (see setrlimit(2)), or the call was interrupted by a signal handler after having written less than count bytes. (See also pipe(7).)
I think there is a need to loop over all data that should be sent, even when using blocking sockets. I agree that the description of send() is a little bit misleading, but if you check the kernel code for tcp_sendmsg() in tcp.c you can see that it can return once at least 1 octet was sent. It blocks only when no octet has been written yet (for blocking sockets).
What do you think?
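The looping approach described above can be sketched as follows (a Python illustration of the pattern; the actual fix in netlib.c would be the analogous C loop, and Python's own socket.sendall() implements exactly this):

```python
import socket

def send_exactly(sock: socket.socket, data: bytes) -> None:
    """Keep calling send() until every byte is written, handling short writes."""
    view = memoryview(data)
    while view:
        sent = sock.send(view)   # may send fewer bytes than len(view)
        if sent == 0:
            raise ConnectionError("socket connection broken")
        view = view[sent:]
```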
Strictly speaking, indeed a send call may return fewer bytes than requested, although the only place that could conceivably happen is in a TCP test. For a UDP test the sends are all-or-nothing.
That said, the "send_response_n send call failure" message means a failure on the control connection. That is emitted via perror, which means it should include the error message for the corresponding errno - what is the full message being emitted? That will tell us more about the circumstances leading to the message.
|
2025-04-01T04:10:28.810533 | 2017-01-25T13:15:42 | 203100068 | {
"authors": [
"tmiotto"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14461",
"repo": "HewlettPackard/oneview-chef",
"url": "https://github.com/HewlettPackard/oneview-chef/pull/141"
} | gharchive/pull-request | API300 Enclosure
Description
Adds support to enclosure in API300
Issues Resolved
#140
Check List
[x] New functionality includes testing.
[x] All tests pass ($ rake test).
[x] New resources and/or actions have been included in the matchers.rb and matchers_spec.
[ ] New functionality has been documented in the README if applicable.
[ ] New functionality has been thoroughly documented in examples (please include helpful comments).
[x] Changes documented in the CHANGELOG.
The value can really be nil in some patch actions.
Regarding the :refresh, it makes total sense, I'll send a patch to fix it.
|
2025-04-01T04:10:28.816020 | 2022-10-07T13:24:15 | 1401170352 | {
"authors": [
"ClaireHayard",
"alisha-k-kalladassery"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14462",
"repo": "HewlettPackard/terraform-provider-oneview",
"url": "https://github.com/HewlettPackard/terraform-provider-oneview/issues/501"
} | gharchive/issue | Deleting a Server Hardware Monitored fails - DL servers
Scenario/Intent
Hello,
I am trying to destroy a Monitored DL Server Hardware from my infrastructure. However the code fails with an error.
Environment Details
OneView Terraform SDK provider: v7.0.0-13
OneView Appliance Version: 6.10
Expected Result
The server is successfully deleted from the infrastructure.
Actual Result
Input: Configuration file
server_hardwares = [
{
hostname = "ip@"
}
]
In the tfstate
{
"version": 4,
"terraform_version": "0.13.7",
"serial": 3,
"lineage": "6d0dd6e7-da7f-d9f3-a213-eb4f3db628d3",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "oneview_server_hardware",
"name": "ServerHardwares",
"provider": "provider[\"registry.terraform.io/hewlettpackard/oneview\"]",
"instances": [
{
"index_key": "ip@",
"schema_version": 0,
"attributes": {
"configuration_state": "Monitored",
"force": true,
"hostname": "ip@",
"id": "37353738-3336-584D-5131-303030343037",
"initial_scope_uris": [],
"licensing_intent": "OneViewStandard",
"location_uri": "null",
"maintenance_mode": null,
"mp_dns_name": "",
"mp_firmware_version": "2.10 (02/18/2020)",
"mp_hosts_and_ranges": [],
"mp_ip_address": "",
"name": "ip@",
"one_time_boot": "Normal",
"password": "************",
"power_state": "Off",
"server_group_uri": "null",
"server_hardware_type_uri": "null",
"server_power_state": [],
"server_profile_uri": "null",
"type": "server-hardware-12",
"uid_state": "Unsupported",
"uri": "/rest/server-hardware/37353738-3336-584D-5131-303030343037",
"username": "username",
"uuid": "37353738-3336-584D-5131-303030343037",
"virtual_serial_number": "null",
"virtual_uuid": ""
}
}
]
}
]
}
Result of the terraform plan
Hi Claire,
We are not able to reproduce the issue with the configurations given. Please make sure you have the hardware in oneview within the scope which you are using. Kindly share a detailed log for us to debug further.
I am doing the deletion on DL servers: DL380 Gen10
Variables used in the code
licensing_intent = var.licensing_intent
configuration_state = var.configuration_state
force = var.force
hostname = var.hostname
username = var.username
password = var.password
Details Input values
variable "configuration_state" {
description = "Specifies the desired server state. Valid options are: Managed, Monitored."
type = string
default = "Monitored"
}
variable "licensing_intent" {
description = "The type of product license to assign to the server hardware."
type = string
default = "OneViewStandard"
}
|
2025-04-01T04:10:28.854537 | 2024-07-30T15:52:12 | 2438163559 | {
"authors": [
"HiDeoo",
"imtaotao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14463",
"repo": "HiDeoo/starlight-theme-rapide",
"url": "https://github.com/HiDeoo/starlight-theme-rapide/issues/6"
} | gharchive/issue | Incompatible with starlight-blog
Describe the bug
Incompatible with starlight-blog, when I use this plugin, starlight-blog will fail
To Reproduce
// astro.config.mjs
import { defineConfig } from 'astro/config';
import starlightBlog from 'starlight-blog';
import starlightThemeRapide from 'starlight-theme-rapide';
export default defineConfig({
plugins: [
starlightThemeRapide(),
starlightBlog({ title: 'Blog' }),
],
})
Expected behavior
Hope it can be compatible
How often does this bug happen?
Every time
System Info
No response
Additional Context
No response
If you can tell me how to change it, I'd be happy to submit a PR to participate.
Thanks for your feedback :raised_hands:
Both plugins use Starlight component overrides to customize and add features to Starlight. When multiple plugins override the same component, there is a conflict that cannot be easily resolved automatically and this is what happens in this case. You should even see in your console some warnings, e.g.:
[WARN] [starlight-blog-plugin] It looks like you already have a `ThemeSelect` component override in your Starlight configuration.
In these cases, manually composing your own override for the <ThemeSelect/> component will be the best solution. To do so, you'll need to edit your configuration:
plugins: [starlightThemeRapide(), starlightBlog()],
components: {
ThemeSelect: "./src/components/ThemeSelect.astro",
},
Then, create a new file ThemeSelect.astro in your src/components/ directory:
---
import RapideThemeSelect from 'starlight-theme-rapide/overrides/ThemeSelect.astro'
---
<div>
<a href="/blog/">Blog</a>
</div>
<RapideThemeSelect {...Astro.props}><slot /></RapideThemeSelect>
<style>
div {
border-inline-end: 1px solid var(--sl-color-gray-5);
display: none;
padding-inline: 1rem;
}
@media (min-width: 50rem) {
div {
display: flex;
}
}
a {
color: var(--sl-color-text-accent);
font-weight: 600;
text-decoration: none;
}
:global(.sl-markdown-content .posts) {
margin-top: 2rem;
}
</style>
This override will render the theme select from the starlight-theme-rapide plugin and, right before it, render a link to the blog just like the starlight-blog plugin does. You can even customize the link as you wish. Such a solution will render properly:
|
2025-04-01T04:10:28.868872 | 2024-07-22T00:31:43 | 2421679486 | {
"authors": [
"Higgi7567"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14464",
"repo": "Higgi7567/Gravity-Simulation",
"url": "https://github.com/Higgi7567/Gravity-Simulation/issues/4"
} | gharchive/issue | Function (update objects attributes)
This function will handle a lot of the mathematics needed for this simulation. Every time step, each object will have to update its velocity and position and test for collisions. Part of this is calculating the net gravitational acceleration on the object. This will be a lot of operations, as there will be on the order of n^2 (pairwise) calculations to run. It may be advantageous to attempt to find means of ignoring inconsequential objects from the calculations, maybe a conditional statement that ignores any acceleration less than some floor value.
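The per-step acceleration update described above might look like the following minimal Python sketch (function and parameter names are assumptions for illustration, not the project's actual code):

```python
import math

def net_accelerations(positions, masses, g=6.674e-11, floor=0.0):
    """Pairwise gravitational accelerations in 2D; O(n^2) per time step.

    Contributions whose magnitude falls below `floor` are skipped,
    implementing the "ignore inconsequential objects" idea.
    """
    n = len(positions)
    acc = []
    for i in range(n):
        ax = ay = 0.0
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r2 = dx * dx + dy * dy
            a = g * masses[j] / r2  # magnitude of acceleration from body j
            if a < floor:
                continue  # inconsequential contribution, skip it
            r = math.sqrt(r2)
            ax += a * dx / r
            ay += a * dy / r
        acc.append((ax, ay))
    return acc
```

With two equal unit masses one unit apart (and g set to 1 for clarity), the accelerations come out equal and opposite, as expected.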
Function written and completed at 21:36 on 7/21/2024. PR put in at same time.
|
2025-04-01T04:10:28.873584 | 2022-05-09T07:37:46 | 1229308564 | {
"authors": [
"HighDiceRoller"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14465",
"repo": "HighDiceRoller/icepool",
"url": "https://github.com/HighDiceRoller/icepool/issues/30"
} | gharchive/issue | Consider retiring min_outcome argument to Die()
One option would be that weights with no args means the outcomes are equal to *range(len(weights)).
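A generic illustration of that default (plain Python, not the actual icepool API):

```python
# If only weights are given, outcomes default to 0..len(weights)-1.
weights = [1, 2, 3]
die = dict(zip(range(len(weights)), weights))
assert die == {0: 1, 1: 2, 2: 3}
```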
Done
|
2025-04-01T04:10:28.877105 | 2017-07-26T02:12:07 | 245585574 | {
"authors": [
"disi33"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14467",
"repo": "HighSchoolHacking/GLS",
"url": "https://github.com/HighSchoolHacking/GLS/pull/253"
} | gharchive/pull-request | add support for new object instantiation
add support for instantiation of new objects
GLS syntax:
new: <TypeName> [<ConstructorArgument>*]
fixes #215
@JoshuaKGoldberg removed "object" from GLS syntax and updated the PR description to reflect that.
Also addressed all your other comments
|
2025-04-01T04:10:28.909691 | 2024-05-29T04:21:13 | 2322320661 | {
"authors": [
"diyashah28"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14468",
"repo": "HimanshuNarware/CareerZunction_Intern",
"url": "https://github.com/HimanshuNarware/CareerZunction_Intern/issues/256"
} | gharchive/issue | [Feature Request]: Add a 'Career Resources' Page
Is there an existing issue for this?
[X] I have searched the existing issues
Feature Description
The Career Resources page will contain resources such as resume writing tips, interview preparation guides, and links to career development workshops.
Use Case
This will make the website more informative for users and provide better guidance to them. This would also be a great addition to the features of the website.
Benefits
No response
Add ScreenShots
No response
Priority
High
Record
[X] I have read the Contributing Guidelines
[X] I'm a GSSOC'24 contributor
[X] I have starred the repository
@HimanshuNarware please assign this issue to me under GSSOC'24
|
2025-04-01T04:10:28.911975 | 2023-08-30T13:36:06 | 1873699095 | {
"authors": [
"Serrof",
"maisonobe"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14469",
"repo": "Hipparchus-Math/hipparchus",
"url": "https://github.com/Hipparchus-Math/hipparchus/issues/271"
} | gharchive/issue | Unused argument in default initial step of adaptive integrators
In AdaptiveStepsize(Field)Integrator, method initializeStep has a fourth argument mapper that is not used
Good catch, thanks!
|
2025-04-01T04:10:29.260663 | 2016-04-19T21:32:57 | 149590421 | {
"authors": [
"nicoruti",
"templarfelix"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14470",
"repo": "Hive2Hive/Hive2Hive",
"url": "https://github.com/Hive2Hive/Hive2Hive/issues/140"
} | gharchive/issue | No have more listener of connection success and disconnected ?
Is there another method to see when a peer is disconnected from or connected to the server?
Sorry, I don't get the issue. What do you mean?
|
2025-04-01T04:10:29.299758 | 2021-02-07T16:07:56 | 802987798 | {
"authors": [
"MikeMcQuaid",
"SinisterStairs"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14473",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/issues/10555"
} | gharchive/issue | Suggestion: Add notes and/or tags to formulae/casks
Provide a detailed description of the proposed feature
It would be nice if an optional --note and/or multiple --tag could specified when install-ling packages.
My use cases are primarily for correlating Homebrew packages with software not managed by Homebrew. For examples, noting why particular packages were installed:
--note "This library was installed because 3rd party software X [not in Homebrew] requires it"
--note "This older version of Java 6 is required to run an applicant's JAR file"
and/or, more sophisticated tagging could be implemented:
Remind myself to delete packages that are only needed temporarily: --tag delete
Categorize packages that will attempt to run as root: --tag sudo
Categorize packages that are games: --tag game
If both notes and tags were implemented, using the examples above, a use case could be:
--note "This version of Java 6 is required to run an applicant's JAR file" --tag delete to note why I installed Java 6, and a tag to remind myself to delete it later
If either --note or --tag were implemented, it'd be natural for them to be usable with other commands:
brew pin --note "Software Z does not support newer versions"
brew list --installed --tag delete to show all installed packages that are tagged for deletion
brew note to add/clear notes to a package (that may or may not be installed)
brew tag to add/remove/clear tags to a package (that may or may not be installed)
brew bundle dump could include notes/tags to be applied, or simply add them as #comments for reference
My apologies in advance if this has been proposed in the past.
What is the motivation for the feature?
Primarily for correlating Homebrew formulae/casks with software, libraries, scripts, etc. that are not managed by Homebrew. Also a way to organize packages, or remind the user about the need for particular packages.
How will the feature be relevant to at least 90% of Homebrew users?
It would help users remember why they installed a package, or to help categorize packages.
For examples:
brew list --installed --tag gfx to show all the packages I installed that are used for graphics
brew list --tag games to show all packages that have been tagged as games
brew list --installed --tag delete to list all the packages I've tagged to be deleted
What alternatives to the feature have been considered?
Excel/Sheets, text file, pen & paper, human memory
Sorry, passing on this. A Brewfile with comments is likely the best current workaround.
|
2025-04-01T04:10:29.304141 | 2017-08-18T14:20:37 | 251260114 | {
"authors": [
"HodorTheCoder",
"ilovezfs"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14474",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/issues/3065"
} | gharchive/issue | How to figure out what's triggering "Calling fails_with :llvm is deprecated!" in formula.rb and software_spec.rb
Hello,
Recently after updating brew I started getting this output whenever installing or upgrading packages:
Warning: Calling fails_with :llvm is deprecated!
There is no replacement.
/usr/local/Homebrew/Library/Homebrew/formula.rb:2373:in `fails_with'
Warning: Calling fails_with :llvm is deprecated!
There is no replacement.
/usr/local/Homebrew/Library/Homebrew/software_spec.rb:207:in `fails_with'
Warning: Calling fails_with :llvm is deprecated!
There is no replacement.
/usr/local/Homebrew/Library/Homebrew/software_spec.rb:207:in `fails_with'
Warning: Calling fails_with :llvm is deprecated!
There is no replacement.
/usr/local/Homebrew/Library/Homebrew/software_spec.rb:207:in `fails_with'
I've grep'd the entire directory structure but cannot seem to find what package might be causing these warnings-- is there any way to do a super verbose output during an upgrade to see what might be causing these? I tried doing a
brew list -1 | xargs brew install
to see if I could single out what was causing it but no dice.
Thanks
Please note we will close your issue without comment if you delete or do not fill out the issue checklist and provide ALL the requested information.
Will re-do.
|
2025-04-01T04:10:29.311271 | 2016-06-29T18:21:05 | 162993062 | {
"authors": [
"UniqMartin",
"kevmoo"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14475",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/issues/420"
} | gharchive/issue | homebrew/core showing up twice on 'brew tap'...
brew update shows every entry twice.
brew tap shows homebrew/core twice.
I'm sorry, but you provided so little information that it's hard, if not impossible, to provide any advice. What is the output of the following commands?
brew config
brew doctor
brew tap
ls -AlF "$(brew --repository)"/Library/Taps/*
~/> brew config
HOMEBREW_VERSION: 0.9.9
ORIGIN: https://github.com/Homebrew/brew.git
HEAD: db76a0f4cc3838658919570b3453edbcb9ed2fcd
Last commit: 5 hours ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 14be211146d6c3bb0ab1aa278a9349f97c42ecb4
Core tap last commit: 5 hours ago
HOMEBREW_PREFIX: /Users/kevmoo/homebrew
HOMEBREW_REPOSITORY: /Users/kevmoo/homebrew
HOMEBREW_CELLAR: /Users/kevmoo/homebrew/Cellar
HOMEBREW_BOTTLE_DOMAIN: https://homebrew.bintray.com
CPU: octa-core 64-bit haswell
Clang: 7.3 build 703
Git: 2.7.4 => /usr/local/git/current/bin/git
Perl: /usr/bin/perl
Python: /Users/kevmoo/homebrew/bin/python => /Users/kevmoo/homebrew/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/bin/python2.7
Ruby: /Users/kevmoo/homebrew/opt/ruby/bin/ruby => /Users/kevmoo/homebrew/Cellar/ruby/2.3.1/bin/ruby
Java: 1.8.0_91, 1.8.0_72, 1.8.0_60
System Ruby: 2.0.0-p648
OS X: 10.11.5-x86_64
Xcode: 7.3.1
CLT: <IP_ADDRESS>.1.1461711523
X11: N/A
~/> brew doctor
Please note that these warnings are just used to help the Homebrew maintainers
with debugging if you file an issue. If everything you use Homebrew for is
working fine: please don't worry and just ignore them. Thanks!
Warning: Your Homebrew is not installed to /usr/local
You can install Homebrew anywhere you want, but some brews may only build
correctly if you install in /usr/local. Sorry!
~/> brew tap
caskroom/cask
dart-lang/dart
homebrew/core
homebrew/core
homebrew/versions
martido/brew-graph
~/> ls -AlF "$(brew --repository)"/Library/Taps/*
/Users/kevmoo/homebrew/Library/Taps/caskroom:
total 0
drwxr-xr-x 29 kevmoo eng 986 Jun 27 10:29 homebrew-cask/
/Users/kevmoo/homebrew/Library/Taps/dart-lang:
total 0
drwxr-xr-x 6 kevmoo eng 204 Jun 22 11:09 homebrew-dart/
/Users/kevmoo/homebrew/Library/Taps/homebrew:
total 0
drwxr-xr-x 11 kevmoo eng 374 Jun 22 11:09 homebrew-core/
drwxr-xr-x 5 kevmoo eng 170 Jun 1 09:19 homebrew-homebrew/
drwxr-xr-x 217 kevmoo eng 7378 Jun 27 10:29 homebrew-versions/
/Users/kevmoo/homebrew/Library/Taps/martido:
total 0
drwxr-xr-x 5 kevmoo eng 170 May 15 17:29 homebrew-brew-graph/
Thanks! I suspect the problematic duplicate entry is /Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-homebrew. Do you have any idea where it came from? That's a tap directory that should never exist (and there's no corresponding GitHub repository of the same name). It should be safe to remove this directory if you haven't made local modifications to any of the Homebrew formulae. (But you can also move that directory somewhere else, if you aren't entirely sure.) Before you do that, would you mind additionally providing the output of brew tap-info --installed?
That's a whole lota weird
~/> brew tap-info --installed
caskroom/cask: unpinned, 1 formula, 1 command
/Users/kevmoo/homebrew/Library/Taps/caskroom/homebrew-cask (8,198 files, 112.2M)
From: https://github.com/caskroom/homebrew-cask
dart-lang/dart: unpinned, 1 formula
/Users/kevmoo/homebrew/Library/Taps/dart-lang/homebrew-dart (259 files, 224.4K)
From: https://github.com/dart-lang/homebrew-dart
homebrew/core: unpinned, 3588 formulae
/Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-core (3,970 files, 19M)
From: https://github.com/Homebrew/homebrew-core
homebrew/core: unpinned, 3588 formulae
/Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-core (3,970 files, 19M)
From: https://github.com/Homebrew/homebrew-core
homebrew/versions: unpinned, 211 formulae
/Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-versions (1,301 files, 4.5M)
From: https://github.com/Homebrew/homebrew-versions
martido/brew-graph: unpinned, 1 formula
/Users/kevmoo/homebrew/Library/Taps/martido/homebrew-brew-graph (35 files, 23.1K)
From: https://github.com/martido/homebrew-brew-graph
It's not entirely surprising because homebrew/homebrew (or homebrew/homebrew-homebrew) is automatically re-interpreted as homebrew/core for compatibility when Homebrew was still hosted in a single repository named Homebrew/homebrew (now Homebrew/legacy-homebrew).
Just remove /Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-homebrew and all the duplication should be gone, not to mention that updates should become faster, too.
I suspect the problematic duplicate entry is /Users/kevmoo/homebrew/Library/Taps/homebrew/homebrew-homebrew. Do you have any idea where it came from?
I'd still be interested in an answer to this question.
All good. Not sure if this would be an interesting candidate for a brew doctor check – maybe only if it's reported again?
Thanks much! 😀
It's a really weird problem and I'm still not sure how you ended up in this situation. So far this looks like a singular event, but if more users come forward and report the same, we'll certainly take a closer look and see if this can be prevented or at least properly diagnosed.
I'm glad everything is back to normal for you. 😀
|
2025-04-01T04:10:29.321104 | 2020-12-16T17:41:06 | 769137303 | {
"authors": [
"BrewTestBot",
"MikeMcQuaid",
"dtrodrigues"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14476",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/pull/10039"
} | gharchive/pull-request | python: use built-in venv instead of virtualenv when installing formulae
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[ ] Have you successfully run brew typecheck with your changes locally?
[x] Have you successfully run brew tests with your changes locally?
[ ] Have you successfully run brew man locally and committed any changes?
This PR uses python's built-in venv to install formulae instead of the external virtualenv, which has dependencies and needs to be bootstrapped. The dependencies could also theoretically differ based on Python version.
Current caveat: venv uses ensurepip to install pip and setuptools using the wheels that are bundled with cpython, even if a newer one is installed, which is 20.2.3 for pip and 49.2.1 for setuptools in Python 3.9.1: https://github.com/python/cpython/blob/1e5d33e9b9b8631b36f061103a30208b206fd03a/Lib/ensurepip/init.py#L16-L18, https://github.com/python/cpython/tree/1e5d33e9b9b8631b36f061103a30208b206fd03a/Lib/ensurepip/_bundled
Other downstream package managers patch ensurepip to point to their locally managed pip and setuptools: https://src.fedoraproject.org/rpms/python3.9/blob/2084e749c71fe494a9a4ed76ce9c2d149b76868c/f/00189-use-rpm-wheels.patch, so that's a possible solution to ensure the temporary installation venv is using the latest versions that Homebrew knows about vs either using a known outdated version or blindly using the latest available in PyPI.
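The ensurepip behavior described above is easy to observe directly; a small Python sketch (the version printed depends on the interpreter running it):

```python
import ensurepip

# A fresh venv is seeded from the wheels bundled inside CPython's
# ensurepip module, regardless of any newer pip installed elsewhere.
bundled_pip = ensurepip.version()
print(f"new virtual environments will bootstrap pip {bundled_pip}")
```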
Marking as a draft for now and can look into writing new tests if it's decided to go in this direction.
Follow-up to #9435.
Review period will end on 2020-12-17 at 17:41:06 UTC.
Review period ended.
Nice work, thanks @dtrodrigues!
|
2025-04-01T04:10:29.325567 | 2021-04-26T18:50:44 | 868022031 | {
"authors": [
"BrewTestBot",
"MikeMcQuaid",
"mistydemeo"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14477",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/pull/11254"
} | gharchive/pull-request | Unbottled: fix use of invalid argument
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew typecheck with your changes locally?
[x] Have you successfully run brew tests with your changes locally?
In #11077 (more specifically 6b5213286c389abe2706e67c1a60f0c9ed376ddd), the exact: keyword argument was renamed to no_older_versions:. However, we didn't rename every place that used it, and this broke the brew unbottled command.
Review period will end on 2021-04-27 at 18:50:44 UTC.
Review period skipped due to critical label.
Thanks @mistydemeo!
|
2025-04-01T04:10:29.334546 | 2023-05-19T17:46:59 | 1717602642 | {
"authors": [
"MikeMcQuaid",
"carlocab",
"reitermarkus",
"scpeters"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14478",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/pull/15465"
} | gharchive/pull-request | Guard GITHUB_* variables by GITHUB_ACTIONS.
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[ ] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew typecheck with your changes locally?
[x] Have you successfully run brew tests with your changes locally?
It seems https://github.com/Homebrew/brew/pull/15447 broke actually passing through those variables on GITHUB_ACTIONS, since CI is not set by default.
Nevermind, CI is actually set. Still, I think it makes more sense to guard the GITHUB_* variables by GITHUB_ACTIONS rather than CI.
Thanks @reitermarkus!
FYI we are using Jenkins to build bottles for osrf/simulation, and our bottle builds have been failing since this was merged. Our scripts have been manually setting the needed GITHUB_* variables along with CI. I appreciate the work that has been done to make it easy to use GitHub actions, but at least for historical reasons we are still using Jenkins and don't have immediate plans to migrate.
We can spoof the GITHUB_ACTIONS environment variable at our own risk I suppose, but I don't know if that will have other consequences.
I'm ok with reverting this or removing the guard completely. Which GITHUB_* variables do you need inside Jenkins?
The needed GITHUB_* variables are the ones used in homebrew-test-bot's lib/tests/formulae_detect.rb:
GITHUB_REPOSITORY
GITHUB_REF
GITHUB_SHA
GITHUB_BASE_REF
https://github.com/gazebo-tooling/release-tools/blob/master/jenkins-scripts/lib/homebrew_bottle_creation.bash#L36-L39
A revert definitely sounds appropriate then. Care to open one?
@carlocab @scpeters have opened one in https://github.com/Homebrew/brew/pull/15478, should auto-merge soon.
Thanks @MikeMcQuaid!
|
2025-04-01T04:10:29.339831 | 2017-06-12T21:08:35 | 235366302 | {
"authors": [
"MikeMcQuaid",
"mansimarkaur",
"mistydemeo"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14479",
"repo": "Homebrew/brew",
"url": "https://github.com/Homebrew/brew/pull/2777"
} | gharchive/pull-request | Added tests for language/node.rb
[x] Have you followed the guidelines in our Contributing document?
[x] Have you checked to ensure there aren't other open Pull Requests for the same change?
[x] Have you added an explanation of what your changes do and why you'd like us to include them?
[x] Have you written new tests for your changes? Here's an example.
[x] Have you successfully run brew tests with your changes locally?
@mistydemeo
Just to check, did all of these tests output stdout/stderr? I notice they're all wrapped in shutup do.
@mansimarkaur Nice work so far! I'd recommend reading some posts from my talented coworker @jasonrudolph on testing that I think you'd find helpful:
http://jasonrudolph.com/blog/2008/07/30/testing-anti-patterns-the-ugly-mirror/
http://jasonrudolph.com/blog/2008/07/01/testing-anti-patterns-overspecification/
Thanks for reviewing my PR, @mistydemeo and @MikeMcQuaid. Also, thanks for the links you gave.
I've made the changes you requested.
Please review, @mistydemeo
Thanks @mansimarkaur, great work!
|
2025-04-01T04:10:29.616949 | 2020-02-03T06:03:25 | 558863509 | {
"authors": [
"MatthewDorner",
"ocBruno"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14480",
"repo": "HospitalRun/hospitalrun-frontend",
"url": "https://github.com/HospitalRun/hospitalrun-frontend/issues/1783"
} | gharchive/issue | Updating patient causes duplicate due to document version conflict
🐛 Bug Report
When you update an existing document in PouchDB, you need to give the database the _rev of the document that's currently in the database. If you supply an older _rev, it will cause a document update conflict. When we update a Patient, this creates a new _rev but we are not pulling that out of the database and updating our Redux state with it. The logic in saveOrUpdate currently falls back to a regular save when an error is thrown from db.put, so if you try to update a Patient twice it creates a duplicate document (the first update works).
To Reproduce
Create a Patient and add two Related Persons. The Patient will be duplicated.
Hey @MatthewDorner!
I want to learn about pouchdb and thought this issue would give me a chance to dive into it a bit more.
Do you suggest we create a helper to retrieve the document to be updated's _rev value beforeUpdate or keep it stored in redux to begin with. Not sure which solution is the more performant choice for when we start working with large datasets.
For now I will try rework the logic in saveOrUpdate to return appropriate errors.
I'm not totally clear on the Redux stuff yet. Maybe try to find other codebases that use PouchDB and Redux and see how they do it?
There is also a concept of "isDirty" for telling when a data model has been modified but not been saved to database yet, that probably connects with this issue somehow.
I was taking a look at this code sample and it seems like we are sending a _rev when we update but on error we do seem to save the current state and not supply any error messages.
I'll continue troubleshooting but if anyone has any insights drop in!
async save(entity: T): Promise<T> {
  const { id, rev, ...valuesToSave } = entity
  const savedEntity = await this.db.put({
    _id: getTime(new Date()).toString(),
    ...valuesToSave
  })
  return this.find(savedEntity.id)
}

try {
  const existingEntity = await this.find(entity.id)
  const { id, rev, ...restOfDoc } = existingEntity
  const entityToUpdate = {
    _id: id,
    _rev: rev,
    ...restOfDoc,
    ...entity,
  }
  await this.db.put(entityToUpdate)
  return this.find(entity.id)
} catch (error) {
  return this.save(entity)
}
saveOrUpdate does:
await this.db.put(entityToUpdate)
return this.find(entity.id)
and then updatePatientSuccess is supposed to set that Patient to state. So that all seems like it ought to work. I'd try debugging around those functions and see what the _rev of the patient being passed around is, to see where it's breaking.
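The rev-conflict mechanism at the heart of this bug can be sketched with a tiny in-memory store (a minimal Python illustration, not the real PouchDB API; all names here are made up):

```python
import uuid

class TinyStore:
    """Minimal stand-in for a revisioned document store (PouchDB-like)."""

    def __init__(self):
        self.docs = {}  # _id -> (current_rev, doc)

    def put(self, doc):
        _id, rev = doc["_id"], doc.get("_rev")
        if _id in self.docs and rev != self.docs[_id][0]:
            # Same failure mode as PouchDB: stale/missing _rev on update
            raise ValueError("document update conflict")
        new_rev = uuid.uuid4().hex
        self.docs[_id] = (new_rev, doc)
        return {"id": _id, "rev": new_rev}

store = TinyStore()
first = store.put({"_id": "patient:1", "name": "Jane"})        # create
second = store.put({"_id": "patient:1", "_rev": first["rev"]})  # update ok
try:
    # If state still holds the old rev, the next update conflicts.
    store.put({"_id": "patient:1", "_rev": first["rev"]})
except ValueError:
    # Falling back to a plain save under a fresh _id is exactly what
    # produces the duplicate patient document.
    store.put({"_id": "patient:2", "name": "Jane"})
assert len(store.docs) == 2  # the duplicate now exists
```

Each successful put returns a new rev that the caller must carry forward; losing it in Redux state is enough to trigger the fallback path.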
Currently doing some debugging, since I'm having trouble even adding a related person; it seems to be something to do with a missing patientId on save.
|
2025-04-01T04:10:29.621673 | 2020-08-31T04:00:51 | 688879120 | {
"authors": [
"UmairKamran",
"WinstonPoh",
"jackcmeyer",
"mani9896",
"morrme",
"tehKapa"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14481",
"repo": "HospitalRun/hospitalrun-frontend",
"url": "https://github.com/HospitalRun/hospitalrun-frontend/issues/2350"
} | gharchive/issue | Patient should be autopopulated when clicking New Appointment from Patient Appointments page
🚀 Feature Proposal
When looking at a patient's appointments on their patient profile page and a user clicks the New Appointment button, the current patient should be autopopulated when the New Appointment page loads
Hey @jackcmeyer, I would like to have a try at this issue, and a little guidance would be appreciated
@mani9896 Welcome! Go ahead!
Hi @morrme, first-time contributor here. Seeing as this issue is still open, mind if I give it a try?
ok @UmairKamran !
Hello! I'm currently finishing up refactoring the Scheduling/appointments module, so it might be good to pause this for a while?
Can you link your PR to this thread for reference?
@WinstonPoh ^
#2338 @tehKapa
Ok! @UmairKamran can you wait for the completion of #2338 before starting?
Sure @tehKapa! I'll keep an eye out for it.
Thanks for the heads up @WinstonPoh
@UmairKamran the PR for my task is here just FYI :) https://github.com/HospitalRun/hospitalrun-frontend/pull/2424/
@UmairKamran #2424 has been merged now :) thanks for your patience!
|
2025-04-01T04:10:29.751701 | 2016-03-20T23:36:14 | 142221155 | {
"authors": [
"TrevorBurnham",
"clbn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14482",
"repo": "HubSpot/react-select-plus",
"url": "https://github.com/HubSpot/react-select-plus/issues/5"
} | gharchive/issue | Selected options are not removed from the dropdown menu
I'm using a component with multi={true}. The options in the groups are not being removed from the list when selected. (Top-level options, i.e. outside of the groups, are removed just fine, though.)
@clbn Thanks for reporting this!
|
2025-04-01T04:10:29.753775 | 2017-02-26T20:28:50 | 210345847 | {
"authors": [
"TrevorBurnham",
"yonatanmk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14483",
"repo": "HubSpot/react-select-plus",
"url": "https://github.com/HubSpot/react-select-plus/issues/55"
} | gharchive/issue | Multiselect disable option should remove the disabled option if it's already selected
In the live demo, if a user were to add chocolate to the multiselect input and then disable the chocolate option, chocolate would remain in the input field with the new label text as so:
Shouldn't chocolate be removed from the input should the user disable the chocolate option?
This is the same behavior as in the upstream React Select project: http://jedwatson.github.io/react-select/
I see what you're saying, but the component isn't so opinionated as to hide options that are included in its values. The visible component state is a 1:1 reflection of the values prop. If an app wants to include an option that can be removed but not re-added, that's weird but it's not technically prohibited.
|
2025-04-01T04:10:29.762841 | 2020-10-01T07:39:34 | 712582954 | {
"authors": [
"HugoZink"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14484",
"repo": "HugoZink/IreNFist",
"url": "https://github.com/HugoZink/IreNFist/issues/55"
} | gharchive/issue | Getting cuffed while interacting only works as host
Someone correct me if I'm wrong, but according to the current code, you are only cuffed while interacting if you are the host. This is because the hook checks if you're the host. This should be changed to if self._unit == (the local player unit).
While I did implement this same code on HuskPlayerDamage, it seems unlikely that it actually triggers. If someone can confirm that it does then great, otherwise I'll change the code around a bit.
The reasoning behind this current check is because I don't want clients to get cuffed when playing with non-InF hosts, but now I realize that it just doesn't matter all that much.
Regardless of whether or not this was actually broken, it is now fixed.
|
2025-04-01T04:10:29.766072 | 2020-02-15T07:38:37 | 565712049 | {
"authors": [
"HuguesTHOMAS",
"YoungSun3Pan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14485",
"repo": "HuguesTHOMAS/KPConv",
"url": "https://github.com/HuguesTHOMAS/KPConv/issues/61"
} | gharchive/issue | performance on modelnet40
@HuguesTHOMAS Hi, thanks for your code. I trained KPconv_rigid on Modelnet40, and I got 90.7 accuracy (92.9 reported in paper). I don't know why.
Hi @YoungSun3Pan,
This score was obtained during validation and with a progressive voting scheme (this is the "vote validation" when you plot convergence). We tried several parameters and optimized to get this score. It is the only score of the paper that is not likely to be reproduced. Our focus is more on semantic segmentation.
Best,
Hugues
@HuguesTHOMAS Thanks, I have solved the above problems.
I have a very interesting problem when testing ShapeNetPart.
I got strange results during testing (only 40 mAP), but normal results during training (85 mAP on the val set)
I think this may just be an orientation issue. The test set is probably aligned in a different direction than the training set.
Check the input preparation and try to visualize models to check their orientation
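One quick way to test the orientation hypothesis is to rotate the test models before inference and see whether the score recovers. The rotation itself is only a few lines; a plain-Python sketch (no particular framework assumed):

```python
import math

def rotate_z(points, degrees):
    """Rotate (x, y, z) points about the z axis, e.g. to re-align a test
    set that was prepared with a different orientation convention."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

cloud = [(1.0, 0.0, 0.2), (0.0, 1.0, -0.2)]
rotated = rotate_z(cloud, 90.0)
```

Trying a few candidate rotations (90, 180, 270 degrees about each axis) on a handful of test shapes and visualizing them next to training shapes usually makes an alignment mismatch obvious.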
|
2025-04-01T04:10:29.858631 | 2023-03-24T22:14:38 | 1640120450 | {
"authors": [
"hlomzik",
"robot-ci-heartex"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14486",
"repo": "HumanSignal/label-studio",
"url": "https://github.com/HumanSignal/label-studio/pull/3905"
} | gharchive/pull-request | feat: LSDV-3896-5: Panel collapse
Hi @Travis1282!
This PR was created in response to a PR in the upstream repo:
https://github.com/heartexlabs/label-studio-frontend/pull/1272
LSF PR was stale
|
2025-04-01T04:10:30.155166 | 2019-01-29T12:33:18 | 404269584 | {
"authors": [
"D-M-Moriarty",
"kamal-brill",
"rvema",
"stevegal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14487",
"repo": "Hygieia/hygieia-core",
"url": "https://github.com/Hygieia/hygieia-core/issues/37"
} | gharchive/issue | Not able to build in Windows 10
Hi all,
I wasn't able to build the hygieia-core project on Windows 10.
Steps to Reproduce:
Clone the repo
Open Command prompt/Power Shell
Navigate to the cloned repo
Run mvn clean install package
Expected:
Build to be successful
Actual:
I'm getting the same issue on Mac
<dependency>
<groupId>javax.annotation</groupId>
<artifactId>javax.annotation-api</artifactId>
<version>1.3.2</version>
</dependency>
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.2.11</version>
</dependency>
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-core</artifactId>
<version>2.2.11</version>
</dependency>
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-impl</artifactId>
<version>2.2.11</version>
</dependency>
<dependency>
<groupId>javax.activation</groupId>
<artifactId>activation</artifactId>
<version>1.1.1</version>
</dependency>
Adding the above dependencies fixed this issue for me
split packages causing issues with the module system? Try jdk8
Added Dependencies that fixed it
@D-M-Moriarty Is it #43 to fix the build issues with JDK11?
no on JDK10
On Tue, Feb 19, 2019 at 5:50 PM Ragha Vema<EMAIL_ADDRESS>wrote:
@D-M-Moriarty https://github.com/D-M-Moriarty Is it #43
https://github.com/Hygieia/hygieia-core/pull/43 to fix the build issues
with JDK11?
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/Hygieia/hygieia-core/issues/37#issuecomment-465238579,
or mute the thread
https://github.com/notifications/unsubscribe-auth/ARJ4fddYw-q0rb3rr8gyuOPFT4rT3vDPks5vPDlpgaJpZM4aX86y
.
I will merge it then to make it compatible with JDK10
Can you confirm if this is still an issue?
No it's not an issue for me anymore
On Fri, Apr 19, 2019, 18:14 Ragha Vema<EMAIL_ADDRESS>wrote:
Can you confirm if this is still an issue?
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/Hygieia/hygieia-core/issues/37#issuecomment-484960652,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AEJHQ7OCMRENDIZXICLTNKDPRH4X7ANCNFSM4GS7Z2ZA
.
Thanks for confirming!
|
2025-04-01T04:10:30.168444 | 2024-05-10T13:20:29 | 2289723546 | {
"authors": [
"lampajr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14488",
"repo": "Hyperfoil/Horreum",
"url": "https://github.com/Hyperfoil/Horreum/issues/1682"
} | gharchive/issue | Find usages API not completely working
Describe the bug
Find label usages is not working properly in the UI; the API request seems good, but the UI is not properly filled in:
The text should be something like Report config <b>{loc.title}</b> in {loc.where} {loc.name ? loc.name : ""}
As you can see, loc.where and loc.name (and also loc.configId) are empty.
After some investigation, the API is returning those fields properly filled in.
IIUC the problem is in the OpenAPI definition, as it was constructed assuming some sort of inheritance which does not actually exist in OpenAPI specs:
In the horreum-api we have an abstract LabelLocation schema which is extended by several locations, e.g., LabelInReport
The autogenerated client code (for the models) does not reflect that inheritance and you'll have two distinct classes, the LabelLocation and LabelInReport (the latter having all fields coming from LabelLocation and their owns)
The UI calls the client method findUsages which is returning LabelLocation[] the most generic one, thus it will simply discard all other fields
To Reproduce
Simply look for a label (in a specific schema having some usages on reports) and click find usages
You should see the table is not properly filled in
If you click "Go to", it will redirect to an undefined report config id; this is because loc.configId is also undefined.
Version
What is the version of Horreum?
Discovered on 0.13, but it was also affecting previous versions.
I think the proper way to handle this use case is using the Discriminator.
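For reference, a discriminator works by letting one payload property select the concrete schema, which is how a generated client can keep the per-type fields instead of collapsing everything to the base LabelLocation. A minimal Python sketch of that resolution step (the mapping values other than LabelInReport are made up for illustration):

```python
# Sketch of OpenAPI 3 discriminator resolution: a property of the payload
# picks the concrete schema. Mapping values besides LabelInReport are
# illustrative, not Horreum's actual schema names.
LOCATION_SCHEMAS = {
    "REPORT": "LabelInReport",
    "VARIABLE": "LabelInVariable",
}

def resolve_location_schema(payload):
    kind = payload.get("type")
    if kind not in LOCATION_SCHEMAS:
        raise ValueError(f"unknown label location type: {kind!r}")
    return LOCATION_SCHEMAS[kind]

loc = {"type": "REPORT", "configId": 7, "title": "Perf", "where": "report"}
print(resolve_location_schema(loc))  # -> LabelInReport
```

With a discriminator declared in the spec, the generator can emit this dispatch for you, so fields like configId and where survive deserialization.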
|
2025-04-01T04:10:30.172602 | 2022-05-11T15:13:55 | 1232815289 | {
"authors": [
"SamYuan1990",
"davidkhala"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14489",
"repo": "Hyperledger-TWGC/tape",
"url": "https://github.com/Hyperledger-TWGC/tape/issues/259"
} | gharchive/issue | replace azp by github action
currently the only blockers are the release publish and docker package publish steps
I found
https://github.com/softprops/action-gh-release
https://docs.github.com/cn/repositories/releasing-projects-on-github/automatically-generated-release-notes
@davidkhala , would you please have a try?
@SamYuan1990 This https://github.com/softprops/action-gh-release doesn't appear to come from a verified user, and our TWGC repository's action settings are as follows
Do we need to further relax the usage restrictions?
Can we use
https://docs.github.com/cn/repositories/releasing-projects-on-github/automatically-generated-release-notes#example-configuration
this one?
GitHub's own release action seems to no longer be maintained.
If that doesn't work, we may need to consult ryjones
A simple approach is to add softprops/action-gh-release@v1 to the allowlist; I'm trying that now
done
|
2025-04-01T04:10:30.180472 | 2023-07-12T11:36:49 | 1800788467 | {
"authors": [
"fourthletter",
"sandeepsajan0"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14490",
"repo": "HyphaApp/hypha",
"url": "https://github.com/HyphaApp/hypha/pull/3475"
} | gharchive/pull-request | Add status history section to invoice detail page
Refs #3474 (2nd point)
Looks really good! Thank you.
|
2025-04-01T04:10:30.241005 | 2023-05-25T18:40:29 | 1726309325 | {
"authors": [
"cdnninja",
"lucianoberger"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14491",
"repo": "Hyundai-Kia-Connect/kia_uvo",
"url": "https://github.com/Hyundai-Kia-Connect/kia_uvo/issues/646"
} | gharchive/issue | Addition of Bluelink/Hyundai Brazil
Hello! I would like to request that the Bluelink Brasil system be added here. How can I help with this problem?
We will need someone to sniff the traffic and provide it.
|
2025-04-01T04:10:30.242311 | 2021-05-27T05:45:55 | 903242490 | {
"authors": [
"Hyuto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14492",
"repo": "Hyuto/Analisis-Sentimen-Corona-DKI-Jakarta",
"url": "https://github.com/Hyuto/Analisis-Sentimen-Corona-DKI-Jakarta/pull/1"
} | gharchive/pull-request | Update - Code Improvement
Code Improvement
Redesigning project directory and file names ✔️
Code Improvement ✔️
Executable scripts and notebooks ❓
Update README ❓
|
2025-04-01T04:10:30.251607 | 2024-11-18T13:05:38 | 2668475566 | {
"authors": [
"GSKang94"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14493",
"repo": "I-am-PUID-0/DMB",
"url": "https://github.com/I-am-PUID-0/DMB/issues/89"
} | gharchive/issue | Zilean failed to initialize after last update
Describe the bug
Getting ERROR - riven_backend subprocess: | zilean.validate - Zilean failed to initialize: HTTPConnectionPool(host='localhost', port=8182): Max retries exceeded with url: /healthchecks/ping (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f70f3b2ed50>: Failed to establish a new connection: [Errno 111] Connection refused'))
It was working fine before the update
It's resolved. Thank you!
|
2025-04-01T04:10:30.259747 | 2017-10-19T13:34:31 | 266844356 | {
"authors": [
"hayfield"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14494",
"repo": "IATI/pyIATI",
"url": "https://github.com/IATI/pyIATI/pull/193"
} | gharchive/pull-request | Update version number and release date in changelog - 0.1.0
The intention is to have something to call 0.1.0 today
Should probably wait until #164 is passing before this is merged.
|
2025-04-01T04:10:30.263604 | 2022-10-01T05:25:13 | 1393248310 | {
"authors": [
"imyoungsparda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14495",
"repo": "IAmTamal/Milan",
"url": "https://github.com/IAmTamal/Milan/pull/325"
} | gharchive/pull-request | Fix:Updated navlinks, Now changes color on hover
Issue 📐
**I, Ayush Raj, have worked on #322**
Guidelines 🔐
I accept the fact that I have followed the guidelines and have not copied the code from around the internet
[x] Contribution Guidelines
[x] Code of Conduct
Issue to be closed 🛅
**My pull request closes #322**
Screenshots 📷
Here are the pictures of the changes that I have made 🔽
Yes sir, Sure.
Let me do this
It was nice working on this project. Will keep contributing to this wonderful project.
Thank You so Much @IAmTamal
|
2025-04-01T04:10:30.277224 | 2016-11-26T01:28:02 | 191780126 | {
"authors": [
"dshuffma-ibm",
"ymonk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14496",
"repo": "IBM-Blockchain/learn-chaincode",
"url": "https://github.com/IBM-Blockchain/learn-chaincode/pull/66"
} | gharchive/pull-request | Compiled my code
Modified chaincode_start.go
I'm going to close this b/c I think you made this PR accidentally? You were probably just trying to update your own fork.
|
2025-04-01T04:10:30.281551 | 2020-03-24T20:24:11 | 587256727 | {
"authors": [
"rayjlinden",
"ullumullu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14497",
"repo": "IBM-Cloud/redli",
"url": "https://github.com/IBM-Cloud/redli/issues/19"
} | gharchive/issue | Take arguments from stdin
Was almost happy to find this tool. Unfortunately, I need to be able to do this:
redli -h hostname -a abc1234567890abc --tls set bar < filename.json
I.e. load the data from a file. It is a major pain to not have that functionality. SO...
I guess it's back to the drawing board...
I assume you're talking about this feature https://redis.io/topics/rediscli#getting-input-from-other-programs. Right?
|
2025-04-01T04:10:30.307495 | 2017-04-03T19:03:46 | 219030609 | {
"authors": [
"tfrank64"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14498",
"repo": "IBM-Swift/BluePic",
"url": "https://github.com/IBM-Swift/BluePic/issues/396"
} | gharchive/issue | Replace MCA with App ID
When using the deploy to Bluemix button, it failed because I can no longer create new MCA service instances. So we need to migrate to using the App ID service: https://console.ng.bluemix.net/catalog/services/app-id
Looks like I will need to replace the mca server sdk with:
https://github.com/ibm-cloud-security/appid-serversdk-swift
Also, I will replace the BMSFacebookAuth pod with:
https://github.com/ibm-cloud-security/appid-clientsdk-swift
BluemixAppId does not support API endpoint protection currently, so we aren't using it. A side effect is we don't have access to the AuthorizationContext.
This means I might have to assign a userId and deviceId manually from the iOS client side.
This is complete, just need to merge to master
|
2025-04-01T04:10:30.318048 | 2017-04-05T09:34:45 | 219523183 | {
"authors": [
"codecov-io",
"shmuelk"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14499",
"repo": "IBM-Swift/Kitura-net",
"url": "https://github.com/IBM-Swift/Kitura-net/pull/184"
} | gharchive/pull-request | Run removeIdleSockets only once at shutdown.
Description
When the HTTP server shuts down, it runs removeIdleSockets to clean things up. This is being run in the epoll wait thread. Unfortunately if there are multiple epoll_wait threads then there can be a situation where multiple threads are trying to modify a Dictionary, which can cause a crash.
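The shape of the fix can be sketched in Python: guard the shared table with a lock and make the cleanup idempotent, so it effectively runs once no matter how many waiter threads reach shutdown. The class and field names below are invented for illustration, not Kitura-net's actual code:

```python
import threading

class SocketManager:
    """Sketch of the fix: several epoll-wait threads may reach shutdown,
    but the idle-socket cleanup must touch the shared table exactly once."""

    def __init__(self):
        self._sockets = {1: "idle", 2: "active", 3: "idle"}
        self._lock = threading.Lock()
        self._cleaned_up = False

    def remove_idle_sockets(self):
        with self._lock:              # serialize access to the shared dict
            if self._cleaned_up:      # ...and run the cleanup only once
                return
            self._cleaned_up = True
            idle = [fd for fd, state in self._sockets.items() if state == "idle"]
            for fd in idle:
                del self._sockets[fd]

mgr = SocketManager()
mgr.remove_idle_sockets()
print(mgr._sockets)  # -> {2: 'active'}
```

Without the once-only guard, two threads can iterate and mutate the same dictionary concurrently, which is exactly the crash scenario described above.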
Motivation and Context
Fix the broken build
How Has This Been Tested?
Ran Kitura-net unit tests on Linux. This is Linux only code.
Checklist:
[ ] I have submitted a CLA form
[ ] If applicable, I have updated the documentation accordingly.
[ ] If applicable, I have added tests to cover my changes.
Codecov Report
Merging #184 into master will increase coverage by 0.02%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #184 +/- ##
==========================================
+ Coverage 76.92% 76.94% +0.02%
==========================================
Files 29 29
Lines 4316 4316
==========================================
+ Hits 3320 3321 +1
+ Misses 996 995 -1
Flag          Coverage Δ
#CHTTPParser  67.03% <ø> (+0.05%) :arrow_up:
#KituraNet    76.94% <ø> (+0.02%) :arrow_up:
Impacted Files                                 Coverage Δ
Sources/KituraNet/IncomingSocketManager.swift  85.45% <ø> (ø) :arrow_up:
Sources/CHTTPParser/http_parser.c              66.94% <0%> (+0.05%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d0c6d95...6fa7edd. Read the comment docs.
|
2025-04-01T04:10:30.319736 | 2017-01-18T11:58:43 | 201554848 | {
"authors": [
"djones6"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14500",
"repo": "IBM-Swift/Kitura",
"url": "https://github.com/IBM-Swift/Kitura/issues/972"
} | gharchive/issue | Memory leak in Kitura-Session
Kitura-Session leaks memory on Linux, because it uses NSUUID (Foundation) to generate a session ID, and this currently has two leaks (one upon initialization, and another when accessing the uuidString).
I believe this is contributing to the memory growth that @carlbrown has identified.
UUID does not seem to leak and produces the same results. Perhaps we can just swap over to using it instead?
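For comparison, the one-call pattern being proposed looks like this in Python's standard uuid module (a sketch, not Kitura-Session's actual code):

```python
import uuid

def new_session_id():
    # Analogue of Swift's UUID().uuidString: a random (version 4) UUID,
    # upper-cased to match Foundation's formatting.
    return str(uuid.uuid4()).upper()

sid = new_session_id()
print(len(sid), sid.count("-"))  # -> 36 4
```

The value-type API keeps the identifier generation allocation-free at the call site, which is the property the leak fix relies on in the Swift version.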
|
2025-04-01T04:10:30.326374 | 2020-02-26T15:08:32 | 571452782 | {
"authors": [
"CLAassistant",
"mbarnach"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14501",
"repo": "IBM-Swift/Kitura",
"url": "https://github.com/IBM-Swift/Kitura/pull/1498"
} | gharchive/pull-request | Add OpenAPI 3.0.x compatibility documentation.
Make the middlewares documentation-aware.
Add optional getters to the middleware to better document your code.
Keep 100% compatibility with Swagger 2.0 documentation.
Description
Generate an OpenAPI 3.0.x documentation using the Kitura-OpenAPI framework.
The new features are:
a better documentation of the routes;
middlewares can now add extra documentation about their parameters and their security policy;
sub-routers are fully supported: sub-router routes are visible in the documentation;
security schemes can be documented
tags are supported;
servers can be fully described.
This PR requires Kitura-OpenAPI PR #29 (https://github.com/IBM-Swift/Kitura-OpenAPI/pull/29) for a correct display of OpenAPI specification. This is not affecting the Swagger 2.0 display.
Motivation and Context
The automatic documentation of routes could be improved.
The OpenAPI 2.0 (aka Swagger 2.0) is outdated and the 3.0 should be used instead.
How Has This Been Tested?
A side project to see the result is available in https://github.com/mbarnach/KituraOpenAPIExample
Checklist:
[X] If applicable, I have updated the documentation accordingly.
[/] If applicable, I have added tests to cover my changes.
I will write the tests now. I think it makes sense to have both in place, as it is too easy to break something. By the way, in the PR, I've fixed the usage of arrays in Swagger 2.0 that was broken with some middleware getters. It will be big to cover everything, but I will try to set up the groundwork for it.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:10:30.339276 | 2017-07-31T15:26:13 | 246796328 | {
"authors": [
"codecov-io",
"youming-lin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14502",
"repo": "IBM-Swift/swift-html-entities",
"url": "https://github.com/IBM-Swift/swift-html-entities/pull/26"
} | gharchive/pull-request | Support pods; update documentations
For #25 and #16
Codecov Report
Merging #26 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #26 +/- ##
=======================================
Coverage 90.52% 90.52%
=======================================
Files 2 2
Lines 401 401
=======================================
Hits 363 363
Misses 38 38
Flag           Coverage Δ
#HTMLEntities  90.52% <ø> (ø) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 17c408a...c107a33. Read the comment docs.
|
2025-04-01T04:10:30.345150 | 2022-02-03T06:12:28 | 1122693629 | {
"authors": [
"d0roppe",
"lmsurpre"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14503",
"repo": "IBM/FHIR",
"url": "https://github.com/IBM/FHIR/issues/3274"
} | gharchive/issue | invalid sort parameter value leads to 500 server error
Describe the bug
invalid sort parameter value leads to 500 Server Error but should be 400 Bad Request
Caused by: java.lang.IllegalArgumentException: Invalid sortValue: -lastUpdated
at com.ibm.fhir.persistence.HistorySortOrder.of(HistorySortOrder.java:61)
at com.ibm.fhir.persistence.util.FHIRPersistenceUtil.parseSystemHistoryParameters(FHIRPersistenceUtil.java:186)
... 65 more
Environment
main
To Reproduce
invoke whole-system history with a bogus sort value; for example
{{base}}/_history?_sort=bogus
Expected behavior
it should be a 400 Bad Request, not a 500.
Additional context
missed this in the description and QA for #2026
verified that this error is now a code 400, also verified other errors with _history are also code 400
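The general shape of the fix, rejecting an unparsable user-supplied parameter up front so it maps to a 400 instead of letting the IllegalArgumentException surface as a 500, can be sketched in Python (the handler and the set of valid sort values are illustrative, not FHIR server code):

```python
VALID_SORT_VALUES = {"none", "_lastUpdated", "-_lastUpdated"}  # illustrative set

class BadRequest(Exception):
    """Maps to an HTTP 400 response."""
    status = 400

def parse_sort(value):
    # Reject bad input up front so the caller can answer 400, not 500.
    if value not in VALID_SORT_VALUES:
        raise BadRequest(f"Invalid _sort value: {value!r}")
    return value

def handle_history(params):
    try:
        parse_sort(params.get("_sort", "none"))
        return 200
    except BadRequest as e:
        return e.status

print(handle_history({"_sort": "bogus"}))  # -> 400
```

The key point is that validation failures for client-supplied values raise a typed error the request layer translates to a client-error status, rather than an unchecked exception.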
|
2025-04-01T04:10:30.347192 | 2020-05-04T20:55:28 | 612162450 | {
"authors": [
"lmsurpre",
"prb112"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14504",
"repo": "IBM/FHIR",
"url": "https://github.com/IBM/FHIR/pull/1019"
} | gharchive/pull-request | Improve the Exception Case for Db2 Configured Multitenancy #1018
Signed-off-by: Paul Bastide<EMAIL_ADDRESS>
I would think this could cause confusion: what if a user sets multitenant to true for derby or postgresql? What should they expect?
IMHO, they should expect an error because those databases don't support that feature.
It's not so different from bootstrap=true, which currently only works for Derby.
I would think this could cause confusion: what if a user sets multitenant to true for derby or postgresql? What should they expect?
I explain in some is used to trigger custom logic for multitenant datastore support. I added to the readme.
|
2025-04-01T04:10:30.353040 | 2018-06-06T19:00:28 | 329992432 | {
"authors": [
"Tomcli",
"fplk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14505",
"repo": "IBM/FfDL",
"url": "https://github.com/IBM/FfDL/issues/90"
} | gharchive/issue | Setup more complicated than 3 steps in README
Documented commands do not show all how to setup FfDL from scratch
I noticed that setting up FfDL on macOS is more involved than the steps in the documentation - steps like make minikube, eval $(minikube docker-env) or make docker-build-base are omitted and it would also help to have instructions on how to install dependencies. In general, the following instructions should work:
# Install Docker
# Approximately https://docs.docker.com/docker-for-mac/install/
# Install Go
brew install go
brew install glide # Alternative: curl https://glide.sh/get | sh
export GOPATH=$HOME/go
echo "export GOPATH=$HOME/go" >> ~/.profile
export PATH=${GOPATH}/bin:$PATH
echo "export PATH=\$GOPATH/bin:\$PATH" >> ~/.profile
source ~/.profile
# Install Minikube
brew cask install virtualbox # or use installer from https://www.virtualbox.org/wiki/Downloads
brew cask install minikube
brew install kubernetes-helm
# Hyperkit
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
&& chmod +x docker-machine-driver-hyperkit \
&& sudo mv docker-machine-driver-hyperkit /usr/local/bin/ \
&& sudo chown root:wheel /usr/local/bin/docker-machine-driver-hyperkit \
&& sudo chmod u+s /usr/local/bin/docker-machine-driver-hyperkit
# Potential Alternative:
# brew install --build-from-source hyperkit
# Clone FfDL
mkdir -p $GOPATH/src/github.com/IBM && cd $_
git clone https://github.com/IBM/FfDL.git && cd FfDL
# Build FfDL
export VM_TYPE=minikube
# Modify Makefile and change MINIKUBE_DRIVER from xhyve to hyperkit
sed -i '' -e "s/MINIKUBE_DRIVER ?= xhyve/MINIKUBE_DRIVER ?= hyperkit/g" Makefile
glide install
make build
make minikube
eval $(minikube docker-env)
make docker-build-base
make docker-build
make deploy
With two minor things to add...
Probably need to install helm and kubectl as well
Need to add instructions to install Docker
...and three questions:
Would you like me to do this and submit a PR?
Which document should this go into? docs/setup-guide.md?
Do we want to add a fully automatic script like we provide for DIND with the new PR? What does the end game look like regarding setup? Will we try to build one master installation script set for all platforms, one set for each platform or do we ultimately want to push the entire setup into tools like Ansible or helm?
Thanks in advance.
PS regarding troubleshooting:
We should also consider adding a docs/troubleshooting.md.
For instance, I have seen the following issues on Minikube:
If make deploy dies after "Initializing..." most likely VM_TYPE=minikube was not set.
If make deploy gets stuck at "Installing helm/tiller..." most likely helm is not installed.
Does that make sense? Do you want me to seed a troubleshooting file as well? Can you think of additional common errors?
I think we mentioned some of the requirements (like helm and docker) in the prerequisites, but having a setup-guide would definitely help.
For the fully automatic script, I don't think is necessary because we need to maintain it and it's mostly about how to setup the Kubernetes cluster on local machine. But we should point them to some Kubernetes documentations for setting up Kubernetes Cluster other than Minikube.
Moved to architecture repo. Will create PR in near future with setup instructions and troubleshooting.
|
2025-04-01T04:10:30.354326 | 2019-06-06T21:31:37 | 453247318 | {
"authors": [
"cclauss",
"huanzhang12"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14506",
"repo": "IBM/Image-Captioning-Attack",
"url": "https://github.com/IBM/Image-Captioning-Attack/pull/2"
} | gharchive/pull-request | Old style exceptions --> new style for Python 3
Old style exceptions are syntax errors in Python 3 but new style exceptions work as expected in both Python 2 and Python 3.
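For reference, the difference this PR addresses looks like this (the old form is shown only in comments, since it would be a SyntaxError if executed under Python 3):

```python
# Old style (Python 2 only; a SyntaxError under Python 3):
#     raise ValueError, "bad input"
#     except ValueError, e:
# New style (valid in both Python 2 and Python 3):
try:
    raise ValueError("bad input")
except ValueError as e:
    message = str(e)

print(message)  # -> bad input
```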
Merged. Thanks!
|
2025-04-01T04:10:30.366285 | 2019-09-20T23:37:25 | 496584185 | {
"authors": [
"MLnick",
"kmh4321"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14507",
"repo": "IBM/MAX-Question-Answering",
"url": "https://github.com/IBM/MAX-Question-Answering/pull/15"
} | gharchive/pull-request | Enables training on WML.
Documentation
[x] Model /README.md in the root directory includes a training section (use this template)
[x] Model index.md (in https://github.ibm.com/IBMCode/Code-Models) includes a training section (use this template
Model training assets
[x] All model training artifacts are located in the /training/ directory.
[x] How to train this model readme /training/README.md is customized.
[x] How to prepare data for model training readme /training/data_preparation/README.md is customized.
[x] /training/...-model-training.yaml (model training configuration file) is customized (for the model) but does not define values for the three bucket properties, compute_configuration.name is k80, training_data_local.path is set to sample_training_data/ or not set.
[x] /training/wml_setup.py (setup for model training) works as expected
[x] /training/wml_train.py (model training) works as expected
[x] /training/model_training_code/ contains customized versions of train-max-model.sh, training_requirements.txt and model-specific Python training code
Sample assets
[x] Existing sample assets (images, audio files, notebooks, etc) were moved from the /assets directory to /samples (if applicable) and references to them (e.g. in the /README.md updated)
Pre-trained model assets
[x] Pre-trained model assets on COS have been refreshed (if required)
[x] No pre-trained model assets are located in the /assets directory (if applicable)
[x] The /assets directory is no longer present in github.
Model-serving microservice
[x] Docker image is based on MAX-Base image v1.1.3+
[x] Docker image can be built using /Dockerfile to serve pre-trained model assets
[x] Docker image can be built using /Dockerfile to serve custom-trained model assets
[x] Model-serving microservice can serve pre-trained model assets
[x] Model-serving microservice can serve custom-trained model assets
[x] .dockerignore was updated to exclude model training output/temp files (exception generated content in the /custom_assets/ directory), e.g. training/training_output/
[x] .gitignore was updated to exclude model training output/temp files, e.g. training/training_output/
A training-enabled model should meet these requirements:
[x] /training/sample_training_data/README.md was customized (if a small sample data set is provided)
[x] /training/sample_training_data/ contains a small sample training data set
training/max-question-answering-model-building-code.zip file should be removed from the PR
@kmh4321 thanks for the updates. Tests still required and then looks ready.
@MLnick I added some tests, but since the model is trained for too few iterations, its answers will not make sense, so I'm just checking for a 200 and a non-empty answer response.
ok sure
Latest commit seemed to re-include training/max-question-answering-model-building-code.zip
Apart from that changes LGTM
|
2025-04-01T04:10:30.374397 | 2018-06-21T02:34:27 | 334315876 | {
"authors": [
"echarpibm",
"tonydiaz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14508",
"repo": "IBM/carbon-components-react",
"url": "https://github.com/IBM/carbon-components-react/issues/1018"
} | gharchive/issue | DatePicker - Date seems to update to a value not selected at times
[DatePicker]: Start/end date seems to update to a value not selected at times
Detailed description
Describe in detail the issue you're having.
When selecting various start and end dates on the DatePicker, at some point it will update the end date value when you are only updating the start date.
Is this a feature request (new component, new icon), a bug, or a general issue?
Bug
Is this issue related to a specific component?
DatePicker and DatePickerInput
What did you expect to happen? What happened instead? What would you like to see changed?
I expected the end date would stay the same and only the start date would be changed.
What browser are you working in?
Chrome
What version of the Carbon Design System are you using?
v5.50.0 but I reproduced this on Codesandbox using v 6.5.3
What offering/product do you work on? Any pressing ship or release dates we should be aware of?
Watson Assistant
Steps to reproduce the issue
Codesandbox
Begin with the start date as 6/18/2018 and end date to 6/20/2018
Select the end date 06/17/2018
Select the end date to 06/20/2018
Notice the start date updates to 6/18/2018 and end date get updated to 6/20/2018 when only the end date should have been updated
Another way:
Begin with the start date as 6/18/2018 and end date to 6/20/2018
Select the end date 06/11/2018
Select the end date to 06/19/2018
Notice the start date updates to 6/18/2018 and the end date gets updated to 6/19/2018 when only the end date should have been updated.
Additional information
Screenshots or code
Notes
This appears to be a duplicate of #1014
@echarpibm There is a difference between the issues. This issue is solely about the react component: dates selected won't match the date the component displays.
Issue #1014 is more about the react version of the component not operating the same way the carbon component does. Take a look at the screen captures on the issue; you will see that on react, when the date range is 6/11-6/15 and you click to update the end date and select something like 6/4, it will automatically update the end date to 6/11 and the start date to 6/4. The carbon component will keep the end date at 6/15 and just update the start date.
|
2025-04-01T04:10:30.384555 | 2021-01-11T11:49:44 | 783305187 | {
"authors": [
"ibm-reggie",
"jhart1685"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14509",
"repo": "IBM/container-registry-go-sdk",
"url": "https://github.com/IBM/container-registry-go-sdk/pull/5"
} | gharchive/pull-request | fix(Project): Hygiene change to bump release version
Signed-off-by: James Hart<EMAIL_ADDRESS>PR summary
Trivial change to trigger CI, having manually created an initial tag. This should allow the semantic-versioning automation to work.
Fixes: <!-- link to issue -->
PR Checklist
Please make sure that your PR fulfills the following requirements:
[x] The commit message follows the Angular Commit Message Guidelines.
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
PR Type
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] New tests
[x] Build/CI related changes
[ ] Documentation content changes
[ ] Other (please describe)
What is the current behavior?
What is the new behavior?
Does this PR introduce a breaking change?
[ ] Yes
[x] No
Other information
:tada: This PR is included in version 0.0.6 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:10:30.387963 | 2023-03-16T15:58:52 | 1627798962 | {
"authors": [
"ujjwalchk-it"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14510",
"repo": "IBM/ibm-iam-operator",
"url": "https://github.com/IBM/ibm-iam-operator/pull/608"
} | gharchive/pull-request | platform-auth-idp cm Upgrade handling for IM4.0
This PR is to update the platform-auth-idp cm during the CP3 upgrade:
Before (cp2 format)
$ oc get cm platform-auth-idp -o yaml | grep 'IDENTITY_MGMT_URL\|BASE_OIDC_URL\|IDENTITY_AUTH_DIRECTORY_URL\|IDENTITY_PROVIDER_URL'
BASE_OIDC_URL: https://<IP_ADDRESS>:9443/oidc/endpoint/OP
IDENTITY_AUTH_DIRECTORY_URL: https://<IP_ADDRESS>:3100
IDENTITY_MGMT_URL: https://<IP_ADDRESS>:4500
IDENTITY_PROVIDER_URL: https://<IP_ADDRESS>:4300
f:BASE_OIDC_URL: {}
f:IDENTITY_AUTH_DIRECTORY_URL: {}
f:IDENTITY_MGMT_URL: {}
f:IDENTITY_PROVIDER_URL: {}
During upgrade (cp3 format)
$ oc get cm platform-auth-idp -o yaml | grep 'IDENTITY_MGMT_URL\|BASE_OIDC_URL\|IDENTITY_AUTH_DIRECTORY_URL\|IDENTITY_PROVIDER_URL'
BASE_OIDC_URL: https://platform-auth-service:9443/oidc/endpoint/OP
IDENTITY_AUTH_DIRECTORY_URL: https://platform-auth-service:3100
IDENTITY_MGMT_URL: https://platform-identity-management:4500
IDENTITY_PROVIDER_URL: https://platform-identity-provider:4300
f:BASE_OIDC_URL: {}
f:IDENTITY_AUTH_DIRECTORY_URL: {}
f:IDENTITY_MGMT_URL: {}
f:IDENTITY_PROVIDER_URL: {}
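The rewrite amounts to swapping the old IP-based values for the in-cluster service URLs shown above. A minimal Python sketch of that mapping (the keys and target values are taken from the configmap output above; the function name is illustrative, not the operator's own code):

```python
# CP3 target values, taken from the configmap output above.
CP3_URLS = {
    "BASE_OIDC_URL": "https://platform-auth-service:9443/oidc/endpoint/OP",
    "IDENTITY_AUTH_DIRECTORY_URL": "https://platform-auth-service:3100",
    "IDENTITY_MGMT_URL": "https://platform-identity-management:4500",
    "IDENTITY_PROVIDER_URL": "https://platform-identity-provider:4300",
}

def upgrade_auth_idp(data):
    """Return a copy of the configmap data with the four URL keys
    rewritten from their CP2 (IP-based) to their CP3 (service-based) values."""
    upgraded = dict(data)
    for key, url in CP3_URLS.items():
        if key in upgraded:
            upgraded[key] = url
    return upgraded
```

Keys not in the mapping are left untouched, so the rest of the configmap survives the upgrade unchanged.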
GH Ref https://github.ibm.com/IBMPrivateCloud/roadmap/issues/57809
|
2025-04-01T04:10:30.396891 | 2024-11-29T13:57:43 | 2705207750 | {
"authors": [
"PiotrAniola82",
"kkazmierczyk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14511",
"repo": "IBM/javacore-analyser",
"url": "https://github.com/IBM/javacore-analyser/pull/58"
} | gharchive/pull-request | #57 Add more progress bars in the code
Fixes #57
I added progress bars in the console for the following long running actions:
Parsing javacore files
Generating xsls/xmls
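A minimal sketch of what such a bar looks like around the parsing loop, using tqdm (the parse step is a placeholder, and a no-op fallback keeps the sketch runnable even where tqdm is not installed):

```python
try:
    from tqdm import tqdm
except ImportError:
    # No-op fallback so the sketch still runs without tqdm installed.
    def tqdm(iterable, **kwargs):
        return iterable

def parse_javacores(paths):
    """Iterate over javacore files with a console progress bar."""
    parsed = []
    for path in tqdm(paths, desc="Parsing javacore files", unit=" javacore"):
        parsed.append(path)  # placeholder for the real parsing step
    return parsed
```

The `unit=" javacore"` argument is what produces the "javacore/s" rate shown in the bar output quoted below.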
This branch fails with error
Exception: Security exception: Uncontrolled data used in path expression
2024-12-06 14:36:12,832 [thread: 28428][ERROR][javacore_analyser_batch.py:101] Processing was not successful. Correct the problem and try again. Exiting with error 13
Traceback (most recent call last):
File "C:\Users\P40095820\PycharmProjects\javacore-analyser\src\javacore_analyser\javacore_analyser_batch.py", line 98, in main
process_javacores_and_generate_report_data(files, output_param)
File "C:\Users\P40095820\PycharmProjects\javacore-analyser\src\javacore_analyser\javacore_analyser_batch.py", line 154, in process_javacores_and_generate_report_data
javacore_set.generate_report_files(output_dir)
File "C:\Users\P40095820\PycharmProjects\javacore-analyser\src\javacore_analyser\javacore_set.py", line 129, in generate_report_files
self.__create_output_files_structure(output_dir)
File "C:\Users\P40095820\PycharmProjects\javacore-analyser\src\javacore_analyser\javacore_set.py", line 140, in __create_output_files_structure
raise Exception("Security exception: Uncontrolled data used in path expression")
Exception: Security exception: Uncontrolled data used in path expression
Also, the method def __create_output_files_structure(self, output_dir) should be static.
Furthermore, some progress bars are broken:
Populating snapshot collection: 100%|██████████| 279/279 [00:00<00:00, 3489.45 javacore/s]
|
2025-04-01T04:10:30.397941 | 2019-08-10T23:19:04 | 479314849 | {
"authors": [
"starpit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14512",
"repo": "IBM/kui",
"url": "https://github.com/IBM/kui/issues/2289"
} | gharchive/issue | add proxy test coverage to the core capabilities
we aren't currently running webpack+proxy tests for the core (layer1) capabilities
this also requires that pty-session-status test should be run in the sequential group: it kills the proxy (as part of the test), and so cannot be run in parallel with other webpack+proxy tests
|
2025-04-01T04:10:30.400001 | 2024-04-18T11:08:48 | 2250412347 | {
"authors": [
"MalarvizhiK",
"arpit-srivastava-ibm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14513",
"repo": "IBM/networking-go-sdk",
"url": "https://github.com/IBM/networking-go-sdk/pull/167"
} | gharchive/pull-request | feat: release managed rulesets
feat: release managed rulesets
Empty commit to trigger a release.
:tada: This PR is included in version 0.46.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:10:30.402047 | 2018-04-06T14:33:30 | 311999147 | {
"authors": [
"flatsiedatsie",
"stevemart"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14514",
"repo": "IBM/pixiedust-facebook-analysis",
"url": "https://github.com/IBM/pixiedust-facebook-analysis/issues/47"
} | gharchive/issue | Ethics?
Does this tool allow one to analyse and understand things that can be traced back to an individual Facebook user?
Hey @flatsiedatsie, the notebook uses data that is exported from Facebook's Analytics online tool. The data exported from FB Analytics does not include personally identifiable data. See https://developers.facebook.com/docs/analytics/properties
User IDs and user properties can't include any personally identifying information, such as people's names or email addresses.
Further... I'm fairly certain you have to be an admin of a group to look up analytics about the group and the users interacting with the group.
|
2025-04-01T04:10:30.409076 | 2021-01-13T21:22:26 | 785466411 | {
"authors": [
"ibm-devx-automation",
"padamstx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14515",
"repo": "IBM/platform-services-python-sdk",
"url": "https://github.com/IBM/platform-services-python-sdk/pull/71"
} | gharchive/pull-request | fix(IAM Access Groups): minor code re-gen after recent API changes
PR summary
Re-gen of IAM Access Groups
PR Checklist
Please make sure that your PR fulfills the following requirements:
[ ] The commit message follows the Angular Commit Message Guidelines.
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
PR Type
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] New tests
[ ] Build/CI related changes
[ ] Documentation content changes
[ ] Other (please describe)
What is the current behavior?
What is the new behavior?
Does this PR introduce a breaking change?
[ ] Yes
[ ] No
Other information
:tada: This PR is included in version 0.17.5 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:10:30.412218 | 2018-03-25T18:46:23 | 308374292 | {
"authors": [
"alonm",
"coveralls",
"shay-berman"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14516",
"repo": "IBM/ubiquity",
"url": "https://github.com/IBM/ubiquity/pull/202"
} | gharchive/pull-request | Dev - README update
This change is
Coverage remained the same at 54.731% when pulling 8d1525e680d0d030560b440b2b8a4a6caec5a001 on dev into 1878cf7d92f378da99ae7ac9a76534d27bc67a20 on master.
Review status: 0 of 1 files reviewed at latest revision, all discussions resolved.
|
2025-04-01T04:10:30.422927 | 2021-05-18T12:36:32 | 894332589 | {
"authors": [
"jchailloux",
"xguerin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14517",
"repo": "IBMStreams/OSStreams",
"url": "https://github.com/IBMStreams/OSStreams/issues/15"
} | gharchive/issue | Failed to install
jchailloux@MacBook-Pro OSStreams-main % make builder
[BLD] antlr3c
[+] Building 61.2s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.39kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/centos:7 2.0s
=> [auth] library/centos:pull token for registry-1.docker.io 0.0s
=> [1/2] FROM docker.io/library/centos:7@sha256:0f4ec88e21daf75124b8a9e5ca03c37a5e937e0e108a255d890492430789b60e 12.8s
=> => resolve docker.io/library/centos:7@sha256:0f4ec88e21daf75124b8a9e5ca03c37a5e937e0e108a255d890492430789b60e 0.0s
=> => sha256:0f4ec88e21daf75124b8a9e5ca03c37a5e937e0e108a255d890492430789b60e 1.20kB / 1.20kB 0.0s
=> => sha256:e4ca2ed0202e76be184e75fb26d14bf974193579039d5573fb2348664deef76e 529B / 529B 0.0s
=> => sha256:8652b9f0cb4c0599575e5a003f5906876e10c1ceb2ab9fe1786712dac14a50cf 2.75kB / 2.75kB 0.0s
=> => sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b43f2553732b55df3bc 76.10MB / 76.10MB 7.4s
=> => extracting sha256:2d473b07cdd5f0912cd6f1a703352c82b512407db6b05b43f2553732b55df3bc 5.2s
=> [2/2] RUN yum install -y deltarpm && yum install -y file gcc gcc-c++ make && curl -sL https://www.antlr3.org/download/C/libantlr3c-3.1.3.tar.gz -o libantlr3c-3.1. 45.7s
=> exporting to image 0.4s
=> => exporting layers 0.4s
=> => writing image sha256:709c93d9af11716a936daef77691ee10c644aa700074118760c41543e2987be5 0.0s
=> => naming to docker.io/openstreams/antlr3c:3.1.3.x86_64 0.0s
[PSH] antlr3c
The push refers to repository [docker.io/openstreams/antlr3c]
64106d10e3d5: Preparing
174f56854903: Preparing
denied: requested access to the resource is denied
make[2]: *** [push] Error 1
make[1]: *** [antlr3c-pkg-push] Error 2
make: *** [builder] Error 2
You don't need to build the image; you can simply use the ones on Docker Hub. But if you do build it, you need to use your local registry instead of the hub.
|
2025-04-01T04:10:30.481005 | 2017-02-09T14:45:20 | 206524286 | {
"authors": [
"joshmoore"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14519",
"repo": "IDR/idr-demo.openmicroscopy.org",
"url": "https://github.com/IDR/idr-demo.openmicroscopy.org/pull/12"
} | gharchive/pull-request | Deploy fix
Begin correcting the deployment page to match the state of IDR-0.3.3
Merging to pass off to @manics.
|
2025-04-01T04:10:30.494658 | 2021-09-30T19:20:12 | 1012536335 | {
"authors": [
"Aacashh",
"Dhruv9449",
"hg242322",
"mogiiee",
"sanz17"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14521",
"repo": "IEEE-VIT/concy-bot",
"url": "https://github.com/IEEE-VIT/concy-bot/issues/2"
} | gharchive/issue | add: alarm
Create an alarm function which, when called on Discord, will set an alarm for a particular time (24-hour clock) and alert the user once that time has been reached.
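In a discord.py bot this would typically be wrapped in a command handler; the timing logic itself is library-agnostic. A hedged stdlib sketch (the alert callback stands in for sending a Discord message, and the function names are made up for illustration):

```python
import asyncio
from datetime import datetime, timedelta

def seconds_until(hhmm, now=None):
    """Seconds from `now` until the next occurrence of HH:MM (24-hour clock)."""
    now = now or datetime.now()
    hour, minute = map(int, hhmm.split(":"))
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # that time already passed today -> tomorrow
    return (target - now).total_seconds()

async def set_alarm(hhmm, alert):
    """Sleep until HH:MM, then fire the alert callback."""
    await asyncio.sleep(seconds_until(hhmm))
    alert(f"Alarm! It is now {hhmm}.")
```

Inside a bot, `alert` would be replaced by something like sending a message back to the channel that set the alarm.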
i want to solve this issue
I have assigned you the issue. All the Best !!
i would like to work on this issue
I have ideas regarding how the alert system would work, and I know how to implement them as well. Please let me work on this issue.
|
2025-04-01T04:10:30.555572 | 2016-05-09T11:12:02 | 153753719 | {
"authors": [
"martingrimmer",
"merando"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14522",
"repo": "IIDP/OSTMap",
"url": "https://github.com/IIDP/OSTMap/issues/47"
} | gharchive/issue | commons: KeyHelperClass
Build a helper class for the keys of the RawTwitterData table.
byte[] buildPrefix(String timeStamp)
byte[] buildPrefix(long timeStamp)
...
Please extract commons functionality and remove duplicate code from stream and batch processing.
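A sketch of the idea in Python terms (the target class is Java, and the real OSTMap key layout is project-specific, so the byte layout below is purely illustrative): both overloads collapse into a single implementation that the stream and batch jobs can share.

```python
def build_prefix(timestamp):
    """Build a RawTwitterData row-key prefix from a timestamp given either
    as a string or as a long/int; both overloads share one implementation.
    The byte layout (UTF-8 of the epoch value) is illustrative only."""
    return str(int(timestamp)).encode("utf-8")
```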
|
2025-04-01T04:10:30.608740 | 2023-09-25T23:13:27 | 1912411814 | {
"authors": [
"IKavanagh",
"thepwrtank18"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14523",
"repo": "IKavanagh/BingChatInFirefox",
"url": "https://github.com/IKavanagh/BingChatInFirefox/issues/3"
} | gharchive/issue | Bing Chat is now a microsoft-edge link
https://www.microsoft.com/en-us/edge/launch/bing-chat-paywall?form=QBLH&q=bing ai&ch
Navigating to chat.bing.com still works for me but I don't think it will work indefinitely if Microsoft really do want to block access.
|
2025-04-01T04:10:30.619157 | 2024-06-17T10:59:08 | 2357048737 | {
"authors": [
"EinKaffeeBitte",
"IKavanagh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14524",
"repo": "IKavanagh/BingChatInFirefox",
"url": "https://github.com/IKavanagh/BingChatInFirefox/pull/9"
} | gharchive/pull-request | Add support for Copilot.Microsoft.com
Hi, this pull request should also add support for the new Bing Chat / Copilot URL, https://copilot.microsoft.com.
Uh... the README.md is quite different though, as this is from my fork of this repo, so feel free to update the README.
Why is support needed for https://copilot.microsoft.com/? It works perfectly for me without the add-on on Firefox.
It still has the conversation limit. Although Microsoft now allows people to use Copilot in other browsers, they set a limit for non-Edge browsers. On Microsoft Edge, the conversation limit is 30, but on other browsers it is 5.
Also, please now remove the sidebar, as it doesn't work. It says that the site doesn't let the browser display it in an iframe.
|
2025-04-01T04:10:30.656632 | 2024-02-01T19:36:35 | 2113317300 | {
"authors": [
"maxinelasp",
"tech3371"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14525",
"repo": "IMAP-Science-Operations-Center/sds-data-manager",
"url": "https://github.com/IMAP-Science-Operations-Center/sds-data-manager/issues/246"
} | gharchive/issue | Action Items from Integration Testings
List of things we need to figure out and fix:
Infrastructure
indexer.py updates:
Update the s3 event rule to listen for the whole imap/ folder. Right now, the rule allows only l0 data to go through, and because of that it caused some issues. If our l0 data file contained multiple apid(s), our processing code could produce more than one file, which won't be known by the batch starter lambda.
Decouple the StatusTracking and FileCatalog tables now. The status table will track this information: instrument, level, upstream_file(?), and batch job information. The file catalog table will only store information about files that were processed and uploaded to the s3 bucket.
batch_starter.py updates:
Update --dependency <data> to send the result from the query API, and maybe filter out file_path from the query result.
Update to use PreProcessingDependency table instead of .json file.
Update to not construct file_path_to_create and instead change --file_path to input_file_path (discuss it).
Update event input and rule to match changes from above.
How to track version and coordinate with imap_processing repo.
path_helper.py updates:
Improve the init functions of ScienceFilepathManager.py. Maxine has already started this work. Move this function to the sds-data-access repo or package.
indexer.py and upload_api.py updates:
Use and verify that upload API lambda is able to use sds-data-access's filepath validator.
imap_processing:
Based on what we decide to pass as command from batch_starter.py to Batch job, we need to update cli.py
Maybe write generic functions to upload and download processed files, and a function to read CDF files?
Improve folder structures to simplify imports
Discuss what version we are tracking in this repo
A note for the first indexer.py issue: currently, using the API upload function to upload a L1A file does not result in that file appearing in query results. Once this issue is resolved, you should be able to upload an L1A file and see it in the query.
|
2025-04-01T04:10:30.667714 | 2020-10-26T14:01:10 | 729589600 | {
"authors": [
"lesteve"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14527",
"repo": "INRIA/scikit-learn-mooc",
"url": "https://github.com/INRIA/scikit-learn-mooc/pull/78"
} | gharchive/pull-request | Add CI status to point directly to CircleCI website.
This would make it easier to see the generated website directly in a PR.
Let's merge and see.
|
2025-04-01T04:10:30.765988 | 2024-12-14T13:40:20 | 2739885588 | {
"authors": [
"Asger124",
"andsji"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14529",
"repo": "ITU-BDSA2024-GROUP18/Chirp",
"url": "https://github.com/ITU-BDSA2024-GROUP18/Chirp/issues/123"
} | gharchive/issue | Must Have: FIGURE OUT WHAT TO DO ABOUT HELGE AND ADRIAN USERS LOGIN -- SEE DESCRIPTION
In session 08 we were given this requirement:
please make sure that two users, Helge and Adrian respectively can login to your Chirp! applications. They should be able to do so using their email addresses<EMAIL_ADDRESS>and<EMAIL_ADDRESS>respectively). Helge wants to login with the password LetM31n! and Adrian wants to login with the password M32Want_Access.
In our application a user logs in with a Username and not an E-mail. E-mail and Username are unique in our application. Currently there a Helge and a Adrian user in our application and they are created in our db initializer script, and they are created without a password.
Essentially we need to figure out if we want to store a Helge and Adrian user with the requirements above. We need to take into account that helge probably will try to login in through github, if a user in our db has the same Username and or Email as the one on his github, he will not be allowed access. Also how do we interpret the: 'they should be able to do so using their email addresses<EMAIL_ADDRESS>and<EMAIL_ADDRESS>respectively). ' - as we in our application has enforced login through Username and not E-mail.
We have decided as a group to not give the Helge & Adrian users passwords, as these would either have to be hard-coded in the DbInitializer script or manually added by us through the UI of the application. We felt both of these solutions were bad practice.
|
2025-04-01T04:10:30.773860 | 2022-01-19T14:58:33 | 1108204652 | {
"authors": [
"emstoudenmire",
"mtfishman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14530",
"repo": "ITensor/ITensors.jl",
"url": "https://github.com/ITensor/ITensors.jl/issues/803"
} | gharchive/issue | [ENHANCEMENT] Improve format Github action
We could try this Github action for formatting:
name: Format suggestions
on:
pull_request:
concurrency:
# Skip intermediate builds: always.
# Cancel intermediate builds: only if it is a pull request build.
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}
jobs:
format:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
version: 1
- run: |
julia -e 'using Pkg; Pkg.add("JuliaFormatter")'
julia -e 'using JuliaFormatter; format("."; verbose=true)'
- uses: reviewdog/action-suggester@v1
with:
tool_name: JuliaFormatter
fail_on_error: true
filter_mode: added
used by ChainRulesCore.jl, which I think actually makes suggestions for formatting issues to change on the PR (whereas ours currently just says if there is a formatting issue, but doesn't suggest a change to make).
I was actually looking the other week to see if there was a way to do this. Would be good to have.
Could the action just run the formatter no matter what? Because in any case when the formatting is wrong, the next step is always to just run the formatter. Or I guess is the issue purely technical about needing to make a new commit after the formatter runs?
I was hoping that is what the above Github Action will do (make a commit to the PR with proposed changes to fix the formatting), but I haven't tested it.
Currently, the Github Action (https://github.com/ITensor/ITensors.jl/blob/8f8229a3ed56012a0252a30f49a421f157bf338a/.github/workflows/format_check.yml) does run no matter what (at every commit to a PR or the main branch), it just doesn't propose a change or make a new commit with formatting changes.
We also have this Github Action (https://github.com/ITensor/ITensors.jl/blob/8f8229a3ed56012a0252a30f49a421f157bf338a/.github/workflows/format_pr.yml) which makes a new PR periodically with formatting fixes. But it would nice if these behaviors were combined into a single action that directly aids when making a PR.
Yes, let's try these out. It would be great if every PR just had the formatter run on it and a new commit created automatically if the formatter results in a change.
I just realized that behavior could become annoying when developing.
For example, say you push a commit to a PR you are working on. Then the format action pushes its own commit. You continue developing and happen to work on the same lines the format action edited.
Maybe a better way would be to have a link on PRs on Github somewhere where you can manually make a commit that fixes any formatting issues.
Closed by #811.
|
2025-04-01T04:10:30.800111 | 2018-03-27T09:36:49 | 308893166 | {
"authors": [
"Ice3man543",
"codingo"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14533",
"repo": "Ice3man543/SubOver",
"url": "https://github.com/Ice3man543/SubOver/pull/9"
} | gharchive/pull-request | Updated instructions to represent go and fixed markdown headings
Removed *.py from usage instructions and fixed headings to be more representative of the information being displayed.
Thanks @codingo
|
2025-04-01T04:10:30.876252 | 2019-04-17T15:45:33 | 434355818 | {
"authors": [
"Y90SMH",
"brockallen",
"fcavaco"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14534",
"repo": "IdentityServer/IdentityServer4",
"url": "https://github.com/IdentityServer/IdentityServer4/issues/3196"
} | gharchive/issue | CSP /connect/authorize endpoint response
Hello,
I have been recently tasked on adding csp to our identityserver4 implementation.
We have a test app that uses idsrv authorize endpoint. i.e. this issue occurs while calling from the test app to the authorize endpoint and redirecting to the test app endpoint with token.
Question / Steps to reproduce the problem
when idsrv redirects back from the authorize endpoint it presents the html below which in turn fails csp validation with:
Refused to execute inline script because it violates the following Content Security Policy directive: "script-src https://XXXX 'self' ". Either the 'unsafe-inline' keyword, a hash ('sha256-orD0/VhH8hLqrLxKHD/HUEMdwqX6/0ve7c5hspX5VJ8='), or a nonce ('nonce-...') is required to enable inline execution.
The response Header is:
Content-Security-Policy: default-src 'self' ; script-src https://XXXX 'self' ;
is there anyway can avoid this issue? thought on making the csp directive on the idsrv more generic regarding the domain name but don't really want to allow inline scripts... ?
Minimal working example
<html>
<head>
<base target='_self'/>
</head>
<body>
<form method='post' action='https://identity-test/signin-oidc'>
<input type='hidden' name='code' value='XXX'/>
<input type='hidden' name='id_token' value='eyXXX'/>
<input type='hidden' name='scope' value='XXX'/>
<input type='hidden' name='state' value='XXX'/>
<input type='hidden' name='session_state' value='XXX'/>
<noscript>
<button>Click to continue</button>
</noscript>
</form>
<script>
window.addEventListener('load', function() {
document.forms[0].submit();
});
</script>
</body>
</html>
Can you add the hash to your CSP? Or have you seen it change between tests?
I'm not really sure how to take advantage of the hash or nonce idsrv sends in the response?!
If your current response header is:
Content-Security-Policy: default-src 'self'; script-src https://XXXX 'self';
Then the following should work:
Content-Security-Policy: default-src 'self'; script-src https://XXXX 'self' 'sha256-orD0/VhH8hLqrLxKHD/HUEMdwqX6/0ve7c5hspX5VJ8=';
when idsrv redirects back from the authorize endpoint it presents the html below which in turn fails csp validation with:
Refused to execute inline script because it violates the following Content Security Policy directive: "script-src https://XXXX 'self' ". Either the 'unsafe-inline' keyword, a hash ('sha256-orD0/VhH8hLqrLxKHD/HUEMdwqX6/0ve7c5hspX5VJ8='), or a nonce ('nonce-...') is required to enable inline execution.
There's no hash in the CSP response headers? This code should be adding it:
https://github.com/IdentityServer/IdentityServer4/blob/master/src/IdentityServer4/src/Endpoints/Results/AuthorizeResult.cs#L130
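For reference, the hash in such a CSP source expression is just the base64-encoded SHA-256 digest of the inline script body (the exact text between the script tags). A small sketch of computing it — this is not IdentityServer's own code, just the standard CSP hash-source calculation:

```python
import base64
import hashlib

def csp_script_hash(script_body):
    """Return a CSP source expression ('sha256-...') for an inline script.
    The digest must cover the exact text between <script> and </script>,
    whitespace included."""
    digest = hashlib.sha256(script_body.encode("utf-8")).digest()
    return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"
```

Any change to the inline script, even whitespace, produces a different hash, which is why a changing script body would make a fixed hash in the policy stop matching.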
Any update?
Sorry for just now returning to this; thanks, that solved the issue, i.e. adding the sha above to the script-src set.
Although, as I am using the NWebsec package, I would love to have a way of not overriding the CSP already emitted by IS4!?
|
2025-04-01T04:10:30.878621 | 2020-08-15T23:08:31 | 679656589 | {
"authors": [
"Rafal38",
"RwnRchrds",
"brockallen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14535",
"repo": "IdentityServer/IdentityServer4",
"url": "https://github.com/IdentityServer/IdentityServer4/issues/4752"
} | gharchive/issue | MissingMethodException: get_Scopes()
When calling app.UseIdentityServer(); during startup of an ASP.NET Core application, the following exception is thrown:
System.MissingMethodException: 'Method not found: 'System.Collections.Generic.ICollection`1<IdentityServer4.Models.Scope> IdentityServer4.Models.ApiResource.get_Scopes()'.'
Did you register any scopes in your Startup?
same issue with version v4.0.4
The current Microsoft templates are not compatible with the v4.x version. You will have to wait for Microsoft to fix them.
|
2025-04-01T04:10:30.886210 | 2020-10-01T15:33:25 | 712955307 | {
"authors": [
"SSMKittel",
"brockallen",
"leastprivilege"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14536",
"repo": "IdentityServer/IdentityServer4",
"url": "https://github.com/IdentityServer/IdentityServer4/issues/4938"
} | gharchive/issue | Secret Expiration behaves badly when DateTimeKind=Local
Using: IdentityServer 4.1.0
When an IClientStore or an IResourceStore returns a Client/ApiResource with a secret that has an expiration, it only works correctly if the expiration is in DateTimeKind.Utc
For DateTimeKind.Local, the secret is effectively offset by the local timezone.
i.e. With positive timezone offsets the secret is invalidated later than it should be.
With negative timezone offsets the secret is invalidated earlier than it should be.
Here's an example which illustrates the problem for ApiResource by using the InMemory store (though it occurs for client secret validation as well)
class Program
{
static async Task Main(string[] args)
{
await Run("UTC expired", DateTime.UtcNow.AddMinutes(-10), true);
await Run("UTC not expired", DateTime.UtcNow.AddMinutes(10), false);
await Run("Local expired", DateTime.Now.AddMinutes(-10), true); // This fails for positive timezone offsets
await Run("Local not expired", DateTime.Now.AddMinutes(10), false); // This fails for negative timezone offsets
}
static async Task Run(
string description,
DateTime secretExpiration,
bool expectedError)
{
var sc = new ServiceCollection();
sc.AddIdentityServer()
.AddDefaultSecretParsers()
.AddDefaultSecretValidators()
.AddInMemoryApiResources(new[] {
new ApiResource("test") {
Enabled = true,
ApiSecrets = new []
{
new Secret("secret".Sha256(), secretExpiration)
}
}
});
var sp = sc.BuildServiceProvider();
var sv = sp.GetRequiredService<IApiSecretValidator>();
var context = new DefaultHttpContext();
context.Request.Headers.Add(
"Authorization",
"Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes("test:secret")));
var result = await sv.ValidateAsync(context);
if (result.IsError != expectedError)
{
throw new Exception($"{description}: Expected IsError={expectedError} but was IsError={result.IsError}");
}
}
}
Thanks. We'll look into it.
DateTimeExtensions.HasExpired needs to include the DateTimeKind when comparing.
For example, this code emits these values:
var d1 = new DateTime(2020, 01, 01, 9, 0, 0, DateTimeKind.Utc);
var d2 = new DateTime(2020, 01, 01, 9, 0, 0, DateTimeKind.Local);
Console.WriteLine(d1 == d2); // emits true
Console.WriteLine(d1 < d2); // emits false
Console.WriteLine(d1 > d2); // emits false
According to the documentation on DateTime, the operators don't take Kind into account and instead just compares the Ticks value (https://docs.microsoft.com/en-us/dotnet/api/system.datetime.op_greaterthan?view=netcore-3.1).
Ticks appears to be independent of Kind, so both those dates have Ticks =<PHONE_NUMBER>00000000 and are considered equal.
However if you have d2 = d1.ToLocalTime(), you get the erratic behaviour when comparing them:
// Both represent the same instant in time
var d1 = new DateTime(2020, 01, 01, 9, 0, 0, DateTimeKind.Utc);
var d2 = d1.ToLocalTime();
Console.WriteLine(d1.Ticks); //<PHONE_NUMBER>00000000
Console.WriteLine(d2.Ticks); //<PHONE_NUMBER>00000000 (timezone-dependent)
// DateTime operators don't take kind into account and
// gets inconsistent results when comparing different kinds
Console.WriteLine(d1 == d2); // emits false
Console.WriteLine(d1 < d2); // emits true (timezone-dependent)
Console.WriteLine(d1 > d2); // emits false (timezone-dependent)
Even though it's documented that comparing dates should always be the same kind, that seems like terrible behaviour on dotnet's part that's kept for backwards compatibility.
So then presumably calling .ToUniversalTime() on each then comparing would be the acceptable fix?
I would assume so
Submitted a draft PR. Please have a look. Turns out we are doing DateTime comparisons in a few places, so I wanted to get all of them.
While looking into this, it seems that ToUniversalTime is quite expensive if the DateTime is not UTC already. I'm tempted to simply state that we expect all DateTime to be UTC (which is of course how the code is all written). Can you explain why your store is returning non-UTC DateTime values?
Sorry for the late reply.
I believe the datetime was in local time because that's how the database driver (npgsql) handled fields of type "timestamp with time zone" (despite the name, it's a date/time in UTC in postgres).
Yea, since this thread I looked into related topics and IMO the DB layer should be returning DateTimes with the correct Kind set.
|
2025-04-01T04:10:30.940217 | 2016-06-09T14:13:34 | 159418352 | {
"authors": [
"dpino",
"kbara",
"lukego",
"wingo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14537",
"repo": "Igalia/snabb",
"url": "https://github.com/Igalia/snabb/pull/358"
} | gharchive/pull-request | Packet shenanigans
This patch improves "snabb lwaftr bench" by 19%, is rock solid under a bare-metal lwaftr test with a big binding table, and lets us reach line rate with a virtualized lwaftr, though it is currently less solid than it could be. PTAL, and fingers crossed for the CI!
Cc @lukego because I know you are always curious about these things :)
LGTM.
Good comments @kbara, I added some assertions.
Ah, one more thing then. The headroom is 48 right now which is enough for a 40-byte IPv6 header. However it's not enough for that and a virtio header, which is 12 or 14 bytes. But, in the lwaftr most packets come from virtio, so there will be enough -- but, some packets don't, like NDP packets or something. Better to just reserve a few more bytes so that we know that the virtio interfaces won't have to memmove.
LGTM
thank you!
@wingo This is exciting work :+1:. Getting a ~20% application-level speedup with such a simple change is very nice. This also seems perfectly timed for using the "matrix tests" from @domenkozar to look for benefits in other applications too.
Idea: How about if we could achieve the same effect while preserving the original simple definition of struct packet?
Specifically, I am imagining if we could keep the original definition but allocate packet memory in such a way that we can do the cheap/non-copying shift operation (with a slightly different API). If we would always allocate an extra (say) 256 bytes for each packet, and we would align the allocation such that we can infer the amount of headroom/tailroom available from the low bits of the address, then we could perhaps implement a fast relocate() operation to replace shiftleft and shiftright:
-- packet.relocate(packet, offset) => packet'
function packet.relocate (p1, offset)
-- Get the address of the original packet
local addr1 = ffi.cast("uint8_t*", p1) -- pointer to original packet struct
-- Calculate the address of the new packet struct.
-- The address points to the same memory but at a different offset.
local addr2 = addr1 + offset
-- Check for headroom/tailroom overflow: the displaced packet must stay
-- within the same 256-byte aligned block as the original.
-- (Packets need to be allocated with an alignment such that this test makes sense.)
assert(band(addr1, bnot(255)) == band(addr2, bnot(255)), "packet displaced beyond reserved memory")
-- Create the new packet struct, destructively reusing memory from the original.
local p2 = ffi.cast("struct packet *", addr2)
p2.length = p1.length -- XXX barrier needed? aliasing risks? (These fields may overlap in memory)
return p2
end
Generally I am excited about the idea of having enough performance test coverage in the upstream CI that we can make these kinds of radical changes with confidence & updating all relevant applications at the same time if needed.
Looking forward to kicking the tires on this when a version comes upstream :)
@lukego interesting idea! I wonder though, once you start to consider relocating packets, what is the advantage of:
struct packet { uint8_t data[10*1024]; uint16_t length; }
over
struct packet { uint8_t *data; uint16_t length; }
I think that given that the packet length is on another cache line from the packet data in the current configuration, that there would be the same number of cache accesses to get the indirected data from the pointer, compared with inline allocation. But, definitely a question for the test matrix :)
@wingo Yes! On that second formulation I am also keen to compare passing struct packet by value rather than by reference. The struct is quite tiny - I suppose 16 bytes with default field alignment - so we could store it inline in struct link to avoid a pointer indirection.
|
2025-04-01T04:10:31.010014 | 2021-09-24T07:59:25 | 1006191109 | {
"authors": [
"IlmastroStefanuzzo",
"bydariogamer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14539",
"repo": "IlmastroStefanuzzo/pygame-langtons-ant",
"url": "https://github.com/IlmastroStefanuzzo/pygame-langtons-ant/pull/2"
} | gharchive/pull-request | fix draw_grid function and some minor changes
The changes are:
Use pygame.gfxdraw.hline and pygame.gfxdraw.vline instead of drawing rectangles (this tripled the speed of the program)
Specify the type of the grid array (np.uint8). Using floats gives poor performance.
Change all grid[self.position[1]][self.position[0]] to grid[self.position[1], self.position[0]]. The former first indexes a row of the array and then an element of that row (two indexing operations, each allocating an intermediate Python object, which is slow in hot loops). The new method indexes the element directly in a single operation.
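The difference between the two indexing styles can be shown directly (a minimal illustration, not code from this repo):

```python
import numpy as np

grid = np.zeros((4, 4), dtype=np.uint8)
y, x = 2, 3

# Chained indexing: grid[y] first builds an intermediate row object,
# which is then indexed -- two __getitem__/__setitem__ calls per access.
grid[y][x] = 1

# Tuple indexing: a single call, no intermediate object.
grid[y, x] = 2

# Both forms address the same element.
print(grid[y, x])  # 2
```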
Thanks man
|
2025-04-01T04:10:31.012808 | 2023-08-04T10:44:41 | 1836505713 | {
"authors": [
"Im-Beast"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14540",
"repo": "Im-Beast/deno_tui",
"url": "https://github.com/Im-Beast/deno_tui/issues/28"
} | gharchive/issue | feat req: normalize rectangle type between components
What this feature is meaning to achieve
Some components use their own Rectangle implementations, e.g. InputRectangle and TextRectangle.
This creates issues with layouts and some other things.
Solution
Rectangles should be normalized so that every component uses the plain Rectangle. This would mean that some components' behaviour should be adjusted to make use of properties they didn't use before (e.g. height in Input)
I'll want to do this right after #29
|
2025-04-01T04:10:31.060922 | 2023-01-25T15:55:01 | 1556882798 | {
"authors": [
"codecov-commenter",
"davidorme",
"jacobcook1995",
"vgro"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14541",
"repo": "ImperialCollegeLondon/virtual_rainforest",
"url": "https://github.com/ImperialCollegeLondon/virtual_rainforest/pull/152"
} | gharchive/pull-request | How to set up new model documentation
Description
This is my first attempt at a how to doc for setting up a new model, let me know if it makes sense and more importantly whether it actually covers all the necessary steps.
Fixes #148
Type of change
[ ] New feature (non-breaking change which adds functionality)
[ ] Optimization (back-end change that speeds up the code)
[ ] Bug fix (non-breaking change which fixes an issue)
[x] Documentation improvement
Key checklist
[x] Make sure you've run the pre-commit checks: $ pre-commit run -a
[x] All tests pass: $ poetry run pytest
Further checks
[ ] Code is commented, particularly in hard-to-understand areas
[ ] Tests added that prove fix is effective or that feature works
Could you please describe all the imports that are necessary at the start, for example the numpy stuff and the required logger component? I found it in the soil example but not here in the documentation. Maybe a complete example file would be good.
Codecov Report
Merging #152 (b5494ab) into develop (6f2da0d) will increase coverage by 0.01%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## develop #152 +/- ##
===========================================
+ Coverage 94.06% 94.08% +0.01%
===========================================
Files 11 11
Lines 472 473 +1
===========================================
+ Hits 444 445 +1
Misses 28 28
Impacted Files                     | Coverage Δ
virtual_rainforest/soil/model.py   | 94.59% <ø> (-0.15%) :arrow_down:
virtual_rainforest/core/model.py   | 100.00% <100.00%> (ø)
Oddly can't reply direct to that comment @alexdewar. Don't know why!
We need __init_subclass__ (it was new to me too!) to register the model when the package loads anyway. Does it make sense to keep the checks here so it falls over before someone actually tries to make an instance of a nameless model, hence calling super().__init__()?
I've moved where the model_name's get set to be defined in each model and then pulled from there to be used as a registry name. I put the checks in the __init_subclass__ rather than the __init__ so that invalid models cannot be added to the registry.
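As a rough sketch of the pattern being described (class and registry names here are illustrative, not the project's actual API): __init_subclass__ runs at class-definition time, so an invalid model fails before it can ever enter the registry:

```python
MODEL_REGISTRY: dict = {}


class BaseModel:
    """Hypothetical base class; subclasses must set model_name."""

    model_name: str

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        name = getattr(cls, "model_name", None)
        # Validate *before* registering, so bad models never enter the registry.
        if not isinstance(name, str) or not name:
            raise ValueError(f"{cls.__name__} must define a non-empty model_name")
        if name in MODEL_REGISTRY:
            raise ValueError(f"duplicate model_name: {name!r}")
        MODEL_REGISTRY[name] = cls


class SoilModel(BaseModel):
    model_name = "soil"


print(sorted(MODEL_REGISTRY))  # ['soil']
```

A nameless subclass raises at definition time, which is exactly the "fall over before someone makes an instance" behaviour mentioned above.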
I also had to bump the version of isort as the pre-commit check had started failing (on Github) with the old version. I updated the minimum value in pyproject.toml to match this + generated a new poetry lock
I've just PR'd that patch on isort on develop to merge in #122 . Shouldn't be an issue.
|
2025-04-01T04:10:31.103058 | 2019-07-21T17:09:13 | 470803116 | {
"authors": [
"berkas1",
"killua-eu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14542",
"repo": "Industra/manual-nova63",
"url": "https://github.com/Industra/manual-nova63/issues/1"
} | gharchive/issue | suggestions pass 1
suggestions:
quick checklist add item 'frame before starting job' (security precaution)
deduplicate content of notes and security and physical manipulation
rename Security and Physical manipulation to Operation and safety
add to do's / dont's: moving the gantry and the up/down table with loading door opened
add subchapters:
Hazards and risks
fire (ignition of processed material)
laser beam irradiation (loss of eyesight, severe burn injuries) from direct and reflected beam
mechanical damage or injury
fingers / hair / clothes can get pulled or crushed (movement of head/table with loading door opened)
laser head and other parts of the machine can get damaged (curved/non-planar materials)
Safety features and protective equipment
Central stop
Fire extinguisher
Certified laser protecting goggles
Operational accident / malfunction procedures
Job doesn't start
Job didn't finish
Fire
First aid: Please refer to general safety instructions, esp. chapters on
burns, inhalation of toxic fumes, laser beam irradiation and, mechanical injury.
frame before starting job -> there is no security risk in not doing that, it just shows wrong point of origin etc...
deduplication OK
renaming OK
sub chapters added
first aid added
Include examples in forbidden Materials,
bullet point 1 - i.e. add: (e.g. PVC, vinyl, artificial leather)
Hazards and Risks, bullet point 2, missing closing )
Add "toxic and/or corrosive fumes" to Hazards and Risks
Consider
Renaming Hazards and Risks to Operational Hazards and Risks
Renaming Operational Accidents / Malfunction Procedures to Operational Malfunction
dropping "fire" from Operational Malfunction
pull in info from https://wiki.oulu.fi/pages/viewpage.action?pageId=69796385 to our wiki and the guide?
agreed, new version: https://github.com/Industra/manual-nova63/releases/latest/download/manual.pdf
Closing this for now, it seems to be ok for first release. Let's continue in a new 'pass 2' issue if necessary.
|
2025-04-01T04:10:31.159467 | 2023-05-23T08:01:18 | 1721463554 | {
"authors": [
"popcornylu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14544",
"repo": "InfuseAI/piperider",
"url": "https://github.com/InfuseAI/piperider/pull/703"
} | gharchive/pull-request | [Feature] POC: Add lineage graph. Try different graph library.
PR checklist
[ ] Ensure you have added or ran the appropriate tests for your PR.
[ ] DCO signed
Close. Because the PoC completes.
|
2025-04-01T04:10:31.168594 | 2021-02-03T22:57:25 | 800767471 | {
"authors": [
"claredillon",
"spier",
"voborgus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14546",
"repo": "InnerSourceCommons/innersourcecommons.net",
"url": "https://github.com/InnerSourceCommons/innersourcecommons.net/issues/31"
} | gharchive/issue | Should we move the ISC-101 to the website instead?
Is your feature request related to a problem? Please describe.
The Google doc that contains the ISC 101 info currently lives in my gDrive account.
That can be problematic if anybody other than me wants to make changes to it.
Describe the solution you'd like
Therefore I would vote to move this content somewhere else.
Maybe we could have this content somewhere on the new ISC website?
great!
@claredillon can you look on this issue please and see where in the current structure of the site this can be placed?
Sounds fab. Thanks Sebastian. I'll review the doc and incorporate it into either the Community and/or the About ISC pages, as appropriate.
Thanks.
The important piece for me about this is:
Once I decide that the ISC is an interesting community for me to get involved with, I want to know roughly how things work. Not in every last detail but I want to have a good idea what the different "things" in the community are. Being able to see that "at a glance" is helpful for me.
Think similar to the first 2 pages of an onboarding handbook that a new employee might get when joining a company.
So if you end up splitting this content about over multiple pages, please double check if the onboading experience for a new person is still enjoyable.
I've put a (slightly abbreviated) version of this content from the 101 into the text for review for the Community page (description of working groups, principles of how we work, tools etc.). @spier will tag you to take a look once I get it into GitHub.
@claredillon happy to review it once you point me to it.
What format do you have the community page in?
I have a tool that I use to convert gDocs/docx to markdown. So if that would speed things up let me know and I am happy to help.
Thanks @spier. I'm looking at the different templates Hugo gives for layout first, which is what is delaying me - but (in the interest of MVP deadlines ;) - if it doesn't work in the next day or two - I'll come calling for that tool. Though maybe you can leave a link here anyways for future reference! :)
I've added a pull request to create a community.md file which incorporated most of the InnerSource 101 doc.
https://github.com/InnerSourceCommons/innersourcecommons.net/pull/41
https://innersourcecommons.net/community/
@voborgus I have some improvements to push to this page. Would you mind adding me as a contributor to the repo, so that I can create a branch here directly?
@spier for sure, you’re in 👍🏻
Based on what I see at https://innersourcecommons.net/community/ I would consider this issue done, and we can close it if you agree.
|
2025-04-01T04:10:31.180431 | 2019-08-06T04:34:47 | 477164580 | {
"authors": [
"InnoFang",
"NeolithEra"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14547",
"repo": "InnoFang/what-digit-you-write",
"url": "https://github.com/InnoFang/what-digit-you-write/issues/4"
} | gharchive/issue | Installation fails due to conflicting Jinja2 version
Hi, users are unable to run what-digit-you-write due to dependency conflict with Jinja2 package.
As shown in the following full dependency graph of what-digit-you-write, what-digit-you-write requires Jinja2==2.9.5, while flask 1.1.1 requires Jinja2>=2.10.1.
According to pip’s “first found wins” installation strategy, Jinja2==2.9.5 is the actually installed version.
However, Jinja2==2.9.5 does not satisfy Jinja2>=2.10.1.
Dependency tree
what-digit-you-write-master
| +-appdirs(version range:==1.4.3)
| +-click(version range:==6.7)
| +-flask(version range:>=0.12)
| | +-click(version range:>=5.1)
| | +-itsdangerous(version range:>=0.24)
| | +-jinja2(version range:>=2.10.1)
| | | +-markupsafe(version range:>=0.23)
| | +-werkzeug(version range:>=0.15)
| +-itsdangerous(version range:==0.24)
| +-jinja2(version range:==2.9.5)
| | +-markupsafe(version range:>=0.23)
| +-markupsafe(version range:==1.0)
| +-numpy(version range:==1.13.1)
| +-packaging(version range:==16.8)
| | +-pyparsing(version range:*)
| | +-six(version range:*)
| +-pillow(version range:==5.3.0)
| +-protobuf(version range:==3.2.0)
| +-pyparsing(version range:==2.2.0)
| +-six(version range:==1.10.0)
| +-tensorflow(version range:>=1.0.1)
| +-tensorflow-gpu(version range:>=1.0.1)
| +-werkzeug(version range:==0.12.1)
Thanks for your help.
Best,
Neolith
Solution
Fix your direct dependency to be Jinja2>=2.9.5.
Remove your direct dependency Jinja2, and use Jinja2 transitively introduced by flask.
Fix your direct dependency to be flask==0.12.
I have checked the above revisions will not affect your downstream projects now.
Which solution do you prefer, 1 or 2 or 3?
@InnoFang Please let me know your choice. I can submit a PR to solve this issue.
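The mismatch itself is simple version ordering; a toy check (my own illustration, ignoring PEP 440 subtleties such as pre-releases and epochs - for real checks use `pip check` or the `packaging` library):

```python
def parse(version: str) -> tuple:
    """Toy parser: split a dotted version string into an integer tuple."""
    return tuple(int(part) for part in version.split("."))

pinned = parse("2.9.5")         # Jinja2 version pinned by the project
required_min = parse("2.10.1")  # minimum required by flask 1.1.1 (>= range)

print(pinned >= required_min)   # False -> the pin does not satisfy flask's range
```

Note that lexicographic string comparison would get this wrong ("2.9.5" > "2.10.1" as strings), which is why the components are compared as integers.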
Thank you for your proposal. I think Solution 2 will be better,
Looking forward to your PR.
|
2025-04-01T04:10:31.197949 | 2020-11-04T16:53:08 | 736269898 | {
"authors": [
"InsanusMokrassar",
"madhead"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14548",
"repo": "InsanusMokrassar/TelegramBotAPI",
"url": "https://github.com/InsanusMokrassar/TelegramBotAPI/issues/191"
} | gharchive/issue | Support for allow_sending_without_reply flag
Added the field allow_sending_without_reply to the methods sendMessage, sendPhoto, sendVideo, sendAnimation, sendAudio, sendDocument, sendSticker, sendVideoNote, sendVoice, sendLocation, sendVenue, sendContact, sendPoll, sendDice, sendInvoice, sendGame, sendMediaGroup to allow sending messages not a as reply if the replied-to message has already been deleted.
[ ] sendMessage
[ ] sendPhoto
[ ] sendVideo
[ ] sendAnimation
[ ] sendAudio
[ ] sendDocument
[ ] sendSticker
[ ] sendVideoNote
[ ] sendVoice
[ ] sendLocation
[ ] sendVenue
[ ] sendContact
[ ] sendPoll
[ ] sendDice
[ ] sendInvoice
[ ] sendGame
[ ] sendMediaGroup
|
2025-04-01T04:10:31.207516 | 2016-01-20T02:05:57 | 127587879 | {
"authors": [
"aouyang1",
"shabss"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14549",
"repo": "InsightDataScience/pegasus",
"url": "https://github.com/InsightDataScience/pegasus/issues/38"
} | gharchive/issue | ec2install on kafka does not set JMX_PORT
I had to manually add JMX_PORT entry (export JMX_PORT=${JMX_PORT:-9999}) in /usr/local/kafka/bin/kafka-server-start.sh
ec2install run from :
ubuntu@ip-172-31-2-169:~/projects/pegasus$ uname -a
Linux ip-172-31-2-169 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
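The `${JMX_PORT:-9999}` form in that workaround expands to the existing value when the variable is already set and non-empty, and to 9999 otherwise, so it won't clobber a port configured elsewhere. A quick demonstration (illustrative only):

```shell
unset JMX_PORT
export JMX_PORT=${JMX_PORT:-9999}
echo "$JMX_PORT"   # 9999: variable was unset, the default applies

JMX_PORT=8888
export JMX_PORT=${JMX_PORT:-9999}
echo "$JMX_PORT"   # 8888: the existing value wins over the default
```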
Will add this to the config in kafka. Thanks for the find!
This has been updated in the kafka configs. Kafka-manager installation is on its way as well. Closing for now.
|
2025-04-01T04:10:31.209618 | 2022-01-10T23:51:22 | 1098504247 | {
"authors": [
"jhlegarreta"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14550",
"repo": "InsightSoftwareConsortium/ITK",
"url": "https://github.com/InsightSoftwareConsortium/ITK/pull/3076"
} | gharchive/pull-request | STYLE: Use the superclass name in itkTypeMacro
Use the superclass name instead of the Superclass alias in
itkTypeMacro.
PR Checklist
[X] No API changes were made (or the changes have been approved)
[X] No major design changes were made (or the changes have been approved)
Left behind in #3062.
|
2025-04-01T04:10:31.212486 | 2022-03-31T01:16:40 | 1187318002 | {
"authors": [
"jhlegarreta"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14551",
"repo": "InsightSoftwareConsortium/ITK",
"url": "https://github.com/InsightSoftwareConsortium/ITK/pull/3351"
} | gharchive/pull-request | DOC: Document methods in header file
DOC: Document methods in header file
STYLE: Remove unnecessary empty comment blocks
ENH: Remove duplicate, commented ITK_MANUAL_INSTANTIATION guard
PR Checklist
[X] No API changes were made (or the changes have been approved)
[X] No major design changes were made (or the changes have been approved)
[ ] Added test (or behavior not changed)
[X] Updated API documentation (or API not changed)
Some of the methods were introduced in PR #3154.
@PranjalSahu reviews are welcome.
|
2025-04-01T04:10:31.218575 | 2022-06-30T07:52:45 | 1289741644 | {
"authors": [
"dyollb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14552",
"repo": "InsightSoftwareConsortium/ITK",
"url": "https://github.com/InsightSoftwareConsortium/ITK/pull/3477"
} | gharchive/pull-request | BUG: all regions set to requested in ReconstructBiasField
Description of problem
See https://discourse.itk.org/t/n4biasfieldcorrectionimagefilter-getlogbiasfieldasimage-wrap-bsplinecontrolpointimagefilter-alternatives/5142/2
If (for some reason) the input requested region is different than the largest possible region, the code was
first reconstructing the bias field using the largest possible region
then setting all regions to inputImage->GetRequestedRegion(), which doesn't make sense
To reduce the change to a minimum I still set the requested region based on inputImage->GetRequestedRegion(), but probably we could just remove the line completely.
PR Checklist
[x] No API changes were made (or the changes have been approved)
[x] No major design changes were made (or the changes have been approved)
[ ] Added test (or behavior not changed)
[ ] Updated API documentation (or API not changed)
[ ] Added license to new files (if any)
[ ] Added Python wrapping to new files (if any) as described in ITK Software Guide Section 9.5
[ ] Added ITK examples for all new major features (if any)
Refer to the ITK Software Guide for
further development details if necessary.
thanks @dzenanz for your fast response
|
2025-04-01T04:10:31.300238 | 2021-12-14T18:30:57 | 1080085487 | {
"authors": [
"pradyuman-verma",
"thrilok209"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14553",
"repo": "Instadapp/dsa-connectors",
"url": "https://github.com/Instadapp/dsa-connectors/pull/138"
} | gharchive/pull-request | updated sushi-incentive connector
[ ] updated sushi-incentive
Merging this for now, we can deploy it when required.
|
2025-04-01T04:10:31.308728 | 2021-07-06T11:52:32 | 937816065 | {
"authors": [
"thrilok209",
"yaronvel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14554",
"repo": "Instadapp/dsa-connectors",
"url": "https://github.com/Instadapp/dsa-connectors/pull/57"
} | gharchive/pull-request | Add B.Protocol
Protocol: https://bprotocol.org
This connector adds support for interacting with Maker and Compound via B.Protocol smart contract. EDIT:
the connector also add support to B.Protocol BAMM over liquity protocol stability pool.
Users can do everything they can do with Maker and Compound and get exactly the same conditions, and in addition get to share liquidation proceeds and, when applicable, B.Protocol governance tokens.
EDIT: Users can deposit to Liquity's stability pool via B.Protocol and get their account automatically rebalanced whenever a liquidation occurs, and this way earn passive yield on their LUSD deposits.
The implementation is a simple clone of the Compound and Maker connectors, with small adaptations to the B.Protocol system.
Using different addresses for CDP manager, and comptroller.
Adjusting the user debt report, to include the debt that might have been partially paid by the B.Protocol liquidators.
EDIT:
ConnectV2BMakerDAO Ethereum mainnet address = 0xB0A1f10FeEfECf25064CE7cdF0a65042F7dE7bF0
ConnectV2BCompound Ethereum mainnet address = 0xa3EeFDc2de9DFA59968bEcff3E15b53E6162460f
ConnectV2BLiquity Ethereum mainnet address = 0x19574E5Dfb40bbD63A4F3bdcF27ed662b329b2ff
Deployed connector:
Connector Name: "B-COMPOUND-A"
Connector Address: 0xa3EeFDc2de9DFA59968bEcff3E15b53E6162460f
Connector Name: "B-MAKERDAO-A"
Connector Address: 0xB0A1f10FeEfECf25064CE7cdF0a65042F7dE7bF0
Connector Name: "B-LIQUITY-A"
Connector Address: 0x19574E5Dfb40bbD63A4F3bdcF27ed662b329b2ff
"B-COMPOUND-A", "B-MAKERDAO-A", "B-LIQUITY-A" connectors whitelisted: https://etherscan.io/tx/0x2ee045e98e449f2d455a303820138617d8d18f86e044be170f9e2c1345233c7f
@thrilok209 the B-COMPOUND connector you whitelisted does not contain the modified Compound mapping address. Is it ok?
@yaronvel new Compound mapping contract is this: https://etherscan.io/address/0xe7a85d0adDB972A4f0A4e57B698B37f171519e88#code
it's updated on the B-COMPOUND connector, right? Or am I missing something?
@thrilok209 ah yes, my mistake.
I was not aware you could also edit my comments, and I saw you deployed the same address I wrote in my comments, so I assumed it was my deployment. But now I see it is not the case, and I guess you edited there.
|
2025-04-01T04:10:31.310250 | 2024-06-27T11:49:09 | 2377934599 | {
"authors": [
"zolokonst"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14555",
"repo": "Intechnity-com/OdooJsonRpcClient",
"url": "https://github.com/Intechnity-com/OdooJsonRpcClient/pull/100"
} | gharchive/pull-request | Update OdooModelMapper.cs
Issues like
"Not implemented json mapping value: '$"sale_delay": 0' to Nullable`1" are now fixed (conversion from double added).
It looks, CI has a problem:
#7 48.89 ERROR: Error installing octokit:
#7 48.89 The last version of public_suffix (>= 2.0.2, < 7.0) to support your Ruby & RubyGems was 5.1.1. Try installing it with `gem install public_suffix -v 5.1.1` and then running the current command again
#7 48.89 public_suffix requires Ruby version >= 3.0. The current ruby version is <IP_ADDRESS>.
#7 ERROR: process "/bin/sh -c gem install octokit" did not complete successfully: exit code: 1
|
2025-04-01T04:10:31.317936 | 2018-12-17T14:11:33 | 391729691 | {
"authors": [
"artem-shaporenko",
"dvrogozh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14556",
"repo": "Intel-Media-SDK/MediaSDK",
"url": "https://github.com/Intel-Media-SDK/MediaSDK/pull/1025"
} | gharchive/pull-request | Silently Disable MFE for VDEnc
Disable MFE in case of VDEnc request
Fixes #1010
Signed-off-by: Artem Shaporenko<EMAIL_ADDRESS>
Something strange with the Android build; will rebase after the merge to the release branch.
@artem-shaporenko : this would be too late to rebase after merge. Please, rebase now. Make sure this commit is in the history: https://github.com/Intel-Media-SDK/MediaSDK/commit/5d40fe51b5ea8d95fc0d68ec6f766fad765e1d2a. Android build was fixed yesterday.
|
2025-04-01T04:10:31.339152 | 2016-04-08T17:27:09 | 146988982 | {
"authors": [
"blackerpaper",
"changizi",
"ddiakopoulos",
"narner",
"qianyizh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14557",
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/103"
} | gharchive/issue | crash when running xcode examples
I have tried running the librealsense Xcode examples on an OS X system with an R200 camera. They usually work, but sometimes they crash with this error:
malloc: *** error for object 0x100619078: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Has anyone experienced this?
I'm experiencing the same behavior. I'm also using an R200 camera on OS X (10.11.3).
Same here.
Intel RealSense R200
Serial number:<PHONE_NUMBER>
Firmware version: <IP_ADDRESS>
OS X 10.11.4
LLVM 7.3 (or XCode 7.3, it's the same)
Crashes even on very simple code, e.g.:
```cpp
#include <librealsense/rs.hpp>

int main(int argc, char **args)
{
    rs::context ctx;
    return 1;
}
```
Seems to be an issue with libuvc/libusb.
Is the issue only on shutdown of the context or does the app crash in the middle of streaming?
For me it usually happens when the start function is called on the device. But again it's not consistent. Sometimes it works perfectly.
For me, the app would crash in the middle of streaming.
I haven't done extensive tests on streaming yet.
The issue happens on shutdown of the context.
Sometimes it happens in the call of:
```cpp
if (ctx->own_usb_ctx)
    libusb_exit(ctx->usb_ctx);
```
Yet it is not consistent.
Alright! I'll spend some time this weekend testing out on a few different OSX configurations. Needless to say, sounds like a software bug in librealsense so I'll try to track it down.
I managed to reproduce the error on another Macbook. Here is the workflow.
(OS X 10.11.4, XCode 7.3)
Install Homebrew
Use Homebrew to install libusb, pkg-config, homebrew/versions/glfw3
Use XCode to open librealsense.xcworkspace
Set launch app to cpp-tutorial-1-depth
Change code of cpp-tutorial-1-depth.cpp to
```cpp
#include <librealsense/rs.hpp>

int main(){
    rs::context ctx;
    return 1;
}
```
Run cpp-tutorial-1-depth for a few times (keep pressing command+R)
You will see the error after a few runs.
cpp-tutorial-1-depth(6343,0x1001b9000) malloc: *** error for object 0x1003002f8: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Program ended with exit code: 9
It doesn't matter if the R200 sensor is plugged in or not.
@changizi @narner @qianyizh I just pushed a commit bfeed0e65da3de8d1a2859c27986205010ec219c. Can you test with the latest?
So far things seem to have improved (the crash would usually happen a few seconds after starting one of the programs).
I haven't seen the issue after I applied the patch. Tested on both my Macbooks.
@changizi waiting on your confirmation that this fixes the issue so I can close it out :)
I get this error after testing the latest:
Using device 0, an Intel RealSense R200
Serial number:<PHONE_NUMBER>
Firmware version: <IP_ADDRESS>
rs::error was thrown when calling rs_start_device(device:0x100804c18):
uvc_set_ctrl(...) returned LIBUSB_ERROR_PIPE
@changizi can you try a few more times? That error seems unrelated to the memory issue at hand
Looks like the memory issues is gone. Thanks for the fix.
Do you know why I am getting the above error though? If I disconnect and reconnect the sensor before I run each example it runs fine, otherwise I get that error.
@changizi yeah the unplug/replug on OSX with the libuvc backend is probably the oldest bug we have yet to track down. It is documented in #31 and remains open...
@changizi
This issue is linked to the "replug" issue. It's closed now; it seems the bug comes from the OS X USB driver.
Solution: just update OS X to the newest version (10.11.4) and this issue is gone.
Hope this will help.
Closing this out since the memory error has been fixed and disconnection issues are documented in a different thread!
|
2025-04-01T04:10:31.352344 | 2022-09-01T01:28:47 | 1358121749 | {
"authors": [
"2513297244",
"MartyG-RealSense"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14558",
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/10853"
} | gharchive/issue | D435 camera cannot be opened!
| Required Info | |
|---|---|
| Camera Model | D400 |
| Firmware Version | (Open RealSense Viewer --> Click info) |
| Operating System & Version | Win 10 |
| Kernel Version (Linux Only) | |
| Platform | PC |
| SDK Version | 2.50 |
| Language | C++ |
| Segment | Robot/Smartphone/VR/AR/others |
Issue Description
A program using the SDK's C++ API to open the camera and display frames works fine on the development computer.
But after moving the program to a computer without a network connection and plugging the camera into that machine,
the program fails to open the camera. Note: this is the first time the D435 has been plugged into this offline computer.
What is the reason? Is it because some drivers need to be downloaded from the network the first time the camera is plugged into a computer?
Hi, are you using the rs-server ethernet networking system to network the camera, please?
https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras
If you are using rs-server then it sounds as though there may be instructions in the C++ code for treating the camera as a Network Device (a networked camera) and so it does not understand how to operate as a normal camera outside of a network. In such a script, commenting out the line of code that defines the camera as a Network Device - such as rs2::net_device - should enable it to function outside of a network.
The USB 3.0 connection is used.
When the D435 camera is connected to a new computer for the first time through USB 3.0, will it automatically download some files or drivers online?
Thanks very much for the clarification. When a camera is attached to a Windows computer for the first time, the computer will install two RGB and Depth drivers that can be viewed in the Cameras section of the Windows Device Manager interface. It will only install the drivers for that particular camera if they are not currently installed. The image below shows the example of drivers for the D455 camera model listed in the Device Manager.
I performed a test where I disconnected from the internet, uninstalled the drivers and then unplugged the camera and re-inserted it to initiate a driver installation. The drivers reinstalled successfully without an internet connection.
Thank you very much for your reply. I will repeat your experiment and find the problem of my own program
I have found the interface. I think this topic can be closed. Thank you for your reply!
The USB port can decide whether a connection is USB 2.1 or USB 3 based upon the connector's pins, as USB 3 cables have additional wires and connector pins that USB 2.1 cables do not have.
Even if a USB 3 cable is used though, it may be detected as USB 2.1 if the connector is not inserted all of the way into the port.
Sometimes though it is mis-detected for unknown reasons, such as showing as USB 3.2 in the RealSense Viewer program but 2.1 in the RealSense ROS wrapper.
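As a hedged sketch of the interface discussed above: librealsense 2.x exposes the negotiated connection type as a camera-info string (the `USB_TYPE_DESCRIPTOR` field, e.g. "3.2" or "2.1"). The enumeration below assumes `pyrealsense2` and an attached camera, so it is kept inside a function; the descriptor parsing itself is plain Python. The helper names are illustrative, not from the SDK.

```python
def classify_usb(descriptor: str) -> str:
    """Map a USB type descriptor string (e.g. "3.2" or "2.1") to a coarse class."""
    major = descriptor.strip().split(".")[0]
    return "USB3" if major.isdigit() and int(major) >= 3 else "USB2"

def report_connected_cameras() -> None:
    # Requires librealsense + pyrealsense2 and a connected camera to run.
    import pyrealsense2 as rs
    ctx = rs.context()
    for dev in ctx.query_devices():
        if dev.supports(rs.camera_info.usb_type_descriptor):
            desc = dev.get_info(rs.camera_info.usb_type_descriptor)
            print(dev.get_info(rs.camera_info.name), "->", classify_usb(desc))
```

Because mis-detection can occur (as noted above), checking this field at startup and warning when a depth camera lands on a USB 2.x link is a cheap safeguard against reduced resolution/frame-rate modes.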
I'm pleased to hear that you found a solution. Thanks very much for the update. As you suggested, I will close the case. Thanks again!
|