| added (string, 2025-04-01 04:05:38 → 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 → 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:38:59.872471
| 2016-04-20T14:20:41
|
149781884
|
{
"authors": [
"binhn",
"christo4ferris",
"genggjh",
"srderson"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6876",
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/issues/1173"
}
|
gharchive/issue
|
Explore tools to allow community to assist in labeling issues
Currently, labeling of issues is limited to maintainers of this project. Ideally, we would want all members to assist in labeling issues. This is not a permission that exists in GitHub, so we want to investigate whether there are any tools/bots that may help. For example, this may be a bot that reads a Slack channel or GitHub comments and applies the appropriate label to an issue when asked.
http://partyline.rocks/blog/github/
http://blog.james-carr.org/2014/08/21/open-a-github-issue-from-slack/
I believe both of the above can serve as references for us.
Following the CI meeting we had yesterday with the LF RI engineers: if we move to Gerrit we will need to move to Jira or Bugzilla to manage issues. I think this will be an improvement and obviate the need for a bot to manage GH issues.
Prefer to use Gerrit which is easy to review and merge code.
What are the key benefits of Gerrit? I haven't used it. Does it only provide code reviews? GitHub is quite sufficient for reviewing, commenting, and updating pull requests.
The key capability that we need is for a larger community to be able to manage issues without being a maintainer. If we had to move to Jira or Bugzilla for this then I don't see the value of Gerrit but another tool.
Gerrit can enforce reviews, integrate checks from CI, and verify that the legal bits are properly handled. It is far more effective than GitHub in that regard, and on a project where security and quality are of paramount importance such as this, we can be far more certain of what gets merged. Also, because Gerrit is tied into LF identity, we can be a bit more certain of who is making the contributions (because we cannot enforce 2FA on GitHub accounts).
@binhn: we can reference this I think: https://review.openstack.org/#/c/309610/. As @christo4ferris mentioned, Gerrit can be integrated with CI easily and we can involve more people in reviewing code. Everyone can give +1/-1 on a commit and leave comments, core team members can give +2/-2 to approve it, and the CI tool will verify it before merge.
@srderson closing this as we are transitioning to Jira RSN (promise;-)
|
2025-04-01T06:38:59.874085
| 2020-06-09T12:59:26
|
635409020
|
{
"authors": [
"nikhil550"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6877",
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/pull/1382"
}
|
gharchive/pull-request
|
Update private data tutorial for contract api
Should be merged along with the smart contract update: https://github.com/hyperledger/fabric-samples/pull/201
Signed-off-by: NIKHIL E GUPTA<EMAIL_ADDRESS>
Type of change
Documentation update
@Mergifyio backport release-2.1
@Mergifyio backport release-2.0
|
2025-04-01T06:38:59.877170
| 2019-12-05T02:36:36
|
533086484
|
{
"authors": [
"joealewine",
"nikhil550",
"rameshthoomu"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6878",
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/pull/369"
}
|
gharchive/pull-request
|
[FAB-17199] Add new test network tutorial
Signed-off-by: NIKHIL E GUPTA<EMAIL_ADDRESS>
Add new test network tutorial as part of the getting started section.
Type of change
Documentation update
Description
Users would be guided toward the new test network directly after they install the prerequisites, the Fabric Docker images, and the samples. The tutorial is the first step toward replacing the build a network tutorial in the long term.
Related issues
The PR to add the test network to the Fabric samples:
https://github.com/hyperledger/fabric-samples/pull/80. (Merged)
The tutorial should be merged after the sample
I don't see documentation for adding a new organization using addOrg3.sh script. What's the plan?
We will rebase the adding an org to the network tutorial onto the new test network and document the addOrg3 script there. We will continue to use the BYFN and EYFN tutorials until then.
Long term, the future of EYFN should be to focus on the process for creating the structure and crypto material for an org, and how to add the org to a channel through a channel update (what the JSON file and jq command would look like, in other words). We can leverage the updated Config Update doc -- https://hyperledger-fabric.readthedocs.io/en/master/config_update.html -- for the latter, rather than that being the focus of an Add an Org tutorial.
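For illustration, a hedged sketch of that flow, following the pattern in the Fabric docs (file names are illustrative, not exact sample paths):
# decode the latest config block into editable JSON
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json
# splice the new org's MSP definition into the channel's Application groups
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":{"Org3MSP":.[1]}}}}}' \
  config.json org3.json > modified_config.json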
|
2025-04-01T06:38:59.878369
| 2022-02-01T16:10:13
|
1120897377
|
{
"authors": [
"peterbroadhurst"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6879",
"repo": "hyperledger/firefly-cli",
"url": "https://github.com/hyperledger/firefly-cli/pull/137"
}
|
gharchive/pull-request
|
Support multiple token connectors in a stack, with ERC20/ERC721
Signed-off-by: Peter Broadhurst<EMAIL_ADDRESS>
Ok - behavior should be more intuitive now:
# Ethereum default: you'll get erc1155, erc20_erc721
ff init test
# Fabric default: you'll get nothing
ff init -b fabric test
# Ethereum with explicitly no tokens
ff init -t none test
# Ethereum with just ERC-1155
ff init -t erc1155 test
# Note that none does not cancel others out, so this would get erc20_erc721 and erc1155
ff init -t erc20_erc721 -t none -t erc1155 test
Holding off on merging this until we have a complementary manifest merged into FireFly Core
|
2025-04-01T06:38:59.886538
| 2018-03-06T13:58:14
|
302711039
|
{
"authors": [
"Warchant",
"l4l",
"neewy",
"nickaleks"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6880",
"repo": "hyperledger/iroha",
"url": "https://github.com/hyperledger/iroha/issues/1045"
}
|
gharchive/issue
|
Permission Implementation Improvement
This document provides an overview of a proposal to improve the implementation of permissions in the shared model.
Note: This document does not discuss the current permission model; its scope is purely technical and does not affect the behavior of the system.
Permission Model
Each role has a set of permissions associated with it. Additionally, each account can have so-called grantable permissions, which are permissions given to it by some other account. Grantable permissions allow an account to perform certain actions on another account.
In order to execute a command, an account needs to have a certain permission. Each command has unique permissions for its execution. Also, some commands require additional permissions, e.g. the transferAsset command requires the destination account to have the can_receive_asset permission.
Each query has 3 distinct permissions associated with it. They allow executing the query on a specific account, on a domain, or globally.
For example, account A can have permission to get the balance of account A, but not permission to get the balance of account B, even if they belong to the same domain. However, if account A has the domain permission to get balances, it can perform the query on account B.
Current Permission Implementation
Currently, all permissions in Iroha are implemented as strings. This has exactly one benefit - ease of use for developers. Strings are easy to debug, and code working with strings is straightforward.
However, strings have several disadvantages. First of all, they are easy to mistype, both for clients and for developers. They need to be allocated and take up a lot of space. They are not type-safe and thus error-prone.
This proposal tries to solve these issues while maintaining ease of use.
Enum Based implementation
An enum is a good choice for permissions, since they are just a set of constants. It provides type-safety, efficiency, and clarity.
I propose we declare a permission enum - either a single one, or separate enums for command, query, and grantable permissions; each command and query will have an associated permission as a static const variable of its class.
We would need to define a mapping to the protobuf and postgres enums. This could be done by assigning the same integer values to corresponding permissions in all three enums and then simply performing a static_cast. This way we can avoid defining giant switch-cases / maps.
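A minimal sketch of that static_cast idea, assuming the enums are kept in sync by assigning matching integer values (the proto namespace below is a stand-in for the generated protobuf code, not the real thing):
namespace proto {
// stand-in for the generated protobuf enum; values must match the shared-model enum
enum RolePermission : int { kCanReceive = 0, kCanTransfer = 1 };
}
enum class Permission : int { can_receive = 0, can_transfer = 1 };
// conversions reduce to a cast because the integer values are identical by construction
inline proto::RolePermission toProto(Permission p) {
  return static_cast<proto::RolePermission>(p);
}
inline Permission fromProto(proto::RolePermission p) {
  return static_cast<Permission>(p);
}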
One big issue with enums in C++ is that there is no idiomatic way to get a string representation. There are several possible solutions to this issue:
Brute force - define a mapping from each permission to a string
Rely on an implementation such as protobuf to provide the mapping. Define an abstract class like PermissionPrinter, which will have a virtual method toString(permission), and implement it for each backend of the shared model. Since we can map protobuf enums to our enums, we can use protobuf reflection as one possible implementation.
Rely on things such as Boost Preprocessor or other code-generation solutions.
This issue is open for discussion and I would like to hear the opinions of other developers.
It provides type-safety
No - unless you use a scoped enum.
I like defining some kind of singleton or static array: const char* map[] = {"permA", "permB"};
And casting to a string like:
enum class Perm : uint32_t { A, B };
Perm a = Perm::A;
map[static_cast<uint32_t>(a)]; // a scoped enum has no implicit conversion to its underlying type, so an explicit cast is needed
Of course we would use a scoped enum. I did not mention that since it seems too obvious.
So far I have 3 ideas for solving this issue (feel free to propose a new one). The first two keep the additional strings as-is; the last one depends on protobuf (thus less transport-agnostic, but more stable).
All of them describe how to define the Permission enum and the related toString & fromString functions. IMO that should be enough for the replacement (am I missing something?).
P.S. The code may cause compile errors; consider it pseudo-code.
Macros. Yes, they are considered evil, but this solution is the most compact and has no code duplication at all.
Constexpr maps: a lot of code duplication, but compile-time (so probably less error-prone); we need to write the string -> enum function (I don't know how to do it via templates). It is roughly the same as the enum-plus-pair-of-methods case; I don't show that case, because constexpr maps are a bit better (for toString).
New permission -> 4 LOC (enum + PermMap::s (x2) + fromString's map)
enum class Permission {
can_receive,
can_transfer,
can_create_domain,
};
#include <map>
#include <string>
template<Permission>
struct PermMap { // struct so the member is public
static const std::string s;
};
template<>
const std::string PermMap<Permission::can_receive>::s = "can_receive";
// the permission must be a template parameter here, since PermMap is
// specialized per enumerator and cannot be indexed by a runtime value
template<Permission p>
inline const std::string &toString() {
return PermMap<p>::s;
}
inline Permission fromString(const std::string &s) {
static const std::map<std::string, Permission> m {
{"can_receive", Permission::can_receive},
{"can_transfer", Permission::can_transfer},
{"can_create_domain", Permission::can_create_domain},
// ...
};
return m.at(s); // at() keeps the map const; operator[] would insert
}
Depending on protobuf - the most elegant way. The only problem is that it depends heavily on the transport, so IMO it is a bad thing to use. Also protobuf uses a plain enum (NOT enum class), so weak typing might be a huge problem in the future.
New permission -> 0 LOC
Though protobuf updates may need some fixes
using Permission = iroha::protocol::RolePermission;
inline const std::string toString(Permission p) {
return iroha::protocol::RolePermission_Name(p);
}
inline Permission fromString(const std::string &s) {
Permission p;
iroha::protocol::RolePermission_Parse(s, &p);
return p;
}
In order to execute a command, an account needs to have a certain permission.
Well, it has to have a role.
It would be simpler to have a switch-case for toString, and not implement fromString at all, since it is rarely needed.
I thought that you need the fromString operation for deserialization, e.g. when a role is created in the genesis block
We use protobuf for deserialization
Good
|
2025-04-01T06:38:59.887852
| 2022-11-03T11:35:12
|
1434513423
|
{
"authors": [
"appetrosyan",
"mversic",
"pesterev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6881",
"repo": "hyperledger/iroha",
"url": "https://github.com/hyperledger/iroha/issues/2932"
}
|
gharchive/issue
|
Migration tool
For smooth upgrades, one needs to be able to migrate the versioned transactions from the old standard to the new one, preferably ensuring that the operations are handled automatically when it is safe to do so, but with human intervention requested when necessary.
related #2920
As you know, the Client CLI accepts and executes JSON instructions. I guess we should also add versions to them to avoid problems that can appear for our users.
|
2025-04-01T06:38:59.892652
| 2018-11-13T23:48:13
|
380474710
|
{
"authors": [
"PatrickLammers",
"mtn217"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6882",
"repo": "hyperledger/sawtooth-next-directory",
"url": "https://github.com/hyperledger/sawtooth-next-directory/issues/571"
}
|
gharchive/issue
|
S8 [User Story] Spike -- Research LDAP login token
[User Story]
Spike -- Research LDAP login token
Epic Title
#271
User Story Title
Spike -- Research LDAP login token
T-shirt size: S(mall), M(edium), L(arge)
M
Story Points - Fibonacci (1,2,3,5,8,13)
3
User Story
[Value Statement] "As a user of a certain type (specify), I want to do something (action) so I can provide business value"
Description
Find out what is returned after logging in to AD as a user. Could be a JWT token or some other token.
Business Need (the why):
from epic, if needed
Details
All the details needed to accomplish the story. Tasks - can be linked
Additional User Story
linked
Acceptance Criteria/Tests
The list of tests or criteria to be used to validate the story has been completed. May still need UAT or SIT / integration testing with other stories.
Failure
if any, fold in negative tests
Definition of Done
e.g.Passes all regression tests, Passes testing per acceptance criteria, Approved by UI team, Able to show feature in demo
UX/UI Details and/or mockups
if UI change, a mockup or description of what to change, attach image/file
Applications or Systems impacted
upstream/downstream applications/systems
Dependencies
non story dependencies
Assumptions
if any
Performance Considerations
Will the work performed as part of this story impact system performance? If there is potential for a performance hit, where would you expect it to manifest in the system?
Security Considerations
E.g., does this story involve working with Personal Identifying Information (PII)? Note any such security sensitive aspects.
QA Considerations
what might be hard to test or test setup criteria
Architectural/System or Component Impacts
if we're adding something new or significantly changing architecture flag for architect review
Reverse Engineering Required
to document areas not known well by dev team
Questions/Clarifications
Create new issue (add label: question) in this repo and link to this issue
Just like all stories, this one needs an estimate. The template at the top indicates it's a 3. Can we go by this?
@adamgering does this need to be in S8? It seems like it's related to the stories around AD auth, but if we are able to demo AD functionality now, I'm assuming we are logged in without understanding what is returned. Why do we need this? Thx.
|
2025-04-01T06:38:59.895531
| 2022-03-11T06:42:15
|
1166060781
|
{
"authors": [
"Vishwas1",
"arnabghose997"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6883",
"repo": "hypersign-protocol/hid-node",
"url": "https://github.com/hypersign-protocol/hid-node/issues/99"
}
|
gharchive/issue
|
Enable smart contract support in hypersign network
https://docs.cosmwasm.com/docs/1.0/
Reference: https://github.com/osmosis-labs/osmosis/tree/main/wasmbinding
|
2025-04-01T06:38:59.900912
| 2021-04-26T17:02:54
|
867927303
|
{
"authors": [
"aaron-steinfeld",
"rish691"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6884",
"repo": "hypertrace/hypertrace-core-graphql",
"url": "https://github.com/hypertrace/hypertrace-core-graphql/pull/59"
}
|
gharchive/pull-request
|
feat: Capability to fetch logs along with span
hypertrace/hypertrace#224
In this change, the following is done:
Parse the log event request attributes from the span request
Make the span query as usual; subsequently build the log query using the requested log attributes and the returned spanIds as a filter (e.g. select attr1, attr2 from logEventView where spanId in [id1, id2, ....]), using the time filter from the span query as well
Maintain a mapping from spanId (referred to as id) -> list of logEvents (see the sketch below)
Major changes are in SpanLogEventFetcher and LogEventAttributeRequestBuilder
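As a hypothetical illustration of that grouping step (the LogEvent record and class name here are stand-ins, not the actual project classes):
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

record LogEvent(String spanId, String body) {}

class SpanLogEventGrouping {
  // group the fetched log events by the id of the span that emitted them
  static Map<String, List<LogEvent>> groupBySpanId(List<LogEvent> logEvents) {
    return logEvents.stream()
        .collect(Collectors.groupingBy(LogEvent::spanId));
  }
}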
Test
{
spans(
limit: 100
between: {startTime: "2021-04-26T17:45:31.849Z", endTime: "2021-04-26T18:45:31.849Z"}
offset: 0
orderBy: [{key: "startTime", direction: DESC}]
) {
results {
id
logEvents {
results {
spanId: attribute(key: "spanId")
attribute: attribute(key: "attributes")
}
}
protocolName: attribute(key: "protocolName")
serviceName: attribute(key: "serviceName")
displaySpanName: attribute(key: "displaySpanName")
statusCode: attribute(key: "statusCode")
duration: attribute(key: "duration")
startTime: attribute(key: "startTime")
traceId: attribute(key: "traceId")
__typename
}
total
__typename
}
}
{
"data": {
"spans": {
"results": [
{
"id": "2b1b03a627d8315b",
"logEvents": [],
"protocolName": "HTTP",
"serviceName": "route",
"displaySpanName": "HTTP GET /route",
"statusCode": "200",
"duration": 63,
"startTime":<PHONE_NUMBER>157,
"traceId": "00000000000000006485904421d38a40",
"__typename": "Span"
},
{
"id": "4b374aba5a81f266",
"logEvents": [
"results":[
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "GetConn"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "ConnectStart",
"addr": "<IP_ADDRESS>:8081",
"network": "tcp"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "ConnectDone",
"addr": "<IP_ADDRESS>:8081",
"network": "tcp"
}
},
{
"spanId": "4b374aba5a81f266",
"attribute": {
"event": "GotConn"
}
}
]
},
"protocolName": "HTTP",
"serviceName": "frontend",
"displaySpanName": "HTTP GET /customer",
"statusCode": "200",
"duration": 292,
"startTime":<PHONE_NUMBER>412,
"traceId": "00000000000000006485904421d38a40",
"__typename": "Span"
}
.....
],
"total": 54,
"__typename": "SpanResultSet"
}
}
}
@aaron-steinfeld for the log and span join to work, the span query must request the id attribute and the log query must request spanId. In the case where either is missing, should it be handled by internally requesting the id/spanId attribute, or can we consider it a sort of informal contract with the consumer of the API?
Thanks for bringing that up! I meant to comment on that and it completely slipped my mind. IMO it should be handled internally, but I'm OK deferring that into a separate PR if you want since it strictly relaxes the API contract.
|
2025-04-01T06:38:59.903040
| 2022-07-24T19:16:16
|
1315986797
|
{
"authors": [
"tyn1998",
"wxharry"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6885",
"repo": "hypertrons/hypertrons-crx",
"url": "https://github.com/hypertrons/hypertrons-crx/issues/423"
}
|
gharchive/issue
|
[Question] Do we need to specify the available node version we are compatible with?
Description
I got this error after I updated to the latest commit. The newly added package @plasmohq/edge-addons-api requires a Node version >=16.14, but mine is 16.13.1.
This error can be easily fixed by updating Node. However, it could be a problem for some of our contributors.
Do we need to specify the available node version we are compatible with?
Hi, @wxharry. I agree with you; we'd better add the "engines" field to package.json to limit the Node version and add a "Requirement" section to the README, like:
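A minimal sketch of what that field could look like, with the version bound taken from the error above:
{
  "engines": {
    "node": ">=16.14"
  }
}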
By the way, it is a convention that odd-numbered Node versions are less stable than even-numbered ones, so you are encouraged to update your Node from 16.13.1 to 16.14 or another even-numbered version.
Thank you for the information. I'm using 16.16 now.
closed via #437.
|
2025-04-01T06:38:59.904507
| 2022-11-07T09:39:03
|
1438053484
|
{
"authors": [
"tyn1998",
"xueqidove"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6886",
"repo": "hypertrons/hypertrons-crx",
"url": "https://github.com/hypertrons/hypertrons-crx/issues/509"
}
|
gharchive/issue
|
[Question] API request
Description
Can you provide all the GitHub APIs used by this plug-in?
Hi @xueqidove, here is the main API we are using at the moment.
There are several other APIs as well, but they may be deprecated soon, so I will not list them here.
API has changed, please refer to https://github.com/hypertrons/hypertrons-crx/issues/515
|
2025-04-01T06:38:59.919670
| 2021-09-10T15:17:19
|
993334069
|
{
"authors": [
"LynHyper",
"alecbcs",
"notassigned"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6887",
"repo": "hyprspace/hyprspace",
"url": "https://github.com/hyprspace/hyprspace/issues/32"
}
|
gharchive/issue
|
QUIC not TCP
Discussed in https://github.com/hyprspace/hyprspace/discussions/31
Originally posted by notassigned September 10, 2021
Since this is a VPN we shouldn't be using TCP: we may run into TCP meltdown when running a TCP connection over the VPN connection.
I suggest switching the listen addrs to QUIC.
https://openvpn.net/faq/what-is-tcp-meltdown/
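For illustration, the proposal amounts to swapping the libp2p listen multiaddrs roughly along these lines (the port number is hypothetical):
# TCP listen address (current style)
/ip4/0.0.0.0/tcp/8001
# QUIC listen address (proposed; QUIC runs over UDP)
/ip4/0.0.0.0/udp/8001/quic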
I have a branch for this. Just give me write permission and I'll push it.
Does your branch completely replace TCP with QUIC or provide both as options?
There are situations where TCP is the only way the VPN can be formed, such as on very locked-down public networks within offices or campuses that for some reason block all UDP traffic, including QUIC, so keeping TCP as an option is needed for when QUIC won't work.
I just removed the TCP listen addrs and replaced them with QUIC multiaddrs. TCP is still a transport that can be used, but the node doesn't advertise listen addrs for it. The reason I removed them is that I'm not entirely sure how libp2p decides which transports to use when dialing a peer. I will test having both transports and see if I can force the order.
Another thing to keep in mind is that hole punching was just merged into go-libp2p (not released yet) and the success rate for QUIC is far higher than TCP.
Here is my branch on a fork: https://github.com/notassigned/hyprspace/tree/add-quic-transport
Ah I see - some form of hyprspace init --force-tcp option would probably need to be added. There must be a way to define the order of transports; I'll look out for it and see if I can find it.
Hi all! Just saw #37 and maybe we should consider adding this as a field in the hyprspace configuration for a given interface? Possibly something like,
accepted_transports:
- tcp
- quic
- etc...
By default maybe we accept both QUIC and TCP but allow users to pick one specifically if they can't use QUIC/TCP for a network.
The dial method will try to dial multiaddrs for the peer until it gets a connection, so if QUIC can't be used and TCP can then the connection to the peer will be established over TCP. But realistically QUIC will work on virtually all networks and provides numerous benefits over TCP like native roaming, faster connection establishment, and native stream muxing.
I think the only potential downside is that if you were on a network which QUIC couldn't be used then you would have to wait an extra few seconds for the QUIC dial to fail.
|
2025-04-01T06:38:59.922359
| 2023-02-13T17:45:35
|
1582791858
|
{
"authors": [
"875d",
"Aleksanaa",
"Bryan2333",
"aksdb",
"aux-op",
"raffaem",
"vaxerski"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6888",
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/1543"
}
|
gharchive/issue
|
Window rule to remember last size/position
I would like a window rule that remembers the last size/position of the window and restores it when the window is opened.
how would you even imagine that being tracked? Just by class?
With the normal rules: class and title, or just class, or just title
Like all other rules: by class and/or title
Could the feature request in #1426 also solve this one here? Because that one seems a bit more predictable (you know exactly when to save something and therefore where to save something.)
Sometimes windows keep the size from tiling when in the floating state. That's quite annoying.
I have a similar problem. I use Thunar as my file manager and set it as a floating window. Every time I open it, it shows up at the minimum size, and I have to use meta + right mouse button to resize it to a normal size. It would be better if a window rule that remembers the last size were available.
Not sure if this is of any use to anyone - but I was trying to get something similar to work using special workspaces and finally got the result I was after.
windowrulev2 = workspace special:name, class:^(APP)$
windowrulev2 = float, class:^(APP)$
windowrulev2 = size 1000 900, class:^(APP)$
windowrulev2 = move 500 200, class:^(APP)$
I then use the following to exec and toggle:
bind = SUPER, T, togglespecialworkspace, name
exec-once = [workspace special:name] APP
|
2025-04-01T06:38:59.930491
| 2023-11-26T00:56:10
|
2010846001
|
{
"authors": [
"BendTheKn33",
"andresilva",
"brettalcox",
"imxyy1soope1",
"vaxerski"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6889",
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/3962"
}
|
gharchive/issue
|
Random flickering/artifacting when opengl introspection is not needed
Hyprland Version
Hyprland, built from branch main at commit 99ca26d4eb84e0071264713902e5b287fcab392e (hooksystem: fix missed log include).
Tag: v0.32.3-84-g99ca26d4
flags: (if any)
debug
Bug or Regression?
Regression
Description
Similar to #3952. Weird flickering artifacts in lower right quadrant of screen. Bisected to commit aedcade68dd0615fd919a7249633a554d0accd81
System:
Host: gentoo Kernel: 6.6.2-gentoo arch: x86_64 bits: 64 Desktop: Hyprland
Distro: Gentoo Base System release 2.14
Machine:
Type: Desktop System: ASUS product: N/A v: N/A serial: <superuser required>
Mobo: ASUSTeK model: TUF GAMING Z790-PLUS WIFI v: Rev 1.xx
serial: <superuser required> UEFI: American Megatrends v: 1220
date: 07/28/2023
CPU:
Info: 24-core (8-mt/16-st) model: 13th Gen Intel Core i9-13900KS bits: 64
type: MST AMCP cache: L2: 32 MiB
Speed (MHz): avg: 837 min/max: 800/5600:6000:4300 cores: 1: 1100 2: 800
3: 800 4: 800 5: 800 6: 800 7: 1100 8: 800 9: 1100 10: 1100 11: 800 12: 800
13: 800 14: 800 15: 800 16: 800 17: 800 18: 800 19: 800 20: 801 21: 800
22: 800 23: 800 24: 800 25: 800 26: 800 27: 800 28: 800 29: 800 30: 800
31: 800 32: 800
Graphics:
Device-1: NVIDIA AD102 [GeForce RTX 4090] driver: nvidia v: 545.29.06
Display: wayland server: Xwayland v: 21.1.99 compositor: Hyprland driver:
X: loaded: nvidia gpu: nvidia,nvidia-nvswitch resolution: 1: 2560x1440~165Hz
2: 3840x2160~160Hz
API: EGL v: 1.5 drivers: nvidia,swrast
platforms: gbm,wayland,x11,surfaceless,device
API: OpenGL v: 4.6.0 vendor: nvidia v: 545.29.06 renderer: NVIDIA GeForce
RTX 4090/PCIe/SSE2
API: Vulkan v: 1.3.268 drivers: nvidia surfaces: xcb,xlib,wayland
debug:disable_logs = false
exec-once = waybar & swaybg -o \* -i ~/Pictures/Wallpapers/forest_sunset_pink.jpg -m fill & swayidle -w
exec-once = /usr/libexec/polkit-gnome-authentication-agent-1
exec-once = udiskie -t
exec-once = foot spotify_player & foot nvtop & foot btop
exec-once = wl-paste --type text --watch cliphist store
exec-once = wl-paste --type image --watch cliphist store
monitor=DP-3,3840x2160@159.975,2560x0,1.5
monitor=DP-2,2560x1440@164.958,0x0,1
# NVIDIA
env = LIBVA_DRIVER_NAME,nvidia
env = XDG_SESSION_TYPE,wayland
env = GBM_BACKEND,nvidia-drm
env = __GLX_VENDOR_LIBRARY_NAME,nvidia
env = WLR_NO_HARDWARE_CURSORS,1
env = XDG_CURRENT_DESKTOP,Hyprland
env = XDG_SESSION_DESKTOP,Hyprland
# try again when latest gamescope works with 535/545
#env = XWAYLAND_NO_GLAMOR,1 # with this you'll need to use gamescope for gaming
xwayland {
force_zero_scaling = true
}
env = GDK_SCALE,1.5
env = QT_SCALE_FACTOR,1.5
env = ELM_SCALE,1.5
env = XCURSOR_SIZE,24
env = XCURSOR_THEME,Sunity-cursors
env = TERM,foot
env = TERMINAL,foot
input {
follow_mouse = 1
kb_options=ctrl:nocaps
}
general {
gaps_in = 8
gaps_out = 15
border_size = 2
col.active_border = rgb(cdd6f4)
col.inactive_border = rgb(11111b)
layout = master
}
decoration {
rounding = 5
}
animations {
enabled = yes
#easeOutExpo
bezier = myBezier, 0.22, 1, 0.36, 1
animation = windows, 1, 7, myBezier, slide
animation = windowsOut, 1, 7, myBezier, slide
animation = border, 1, 7, myBezier
animation = fade, 1, 1, myBezier
animation = fadeDim, 0
animation = workspaces, 1, 5, default
animation = specialWorkspace, 1, 5, default, slidefadevert 50%
}
master {
}
misc {
force_default_wallpaper = 0
vrr = 1
allow_session_lock_restore = true
}
workspace=1,monitor:DP-2,default:true,persistent:true
workspace=2,monitor:DP-3,default:true,persistent:true
workspace=3,monitor:DP-3,default:true,persistent:true
workspace=special, gapsin:70, gapsout:120, on-created-empty:foot
windowrule = float, Rofi
windowrule = noborder, Rofi
windowrule = workspace 3 silent, ^(steam)$
windowrule = workspace 3 silent, ^(discord)$
windowrule = fullscreen,title:^(Steam Big Picture Mode)$
windowrule = tile,title:^(Old School RuneScape)$
$mainMod = SUPER
# keybind for Master layout
bind = SUPER_SHIFT, SPACE, layoutmsg, orientationnext
bind = $mainMod, comma, layoutmsg, addmaster
bind = $mainMod, period, layoutmsg, removemaster
bind = $mainMod, SPACE, layoutmsg, swapwithmaster
bind = $mainMod, RETURN, exec, foot
bind = $mainMod SHIFT, C, killactive
bind = $mainMod SHIFT, Q, exit,
bind = $mainMod, R, exec, sh $HOME/.config/rofi/bin/launcher
bind = $mainMod SHIFT, R, exec, sh $HOME/.config/rofi/bin/runner
bind = $mainMod, P, exec, sh $HOME/.config/rofi/bin/powermenu
bind = $mainMod, V, togglefloating,
bind = $mainMod, F, fullscreen
bind = $mainMod, W, exec, pkill -SIGUSR1 '^waybar$'
bind = $mainMod, C, exec, cliphist list | sh $HOME/.config/rofi/bin/clipboard | cliphist decode | wl-copy
# volume control
bind = $mainMod, MINUS, exec, amixer sset Master 5%-;
bind = $mainMod, EQUAL, exec, amixer sset Master 5%+;
# screenshot
bind = $mainMod, A, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod, S, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim -o "$(hyprctl activeworkspace | grep -m1 "DP-" | cut -d' ' -f7 | sed s/://g)" ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod SHIFT, S, exec, shotname=$(date '+%Y-%m-%d-%H:%M:%S').png && grim -g "$(slurp)" ~/Pictures/Screenshots/$shotname && dunstify -u low --replace=699 "Screenshot ${shotname} Saved."
bind = $mainMod, left, movefocus, l
bind = $mainMod, right, movefocus, r
bind = $mainMod, up, movefocus, u
bind = $mainMod, down, movefocus, d
#vim bindings for move focus
bind = $mainMod, H, movefocus, l
bind = $mainMod, L, movefocus, r
bind = $mainMod, K, movefocus, u
bind = $mainMod, J, movefocus, d
bind = $mainMod, 1, workspace, 1
bind = $mainMod, 2, workspace, 2
bind = $mainMod, 3, workspace, 3
bind = $mainMod, 4, workspace, 4
bind = $mainMod, 5, workspace, 5
bind = $mainMod, 6, workspace, 6
bind = $mainMod, 7, workspace, 7
bind = $mainMod, 8, workspace, 8
bind = $mainMod, 9, workspace, 9
bind = $mainMod, Z, togglespecialworkspace
bind = $mainMod SHIFT, 1, movetoworkspace, 1
bind = $mainMod SHIFT, 2, movetoworkspace, 2
bind = $mainMod SHIFT, 3, movetoworkspace, 3
bind = $mainMod SHIFT, 4, movetoworkspace, 4
bind = $mainMod SHIFT, 5, movetoworkspace, 5
bind = $mainMod SHIFT, 6, movetoworkspace, 6
bind = $mainMod SHIFT, 7, movetoworkspace, 7
bind = $mainMod SHIFT, 8, movetoworkspace, 8
bind = $mainMod SHIFT, 9, movetoworkspace, 9
bind = $mainMod SHIFT, Z, movetoworkspace, special
bindm = $mainMod, mouse:272, movewindow
bindm = $mainMod, mouse:273, resizewindow
bind = $mainMod SHIFT, left, movewindow, l
bind = $mainMod SHIFT, right, movewindow, r
bind = $mainMod SHIFT, up, movewindow, u
bind = $mainMod SHIFT, down, movewindow, d
#vim bindings for move window
bind = $mainMod SHIFT, H, movewindow, l
bind = $mainMod SHIFT, L, movewindow, r
bind = $mainMod SHIFT, K, movewindow, u
bind = $mainMod SHIFT, J, movewindow, d
exec-once=dbus-update-activation-environment --systemd WAYLAND_DISPLAY XDG_CURRENT_DESKTOP
How to reproduce
Most easily reproduce with 2 windows in master/slave layout -- in my case, Firefox as master and foot as slave. Do something in the master window like play a video, change tabs, etc.
Crash reports, logs, images, videos
hyprland.log
https://github.com/hyprwm/Hyprland/assets/7462622/6da60d73-5fde-4b0b-b21c-48da5dbcf9d4
the latest commit should fix the problem
@imxyy1soope1 it's still persistent--that's why I opened this issue since the original issue was closed out.
if you float a window with blur, does the issue go away?
@vaxerski yep that fixes it if i do that
What does floating a window with blur mean exactly? I have the same issue and it is driving me insane!
@BendTheKn33 floating it instead of tiling the window and blur meaning the blur window effect
Thanks, that's what I thought, but when I for example toggle a terminal window, which has blur, to float, the issue remains. Yes, there is less flicker, but it's still clearly there and annoying.
Guess I'll wait for the next update and use another WM for the time being.
@BendTheKn33 for me the last usable commit is e40e486f61f2643578b9977b86f408799dbc75fd, so you may try rolling back to that for time being unless there are things in later commits you absolutely need. (Technically, 802ab58f8a129b42d61ec13898fd978e0920f7d8 doesn't have the issue, but it does have the black cursor issue / resolution issue)
you're saying 802ab58 doesn't have the issue? Can you bisect it then after that commit?
@vaxerski Correct--https://github.com/hyprwm/Hyprland/commit/aedcade68dd0615fd919a7249633a554d0accd81 is where it appears
roight. That doesn't help. :/
can any one of you add a glFlush() as a first thing in beginRender()?
@vaxerski doesn't seem to make any difference
:(
patch.txt
?
renamed cuz it's not limited to nv
@vaxerski no dice
Didn't work for me either.
:(
I assume wlroots with nvidia patches + sway-git does not exhibit this issue?
what about d2c3b23ace745d3f79926c71d4cb3c481a394841
hyprlandCrashReport135341.txt
@vaxerski it goes poopy now
see #3998 already fixed in head
@vaxerski that seems to have fixed it!!
the crash or the artifacts?
oke, great, closing then :)
|
2025-04-01T06:38:59.931799
| 2024-03-12T22:52:03
|
2182815717
|
{
"authors": [
"BowlBird",
"vaxerski"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6890",
"repo": "hyprwm/hyprlock",
"url": "https://github.com/hyprwm/hyprlock/issues/185"
}
|
gharchive/issue
|
[REQ] Fade out option
Current Behavior
On authentication, the onscreen graphics are immediately killed.
Wanted Behavior
Have an option in the config for the graphics to fade out like how they fade in before the graphics are killed.
#60
|
2025-04-01T06:38:59.942884
| 2017-05-15T13:31:34
|
228715648
|
{
"authors": [
"Wiloud",
"hyyan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6891",
"repo": "hyyan/woo-poly-integration",
"url": "https://github.com/hyyan/woo-poly-integration/issues/162"
}
|
gharchive/issue
|
Stock doesn't decrease after selling one variation of a variable product
Hi there and thanks for your plugin !
Here is the description of an issue I'm experiencing:
Can you reproduce this issue on default Wordpress theme (eg Storefront)?
Yes.
Can you reproduce this issue when all other plugins are disabled except WooCommerce, Polylang and Hyyan WooCommerce Polylang Integration?
Yes.
What product versions and settings are you using when this issue occurs?
PHP: 5.6.30
WordPress: 4.7.4
WooCommerce: 3.0.6
Polylang: 2.13
Hyyan WooCommerce Polylang Integration: 0.29.1
Browser: Chrome (58.0.3029.96)
Steps to Reproduce
Check the stock in a variable product
Go to the variable product in shop
Buy the variable product
Check the stock in variable product which didn't decrease
What I Expected
I expected the stock to decrease by 1 item when the user buys 1 item.
What Happened Instead
The stock didn't decrease.
Hey @Wiloud, please note that version 0.29.1 doesn't work with WooCommerce 3.0.6. Try downloading the plugin from the master branch; it contains the latest updates to support WooCommerce 3.0.6.
The new version will be released soon in the WordPress repo
Hey @hyyan , thanks a lot for this quick answer.
It worked !
Best !
Wil
|
2025-04-01T06:38:59.965124
| 2022-08-07T13:17:42
|
1331014104
|
{
"authors": [
"hzwer",
"oblessnoob",
"yaroslav-semeniuk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6892",
"repo": "hzwer/Practical-RIFE",
"url": "https://github.com/hzwer/Practical-RIFE/issues/17"
}
|
gharchive/issue
|
Found bugs in inference_img.py
inference_img.py - line 99
with bugs:
res.append(model.inference(img0, img1, (i+1) * 1. / (n+1), args.scale))
fixed:
img_list.append(model.inference(img0, img1, (i+1) / n))
There's no res variable, so I assumed it should be img_list
Formula was incorrect, resulting in a 0.333 ratio instead of 0.5 for x2 interpolation
There's no args.scale argument, so I removed it, but you can add it to the argument list to keep it
Thank you! Modified it.
Also, it seems like at line 70, it's not sending the ratio to model.inference(), resulting in a 0.5 ratio every time, even if another was provided. But I'm not 100% sure. My fix:
img_list = [img0, model.inference(img0, img1, args.ratio), img1]
Hey @hzwer - I wrote another fix above, but it seems you didn't see it. I don't know GitHub well. Sorry.
@yaroslav-semeniuk Thank you so much for your feedback!
Hey @hzwer - I wrote another fix above, but it seems you didn't see it. I don't know GitHub well. Sorry.
You can try forking the project to your own branch, fixing it there, and opening a pull request from your branch.
|
2025-04-01T06:38:59.980102
| 2015-12-16T00:54:25
|
122402591
|
{
"authors": [
"jamuhl",
"perrin4869"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6893",
"repo": "i18next/i18next-sprintf-postProcessor",
"url": "https://github.com/i18next/i18next-sprintf-postProcessor/pull/1"
}
|
gharchive/pull-request
|
Fix small documentation bugs
Just fixed a few broken links (npm is case sensitive and postProcessor gives you 404). Maybe it'd be a good idea to change the name of this repo to lower case too :)
There was a small mistake with the name of the overloadTranslationOptionHandler method
One more thing: bower installation doesn't work yet, so I left it as-is in case they support case-sensitive names.
Bower should now be registered... I missed that installing from the repo seems not to work... so I will need to register all the new stuff on Bower.
renamed the function: overloadTranslationOptionHandler: sprintf.overloadTranslationOptionHandler so it matches with i18next
thank you for the help
|
2025-04-01T06:39:00.051129
| 2023-03-31T18:18:17
|
1649817128
|
{
"authors": [
"ehoogerbeets"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6895",
"repo": "iLib-js/i18nlint",
"url": "https://github.com/iLib-js/i18nlint/pull/12"
}
|
gharchive/pull-request
|
Support linting of source code
added the ability to scan source code files and apply rules
added source-checker Rule for declarative rules
introduced the idea of an intermediate representation object
defines its own type, as each representation can be different, and its source path so that it is self-documenting
multiple parsers can parse the same file, each returning a different intermediate representation, e.g. HTML files can be parsed by an HTML parser, a Javascript parser, and a CSS parser
a single parser can produce multiple intermediate representations if it likes, so the result of parsing by one or more parsers is an array of intermediate representations
rules are specific to an intermediate representation type and are only applied to the representations that they know how to parse
pass in an API object for plugins to call
the only call in there currently is getLogger, to get the lint tool's logger
fixed the plugin loader to be able to load plugins written in ESM.
Currently, Node cannot load them from a directory, so you have to load them from the main file instead (see the sketch after this list).
moved more functionality into the Project class so that later, after more updates, we can offer linting as a library as well as a command-line tool
Checks will fail until the i18nlint-common library is approved and published to npm
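A hedged sketch of that ESM loading approach, assuming plugins are resolved by package name (the helper and names are illustrative, not the actual i18nlint loader code):
import { createRequire } from "module";
import { pathToFileURL } from "url";

const require = createRequire(import.meta.url);

async function loadPlugin(name) {
  // resolve the package's entry point (its main file), then import that file,
  // since import() of a bare directory path is not supported for ESM
  const mainFile = require.resolve(name);
  const mod = await import(pathToFileURL(mainFile).href);
  return mod.default ?? mod;
}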
Okay i18nlint-common is published, and this is updated and ready for review. (Yes, sorry it is so big.)
|
2025-04-01T06:39:00.057941
| 2021-02-03T20:45:20
|
800685746
|
{
"authors": [
"cingolo",
"iMicknl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6896",
"repo": "iMicknl/ha-sagemcom-fast",
"url": "https://github.com/iMicknl/ha-sagemcom-fast/issues/2"
}
|
gharchive/issue
|
Android MotoG9 discovery issue
Environment:
Sagemcom F@st 3865bProximus (b-box3)
Release: 8c.514A
GUI Version
<IP_ADDRESS>
The Sagemcom client seems to have issues discovering my Android devices.
After further investigation, the issue seems to be triggered by the 'hostname' and device type values seen in the router. By default the router doesn't set a friendly name, so I'm not sure if your application is expecting a value, because you are mapping the same friendly name value in the HA entity.
The workaround is: change the device type from 'Generic' to 'Mobile phone' and assign a friendly name, then reload the application
Action Taken:
Added the Sagemcom client integration
Reloaded Home Assistant
The client doesn't discover the Motorola G9 Android
Sagemcom F@st
<IP_ADDRESS>
12 devices and 12 entities
Tried reloading both the application and HA, but no luck
Changed the Motorola device type and the friendly name in the router
Reloaded the client and issue is fixed
Apologies, but I cannot share the logs; somehow HA didn't print any logs even though I added the required lines to the configuration file.
Currently not much is logged from the integration, so indeed you won't see a lot.
Currently we use the following definition for the name, so that shouldn't be the issue. Are you familiar with Python, and are you able to try whether it works with the Python client directly? (https://github.com/iMicknl/python-sagemcom-api)
@property
def name(self):
"""Return name of the device."""
return self.user_host_name or self.host_name
OK, I will give it a try using the client.
@cingolo could you have a look if it works with the latest version? :)
OK, I will run a few tests later today.
|
2025-04-01T06:39:00.078890
| 2021-01-18T12:50:49
|
788244173
|
{
"authors": [
"iMicknl",
"inconsx",
"labr3y"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6897",
"repo": "iMicknl/ha-tahoma",
"url": "https://github.com/iMicknl/ha-tahoma/issues/351"
}
|
gharchive/issue
|
Open/Close Status indicator (percentage) inconsistency - AwningValance (io:AwningValanceIOComponent)
[x] I have read the Readme, including the Advanced section regarding debugging.
Describe the bug
The percentage indicator is not consistent across Somfy products.
To Reproduce
configure entity in entity card
Expected behavior
The same percentage status indicator for every Somfy cover product
Screenshots
Environment (please complete the following information):
ha-tahoma version: 2020.6.4-15
Home Assistant version: core-2021.1.4
Platform: cover.volant
Device: (if your problem is related to a specific device)
pc
Model:
current_position: 0
ui_class: Awning
widget: AwningValance
controllable_name: 'io:AwningValanceIOComponent'
rssi_level: 32
'core:NameState': Volant
'core:PriorityLockTimerState': 0
'core:StatusState': available
'core:DiscreteRSSILevelState': low
'core:RSSILevelState': 32
'core:ClosureState': 0
'core:OpenClosedState': open
'core:Memorized1PositionState': 105
friendly_name: Volant
supported_features: 527
device_class: awning
Type: controllable_name: 'io:AwningValanceIOComponent'
Additional context
Add any other context about the problem here.
Thanks for reporting @inconsx. It seems that there is an issue in the latest version of ha-tahoma or changes have been made on the side of Somfy TaHoma...
We have had these issues in the past as well, however when we reverse the state, someone else will have issues with exactly the same device. We need to investigate more, will let you know when I do require more details.
Do you know the type (TaHoma vs Connexoon) of your hub and the firmware version of your hub? And just to double check, the status on tahomalink.com is correct?
(possible duplicate of #352 and #350)
Hey
It's a Tahoma and the firmware is 2.18.
Status is correct.
Thanks and regards
Could you give https://github.com/iMicknl/ha-tahoma/archive/fix/awning_valance.zip a try? Extract this file and place custom_components/tahoma in your custom_components folder. This should solve your issues.
Hi,
can we reopen this issue.
Volant is now 90 % open but is shown 10% open
Tahoma FW: 2.19
Home Assistant 2021.2.3
current_position: 12
rssi_level: 38
'core:NameState': Volant
'core:PriorityLockTimerState': 0
'core:StatusState': available
'core:DiscreteRSSILevelState': low
'core:RSSILevelState': 38
'core:ClosureState': 88
'core:OpenClosedState': open
'core:Memorized1PositionState': 105
friendly_name: Volant
supported_features: 527
device_class: awning
@inconsx which version of the TaHoma integration are you using?
@iMicknl
2.4.7
@inconsx this fix has been confirmed by others with the same device. Perhaps there are even differences between the same devices.
Do all buttons work? (open / close / set position). I would like to understand if this is just a cosmetic issue or if it disables the functionality.
@iMicknl
Every button is working, yes.
It's just inverted, and it's the only entity which shows the percentage indicator when closed (before 0%, and now inverted 100%).
Hello,
I've got the same issue. It's been like this since the start (about 1 year ago).
Apart from the awning, all other Somfy equipment reports the correct states in HA.
I went through several Tahoma firmwares and several Home Assistant releases, and no improvement has been seen.
Currently I'm on these releases :
Tahoma firmware : 2021.1.4-12
Home Assistant : docker 2021.4.3
HA Tahoma : 2.4.8
Below, in the screenshots, you can see the awning is reported to be open when it's actually closed (the Tahoma app on Android and tahomalink.com show me the correct status). I can still use the HA Tahoma integration to open or close the awning, but I have to keep in mind that the numbers and status are incorrect.
@labr3y thanks, I would like to see if I can tackle this issue this week. Is anyone willing to work with me via Discord to see if we can debug / fix this issue? It would be great to have this solved before the integration is added to core.
Join us on Discord (https://discord.gg/3Wpn6q9z) or add iMick#1903 to discuss in more detail. Eventually the easiest would be if you are able to change your password temporarily and share your credentials privately, so I can have a quick look and see how to fix it :).
I could help with that. I'll try to join you later on discord after work. We can see how and when we can do some testing.
|
2025-04-01T06:39:00.147531
| 2022-04-13T20:27:40
|
1203757642
|
{
"authors": [
"Burtdoe",
"ProudFoxx42069",
"fpcarva"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6898",
"repo": "iPERDance/iPERCore",
"url": "https://github.com/iPERDance/iPERCore/issues/145"
}
|
gharchive/issue
|
Checkpoint page not found
https://download.impersonator.org/iper_plus_plus_latest_checkpoints.zip - page not found
@fpcarva have you found a different link?
https://github.com/iPERDance/iPERCore/blob/main/docs/install.md
scroll down to OneDrive 1 or OneDrive 2
@Burtdoe can I use the OneDrive link in the wget command?
|
2025-04-01T06:39:00.155962
| 2016-12-18T16:04:56
|
196283369
|
{
"authors": [
"iTaybb",
"nima2017"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6899",
"repo": "iTaybb/pySmartDL",
"url": "https://github.com/iTaybb/pySmartDL/issues/11"
}
|
gharchive/issue
|
pause
Hello, I want to know how I can pause and resume the download of a file.
Thanks!
There are Pause and Unpause functions.
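A hedged usage sketch based on that answer (method names as described above; the URL and destination path are placeholders):
from pySmartDL import SmartDL

obj = SmartDL("https://example.com/file.zip", "./file.zip")
obj.start(blocking=False)  # start the download in the background

obj.pause()    # pause the transfer
obj.unpause()  # resume it

obj.wait()     # block until the download finishes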
|
2025-04-01T06:39:00.159782
| 2016-07-13T12:49:27
|
165312150
|
{
"authors": [
"MVakas",
"iTofu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6900",
"repo": "iTofu/LCBannerView",
"url": "https://github.com/iTofu/LCBannerView/issues/5"
}
|
gharchive/issue
|
Adding in UITableViewCell
I’m adding LCBannerView in a UITableViewCell, and it seems that it is being added again and again, as I’m seeing overlapped images after scrolling.
Any suggestions on how I can fix this?
Sorry, this is my fault... But I can't fix it right now...
But I have some advice: you could change some properties in the .m file, like imageName, imageURLs... You could move those from the .m file to the .h file, and then you should reset bannerView.imageName, bannerView.imageURLs... in UITableView's delegate method tableView:cellForRowAtIndexPath:, for example:
cell.bannerView.imageName = self.dataSourceArray[indexPath.row][@"imageName"];
cell.bannerView.imageURLs = self.dataSourceArray[indexPath.row][@"imageURLs"];
If I'm free, I will fix it this way :)
Thanks. 👍
I’ll give it a try.
Hey, I fixed this bug a moment ago; you can run pod update to get the latest release.
For more information you can see the README's Release Logs.
Thanks much appreciated.
|
2025-04-01T06:39:00.181086
| 2022-12-14T20:46:53
|
1497365651
|
{
"authors": [
"gretanausedaite",
"mayank99"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6901",
"repo": "iTwin/iTwinUI",
"url": "https://github.com/iTwin/iTwinUI/pull/854"
}
|
gharchive/pull-request
|
feat: Add new Icon component
CSS: Added new iui-svg-icon class that takes an svg as a child. Allows changing size and fill using data-iui-icon-color and data-iui-icon-size attributes.
React: Added an Icon component, used like this:
<Icon><SvgCloseSmall /></Icon>
<Icon fill='informational'><SvgInfoCircular /></Icon>
Other notes:
I used the name iui-svg-icon to prevent conflicts with iui-icon that is already used in a bunch of components.
I kept backwards compatibility with the data-iui-icon-color attributes because those are already part of our released css.
I tried not to change the icon.scss file because it contains mixins that are used all over the project. I wanted to reuse the icon-sizes map, but I was having issues because Sass sees xl as a string but 2xl as a number, and changing the whole map to be explicitly strings would result in dozens of changes in other components, so I just duplicated the icon sizes map (see the sketch after these notes).
It's not ideal that our utils folder is a catch-all used to dump all our helper mixins and components. It is confusing to keep track of what's internal vs user-facing, makes it hard to change things, and results in a bloated util.css file. We should improve this folder in the future.
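For context on that Sass quirk: an unquoted 2xl parses as the number 2 with unit xl, while xl parses as a string. A hedged sketch of the duplicated map with quoted keys (the names and pixel values are illustrative, not the actual iTwinUI code):
// quoting every key makes "xl" and "2xl" behave alike (both strings)
$svg-icon-sizes: (
  "s": 12px,
  "m": 16px,
  "l": 24px,
  "xl": 32px,
  "2xl": 48px,
);

@each $name, $size in $svg-icon-sizes {
  .iui-svg-icon[data-iui-icon-size='#{$name}'] {
    width: $size;
    height: $size;
  }
}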
Pending
[x] narrow down list of sizes (see comment)
[x] test with new svgs (after https://github.com/iTwin/iTwinUI-icons/pull/56)
[x] test with our components (example: passing <Icon> to <Button>)
I've changed the default size to medium because that is the most common use case. Users will need to opt into autoscaling with text using size='auto'.
so what do we do about this? https://github.com/iTwin/iTwinUI/pull/854#issuecomment-1372811544
I've also tested in some of our components like <IconButton> and it displays fine, but the wrapping span ends up increasing in height a little bit.
Is it avoidable?
I haven't been able to figure out why this is happening; I need to investigate and play around a bit more. Maybe someone else can figure it out?
should be fixed in bccf588. does that look good?
|
2025-04-01T06:39:00.197745
| 2022-07-28T19:00:15
|
1321357822
|
{
"authors": [
"PeterJBassett",
"markschlosseratbentley",
"pmconne"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6902",
"repo": "iTwin/itwinjs-core",
"url": "https://github.com/iTwin/itwinjs-core/issues/3989"
}
|
gharchive/issue
|
Implement normal mapping
Look at this Three.js tutorial about normal mapping: https://sbcode.net/threejs/normalmap/
Could we do something similar in the iTwin.js renderer? This could be valuable when rendering terrain, etc. if the source data has a normal map for a mostly-flat surface.
Requires:
[x] Ability to define and use normal maps on frontend
[ ] Support for generating pattern map coords in shader (@MarcNeely)
[x] Ability to define normal maps in RenderMaterial elements (@pmconne)
[x] Ensure MicroStation connector includes normal maps in material definitions (@pmconne)
[x] Include normal map info in tiles (@pmconne)
Yes, we can do normal mapping. It would require WebGL 2 because of max texture units, but that's no longer a problem except for iOS users who refuse to update to 15. IIRC the DGN converter already preserves normal maps (and bump, glow, etc. maps) from MicroStation when converting materials.
Appears to be high priority.
(for one user).
@DStradley will look at relevant QV code.
@PeterJBassett We are looking at adding support for normal maps -- could you provide us any information or documentation on how the UVs are stored in the data passed in from FutureOn subsea models?
cc @MarcNeely @pmconne
Hi @PeterJBassett Just checking in again -- we would like to implement normal mapping for FutureOn subsea models -- do you know what process you are following to create the UV coordinates from a FutureOn normal map?
Hello @MarcNeely - I apologise, I forgot to reply. Here is some information from a colleague.
So first the UV wrapping should be set to ClampToEdgeWrapping. Filter to Linear.
We use tangent space normal maps. Each pixel in the normal map is actually a normal encoded, where R = X, G = Y and B = Z.
The encoding follow the usual :
normal = texture2D( normalMap, vUv ).xyz * 2.0 - 1.0;
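For reference, a minimal three.js sketch of those sampler settings (file name and variable names other than the three.js API are illustrative):
import { TextureLoader, ClampToEdgeWrapping, LinearFilter } from 'three'

const normalMap = new TextureLoader().load('seabed_normals.png')
normalMap.wrapS = ClampToEdgeWrapping
normalMap.wrapT = ClampToEdgeWrapping
normalMap.minFilter = LinearFilter
normalMap.magFilter = LinearFilter
// in the fragment shader, each texel decodes to a tangent-space normal:
// vec3 n = texture2D(normalMap, vUv).xyz * 2.0 - 1.0;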
The code for generating the UVs is quite simple, and I think it should be part of the exporter?
const vertices = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 3)
const normals = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 3)
const uvs = new Float32Array(this.resource._heightmapWidth * this.resource._heightmapHeight * 2)
let offset = 0
let offset2 = 0
let offset3 = 0
for (let iy = 0; iy < gridY1; iy++) {
  const y = iy * segmentHeight - heightHalf
  for (let ix = 0; ix < gridX1; ix++) {
    const x = ix * segmentWidth - widthHalf
    vertices[offset3] = x
    vertices[offset3 + 1] = y
    vertices[offset3 + 2] = heights[offset]
    normals[offset3] = 0
    normals[offset3 + 1] = 0
    normals[offset3 + 2] = 1
    uvs[offset2] = ix / gridX
    uvs[offset2 + 1] = 1 - iy / gridY
    offset3 += 3
    offset2 += 2
    offset++
  }
}
@MarcNeely @DStradley @pmconne See above reply about UVs + normal maps.
@PeterJBassett
Thanks for the information.
A few more questions--
In the FutureOn example data we have, there is a pattern texture depicting the seabed being drawn in addition to the normal map texture. Is it correct that this is displayed in addition to the normal map?
We also need to know where the UV coordinates for the pattern texture originate. Do you use the same UV coordinates as the normal map texture?
If they are not the same UV coordinates, can they be derived from them using a scaling factor or some kind of transformation?
What is the wrapping mode for the pattern texture?
cc @MarcNeely @DStradley @pmconne
@MarcNeely @DStradley @pmconne
Here is some further info from my colleague.
So, this will get a bit hairy.
There are actually two ways of rendering the user can choose from:
Simple color (in this case, we just use the color defined in “seabedColor”). In this case we just display the seabed with the normal map on top. We usually scale the normals a bit so that it looks a bit better.
We use a texture; in this case the textures can be found inside the “public” assets path of the software. The “texture key” is defined in “seaBedTextureName”. We default to “muddyDiffuse” if this value is not set.
const filenamesToLoad = {
  rocksDiffuse: 'RockySeabed_dark_01_1024x1024.jpg',
  rocksLightDiffuse: 'RockySeabed_light_01_1024x1024.jpg',
  rocks2Diffuse: 'rocks2.jpg',
  sandsDiffuse: 'SandySeabed_dark_01_1024x1024.jpg',
  sandsLightDiffuse: 'SandySeabed_light_01_1024x1024.jpg',
  muddyDiffuse: 'MuddySeabed_dark_01_1024x1024.jpg',
  muddyLightDiffuse: 'MuddySeabed_light_01_1024x1024.jpg',
  desertDiffuse: 'DesertSand_01_1024x1024.jpg',
}
for (const key in filenamesToLoad) {
  const filename = filenamesToLoad[key]
  const texture = textureLoader.load('/assets/textures/seabed/' + filename, (texture) => {
    threeVisualizer.requestRender()
  })
  texture.wrapS = RepeatWrapping
  texture.wrapT = RepeatWrapping
  texture.encoding = sRGBEncoding
  this.seabedTextures[key] = texture
}
So for example, using my local dev: https://futureon-designer.lvh.me/assets/textures/seabed/RockySeabed_dark_01_1024x1024.jpg
If we use the texture, the shader is a bit particular as we try to tile it depending on the depth so that it does not look too bad. So basically we generate the UVs dynamically:
{
  shader.vertexShader =
    `
    //attribute vec4 homogeneousPosition;
    uniform vec2 uvOffsetCustom;
    //uniform mat4 modelViewProjectionMatrix;
    uniform float orthographicFakeDistance;
    varying vec2 vUvCustom;
    varying float vDepth;
    `
    +
    insertStringAfterSubstring('<fog_vertex>', shader.vertexShader, `
      vec2 modelPosition = (modelMatrix * vec4(position, 0.0)).xy; // 0.0 is used to ignore translation
      vUvCustom = uvOffsetCustom + modelPosition;

      vec4 viewPosition = modelViewMatrix * vec4(position, 1.0);
      if (isOrthographic) {
        vDepth = orthographicFakeDistance;
      } else {
        vDepth = -viewPosition.z;
      }

      //#if (HAS_PRECALCULATED_HOMOGENEOUS_POSITION == 1)
      //  gl_Position = homogeneousPosition;
      //#else
      //gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
      // WEIRD! The modelViewProjectionMatrix had more artifacts than in-shader "projectionMatrix * viewPosition"; flashing artifacts for the infinite seabed,
      // and sometimes weird clipping or something making the quad concave. Bug story for future reference: To fix infinite seabed flashing, I wrote code to update
      // the position attribute to be the frustum-to-infinite-seabed intersection. Still had flashing. So I did the homogeneous position calculation on CPU and
      // passed that directly in instead. That fixed it. Later, I looked at the water, and noticed that even though I don't do any special frustum intersection or
      // such (I just set a 1x1 quad's scale to be camera.far * 2), it didn't have any flashing artifacts. I tried doing "projectionMatrix * viewPosition" in
      // the shader instead of passing it as a uniform for the seabed material, to match the water material, and sure enough, that was what caused the flashing
      // for some reason. Not sure why, but perhaps the uniforms have less precision than the in-shader calculations here? Or perhaps a large number of uniforms
      // just messes things up without throwing an error? (Edit: I checked and .capabilities says we have 1024 uniforms available, so weird if it would mess up this way.)
      // Take note of this for the future. However, the frustum intersection still makes UVs precise on the seabed, so I will keep that. However, the homogeneous
      // position attribute doesn't seem like it's needed, hence I commented it out.
      gl_Position = projectionMatrix * viewPosition;
      //#endif
    `)
}
// Extend fragment shader
{
  shader.fragmentShader =
    `
    varying vec2 vUvCustom;
    varying float vDepth;
    `
    +
    replaceSubstring('#include <map_fragment>', shader.fragmentShader, `
      float logDepth = log2(vDepth);
      float repetitions = 3.0;
      vec4 texelColor = mix(
        texture2D(map, vUvCustom / clamp(pow(2.0, floor(logDepth)), float(MIN_TILE_SIZE), float(MAX_TILE_SIZE)) * repetitions),
        texture2D(map, vUvCustom / clamp(pow(2.0, floor(logDepth) + 1.0), float(MIN_TILE_SIZE), float(MAX_TILE_SIZE)) * repetitions),
        fract(logDepth)
      );
      diffuseColor *= texelColor;
    `)
}
Hi @PeterJBassett --
The texture mapping mode you describe above appears to be feasible to implement in iTwin.js as well as regular normal mapping.
Do you know the origin of this texture mapping technique and its name? Is this something you would like to see implemented in iTwin.js? (In addition to normal mapping).
cc @MarcNeely @DStradley @pmconne
Hello @markschlosseratbentley @MarcNeely @DStradley @pmconne
Apologies for the delay in replying. There is no standard name for the texture mapping technique.
This is the reply from my colleague:
If you mean the way we keep the seabed detailed as we zoom in and out, it's a shader part I wrote, found in TerrainMaterial.js. Not sure if it has a name. It basically shaderized your old LOD mesh method, but with fading between the two closest levels, to avoid popping as you zoom.
In addition to the frontend code, we need normal maps transferred from the backend. We also need scale.
Still in progress.
@MarcNeely please let me know what I can help with.
Hello @markschlosseratbentley Thanks for your update, apologies for not replying sooner.
Can you provide any more info on how to use the normal mapping feature, e.g. how to input the normal map image into iTwin?
@PeterJBassett supply a normal map when creating a RenderMaterialElement.
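A rough backend sketch of that (hedged: RenderMaterialElement.insert and its Params exist in iTwin.js, but the normal-map property name and shape below are assumptions; consult the current Params documentation for the exact names):
import { IModelDb, RenderMaterialElement } from "@itwin/core-backend";

// Sketch only. Assumes `normalMapTextureId` is the Id of a previously inserted
// Texture element holding the normal map image.
function insertSeabedMaterial(iModel: IModelDb, definitionModelId: string, normalMapTextureId: string): string {
  const params = {
    paletteName: "seabed-palette", // arbitrary grouping name, required by Params
    normalMap: { TextureId: normalMapTextureId, scale: 1.0 }, // assumption: see docs
  };
  return RenderMaterialElement.insert(iModel, definitionModelId, "SeabedMaterial", params as RenderMaterialElement.Params);
}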
|
2025-04-01T06:39:00.200684
| 2022-11-18T12:44:18
|
1457396166
|
{
"authors": [
"grigasp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6903",
"repo": "iTwin/presentation",
"url": "https://github.com/iTwin/presentation/issues/1"
}
|
gharchive/issue
|
React to UI code move
All UI code is about to be moved out of core into appui.
### Tasks
- [x] https://github.com/iTwin/presentation/issues/2
- [x] https://github.com/iTwin/presentation/issues/8
- [x] https://github.com/iTwin/presentation/issues/4
- [x] https://github.com/iTwin/presentation/issues/16
- [x] https://github.com/iTwin/presentation/issues/13
- [x] https://github.com/iTwin/presentation/issues/25
- [x] https://github.com/iTwin/presentation/issues/14
- [x] https://github.com/iTwin/presentation/issues/3
- [x] https://github.com/iTwin/presentation/issues/31
- [x] https://github.com/iTwin/presentation/issues/38
- [x] https://github.com/iTwin/presentation/issues/15
- [ ] https://github.com/iTwin/presentation/issues/40
- [ ] https://github.com/iTwin/presentation/issues/43
- [ ] https://github.com/iTwin/presentation/issues/42
All items done
|
2025-04-01T06:39:00.226642
| 2020-06-24T16:29:41
|
644750090
|
{
"authors": [
"dkreft",
"iaincollins"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6904",
"repo": "iaincollins/next-auth",
"url": "https://github.com/iaincollins/next-auth/issues/325"
}
|
gharchive/issue
|
Is there a sane way to store tokens created by a service?
Is it possible to use a token store provided by an existing non-OAuth authentication service?
I'm trying to use Providers.Credentials to integrate with an existing backend service that issues JWTs, but I'm having a heck of a time figuring out how to make this work.
When I POST /auth/tokens with the credentials, the access and refresh tokens are handed back to me. I need to have these tokens available to me in the API layer so that I can add them to the request headers of subsequent requests, but it's not clear to me how to sanely store these for later access (storing them in a global or singleton in a package feels side-effecty and generally "icky").
Here's what I have so far:
async function authorize({ username, password }) {
  const client = makeClient()
  try {
    const {
      data: {
        access,
        refresh,
      },
    } = await client.post('/auth/tokens', {
      email: username,
      password,
    })
    const { data: user } = await client.get('/users/me', {
      headers: {
        Authorization: makeAuthHeader(access),
      }
    })
    return user
  } catch (error) {
    console.error('ERROR: %o', error)
    return null
  }
}
Documentation feedback
[ ] Found the documentation helpful
[x] Found documentation but was incomplete
[ ] Could not find relevant documentation
[ ] Found the example project helpful
[x] Did not find the example project helpful
I've even dug through the source code a little to see if there was a way to "sneak" these tokens into the session, but I didn't see a clear path forward.
Hey thanks for the detail and for feedback on documentation.
I think I understand what you want to do and it makes sense. I actually had to check to see what would happen when I tried it because it's a reasonable expectation but I wasn't sure it was supported or not.
It turns out there is a bug with the credentials flow and the user object isn't persisted when you sign in, but it is supposed to be. You should be able to access the user object returned in the jwt() and signin() callbacks, but the user is coming through as a function rather than an object.
You can work around that by using a JWT callback option that takes care of that problem by calling the function:
callbacks: {
  jwt: async (token) => {
    if (typeof token.user === 'function') {
      token.user = token.user()
    }
    console.log('JWT DEBUG', token)
    return Promise.resolve(token)
  },
}
This is a legit bug and should be fixed.
@iaincollins Thanks, but the problem isn't that I can't get at the user object....the problem is that there's no easy way to store the access and refresh tokens I get back from my server. After much tinkering around, here's what I currently have. As you can see from the code below, I basically had to pass the res object all the way down into my authorize function so that I could set a pair of cookies. If you can think of a better solution, I'm all ears!
export default function handleRequest(req, res) {
  return nextAuth(req, res, makeConfig(req, res))
}

function makeConfig(req, res) {
  const authorize = makeAuthorizeFn(res)
  // redacted
}
export function makeAuthorizeFn(res) {
  return async function authorize({ username, password }) {
    const client = makeClient()
    try {
      const {
        data: {
          access,
          refresh,
        },
      } = await client.post('/auth/tokens', {
        email: username,
        password,
      })
      // This is a bit hacky, but at least it gets the tokens back
      // to the client.
      res.setHeader('Set-Cookie', [
        serialize('access', access, { path: '/' }),
        serialize('refresh', refresh, { path: '/' }),
      ])
      // redacted
}
Oh sure! The idea is that you should be able to set them on the user object and for that to get persisted (securely) in the JWT.
One thing to note here is that when I tried to set anything other than { id, name, email, image } on the user object, it would not make it into the session. I didn't dig far enough into the code to figure out why or where the other keys were being weeded out.
If you add a property to the user object in this scenario, then it's persisted to the JSON Web Token (assuming you use the workaround above until the bug is fixed - when it's fixed you won't need to touch the jwt callback).
The session callback can be used to decide what properties can be safely exposed / exported from the JWT to the client side session object, if that makes sense.
@iaincollins, while I did notice that the example provided in the documentation was setting the user as a function, I do not have that problem because I'm already resolving the authorize function with the user object...but you couldn't know that because I redacted the code to keep the focus where it belongs. :-)
Here's the entirety of my makeAuthorizeFn higher-ordered function:
export function makeAuthorizeFn(res) {
  return async function authorize({ username, password }) {
    const client = makeClient()
    try {
      const {
        data: {
          access,
          refresh,
        },
      } = await client.post('/auth/tokens', {
        email: username,
        password,
      })
      // This is a bit hacky, but at least it gets the tokens back
      // to the client.
      res.setHeader('Set-Cookie', [
        serialize('access', access, { path: '/' }),
        serialize('refresh', refresh, { path: '/' }),
      ])
      const { data: user } = await client.get('/users/me', {
        headers: {
          Authorization: makeAuthHeader(access),
        }
      })
      // N.B.: Adding fields to the user object at this point does not work
      return user
    } catch (error) {
      console.error('ERROR: %o', error)
      return null
    }
  }
}
Just for grins and giggles, I tried adding user.foo = "hello, world" where I currently have the N.B. comment, but when the session is dumped, the foo datum is nowhere to be found:
{"user":{"name":"Dan Kreft","email":"dan@kreft.net","image":null},"expires":"2020-07-24T23:20:32.599Z"}
This is what I was referring to in my previous comment.
I cannot simply use the jwt callback, because that callback does not have access to the tokens that were returned to me by the service.
This is what your callbacks need to look like:
callbacks: {
  jwt: async (token) => {
    if (typeof token.user === 'function') { token.user = token.user() } // Workaround required for bug
    return Promise.resolve(token)
  },
  session: async (session, token) => {
    // Copy properties from token contents to the client side session
    //
    // By default only 'safe' values like name, email and image, which are
    // typically needed for presentation purposes (e.g. "you are logged in as…"),
    // are exposed, to avoid exposing sensitive information to the client inadvertently.
    session.user.data = token.user.data
    return Promise.resolve(session)
  }
}
The user object returned from the authorize callback is saved to the JWT (e.g. in token.user).
The session() callback controls what data is exposed from the JWT to the client session.
You shouldn't need the JWT callback; it's only used above because of a bug with JWT and the Credentials Provider.
The bug related to this is now being tracked in #329
Note that async is not necessary (it actually causes eslint to complain) if you don't have an await in the function. I'll have another look at your proposed solution.
Okay, now I see what's going on here. It's a little confusing the way it's laid out here because I never would have thought that the session() callback's second argument would be the thing returned by authorize().
At this point, though, I'm thinking that I'm probably going to stick with my current res.setHeader() solution because at least with the way I have it laid out now, my client doesn't have to worry about extracting the tokens from the session and figuring out where to put them (e.g. in cookies or in localStorage)...it only has to concern itself with pulling the tokens out of their respective cookies.
Thanks for your diligence...I feel more confident in using NextAuth knowing that you're so attentive. :-)
Note that async is not necessary (it actually causes eslint to complain)
The purpose of it whenever I give an example code is to make it clear the function is async and a promise is expected.
(Otherwise people then immediately ask "How can I do async calls?")
Okay, now I see what's going on here. It's a little confusing the way it's laid out here because I never would have thought that the session() callback's second argument would contain the thing returned by authorize().
To clarify, as per the docs it's always the contents of the JSON Web Token as the second argument to the session() callback. It is present if, and only if, JWT sessions are enabled. This is not always the same as what is returned by authorize() as what is saved to the JWT can be overridden in the jwt() callback.
For users who are using other configurations, other data will be stored in the JWT (e.g. a user object from a database and/or the profile response from an OAuth Provider).
The session() callback provides a way to selectively expose things to the client securely, in a simple and uniform way that works for all providers, and is almost certainly less code and more performant than other solutions.
my client doesn't have to worry about extracting the tokens from the session and figuring out where to put them (e.g. in cookies or in localStorage).
Any properties accessed via a session object are automatically kept up to date, kept in sync across tabs and windows, and persisted across page navigation in a single page app, so that people can avoid doing exactly this (which, actually, you have done - you have put them in cookies manually).
I'd recommend to anyone else reading this that they use the session property; if nothing else, it's much less code to maintain and makes it easier to avoid bugs.
To confirm for anyone reading this in future, this is all you need to do to add data to a session from if you return it in a user object from authorize():
callbacks: {
  session: (session, token) => {
    session.user.data = token.user.data
    return session
  }
}
If you do that, they will be there when you access the session object from the client.
@iaincollins I'm taking another look at using the session (I was ignorant of getSession()) and this looks promising, but I don't see a way to update data in the session after login. If I'm to store my tokens in the session, I need to be able to write to it after I refresh my access token. I've tried Googling for a solution, but to no avail. Is there something else I'm missing?
|
2025-04-01T06:39:00.228594
| 2024-04-18T20:06:48
|
2251468053
|
{
"authors": [
"ialbert",
"j-andrews7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6905",
"repo": "ialbert/genescape-central",
"url": "https://github.com/ialbert/genescape-central/issues/10"
}
|
gharchive/issue
|
Download annotations from GUI as CSV
Similar to the image saving, it'd be nice to have a button to download the annotations table directly from the GUI.
Links to download both the annotation and the dot file have been added to the interface.
|
2025-04-01T06:39:00.249876
| 2015-06-24T14:25:42
|
90691302
|
{
"authors": [
"karlhudsonphillips",
"louisameline"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6906",
"repo": "iamceege/tooltipster",
"url": "https://github.com/iamceege/tooltipster/issues/411"
}
|
gharchive/issue
|
Disable tooltips on mobile devices
Hi,
I'm trying to find a way to disable tooltips from being displayed when the viewport is smaller than 800px
Thanks
Hi,
you should probably use a condition like if ($(window).width() >= 800) { $(sel).tooltipster({...}); }.
You may also do this check in functionBefore and return false if the screen is too small, in case you want to account for the case where the user changes the resolution of his screen over time.
It's also possible to use a simple media query on .tooltipster-base if you want to hide the tooltips (they will actually still run though, but will be invisible). But beware that it might cause memory leaks in case they're not auto-closing.
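A minimal sketch of the functionBefore approach (assuming a v4-style signature, where returning false cancels the opening):
$(sel).tooltipster({
    functionBefore: function(instance, helper) {
        // only allow the tooltip to open on viewports at least 800px wide
        return $(window).width() >= 800;
    }
});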
Okay thanks
|
2025-04-01T06:39:00.266043
| 2024-04-24T04:57:49
|
2260319002
|
{
"authors": [
"iamstevendao",
"waveo-wangxiao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6907",
"repo": "iamstevendao/vue-tel-input",
"url": "https://github.com/iamstevendao/vue-tel-input/pull/458"
}
|
gharchive/pull-request
|
Expose Input focus & blur event
Expose the input's focus & blur events, so the user can control the focus / blur events in a form
thanks for your help @waveo-wangxiao!
|
2025-04-01T06:39:00.314832
| 2024-04-25T15:14:35
|
2263872316
|
{
"authors": [
"bahamasbahamas",
"ianarawjo",
"massi-ang"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6908",
"repo": "ianarawjo/ChainForge",
"url": "https://github.com/ianarawjo/ChainForge/issues/264"
}
|
gharchive/issue
|
Error When Prompting Mistral/Mixtral Model from AWS Bedrock
Description:
I successfully cloned the repository from the main branch and tested the application with LLMs deployed on Azure using the OpenAI resource, and everything worked as expected. However, I encountered an issue when attempting to prompt a Mistral model hosted on AWS Bedrock.
Steps to Reproduce:
Cloned the repository from the main branch.
Created temporary credentials on AWS for accessing the AWS Bedrock.
Attempted to send prompts to the Mistral/Mixtral LLM models.
Received the following error message for both model types.
Expected Behavior:
I expected that the prompts would be successfully processed by the Mistral/Mixtral models without any errors.
Observed Behavior:
When attempting to send prompts, the application failed to collect responses and suggested re-running the prompt node. The specific error logged was:
Errors collecting responses. Re-run prompt node to retry.
Mistral Mixtral: "c.get is not a function"
Environment:
Operating System: Ubuntu 22.04 + docker
Thanks for this issue. @massi-ang is the point person for Bedrock integrations.
@massi-ang might you know what is going on?
c.get doesn't sound like any code in the ChainForge code base (there is no "get" function or "c.get" call). So, it sounds like this might actually be a bug on the Bedrock side with loading the model. Not sure.
Hi @ianarawjo, do you use the boto3 AWS client to query the Bedrock models? If not, I'm curious—why don't you use it? I've noticed it's more straightforward since you can just use regular API keys without needing a session key.
I didn’t write the Bedrock integration, @massi-ang did. He would have to comment here.
Hi, we use the boto3 client. The @mirai73/bedrock-fm library is just a wrapper around the boto3 client to provide the correct prompt formatting for the different models. The reason why we require temporary credentials and a session token via the UI is to safeguard you against the possibility of accidentally leaking long term credentials.
I'll need to look at how to allow using just an AccessKeyId and SecretAccessKey when running locally, that is, when using chainforge/app.py serve.
@bahamasbahamas regarding the issue you are facing, I tested the latest main commit and I could not reproduce the error. Does this happen with any flow and prompt or just specific configurations?
Hi @bahamasbahamas. I reproduced this problem both on the chainforge.ai/play site and when using chainforge serve from a local install. On the other hand, I do not get this error when I clone the repo locally, build the react backend from chainforge/react-server with npm run build, and then serve it via python chainforge/app.py serve. Similarly there is no error when serving the react app with other means, as you can see by accessing this url https://d1sozkr3w0qe91.cloudfront.net/.
@ianarawjo can you share how the react app is built so that I can get to the root cause of the problem?
There is no difference in how the react app is built: npm run build is what is done in all cases. The sole difference is just adding /play to the HTML index page on the /play site. There are no other differences.
|
2025-04-01T06:39:00.337825
| 2017-01-10T11:12:05
|
199793802
|
{
"authors": [
"81735595",
"coveralls",
"iarna"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6909",
"repo": "iarna/gauge",
"url": "https://github.com/iarna/gauge/pull/89"
}
|
gharchive/pull-request
|
bugfix: error in node 0.12.7
For issue #88: TypeError: Object.keys called on non-object.
Modifies the order of the conditions.
Coverage remained the same at 93.592% when pulling 17a55be98fcc3d07f306aad58cfb8a7a55c8a473 on 81735595:master into 6971e27a577d165cde360ebed86a59dfc18ac55b on iarna:master.
An error in all versions of node, I expect. Thank you!
|
2025-04-01T06:39:00.354685
| 2023-05-29T14:43:16
|
1730888064
|
{
"authors": [
"aziz07ghm",
"ibarrond"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6910",
"repo": "ibarrond/Pyfhel",
"url": "https://github.com/ibarrond/Pyfhel/issues/193"
}
|
gharchive/issue
|
error with installing pyfhel "ERROR: Could not build wheels for Pyfhel"
I have two issues @ibarrond
the first, I installed pyfhel 2.0.1, but when I tried to import, I got this error
ModuleNotFoundError: No module named 'pyfhel'
which leads me to the main issue that made me use pyfhel version 2.0.1
whenever I try to install the new versions using
pip install pyfhel==3.4.0
or
pip install pyfhel==3.4.1.
or even the plain pip install pyfhel
I got this error over and over again
I need this for my master's thesis. Please help as soon as possible.
Try with Pyfhel, capitalizing the first letter.
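i.e., in Python:
from Pyfhel import Pyfhel  # the package and class are both capitalized
he = Pyfhel()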
For the installation of a more recent version:
Try in a console with admin rights.
Otherwise install it in WSL.
Thank you for your replay
However, I have tried following your instructions, but unfortunately, it didn't work.
I even tried it on a Kali Linux VMware system, but it didn't work either, showing the same error message even when using sudo for superuser privileges.
Closing, since the error was posted as images and not as text.
|
2025-04-01T06:39:00.364706
| 2023-11-03T07:06:16
|
1975565777
|
{
"authors": [
"baack",
"ibbaa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6911",
"repo": "ibbaa/keepitup",
"url": "https://github.com/ibbaa/keepitup/issues/5"
}
|
gharchive/issue
|
[Feature request] Run ping only till the first success
Currently the user can choose the number of pings to make. But in fact, quite often 1 successful ping is enough to understand that the server is online, while 1 failure should not be enough, because the problem can be with the user's mobile connection, so additional attempts would be required.
Proposal: Add a new checkbox "Stop after the first success" near the ping amount (3 by default) setting.
Thanks for the suggestion. I see the point, but this cannot be done the way it's currently implemented. The Java/Android API does not provide a library for ICMP/ping. The app simply calls the system ping and parses the output for additional info; success is decided by the ping process return code (0=success). The ping count is the "-c" option of the ping command, and it simply pings n times and does not stop prematurely. Of course I could do several pings with "-c 1" and aggregate everything myself, but I think it's too much effort for too little gain. The current behaviour also makes sense and I like it, so it would mean 2 different implementations and a switch.
Of course I could do several pings with "-c 1" and aggregate everything myself, but I think it's too much effort for too little gain.
In this case I would definitely do that, call it several times.
The current behavior is not making much sense to me. E.g. what should happen if the user sets 3 pings and gets 2 successes and 1 failure? Should the user be notified that the server is down or up? In my opinion it is up, but that means that all pings out of 3 were useless except the first that succeeded, which is exactly what I proposed.
Maybe I am missing some use case for the current behavior.
It's the "standard" behaviour of the ping utility. It reports overall success on at least one successful attempt. For me this makes the most sense. More pings in one try offer additional info, like average latency. However, with the suggested way I can do whatever behaviour seems reasonable and I'm not tied to ping's behavior, so it's obviously better, but it's a matter of whether it's worth the effort, since for now it may do additional pings that may not be necessary (depending on one's view on this), but they do not hurt too much either. I'll think about it when I find the time for the pings. The time schedule feature will take a lot of time anyway. Maybe I'll be in the mood to do other things in between, but not now.
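For reference, a sketch of the suggested aggregation on top of the system ping (assumption: like the app, it relies only on the exit code):
# try up to 3 single pings, stop at the first success (exit code 0)
for i in 1 2 3; do
    if ping -c 1 "$HOST" > /dev/null 2>&1; then
        echo "up"; exit 0
    fi
done
echo "down"; exit 1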
Release 1.5.0 does provide "stop on success" switch for ping and connect
|
2025-04-01T06:39:00.370243
| 2024-08-16T14:52:58
|
2470432897
|
{
"authors": [
"alexanian"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6912",
"repo": "ibbis-screening/common-mechanism",
"url": "https://github.com/ibbis-screening/common-mechanism/pull/14"
}
|
gharchive/pull-request
|
Provide Diamond with --jobs, not "--processes"
Description
When investigating the inconsistent --processes tag, I realised that run_diamond.sh was not written in a robust way, and "processes" is not a good description of its function.
Specifically, in this command (pulled out the parts not relevant to parallelization):
ls ${DB_PATH}/nr*.dmnd | parallel -j ${PROCESSES} \
  diamond blastx --quiet -d ${DB_PATH}/nr.{%}.dmnd \
    --threads ${THREADS} -o ${OUTPUT}.{%}.tsv
PROCESSES is not really "processes"; it's the number of jobs we tell parallel to run at once
we use the {%} replacement string, which is the job slot number, to decide which database we use in -d... I think the way we've set this up, we need to literally always be counting from 1-6 (which is what screen.py was doing, but not run_pipeline.sh). Based on some local testing of parallel, if we were running with only 2 processes, then the outputs would get overwritten (I think? not sure what diamond does with pre-existing output files)
I have changed this to set the number of jobs to run in parallel based on the threads and CPUs (unless a number is supplied) and made some other parts of run_diamond a bit more robust, then propagated those changes.
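A sketch of the job-count logic described above (variable names are hypothetical; the actual script may differ):
# derive the number of parallel jobs from available CPUs and threads per job;
# integer division may yield 0, which GNU parallel treats as "as many as possible"
CPUS=$(nproc)
JOBS=${JOBS:-$(( CPUS / THREADS ))}
ls "${DB_PATH}"/nr*.dmnd | parallel -j "${JOBS}" ...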
Issues:
Addresses #10, though we'll need to change the wiki once this is released
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected) (well, it's a breaking change to run_pipeline, but not to commec screen, which is the preferred interface)
Thanks for raising that, I hadn't caught the possible weird problem with integer division. I think (more or less by accident) this will be fine in this case, because GNU parallel interprets --jobs 0 as "run as many jobs in parallel as possible" according to the docs.
In the case where threads is 1, we aren't concerned about needing to manage the number of jobs because we're not multithreading, so -j 0 --use-cpus-instead-of-cores is the behaviour you want.
|
2025-04-01T06:39:00.392888
| 2021-11-17T01:45:00
|
1055613759
|
{
"authors": [
"daniel-heppner-ibigroup"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6913",
"repo": "ibi-group/datatools-ui",
"url": "https://github.com/ibi-group/datatools-ui/pull/737"
}
|
gharchive/pull-request
|
feature(editor): set route to in progress on edit, warn about unapproved routes when exporting
Checklist
[x] Appropriate branch selected (all PRs must first be merged to dev before they can be merged to master)
[x] Any modified or new methods or classes have helpful JSDoc and code is thoroughly commented
[x] The description lists all applicable issues this PR seeks to resolve
[x] The description lists any configuration setting(s) that differ from the default settings
[x] All tests and CI builds passing
[x] The description lists all relevant PRs included in this release (remove this if not merging to master)
Description
Two main changes:
When any part of a route gets edited, the progress gets set to In Progress.
In the Create Snapshot window, any in progress or pending routes will be displayed as a warning, since they will not be included in the published result.
The snapshot pane directly requests the list of routes from the backend, which requires a lock. Therefore, in the Editor Feed Source Panel, a lock is requested before opening the Snapshot pane. After closing, the lock is removed.
A few issues: Why are the tests failing? It looks like it is not related to my code.
Also, I'm having a type issue on line 78 in CreateSnapshotModal.js that shows up in VSCode.
"Cannot call fetchRouteEntities().then because property then is missing in function [1].Flow(InferError)"
I think we finally got there in the end!
|
2025-04-01T06:39:00.396342
| 2023-08-18T14:58:52
|
1856867591
|
{
"authors": [
"philip-cline"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6914",
"repo": "ibi-group/datatools-ui",
"url": "https://github.com/ibi-group/datatools-ui/pull/981"
}
|
gharchive/pull-request
|
Speed up feed source retrieval
Checklist
[x] Appropriate branch selected (all PRs must first be merged to dev before they can be merged to master)
[x] Any modified or new methods or classes have helpful JSDoc and code is thoroughly commented
[x] The description lists all applicable issues this PR seeks to resolve
[x] The description lists any configuration setting(s) that differ from the default settings
[ ] All tests and CI builds passing
Description
Companion front end PR for https://github.com/ibi-group/datatools-server/pull/529. This PR makes use of the new FeedSourceSummary endpoint on the front end to speed up the FeedSourceTable.
e2e tests, of course, will fail until the back end PR is merged
Backend is merged, tests should be passing!
|
2025-04-01T06:39:00.401228
| 2024-08-24T01:15:11
|
2484094152
|
{
"authors": [
"lostmygithubaccount"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6915",
"repo": "ibis-project/ibis-analytics",
"url": "https://github.com/ibis-project/ibis-analytics/pull/63"
}
|
gharchive/pull-request
|
refactor: simplify, move frameworks, etc.
drop dagster (memory issue that needs investigation + drop orchestration complexity)
add website
streamlit -> shiny
add a package/CLI
no more VM needed, just run in a GHA
generally a regression on the dashboard visualizations, but all the data is there
going to yolo merge this and go from there
|
2025-04-01T06:39:00.411763
| 2023-02-06T16:33:14
|
1572895750
|
{
"authors": [
"biharicoder",
"sahil11129"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6916",
"repo": "ibm-build-lab/Watson-NLP",
"url": "https://github.com/ibm-build-lab/Watson-NLP/pull/41"
}
|
gharchive/pull-request
|
Updated Notebook
Updated Notebook Pre-Trained Models
Updated Notebook Fine-Tune Models
@sahil11129 Table of contents still has a different format. Please let me or @Abhilasha-Mangal know if you are having trouble formatting it.
|
2025-04-01T06:39:00.430506
| 2024-05-14T18:39:28
|
2296124131
|
{
"authors": [
"AdamBrousseau"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6917",
"repo": "ibmruntimes/ci-jenkins-pipelines",
"url": "https://github.com/ibmruntimes/ci-jenkins-pipelines/pull/223"
}
|
gharchive/pull-request
|
Pass OpenJCEPlus Branch to the build scripts
Also add ability to override openjceplus branch at build launch
Test cases
00:00:21.655 "PUBLISH_NAME": "jdk-<IP_ADDRESS>.2+9_openj9-0.44.0-rc2",
...
00:00:46.663 WARNING: OpenJCEPlus Branch cannot be determined based on PUBLISH_NAME. Using default branch: semeru-java11
00:01:02.701 "PUBLISH_NAME": "jdk-11.0.23+9_openj9-0.44.0-rc2",
...
00:01:27.776 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-11.0.23"
00:00:19.644 "PUBLISH_NAME": "jdk-11+9_openj9-0.44.0-rc2",
...
00:00:43.651 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-11"
00:00:15.454 "PUBLISH_NAME": "jdk-<IP_ADDRESS>+9_openj9-0.44.0-rc2",
...
00:00:40.814 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-<IP_ADDRESS>"
|
2025-04-01T06:39:00.453429
| 2024-10-10T00:00:18
|
2577217922
|
{
"authors": [
"Gum-Joe",
"cybercoder-naj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6919",
"repo": "icdocsoc/mad3",
"url": "https://github.com/icdocsoc/mad3/pull/19"
}
|
gharchive/pull-request
|
Docs: add in the mailmerge scaffold & scripts used for MaDs & document their usage
Adds a new mailmerge directory with the scripts used for the MaDs family allocation notification emails in 2024, alongside documentation on how to use them.
@Gum-Joe could you include this folder in .dockerignore just cause we don't need it there?
Fixed in b0ab264b693e0b8f16d1304482dd528436d9a731 @cybercoder-naj
Updates applied, re-review requested
|
2025-04-01T06:39:00.479050
| 2016-09-26T04:16:17
|
179136441
|
{
"authors": [
"chefarov",
"jacky6016",
"moneycat"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6920",
"repo": "ice91/CloudBrush",
"url": "https://github.com/ice91/CloudBrush/issues/5"
}
|
gharchive/issue
|
"Comparison method violates its general contract" when executing the example
Hi,
I tried to run the example on hadoop local mode(Ubuntu 16.04).
But there was an exeption:
Verify Overlap: Exception in thread "main" java.io.IOException: Job failed!
Later I checked brush.details.log. It showed such message:
java.lang.IllegalArgumentException: Comparison method violates its general contract!
    at java.util.TimSort.mergeHi(TimSort.java:868)
    at java.util.TimSort.mergeAt(TimSort.java:485)
    at java.util.TimSort.mergeCollapse(TimSort.java:410)
    at java.util.TimSort.sort(TimSort.java:214)
    at java.util.TimSort.sort(TimSort.java:173)
    at java.util.Arrays.sort(Arrays.java:659)
    at java.util.Collections.sort(Collections.java:217)
    at Brush.VerifyOverlap$VerifyOverlapReducer.reduce(VerifyOverlap.java:252)
    at Brush.VerifyOverlap$VerifyOverlapReducer.reduce(VerifyOverlap.java:1)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
I'm not sure whether it was a hadoop configuration mistake or a java version issue.
But I've tried both java 7 and java 8 and got the same result.
Can you provide the software environment in which your experiments were run?
Same error here. Tested on ubuntu cluster. It used to work on earlier versions, maybe java 6?
uname -a
Linux clu04 3.2.0-120-generic #163-Ubuntu SMP Tue Dec 20 15:12:28 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
hadoop version
Hadoop <IP_ADDRESS>.2.4.2-2
Subversion git@github.com:hortonworks/hadoop.git -r 22a563ebe448969d07902aed869ac13c652b2872
Compiled by jenkins on 2015-03-31T19:40Z
Compiled with protoc 2.5.0
From source with checksum b3481c2cdbe2d181f2621331926e267
This command was run using /usr/hdp/<IP_ADDRESS>-2/hadoop/hadoop-common-<IP_ADDRESS>.2.4.2-2.jar
hadoop log:
Log Type: stderr
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 790
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/14/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/sbarberakis/appcache/application_1486050547626_0009/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Log Type: stdout
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 0
Log Type: syslog
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 26679123
Showing 4096 bytes of 26679123 total. Click here for the full log.
2017-04-21 20:47:15,887 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml_tmp
2017-04-21 20:47:15,919 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009.summary_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009.summary
2017-04-21 20:47:15,946 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml
2017-04-21 20:47:15,965 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009-1492794892265-sbarberakis-MatchPrefix+SRR001665_1_k21_asm.tmp%2F00%2Dpreprocess+-1492796834354-50-0-FAILED-default-1492794896416.jhist_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009-1492794892265-sbarberakis-MatchPrefix+SRR001665_1_k21_asm.tmp%2F00%2Dpreprocess+-1492796834354-50-0-FAILED-default-1492794896416.jhist
2017-04-21 20:47:15,979 INFO [Thread-465] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2017-04-21 20:47:15,981 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Task failed task_1486050547626_0009_r_000008
Job failed as tasks failed. failedMaps:0 failedReduces:1
2017-04-21 20:47:15,985 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://clu01.softnet.tuc.gr:19888/jobhistory/job/job_1486050547626_0009
2017-04-21 20:47:15,998 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2017-04-21 20:47:16,189 INFO [IPC Server handler 23 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1486050547626_0009_r_000034_3. startIndex 0 maxEvents 10000
2017-04-21 20:47:16,326 INFO [Socket Reader #1 for port 38026] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1486050547626_0009 (auth:SIMPLE)
2017-04-21 20:47:16,344 INFO [IPC Server handler 8 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1486050547626_0009_r_9895604650191 asked for a task
2017-04-21 20:47:16,344 INFO [IPC Server handler 8 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1486050547626_0009_r_9895604650191 is invalid and will be killed.
2017-04-21 20:47:16,978 INFO [IPC Server handler 11 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1486050547626_0009_r_000034_3 is : 0.0
2017-04-21 20:47:16,999 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:6 AssignedMaps:0 AssignedReds:37 CompletedMaps:50 CompletedReds:1 ContAlloc:214 ContRel:42 HostLocal:45 RackLocal:5
2017-04-21 20:47:17,000 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://clu01.softnet.tuc.gr:8020 /user/sbarberakis/.staging/job_1486050547626_0009
2017-04-21 20:47:17,016 INFO [Thread-465] org.apache.hadoop.ipc.Server: Stopping server on 38026
2017-04-21 20:47:17,016 INFO [IPC Server listener on 38026] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 38026
2017-04-21 20:47:17,016 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2017-04-21 20:47:17,016 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
brush.details.log.txt
I just tried with jdk1.6.0_91, same result
Hi,
According to the documentation we could find right now, the error "Comparison method violates its general contract" comes from the API changes in Java 7 (or since a specific version of Java 6).
Please try to add the following arguments while running CloudBrush under Hadoop.
-javaopts -Djava.util.Arrays.useLegacyMergeSort=True
The original Hadoop environment we used to build the program is Hadoop 0.20.203 with Java 1.6 (the exact version is gone, sorry about that). On the other hand, the JAR file attached to the project is a RUNNABLE JAR built in that specific environment. We strongly suggest users rebuild the JAR file from source code according to the Hadoop version they choose, to avoid incompatibilities from the API changes between Hadoop 1 and 2.
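Note that the failing sort runs inside the reduce task JVMs, so the flag must reach the child JVMs, not just the client JVM. On Hadoop 1.x-era setups this was typically done via mapred.child.java.opts (hedged: the exact property name depends on your Hadoop version, and whether CloudBrush forwards -D options depends on its argument parsing):
hadoop jar CloudBrush.jar \
  -Dmapred.child.java.opts="-Djava.util.Arrays.useLegacyMergeSort=true" \
  -reads Ec10k -asm Ec10k_Brush -k 21 -readlen 36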
Sincerely,
Antony.
Hi Antony,
Thanks for your reply.
Running:
hadoop jar CloudBrush.jar -javaopts -Djava.util.Arrays.useLegacyMergeSort=True -reads Ec10k -asm Ec10k_Brush -k 21 -readlen 36
didn't help, but the best way is to rebuild the project indeed.
However, I have failed to do so, since I am not really familiar with the native Java build environment, and there is no additional MANIFEST.MF or any instructions included. Therefore I opened a separate issue about compilation. Any help would be much appreciated.
|
2025-04-01T06:39:00.504909
| 2024-07-02T14:27:44
|
2386422169
|
{
"authors": [
"Neilmagi",
"lucalamoni"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6921",
"repo": "ices-tools-dev/fisheriesXplorer",
"url": "https://github.com/ices-tools-dev/fisheriesXplorer/issues/17"
}
|
gharchive/issue
|
Fix ecoregions map selection
The default ecoregion selected should be the North Sea, and the selection should be visible when the map loads. The map selection needs to be connected to the dropdown selection box as well.
It is also sometimes possible for more than 1 region to be selected in the dropdown box
|
2025-04-01T06:39:00.508351
| 2017-03-27T12:43:07
|
217233241
|
{
"authors": [
"Deathturtle",
"icetee",
"niccolomineo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6922",
"repo": "icetee/remote-ftp",
"url": "https://github.com/icetee/remote-ftp/issues/746"
}
|
gharchive/issue
|
File / folder selection in tree view
Hi, would it be possible to allow multiple selection of files and folders? That would be much needed when cherrypicking remote files / folders for mass downloading.
Thank you.
+1
The new version already supports it.
|
2025-04-01T06:39:00.514345
| 2022-12-17T21:26:47
|
1501607267
|
{
"authors": [
"Indeedornot"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6923",
"repo": "icflorescu/trpc-sveltekit",
"url": "https://github.com/icflorescu/trpc-sveltekit/issues/48"
}
|
gharchive/issue
|
Custom typings or type definition
Hi!
I'm finding that in both my first and second projects I've had issues with transferring the Date type over tRPC. Is there a way to define the type to be Date rather than string?
(Apologies for the autolabel)
It seems that the issue was with how I was integrating superjson into my client, and therefore I wasn't getting correct types.
|
2025-04-01T06:39:00.542691
| 2016-10-30T09:31:09
|
186126053
|
{
"authors": [
"sdukhovni"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6924",
"repo": "ichung/kataomoi",
"url": "https://github.com/ichung/kataomoi/issues/3"
}
|
gharchive/issue
|
Communicate errors and warnings to user as appropriate
Instead of just spewing everything into the console (which we should still do), we should figure out what to display to the user when various error/warning conditions happen.
34df742
|
2025-04-01T06:39:00.553023
| 2024-11-15T10:07:26
|
2661497824
|
{
"authors": [
"bishalbikram",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6925",
"repo": "icon-project/intent-contracts",
"url": "https://github.com/icon-project/intent-contracts/pull/9"
}
|
gharchive/pull-request
|
Changelog Entry
version: <log entry>
Checklist:
[ ] I have performed a self-review of my own code
[ ] I have documented my code in accordance with the documentation guidelines
[ ] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have run the unit tests
[ ] I only have one commit (if not, squash them into one commit).
[ ] I have a descriptive commit message that adheres to the commit message guidelines
Please review the CONTRIBUTING.md file for detailed contributing guidelines.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 89.51%. Comparing base (c7dae72) to head (3ecba4e).
Report is 3 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #9 +/- ##
============================================
- Coverage 89.62% 89.51% -0.12%
Complexity 77 77
============================================
Files 39 39
Lines 2275 2307 +32
Branches 37 37
============================================
+ Hits 2039 2065 +26
- Misses 219 225 +6
Partials 17 17
Flag | Coverage Δ
solidity | 86.11% <ø> (-1.39%) :arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
see 4 files with indirect coverage changes
|
2025-04-01T06:39:00.557064
| 2021-09-08T14:51:19
|
991227585
|
{
"authors": [
"ICONationDevTeam",
"sink772"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6926",
"repo": "icon-project/javaee-unittest",
"url": "https://github.com/icon-project/javaee-unittest/issues/4"
}
|
gharchive/issue
|
Implicit int cast to BigInteger during interscore calls isn't supported
In the current goloop version, a method with an int return type from a foreign SCORE is implicitly converted to BigInteger through an interscore call.
Related issue talking about this behavior : https://github.com/icon-project/goloop/issues/65
However, that behavior isn't implemented in javaee-unittest and makes the tests fail with a ClassCastException:
this.decimals = ((BigInteger) Context.call(token_addr, "decimals")).intValue();
That line fails to run because the "decimals" method from IRC2 natively returns an int. The unittest package uses the int return type instead of the BigInteger one.
How hard would it be to fix javaee-unittest in order to copy the goloop engine behavior ?
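A defensive pattern on the caller side, pending such a fix (a sketch that should work both on-chain and under the unit-test framework):
import java.math.BigInteger;

// normalize whatever numeric type the inter-score call returns
private static BigInteger toBigInteger(Object value) {
    if (value instanceof BigInteger) {
        return (BigInteger) value;
    }
    if (value instanceof Integer) {
        return BigInteger.valueOf((Integer) value);
    }
    throw new IllegalArgumentException("Unexpected return type: " + value);
}

// usage:
// this.decimals = toBigInteger(Context.call(token_addr, "decimals")).intValue();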
You might have noticed, I have modified the IRC2 javaee-tokens RI for decimals to have a return type of BigInteger.
https://github.com/sink772/javaee-tokens/commit/a4cad6c4c98a5307fd5a0b7e12e5e032cf64146e
This is just a workaround solution, but I believe you don't need to use int type here if you want to get the value via inter-call.
I think it's better to use BigInteger in all cases for external readonly methods.
|
2025-04-01T06:39:00.574204
| 2016-10-26T06:54:40
|
185311153
|
{
"authors": [
"Rpinski",
"biliciburak"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6927",
"repo": "icsharpcode/RefactoringEssentials",
"url": "https://github.com/icsharpcode/RefactoringEssentials/issues/261"
}
|
gharchive/issue
|
VS15 Update 3 Support
Hi,
It's said that version <IP_ADDRESS> works under at least VS15 Update 2; though I've downloaded and installed Update 3, it still does not work.
It's visible under Tools => Extensions and Updates, but I cannot use it at all. Is it a bug or am I missing something?
Thanks.
Please try with the recently released VS 2017 RC. From what I see, RE 4.4 works there. If not, please reopen this issue.
Thanks for your reply, but I don't have a chance to try it in VS17 RC; I need it to work on VS15 Update 3. But I guess it won't be possible.
|
2025-04-01T06:39:00.580652
| 2013-12-29T17:24:40
|
24862859
|
{
"authors": [
"Numpsy",
"bastianeicher",
"dgrunwald",
"piksel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6928",
"repo": "icsharpcode/SharpZipLib",
"url": "https://github.com/icsharpcode/SharpZipLib/issues/40"
}
|
gharchive/issue
|
Create/Extract NTFS Extra Data field for Timestamp etc
SD-1800, originally created on 12/20/2010 01:58:10 by David Pierson
We should have the ability to create the NTFS Extra Data field (0x000a)
via an option in FastZip, and an easy way to create it in ZipFile and
ZipOutputStream.
This would hold the NTFS Last Modified timestamp, avoiding the problem
of the 2 second granularity of the standard entry timestamp.
Relevant forum threads:
http://community.sharpdevelop.net/forums/t/12411.aspx
http://community.sharpdevelop.net/forums/t/14178.aspx
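For orientation, here is a minimal sketch (in TypeScript, not SharpZipLib's C#; the function name and shape are illustrative) of the 0x000A field layout per the PKWARE APPNOTE: a 4-byte reserved area followed by tagged attributes, where attribute tag 0x0001 carries three little-endian FILETIME values (mtime, atime, ctime) as 100-ns ticks since 1601-01-01:

// Parse NTFS timestamps (extra field 0x000A) out of a ZIP entry's raw extra data.
const FILETIME_EPOCH_OFFSET_MS = 11_644_473_600_000; // 1601-01-01 -> 1970-01-01

function filetimeToDate(ticks: bigint): Date {
  return new Date(Number(ticks / 10_000n) - FILETIME_EPOCH_OFFSET_MS);
}

function parseNtfsTimestamps(extra: Uint8Array): { mtime: Date; atime: Date; ctime: Date } | null {
  const v = new DataView(extra.buffer, extra.byteOffset, extra.byteLength);
  let off = 0;
  while (off + 4 <= extra.length) {
    const headerId = v.getUint16(off, true);
    const dataSize = v.getUint16(off + 2, true);
    if (headerId === 0x000a) {
      let p = off + 4 + 4; // skip the 4 reserved bytes
      const end = off + 4 + dataSize;
      while (p + 4 <= end) {
        const tag = v.getUint16(p, true);
        const size = v.getUint16(p + 2, true);
        if (tag === 0x0001 && size >= 24) { // tag 1 holds mtime, atime, ctime
          return {
            mtime: filetimeToDate(v.getBigUint64(p + 4, true)),
            atime: filetimeToDate(v.getBigUint64(p + 12, true)),
            ctime: filetimeToDate(v.getBigUint64(p + 20, true)),
          };
        }
        p += 4 + size;
      }
    }
    off += 4 + dataSize;
  }
  return null; // no NTFS extra field present
}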
I see that there is already an 'NTTaggedData' class in the extra data handling that appears to handle the NTFS extra data block, but the only place it's referenced from the library is the #ifdef'ed-out block at:
https://github.com/icsharpcode/SharpZipLib/blob/fed3bd219f8bd2bac5287b6217b9c4c384bed35f/src/ICSharpCode.SharpZipLib/Zip/ZipEntry.cs#L1103
There is a comment there about disabling it by default to match InfoZip, but I'm not sure what the logic should be at this point - is there a reason not to use the data if present in the zip? (Or maybe restrict it to extraction on Windows hosts?)
No idea. I have had it on my agenda to look through the extra data fields...
@Numpsy I added that #if RESPECT_NT_TIMESTAMP a long time ago for an admittedly rather specific use case:
Zero Install is a cross-platform package manager. The Linux version uses CLIs like tar and unzip (InfoZIP) to extract archives. Zero Install for Windows uses SharpZipLib. After extracting archives (ZIP, TAR, etc.) both versions verify the files using checksums which include the timestamps. Therefore I needed SharpZipLib to produce identical timestamps to InfoZIP.
Perhaps a better, although still rather clumsy, alternative to the #if block might be a public static bool config toggle.
I'm not sure what the default should be as far as extraction goes (as far as FastZip goes, does it need an additional option on top of 'restore timestamps'? I don't recall apps like 7-zip offering that).
Saying that, I'm not sure what's supposed to happen if an entry contains both NTFS and Unix extra data fields (I don't think anything actually prevents both from existing, even if that would be unusual?).
|
2025-04-01T06:39:00.582722
| 2024-09-30T06:01:43
|
2555641905
|
{
"authors": [
"MinhxNguyen7",
"nav800"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6929",
"repo": "icssc/AntAlmanac",
"url": "https://github.com/icssc/AntAlmanac/pull/1009"
}
|
gharchive/pull-request
|
Fix: Default Colors deplete too Rapidly
Summary
This fix adjusts color selection by treating the earliest used default color as unused and reusing it if every default color is in use.
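A minimal sketch of that rule (assumed names and palette, not the actual AntAlmanac code):

// Prefer an unused default color; when every default is taken, treat the
// earliest-used one as free again and reuse it.
const DEFAULT_COLORS = ['#ff6633', '#3366ff', '#33cc33', '#cc33cc']; // hypothetical palette

function pickDefaultColor(usedInOrder: string[]): string {
  // usedInOrder lists the default colors currently in use, oldest first.
  const unused = DEFAULT_COLORS.find((c) => !usedInOrder.includes(c));
  if (unused !== undefined) return unused;
  // All defaults are in use: reuse the earliest-used default color.
  return usedInOrder.find((c) => DEFAULT_COLORS.includes(c)) ?? DEFAULT_COLORS[0];
}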
Test Plan
Ensure that the assignment works properly when classes are deleted out of order.
Ensure assignment works when users change the colors of courses, particularly to a default color.
Issues
Closes #647
This has been resolved by #1006
|
2025-04-01T06:39:00.591096
| 2021-09-18T08:33:14
|
999971978
|
{
"authors": [
"icy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6930",
"repo": "icy/genvsub",
"url": "https://github.com/icy/genvsub/issues/5"
}
|
gharchive/issue
|
Some latest build may return 0 when using with --help
With my old build, --help will return 2. Somehow it returns zero now (and the tests fail).
Behavior changed due to upstream fix: https://github.com/golang/go/commit/dcf0929de6a12103a8fd7097abd6e797188c366d and https://github.com/golang/go/issues/37533
|
2025-04-01T06:39:00.593241
| 2023-04-09T09:15:15
|
1659809048
|
{
"authors": [
"devnote-dev",
"straight-shoota"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6931",
"repo": "icyleaf/markd",
"url": "https://github.com/icyleaf/markd/issues/57"
}
|
gharchive/issue
|
Markdown renderer
A renderer for markdown (markdown -> markdown, instead of just markdown -> HTML). This would be useful if you just want to insert/edit/remove parts of a markdown document (e.g. setting a default language for fenced code blocks).
Just a note on that use case: Markdown syntax has quite a lot of ambiguities and alternative styles to express the same thing. So the process parse -> mutate AST -> stringify can introduce syntactic changes besides the ones that you actually intend to make. Maybe this doesn't matter much for your use case, but it's an annoyance (at least) for many applications.
To avoid this you would need some way to convey information about syntax variations from the parser to the stringifier. Currently the AST is not capable of that.
I see what you mean; for my use case I'm setting the default language for doc comments in Crystal to crystal, but that might cause issues if someone uses a tilde-based code block instead of a backtick one. At the same time, I'm not sure it's worth implementing support for the variants, as that specific edge case is unlikely.
|
2025-04-01T06:39:00.679433
| 2017-04-06T16:26:13
|
219955506
|
{
"authors": [
"PaulTalbot-INL"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6932",
"repo": "idaholab/raven",
"url": "https://github.com/idaholab/raven/issues/69"
}
|
gharchive/issue
|
Underdefined inputs while using PCA
When using the PCA transform for sampling, the user can specify the number of transform variables to use.
If the user chooses too few transform variables, there may not be enough to provide values to all of the original input space, resulting in zeroes for all under-represented original-space variables.
This can result in nonsensical inputs being provided to the model (like a total cross section of zero).
We don't have anything in place to warn the user that this phenomenon is happening.
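A toy illustration of the phenomenon (not RAVEN code; the loading matrix is made up): reconstructing the original space from only k of the transform variables leaves at zero any variable that loads exclusively on the dropped components.

// Hypothetical 3x3 loading matrix whose third row only loads on component 3.
const loadings = [
  [0.9, 0.1, 0.0],
  [0.1, 0.9, 0.0],
  [0.0, 0.0, 1.0],
];

function reconstruct(latent: number[], k: number): number[] {
  // Use only the first k latent variables; the dropped ones are implicitly zero.
  return loadings.map((row) =>
    row.slice(0, k).reduce((acc, w, j) => acc + w * latent[j], 0)
  );
}

console.log(reconstruct([1.0, 2.0, 3.0], 3)); // [1.1, 1.9, 3.0]
console.log(reconstruct([1.0, 2.0, 3.0], 2)); // [1.1, 1.9, 0.0] <- third input collapses to zero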
For Change Control Board: Issue Review
This review should occur before any development is performed as a response to this issue.
[ ] 1. Is it tagged with a type: defect or improvement?
[ ] 2. Is it tagged with a priority: critical, normal or minor?
[ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
[ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
[ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
For Change Control Board: Issue Closure
This review should occur when the issue is imminently going to be closed.
[ ] 1. If the issue is a defect, is the defect fixed?
[ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
[ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
[ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)?
[ ] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
From wangc on gitlab:
Currently, we do not have error checking for that. I think if the PCA transformation leads to unrealistic inputs, the application code should catch that. This is because the requirements on inputs are usually determined by the application code, not RAVEN. In addition, if the user chooses too few transform variables, one thing we can do is implement an a posteriori error check, but this needs further discussion. I think this issue is related to the PCA method itself, not the algorithm. I suggest changing the label from defect to improvement. @talbpaul
|
2025-04-01T06:39:00.686716
| 2016-09-10T11:02:02
|
176174171
|
{
"authors": [
"HananeAlSamrout",
"ide"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6933",
"repo": "ide/react-native-button",
"url": "https://github.com/ide/react-native-button/issues/48"
}
|
gharchive/issue
|
cannot find React
please modify this line:
import { Component, PropTypes } from 'react';
to
import React,{ Component, PropTypes } from 'react';
in Button.js
It already says that: https://github.com/ide/react-native-button/blob/f72b1c2596b21bed9e93e634d9f7a6d5fd91e797/Button.js#L1
|
2025-04-01T06:39:00.691391
| 2023-07-24T22:31:54
|
1819243465
|
{
"authors": [
"shayant98",
"viktasidenfy"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6934",
"repo": "idenfy/FlutterSDK",
"url": "https://github.com/idenfy/FlutterSDK/issues/12"
}
|
gharchive/issue
|
Issue with lottie on IOS
Hi, I'm facing an issue related only to the iOS build.
dyld[39830]: Symbol not found: _$s6Lottie0A18BackgroundBehaviorO15pauseAndRestoreyA2CmFWC
Referenced from: <82425C10-5ADE-391C-A40A-5AC87F605028> /Users/shayantsital/Library/Developer/CoreSimulator/Devices/890D148E-BEDE-4EBC-A711-88FE6B6A8ADC/data/Containers/Bundle/Application/A324E85D-E968-4B06-8BDB-6F4C6CF9BC77/Runner.app/Frameworks/idenfyviews.framework/idenfyviews
Expected in: <0A8C52DF-5A1D-36CA-9680-C07B5A10339D> /Users/shayantsital/Library/Developer/CoreSimulator/Devices/890D148E-BEDE-4EBC-A711-88FE6B6A8ADC/data/Containers/Bundle/Application/A324E85D-E968-4B06-8BDB-6F4C6CF9BC77/Runner.app/Frameworks/Lottie.framework/Lottie
Message from debugger: Terminated due to signal 6
I get this error every time I try to run the app (iOS); the runner fails immediately after the Xcode build.
I even added the post-install script
post_install do |installer|
installer.pods_project.targets.each do |target|
if target.name == "ZIPFoundation" || target.name == "lottie-ios"
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
if target.name == "idenfy_sdk_flutter"
target.build_configurations.each do |config|
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
config.build_settings['ENABLE_BITCODE'] = 'NO'
end
end
target.build_configurations.each do |config|
config.build_settings['ENABLE_BITCODE'] = 'NO'
if Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET']) < Gem::Version.new('11.0')
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
end
config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
'$(inherited)',
# dart: PermissionGroup.camera
'PERMISSION_CAMERA=1',
# dart: PermissionGroup.photos
'PERMISSION_PHOTOS=1',
'PERMISSION_PHOTOS_ADD_ONLY=1',
# dart: [PermissionGroup.location, PermissionGroup.locationAlways, PermissionGroup.locationWhenInUse]
'PERMISSION_LOCATION=1',
# dart: PermissionGroup.mediaLibrary
'PERMISSION_MEDIA_LIBRARY=1'
]
end
flutter_additional_ios_build_settings(target)
end
end
But running with or without the above code results in the same error in Xcode.
pod version
pod 'iDenfySDK/iDenfyLiveness', '8.1.0'
It seems to be a pod-related issue. Have you updated the SDK to a newer version, or is it a clean new install?
Maybe a clean pod deintegration and install would help.
Please, try out our sample project in this repository and let us know if the issue persists.
I've tried deintegration and also removed and reinstalled all pods twice, but I still get the error.
Please try out our sample project and check if the issue can be replicated on your setup.
Hi,
I've tried the sample project and it did work, so what I did was replace the Podfile with the one in the example. After validating, it seemed to work in our app.
Thank you!
|
2025-04-01T06:39:00.734901
| 2016-04-04T14:52:12
|
145707498
|
{
"authors": [
"HectorLS",
"aight8",
"idiotWu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6935",
"repo": "idiotWu/smooth-scrollbar",
"url": "https://github.com/idiotWu/smooth-scrollbar/issues/3"
}
|
gharchive/issue
|
Additional Scrolling Methods
Inspired by https://github.com/yiminghe/dom-scroll-into-view
You can't use this library since smooth-scrollbar uses animations for scrolling.
The only additional functionality needed is an instance method that accepts a node with some useful parameters to improve the user experience (onlyScrollIfNeeded, offsetTop, etc. -> see dom-scroll-into-view).
It would be very cool if this library supported this!
You mean I should add an instance method that acts like scroll-into-view? In my view this should be done by users: you get the element and measure its position, then scroll to it through the instance#scrollTo method. I want this library to focus on scrolling itself.
Yeah, I'm currently trying to figure out how to handle this.
It's a little bit complicated and messes up my component with scrolling logic.
The Element.scrollIntoView method supported by the browser doesn't work here, so the scrollbar component should take over this job somehow, IMO.
For example: how do you want to solve anchor problems? You have to work with different data and write a lot of code:
scrollbar.getSize() -> .content and .container
targetNode.offsetTop/offsetLeft
Additionally, there is no method to fetch the current scrolling offset through the instance API. I listen with the "addListener" method and save the data from the event somewhere temporarily.
Anchors are a big problem that I don't know how to solve properly; as for the current offset, you can get it from the instance.offset property (sorry for the lack of API docs).
@aight8 Hey buddy, I've added scrollIntoView() method support in version 5.3.0; check the documents here and give me feedback :)
Sorry, I didn't see that post.
Nice! I found some differences from my current implementation, which has slightly different behavior.
When the target element is fully inside of the container: do nothing
When the target element is partially outside of the container: Scroll the outside part into the container + offsetBottom px
Same for the top but then use offsetTop as padding px.
When the alignWithTop option is set then try to align the target element to the top + offsetTop px. (when possible) That's useful for anchors.
This code is only for vertical scrolling (but that was all I needed):
let scrollbar = this.getInstance();
const offsetTop = config.offsetTop || 0;
const offsetBottom = config.offsetBottom || 0; // was config.offsetTop (copy-paste bug)
const alignWithTop = config.alignWithTop || false;
let nodeOffset = {
y: node.offsetTop
};
let sbOffset = scrollbar.offset;
let { container } = scrollbar.getSize();
let nodeRect = node.getBoundingClientRect();
let nodeRectStart = nodeOffset.y;
let nodeRectEnd = nodeRectStart + nodeRect.height;
let vpRectStart = sbOffset.y;
let vpRectEnd = vpRectStart + container.height;
let newScrollbarY;
if (alignWithTop === true) {
newScrollbarY = node.offsetTop - offsetTop;
} else {
if (nodeRectStart < (vpRectStart + offsetTop)) {
// element it partially out of view on top OR to little top offset
newScrollbarY = node.offsetTop - offsetTop;
}
if (nodeRectEnd > (vpRectEnd - offsetBottom)) {
// element it partially out of view on bottom OR to little bottom offset
newScrollbarY = nodeRectEnd - container.height + offsetBottom;
}
}
if (newScrollbarY !== undefined) { // 0 is a valid scroll target, so don't test truthiness
scrollbar.scrollTo(0, newScrollbarY, 200);
}
Hmm... I checked the Element.scrollIntoViewIfNeeded documentation again; it appears that I misunderstood the behavior.
I can, however, do as much as the dom-scroll-into-view plugin does. But I'm afraid this scrollbar plugin would become so heavy that I'd have to call it a library.
I am busy with my school courses currently; if you insist that we should fully support this feature, you can make some PRs and I'll check them once I have time :)
Thanks in advance.
I have the same problem/needs, and this functionality would be awesome!
|
2025-04-01T06:39:00.748285
| 2021-08-21T11:20:10
|
976114067
|
{
"authors": [
"Z-snails",
"joelberkeley"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6936",
"repo": "idris-community/inigo",
"url": "https://github.com/idris-community/inigo/pull/30"
}
|
gharchive/pull-request
|
format docs for idrall
Cosmetic tweaks to the Idrall docs
Let me know if you'd prefer the old explicit hyperlinks
Thanks @joelberkeley!
|
2025-04-01T06:39:00.785082
| 2020-09-15T03:12:05
|
701571014
|
{
"authors": [
"LicharYuan",
"MingLin-home"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6937",
"repo": "idstcv/GPU-Efficient-Networks",
"url": "https://github.com/idstcv/GPU-Efficient-Networks/issues/6"
}
|
gharchive/issue
|
Some questions about searching
Hi, first of all, thanks for this interesting work. I have some questions about the searching method and results in this paper.
Could I understand the least-squares regression as this function: f(delta(depth), delta(width)) = delta(A)? Is the situation of different kernel sizes and expand ratios considered in the "Distillation" process?
In your settings, the search space of kernel size is {3,5}, the DW-Block expand ratio is {3,6,9}, and the BL-Block expand ratio is {0.5, 0.25}. However, in the GENet result the kernel sizes are all 3, the DW-Block expand ratios are all 3, and the BL-Block expand ratios are all 0.25. This phenomenon is really weird. I'm looking forward to more analysis here.
Thx.
Hi LicharYuan,
Thanks for the feedback!
For Q1, yes, you are right. We treat different kernel sizes and different expansion ratios as different block types, which means that each has its own least-squares regression coefficients.
For Q2, we agree that the searching results are surprising to us too. Our conjecture is that k=3 is better optimized in CUDA. For comparison, we trained many k=5 networks but did not obtain better results. For BL-Blocks, the expansion ratios are all 0.25. We believe there should be some connection between our searching results and the manually designed ones, which also use 0.25 (in most networks). In our unpublished experiments, we tried to manually set ratio=1/6 in BL-Blocks but never obtained better results. So this phenomenon is not simply due to search bias but to some deeper, unknown reason that makes 0.25 a preferred value.
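For readers unfamiliar with the setup, a generic per-block-type least-squares fit of delta(A) against (delta(depth), delta(width)) might look like the following sketch (illustrative only; the names are assumptions, not the paper's code):

type Sample = { dDepth: number; dWidth: number; dA: number };

// Fit dA ~ a*dDepth + b*dWidth + c via the normal equations (one fit per block type).
function fitLeastSquares(samples: Sample[]): { a: number; b: number; c: number } {
  const XtX = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  const Xty = [0, 0, 0];
  for (const s of samples) {
    const x = [s.dDepth, s.dWidth, 1];
    for (let i = 0; i < 3; i++) {
      Xty[i] += x[i] * s.dA;
      for (let j = 0; j < 3; j++) XtX[i][j] += x[i] * x[j];
    }
  }
  // Naive Gauss-Jordan elimination; assumes a well-conditioned system.
  const M = XtX.map((row, i) => [...row, Xty[i]]);
  for (let col = 0; col < 3; col++) {
    const pivot = M.findIndex((row, r) => r >= col && Math.abs(row[col]) > 1e-12);
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < 3; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let k = col; k < 4; k++) M[r][k] -= f * M[col][k];
    }
  }
  const [a, b, c] = M.map((row, i) => row[3] / row[i][i]);
  return { a, b, c };
}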
Yes, all k=3 is reasonable. I think the result may depend on your searching method. Have you ever tried manually changing one or two blocks' parameters? For example, change the first XX-Block's kernel size to 5 and its channels to 40 to compensate for the latency. Is that result still worse?
Have you ever tried manually changing one or two blocks' parameters?
We did not manually change one or two blocks in our structures. We usually change all blocks in some principled way, which is easier to explore.
OK. It sounds like the searching method may not be stable when the search space becomes larger...
Actually, in my experiments a search space with prior knowledge does help the search.
Looking forward to your future work!
|
2025-04-01T06:39:00.837837
| 2024-11-25T15:43:00
|
2691232741
|
{
"authors": [
"ieedan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6938",
"repo": "ieedan/jsrepo",
"url": "https://github.com/ieedan/jsrepo/pull/156"
}
|
gharchive/pull-request
|
feat: private repository support 🎉
Adds support for private repositories 🎉
Fixes #155
TODO
[x] Improve init command with a prompt for a token
[x] Update docs
[x] CLI Docs
[x] Add new private repositories documentation
Stackblitz previews don't seem to allow storage of tokens. But it is working for me so I think we are all good.
|
2025-04-01T06:39:00.849433
| 2022-02-07T18:18:04
|
1126336832
|
{
"authors": [
"andrew2net",
"opoudjis",
"rjsparks",
"ronaldtse",
"strogonoff"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6939",
"repo": "ietf-ribose/bibxml-service",
"url": "https://github.com/ietf-ribose/bibxml-service/issues/112"
}
|
gharchive/issue
|
Computation of "place" is suspect
at https://dev.bibxml.org/get-one/by-docid/?doctype=IETF&docid=I-D sparks-sipcore-multiple-reasons&query=draft-sparks-sipcore-multiple-reasons&query_format=docid_regex&page=1
the data contains "places" pointing to Fremont, CA.
That is utterly wrong.
I'm guessing there's some attempt to map the knowledge that I work for AMS and AMS headquarters are in Fremont as having anything at all to do with this draft. It does not. Including this information is misleading at best.
If I've guessed correctly at the algorithm that added this, please remove that algorithm. If there's no explicit place declared in the document, do not try to derive one.
@rjsparks The "Fremont, CA" entry applies to the publisher's address, and it came from the assumption that the IETF is the publisher and based where AMS is based.
What is the correct "publisher place" for IETF? Should the "publisher place" be omitted?
Ping @andrew2net the issue refers to this line:
https://github.com/ietf-ribose/relaton-data-ids/blob/main/data/draft-sparks-sipcore-multiple-reasons-00.yaml
Yes, that line should be omitted (that semantic was very surprising to me).
The Relaton spec allows contacts to be specified as a contributor property, which seems more appropriate… Filed https://github.com/ietf-ribose/relaton-data-ids/issues/12, since this looks like an issue with source data.
@strogonoff in accordance with the other ticket let's remove "publisher place" for IETF. Thanks.
@strogonoff in accordance with the other ticket let's remove "publisher place" for IETF. Thanks.
Are we hiding Relaton’s “place” property for all bibliographic items shown by BibXML service, but leaving the place in source YAML?
For RFC and I-D bibliographic items, we should omit “place” entirely (this is a data source issue).
For other bibliographic items such as for 3GPP or IEEE, the “place” should still be used.
Then this looks like a data source change (cc @andrew2net). Should something be filed in relaton-data-* repositories?
I will not modify the codebase and this could be closed when data sources are reindexed.
For other bibliographic items such as for 3GPP or IEEE, the “place” should still be used.
@ronaldtse what places should be added for 3GPP, IEEE and others?
@andrew2net I believe Ronald means any existing place should be kept for others, but for Internet-Drafts the top-level place should be removed (and probably for RFCs and rfcsubseries too).
I believe Ronald means any existing place should be kept for others (no change), but for Internet-Drafts the top-level place should be removed (and probably for RFCs and rfcsubseries too).
Indeed. Let's remove "Place" for RFCs, RFC subseries, and Internet-Drafts:
https://github.com/relaton/relaton-ietf/issues/64
We probably should "add" Places for 3GPP and IEEE but those are separate tickets:
https://github.com/relaton/relaton-3gpp/issues/8
https://github.com/relaton/relaton-ieee/issues/15
@strogonoff @ronaldtse there weren't other places; all the documents fetched by relaton-ietf had the place "Fremont, CA". I removed it for now.
After reindexing the place should be gone, but the async indexing task is stuck, so I'll wait for that to be resolved (filed in https://github.com/ietf-ribose/bibxml-infrastructure/issues/17).
Just letting you know that OGC expects to see a place of publication for standards, and will now not have one.
@opoudjis OGC will just need to deal with the lack of place for RFC/RFC subseries/Internet-Drafts. In any case, the lack of publication place is supported by ISO 690.
The place is no longer shown, since sources were reindexed: https://dev.bibxml.org/get-one/by-docid/?doctype=IETF&docid=I-D ietf-lamps-cmp-algorithms
|
2025-04-01T06:39:00.870091
| 2023-05-26T14:43:25
|
1727763350
|
{
"authors": [
"jennifer-richards",
"rjsparks"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6940",
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/issues/5696"
}
|
gharchive/issue
|
Clean up submission status page after async validation fails
Describe the issue
If a submission fails validation in a way that prevents metadata extraction (e.g., due to a title mismatch as in #5691), the submission status page shows a lot of loud warnings about metadata problems. These are irrelevant because the metadata simply weren't populated. Worse, they distract from the event history at the bottom of the page which has the only hint at what specifically went wrong.
This needs to be cleaned up to show only relevant information and, ideally, to more emphatically describe why the draft was rejected.
Code of Conduct
[X] I agree to follow the IETF's Code of Conduct
Kind of a special case of #4346
Another special case that triggered this was a draft containing SVG artwork including the x attribute on the <svg> element. As I understand it, this is not allowed by the RFC SVG schema. The result is this bug being triggered. The backend logs show a more helpful message that should be relayed to the user:
ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2183, in validate
self.v3_rng.assertValid(tree)
File "src/lxml/etree.pyx", line 3643, in lxml.etree._Validator.assertValid
lxml.etree.DocumentInvalid: Invalid attribute x for element svg, line 675
and eventually the less specific (but still better than current)
xml2rfc.writers.base.RfcWriterError: [draft path removed]: Error: Invalid document before running preptool.
Yet another: a draft came in with <artwork [...] src="art/something.ascii-art"></artwork>. This resulted in no actual error message in the log, just a useless "see above" message. Running it through xml2rfc revealed errors about the src file not existing. From the user's perspective, again, the draft just failed.
<rfc category="std" ipr="trust200902" consensus="true" submissionType="IETF" docName="draft-somebody-did-something-00.txt">
fails because of the .txt in docName, but the result on the UI is inscrutable.
A file failed submission because of the version="" in the header. This is accepted without comment by xml2rfc, but causes the following error in the celery logs:
[2023-08-22 07:09:25,002: WARNING/ForkPoolWorker-2] ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/workspace/ietf/submit/utils.py", line 1266, in process_and_validate_submission
render_missing_formats(submission) # makes HTML and text, unless text was uploaded
File "/workspace/ietf/submit/utils.py", line 948, in render_missing_formats
xmltree.tree = prep.prep()
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 219, in prep
tree = self.dispatch(self.selectors)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1925, in dispatch
func(e, e.getparent())
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2158, in validate_before
self.die(self.root, 'Expected <rfc> version="3", but found "%s"' % version)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1837, in die
raise RfcWriterError(msg)
xml2rfc.writers.base.RfcWriterError: /a/www/www6s/staging/draft-thomy-json-ntv-00.xml(12): Error: Expected <rfc> version="3", but found "0"
Changing the attribute to version="3" also causes it to fail, this time with
[2023-08-22 07:11:03,164: WARNING/ForkPoolWorker-2] ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2121, in validate
self.v3_rng.assertValid(tree)
File "src/lxml/etree.pyx", line 3643, in lxml.etree._Validator.assertValid
lxml.etree.DocumentInvalid: Did not expect text in element rfc content, line 12
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workspace/ietf/submit/utils.py", line 1266, in process_and_validate_submission
render_missing_formats(submission) # makes HTML and text, unless text was uploaded
File "/workspace/ietf/submit/utils.py", line 948, in render_missing_formats
xmltree.tree = prep.prep()
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 219, in prep
tree = self.dispatch(self.selectors)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1925, in dispatch
func(e, e.getparent())
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2159, in validate_before
if not self.validate('before'):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 173, in validate
return super(PrepToolWriter, self).validate(when='%s running preptool'%when, warn=warn)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2153, in validate
self.die(self.root, 'Invalid document%s.' % (when, ))
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1837, in die
raise RfcWriterError(msg)
xml2rfc.writers.base.RfcWriterError: /a/www/www6s/staging/draft-thomy-json-ntv-00.xml(12): Error: Invalid document before running preptool.
Changing the attribute to version="2" fixes the problem (and presumably is the correct value for the structure of the XML).
As noted on #6221, that PR improves the status page to be less confusing after an error occurs, but we should do a better job actually handling the errors we've gathered here before we close this.
It would be good to be more transparent about what the failures actually were.
It would be good to be more transparent about what the failures actually were.
Yes. There's some existing code that grabs exception messages and logs them in the event history. If we can get the actual errors into those event descriptions, we'll be almost there. The failures here are hitting generic handling. I'm hoping we can do this in a way that doesn't turn into a lot of special cases...
#6158 related item that should be reflected clearly on the submission status page.
I've renamed this ticket to be more specific about what it's evolved to track.
#7107 is another case that should be addressed
|
2025-04-01T06:39:00.872408
| 2024-06-10T16:35:51
|
2344374159
|
{
"authors": [
"paulwouters",
"rjsparks"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6941",
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/issues/7519"
}
|
gharchive/issue
|
add a "preview" option to the "edit charter" page
Description
add a "preview" option to the "edit charter" page so we can verify markdown/txt conversion to html before submitting a charter. This avoids silly revisions to be created by incompetent ADs like me :)
Code of Conduct
[X] I agree to follow the IETF's Code of Conduct
This should be provided wherever we accept markdown when someone is logged in.
|
2025-04-01T06:39:00.880417
| 2021-06-20T09:25:48
|
925557849
|
{
"authors": [
"SpencerDawkins",
"robUx4"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6942",
"repo": "ietf-wg-cellar/ebml-specification",
"url": "https://github.com/ietf-wg-cellar/ebml-specification/pull/413"
}
|
gharchive/pull-request
|
generate a new document for corrections
The format is simpler than the one in RFC 8540. If needed I can update the format.
We may also add ebml.xml as an annex, making it normative. But it's more work than necessary, possibly leading to new fixes...
Fixes #412
We need to create a new entry in the CELLAR documents https://datatracker.ietf.org/wg/cellar/documents/
@robUx4, thanks for starting this work. I'm not good at generating the EBML specifications, but I did poke around a bit.
If I could make a couple of suggestions (which you may have already taken care of - my apologies, if so):
I'd suggest using draft-ietf-cellar-rfc8794-corrections as the output file name. Unless @mcr has "other thoughts" ("spencer-corrections"?), I don't see that we need to go through the "post as individual and see if the working group adopts it, and then rename" shuffle.
It is going to be much easier to take this draft through the approvals process, at some point in time, if you actually cut-and-paste the text being modified, and then the complete text with corrections, for each correction, using "OLD" and "NEW" to prefix them, just as we would for errata (if we were doing the RFC errata process). If you're correcting a sentence in the middle of a paragraph, show the entire paragraph (OLD and NEW), so that it's obvious to the reader where this text is in the document.
If you can summarize in a sentence or two about why the original text needs to be corrected, that would be very helpful. I see something like that (I think!), so thank you for that.
Just looking at the corrections listed so far - I see that https://www.rfc-editor.org/rfc/rfc8141.html obsoletes RFC 2141, but I'm not seeing that RFC 2141 has been deprecated (unless I'm missing it). Is the goal here to make use of extensions that are allowed in RFC 8141 (https://www.rfc-editor.org/rfc/rfc8141.html#appendix-B), or just to use the current version of the standards-track URN specification? Either way, we should talk about who deprecated RFC 2141, or change the wording to Obsolete.
done
OK, although for the XML Schema it may look a bit odd
will do
#389 mentions deprecated, but it's just obsolete. Maybe we don't really need this change?
Given how RFC errata work, I opted to keep updating the current document with changes, so it can be diff'ed with the current RFC to produce the errata text.
There is now an rfc8794 branch that collects all the errata, so we can regenerate a plain document with these errata applied.
|
2025-04-01T06:39:00.926678
| 2020-10-14T20:43:23
|
721767344
|
{
"authors": [
"chapulina",
"nkoenig"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6943",
"repo": "ignitionrobotics/ign-launch",
"url": "https://github.com/ignitionrobotics/ign-launch/pull/64"
}
|
gharchive/pull-request
|
Merge ign-launch1 to ign-launch2
Signed-off-by: Nate Koenig<EMAIL_ADDRESS>
The ign-gazebo 3.4.0 failed for amd64 on Focal, which is causing the CI here to fail. We should fix that before merging this.
https://build.osrfoundation.org/job/ign-gazebo3-debbuilder/55/console
ign-gazebo 3.5.0 was correctly released for Focal / amd64. The Jenkins Ubuntu build's failure should be fixed by https://github.com/ignition-tooling/release-tools/pull/339.
I think this is good to go!
|
2025-04-01T06:39:00.931751
| 2021-04-05T12:16:54
|
850327826
|
{
"authors": [
"chapulina",
"darksylinc",
"peci1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6944",
"repo": "ignitionrobotics/ign-rendering",
"url": "https://github.com/ignitionrobotics/ign-rendering/issues/300"
}
|
gharchive/issue
|
Specific issue templates including GPU
Desired behavior
Issue templates were first added in #234 and then centralized organization-wide in #250. However, I think that it would be useful to ask the reporters for the GPU and driver versions they use when reporting rendering-related issues. I'm not sure about the best approach, though.
Alternatives considered
Leave it as it is. This requires extra effort from maintainers asking for the used GPUs and leaves some issues in a hard to replicate state when no concrete GPU is mentioned.
Implementation suggestion
If you do not want to maintain two versions of the issue template, you might as well add the GPU-specific section to the general template with a note that it is only needed when reporting rendering-related issues.
The issue template should include easy to follow instructions for getting the requested info - e.g. run glxinfo | head and uname -a or something like that (+ of course some instructions for Windows). If the instructions would get too long, they might be written as an ign-docs tutorial linked from the issue template.
This would also affect ign-sensors, ign-gui and ign-gazebo.
you might as well add the GPU-specific section to the general template with a note that it is only needed when reporting rendering-related issues.
This sounds like a reasonable compromise for now. Mind opening a PR to https://github.com/ignitionrobotics/.github?
Another idea that came to mind was adding a specific issue template for rendering bugs, but that would need to be added to the entire org, and I worry it may cause confusion on repositories that don't touch rendering.
On Linux, the following commands can give necessary information about GPU:
lspci -nn | grep VGA
lshw -C display -numeric
glxinfo -B
sudo X -version
Done: https://github.com/ignitionrobotics/.github/pull/3 .
|
2025-04-01T06:39:00.985568
| 2021-10-20T16:43:18
|
1031619356
|
{
"authors": [
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6946",
"repo": "iheartradio/ficus",
"url": "https://github.com/iheartradio/ficus/pull/175"
}
|
gharchive/pull-request
|
Update scalafmt-core to 3.0.7
Updates org.scalameta:scalafmt-core from 2.7.5 to 3.0.7.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Files still referring to the old version number
The following files still refer to the old version number (2.7.5).
You might want to review and update them manually.
.github/workflows/format.yml
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
labels: library-update, semver-major, old-version-remains
Superseded by #176.
|
2025-04-01T06:39:01.009923
| 2024-01-29T09:01:01
|
2104985440
|
{
"authors": [
"benmwebb",
"drlemmus"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6947",
"repo": "ihmwg/ModelCIF",
"url": "https://github.com/ihmwg/ModelCIF/issues/16"
}
|
gharchive/issue
|
_ma_target_ref_db_details.gene_name item type change
When parsing AlphaFoldDB entries we found that some cases have a value for _ma_target_ref_db_details.gene_name that is incompatible with the item type as defined in ModelCIF (code). Examples are:
AF-A0A0M3KKW8-F1
AF-H6SHY4-F1
AF-O78126-F1
AF-Q6RH29-F1
AF-Q6RH28-F1
AF-Q6RH30-F1
AF-Q6XEC0-F1
We reported this issue to to the helpdesk of AlphaFoldDB who answered:
//
The pipeline for predictions of AlphaFold, retrieves metadata from UniProt.
Since, the mmCIF files were created by Google Deepmind, we can't modify the
content of the field ourselves.
We recommend you to please reach out to https://pdb-dev.wwpdb.org/ to look
into the updating regex for this field as it does accept whitespaces.
//
Rather than changing the regex for the type "code", perhaps the type for _ma_target_ref_db_details.gene_name can be set to "text" or another type that is compatible with UniProt entries that have a gene name (from the 'GN' record, I suppose) with spaces in it.
e.g. https://www.alphafold.ebi.ac.uk/entry/H6SHY4 contains
_ma_target_ref_db_details.gene_name "celH or egH"
while https://www.alphafold.ebi.ac.uk/entry/O78126 contains
_ma_target_ref_db_details.gene_name "MHC class I HLA-A"
|
2025-04-01T06:39:01.827886
| 2022-01-28T13:44:02
|
1117422288
|
{
"authors": [
"ihonore"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6948",
"repo": "ihonore/my-brand-api",
"url": "https://github.com/ihonore/my-brand-api/pull/1"
}
|
gharchive/pull-request
|
gL9bboUa-articles-queries-and-user-end-points
What does this PR do?
Build a RESTful API for my-brand by adding CRUD functionality on articles, queries, and user endpoints.
Description of Task to be completed?
[x] CRUD operation on articles
[x] CRUD operation on queries
[x] CRUD operations on user
[x] Display articles on blog
[x] Posting comments
How should this be manually tested?
Clone this project and install all required dependencies using npm install, then run the npm run dev command.
Test the endpoints using Postman.
Any background context you want to provide?
You can connect to the local database or Atlas by changing the connection string in the .env file.
What are the relevant pivotal tracker stories?
https://trello.com/c/K5JX40AU/26-article-crud-operations
Screenshots (if appropriate)
N/A
Questions: No
Please finish the last checklist
Finished
|
2025-04-01T06:39:04.290392
| 2020-07-22T08:13:48
|
663566642
|
{
"authors": [
"francescolovat",
"khaeru"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6949",
"repo": "iiasa/ixmp",
"url": "https://github.com/iiasa/ixmp/pull/352"
}
|
gharchive/pull-request
|
typo in docstring reporting/computations/select()
Small typo in docstring of ixmp/reporting/computations/select().
More typos will be added here in case they will be ecountered.
The check failures are expected/handled in #357, so I will merge this.
|
2025-04-01T06:39:04.301807
| 2016-06-23T23:53:06
|
162048494
|
{
"authors": [
"iissnan",
"lesleyandrez"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6950",
"repo": "iissnan/hexo-theme-next",
"url": "https://github.com/iissnan/hexo-theme-next/pull/958"
}
|
gharchive/pull-request
|
Add translate to pt-BR
Great theme.
I intend to popularize it here in Brazil.
Thanks!
Thanks. :+1:
|
2025-04-01T06:39:04.358148
| 2021-02-21T11:28:39
|
812832821
|
{
"authors": [
"galuszka",
"hoffmanncedric",
"ilcato",
"vkamenski"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6952",
"repo": "ilcato/homebridge-fibaro-hc3",
"url": "https://github.com/ilcato/homebridge-fibaro-hc3/issues/25"
}
|
gharchive/issue
|
[Fibaro_HC3] Error getting data from Home Center: Error: unable to get local issuer certificate
[2/21/2021, 12:20:44 PM] [Fibaro_HC3] Error getting data from Home Center: Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12) {
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'
}
(node:3902) UnhandledPromiseRejectionWarning: Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
(Use node --trace-warnings ... to show where the warning was created)
(node:3902) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:3902) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
How did you configure the host parameter in config.json?
Is your Home Center exposing the https interface?
Please enable also the http interface.
I'm currently implementing the support also for https. I will keep you updated.
I tried, but it's locked; I will check this point on my side and get back to you when http is activated. Thanks for your quick support! :-)
It works!! Thanks a lot! Very good job!
Check within a few hours, I should publish a release with https support.
@hoffmanncedric, try the new version. You should put the address of your Home Center in the new param "url", like:
"url": "https://hc3-00000XXX.local"
Then you need to obtain the ca.cer file from your Home Center and put it in the same folder as config.json.
2/21/2021, 5:51:37 PM] [Fibaro_HC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-00014803.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-00014803.local',
response: undefined
}
(node:10159) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND hc3-00014803.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
(Use node --trace-warnings ... to show where the warning was created)
(node:10159) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
@hoffmanncedric, where did you put the ca.cer file?
I tried to copy the .cer into the module and into fibaro, but same problem.
Are you running homebridge in docker?
I'm currently looking in .homebridge in the user home directory or in the /var/lib/homebridge when you run the homebridge UX. Which setup are you running?
Yes, my Homebridge runs in Docker on my Synology NAS, with the latest Homebridge for Docker setup.
Are you able to tell the path as seen by the homebridge process?
Yes, it's here:
Are you using this: https://github.com/oznu/docker-homebridge ?
@hoffmanncedric, can you try with 1.1.2 ? Leave ca.cer where you put it.
@ilcato where should I put the certificate for a Raspberry Pi installation?
Updated to 1.1.2.
I think it's a warning.
i will pass on http :-)
If you use https (with the cer file uploaded to Homebridge) it will not accept access using an IP address, because only hc-0000xxxx.local is listed in the certificate's subject alternative name. But when Homebridge is running in a Docker container (on a Raspberry Pi) the .local addresses are not resolved. Non-secure (http) access is not working either. It is always complaining about the certificate, as mentioned by @hoffmanncedric above.
For http use host param and delete url param.
I'm using the Raspberry Pi image, https, and the url param. My ca.cer file is in the "/var/lib/homebridge/" folder. I'm still getting
[22/02/2021, 23:21:42] [FibaroHC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-00014186.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-00014186.local',
response: undefined
}
If you use https (with the cer file uploaded to Homebridge) it will not accept access using an IP address, because only hc-0000xxxx.local is listed in the certificate's subject alternative name. But when Homebridge is running in a Docker container (on a Raspberry Pi) the .local addresses are not resolved. Non-secure (http) access is not working either. It is always complaining about the certificate, as mentioned by @hoffmanncedric above. In the end I cannot make the 1.1.2 plugin work :-(
@galuszka, I confirm that no docker support is present for this feature. But you can use http by using the host param and removing the url param as it was before this change.
I'm using raspberry image, https, and url param. My ca.cer file is in "/var/lib/homebridge/" folder. I'm still getting
[22/02/2021, 23:21:42] [FibaroHC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-XXX.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-XXX.local',
response: undefined
}
@vkamenski, are you able to resolve the HC3 name from the Raspberry Pi?
i will pass on http :-)
@hoffmanncedric, as @galuszka said this doesn't work on Docker. Sorry.
|
2025-04-01T06:39:04.363534
| 2017-12-13T00:08:15
|
281581189
|
{
"authors": [
"ilkarman",
"xxccry"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6953",
"repo": "ilkarman/DeepLearningFrameworks",
"url": "https://github.com/ilkarman/DeepLearningFrameworks/issues/41"
}
|
gharchive/issue
|
where I want to download the vm file?
Hi @ilkarman,
This project is really good. I want to download the VM file to run these deep learning frameworks. Can you share the download URL? Or I could buy the VM file. Thanks a lot.
Hey xxccry, do you mean the VM Image file? This link ( https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-ads.dsvm-deep-learning?tab=Overview ) should let you create one.
|
2025-04-01T06:39:04.366003
| 2022-10-04T22:09:26
|
1396899026
|
{
"authors": [
"echuber2",
"hjstn"
],
"license": "NCSA",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6954",
"repo": "illinois/queue",
"url": "https://github.com/illinois/queue/issues/339"
}
|
gharchive/issue
|
Allow socket access through access token authentication
Currently, the socket that allows real-time updates on the Queue verifies user authentication using the stored JWT token, which is only present if the user authenticates through Shibboleth. This prevents other services (e.g. apps, etc.) from getting real-time information such as the queue status without regularly polling the Queue's other APIs.
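As a rough sketch of what dual authentication could look like (hypothetical helper functions, not the Queue's actual code), a socket.io middleware could accept either the Shibboleth-issued JWT cookie or an access token passed in the handshake:

import type { Socket } from 'socket.io';

// Stub token checks standing in for whatever stores the Queue would use.
async function verifyJwt(token: string): Promise<{ netid: string } | null> {
  return token === 'valid-jwt' ? { netid: 'demo' } : null; // placeholder logic
}
async function lookupAccessToken(token: string): Promise<{ netid: string } | null> {
  return token === 'valid-api-token' ? { netid: 'svc' } : null; // placeholder logic
}

async function authenticateSocket(socket: Socket, next: (err?: Error) => void) {
  const jwt = socket.handshake.headers.cookie?.match(/jwt=([^;]+)/)?.[1];
  const accessToken = socket.handshake.auth?.token as string | undefined;

  const user = jwt ? await verifyJwt(jwt)
    : accessToken ? await lookupAccessToken(accessToken)
    : null;
  if (!user) return next(new Error('unauthorized'));

  (socket as any).user = user; // attach identity for later event handlers
  next();
}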
Can you give more information about the use case for real-time updates in your course app? Is polling the API causing issues?
There were a few ideas I had considered which I believe would benefit from real-time information:
An interactive display board showing the current queue status - polling would work fine but introduces additional complexity in identifying the changes between each poll.
A native queue app for course staff to interact with the queue more easily on mobile devices - polling would probably not work well unless on a very short interval, or it could potentially lead to multiple staff attempting to answer the same question or similar issues.
I think it may be better to bring up a WebView to handle Shibboleth authentication in the latter case and use the real-time socket as normal, but I'm not sure what the best strategy is to work around the lack of real-time information in the first case.
|
2025-04-01T06:39:04.368235
| 2021-02-14T22:14:14
|
808059461
|
{
"authors": [
"aabounegm",
"sallaben"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6955",
"repo": "illright/attractions",
"url": "https://github.com/illright/attractions/pull/244"
}
|
gharchive/pull-request
|
Bug: fix getCalendar day cursor in short months
Create the day cursor in the local timezone
Set the date at the same time as the month to avoid short-month issues (see the sketch below)
For more discussion see #243
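For context, this is the JavaScript Date pitfall the fix guards against (plain Date semantics, not library code): setting the month on a cursor whose day-of-month exceeds the target month's length rolls over into the next month, while setting the month and day together does not.

const cursor = new Date(2021, 0, 31); // Jan 31, local time

const bad = new Date(cursor);
bad.setMonth(1);                      // Feb has no day 31 -> rolls over to Mar 3
console.log(bad.toDateString());      // "Wed Mar 03 2021"

const good = new Date(cursor);
good.setFullYear(2021, 1, 1);         // set month and day together -> Feb 1
console.log(good.toDateString());     // "Mon Feb 01 2021"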
Thanks, github-actions bot. Verified the change works on the 244 docs as well.
Thank you so much for your help, @sallaben!
|
2025-04-01T06:39:04.373860
| 2021-04-17T00:51:30
|
860274757
|
{
"authors": [
"bluebear94",
"illuhad"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6956",
"repo": "illuhad/hipSYCL",
"url": "https://github.com/illuhad/hipSYCL/issues/527"
}
|
gharchive/issue
|
Using reduction variable causes misaligned write with CUDA target?
Minimal working example (adapted from page 9 of the official cheat sheet to make it compile):
#include <algorithm>
#include <assert.h>
#include <CL/sycl.hpp>
using namespace cl::sycl;
int main() {
queue myQueue;
buffer<int> valuesBuf{1024};
{ // Initialize buffer on the host with 0, 1, 2, 3, ..., 1023
host_accessor a{valuesBuf};
std::iota(a.get_pointer(), a.get_pointer() + 1024, 0);
}
// Buffers with just 1 element to get the reduction results
int sumResult = 0;
buffer<int> sumBuf{&sumResult, 1};
int maxResult = 0;
buffer<int> maxBuf{&maxResult, 1};
myQueue.submit(
[&](handler &cgh) { // Input values to reductions are standard accessors
auto inputValues = valuesBuf.get_access<access_mode::read>(
cgh); // Create temporary objects describing variables with
// reduction semantics
accessor sumAccessor = sumBuf.get_access<access::mode::read_write>(cgh);
accessor maxAccessor = maxBuf.get_access<access::mode::read_write>(cgh);
auto sumReduction = reduction(sumAccessor, plus<int>());
auto maxReduction = reduction(
maxAccessor,
maximum<int>()); // parallel_for performs two reduction operations
// For each reduction variable, the implementation:
// - Creates a corresponding reducer
// - Passes a reference to the reducer to the lambda as a parameter
cgh.parallel_for(range<1>{1024}, sumReduction, maxReduction,
[=](id<1> idx, auto &sum, auto &max) {
// plus<>() corresponds to += operator, so sum can be
// updated via += or combine()
sum += inputValues[idx];
// maximum<>() has no shorthand operator, so max
// can only be updated via combine()
max.combine(inputValues[idx]);
});
});
// sumBuf and maxBuf contain the reduction results once
// the kernel completes
assert(maxBuf.get_host_access()[0] == 1023 &&
sumBuf.get_host_access()[0] == 523776);
}
When this is compiled (with /opt/hipSYCL/bin/syclcc -Wall -Wextra -Wpedantic -O3 -std=c++17 -g --hipsycl-gpu-arch=sm_50 --cuda-path=/usr/local/cuda -L/usr/local/cuda/lib64 rtest.cpp -o rtest) and run under cuda-memcheck, things don't look so good.
========= CUDA-MEMCHECK
[hipSYCL Warning] dag_direct_scheduler: Detected a requirement that is neither of discard access mode (SYCL 1.2.1) nor noinit property (SYCL 2020) that accesses uninitialized data. Consider changing to discard/noinit. Optimizing potential data transfers away.
[hipSYCL Warning] buffer_memory_requirement: Could not find embedded pointer in kernel blob for this requirement; do you have unnecessary accessors that are unused in your kernel?
[hipSYCL Warning] buffer_memory_requirement: Could not find embedded pointer in kernel blob for this requirement; do you have unnecessary accessors that are unused in your kernel?
========= Invalid __global__ write of size 4
========= at 0x00000428 in /opt/hipSYCL/bin/../include/hipSYCL/glue/cuda/../generic/hiplike/hiplike_reducer.hpp:76:__hipsycl_kernel__ZTSZZZN7hipsycl4glue23hiplike_kernel_launcherILNS_2rt10backend_idE0ENS2_10cuda_queueEE4bindI24__hipsycl_unnamed_kernelLNS2_11kernel_typeE1ELi1EZZ4mainENKUlRNS_4sycl7handlerEE21_7clESB_EUlNS9_2idILi1EEERT_RT0_E35_26JNS9_6detail29accessor_reduction_descriptorINS9_8accessorIiLi1ELNS9_11access_modeE2ELNS9_6targetE0ELNS9_6access11placeholderE0EEENS9_4plusIiEEEENSL_ISR_NS9_7maximumIiEEEEEEEvNSD_IXT1_EEENS9_5rangeIXT1_EEES10_mT2_DpT3_ENKUlPNS2_8dag_nodeEE605_16clES15_ENKUlDpT_E646_41clIJNS0_16hiplike_dispatch28hiplike_reduction_descriptorISU_NS1B_15reduction_stageILi1EEEEENS1C_ISX_S1E_EEEEEDaS18_EUliDpRS17_E744_44
========= by thread (0,0,0) in block (0,0,0)
========= Address 0x6a06ab1205f6b4d3 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame:/lib/libcuda.so.1 (cuLaunchKernel + 0x2b8) [0x223718]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 [0x1a23d]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 [0x1a2c7]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 (cudaLaunchKernel + 0x225) [0x4e3c5]
========= Host Frame:./rtest [0xc17f]
========= Host Frame:./rtest [0x1dced]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt10cuda_queue13submit_kernelERKNS0_16kernel_operationESt10shared_ptrINS0_8dag_nodeEE + 0x9e) [0x1252e]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x20f82]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt16kernel_operation8dispatchEPNS0_20operation_dispatcherESt10shared_ptrINS0_8dag_nodeEE + 0x56) [0x35886]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20multi_queue_executor15submit_directlyESt10shared_ptrINS0_8dag_nodeEEPNS0_9operationERKSt6vectorIS4_SaIS4_EE + 0x10b6) [0x200e6]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2cca9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xd3c) [0x281fc]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaMemcpyAsync.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x76703]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt10cuda_queue13submit_memcpyERKNS0_16memcpy_operationESt10shared_ptrINS0_8dag_nodeEE + 0x2ad) [0x11c4d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x210d2]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt16memcpy_operation8dispatchEPNS0_20operation_dispatcherESt10shared_ptrINS0_8dag_nodeEE + 0x57) [0x1b8f7]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20multi_queue_executor15submit_directlyESt10shared_ptrINS0_8dag_nodeEEPNS0_9operationERKSt6vectorIS4_SaIS4_EE + 0x10b6) [0x200e6]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2cca9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2d53d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2ad63]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xb19) [0x27fd9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_queue.cpp:221 @ submit_memcpy(): cuda_queue: Couldn't submit memcpy (error code = CUDA:4)
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaEventQuery.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x6c9de]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZNK7hipsycl2rt15cuda_node_event11is_completeEv + 0x2b) [0xf81b]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZNK7hipsycl2rt8dag_node11is_completeEv + 0x48) [0x22998]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt17dag_submitted_ops22update_with_submissionESt10shared_ptrINS0_8dag_nodeEE + 0x4dd) [0x30b5d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt11dag_manager22register_submitted_opsESt10shared_ptrINS0_8dag_nodeEE + 0x4e) [0x2f90e]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xe20) [0x282e0]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaEventSynchronize.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x6c83e]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt15cuda_node_event4waitEv + 0x2b) [0xfd6b]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_multi_node_event4waitEv + 0x29) [0x24a69]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZNK7hipsycl2rt8dag_node4waitEv + 0x22) [0x23ee2]
========= Host Frame:./rtest [0x1919b]
========= Host Frame:./rtest [0x18862]
========= Host Frame:./rtest [0xfec6]
========= Host Frame:./rtest [0x699c]
========= Host Frame:/lib/libc.so.6 (__libc_start_main + 0xd5) [0x27b25]
========= Host Frame:./rtest [0x570e]
=========
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_event.cpp:55 @ is_complete(): cuda_node_event: Couldn't query event status (error code = CUDA:4)
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_event.cpp:66 @ wait(): cuda_node_event: cudaEventSynchronize() failed (error code = CUDA:4)
rtest: rtest.cpp:47: int main(): Assertion `maxBuf.get_host_access()[0] == 1023 && sumBuf.get_host_access()[0] == 523776' failed.
========= Error: process didn't terminate successfully
========= No CUDA-MEMCHECK results found
Is this a genuine bug with reduction variables or am I doing something wrong?
Thanks for reporting, it seems that #499 has introduced a regression that broke reductions with accessors. You can try reductions on top of USM pointers as a workaround. We will fix ASAP.
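For anyone landing here before the fix: a minimal sketch of what the USM-based workaround could look like, written against the SYCL 2020 spec rather than hipSYCL's own tests (header spelling and reduction-initialization semantics can vary between versions):
#include <sycl/sycl.hpp> // or <CL/sycl.hpp> on older hipSYCL
#include <cassert>
int main() {
  sycl::queue q;
  constexpr size_t n = 1024;
  // Device-visible USM allocations for the input and both results.
  int *input = sycl::malloc_shared<int>(n, q);
  int *sum = sycl::malloc_shared<int>(1, q);
  int *max = sycl::malloc_shared<int>(1, q);
  for (size_t i = 0; i < n; ++i) input[i] = static_cast<int>(i);
  *sum = 0;
  *max = 0;
  // SYCL 2020 reductions over raw USM pointers instead of accessors.
  q.parallel_for(sycl::range<1>{n},
                 sycl::reduction(sum, sycl::plus<int>()),
                 sycl::reduction(max, sycl::maximum<int>()),
                 [=](sycl::id<1> idx, auto &s, auto &m) {
                   s += input[idx];
                   m.combine(input[idx]);
                 }).wait();
  assert(*max == 1023 && *sum == 523776);
  sycl::free(input, q);
  sycl::free(sum, q);
  sycl::free(max, q);
}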
Fix in PR #528
This PR also adds a test case based on this code pattern where I have removed some things that are not necessary, such as the initialization of the buffers with a pointer.
|
2025-04-01T06:39:04.399279
| 2024-02-11T21:34:19
|
2129179439
|
{
"authors": [
"StarTroop",
"dccoder84",
"ilya-zlobintsev",
"oehme"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6957",
"repo": "ilya-zlobintsev/LACT",
"url": "https://github.com/ilya-zlobintsev/LACT/issues/266"
}
|
gharchive/issue
|
Allow setting a fan start/stop speed
Right now, if I have a fan curve like
50°C - 0%
60°C - 15%
70°C - 25%
and I'm running a light load (e.g. a 2D game) the graphics card reaches ~55°C and LACT will interpolate between 0% and 15%. This either leads to whining noises, because the fan can't start at lower than ~15%, or it leads to constant start/stop cycles as the temperature hovers around the point where the fan actually starts.
There should be a fan start speed setting. The fan would only start once the curve reaches that speed. If temps drop, the fan will slow down below this start speed (most fans can run below the speed they start at) and will stop once it reaches a certain minimum speed. It will then only start again once it reaches the start speed on the curve again. This way the start/stop cycles would be much longer and the fan would never get a PWM signal that it can't handle.
See a similar feature in corectrl and its implementation. They only have a start speed and are missing the stop speed.
In fancontrol these values are called MINSTART and MINSTOP
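For illustration, a minimal sketch of the proposed MINSTART/MINSTOP behaviour (the threshold values and names are made up, and LACT itself is written in Rust — this is only the idea, not its implementation):
#include <algorithm>
// Hypothetical thresholds, in percent PWM.
constexpr double kStartSpeed = 15.0; // fan only starts once the curve asks for this much (MINSTART)
constexpr double kStopSpeed = 5.0;   // a running fan is stopped below this (MINSTOP)
struct FanController {
  bool running = false;
  // 'curveSpeed' is the percentage the fan curve requests at the current temperature.
  double apply(double curveSpeed) {
    if (!running) {
      // Stay off until the curve reaches the start threshold.
      if (curveSpeed < kStartSpeed) return 0.0;
      running = true;
    } else if (curveSpeed < kStopSpeed) {
      // A spinning fan can run below its start speed, but stop it below the stop threshold.
      running = false;
      return 0.0;
    }
    return std::clamp(curveSpeed, kStopSpeed, 100.0);
  }
};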
Sounds reasonable. This shouldn't be too difficult to implement.
@oehme Could you try building the feature/fan-threshold branch and see how it works for you? I've added fan start and stop thresholds, but my GPU doesn't have an audible transition between the fan being on and off, so I cannot test for sure that it works properly.
I guess this is exactly the feature I wanted. I compiled and tried the feature branch "fan-threshold". For now I cannot say if it's working. What do the threshold numbers mean? Are they degrees Celsius or percent? What I can tell is that these values are not saved. When I close LACT, my threshold settings are gone.
In my case I have a Radeon RX 6600, and when I watch videos, for example on YouTube, the decoding happens on the GPU. By default, without any tools like LACT, the driver's fan handling is a bit nervous, going from zero to a somewhat noisy 1500 rpm when watching videos. With LACT I can lower that to 750 rpm, which is silent to my ears. But as stated above, the fans are going on and off the whole time. So let's say the fans should turn on at 55 °C with 750 rpm and then should cool down to 50 °C. How should I set up the threshold sliders?
Ah, and one more little thing. Can we have a make clean target in the Makefile?
@dccoder84
Since this issue was created there was fan control hysteresis implemented in #291 , which lets you set a threshold for how much the temperature needs to change and for how long it needs to stay at a lower value. This can prevent the fan ramping up and down by keeping it at the same speed until the configured conditions are met. It should help in your case, and it's a more universal solution than start/stop thresholds.
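Roughly, that hysteresis filters the temperature the curve sees — react to rises immediately, but to drops only after they persist. A sketch with hypothetical parameter names (again, not LACT's actual Rust code):
#include <chrono>
using Clock = std::chrono::steady_clock;
// Hypothetical settings: only react to moves of at least kTempThreshold degrees,
// and to drops only after they persist for kSpindownDelay.
constexpr double kTempThreshold = 3.0;
constexpr auto kSpindownDelay = std::chrono::seconds(5);
struct Hysteresis {
  double lastAppliedTemp = 0.0;
  Clock::time_point belowSince{};
  bool pendingDrop = false;
  // Returns the temperature the fan curve should be evaluated at.
  double filter(double temp, Clock::time_point now) {
    const double delta = temp - lastAppliedTemp;
    if (delta >= kTempThreshold) { // rising: react immediately
      lastAppliedTemp = temp;
      pendingDrop = false;
    } else if (delta <= -kTempThreshold) { // falling: react only after the delay
      if (!pendingDrop) {
        pendingDrop = true;
        belowSince = now;
      } else if (now - belowSince >= kSpindownDelay) {
        lastAppliedTemp = temp;
        pendingDrop = false;
      }
    } else {
      pendingDrop = false; // small wobble: ignore
    }
    return lastAppliedTemp;
  }
};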
However, if it doesn't work for you, and you want to try this branch:
What do the threshold numbers mean? Are they degree Celcius or percent?
They are in percentages, 0 to 100.
What I can tell is that these values are not saved. When I close Lact then my threshold settings are gone.
Make sure you're running a version of the daemon built from that branch, otherwise the settings won't get used.
For your setup, you should be able to set the stop threshold at your 55C speed and start threshold at the 50C speed.
Ah, and one more little thing. Can we have a make clean target in the Makefile? 😸
You just need to delete the target directory. You should only ever need to do this to free up space used up by old builds.
Sorry for the delayed response - The machine in question was not mine, but a friends, so I couldn't test for a while. The solution in #291 works fine for them, so this one is no longer needed from my perspective.
I'm going to close this as #291 should prevent the original issue of the fan speed being changed too often in the majority of cases.
Sorry to resurrect this, but is the fan stop/start threshold feature planned to be officially added? I have a 6600 xt which will not spin down below 31% if I use a custom fan curve (i.e. the fan will only turn off in Automatic mode). The fan start feature in Corectrl works nicely, so I'd like to see it implemented here. Was the aforementioned feature branch not working correctly?
Right now we have the features "Spindown delay" and "Speed change threshold". They solve the issue of the fans permanently turning on and off to some degree. Personally I would like to be able to turn the fans on at 55 °C and turn them off at 50 °C. Maybe I could achieve this with the "speed change threshold", but then it would also affect the whole fan curve and not only the beginning. So I think there is still room for improvement and discussion.
Yeah, the spindown delay and speed change threshold are good solutions for the issue of unnecessary speed ramping, but for my case of the Fan Stop feature not working in any custom mode, the originally proposed solution would be better (assuming it works like Corectrl). I'm not sure if I should create a new issue, or piggyback on this one. It may also be good to know if my issue is unique, as I would imagine someone else might have noticed the Fan Stop feature not working if it were widespread. I know the 7XXX series has the issue of not being able to circumvent Fan Stop, but I haven't found any discussion about how it works in the 6XXX series.
So what I tried to say is that while I may be able to turn the fans off at 50 °C and on at 55 °C, I cannot at the same time set a fine-grained speed change for the rest of the fan curve, for example changing the fan speed every 2 °C with the "speed change threshold".
|
2025-04-01T06:39:04.428241
| 2015-05-08T07:59:29
|
74274322
|
{
"authors": [
"dscho",
"hinerm"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6959",
"repo": "imagej/imagej-legacy",
"url": "https://github.com/imagej/imagej-legacy/pull/113"
}
|
gharchive/pull-request
|
Fix the -batch <file> handling
Triggered by a tweet of Stephan Preibisch, this developer tried to run a
macro in batch mode via
.../ImageJ-win32 -batch abc.ijm
and was surprised that it did not work. The reason it did not work was a
completely borked logic behind the -batch handling, which assumed that
.../ImageJ-win32 -eval ... -batch was the only use case.
This patch fixes that logic to work correctly.
Signed-off-by: Johannes Schindelin<EMAIL_ADDRESS>
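For illustration only — the actual patch lives in the Java launcher — the corrected idea is that -batch may optionally be followed by a macro file and must not assume a preceding -eval. A sketch of such parsing:
#include <optional>
#include <string>
#include <vector>
struct BatchOptions {
  bool batchMode = false;
  std::optional<std::string> macroFile; // set for ".../ImageJ -batch abc.ijm"
  std::optional<std::string> evalCode;  // set for ".../ImageJ -eval '...' -batch"
};
// Accept both "-batch <file>" and "-eval <code> ... -batch".
BatchOptions parse(const std::vector<std::string> &args) {
  BatchOptions opts;
  for (size_t i = 0; i < args.size(); ++i) {
    if (args[i] == "-eval" && i + 1 < args.size()) {
      opts.evalCode = args[++i];
    } else if (args[i] == "-batch") {
      opts.batchMode = true;
      // Only treat the next token as a macro file if it isn't another flag.
      if (i + 1 < args.size() && !args[i + 1].empty() && args[i + 1][0] != '-')
        opts.macroFile = args[++i];
    }
  }
  return opts;
}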
Cool, thanks @dscho!
|
2025-04-01T06:39:04.436368
| 2018-09-17T18:30:14
|
360989466
|
{
"authors": [
"RiZKiT",
"chaucerbao"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6960",
"repo": "imagemin/imagemin",
"url": "https://github.com/imagemin/imagemin/issues/296"
}
|
gharchive/issue
|
Support for a configuration file
It would be pretty convenient to have support for reading plugin options from a configuration file.
It would also allow imagemin-cli and imagemin-micro to be more flexible in providing customizable image compression services.
Regarding https://github.com/imagemin/imagemin/issues/171#issuecomment-315229115, there is at least https://github.com/paulpflug/imagemin-manager, which also does some preprocessing.
|
2025-04-01T06:39:04.458066
| 2024-08-31T08:48:11
|
2498735519
|
{
"authors": [
"dit-zy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6961",
"repo": "img02/HuntHelper",
"url": "https://github.com/img02/HuntHelper/pull/56"
}
|
gharchive/pull-request
|
update map coord math
the math used here worked for non-hw maps, but was noticeably off for hw maps. dalamud has logic to convert from raw to map coords that seems to be based on a game binary reference (https://github.com/goatcorp/Dalamud/blob/0fb7585973ffe22ab9353d853b0084a3f1c0803b/Dalamud/Utility/MapUtil.cs#L28). after some testing, we've confirmed that the dalamud calculation is accurate for all maps, including hw. so this change introduces the logic used by dalamud, to fix coords for hw maps, but especially the flags generated by hh for hw.
ah, don't publish yet ><. i did test this and it does work, but i just noticed it messes up the map ui. am about to make another pr to fix that ><
|
2025-04-01T06:39:04.474978
| 2020-07-01T13:59:35
|
649005887
|
{
"authors": [
"DarthSim",
"waroo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6962",
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/434"
}
|
gharchive/issue
|
examples incorrectly encode key/salt into hex
The example files at https://github.com/imgproxy/imgproxy/tree/master/examples show the key and salt encoded into hexadecimal before sending to the server. This is incorrect as it doesn't work in hex. They should be a normal string.
The server environment variable stores the key and salt in hexadecimal but the client does not.
Hey!
Sorry, I don't understand what's wrong with storing key/salt pair on the client side encoded to hex.
When I send the key/salt to the server as hex encoded, the server gives an invalid signature or forbidden response. However, when I send the key/salt to the server as an unencoded string it works fine.
Don't know if this is still an issue for you, but.
Taking the Go example, the key and the salt the way they are defined in the example ARE hex-encoded strings:
key := "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
salt := "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
They should be sent to the server just as they defined in the example:
IMGPROXY_KEY="943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
IMGPROXY_SALT="520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
Thus I don't see why you treat this as an error.
Don't know if this is still an issue for you, but.
Taking the Go example, the key and the salt the way they are defined in the example ARE hex-encoded strings:
key := "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
salt := "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
They should be sent to the server just as they defined in the example:
IMGPROXY_KEY="943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
IMGPROXY_SALT="520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
Thus I don't see why you treat this as an error.
Exactly, the key and salt should be sent to the server as they appear in the string.
However in the swift example it converts it to hex which is the issue.
let key = "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881".hexadecimal()!;
let salt = "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5".hexadecimal()!;
If you take out .hexadecimal()! the code will work.
Exactly, the key and salt should be sent to the server as they appear in the string.
However in the swift example it converts it to hex which is the issue.
let key = "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881".hexadecimal()!;
let salt = "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5".hexadecimal()!;
If you take out .hexadecimal()! the code will work.
I'm absolutely not familiar with Swift, but it looks like the hexadecimal function decodes a hex-encoded string, which is the correct approach.
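To make this concrete: imgproxy documents the signature as HMAC-SHA256 keyed with the decoded key bytes, computed over the decoded salt bytes followed by the path, then base64url-encoded. A rough C++/OpenSSL sketch of that step (helper names are mine; the base64url encoding is omitted):
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdlib>
#include <string>
#include <vector>
// Decode "943b42..." into raw bytes; the raw bytes are the actual key material.
static std::vector<unsigned char> fromHex(const std::string &hex) {
  std::vector<unsigned char> out;
  for (size_t i = 0; i + 1 < hex.size(); i += 2)
    out.push_back(static_cast<unsigned char>(
        std::strtoul(hex.substr(i, 2).c_str(), nullptr, 16)));
  return out;
}
std::vector<unsigned char> sign(const std::string &keyHex,
                                const std::string &saltHex,
                                const std::string &path) {
  const auto key = fromHex(keyHex);
  const auto salt = fromHex(saltHex);
  // Message is salt || path; the digest would then be base64url-encoded
  // and prepended to the path.
  std::string msg(salt.begin(), salt.end());
  msg += path;
  unsigned char digest[EVP_MAX_MD_SIZE];
  unsigned int len = 0;
  HMAC(EVP_sha256(), key.data(), static_cast<int>(key.size()),
       reinterpret_cast<const unsigned char *>(msg.data()), msg.size(),
       digest, &len);
  return {digest, digest + len};
}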
|
2025-04-01T06:39:04.478202
| 2020-12-07T17:19:31
|
758696002
|
{
"authors": [
"DarthSim",
"toonvd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6963",
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/525"
}
|
gharchive/issue
|
go-nanoid compatibility
Hi
At the moment, the router is not compatible with the new go-nanoid version:
https://github.com/matoous/go-nanoid/commit/68b72474732cdc7bd8198c02d4af9c94846000df
For this version, nanoid.Nanoid() should become nanoid.New().
https://github.com/imgproxy/imgproxy/blob/9e3a1c6c2a4085ae1038787e72420ec2e6bb30c3/router.go#L79
Just a heads up to be careful when updating this dependency. (It also broke installation with go get for me but that is easily bypassed)
Hey!
Thanks for the warning!
It also broke installation with go get for me
Yeah, it turned out that go get completely ignores all the go mod stuff in most cases. That's why I rewrote the installation manual to use go build.
|
2025-04-01T06:39:04.479689
| 2022-05-19T12:59:16
|
1241736938
|
{
"authors": [
"nwaughachukwuma"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6964",
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/874"
}
|
gharchive/issue
|
How to apply transformation to a signed imgproxy URL
Is it possible to apply transformations to a signed imgproxy url?
Say I have a url := http://imgproxy.example.com/oKfUtW34Dvo2BGQehJFR4Nr0_rIjOtdtzJ3QFsUcXH8/rs:fill:1080:1080:0/g:sm/aHR0cDovL2V4YW1w/bGUuY29tL2ltYWdl/cy9jdXJpb3NpdHku/anBn.png which has been signed after specifying env variables IMGPROXY_KEY and IMGPROXY_SALT, how might I go about changing the dimension to rs:fit:760:760:0/g:no?
OK thanks
|
2025-04-01T06:39:04.557167
| 2021-03-20T15:07:41
|
836831264
|
{
"authors": [
"calebmshafer",
"grigasp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6966",
"repo": "imodeljs/imodeljs",
"url": "https://github.com/imodeljs/imodeljs/pull/1002"
}
|
gharchive/pull-request
|
Add deprecated tags to API summaries
Update the extract-api summary to include potentially two release tags, as a single API can be deprecated in addition to being alpha/beta/public (and technically internal, but that doesn't make much sense...). This change should help give a better overview of the state of our APIs.
UI Framework API change: I noticed that the WidgetState enum had deprecated set up incorrectly. I switched it to the correct syntax; please let me know if that deprecated flag was mistakenly added.
All other API changes are a by-product of the change to include both deprecated and other tags.
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
@grigasp I could do it that way as well. I found this way easier to filter and graph in Excel rather than having it all on the same line. However, if it makes it easier to read when it's directly in csv I can switch it.
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
@grigasp I could do it that way as well. I found this way easier to filter and graph in Excel rather than having it all on the same line. However, if it makes it easier to read when it's directly in csv I can switch it.
It's my personal preference to see them in one line, but I don't mind them being two lines if that's useful somewhere.
|
2025-04-01T06:39:04.559842
| 2019-07-18T12:36:22
|
469745305
|
{
"authors": [
"JohnnyCrazy",
"imolorhe"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6967",
"repo": "imolorhe/altair",
"url": "https://github.com/imolorhe/altair/issues/874"
}
|
gharchive/issue
|
AUR Package published
Hi,
FYI, I just published an AUR package for Arch Linux users: community/altair. I've been using altair for some time now and really like it :+1:
It installs altair without the AppImage wrapper and uses a shared electron runtime, which reduces total install size.
Up to you if you want it in the README (I can also supply a PR) :)
Hey @JohnnyCrazy
Yeah do create a PR for that. It would be best if there was a way to automate updating the package whenever altair updates.
Theoretically possible, but it would either require setting up a CI script in your repo that acts as a maintainer of that package and publishes new versions automatically via git push to the AUR, or an external service.
Will have a look at it. For now, I'm watching the releases and will update the package manually, which is also just a 15sec task.
|
2025-04-01T06:39:04.561584
| 2019-02-03T14:37:54
|
406094498
|
{
"authors": [
"drimpact"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6968",
"repo": "impactasaurus/app",
"url": "https://github.com/impactasaurus/app/issues/442"
}
|
gharchive/issue
|
Generate meeting summons
When creating a remote record, allow either a direct remote invite or a meeting summon to be generated.
This should adjust the form to collect only the required information. For summons, it should explain that they shouldn't be used when beneficiaries know each other's IDs.
The summon form should create a summon ID and provide a smn link to the user (impactasaurus/server#192)
Generic Questionnaire Links
|
2025-04-01T06:39:04.562945
| 2018-03-01T22:00:51
|
301575707
|
{
"authors": [
"drimpact"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6969",
"repo": "impactasaurus/server",
"url": "https://github.com/impactasaurus/server/issues/78"
}
|
gharchive/issue
|
User directory dataloader
Need to dedupe the requests going to auth0. Use a dataloader
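For reference, the dataloader pattern batches and caches lookups so repeated references to the same user produce one auth0 call per batch. A minimal single-threaded sketch with illustrative names (the real server is Go; this is only the shape of the pattern):
#include <functional>
#include <map>
#include <set>
#include <string>
// Batches user-ID lookups and caches results, so N references to the same
// user produce a single upstream request per batch.
class UserLoader {
 public:
  using BatchFn = std::function<std::map<std::string, std::string>(
      const std::set<std::string> &)>;
  explicit UserLoader(BatchFn fetch) : fetch_(std::move(fetch)) {}
  void request(const std::string &id) {
    if (!cache_.count(id)) pending_.insert(id); // dedupe within the batch
  }
  // Flush the batch: one upstream call for all distinct pending IDs.
  const std::string &get(const std::string &id) {
    if (!pending_.empty()) {
      for (auto &[k, v] : fetch_(pending_)) cache_[k] = v;
      pending_.clear();
    }
    return cache_.at(id);
  }
 private:
  BatchFn fetch_;
  std::set<std::string> pending_;
  std::map<std::string, std::string> cache_;
};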
Assessment management
Closed in 141ae133e47aa6a44ea54d2dbb66f3a442774ed6
|
2025-04-01T06:39:04.577535
| 2018-11-27T21:15:02
|
384988782
|
{
"authors": [
"FUSAKLA",
"bwplotka"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6970",
"repo": "improbable-eng/thanos",
"url": "https://github.com/improbable-eng/thanos/issues/649"
}
|
gharchive/issue
|
[Bug] Query: metadata endpoint /api/v1/label/<label_name>/values should return external labels
The metadata endpoint of Thanos Query /api/v1/label/<label_name>/values should return all values of the specified label name. (see Prometheus docs).
This works for all labels except those added via the external_labels of Thanos Store instances.
This is used, for example, by Grafana in label_values(<label_name>), which loads label values into a query variable; see http://docs.grafana.org/features/datasources/prometheus/#query-variable.
Thanks, valid issue, PRs welcome (:
The actual work to be done is to glue external labels from blocks in the store Gateway.
|
2025-04-01T06:39:04.583253
| 2023-12-01T23:34:32
|
2021730896
|
{
"authors": [
"JamesYopp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6971",
"repo": "improving-app/riddl",
"url": "https://github.com/improving-app/riddl/issues/65"
}
|
gharchive/issue
|
Strange behavior on outputs
I am trying to define outputs for a dashboard page. I won't include all the details needed to make it complete, but the relevant section can be seen below (this compiles):
page AOVDashboardPage {
contains DashboardSideNav as group DashboardNav
output AOVGraph shows record AOVData briefly "As a line graph"
output AOVSummary shows record AggregatedAOV
output OrderDetails shows record OrderDataTable
input TimeRangeSelector takes TimeRangeSelection
input OrderMetricSelector takes OrderMetricSelection
}
I would expect that the third line could/should be written as:
output AOVGraph shows graph AOVData briefly "As a line graph"
Likewise, line 5 could/should be written as:
output OrderDetails shows table OrderDataTable
However, when I make these changes I get an error like the following:
[error] Error: application/pages/dashboards/orderDashboards.riddl(24:35):
Expected one of ("}" | "input" | "form" | "text" | "button" | "picklist" | "selector" | "menu" | "output" | "document" | "list" | "table" | "graph" | "animation" | "picture" | "contains" | "group" | "page" | "pane" | "dialog" | "popup" | "frame" | "column" | "window" | "section" | "tab" | "flow" | "//" | "described" | "explained" | "briefly" | "brief" | "{" | "is" | "are" | ":" | "="):
output OrderDetails shows table OrderDataTable
^
Context: (outputDefinitions | briefly | description | parse0$3 | parse0$1 | "}")
Entered in the wrong place.
|
2025-04-01T06:39:04.586248
| 2023-03-07T00:31:19
|
1612468191
|
{
"authors": [
"miguelTaningcoYoppworks"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6972",
"repo": "improving-app/riddl",
"url": "https://github.com/improving-app/riddl/pull/42"
}
|
gharchive/pull-request
|
Remove UpdateStatus command from Tenant
Issue:
Currently in the Tenant entity, there is an UpdateStatus command. We want to remove this and in its place have commands that explicitly change the state of the entity.
Changes:
Remove UpdateStatus
Remove StatusUpdated
Add ActivateTenant
Add SuspendTenant
Add TenantActivated
Add TenantSuspended
Testing:
validated riddl
Notes:
I realized I named the branch wrong, it is supposed to be remove-update-status-from-tenant
Treating this branch as now unnecessary because of IA-128 which completely changes and overwrites this. Will close this branch and PR.
|
2025-04-01T06:39:04.595429
| 2019-05-15T08:56:12
|
444315850
|
{
"authors": [
"davidohayon669",
"mikaelengstrom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6973",
"repo": "inProgress-team/react-native-youtube",
"url": "https://github.com/inProgress-team/react-native-youtube/pull/367"
}
|
gharchive/pull-request
|
Fix build issue with RN0.59
I was getting the following errors from Gradle when using this lib on a clean RN 0.59 project:
This gradle file resolves that; however, I know next to nothing about the Gradle/Java/Android build process and have no idea if this affects backward compatibility or other things. So please keep that in mind before merging :)
#370 #370
|
2025-04-01T06:39:04.634280
| 2015-03-20T16:13:07
|
63256368
|
{
"authors": [
"Minglee01",
"grahamgilchrist"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6974",
"repo": "incuna/angular-external-link-interceptor",
"url": "https://github.com/incuna/angular-external-link-interceptor/pull/5"
}
|
gharchive/pull-request
|
Add jshint and jslint files
Add jshint and jscs code style files
@incuna/js please review/merge
thumbsup
|
2025-04-01T06:39:04.649029
| 2015-03-01T19:50:52
|
59414146
|
{
"authors": [
"manojlds",
"mpgerlek"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6975",
"repo": "indix/gocd-s3-artifacts",
"url": "https://github.com/indix/gocd-s3-artifacts/issues/7"
}
|
gharchive/issue
|
Make Publish and Fetch task use the same client credentials as Material
The Material plugin picks up credentials as specified here - http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#AmazonS3Client()
The Publish and Fetch tasks pick the credentials only from the environment variables.
+1
It would be simpler if the env vars for the key could be set in the s3material package repository dialog (or set it on a per-pipeline basis and use those env vars, like for pub and fetch).
Fixed in #53
|
2025-04-01T06:39:04.650493
| 2016-11-06T19:00:42
|
187581031
|
{
"authors": [
"TofPlay",
"indragiek"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6976",
"repo": "indragiek/CocoaMarkdown",
"url": "https://github.com/indragiek/CocoaMarkdown/issues/41"
}
|
gharchive/issue
|
Improving the readme
Hi,
Interesting component 🙂
Here is a template for your readme: iOS-readme-template
Hi, not sure exactly which part you're referencing but I do have plans to add support for package managers. Going to close this out for now, feel free to re-open if you had specific suggestions in mind.
|
2025-04-01T06:39:04.709905
| 2024-04-28T03:52:33
|
2267323881
|
{
"authors": [
"KevinHuSh",
"t6am3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6977",
"repo": "infiniflow/ragflow",
"url": "https://github.com/infiniflow/ragflow/issues/580"
}
|
gharchive/issue
|
[Bug]: connecting to redis:6379. Name or service not known
Is there an existing issue for the same bug?
[X] I have checked the existing issues.
Branch name
main
Commit ID
ae501c5
Other environment information
No response
Actual behavior
just upload and parse a pdf
Expected behavior
No response
Steps to reproduce
1. Just follow the quick start steps to launch a local server.
2. Upload a pdf file and parse it, that's it.
Additional information
I found that there is no redis service in docker-compose-base.yml
You could ignore the warning log.
And in the latest code and image, I've already removed the default redis configuration, so the message will not show again.
|
2025-04-01T06:39:04.714298
| 2021-10-01T17:13:00
|
1013583347
|
{
"authors": [
"danberindei",
"ryanemerson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6978",
"repo": "infinispan/infinispan",
"url": "https://github.com/infinispan/infinispan/pull/9568"
}
|
gharchive/pull-request
|
ISPN-13338 SingleFileStore migration to single file is broken
https://issues.redhat.com/browse/ISPN-13338
Handle 12.1 segment files just like 12.0 segment files
Thanks @danberindei
|
2025-04-01T06:39:04.717073
| 2023-07-11T11:50:41
|
1798761116
|
{
"authors": [
"fax4ever",
"tristantarrant"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6979",
"repo": "infinispan/protostream",
"url": "https://github.com/infinispan/protostream/pull/194"
}
|
gharchive/pull-request
|
IPROTO-271 Deprecate ProtoDoc annotation + Remove sample-domain projects
https://issues.redhat.com/browse/IPROTO-271
https://issues.redhat.com/browse/IPROTO-272
About removing the sample-domain projects: with #11118 and #11119 we're going to publish these within the Infinispan code base, lowering coupling and raising cohesion, since these projects are used mainly by Infinispan itself.
Merged, thanks
Thank you Tristan
|
2025-04-01T06:39:04.722378
| 2017-05-26T19:18:21
|
231713318
|
{
"authors": [
"skellock"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6980",
"repo": "infinitered/ignite",
"url": "https://github.com/infinitered/ignite/pull/1067"
}
|
gharchive/pull-request
|
Fix for wrong example files being generated.
Fixes #1066
The failure is caused by a missing dependency tempy. I will fix that next, but it's not part of this PR.
Also self-merging.
|
2025-04-01T06:39:04.739512
| 2022-02-03T09:52:10
|
1122869297
|
{
"authors": [
"lukaknezic",
"pablojimpas",
"srihariash999"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6981",
"repo": "infinum/flutter-charts",
"url": "https://github.com/infinum/flutter-charts/issues/36"
}
|
gharchive/issue
|
Tooltip Support.
It would be great if a tooltips-on-tap feature were added, especially for line charts with bubbles on data points.
Hey,
If you are okay with just showing data at that point, then there is a decoration for that already. You will have to add chartBehaviour and implement onItemClicked. Then you have to add a SelectedItemDecoration with the item you got from onItemClicked. That will show the selected item in the chart.
Soon SelectedItemDecoration will also take a widget child that will be shown on click for more customisation.
Available in 2.0.0+2.
SelectedItemDecoration now takes a widget that you can show on an item when it is clicked
Hi @lukaknezic, since SelectedItemDecoration is deprecated now, can you show an example of implementing the same behavior using a WidgetDecoration
Hey @pablojimpas. Sure, I will add that to readme as well but here you go:
So this is an example where you can also individually choose whether to change the chart item or the background/foreground:
class ChartTest extends StatefulWidget {
ChartTest({Key? key}) : super(key: key);
@override
State<ChartTest> createState() => _ChartTestState();
}
class _ChartTestState extends State<ChartTest> {
int? currentItem;
@override
Widget build(BuildContext context) {
return Chart<void>(
state: ChartState(
data: ChartData.fromList([5, 6, 4, 8, 2, 6].map((e) => ChartItem<void>(e.toDouble())).toList()),
/// 1) Here we change how the item is shown. By default it has 0.5 opacity. If we click on the item, it is set as the current item and gets 1.0 opacity
itemOptions: BarItemOptions(barItemBuilder: (data) {
final isCurrent = data.itemIndex == currentItem;
return BarItem(
color: Colors.red.withOpacity(isCurrent ? 1 : 0.5),
);
}),
/// 2) Add a listener for clicks on the chart items, and store the clicked item index in our state
behaviour: ChartBehaviour(onItemClicked: (item) {
setState(() {
currentItem = item.itemIndex;
});
}),
backgroundDecorations: [
GridDecoration(
gridWidth: 1,
gridColor: Theme.of(context).dividerColor,
),
/// 3) If you want to change foreground/background of the selected item as well, then you should make your own `WidgetDecoration`.
/// This widget decoration paints the background of the clicked item in gray. You can add info windows and similar things here.
WidgetDecoration(widgetDecorationBuilder: (context, chartState, itemWidth, verticalMultiplier) {
if (currentItem == null) {
return const SizedBox();
}
return Stack(
children: [
Positioned(
left: itemWidth * currentItem!,
top: 0,
bottom: 0,
width: itemWidth,
child: Container(
decoration: BoxDecoration(
color: Colors.grey.shade400.withOpacity(0.4),
),
),
),
],
);
}),
],
),
);
}
}
|