id stringlengths 4 10 | text stringlengths 4 2.14M | source stringclasses 2 values | created timestamp[s]date 2001-05-16 21:05:09 2025-01-01 03:38:30 | added stringdate 2025-04-01 04:05:38 2025-04-01 07:14:06 | metadata dict |
|---|---|---|---|---|---|
1414796096 | Different code example
Describe the bug
The code snippet for using the SDK in the README differs from the code snippet on the official NPM page.
To Reproduce
Read the README:
const authenticator = new IamAuthenticator({
// apikey: , // Event notifications service instance APIKey
apikey: "sc7ZkWsNcs6geZ2FCH6tVyC-Lpuwl-uru_9dz0TU_CsO",
});
and look at NPM page: (https://www.npmjs.com/package/@ibm-cloud/event-notifications-node-admin-sdk)
const authenticator = new IamAuthenticator({
// apikey: , // Event notifications service instance APIKey
apikey: "sc7ZkWsNcs6geZ2FCH6tVyC-Lpuwl-uru_9dz0TU_CsO",
url: "https://private.iam.cloud.ibm.com",
});
Expected behavior
Both are identical
Initiator fault: compared the generic code snippet with the snippet for private endpoints; will close as incorrectly opened.
| gharchive/issue | 2022-10-19T11:27:40 | 2025-04-01T04:32:37.872646 | {
"authors": [
"groezing"
],
"repo": "IBM/event-notifications-node-admin-sdk",
"url": "https://github.com/IBM/event-notifications-node-admin-sdk/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1470290677 | SharedKeyHexDigest should be read as Str
Hi,
I have an issue when using a shared key:
the SharedKeyHexDigest seems to be decoded as bin (in *Pong DecodeMsg),
but the PONG message returns it as a string (as the protocol specifies).
After modifying the code to read it as a string,
it works fine.
@vanonox At first glance, it looks like you're right. If you want to submit a PR with your fix, feel free, otherwise we'll see about fixing it on our end.
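To illustrate why the mismatch matters: msgpack encodes str and bin with different type tags on the wire (str8 is 0xd9, bin8 is 0xc4), so a decoder that expects a bin field rejects a field the peer serialized as a string. A minimal hand-rolled sketch of the two encodings (illustrative only, not the fluent-forward-go code):

```python
def encode_str8(s: str) -> bytes:
    """msgpack str8: 0xd9 tag, one length byte, then UTF-8 data."""
    data = s.encode("utf-8")
    return b"\xd9" + bytes([len(data)]) + data

def encode_bin8(data: bytes) -> bytes:
    """msgpack bin8: 0xc4 tag, one length byte, then raw bytes."""
    return b"\xc4" + bytes([len(data)]) + data

digest = "deadbeef"
as_str = encode_str8(digest)
as_bin = encode_bin8(digest.encode())
# Same payload bytes, but different leading type tags on the wire
print(hex(as_str[0]), hex(as_bin[0]))  # -> 0xd9 0xc4
```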
| gharchive/issue | 2022-11-30T22:17:52 | 2025-04-01T04:32:37.874163 | {
"authors": [
"ScarletTanager",
"vanonox"
],
"repo": "IBM/fluent-forward-go",
"url": "https://github.com/IBM/fluent-forward-go/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1459995782 | [BUG] "The application xx is not open anymore"
Describe the bug
Using a rebranded version of the application, when running it as a Notification Center alert, sometimes when the notification is clicked a macOS error appears as below, saying "The application $rebranded_name is not open anymore".
The script continues as normal and action is taken based on what the user clicked, but this error message shows about 1/3 of the time. I can't confirm for sure, but it appears to happen when the user waits more than 5-10 minutes before clicking the notification.
To Reproduce
Steps to reproduce the behavior:
Rebrand IBM Notifier and deploy the package to a machine via JAMF.
Run the rebranded IBM Notifier package with a script deployed from JAMF, calling -type alert
User waits 5-10 minutes, then clicks on the notification.
See error
Expected behavior
Script should continue without showing the error.
Screenshots
Here's my script code:
#!/bin/bash
pending_updates=$(tail -n -1 /var/log/auto-update.log | awk '{ print $8 }')
notifier_path="/Applications/Utilities/Riskified Notifier.app/Contents/MacOS/Riskified Notifier"
BANNER_TITLE="App Updates Available"
BANNER_SUBTITLE="You have ${pending_updates} pending updates, click to install."
BUTTON_LABEL="Update"
# Runs on error, reporting the error w/ line number and exiting.
function error()
{
local parent_lineno="$1"
local message="$2"
local code="${3:-1}"
if [[ -n "$message" ]] ; then
echo "Error on or near line ${parent_lineno}: ${message}; exiting with status ${code}"
else
echo "Error on or near line ${parent_lineno}; exiting with status ${code}"
fi
exit "${code}"
}
# Any error will trigger the above error() function and exit
trap 'error ${LINENO}' ERR
# Get logged in user
loggedInUser=$( echo "show State:/Users/ConsoleUser" | /usr/sbin/scutil |
/usr/bin/awk '/Name :/ && ! /loginwindow/ { print $3 }' )
loggedInUID=$(/usr/bin/id -u "$loggedInUser" 2>/dev/null)
# Check for logged in user
if [[ -z "$loggedInUser" ]]; then
echo "No logged in user. Exiting..." >&2; exit 1
else
echo "User ${loggedInUser} is logged in"
fi
# Check for notifier application
if [ ! -f "$notifier_path" ]; then
echo "Notifier application not present. Exiting..." >&2; exit 1
fi
# Show notification if updates are pending, exit if none
if [[ $pending_updates -gt 0 ]]; then
echo "${pending_updates} updates pending. Sending notification..."
command_result=$(/bin/launchctl asuser "$loggedInUID" "$notifier_path" -type alert -title "$BANNER_TITLE" -subtitle "$BANNER_SUBTITLE" -main_button_label "$BUTTON_LABEL"; echo $?)
else
echo "No updates pending. Exiting..."
exit 0
fi
# Open Auto-Update application if notification is clicked
if [ "$command_result" -eq 0 ] 2>/dev/null; then
echo "Notification clicked. Opening Auto-Update..."
/bin/launchctl asuser "$loggedInUID" open -a "/Applications/Managed Software Center.app"
elif [ "$command_result" -eq 239 ] 2>/dev/null; then
echo "Notification closed. Exiting..."
else
echo "Error ${command_result}"
fi
Desktop (please complete the following information):
OS: macOS 13.0.1
Project version: 2.9.1 Build 96
I'm having the same issue. In my case, it happens if I try to click on the notification immediately after it appears (it can even be reproduced in Xcode). I can reproduce it in 10 out of 10 cases with alerts; I didn't test it with other types.
Best regards,
Evgenii
As we're unable to reproduce the issue, I'm transitioning this to a discussion thread where individuals can share their experiences. Additionally, I've established a new wiki page dedicated to tracking these sporadic issues, which are likely resultant from macOS anomalies or incorrect workflows.
| gharchive/issue | 2022-11-22T14:36:15 | 2025-04-01T04:32:37.880302 | {
"authors": [
"SMartorelli",
"bubolev",
"skoobasteeve"
],
"repo": "IBM/mac-ibm-notifications",
"url": "https://github.com/IBM/mac-ibm-notifications/issues/141",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1389048959 | "nca --version" doesn't work when nca is installed with pip install
It also doesn't work in the docker container.
Should move VERSION.txt under nca directory
PR #342 didn't fix it fully.
Running the docker image with --version now works, but running nca --version after installing with pip doesn't. Seems like a problem with setup.cfg.
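One common way to make a data file like VERSION.txt installable alongside the package with setuptools is package_data in setup.cfg. A sketch under the assumption that the file lives under an nca/ package directory (the project's actual config may differ):

```ini
[options]
include_package_data = True

[options.package_data]
nca = VERSION.txt
```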
| gharchive/issue | 2022-09-28T09:38:45 | 2025-04-01T04:32:37.882167 | {
"authors": [
"zivnevo"
],
"repo": "IBM/network-config-analyzer",
"url": "https://github.com/IBM/network-config-analyzer/issues/339",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1195457036 | Version 0.46.3 is not accepting schema type array
Recently we upgraded the ibm-openapi-validator package from version 0.24.0 to 0.46.3, and the openapi.yaml that passed validation via the lint-openapi command now throws an error wherever a request body schema is of type array.
This seems to be a breaking change.
The error message says: All request bodies must be structured as an object.
Is there a way to get past this issue?
Hello @ShilpaJalaja. With each feature release of the tool, we add new validations to better enforce our API requirements and guidelines. This means that an API that passes validation with one set of rules might not be compliant with an expanded set of rules. I understand that this could break your build, it is the nature of the tool to continually add validations. Our guidelines require that request bodies are structured as an object, hence the addition of this rule.
If this is not a rule you want to enforce, you can turn it off by configuring the tool. Use a .spectral.yaml file and set request-body-object to off. See this documentation for more information.
This is intentional behavior, so I'm going to close the issue, but feel free to respond here if you have questions about configuring the tool.
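For reference, a minimal .spectral.yaml that disables the rule might look like the sketch below; the extends value is an assumption (check the validator's documentation for the exact ruleset name):

```yaml
extends: ibm:oas
rules:
  request-body-object: off
```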
@dpopp07 Thank you for the quick response. I have set the rule on .spectral.yaml file as below:
| gharchive/issue | 2022-04-07T03:00:21 | 2025-04-01T04:32:37.885909 | {
"authors": [
"ShilpaJalaja",
"dpopp07"
],
"repo": "IBM/openapi-validator",
"url": "https://github.com/IBM/openapi-validator/issues/410",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
299865134 | Failed to read csv file
When executing the generated code to read CSV file, it failed.
The code are
# insert pandas DataFrame
import sys
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3

def __iter__(self): return 0

# @hidden_cell
# The following code accesses a file in your IBM Cloud Object Storage. It includes your credentials.
# You might want to remove those credentials before you share your notebook.
client_c0fe4b0610144a049d60e22 = ibm_boto3.client(service_name='s3',
    ibm_api_key_id='',
    ibm_auth_endpoint="https://iam.ng.bluemix.net/oidc/token",
    config=Config(signature_version='oauth'),
    endpoint_url='https://s3-api.us-geo.objectstorage.service.networklayer.com')
body = client_c0fe4b0610144a049d60e22.get_object(Bucket='leee5ac7151a0774a31aae95eada44af3e0',Key='example_facebook_data.csv')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
df_data_2 = pd.read_csv(body)
df_data_2.head()
The error messages are
ParserError Traceback (most recent call last)
in ()
21 if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
22
---> 23 df_data_2 = pd.read_csv(body)
24 df_data_2.head()
25
/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
703 skip_blank_lines=skip_blank_lines)
704
--> 705 return _read(filepath_or_buffer, kwds)
706
707 parser_f.__name__ = name
/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
449
450 try:
--> 451 data = parser.read(nrows)
452 finally:
453 parser.close()
/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
1063 raise ValueError('skipfooter not supported for iteration')
1064
-> 1065 ret = self._engine.read(nrows)
1066
1067 if self.options.get('as_recarray'):
/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/lib/python3.5/site-packages/pandas/io/parsers.py in read(self, nrows)
1826 def read(self, nrows=None):
1827 try:
-> 1828 data = self._reader.read(nrows)
1829 except StopIteration:
1830 if self._first_chunk:
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 1 fields in line 36, saw 2
@lee-zhg Did you find a workaround?
We are seeing issues with encodings, i.e. we had to fix our own CSV file with:
df_data_1 = pd.read_csv(body, encoding='latin-1')
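The fallback idea behind that fix can be sketched in plain Python: try a strict encoding first and fall back to latin-1, which maps every byte and therefore never raises. The sample bytes and column names below are made up for illustration:

```python
import csv
import io

def decode_with_fallback(raw: bytes, encodings=("utf-8", "latin-1")) -> str:
    """Try each encoding in order; latin-1 accepts any byte, so it is a safe last resort."""
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    raise ValueError("none of the candidate encodings worked")

# 0xE9 is 'é' in latin-1 but is not valid as a standalone UTF-8 byte
raw = b"name,city\nRen\xe9,Paris\n"
text = decode_with_fallback(raw)
rows = list(csv.reader(io.StringIO(text)))
print(rows[1])  # -> ['René', 'Paris']
```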
| gharchive/issue | 2018-02-23T22:05:03 | 2025-04-01T04:32:37.899449 | {
"authors": [
"lee-zhg",
"scottdangelo"
],
"repo": "IBM/pixiedust-facebook-analysis",
"url": "https://github.com/IBM/pixiedust-facebook-analysis/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1187563827 | Issue/3976
Fixes #3976
Closing this PR as we don't need so many changes, we just needed to change the lifecycle name from Update to Upgrade in service class.
| gharchive/pull-request | 2022-03-31T06:08:02 | 2025-04-01T04:32:37.922984 | {
"authors": [
"lokanalla",
"settipalli2F51"
],
"repo": "IBM/restconf-driver",
"url": "https://github.com/IBM/restconf-driver/pull/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1590242981 | feat(2023-02-14): Update SDK to use API generated on 2023-02-14 and semantic release fix
semantic release fix
As per https://github.com/semantic-release/semantic-release/releases/tag/v20.0.0
and https://github.com/semantic-release/semantic-release/commit/c7b8e10bd1960969e9ebe6ee2dd6c7375363718a
semantic-release now needs Node v18 as the minimum required version.
Updated the nvm command to nvm install --lts to always install the latest long-term support version.
Distro upgraded from xenial to focal to support Node 18.
:tada: This PR is included in version 0.15.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2023-02-18T07:18:38 | 2025-04-01T04:32:37.926188 | {
"authors": [
"deepaksibm",
"ibm-vpc"
],
"repo": "IBM/vpc-python-sdk",
"url": "https://github.com/IBM/vpc-python-sdk/pull/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1165232941 | 🛑 TICP-Favorites is down
In 6b88102, TICP-Favorites (https://oyjuw-oqaaa-aaaal-qac5q-cai.raw.ic0.app/metrics) was down:
HTTP code: 500
Response time: 543 ms
Resolved: TICP-Favorites is back up in 5a93d8a.
| gharchive/issue | 2022-03-10T13:32:29 | 2025-04-01T04:32:37.939923 | {
"authors": [
"roy4roll"
],
"repo": "IC-Naming/uptime",
"url": "https://github.com/IC-Naming/uptime/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1165233083 | 🛑 TICP-Resolver is down
In 05ae25e, TICP-Resolver (https://okpdp-caaaa-aaaal-qac6q-cai.raw.ic0.app/metrics) was down:
HTTP code: 500
Response time: 498 ms
Resolved: TICP-Resolver is back up in f129319.
| gharchive/issue | 2022-03-10T13:32:37 | 2025-04-01T04:32:37.942500 | {
"authors": [
"roy4roll"
],
"repo": "IC-Naming/uptime",
"url": "https://github.com/IC-Naming/uptime/issues/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
268708333 | Overwriting of MaxFunEvals
When setting MaxFunEvals for fmincon they are overwritten in getMultiStarts with 400*parameters.number.
Fixed on the new_optimizers branch. Needs to be checked and merged...
| gharchive/issue | 2017-10-26T10:30:29 | 2025-04-01T04:32:37.943497 | {
"authors": [
"LoosC",
"paulstapor"
],
"repo": "ICB-DCM/PESTO",
"url": "https://github.com/ICB-DCM/PESTO/issues/112",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1961861592 | adding support for EfficientViT
I'd like to add in support for the EfficientViT models (https://github.com/mit-han-lab/efficientvit)
Is there a standard interface I need to follow?
I'm looking here: https://github.com/IDEA-Research/Grounded-Segment-Anything/tree/main/EfficientSAM
Very welcome! We don't have any strict code format. You can just follow some similar interfaces in EfficientSAM. Thank you very much!
And we will merge your code as soon as possible!
| gharchive/issue | 2023-10-25T17:04:14 | 2025-04-01T04:32:38.048110 | {
"authors": [
"rentainhe",
"skunkwerk"
],
"repo": "IDEA-Research/Grounded-Segment-Anything",
"url": "https://github.com/IDEA-Research/Grounded-Segment-Anything/issues/385",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2382194487 | 🛑 IDR (well 592371) is down
In b2b58f1, IDR (well 592371) (https://idr.openmicroscopy.org/webclient/?show=well-592371) was down:
HTTP code: 0
Response time: 0 ms
Resolved: IDR (well 592371) is back up in cac05c3 after 8 minutes.
| gharchive/issue | 2024-06-30T10:38:57 | 2025-04-01T04:32:38.054421 | {
"authors": [
"snoopycrimecop"
],
"repo": "IDR/upptime",
"url": "https://github.com/IDR/upptime/issues/2979",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1373830335 | How to do additional rendering?
Can three.js render additional elements on top of an existing IFC model without knowing the xyz coordinates? I can render at known xyz coordinates now, but the server doesn't know the target element's coordinates.
@Fengjing95, can you elaborate more on your problem?
| gharchive/issue | 2022-09-15T02:49:13 | 2025-04-01T04:32:38.062362 | {
"authors": [
"Fengjing95",
"aka-blackboots"
],
"repo": "IFCjs/web-ifc-three",
"url": "https://github.com/IFCjs/web-ifc-three/issues/124",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Purchase history
Summary
A purchase history menu for the user, listing every product the customer has bought.
Details
This menu will contain:
Product name
Purchase date
Price
Shipping date
Time estimate: 10 hours.
| gharchive/issue | 2018-01-24T23:34:00 | 2025-04-01T04:32:38.063998 | {
"authors": [
"GabrielDSti"
],
"repo": "IFGO/loja-exemplo",
"url": "https://github.com/IFGO/loja-exemplo/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1129037094 | example codesystem needs to be clearly identified
http://profiles.ihe.net/ITI/mCSD/CodeSystem/mcsd-example-hierarchy
I think the current trend is to have these marked as example codeSystems.
This has been updated and the QA is now happy. The change involved making the URL use example.org for the CodeSystem.
https://github.com/IHE/ITI.mCSD/blob/main/input/fsh/example-mcsd.fsh#L36
| gharchive/issue | 2022-02-09T21:26:03 | 2025-04-01T04:32:38.104513 | {
"authors": [
"JohnMoehrke",
"lukeaduncan"
],
"repo": "IHE/ITI.mCSD",
"url": "https://github.com/IHE/ITI.mCSD/issues/22",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2617703705 | Add benchmark and estimators tests
Adds test cases for benchmark (smoke tests checking exit code 0) and for whitebox estimators through estimate_uncertainty.
Github Action gives the following warning:
=============================== warnings summary ===============================
test/test_estimators.py::test_lexical_similarity_bleu
/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/nltk/translate/bleu_score.py:577: UserWarning:
The hypothesis contains 0 counts of 2-gram overlaps.
Therefore the BLEU score evaluates to 0, independently of
how many N-gram overlaps of lower order it contains.
Consider using lower n-gram order or use SmoothingFunction()
warnings.warn(_msg)
| gharchive/pull-request | 2024-10-28T08:41:00 | 2025-04-01T04:32:38.159893 | {
"authors": [
"SpeedOfMagic",
"rvashurin"
],
"repo": "IINemo/lm-polygraph",
"url": "https://github.com/IINemo/lm-polygraph/pull/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2000288867 | Update integration-tests-docker.yml
Description
Updating Integration Test GitHub Actions .yml for Docker to work with the new test cases for new functionalities in AI Verify
Motivation and Context
To update the Integration Test GitHub Actions to:
Update GitHub Actions for New Test Cases for AI Verify
Fix Some Minor Issues in GitHub Actions
Type of Change
feat: A new feature
fix: A bug fix
How to Test
[Provide clear instructions on how to test and verify the changes introduced by this pull request, including any specific unit tests you have created to demonstrate your changes.]
Checklist
Please check all the boxes that apply to this pull request using "x":
[ ] I have tested the changes locally and verified that they work as expected.
[ ] I have added or updated the necessary documentation (README, API docs, etc.).
[ ] I have added appropriate unit tests or functional tests for the changes made.
[ ] I have followed the project's coding conventions and style guidelines.
[ ] I have rebased my branch onto the latest commit of the main branch.
[ ] I have squashed or reorganized my commits into logical units.
[ ] I have added any necessary dependencies or packages to the project's build configuration.
[ ] I have performed a self-review of my own code.
[ ] I have read, understood and agree to the Developer Certificate of Origin below, which this project utilises.
Screenshots (if applicable)
[If the changes involve visual modifications, include screenshots or GIFs that demonstrate the changes.]
Additional Notes
[Add any additional information or context that might be relevant to reviewers.]
Developer Certificate of Origin
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
Closing this PR as it is outdated already.
| gharchive/pull-request | 2023-11-18T07:22:40 | 2025-04-01T04:32:38.189469 | {
"authors": [
"imda-benedictlee"
],
"repo": "IMDA-BTG/aiverify",
"url": "https://github.com/IMDA-BTG/aiverify/pull/208",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
357829826 | online bids validator stalling
Hi all, ran the bids validator last week without issues. Today it appears to be failing on both servers that have access to the datasets I'm working with. Interestingly, one of the newer builds (e.g. https://1456-37161308-gh.circle-artifacts.com/0/root/web_version/index.html) that I spotted in an unrelated question on neurostars works just fine (see: https://neurostars.org/t/surprising-error-with-bids-validator/2347/2).
The developer console error is beyond my ken:
Here's the one in Chrome:
Uncaught TypeError: Cannot set property 'files' of undefined
at Object.format (app.min.js:1)
at app.min.js:1
at app.min.js:1
at u (app.min.js:1)
at app.min.js:1
at m.run (app.min.js:1)
at h (app.min.js:1)
And in firefox:
TypeError: u[f.code] is undefined[Learn More] app.min.js:1:148734
@olgn could you look into this?
@chrisfilo sure thing
Hey @olgn, the validator at that address fixes the issue. And now it looks like it's been pushed to the main validator address as that works now as well. Cheers for the fix!
| gharchive/issue | 2018-09-06T21:13:09 | 2025-04-01T04:32:38.205372 | {
"authors": [
"chrisfilo",
"ddwagner",
"olgn"
],
"repo": "INCF/bids-validator",
"url": "https://github.com/INCF/bids-validator/issues/544",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1846054943 | 🛑 neologismen.ivdnt.org is down
In 76c1a9f, neologismen.ivdnt.org (https://neologismen.ivdnt.org) was down:
HTTP code: 502
Response time: 743 ms
Resolved: neologismen.ivdnt.org is back up in 268c778.
| gharchive/issue | 2023-08-11T00:40:04 | 2025-04-01T04:32:38.217000 | {
"authors": [
"rvanvliet"
],
"repo": "INL/ivdnt-statusoverzicht",
"url": "https://github.com/INL/ivdnt-statusoverzicht/issues/1612",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1131694659 | 🛑 evenementen.ivdnt.org is down
In e23f0b2, evenementen.ivdnt.org (https://evenementen.ivdnt.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: evenementen.ivdnt.org is back up in e86268c.
| gharchive/issue | 2022-02-11T04:32:20 | 2025-04-01T04:32:38.220388 | {
"authors": [
"rvanvliet"
],
"repo": "INL/ivdnt-statusoverzicht",
"url": "https://github.com/INL/ivdnt-statusoverzicht/issues/464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1042361816 | Need information regarding configuration required
The documentation here mentions that we are supposed to provide authPageUrl, callbackPageUrl and queryStringParams. What is the purpose of these? Can you please elaborate in detail?
We are supposed to add Identity Provider ids (e.g. aruba, infocert, etc.). How do we obtain these ids? Are they unique for everyone who wants to integrate SPID in their app, or do we need to contact these Identity Providers in order to obtain them?
Hi Nikhil,
this SDK works with a SPID web application which must already exist on your service provider and must be federated with the various Identity Providers.
authPageUrl: it is the SPID login URL of the service provider
queryStringParams: these are the parameters needed for authPageUrl (e.g., id of the Identity Providers)
callbackPageUrl: it is the redirect URL called after finishing the login process
Identity Providers ids are defined by the service provider web application, and they must be the same.
Hi Daniele, could you give me a slightly more detailed example?
Where can I get these 3 links?
static let authPageUrl = ""
static let callbackPageUrl = ""
static let queryStringParams = ""
And for the identity providers, is this code correct?
struct IdentityProvider {
static let aruba = "https://loginspid.aruba.it"
static let etna = "https://id.eht.eu"
static let infocamere = "https://loginspid.infocamere.it"
static let infocert = "https://identity.infocert.it"
static let intesi = "https://idp.intesigroup.com"
static let lepida = "https://id.lepida.it/idp/shibboleth"
static let namirial = "https://idp.namirialtsp.com/idp"
static let poste = "https://posteid.poste.it"
static let sielte = "https://identity.sieltecloud.it"
static let spiditalia = "https://spid.register.it"
static let teamsystem = "https://spid.teamsystem.com/idp"
static let tim = "https://login.id.tim.it/affwebservices/public/saml2sso"
}
| gharchive/issue | 2021-11-02T13:37:36 | 2025-04-01T04:32:38.232541 | {
"authors": [
"BhosaleNikhil",
"dbattinelli",
"hank93939"
],
"repo": "INPS-it/SPIDlibraryIOS",
"url": "https://github.com/INPS-it/SPIDlibraryIOS/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2476384589 | Add initial example for service to service communication
Adds an example for service to service communication.
So for some reason the event doesn't seem to be reaching the event callback, so I was wondering if you had any insight on why. I also still need to add the schemas.
Schemas are added.
So just to explain a little more. I initially didn't want to use events, but since the first service just sends the message and then returns, I needed to use a handler to make sure that the response is only sent once the second service has replied. Thus, I decided to just emit an event back to the client, and have the client print that out for the purposes of the test.
@gecage952 This is the correct approach based on the current API, as the mechanisms of pub/sub seem to compel a nonblocking API. It would be nice to have a blocking API (where Service 1 doesn't have to use the event handler, but can just wait on Service 2's response before sending its own response) but I'm not sure how we would implement it in the SDK. One possible solution could be to expose a limited view of the ExternalRequest object; maybe this API could use Python's threading.Event logic. For example, the create_external_request() interface could look like:
def intersect_sdk_call_service_nonblocking(
self,
request: IntersectDirectMessageParams,
response_handler: INTERSECT_SERVICE_RESPONSE_CALLBACK_TYPE | None = None,
) -> list[UUID]: # this is the current interface
...
class BlockingCallbackObject:
    set_flag: threading.Event  # this should only be mutated by the SDK but the Capability should listen to it
    response: Any  # the actual message response - you shouldn't check the value for SDK control flow logic, though

def intersect_sdk_call_service_blocking(self, request: IntersectDirectMessageParams) -> list[BlockingCallbackObject]:  # newer API, in most cases this will just be a list with a single threading event in it
    ...
Then you could use it in code like this:
@intersect_message
def my_blocking_function(self, param: str) -> str:
msg_to_send = IntersectDirectMessageParams(
destination='example-organization.example-facility.example-system.example-subsystem.service-two',
operation='ServiceTwo.test_service',
payload=text,
)
# Send intersect message to another service
callback_obj = self.intersect_sdk_call_service_blocking(msg_to_send, self.service_2_handler)[0]
while not callback_obj.set_flag.is_set():
callback_obj.set_flag.wait(10.0) # you can wait long amounts, and using threading.Event will immediately suspend this wait once the flag is set
# process response according to domain logic, and return it
return response
You may want to add some level of a maximum timeout regarding a response from the service, this is just an example.
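The threading.Event pattern sketched above can be demonstrated standalone; the class and function names here are hypothetical stand-ins for the proposed SDK API:

```python
import threading

class BlockingCallbackObject:
    """Minimal stand-in for the proposed object: a flag plus the response payload."""
    def __init__(self):
        self.set_flag = threading.Event()  # would be set by the SDK when the reply arrives
        self.response = None               # the actual message response

def deliver_response(callback_obj, payload):
    """Simulates the SDK delivering Service 2's reply on another thread."""
    callback_obj.response = payload
    callback_obj.set_flag.set()

callback_obj = BlockingCallbackObject()
t = threading.Thread(target=deliver_response, args=(callback_obj, "pong"))
t.start()

# Block until the flag is set; wait() returns immediately once set() is called
while not callback_obj.set_flag.is_set():
    callback_obj.set_flag.wait(10.0)
t.join()
print(callback_obj.response)  # -> pong
```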
@MichaelBrim @marshallmcdonnell pinging you both because there's some discussion about allowing both a blocking and a nonblocking API
I think it is a great suggestion and honestly, what I thought initially for the service-to-service communication being.
I'm perfectly fine with adding a ticket for the blocking API for service-to-service.
Not top priority but think it is good to capture that + useful eventually to others
https://github.com/INTERSECT-SDK/python-sdk/issues/15
| gharchive/pull-request | 2024-08-20T19:26:30 | 2025-04-01T04:32:38.238211 | {
"authors": [
"Lance-Drane",
"gecage952",
"marshallmcdonnell"
],
"repo": "INTERSECT-SDK/python-sdk",
"url": "https://github.com/INTERSECT-SDK/python-sdk/pull/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1289571815 | 🛑 Client Area is down
In 226fef9, Client Area (https://clientarea.io.gt) was down:
HTTP code: 403
Response time: 222 ms
Resolved: Client Area is back up in 8c6049c.
| gharchive/issue | 2022-06-30T04:34:37 | 2025-04-01T04:32:38.240782 | {
"authors": [
"aalonzolu"
],
"repo": "IOGT/upptime",
"url": "https://github.com/IOGT/upptime/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1924510704 | Change the license
When we have the green light from Bristol we need to change the license everywhere it is mentioned:
[ ] LICENSE file
[ ] LICENSE header on all files
[ ] python setup.py files
[ ] conda packages
The CONTRIBUTORS file also needs updating to reflect the current situation.
[ ] CONTRIBUTORS file
Done in #45
| gharchive/issue | 2023-10-03T16:27:28 | 2025-04-01T04:32:38.338833 | {
"authors": [
"jbarnoud"
],
"repo": "IRL2/nanover-protocol",
"url": "https://github.com/IRL2/nanover-protocol/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2255183299 | Fix/225 fix partner export
The download links for exporting are now working. I have changed the remaining "localhost:8000" to the API endpoint, fixed a validator in meeting, and removed the "organizacion" link.
The export for volunteer documentation in admin > voluntarios > solicitudes is not working.
Perhaps you tried with a populated entity that does not contain real files. Try with a volunteer created from scratch.
| gharchive/pull-request | 2024-04-21T17:23:46 | 2025-04-01T04:32:38.350951 | {
"authors": [
"FelixoGudiel",
"marnunrey2"
],
"repo": "ISPP-G5/NexONG_Frontend",
"url": "https://github.com/ISPP-G5/NexONG_Frontend/pull/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1816847768 | implement content deletion
Description of changes
deletion of flashcards and media on the content page; for invalid content it sits directly beside the content
How has this been tested?
Please describe the test strategy you followed.
[ ] automated unit test
[ ] automated integration test
[ ] automated acceptance test
[ ] manual, exploratory test
In case of a manual test, please document the test well, including a set of user instructions and prerequisites, each including an action, its result, and where appropriate a screenshot.
Checklist before requesting a review
[ ] My code follows the coding guidelines of this project
[ ] I have performed a self-review of my code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] My code fulfills all acceptance criteria
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing unit tests pass locally with my changes
[ ] I have made corresponding changes to the documentation
[ ] I have added explanation of architectural decision and rationales to wiki/adr
[ ] I have updated the changes in the ticket description
Checklist for reviewer
[ ] The code works and does not throw errors
[ ] The code is easy to understand and there are no confusing parts
[ ] The code follows the coding guidelines of this project
[ ] The code change accomplishes what it is supposed to do
[ ] I cannot think of any use case in which the code does not behave as intended
[ ] The added and existing tests reasonably cover the code change
[ ] I cannot think of any test cases, input or edge cases that should be tested in addition
[ ] Description of the change is included in the documentation
Works as it should. Once the "conditional hook" error is gone, it can be merged.
| gharchive/pull-request | 2023-07-22T18:00:13 | 2025-04-01T04:32:38.360161 | {
"authors": [
"SkyDiverStar",
"v-morlock"
],
"repo": "IT-REX-Platform/gits-frontend",
"url": "https://github.com/IT-REX-Platform/gits-frontend/pull/96",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231973244 | Authorization on the site does not work
When attempting to log in using any of the services, an error is thrown
Possible duplicate to #148
Fixed.
| gharchive/issue | 2017-05-29T09:34:37 | 2025-04-01T04:32:38.361723 | {
"authors": [
"Photon79",
"Vitallium",
"icewind"
],
"repo": "IT61/it61-rails",
"url": "https://github.com/IT61/it61-rails/issues/171",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2742189146 | Markdown syntax
Add some markdown syntax to cheeps
Good to include tests in this Pull. Nice
| gharchive/pull-request | 2024-12-16T11:59:14 | 2025-04-01T04:32:38.400168 | {
"authors": [
"ItsLukV",
"yeeerdley-itu"
],
"repo": "ITU-BDSA2024-GROUP29/Chirp",
"url": "https://github.com/ITU-BDSA2024-GROUP29/Chirp/pull/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2579718437 | [ITensors] [BUG] random_mps(sites, ψ) with quantum number conservation errors
Description of bug
I'm trying to make a random MPS with quantum number conservation; random_mps(sites, ψ) results in an error buried pretty far down.
Minimal code demonstrating the bug or unexpected behavior
Minimal runnable code
using ITensors
using Pkg
Pkg.status("ITensors")
Nsites = 2
sites = siteinds("Electron", Nsites; conserve_qns = true)
ψ0 = MPS(sites,["UpDn", "Emp"])
random_mps(sites, ψ0)
Expected output or behavior
I expect a random MPS
Actual output or behavior
Output of minimal runnable code
[9136182c] ITensors v0.6.19
ERROR: LoadError: MethodError: no method matching state(::Index{Vector{Pair{QN, Int64}}}, ::ITensor)
Closest candidates are:
state(::Index, !Matched::AbstractString; kwargs...)
@ ITensors ~/.julia/packages/ITensors/FpnkY/src/lib/SiteTypes/src/sitetype.jl:595
state(::Index, !Matched::Integer)
@ ITensors ~/.julia/packages/ITensors/FpnkY/src/lib/SiteTypes/src/sitetype.jl:636
Stacktrace:
[1] (::ITensors.ITensorMPS.var"#398#400"{Vector{Index{Vector{Pair{QN, Int64}}}}, MPS})(j::Int64)
@ ITensors.ITensorMPS ./essentials.jl:0
[2] iterate
@ ./generator.jl:47 [inlined]
[3] collect
@ ./array.jl:834 [inlined]
[4] MPS(eltype::Type{Float64}, sites::Vector{Index{Vector{Pair{QN, Int64}}}}, states_::MPS)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/FpnkY/src/lib/ITensorMPS/src/mps.jl:421
[5] random_mps(rng::Random.TaskLocalRNG, eltype::Type{Float64}, sites::Vector{Index{Vector{Pair{QN, Int64}}}}, state::MPS; linkdims::Int64)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/FpnkY/src/lib/ITensorMPS/src/mps.jl:311
[6] random_mps(rng::Random.TaskLocalRNG, sites::Vector{Index{Vector{Pair{QN, Int64}}}}, state::MPS; linkdims::Int64)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/FpnkY/src/lib/ITensorMPS/src/mps.jl:292
[7] random_mps(sites::Vector{Index{Vector{Pair{QN, Int64}}}}, state::MPS; linkdims::Int64)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/FpnkY/src/lib/ITensorMPS/src/mps.jl:283
[8] random_mps(sites::Vector{Index{Vector{Pair{QN, Int64}}}}, state::MPS)
@ ITensors.ITensorMPS ~/.julia/packages/ITensors/FpnkY/src/lib/ITensorMPS/src/mps.jl:280
[9] top-level scope
@ ~/bugreport.jl:8
in expression starting at /Users/white/bugreport.jl:8
Version information
Output from versioninfo():
julia> versioninfo()
Julia Version 1.10.5
Commit 6f3fdf7b362 (2024-08-27 14:19 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: macOS (arm64-apple-darwin22.4.0)
CPU: 8 × Apple M3
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-15.0.7 (ORCJIT, apple-m1)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Output from using Pkg; Pkg.status("ITensors"):
julia> using Pkg; Pkg.status("ITensors")
Status `~/.julia/environments/v1.10/Project.toml`
[9136182c] ITensors v0.6.19
I don't think we ever supported passing an MPS as the second argument of random_mps, you should use random_mps(sites, ["UpDn", "Emp"]).
yep, this works. (Could've sworn (1) I tried this, and (2) the MPS is what I recall from the past, but you'd know better than I do.)
In any case thanks for the help, and sorry for the bother.
No worries! Probably we should try to catch that MPS case, if it accidentally works in some cases.
| gharchive/issue | 2024-10-10T19:56:07 | 2025-04-01T04:32:38.409021 | {
"authors": [
"christopherdavidwhite2",
"mtfishman"
],
"repo": "ITensor/ITensors.jl",
"url": "https://github.com/ITensor/ITensors.jl/issues/1540",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1914246678 | [NDTensors] Add SortedSets and TagSets
Description
This adds SortedSets and TagSets submodules to NDTensors.
SortedSets defines a generic sorted set-like data structure, which is more efficient for smaller collections than hash-based sets. Inserting new elements keeps the set sorted (and unique). When combined with SmallVectors from #1202 (using a SmallVector as a data backend for a SortedSet), it should provide very fast data structures for applications like TagSets and QNs. This PR also adds a first draft of a new TagSet type design, which is generic but can make use of SmallVectors and SortedSets to reproduce speed that is similar to the current ITensors.TagSet type but with less code and a simpler, more generic design that will allow users to choose their own tag and storage types.
This is still a draft because I need to test the efficiency of SortedSets (I know there is some part that doesn't fully take advantage of the speed of using a SmallVector but instead uses slower generic code), work out a few more things around the design of the TagSet (and relatedly, QN) type, and add tests and examples.
Codecov Report
All modified lines are covered by tests :white_check_mark:
Comparison is base (f6c575b) 85.41% compared to head (f144c91) 67.22%.
Report is 6 commits behind head on main.
:exclamation: Current head f144c91 differs from pull request most recent head c5392b9. Consider uploading reports for the commit c5392b9 to get more accurate results
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #1204 +/- ##
===========================================
- Coverage 85.41% 67.22% -18.19%
===========================================
Files 88 87 -1
Lines 8426 8388 -38
===========================================
- Hits 7197 5639 -1558
- Misses 1229 2749 +1520
| Files | Coverage Δ | |
| --- | --- | --- |
| src/itensor.jl | 80.96% <100.00%> (-1.30%) | :arrow_down: |
... and 39 files with indirect coverage changes
@emstoudenmire I'm going to merge this so we can start testing it out in the wild. You can look at the new TagSet type (just a prototype for now, not yet used in ITensors.jl to replace the current one) to get an idea for how to design a QN type.
Some set operations on SortedSet (and the associated SmallSet, which is a type alias for a SortedSet wrapping a SmallVector) still need to be optimized, but that will be relatively simple. Basically each one (`union`, `intersect`, `setdiff`, `symdiff`) can take advantage of the data structures being sorted to run in linear time in the length of the sets, but I only implemented that for `union` and `setdiff` so far.
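The linear-time idea behind those merges, sketched here in Python purely to illustrate the algorithm (this is not the Julia implementation), walks both sorted collections once:

```python
def sorted_union(a, b):
    """Union of two sorted, duplicate-free sequences in O(len(a) + len(b))."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i])
            i += 1
        elif a[i] > b[j]:
            out.append(b[j])
            j += 1
        else:
            # equal element appears only once in the union
            out.append(a[i])
            i += 1
            j += 1
    # one of these slices is empty; the other holds the leftover tail
    out.extend(a[i:])
    out.extend(b[j:])
    return out
```

The same two-pointer walk gives `intersect`, `setdiff`, and `symdiff` by changing which branch appends.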
| gharchive/pull-request | 2023-09-26T20:24:58 | 2025-04-01T04:32:38.419527 | {
"authors": [
"codecov-commenter",
"mtfishman"
],
"repo": "ITensor/ITensors.jl",
"url": "https://github.com/ITensor/ITensors.jl/pull/1204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1086456545 | 🛑 Website is down
In 64ab5f7, Website (https://pt.ivao.aero/portal) was down:
HTTP code: 502
Response time: 8475 ms
Resolved: Website is back up in 7db7453.
| gharchive/issue | 2021-12-22T05:10:32 | 2025-04-01T04:32:38.425621 | {
"authors": [
"pt-hq"
],
"repo": "IVAO-Portugal/status-page",
"url": "https://github.com/IVAO-Portugal/status-page/issues/1107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1095984936 | 🛑 Events is down
In 892c608, Events (https://events.pt.ivao.aero) was down:
HTTP code: 502
Response time: 18899 ms
Resolved: Events is back up in 16b35dd.
| gharchive/issue | 2022-01-07T04:57:34 | 2025-04-01T04:32:38.427861 | {
"authors": [
"pt-hq"
],
"repo": "IVAO-Portugal/status-page",
"url": "https://github.com/IVAO-Portugal/status-page/issues/1269",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
464266710 | Provide containerized version of HelloWorld application
Currently HelloWorld can only be run as a non-containerized application.
Todo:
develop Dockerfile
extend travis script with HelloWorld build steps
HelloWorld container image requirements:
Containerized version is to be based on LRC Base image.
has to support the commandline options currently supported by the HelloWorld application.
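A first sketch of such a Dockerfile might look like the following (the base-image name, jar path, and entrypoint are assumptions for illustration, not the actual LRC coordinates):

```dockerfile
# Hypothetical LRC Base image reference - replace with the real one
FROM ivct/lrc-base:latest

# Copy the HelloWorld federate jar produced by the Gradle build
COPY build/libs/HelloWorld-all.jar /opt/helloworld/HelloWorld.jar

WORKDIR /opt/helloworld
# ENTRYPOINT plus an empty CMD passes any `docker run` arguments straight
# through as the application's usual command-line options
ENTRYPOINT ["java", "-jar", "HelloWorld.jar"]
CMD []
```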
completed. Images are published to docker hub
| gharchive/issue | 2019-07-04T13:26:39 | 2025-04-01T04:32:38.429715 | {
"authors": [
"bergtwvd",
"rhzg"
],
"repo": "IVCTool/TS_HelloWorld",
"url": "https://github.com/IVCTool/TS_HelloWorld/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2385531868 | J16-17-18 + Statistics
Team statistics
League statistics
| gharchive/pull-request | 2024-07-02T08:01:07 | 2025-04-01T04:32:38.432692 | {
"authors": [
"Ian-Inizias"
],
"repo": "Ian-Liceranzu/BasketLeague2",
"url": "https://github.com/Ian-Liceranzu/BasketLeague2/pull/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
212438235 | PersistedGrants table constantly grows
I noticed my database size constantly expanding quite fast and found that the PersistedGrants table is the culprit. I was wondering if It should be clearing out expired tokens? Is there any plan to build in a function to clear out old tokens?
Yep, check out this line in the example host.
And just today I had finished implementing my own solution..... Oh well!
| gharchive/issue | 2017-03-07T13:51:46 | 2025-04-01T04:32:38.466737 | {
"authors": [
"ravetroll",
"scottbrady91"
],
"repo": "IdentityServer/IdentityServer4.EntityFramework",
"url": "https://github.com/IdentityServer/IdentityServer4.EntityFramework/issues/59",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
510685964 | Adding the nuget package dependency
Had to add the nuget package instruction, otherwise it won't work. Not sure down there is the best spot for the instruction though
What issue does this PR address?
A Missing instruction
Does this PR introduce a breaking change?
It's just text, so probably not.
Please check if the PR fulfills these requirements
[ ] The commit follows our guidelines
[ ] Unit Tests for the changes have been added (for bug fixes / features)
Other information:
Adding instruction to enable building the tutorial. Text should be reviewed =]
thanks!
| gharchive/pull-request | 2019-10-22T14:14:13 | 2025-04-01T04:32:38.469783 | {
"authors": [
"d3m4",
"leastprivilege"
],
"repo": "IdentityServer/IdentityServer4",
"url": "https://github.com/IdentityServer/IdentityServer4/pull/3759",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2229174686 | Refactor Unit Tests to Focus on Specific Components
Description
Our current unit tests are overly broad, testing multiple parts of the codebase simultaneously. This makes the tests difficult to maintain and troubleshoot, and it violates the principle of single responsibility in testing. To address this issue, I propose refactoring our unit tests to focus on specific components or functionalities, ensuring clearer test cases and better maintainability.
Proposed Solution:
Break down the existing unit tests into smaller, more focused tests, each targeting a specific component or functionality.
Ensure that each test case has a clear purpose and tests only one aspect of the codebase.
Use descriptive test names that reflect the specific behavior being tested.
Utilize mocking and stubbing techniques to isolate the component under test and its dependencies.
Expected Benefits:
Improved clarity and readability of unit tests.
Easier identification of failures, leading to faster debugging and troubleshooting.
Reduced risk of unintended side effects when making changes to the codebase.
Enhanced maintainability, as each test case will focus on a single responsibility.
Facilitated onboarding for new team members, as they can easily understand and navigate the test suite.
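To make the mocking point concrete, a focused test might look like this (the PaymentService class is invented for illustration, not taken from the codebase):

```python
import unittest
from unittest.mock import Mock

class PaymentService:
    """Invented example component with one injected dependency."""
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.submit(amount)

class ChargeTest(unittest.TestCase):
    # One behaviour per test; the dependency is replaced by a mock,
    # so the test exercises PaymentService and nothing else.
    def test_charge_forwards_amount_to_gateway(self):
        gateway = Mock()
        gateway.submit.return_value = "ok"
        self.assertEqual(PaymentService(gateway).charge(10), "ok")
        gateway.submit.assert_called_once_with(10)

    def test_charge_rejects_non_positive_amount(self):
        with self.assertRaises(ValueError):
            PaymentService(Mock()).charge(0)
```

Each test name states the single behaviour it checks, which is exactly what makes failures easy to localize.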
Closing because this was mostly solved in #264
| gharchive/issue | 2024-04-06T10:09:30 | 2025-04-01T04:32:38.473168 | {
"authors": [
"Ido-Barnea"
],
"repo": "Ido-Barnea/Chess-But-Better",
"url": "https://github.com/Ido-Barnea/Chess-But-Better/issues/282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2159136759 | Incorrect view of Special Characters
Hey ,
Loving your project so far, congrats!
I wanted to let you know about something I noticed. Songs with special characters, like those in Japanese or the artist with the "Ë", show up as "□□□□".
Not a major issue, but I thought you might want to be aware of it in case it's helpful for future updates.
Thanks,
Hi,
Thank you for your interest in the project.
Yes, I see, special characters are not displayed correctly. I'll see what I can do about that, thanks for the feedback :)
I found a font that can handle very low resolutions (11px for the title). This font does this: PixelMPlus.
I merged it with the base font. I don't know if the result is perfect, but from what I've tested, it seems to work.
I'll set up a version where you can choose this extended font or the original one.
For the time being, only the title will be supported
Just found this project, and it's a godsend, thank you! The font also works for me too :)
Thank you for your feedback! Glad to see you like the project so much :)
| gharchive/issue | 2024-02-28T14:43:08 | 2025-04-01T04:32:38.573295 | {
"authors": [
"IceBotYT",
"ImFireGod",
"SirWhiwi"
],
"repo": "ImFireGod/SteelSeries-Spotify-Linker",
"url": "https://github.com/ImFireGod/SteelSeries-Spotify-Linker/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
249388062 | What does the gray open access color indicate?
We've begun working with the Unpaywall data available on Zenodo for our Sci-Hub Coverage Study (see https://github.com/greenelab/scihub-manuscript/issues/18).
We noticed 32 DOIs from unpaywall_100k.csv.gz have oa_color_long of gray. See unpaywall-gray.tsv.txt for the Unpaywall DOIs colored gray.
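For reference, pulling those rows out is a few lines of Python (a sketch; the `doi` column name is an assumption about the CSV header):

```python
import csv
import io

def gray_dois(csv_text):
    """Return the DOIs whose oa_color_long column equals 'gray'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["doi"] for row in reader if row.get("oa_color_long") == "gray"]
```

Against the real unpaywall_100k.csv.gz one would read through `gzip.open` instead of `io.StringIO`.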
What does "gray" mean?
It was leftover from a previous coding scheme. It means "closed". Sorry about that!
Heather
| gharchive/issue | 2017-08-10T15:34:50 | 2025-04-01T04:32:38.617970 | {
"authors": [
"dhimmel",
"hpiwowar"
],
"repo": "Impactstory/oadoi-paper1",
"url": "https://github.com/Impactstory/oadoi-paper1/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2709918933 | A rational which is a p-adic integer for all p is an integer.
Sounds easy, but when I say "p-adic integer for all p" I unfortunately actually mean
∀ (v : IsDedekindDomain.HeightOneSpectrum (𝓞 ℚ)),
↑((algebraMap ℚ (FiniteAdeleRing (𝓞 ℚ) ℚ)) x) v ∈ IsDedekindDomain.HeightOneSpectrum.adicCompletionIntegers ℚ v
so that adds a bit of a twist. Probably one should start by proving that for all such v, there's a prime p : Nat such that v=(p), and then show that ↑((algebraMap ℚ (FiniteAdeleRing (𝓞 ℚ) ℚ)) x) v ∈ IsDedekindDomain.HeightOneSpectrum.adicCompletionIntegers ℚ v is an interesting way of saying that p doesn't divide the denominator of x.
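On paper the underlying claim is elementary; writing $x = a/b$ in lowest terms, the chain of equivalences is (a sketch, not the Lean proof):

```latex
x \in \mathbb{Z}_p \text{ for all primes } p
\iff v_p(x) \ge 0 \text{ for all } p
\iff v_p(b) = 0 \text{ for all } p \quad (\text{since } \gcd(a,b) = 1)
\iff b = \pm 1
\iff x \in \mathbb{Z}.
```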
claim
I wonder whether we shouldn't be wasting our time with this and should refactor adeles so that they allow us to supply a Dedekind domain (like for finite adeles), meaning that we can just replace all this nonsense with \Z.
disclaim
Sorry for keeping this claimed - I started with
intro p hp hpdvd
let p' : IsDedekindDomain.HeightOneSpectrum (𝓞 ℚ) := {
asIdeal := Ideal.span {(p : 𝓞 ℚ)}
isPrime := by
rw [Ideal.span_singleton_prime (by simp [hp.ne_zero])]
have := map_prime Rat.ringOfIntegersEquiv.symm (Nat.prime_iff_prime_int.mp hp)
simpa
ne_bot := by simp [hp.ne_zero]
}
but didn't get anywhere and didn't have time to look further
| gharchive/issue | 2024-12-01T21:38:50 | 2025-04-01T04:32:38.620938 | {
"authors": [
"Ruben-VandeVelde",
"kbuzzard"
],
"repo": "ImperialCollegeLondon/FLT",
"url": "https://github.com/ImperialCollegeLondon/FLT/issues/254",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2547602884 | netcomp not working properly
Assuming you have copied the config and updated it accordingly. @dalonsoa it'd be amazing if you could take a look please.
python -m venv ./sa_env
./sa_env/Scripts/activate.bat
pip install swmmanywhere
python -m swmmanywhere --config_path=minimum_viable_template.yml
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\bdobson\Downloads\temp\sa_env\Lib\site-packages\swmmanywhere\__main__.py", line 8, in <module>
from swmmanywhere import swmmanywhere
File "C:\Users\bdobson\Downloads\temp\sa_env\Lib\site-packages\swmmanywhere\swmmanywhere.py", line 22, in <module>
from swmmanywhere.metric_utilities import iterate_metrics, validate_metric_list
File "C:\Users\bdobson\Downloads\temp\sa_env\Lib\site-packages\swmmanywhere\metric_utilities.py", line 21, in <module>
import netcomp
ModuleNotFoundError: No module named 'netcomp'
It seems that netcomp was included in the source application but not in the wheel, so when installed from PyPI it is not there. Let's see what I need to do to make that happen. This will require another release, I'm afraid, and marking the previous one as wrong, so no one installs it.
OK great thanks - how do I mark as wrong?
closed by #293
@barneydobson make sure to yank the release. Can be done by the manager of the package on https://pypi.org/ which I think is @dalonsoa?
I think @dalonsoa did 👍
@dalonsoa new bug with the distribution:
Run if [[ "false" != 'true' ]]; then
/tmp/baipp/dist/swmmanywhere-0.1.2-py3-none-any.whl: W009: Wheel contains multiple toplevel library entries:
netcomp/
swmmanywhere/
Error: Process completed with exit code 1.
You need to move the netcomp folder under swmmanywhere as a subpackage.
I've just seen it, but I don't understand why that is a problem. The wheel works and it is not uncommon to distribute multiple packages together. If we move netcomp to within swmmanywhere, we need to change all of the import statements for netcomp.
looking at examples - could we move both into src?
That's what I was starting to think, to be honest.
I second the src-layout (I use it for all my projects). You have to make a couple of changes to pyproject.toml for hatchling, pytest, and coverage.
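For hatchling the src-layout changes are roughly the following (paths and options sketched from memory, so double-check against the hatchling docs):

```toml
[tool.hatch.build.targets.wheel]
packages = ["src/swmmanywhere", "src/netcomp"]

[tool.pytest.ini_options]
# run tests against the installed package rather than the source tree
addopts = "--import-mode=importlib"

[tool.coverage.run]
source = ["swmmanywhere", "netcomp"]
```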
On it...
OK still failing: https://github.com/ImperialCollegeLondon/SWMManywhere/actions/runs/11036163538/job/30654221348
I guess we're not fixing this today - but any ideas from @cheginit or @dalonsoa are welcome!
( I will first try the suggestion here )
| gharchive/issue | 2024-09-25T10:28:09 | 2025-04-01T04:32:38.628679 | {
"authors": [
"barneydobson",
"cheginit",
"dalonsoa"
],
"repo": "ImperialCollegeLondon/SWMManywhere",
"url": "https://github.com/ImperialCollegeLondon/SWMManywhere/issues/290",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
345180373 | Code change for issue #16
Updated README with the Google group
Updated CONTRIBUTING with the Eclipse Java formatter and guidelines for creating a pull request
Closing this pull request, as it is covered in pull request 19.
| gharchive/pull-request | 2018-07-27T10:47:22 | 2025-04-01T04:32:38.634169 | {
"authors": [
"JeetenJaiswal"
],
"repo": "Impetus/fabric-jdbc-connector",
"url": "https://github.com/Impetus/fabric-jdbc-connector/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2318430018 | Make AiService global
Closes #7.
I made the AiSevice live inside
JabRefGUI. It lives alongside the PreferenceService, and other classes that are often referenced in constructors.
Yes, I can't find the appropriate word for "classes that are often used in constructors", so I hope the reviewers will understand what I mean by looking at the changes (yes, I understand that's not the right way to behave).
Mandatory checks
[ ] Change in CHANGELOG.md described in a way that is understandable for the average user (if applicable)
[ ] Tests created for changes (if applicable)
[ ] Manually tested changed features in running JabRef (always required)
[ ] Screenshots added in PR description (for UI changes)
[ ] Checked developer's documentation: Is the information available and up to date? If not, I outlined it in this pull request.
[ ] Checked documentation: Is the information available and up to date? If not, I created an issue at https://github.com/JabRef/user-documentation/issues or, even better, I submitted a pull request to the documentation repository.
Okay, I checked the files again and came to conclusion:
Where you used a private final field and initialized it in constructor, I did the same for AiService.
Where you used an @Inject private final, I used it.
@InAnYan 🤣🤣 - I think you found out which @Contracts are offered by JabRef (these can be injected; the others cannot) - See https://github.com/koppor/jabref/pull/687#issuecomment-2132827543 for some more links.
I created an internal issue (https://github.com/JabRef/jabref-issue-melting-pot/issues/440) to label all "contracts" as such. - If you have capacity, you can do it using the experience you gained. IMHO, there are 3 to 5 classes that are contracts; or are there more?
Other thing: delete merged branches. You can do it after merging, have the Refined GitHub browser plugin do it automatically, or use the branch overview now and then to delete them.
| gharchive/pull-request | 2024-05-27T07:21:16 | 2025-04-01T04:32:38.661828 | {
"authors": [
"InAnYan",
"koppor"
],
"repo": "InAnYan/jabref",
"url": "https://github.com/InAnYan/jabref/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
205642276 | CPU Utilization
Hello,
What are the recommended specs for a host to run this script? I have deployed this to an AWS EC2 instance with 4 vCPUs and 16GB RAM, but the CPU utilization keeps randomly spiking and the instance crashes. Are there any recommendations for the specs of the host?
Thanks,
Ryan
Hi Ryan,
The instance you used should be more than sufficient for running the script. I would recommend you check whether any other processes are running on the instance that might cause the issues you are facing.
Regards,
Doron
| gharchive/issue | 2017-02-06T16:45:40 | 2025-04-01T04:32:38.682466 | {
"authors": [
"DoronLehmann",
"ryanenshaie"
],
"repo": "Incapsula/logs-downloader",
"url": "https://github.com/Incapsula/logs-downloader/issues/11",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
134536268 | SSLError - Exception
Add an exception handler for connection timeout.
Hi,
The exception is handled as expected - an informative log is printed describing the issue, in addition to the trace.
Thanks,
Doron
Well, the problem is not the log. The problem is that the script stops after this error and has to be restarted.
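A hedged sketch of the kind of retry guard that would keep the script alive (names are illustrative; the real handler would catch requests.exceptions.SSLError and connection timeouts rather than bare OSError):

```python
import time

def keep_downloading(fetch_once, retry_delay=0.0, max_retries=5):
    """Call `fetch_once` until it succeeds, retrying on transient errors.

    `fetch_once` stands in for one iteration of the download loop.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return fetch_once()
        except OSError:
            # log and retry instead of letting the exception kill the script
            if attempt == max_retries:
                raise
            time.sleep(retry_delay)
```

With something like this wrapped around the main loop, an SSLError would trigger a retry instead of requiring a manual restart.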
| gharchive/issue | 2016-02-18T10:15:57 | 2025-04-01T04:32:38.684242 | {
"authors": [
"AntonioKL",
"DoronLehmann"
],
"repo": "Incapsula/logs-downloader",
"url": "https://github.com/Incapsula/logs-downloader/issues/4",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2628596029 | Evaluate using additional optimizations like LTO and PGO
Hi!
I just read the post on Reddit about IncognitoBin. Since one of the key features of the project is "Scalable and fast", I decided to propose several improvement ideas for IncognitoBin. I created the Issue just because Discussions are disabled for the repo now.
Link-Time Optimization (LTO)
I noticed that in the Cargo.toml file Link-Time Optimization (LTO) for the project is not enabled. I suggest switching it on since it will reduce the binary size (always a good thing to have) and will likely improve the application's performance a bit.
I suggest enabling LTO only for the Release builds so as not to sacrifice the developers' experience while working on the project since LTO consumes an additional amount of time to finish the compilation routine. If you think that a regular Release build should not be affected by such a change as well, then I suggest adding an additional dist or release-lto profile where additionally to regular release optimizations LTO will also be added. Such a change simplifies life for maintainers and others interested in the project persons who want to build the most performant version of the application. Using ThinLTO should also help to reduce the build-time overhead with LTO. E.g., check cargo-outdated Release profile.
Basically, it can be enabled with the following lines to both Cargo.toml:
[profile.release]
lto = true
I have made quick tests (Fedora 41) by adding lto = true to the Release profile. The binary size reduction is the following:
Server: from 13 Mib to 8.6 Mib
Worker: from 5.2 Mib to 3.9 Mib
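If you'd rather keep the default release profile untouched, the opt-in profile mentioned earlier can be declared like this (the profile name is just a suggestion):

```toml
# Build with `cargo build --profile release-lto`
[profile.release-lto]
inherits = "release"
lto = "thin"        # ThinLTO keeps link times reasonable
codegen-units = 1
```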
Profile-Guided Optimization (PGO) and Post-Link Optimization (PLO)
According to my benchmarks, PGO measurably helps to optimize various applications (its CPU efficiency more precisely) like regular backends (like IncognitoBin's Server and Worker projects) and databases since ScyllaDB and Redis are used (for both of them PGO benchmarks are available in the "awesome-pgo" repo). That's why I think applying PGO (and a similar PLO technique via LLVM BOLT) could help to optimize IncognitoBin further. You may be interested in applying PGO and PLO to 3rd party projects used by IncognitoBin on the server side to improve response time, reduce CPU usage, and prepare for higher workloads (if you plan to do so, ofc).
Thank you.
Hi!
Thank you so much for your detailed feedback and for bringing this to my attention! I’ve just enabled Discussions for the repository, so future ideas and suggestions can be shared there as well.
I really appreciate your suggestion on enabling Link-Time Optimization (LTO). I’ve just made updates, and the next release will include LTO to enhance performance. I’m also excited to share that enabling LTO has already reduced the binary sizes significantly! Here’s a summary of the improvements:
Windows
Server binary: from 10,272 KB to 8,506 KB
Worker binary: from 4,449 KB to 3,968 KB
Ubuntu
Server binary: from 12,919 KB to 9,002 KB
Worker binary: from 5,414 KB to 4,034 KB
As for Profile-Guided Optimization (PGO) and Post-Link Optimization (PLO), I agree they sound promising for further optimization. I’ll need to educate myself more on these techniques before implementing them, but they’re definitely on my radar, especially given the potential to enhance CPU efficiency and scalability.
Thanks again for your suggestions and for helping to make IncognitoBin even better!
#13 Quick Login Feature & Optimize Binary Size for Worker and Server
Hi zamazan4ik,
I wanted to let you know that I’ve implemented Link-Time Optimization (LTO) as you recommended. The changes have already been pushed, and I'm seeing a noticeable improvement in performance. Thank you again for the suggestion and for your valuable guidance
| gharchive/issue | 2024-11-01T08:29:59 | 2025-04-01T04:32:38.694239 | {
"authors": [
"X-SP33D",
"zamazan4ik"
],
"repo": "IncognitoBin/IncognitoBin",
"url": "https://github.com/IncognitoBin/IncognitoBin/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1642773585 | Feature/update docs 0.1.0
Description
Update docs for 0.1.0 release.
@docNord, let's merge this and we are ready for 0.1.0.
| gharchive/pull-request | 2023-03-27T20:34:10 | 2025-04-01T04:32:38.743891 | {
"authors": [
"mgcth"
],
"repo": "Ingenjorsarbete-For-Klimatet/ifk-smhi",
"url": "https://github.com/Ingenjorsarbete-For-Klimatet/ifk-smhi/pull/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
The types are not exported
The types are not exported
I don't quite understand what you mean. Could you be more specific?
The full set of types is exported from the types folder, and the split-package types are in the corresponding es and lib folders; please take a look.
I'll follow up right away @dmyz
@NelsonYong I'm wrapping a pagination hook around useRequest; the built-in pagination hook doesn't fit my case
@dmyz could you paste your code so I can take a look?
@dmyz is it that my pagination hook doesn't fit your scenario?
@NelsonYong how do I assign a value to defaultParams?
@dmyz for defaultParams, you just pass it in
@dmyz you can use destructuring and put the default values first
Or have I misunderstood something? 😂
You should pass the value you want in the service function:
()=>service({...pagination,current:pagination.current ?? defaultcurrnt})
In fact you already set current to 1 at the start; that is the default value, so you can just pass it in directly.
So I don't quite see what the problem with defaultParams is @dmyz
@NelsonYong OK, I'll take another look at how ahooks' useRequest is used.
I previously used vue-request, and the way parameters are passed is different
The usage should be about the same. Do you mean that you want to be able to pass other values in from outside while also using the table's own parameters, which is why you want default parameters? @dmyz
One suggestion: add a second parameter params to the hook, passed in from outside, and combine it with the service when needed.
OK. I have one more question.
For table pagination, the service returns { total: 0, list: [] }
while the hook returns list: []
Now I'm facing a problem: the service's return value doesn't match the structure the hook expects.
Two options; the recommended one is a project convention, where the three-level code/msg/data structure is agreed on with the backend. The second: wrap this service function in an async function, await the value, and then resolve the structure the hook expects. @dmyz
Generally speaking this has little to do with the hook itself. The hook doesn't insist on reading a data field; whatever the user returns is what it gets. The user needs to handle this themselves, e.g. by setting up an axios interceptor to return the corresponding structure.
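The wrapper approach from the second option is language-agnostic; here is a sketch in Python (the thread's actual code is TypeScript/Vue), where `raw_service` is a hypothetical backend call returning the common code/msg/data envelope:

```python
import asyncio

async def raw_service(current, page_size):
    # Hypothetical backend response using the code/msg/data convention.
    return {"code": 0, "msg": "ok",
            "data": {"total": 2, "list": [{"id": 1}, {"id": 2}]}}

async def paged_service(current, page_size):
    # Await the raw response and resolve only the structure the hook expects.
    resp = await raw_service(current, page_size)
    if resp["code"] != 0:
        raise RuntimeError(resp["msg"])
    return resp["data"]  # the {"total": ..., "list": [...]} shape
```

The pagination hook would then be given `paged_service` (or its TypeScript equivalent) instead of the raw call.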
types.ts:

```ts
import { Ref, WatchSource } from 'vue';
import { CachedData } from 'vue-hooks-plus/es/useRequest/utils/cache';

export interface Options<TData, TParams extends unknown[]> {
  manual?: boolean;
  onBefore?: (params: TParams) => void;
  onSuccess?: (data: TData, params: TParams) => void;
  onError?: (e: Error, params: TParams) => void;
  onFinally?: (params: TParams, data?: TData, e?: Error) => void;
  defaultParams?: TParams;
  // Dependency-driven refresh
  refreshDeps?: WatchSource[] | unknown;
  refreshDepsAction?: () => void;
  // Loading delay
  loadingDelay?: number | Ref<number>;
  // Format the returned data
  formatResult?: (data?: TData) => unknown;
  // Polling
  pollingInterval?: Ref<number> | number;
  pollingWhenHidden?: boolean;
  // Re-request on window focus
  refreshOnWindowFocus?: Ref<boolean> | boolean;
  focusTimespan?: Ref<number> | number;
  // Debounce
  debounceWait?: number;
  debounceLeading?: Ref<boolean>;
  debounceTrailing?: Ref<boolean>;
  debounceMaxWait?: Ref<number>;
  // Throttle
  throttleWait?: number;
  throttleLeading?: Ref<boolean>;
  throttleTrailing?: Ref<boolean>;
  // Request cache
  cacheKey?: string;
  cacheTime?: number;
  staleTime?: number;
  setCache?: (data: CachedData<TData, TParams>) => void;
  getCache?: (params: TParams) => CachedData<TData, TParams> | undefined;
  // Error retry
  retryCount?: number;
  retryInterval?: number;
  // The request fires only when ready is true
  ready?: Ref<boolean> | boolean;
  // [x: string]: unknown
}

export interface Result<TData, TParams extends unknown[]> {
  loading: Ref<boolean>;
  data?: TData;
  error?: Error;
  params: TParams | [];
  cancel: () => void;
  refresh: () => void;
  refreshAsync: () => Promise<TData>;
  run: (...params: TParams) => void;
  runAsync: (...params: TParams) => Promise<TData>;
  mutate: (data?: TData | ((oldData?: TData) => TData | undefined)) => void;
}

export type Service = <R, Params extends any[]>(...params: Params) => Promise<R>;

export interface Data {
  total: number;
  list: any[];
}

export type Params = [{ current: number; pageSize: number; [key: string]: any }, ...any[]];

export type PaginationService<TData extends Data, TParams extends Params> = (...params: TParams) => Promise<TData>;

export interface PaginationResult<TData extends Data, TParams extends Params> extends Result<TData, TParams> {
  dataSource: any[] | undefined;
  pagination: {
    current: number;
    pageSize: number;
    defaultCurrent: number;
    defaultPageSize: number;
    total: number;
    // totalPage: number;
    onChange: (pageInfo: { current: number; pageSize: number }) => void;
  };
}

export interface PaginationOptions<TData extends Data, TParams extends Params> extends Options<TData, TParams> {
  defaultPageSize?: number;
  defaultCurrent?: number;
}
```

The hook implementation:

```ts
import { Data, Params, PaginationResult, PaginationService, PaginationOptions } from './types';
import useRequest from 'vue-hooks-plus/es/useRequest';

const useHttpPagination = <TData extends Data, TParams extends Params>(
  service: PaginationService<TData, TParams>,
  options: PaginationOptions<TData, TParams> = {},
) => {
  const { defaultCurrent = 1, defaultPageSize = 20, ...rest } = options;

  const requestResult = useRequest<TData, TParams, any>(service, {
    defaultParams: [{ current: defaultCurrent, pageSize: defaultPageSize }],
    refreshDepsAction: () => {
      changeCurrent(1);
    },
    ...rest,
  });

  // const { current = 1, pageSize = defaultPageSize } = requestResult.params.value[0] || {};
  const total = requestResult.data?.value.total || 0;

  const onChange = ({ current, pageSize }: { current: number; pageSize: number }) => {
    let toCurrent = current <= 0 ? 1 : current;
    const toPageSize = pageSize <= 0 ? 1 : pageSize;
    const tempTotalPage = Math.ceil(total / toPageSize);
    if (toCurrent > tempTotalPage) {
      toCurrent = Math.max(1, tempTotalPage);
    }
    const [oldPaginationParams = {}, ...restParams] = requestResult.params.value || [];
    requestResult.run(
      {
        ...oldPaginationParams,
        current: toCurrent,
        pageSize: toPageSize,
      },
      ...restParams,
    );
  };

  /**
   * Pagination
   * @param current the current page
   */
  const changeCurrent = (current: number) => {
    onChange({ current, pageSize: 20 });
  };

  const { current = 1, pageSize = defaultPageSize } = requestResult.params.value[0] || {};

  const result: PaginationResult<TData, TParams> = {
    loading: requestResult.loading,
    data: requestResult.data?.value,
    dataSource: requestResult.data?.value.list,
    error: requestResult.error?.value,
    params: requestResult.params.value,
    cancel: requestResult.cancel,
    refresh: requestResult.refresh,
    refreshAsync: requestResult.refreshAsync,
    run: requestResult.run,
    runAsync: requestResult.runAsync,
    mutate: requestResult.mutate,
    pagination: {
      current,
      pageSize,
      defaultCurrent,
      defaultPageSize,
      total,
      onChange,
    },
  };
  return result;
};

export default useHttpPagination;
```
I'll take a look tomorrow @dmyz
I don't quite understand why you re-wrap useRequest's types yourself
How do I export the types? There's no export
Just export them directly, then import them from that file 😂
If possible, could you give me an online example or a repo, and describe it more specifically? Right now I still can't fully understand what you mean.
I got it working; I took a different approach.
OK. Version 1.4.2 fixes the type-check errors; you can update and try it @dmyz
| gharchive/issue | 2022-11-25T09:14:05 | 2025-04-01T04:32:38.770201 | {
"authors": [
"NelsonYong",
"dmyz"
],
"repo": "InhiblabCore/vue-hooks-plus",
"url": "https://github.com/InhiblabCore/vue-hooks-plus/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1266463132 | As a user, I want an about and contact page for the application.
As a user, I want a contact page that will allow me to leave a message for the developer and an about page that will brief me about the application.
completed
| gharchive/issue | 2022-06-09T17:44:25 | 2025-04-01T04:32:38.778981 | {
"authors": [
"khushpatel2002",
"tanmaysharma2001"
],
"repo": "InnoSWP/BS21-08-CV-Parser",
"url": "https://github.com/InnoSWP/BS21-08-CV-Parser/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2315642936 | mochizuki-vscodeConfigurationAndModuleRenaming : Update VsCode config +rename module folders
It works for me on the front end:
I fixed and improved the configuration; everything works on my machine.
I advise you to remove all your extensions and start fresh. On the first launch of this version, VS Code may not offer you all of the workspace recommendations. This is a known VS Code bug. In that case, type "@recommended" in the extensions search bar, or use Ctrl+Shift+P => Extensions: Show Recommended Extensions.
Then install all the extensions recommended by the workspace.
I notice that the Karma Test Explorer extension causes some interference with the debugger; it may be useful to disable it when it is not in use. I'm not sure this comes from our configuration; it more likely comes from the coexistence of the various extensions.
Pull request to be handled with REBASE
I'm intrigued by the way the configurations are extracted from the workspace.
The directory names are in lower camel case. Wouldn't kebab case be more appropriate?
lowerCamelCase
kebab-case
The directory names are in lower camel case. Wouldn't kebab case be more appropriate?
lowerCamelCase
kebab-case
Yes, you're right. We'll stay with the Angular convention for naming folders.
I can
| gharchive/pull-request | 2024-05-24T15:10:06 | 2025-04-01T04:32:38.790392 | {
"authors": [
"FYHenry",
"atsuhikoMochizuki"
],
"repo": "InoteBackup2/Inote_BrowserInterface",
"url": "https://github.com/InoteBackup2/Inote_BrowserInterface/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
304073883 | Add (more) rewards the harder a boss becomes
[x] +10% experience for each wither spawned
[x] +0.5% chance (up to 40%) to drop a wither skull for each wither spawned
rewards {
# How much more percentage experience will wither drop per wither spawned. The percentage is additive (e.g. 10% experience boost, 7 withers killed = 70% more experience)
D:experience_boost_per_spawned=10.0
# How much chance there's for a wither skull to drop for each killed wither
D:head_drop_per_spawned=0.5
# Maximum chance for wither skull to drop from wither
D:head_drop_maximum=40.0
}
[ ] +5% experience for each dragon killed
[x] +0.5% chance (up to 15%) to get a dragon egg for each ender dragon killed
rewards {
# How much more percentage experience will dragon drop per dragon killed. The percentage is additive (e.g. 5% experience boost, 7 dragons killed = 35% more experience)
D:experience_boost_per_spawned=5.0
# How much chance there's for a dragon egg to drop
D:egg_drop_per_killed=0.5
# Maximum chance for egg to drop from dragon
D:egg_drop_maximum=15.0
}
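As an illustration of how these additive and capped settings combine (using the config semantics described above; the function names are mine, not from the mod):

```python
def additive_bonus(per_unit, count):
    """Additive percentage bonus, e.g. +10% experience per wither spawned."""
    return per_unit * count

def capped_drop_chance(per_unit, count, maximum):
    """Drop chance that grows per spawn/kill but never exceeds the cap."""
    return min(per_unit * count, maximum)

# 7 withers spawned: +70% experience, 3.5% skull chance (cap 40%).
xp_boost = additive_bonus(10.0, 7)
skull_chance = capped_drop_chance(0.5, 7, 40.0)
```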
Changing the experience dropped from Ender Dragon is impossible right now.
| gharchive/issue | 2018-03-10T14:17:09 | 2025-04-01T04:32:38.792814 | {
"authors": [
"Insane-96"
],
"repo": "Insane-96/ProgressiveBosses",
"url": "https://github.com/Insane-96/ProgressiveBosses/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1318279947 | ENH: Improve LabelOverlapMeasuresImageFilter
Add definition of false discovery rate
Separate FalsePositiveError with FPR and FDR
Indicate members of the confusion matrix in code comments
From @Joeycho. Closes #3490.
PR Checklist
[x] No API changes were made (or the changes have been approved)
[x] No major design changes were made (or the changes have been approved)
[ ] Added test (or behavior not changed)
[ ] Updated API documentation (or API not changed)
[ ] Added license to new files (if any)
[ ] Added Python wrapping to new files (if any) as described in ITK Software Guide Section 9.5
[ ] Added ITK examples for all new major features (if any)
@Joeycho could you modify the test (itkLabelOverlapMeasuresImageFilterTest.cxx) to also use the new methods? You can push the code to your fork, I could cherry-pick it from there. Or you could make a new PR to obsolete this one.
@Joeycho could you modify the test (itkLabelOverlapMeasuresImageFilterTest.cxx) to also use the new methods? You can push the code to your fork, I could cherry-pick it from there. Or you could make a new PR to obsolete this one.
https://github.com/InsightSoftwareConsortium/ITK/pull/3499
I have made another mistake, Commit message of merge. Otherwise, the above pull request is what you asked for :)
I cherry-picked it here.
And now I also squashed it into the main commit.
itkPyBufferMemoryLeakTest is flaky, but let's re-run the check.
/azp run ITK.macOS.Python
| gharchive/pull-request | 2022-07-26T13:52:55 | 2025-04-01T04:32:38.835645 | {
"authors": [
"Joeycho",
"dzenanz"
],
"repo": "InsightSoftwareConsortium/ITK",
"url": "https://github.com/InsightSoftwareConsortium/ITK/pull/3498",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2723287579 | ENH: GHA: cancel in progress builds for pull requests
When a PR is updated, cancel currently running builds to restart new one.
PR Checklist
[ ] No API changes were made (or the changes have been approved)
[ ] No major design changes were made (or the changes have been approved)
[ ] Added test (or behavior not changed)
[ ] Updated API documentation (or API not changed)
[ ] Added license to new files (if any)
[ ] Added Python wrapping to new files (if any) as described in ITK Software Guide Section 9.5
[ ] Added ITK examples for all new major features (if any)
Refer to the ITK Software Guide for
further development details if necessary.
Sounds nice! Can we directly copy paste this code to InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction/ for remote modules?
Sounds nice! Can we directly copy paste this code to InsightSoftwareConsortium/ITKRemoteModuleBuildTestPackageAction/ for remote modules?
It should. I copied it from SimpleITK.
I have one question on this code: do I understand correctly that it prevents having two PRs based on the same SHA running in parallel? That is my interpretation of the github.event.pull_request.head.sha group. Is this intentional?
I have one question on this code: do I understand correctly that it prevents having two PRs based on the same SHA running in parallel? That is my interpretation of the github.event.pull_request.head.sha group. Is this intentional?
The intention is to cancel in progress actions when a PR is updated.
This implementation uses "workflow@SHA" as the identifier for the concurrency group. I don't think this is right, as the SHA changes when a PR is updated.
I hastily grabbed this for this "Batch" build workflow in SimpleITK:
https://github.com/SimpleITK/SimpleITK/blob/master/.github/workflows/BatchBuild.yml#L14-L15
This one probably has closer to the right behavior:
https://github.com/SimpleITK/SimpleITK/blob/master/.github/workflows/Build.yml#L18-L19
https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs
I suspect that it should be something closer to:
group: '${{ github.workflow }}@${{ github.head_ref || github.ref }}'
Ok thanks. I used ${{ github.workflow }}@${{ github.head_ref || github.run_id }} in RTKConsortium/RTK#650 but that should not make any difference with ${{ github.workflow }}@${{ github.head_ref || github.ref }} given the following line, cancel-in-progress: ${{ github.event_name == 'pull_request' }}. I'll make a PR to continue the discussion.
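Putting the thread's conclusion together, the workflow-level concurrency block would look roughly like this (values as proposed above; see the GitHub Actions docs for the exact context fields):

```yaml
concurrency:
  # One group per workflow + branch; run_id keeps non-PR runs unique.
  group: '${{ github.workflow }}@${{ github.head_ref || github.run_id }}'
  # Only cancel superseded runs for pull requests.
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```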
| gharchive/pull-request | 2024-12-06T15:14:08 | 2025-04-01T04:32:38.845776 | {
"authors": [
"SimonRit",
"blowekamp"
],
"repo": "InsightSoftwareConsortium/ITK",
"url": "https://github.com/InsightSoftwareConsortium/ITK/pull/5019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
332530071 | Android microphone permission
Hi, the library notes that microphone permission is required for iOS but doesn't mention it for Android. Is it possible to make recording audio optional when the user provides feedback?
Hey @modularity
Apologies for the late response. Recording audio is fully optional, however the permission is required on iOS.
| gharchive/issue | 2018-06-14T19:16:33 | 2025-04-01T04:32:38.851183 | {
"authors": [
"Korazy",
"modularity"
],
"repo": "Instabug/instabug-reactnative",
"url": "https://github.com/Instabug/instabug-reactnative/issues/166",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
294138396 | Memory Leak Using IGListKit?
Through the help of everyone that contributes to this issues section I have been able to properly implement IGListKit for my comments section. I just want to start by saying thanks for that. Maybe this is an error and maybe it isn't, but I am using a view controller to display the cells that render my comments. When I go to my view controller the memory spikes up, which makes a little bit of sense. However, when I leave the controller the memory never goes back down, and when I go to a new controller the memory increases continuously. I'm trying to figure out whether it is my implementation of IGListKit that's flawed or my overall logic in general, because this seems like a memory leak to me. I will include my code below to see if anyone here could possibly help me.
This is my commentsController which implements the appropriate section controllers
import UIKit
import IGListKit
import Firebase
class NewCommentsViewController: UIViewController, UITextFieldDelegate,CommentsSectionDelegate,CommentInputAccessoryViewDelegate {
//array of comments which will be loaded by a service function
var comments = [CommentGrabbed]()
var messagesRef: DatabaseReference?
var bottomConstraint: NSLayoutConstraint?
public let addHeader = "addHeader" as ListDiffable
public var eventKey = ""
//This creates a lazily-initialized variable for the IGListAdapter. The initializer requires three parameters:
//1 updater is an object conforming to IGListUpdatingDelegate, which handles row and section updates. IGListAdapterUpdater is a default implementation that is suitable for your usage.
//2 viewController is a UIViewController that houses the adapter. This view controller is later used for navigating to other view controllers.
//3 workingRangeSize is the size of the working range, which allows you to prepare content for sections just outside of the visible frame.
lazy var adapter: ListAdapter = {
return ListAdapter(updater: ListAdapterUpdater(), viewController: self)
}()
// 1 IGListKit uses IGListCollectionView, which is a subclass of UICollectionView, which patches some functionality and prevents others.
let collectionView: UICollectionView = {
// 2 This starts with a zero-sized rect since the view isn’t created yet. It uses the UICollectionViewFlowLayout just as the ClassicFeedViewController did.
let view = UICollectionView(frame: CGRect.zero, collectionViewLayout: UICollectionViewFlowLayout())
// 3 The background color is set to white
view.backgroundColor = UIColor.white
return view
}()
//will fetch the comments from the database and append them to an array
fileprivate func fetchComments(){
comments.removeAll()
messagesRef = Database.database().reference().child("Comments").child(eventKey)
// print(eventKey)
// print(comments.count)
let query = messagesRef?.queryOrderedByKey()
query?.observe(.value, with: { (snapshot) in
guard let allObjects = snapshot.children.allObjects as? [DataSnapshot] else {
return
}
// print(snapshot)
allObjects.forEach({ (snapshot) in
guard let commentDictionary = snapshot.value as? [String: Any] else{
return
}
guard let uid = commentDictionary["uid"] as? String else{
return
}
UserService.show(forUID: uid, completion: { (user) in
if let user = user {
let commentFetched = CommentGrabbed(user: user, dictionary: commentDictionary)
commentFetched.commentID = snapshot.key
let filteredArr = self.comments.filter { (comment) -> Bool in
return comment.commentID == commentFetched.commentID
}
if filteredArr.count == 0 {
self.comments.append(commentFetched)
}
self.adapter.performUpdates(animated: true)
}else{
print("user is null")
}
self.comments.sort(by: { (comment1, comment2) -> Bool in
return comment1.creationDate.compare(comment2.creationDate) == .orderedAscending
})
self.comments.forEach({ (comments) in
})
})
})
}, withCancel: { (error) in
print("Failed to observe comments")
})
//first lets fetch comments for current event
}
//allows you to gain access to the input accessory view that each view controller has for inputting text
lazy var containerView: CommentInputAccessoryView = {
let frame = CGRect(x: 0, y: 0, width: view.frame.width, height: 50)
let commentInputAccessoryView = CommentInputAccessoryView(frame:frame)
commentInputAccessoryView.delegate = self
return commentInputAccessoryView
}()
@objc func handleSubmit(for comment: String?){
guard let comment = comment, comment.count > 0 else{
return
}
let userText = Comments(content: comment, uid: User.current.uid, profilePic: User.current.profilePic!,eventKey: eventKey)
sendMessage(userText)
// will clear the comment text field
self.containerView.clearCommentTextField()
}
@objc func handleKeyboardNotification(notification: NSNotification){
if let userinfo = notification.userInfo {
if let keyboardFrame = (userinfo[UIKeyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue{
self.bottomConstraint?.constant = -(keyboardFrame.height)
let isKeyboardShowing = notification.name == NSNotification.Name.UIKeyboardWillShow
self.bottomConstraint?.constant = isKeyboardShowing ? -(keyboardFrame.height) : 0
if isKeyboardShowing{
let contentInset = UIEdgeInsetsMake(0, 0, (keyboardFrame.height), 0)
collectionView.contentInset = UIEdgeInsetsMake(0, 0, (keyboardFrame.height), 0)
collectionView.scrollIndicatorInsets = contentInset
}else {
let contentInset = UIEdgeInsetsMake(0, 0, 0, 0)
collectionView.contentInset = UIEdgeInsetsMake(0, 0, 0, 0)
collectionView.scrollIndicatorInsets = contentInset
}
UIView.animate(withDuration: 0, delay: 0, options: UIViewAnimationOptions.curveEaseOut, animations: {
self.view.layoutIfNeeded()
}, completion: { (completion) in
if self.comments.count > 0 && isKeyboardShowing {
let item = self.collectionView.numberOfItems(inSection: self.collectionView.numberOfSections - 1)-1
let lastItemIndex = IndexPath(item: item, section: self.collectionView.numberOfSections - 1)
self.collectionView.scrollToItem(at: lastItemIndex, at: UICollectionViewScrollPosition.top, animated: true)
}
})
}
}
}
override var inputAccessoryView: UIView? {
get {
return containerView
}
}
override var canBecomeFirstResponder: Bool {
return true
}
override func viewDidLoad() {
super.viewDidLoad()
collectionView.frame = CGRect.init(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height-40)
view.addSubview(collectionView)
collectionView.alwaysBounceVertical = true
adapter.collectionView = collectionView
adapter.dataSource = self
NotificationCenter.default.addObserver(self, selector: #selector(handleKeyboardNotification), name: NSNotification.Name.UIKeyboardWillShow, object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(handleKeyboardNotification), name: NSNotification.Name.UIKeyboardWillHide, object: nil)
collectionView.register(CommentCell.self, forCellWithReuseIdentifier: "CommentCell")
// collectionView.register(CommentHeader.self, forCellWithReuseIdentifier: "HeaderCell")
collectionView.keyboardDismissMode = .onDrag
navigationItem.title = "Comments"
self.navigationItem.hidesBackButton = true
let backButton = UIBarButtonItem(image: UIImage(named: "icons8-Back-64"), style: .plain, target: self, action: #selector(GoBack))
self.navigationItem.leftBarButtonItem = backButton
}
@objc func GoBack(){
print("BACK TAPPED")
self.dismiss(animated: true, completion: nil)
}
//look here
func CommentSectionUpdared(sectionController: CommentsSectionController){
print("like")
self.fetchComments()
self.adapter.performUpdates(animated: true)
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
fetchComments()
tabBarController?.tabBar.isHidden = true
//submitButton.isUserInteractionEnabled = true
}
//viewDidLayoutSubviews() is overridden, setting the collectionView frame to match the view bounds.
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
// collectionView.frame = view.bounds
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
}
extension NewCommentsViewController: ListAdapterDataSource {
// 1 objects(for:) returns an array of data objects that should show up in the collection view. loader.entries is provided here as it contains the journal entries.
func objects(for listAdapter: ListAdapter) -> [ListDiffable] {
let items:[ListDiffable] = comments
//print("comments = \(comments)")
return items
}
// 2 For each data object, listAdapter(_:sectionControllerFor:) must return a new instance of a section controller. For now you’re returning a plain IGListSectionController to appease the compiler — in a moment, you’ll modify this to return a custom journal section controller.
func listAdapter(_ listAdapter: ListAdapter, sectionControllerFor object: Any) -> ListSectionController {
//the comment section controller will be placed here but we don't have it yet so this will be a placeholder
// if let object = object as? ListDiffable, object === addHeader {
// return CommentsHeaderSectionController()
// }
let sectionController = CommentsSectionController()
sectionController.delegate = self
return sectionController
}
// 3 emptyView(for:) returns a view that should be displayed when the list is empty. NASA is in a bit of a time crunch, so they didn’t budget for this feature.
func emptyView(for listAdapter: ListAdapter) -> UIView? {
let view = UIView()
view.backgroundColor = UIColor.white
return view
}
}
extension NewCommentsViewController {
func sendMessage(_ message: Comments) {
ChatService.sendMessage(message, eventKey: eventKey)
}
}
This is the controller that is tasked with presenting the commentsController on click of a button
I have only inluded the functions that contribute to presenting the controller to save time and reading.
let newCommentsController = NewCommentsViewController()
lazy var navController = UINavigationController(rootViewController: newCommentsController)
@objc func presentComments(){
print("Comments button pressed")
newCommentsController.eventKey = eventKey
newCommentsController.comments.removeAll()
newCommentsController.adapter.reloadData { (updated) in
}
present(navController, animated: true, completion: nil)
}
Through opening the commentSection in various post i have seen memory go from
70 mb (When app opened) to
126 mb ( when comment section of a post is opened)
165 mb (when other comment section is closed and new comment section is opened)
Hey @Smiller193! Have you tried any debugging using Xcode's retain cycle detection?
Umm, I'm not exactly sure how to do that tbh
but I could look into it
@rnystrom
Yup that’s a good start! Take a look at that link and you can do some googling. Lots of resources on how to debug retain cycles that can at least help pinpoint what isn’t being released.
Sent with GitHawk
Okay so I found three leaks but im not sure how I go about fixing them now.. I don't really get any useful out of them it seems
@rnystrom
@Smiller193 what are the leaks?
@rnystrom
according to the stack trace the leaks is in the commentSectionController and it may be coming from here
override func sizeForItem(at index: Int) -> CGSize {
let frame = CGRect(x: 0, y: 0, width: collectionContext!.containerSize.width, height: 50)
let dummyCell = CommentCell(frame: frame)
dummyCell.comment = comment
dummyCell.layoutIfNeeded()
let targetSize = CGSize(width: collectionContext!.containerSize.width, height: 55)
let estimatedSize = dummyCell.systemLayoutSizeFitting(targetSize)
let height = max(40+8+8, estimatedSize.height)
return CGSize(width: collectionContext!.containerSize.width, height: height)
}
Is there any way to show what I see in the side menu
@rnystrom these are the leaks and each time I click one of the mallocbytes it takes me to that function that I listed earlier
oh wait I actually fixed it
The leaks on my phone and the leaks on the simulator also seem to be different
@Smiller193
Maybe you could try using [weak self] when you observe the Firebase change in the fetchComments() method.
You could also try logging the malloc stack; that may make it much easier to find the memory issue.
Sent with GitHawk
Okay I will try that
Not applicable anymore
| gharchive/issue | 2018-02-03T19:30:08 | 2025-04-01T04:32:38.864737 | {
"authors": [
"Smiller193",
"lorixx",
"marcuswu0814",
"rnystrom"
],
"repo": "Instagram/IGListKit",
"url": "https://github.com/Instagram/IGListKit/issues/1082",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
527524528 | Update contributor's github page
Changes in this pull request
Issue fixed: #
Checklist
[x] All tests pass. Demo project builds and runs.
[ ] I added tests, an experiment, or detailed why my change isn't tested.
[ ] I added an entry to the CHANGELOG.md for any breaking changes, enhancements, or bug fixes.
[x] I have reviewed the contributing guide
cc @qhhonx
cc @qhhonx
LGTM. Thanks a lot.
| gharchive/pull-request | 2019-11-23T07:43:48 | 2025-04-01T04:32:38.868691 | {
"authors": [
"lorixx",
"qhhonx"
],
"repo": "Instagram/IGListKit",
"url": "https://github.com/Instagram/IGListKit/pull/1399",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
246957588 | Import Inotify to FileDiff table
Import inotify to FileDiff table. Then analyze these file diffs.
Already done. Issue closed.
| gharchive/issue | 2017-08-01T05:18:32 | 2025-04-01T04:32:38.892705 | {
"authors": [
"qiyuangong"
],
"repo": "Intel-bigdata/SSM",
"url": "https://github.com/Intel-bigdata/SSM/issues/828",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1787778046 | Add minibatch training with SAR inference command in examples
As in title - add command to run example in README file
@hesham-mostafa ready to merge
| gharchive/pull-request | 2023-07-04T11:50:26 | 2025-04-01T04:32:38.901219 | {
"authors": [
"bgawrych"
],
"repo": "IntelLabs/SAR",
"url": "https://github.com/IntelLabs/SAR/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2349955425 | error
librealsense
2.55.1 RELEASE
OS
Windows
Intel RealSense Viewer / Depth Quality Tool has crashed with the following error message:
RealSense error calling rs2_create_context_ex(api_version:25501, json_settings:nullptr):
failed to convert special folder: errno=42
A cause of a failed to convert special folder: errno=42 error may be if your Windows computer does not have a Documents folder. More information about this can be found at https://github.com/IntelRealSense/librealsense/issues/12910#issuecomment-2100569892
My Intel RealSense colleagues have provided further advice about the errno=42 error at https://github.com/IntelRealSense/librealsense/issues/13023#issuecomment-2167504813
Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
| gharchive/issue | 2024-06-13T01:38:06 | 2025-04-01T04:32:38.905098 | {
"authors": [
"MartyG-RealSense",
"zxcvbnmhkfghffgh"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/13027",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
350003900 | How to get the depth value corresponding to the color in the depth image?
I got some depth images that were previously taken by others using the Intel RealSense Viewer tool and stored as ".RAW" files and ".PNG" files. I want to be able to get depth data directly from these depth images instead of from an Intel RealSense D435 device. Our project requirements call for specific depth values, accurate to at least 0.01. How can I open a ".RAW" file or a ".PNG" file and get depth data? Or how can I get the depth value corresponding to the color in the depth image?
Three days ago I asked this question, but it was not completely resolved. See
https://github.com/IntelRealSense/librealsense/issues/2200
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
You can refer to "rs-align" example. And, use below code to get the depth value in the aligned color. Hope this is helpful.
const int w_other = other_frame.as<rs2::video_frame>().get_width();
const int h_other = other_frame.as<rs2::video_frame>().get_height();
cv::Mat otherMat(cv::Size(w_other, h_other), CV_8UC3, (void*)other_frame.get_data(), cv::Mat::AUTO_STEP);
const int w_aligned = aligned_depth_frame.as<rs2::video_frame>().get_width();
const int h_aligned = aligned_depth_frame.as<rs2::video_frame>().get_height();
cv::Mat alignMat(cv::Size(w_aligned, h_aligned), CV_16U, (void*)aligned_depth_frame.get_data(), cv::Mat::AUTO_STEP);
imshow("AlignFrame", alignMat);
.....
To get the depth at the frame center: alignMat.at<ushort>(h_aligned/2, w_aligned/2) (note that cv::Mat::at takes the row index first, then the column).
Hello @njnydxzc ,
The ".RAW" data is just a blob of the data transmitted over USB. The data layout follows the UVC spec for the depth stream. It is intended for machine vision and tools such as MATLAB/OpenCV, as suggested in #2200.
A depth ".PNG" provides a colorized version of the depth suitable for human viewing. Since the colorizer uses histogram equalization, it is not possible to extract depth data directly from it.
As for the other question, in order to create depth<->rgb correspondence librealsense provides "align" processing block (see rs-align demo).
In case you only have recorded depth and color in raw/bin formats then you'll still be able to use the "align" APIs but you'll have to write code to perform it manually:
Prerequisites:
Depth frames in ".RAW/.BIN" format + stream intrinsic.
Color frames in RGB/PNG format + stream intrinsic.
Depth<->Color extrinsic.
A sync map between Depth to Color frames that can be based on frame id or timestamp.
Flow:
Create "rs2::software_device"(rs-software-device demo).
Add synthetic streams for depth and color including intrinsic/extrinsic data.
Create and feed a synchronized pair of depth/color to synthetic streams with data obtained from ".RAW"/".PNG" files.
Use the synthetic frames as inputs to "align" block to produce "A aligned to B" frame.
Repeat [3,4] for all pairs of depth/color
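If wiring up a software_device feels heavyweight, the same depth-to-color correspondence can also be computed per pixel with plain pinhole math, which is essentially what the align block does internally. A minimal sketch (assuming undistorted pinhole intrinsics and metric depth; K_depth, K_color, R, t stand for your recorded intrinsics/extrinsics, not real device calibration):

```python
import numpy as np

def deproject(u, v, z, K):
    """Back-project pixel (u, v) with depth z (metres) into a 3-D point,
    assuming an undistorted pinhole model with intrinsic matrix K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def project(p, K):
    """Project a 3-D point p back to pixel coordinates."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([p[0] * fx / p[2] + cx, p[1] * fy / p[2] + cy])

def align_depth_pixel(u, v, z, K_depth, K_color, R, t):
    """Map one depth pixel into the colour image:
    deproject -> apply depth-to-colour extrinsics (R, t) -> project."""
    return project(R @ deproject(u, v, z, K_depth) + t, K_color)
```

Note this ignores lens distortion and occlusion handling, which the SDK's align block does take care of.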
Sorry, I didn't find the code of "rs-align" example in the SDK.
Which header files do I need to use to run this code?
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
The rs-align sample code in below link. And, you need to include the opencv header like "#include <opencv2/opencv.hpp>" if you like to use above code.
https://github.com/IntelRealSense/librealsense/blob/53537d4a5115d0abbd79e5379eaa93d701344e4f/examples/align/readme.md
Depth data is available through the RealSense D435 device. Why can't I get depth data when I open an image with OpenCV? Is it because the depth data is not saved?
Thank you for your reply.@RealSense Customer Engineering
If I don't have a Realsense D435 device connected, how do I define the types of other_frame and aligned_depth_frame in the code and load the image?
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
My way is for active streams from D435. How about @ev-mp's suggestions to handle the RAW and PNG data?
Or, you can have the RGB and depth aligned, and then have the record and playback as the example "record-playback".
Thank you for your reply.@RealSense Customer Engineering
Our project wants to use a deep learning algorithm, so we will collect the images in advance and then process them. So, I want to know if there is a way to get depth data from a saved image without the D435.
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
Suggest to try @ev-mp's align flow if the saved PNG/RAW files used. And, RAW file has the depth data.
Is there a function that can be used directly to get depth data from a ".RAW" file?
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
Since the RAW data layout is according to the UVC spec for the depth stream, you can try below sample code to get the 16bit depth data:
FILE *fp = NULL;
char *imagedata = NULL;
int framesize = IMAGE_WIDTH * IMAGE_HEIGHT;
fp = fopen(fileName, "rb");
imagedata = (char *)malloc(sizeof(short) * framesize);
fread(imagedata, sizeof(short), framesize, fp);
cv::Mat depthMat(Size(IMAGE_WIDTH, IMAGE_HEIGHT), CV_16U, (void *)imagedata, cv::Mat::AUTO_STEP);
// show the depth frame central value from the raw data
printf("Depth RAW Central Value: %d mm", depthMat.at&lt;ushort&gt;(IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2));
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
still need any support for this topic?
Thank you for your reply.@RealSense Customer Engineering
When I run the code, "depthMat.at(IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2)" reports an error.
Which header files do I need to use to run this code?
"depthMat.at&lt;ushort&gt;(IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2)" is ok.
Thank you for your reply.@x-jeff.
Which header files do I need to use to run this code?
#define _CRT_SECURE_NO_WARNINGS
#include &lt;iostream&gt;
#include &lt;opencv2/opencv.hpp&gt;
using namespace std;
using namespace cv;
int main() {
    int IMAGE_WIDTH = 424, IMAGE_HEIGHT = 240;
    FILE* fp;
    char* imagedata;
    int framesize = IMAGE_WIDTH * IMAGE_HEIGHT;
    fp = fopen("C:/personal document/SegmentationAlgorithm/grabcut/GrabcutTest/record/frame1/d_Depth.raw", "rb");
    imagedata = (char*)malloc(sizeof(short) * framesize);
    fread(imagedata, sizeof(char), framesize, fp);
    cv::Mat depthMat(Size(IMAGE_WIDTH, IMAGE_HEIGHT), CV_8U, (void*)imagedata, cv::Mat::AUTO_STEP);
    printf("Depth RAW Central Value: %d mm", depthMat.at&lt;uchar&gt;(IMAGE_HEIGHT / 2, IMAGE_WIDTH / 2));
    fclose(fp);
    free(imagedata);
    return 0;
}
This is my code, the code can run normally, you can refer to it.
Thank you for your reply.@x-jeff.
I got some data from the code, but it doesn't seem very accurate. Does the data need extra processing to make it more accurate?
[Realsense Customer Engineering Team Comment]
Hi @njnydxzc,
you mean your depth data from RAW is not accurate? How is its accuracy error? ex: more than 2% error within 2m?
@RealSense Customer Engineering
I first got depth data through the "Intel RealSense Viewer" tool. Then I saved the depth image and got the depth data at the same coordinate through the code. The two values are not the same.
@x-jeff @RealSense Customer Engineering
I have solved the problem. Thank you very much for your help.
If I have other problems, I will create another new issue.
Hello, I have the same problem as you, could you tell me how you solved it in the end?
[Realsense Customer Engineering Team Comment] Hi @njnydxzc,
Do you mean that your RAW depth data is not accurate? How large is its accuracy error? e.g., more than 2% error within 2 m?
There are errors in my depth image. I filtered the depth stream, but I could not extract the depth data after filtering.
There is an error in the program.
align_to = rs.stream.color
align = rs.align(align_to)
# Streaming loop
while True:
    # Get frameset of depth
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    aligned_depth_frame = aligned_frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    spatial = rs.spatial_filter()
    spatial_depth = spatial.process(aligned_depth_frame)
    temporal = rs.temporal_filter()
    temporal.set_option(rs.option.filter_smooth_alpha, 0.1)
    temporal.set_option(rs.option.filter_smooth_delta, 20)
    tem_depth = temporal.process(spatial_depth)
    hole_filling = rs.hole_filling_filter()
    filled_depth = hole_filling.process(aligned_depth_frame)
    # Colorize depth frame to jet colormap
    depth_color_frame = colorizer.colorize(filled_depth)
    # Convert depth_frame to numpy array to render image in opencv
    depth_color_image = np.asanyarray(depth_color_frame.get_data())
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_color_image, alpha=0.41), cv2.COLORMAP_JET)
    color_image = np.asanyarray(color_frame.get_data())
    w = filled_depth.get_width()
    h = filled_depth.get_height()
    dis = round(filled_depth.get_distance(int(w / 2), int(h / 2)) * 100, 2)
    print("the dis are", dis, "cm away")
| gharchive/issue | 2018-08-13T12:12:35 | 2025-04-01T04:32:38.930991 | {
"authors": [
"L-xn",
"RealSense-Customer-Engineering",
"ev-mp",
"moonCheng",
"njnydxzc",
"x-jeff"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/2231",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
171190445 | Windows 10 Version 1607 causes camera to crash
Required Info
Camera Model
SR300
Firmware Version
1.15.00 / 1.17.00
Operating System & Version
Windows 10 : version 1607 os build 14393.51
Build System
VS2015
The librealsense SDK seems to have problems with the recent Windows 10 build (version 1607), released on 2016-08-02.
I experience some random crashes of the camera. I could not pin down the exact line causing the issue, but would recommend not updating to the current build yet.
(In case someone experiences the same problem, there is an option to revert the system back to the previous build (without reinstalling the system). Officially not available after 10 days on the new build.)
UPDATE:
I tested this issue on different machines:
Running an old build of Win 10 -> no problems with firmware 1.15/1.17
Running the current build of Win 10 -> error while initializing the camera and setting parameters
-> After resetting to the old build of Win 10 -> no problems with firmware 1.15/1.17 anymore.
Running an old build of Win 10 -> no problems with firmware 1.15/1.17
-> After installing the current build -> error while initializing the camera and setting parameters
So this problem seems to be connected to the current Windows build.
The error seems to be thrown at [uvc-wmf.cpp line 564](https://github.com/IntelRealSense/librealsense/blob/master/src/uvc-wmf.cpp#L564)
when setting an F200/SR300 parameter.
ks_control->KsProperty((PKSPROPERTY)&node, sizeof(KSP_NODE), data, len, nullptr)
always returns -2147467261 (HRESULT: E_POINTER, invalid pointer), which indicates an error.
The same program with the previous build returns 0 (indicating no error).
If I ignore those errors, some parameters are still being set on the camera. However, it is not possible to check whether a call was executed correctly (which usually results in retrying until it works).
Is it possible that some interfaces were changed at this point (on the Windows side)?
SOLVED:
Problem was that
ks_control->KsProperty((PKSPROPERTY)&node, sizeof(KSP_NODE), data, len, nullptr) no longer accepts a null pointer. Changed to:
ULONG bytes_received = 0;
check("IKsControl::KsProperty", ks_control->KsProperty((PKSPROPERTY)&node, sizeof(KSP_NODE), data, len, &bytes_received));
Thank you, we'll make sure to integrate the fix
| gharchive/issue | 2016-08-15T15:11:18 | 2025-04-01T04:32:38.939307 | {
"authors": [
"Wollimayer",
"dorodnic"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/243",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
397606912 | Understanding disparity and disparity shift
Camera Model: D415
Firmware Version: 5.10.??? (can't check right now, but updated recently)
Operating System & Version: MacOS 10.13.6
Platform: PC
SDK Version: 2.2.17
Language: python
Segment: Robot
Issue Description
I'm using a D415 camera as the eyes for a robot. I first calibrate by comparing the robot in various positions to the measurements from the camera at those points, and then use camera measurements of the world to decide where to move the robot. I'm trying to get a better understanding of how disparity and disparity shift work in order to get the best results.
The objects I'm measuring vary from about 10 to 30cm from the camera, which has necessitated changing disparity shift based on what I'm trying to measure.
Are these statements true?
Disparity and disparity shift are in terms of pixels
The disparity search range (126) cannot be changed
The disparity (in pixels) for a particular depth increases when resolution increases
Consequently, the size of the range of depths that can be measured effectively decreases as resolution increases
Is there a function available, either in code or as a mathematical expression, for determining the range of depths that can be read well given a resolution and camera attributes? or put another way, is there a function available to convert a disparity value into a depth, since I know the range given my disparity shift configuration.
Is it possible to recover the measured disparity for a particular pixel? I've only found distance in the API.
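As far as I understand from the D400-series tuning documentation, the depth/disparity relation and the effect of disparity shift can be sketched like this (the focal length and baseline in the demo are rough D415-like placeholders, not calibrated values):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Stereo triangulation: depth (mm) = focal (px) * baseline (mm) / disparity (px)."""
    return focal_px * baseline_mm / disparity_px

def depth_range(focal_px, baseline_mm, disparity_shift, search_range=126):
    """Approximate working depth window for a given disparity shift.
    Searchable disparities span [shift, shift + search_range], so the
    nearest measurable depth is f*B/(shift + range) and the farthest is
    f*B/shift (infinite when the shift is 0)."""
    min_z = focal_px * baseline_mm / (disparity_shift + search_range)
    max_z = focal_px * baseline_mm / disparity_shift if disparity_shift > 0 else float("inf")
    return min_z, max_z

# Rough D415-like numbers at 1280x720: focal ~920 px, baseline ~55 mm (assumed)
print(depth_range(920, 55.0, 0))     # full range: MinZ around 0.4 m, MaxZ infinite
print(depth_range(920, 55.0, 170))   # near range: roughly 0.17 m to 0.30 m
```

Since the focal length in pixels grows with resolution, the same physical depth yields a larger disparity at higher resolutions, which shrinks the measurable window, consistent with your statements 3 and 4.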
I saw something in another issue that suggested the bytes may need to be cast to float (in C++). I tried doing that in Python and got values similar to the depth values (~4000) I had before using the transform.
How did you cast and normalize the data to get those values?
| gharchive/issue | 2019-01-09T23:28:58 | 2025-04-01T04:32:38.944758 | {
"authors": [
"Novruz97",
"ajprax"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/3039",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
575557877 | null pointer passed for argument "buffer" still present in 2.33.1
Just wanted to remind you; this has already been raised several times.
It sometimes happens when a map is being saved.
I will label this post as 'bug' so that it can be tracked by the RealSense team and investigated regarding whether it is a bug.
Thanks @MartyG-RealSense. This problem is really a bit annoying. Imagine, you are walking through an area and intend to save the map afterwards. It is a 50:50 chance that it will fail and all the efforts are lost. More than that: You need to unplug the camera and restart from scratch, when it happens...
Any development here?
If this is a T265 related issue, the best way to get an answer will likely be to close this case and repost the exact same question as a new issue but make sure that the word 'T265' is included in the message title this time. This should make sure the right person picks it up for answering.
Thanks. Will do. Another bug ("Bus Error 10") today also popped up again
| gharchive/issue | 2020-03-04T16:32:18 | 2025-04-01T04:32:38.947624 | {
"authors": [
"MartyG-RealSense",
"neilyoung"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/5972",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
686288901 | t265 imu problem
Required Info
Camera Model
{ t265 }
Firmware Version
0.2.0.951
Operating System & Version
Ubuntu 16.04
Kernel Version (Linux Only)
4.15.0-112-generic
Platform
PC
SDK Version
librealsense SDK 2.36
Language
C++
Segment
{Robot/Smartphone/VR/AR/others }
Issue Description
The sensors of the T265 (including the IMU) are calibrated on the production line, so no further calibration process is required (unlike the IMU on the D435i). However, realsense-viewer shows that the acceleration magnitude varies from 8 to 10 m/s^2 when the device is placed at rest in different orientations. Does this have an impact on VIO, and how can it be solved?
Hi @dorodnic, can anyone help me? Thanks!
As far as I know, the D435i has the same problem; the D435i and T265 share the same IMU. After I ran rs-imu-calibration on the D435i, its acceleration varied from 9.7 to 9.9; before calibration the range was 8 to 10. So I guess that if I can modify the rs-imu-calibration tool to run on the T265, the problem will be handled. Will update this comment.
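For what it's worth, the math behind a basic accelerometer scale/bias correction is small enough to sketch. This is not the actual rs-imu-calibration procedure, just the per-axis two-orientation idea it builds on, and the readings below are hypothetical:

```python
G = 9.80665  # standard gravity, m/s^2

def fit_axis(reading_pos, reading_neg):
    """Per-axis correction from two static readings: one with the axis
    pointing up (+g) and one pointing down (-g).
      scale = 2g / (r_pos - r_neg),  bias = (r_pos + r_neg) / 2
    """
    scale = 2.0 * G / (reading_pos - reading_neg)
    bias = (reading_pos + reading_neg) / 2.0
    return scale, bias

def correct(raw, scale, bias):
    """Apply the fitted correction to a raw accelerometer sample."""
    return scale * (raw - bias)

# Hypothetical readings for a mis-scaled, biased axis (like 8..10 at rest):
scale, bias = fit_axis(10.0, -8.0)
print(correct(10.0, scale, bias))   # ~9.80665
```

Repeating this for all three axes (six static orientations total) is the usual six-position calibration; the SDK tool additionally handles cross-axis terms.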
@youwyu did it work?
| gharchive/issue | 2020-08-26T12:43:34 | 2025-04-01T04:32:38.955102 | {
"authors": [
"davesmivers",
"hannibal051",
"lishanggui"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/issues/7199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2655448019 | Table read error fallback
We wish to allow the camera to enumerate even with no flash tables.
Tracked on [RSDEV-2874] [RSDEV-2901]
Notes from @ev-mp
Can you add a note whether this applies to RGB or Depth sensor intrinsic for clarity?
What is the impact of returning empty intrinsic ? Providing arbitrary values may mask the issue making the user unaware of the actual problem
Notes from @ev-mp
Can you add a note whether this applies to RGB or Depth sensor intrinsic for clarity?
What is the impact of returning empty intrinsic ? Providing arbitrary values may mask the issue making the user unaware of the actual problem
Yes will do
The user will get a log error. If you prefer, I can add an internal flag, override the start-stream function, and throw in this case; I'm not sure it is worth it. Thoughts?
| gharchive/pull-request | 2024-11-13T13:07:07 | 2025-04-01T04:32:38.959008 | {
"authors": [
"Nir-Az"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/pull/13512",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
427886616 | Include TM2 firmware directly
This has a few nice features:
Downloads the firmware with SHA1 hash to avoid having to ever re-download
Downloads the firmware into a versioned file to avoid having to re-download
when changing branches/versions
Avoids parsing the firmware
Avoids the process of creating huge headers with cmake (portable, but slow)
Avoids regenerating files on every cmake run
Greatly Speed up running cmake in the normal case
Simplifies the cmake code drastically
The fw/ directory should drop into any libtm replacement
Only writes into the build directory (not the source directory)
but it has a few drawbacks as well:
Less portable technique
We may need a few changes for Windows when switching between static and dynamic libraries
Doesn't support locally firmware development as well as the old code
Currently missing a compile-time sanity check for the central app version
Most of the drawbacks can easily be fixed, except for the portability which should be outweighed by the other benefits. This is based on and should be included into #3507, but it might need a bit more testing and I don't want to hold #3507 up. CC @claudiofantacci.
This worked well for me on Windows, except:
You have called ADD_LIBRARY for library fw without any source files. This typically indicates a problem with your CMakeLists.txt file warning. Any idea?
Another finding is that with the PR, when setting FORCE_WINUSB_UVC the build fails for some reason. This is driving the Win7 SDK, so this can be a problem.
In fw_target.h:16, (as well as fw_central and fw_central_app) changing ::GetModuleHandle to ::GetModuleHandleA solves the problem, and at least to me it makes sense since it is being called with non-unicode string.
Hey thanks for calling me here! I was not able to have a look into it today and I have some other things tomorrow, but I will have a look asap, possibly within next Friday 👍🏻
Thanks for debugging that @dorodnic! I (force) pushed a fix to use A and to move one of the files to the add_library call itself (instead of doing so through target_source) to avoid the warning (which oddly I wasn't getting myself).
We have a CI build that uses Win7 and FORCE_WINUSB_UVC which passed. Was it failing for you locally?
Another thing I noticed is:
Configuring done
CMake Error: install(EXPORT "realsense2Targets" ...) includes target "realsense2" which requires target "usb" that is not in the export set.
CMake Error: install(EXPORT "realsense2Targets" ...) includes target "realsense2" which requires target "tm" that is not in the export set.
Generating done
When setting BUILD_SHARED_LIBS=false. Then, it builds successfully.
We have a CI build that uses Win7 and FORCE_WINUSB_UVC which passed. Was it failing for you locally?
Yes, basically I like to sanity check everything locally. Not sure why it failed only for me, but the fix seem to make sense.
@dorodnic, I updated both #3507 and #3647 with fixes to the issues you mentioned, except for the FORCE_WINUSB_UVC=True issue which I can't reproduce.
Hi @radfordi, I tried to configure and compile this PR with ninja build, under Windows 10, with the standard options and unfortunately it does not work. I get the following error:
ninja: error: 'third-party/libtm/fw/fw.dir/fw.res', needed by 'realsense2.dll', missing and no known rule to make it
It instead works using Visual Studio 15 2017 toolchain (via CMake generator).
I'll try to have a look at the CMake code of this PR, and specifically for the libtm target, trying to help.
Thanks for testing @claudiofantacci! It seems that @dmirota's hack to access the .res file doesn't work with Ninja. :(
Hi @radfordi,
We have a new Android lib for unrooted devices which is not yet part of the gated tests.
It currently fails; I will try to debug the issue and let you know what the problem is.
Thanks for testing @claudiofantacci! It seems that @dmirota's hack (632ed27) to access the .res file doesn't work with Ninja. :(
By my understanding, the file ${CMAKE_CURRENT_BINARY_DIR}/fw.dir/${CMAKE_CFG_INTDIR}/fw.res used in target_link_libraries(fw INTERFACE "$<$<BOOL:${MSVC}>:${CMAKE_CURRENT_BINARY_DIR}/fw.dir/${CMAKE_CFG_INTDIR}/fw.res>") to be linked to fw is generated by some commands in the CMakeFiles and, not being a target, ninja does not know how to handle/build it. I wonder how the other build tools can deal with this. Probably they are less restrictive and work around the problem.
My two cents here is that the correct procedure would be to use an add_custom_target that uses file-generating commands to create a custom target, say fw_res, containing fw.res. After that you can link fw_res to fw and everything should work just fine with all toolchains. I don't know whether, by using add_custom_target, you also need to use add_dependencies to help the toolchain build the targets in the proper order.
What do you think?
Thanks for testing @claudiofantacci. We solved the problem with a technique like you suggested. This PR is working on Mac, Linux and Windows with Ninja and without, though CI Builds seem to not be triggering for this PR now. Maybe I have used our allotted number of builds for one issue? @dorodnic, any ideas?
@radfordi this is great 🎉
I just tested compilation and the RealSense viewer on our cameras and everything works flawlessly!
Thanks 🚀
Rebased on v2.20.0.
Android gradle solution built successfully.
| gharchive/pull-request | 2019-04-01T20:02:16 | 2025-04-01T04:32:38.973770 | {
"authors": [
"claudiofantacci",
"dorodnic",
"matkatz",
"radfordi"
],
"repo": "IntelRealSense/librealsense",
"url": "https://github.com/IntelRealSense/librealsense/pull/3647",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1605599772 | Día 3
SeaWiFS and MODIS databases from 2000 to the present. They are monthly composites.
Select the 2000-2008 interval and map the area that the colleagues selected for the diversity information.
file:///C:/Users/Nicolas%20Nickifor/Downloads/D%C3%ADa%201Hachaton.html
I already uploaded my Jupyter notebook to the repository: https://github.com/Intercoonecta/proy5-regiones-comparacion/tree/main/emilio
@Cotsikayala Cotsii, could you upload everything you did for chlorophyll? Or did you already upload it and I just can't find it? :(
This issue can now be closed, since there is nothing pending and no useful information to keep visible.
| gharchive/issue | 2023-03-01T20:10:13 | 2025-04-01T04:32:39.020024 | {
"authors": [
"Cotsikayala",
"emiliom",
"judithcamps"
],
"repo": "Intercoonecta/proy5-regiones-comparacion",
"url": "https://github.com/Intercoonecta/proy5-regiones-comparacion/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1966716182 | Feature: Adding contributors section to the README.md file.
There is no Contributors section in the README file.
As we know Contributions are what make the open-source community such an amazing place to learn, inspire, and create.
The Contributors section in a README.md file is important as it acknowledges and gives credit to those who have contributed to a project, fosters community and collaboration, adds transparency and accountability, and helps document the project's history for current and future maintainers. It also serves as a form of recognition, motivating contributors to continue their efforts.
@ZwwWayne Kindly assign me this issue, I want to work on it. Thank You!!
Hi @Kalyanimhala ,
Glad to hear that. Thank you for your contribution!
| gharchive/issue | 2023-10-28T19:36:12 | 2025-04-01T04:32:39.067747 | {
"authors": [
"Kalyanimhala",
"ZwwWayne"
],
"repo": "InternLM/lagent",
"url": "https://github.com/InternLM/lagent/issues/60",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2322998071 | feat: skip invokeFlattenKV_v2_ when fp16 and bf16 with CacheType::kBlock
Motivation and Modification
as titled
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist
Pre-commit or other linting tools are used to fix the potential lint issues.
The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
The documentation has been modified accordingly, like docstring or example tutorials.
How about BF16, it should be the same as FP16.
How about BF16, it should be the same as FP16.
yep. I'll land the code soon.
Verified throughput and correctness on Llama2 13b Chat, consistent with the base.
9% performance drop estimated for prefilling 200k tokens with Llama3-8B.
9% performance drop estimated for prefilling approx 200k tokens with Llama3-8B.
| this PR  | v0.4.2   |
|----------|----------|
| 69520.48 | 63873.17 |
| 69465.04 | 63672.55 |
| 69441.23 | 63659.92 |
| 69397.99 | 63625.35 |
| 69396.67 | 63574.90 |
OK, I'll run a detailed timeline analysis later with Llama3-8B. Do you have any suggestions, such as making this feature configurable?
| gharchive/pull-request | 2024-05-29T10:45:50 | 2025-04-01T04:32:39.072258 | {
"authors": [
"lzhangzz",
"zhyncs"
],
"repo": "InternLM/lmdeploy",
"url": "https://github.com/InternLM/lmdeploy/pull/1683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1422191771 | Shouldn't ids:RequestMessage document its fields properly in the overview?
ids:RequestMessage is the basis for many other messages
Still the fields are not listed here:
https://github.com/International-Data-Spaces-Association/IDS-G/blob/main/Communication/Message-Types/README.md#idsrequestmessage
I would expect the fields like:
'@id', 'ids:securityToken' and more...
Any reason for this?
Same for https://github.com/International-Data-Spaces-Association/IDS-G/blob/main/Communication/Message-Types/README.md#idsresponsemessage
Thanks,
Matthias B.
Maybe the minimal documentation could be a link to this one here:
https://github.com/International-Data-Spaces-Association/IDS-G/tree/main/Communication/Message-Structure#idsmessage-properties
| gharchive/issue | 2022-10-25T09:55:51 | 2025-04-01T04:32:39.075212 | {
"authors": [
"matgnt"
],
"repo": "International-Data-Spaces-Association/IDS-G",
"url": "https://github.com/International-Data-Spaces-Association/IDS-G/issues/78",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2430077358 | FIX: Inspecting reward accounts uses wrong AddressInfo field for staking scripts
I would expect that inspecting a reward account address would use the infoStakeScriptHash field of AddressInfo, but it is using infoScriptHash. The pubkey version is using infoStakeKeyHash so this seems like a bug. I ran the tests both before and after the change, and all tests pass; they don't seem to cover this.
If this is the intended behavior, feel free to close this PR.
I wasn't sure if the changelog entry should be in a "Fixed" category or not, so I just put it under "Added". I figured it could be reformatted later, if necessary.
I see that, for historical reasons, we have infoScriptHash for the spending script hash credential. It should rather be called infoSpendingScriptHash. I will do this renaming in a separate PR in case you are not eager to do it in the current one :-)
I would rather you do the renaming since I am not as familiar with the code base :sweat_smile:. I'm not sure if tests would also need to be updated.
Actually I did a grep of the code base and infoScriptHash only seems to be used in that module. So I did a search & replace and re-ran the tests. The core tests are passing, but the command-line tests are failing. The command-line tests were failing even before I made changes, though.
@fallen-icarus how is it that the CLI tests are failing?
Are you going into nix develop and then, in the develop shell, running
cabal test cardano-addresses-cli:unit
?
If yes, what about after going to development shell
$ export LANG=C.UTF-8
$ cabal test cardano-addresses-cli:unit
$ cabal test cardano-addresses:unit
All passing now?
@paweljakubas I'm not using nix. I was originally doing cabal run tests from within the command-line directory, and this results in 92 of 344 tests failing. I just tried your command cabal test cardano-addresses-cli:unit, and this results in only 2 of 344 tests failing.
It turned out the extra infoScriptHash -> infoStakeScriptHash you found in the json is what was causing the 2 tests to fail in the latter run. Now, running cabal test cardano-addresses-cli:unit is showing all tests pass.
But my original approach to running the tests is still showing 92 of 344 tests failing. I don't understand why cabal run tests and cabal test cardano-addresses-cli:unit would produce different results. I would think they would be equivalent.
I was looking over the automated checks to see why they were failing and saw this typescript test was failing. The code is shown below:
it('Shelley Stake Shared network tag 0', () => expect(inspectAddress("stake17pshvetj09hxjcm9v9jxgunjv4ehxmr0d3hkcmmvdakx7mq36s8xc")).resolves.toEqual({
"address_style": "Shelley",
"address_type": 15,
"network_tag": 0,
"spending_shared_hash": "61766572796e69636561646472726573736c6f6c6f6c6f6c6f6c6f6c",
"spending_shared_hash_bech32": "addr_shared_vkh1v9mx2unede5kxetpv3j8yun9wdekcmmvdakx7mr0d3hkcuuhu9r",
"stake_reference": "by value",
"stake_shared_hash": "61766572796e69636561646472726573736c6f6c6f6c6f6c6f6c6f6c",
"stake_shared_hash_bech32": "stake_shared_vkh1v9mx2unede5kxetpv3j8yun9wdekcmmvdakx7mr0d3hkcjta3en",
}));
I suspect the issue is that the spending_shared_hash fields should not actually be present since it is a staking address, but I am not familiar enough with the address types to know for sure. I think the jsonHash changes in this PR is what is now causing the test to fail since the infoScriptHash field was improperly being used for both spending_shared_hash and stake_shared_hash. I have zero experience with typescript so I don't feel comfortable touching this test, but I was hoping this PR could get merged soon.
hi @fallen-icarus Can you please rebase your branch?
That's the first time I've ever rebased on an upstream repo so please let me know if I messed something up.
@fallen-icarus re your remark -> https://github.com/IntersectMBO/cardano-addresses/pull/268#issuecomment-2343770861
Could you apply the following change (2 removes and 2 additions):
- "spending_shared_hash": "61766572796e69636561646472726573736c6f6c6f6c6f6c6f6c6f6c",
- "spending_shared_hash_bech32": "addr_shared_vkh1v9mx2unede5kxetpv3j8yun9wdekcmmvdakx7mr0d3hkcuuhu9r",
"stake_reference": "by value",
+ "stake_script_hash": "61766572796e69636561646472726573736c6f6c6f6c6f6c6f6c6f6c",
+ "stake_script_hash_bech32": "stake_vkh1v9mx2unede5kxetpv3j8yun9wdekcmmvdakx7mr0d3hkcjpqtv8",
make commit and push, please!
Oops. I see now I wasn't supposed to get rid of the shared stake parts.
| gharchive/pull-request | 2024-07-25T14:10:33 | 2025-04-01T04:32:39.084915 | {
"authors": [
"fallen-icarus",
"paweljakubas"
],
"repo": "IntersectMBO/cardano-addresses",
"url": "https://github.com/IntersectMBO/cardano-addresses/pull/268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2750549765 | [WIP] vips driver
Adding more modifiers
Adding tests
I had a weird bug using getpoint, and it turned out to be because of sequential access: the vips image would just be empty after using it.
For now, I'm just focusing on modifiers that I use in my project, but I can look at others after.
@olivervogel Let me know if you have any comments
@olivervogel thank you!
| gharchive/pull-request | 2024-12-19T14:39:50 | 2025-04-01T04:32:39.086942 | {
"authors": [
"deluxetom"
],
"repo": "Intervention/image-driver-vips",
"url": "https://github.com/Intervention/image-driver-vips/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2522468648 | 🛑 frent.no is down
In 80ef99a, frent.no (https://frent.no) was down:
HTTP code: 0
Response time: 0 ms
Resolved: frent.no is back up in 1acbb9a after 39 minutes.
| gharchive/issue | 2024-09-12T13:52:41 | 2025-04-01T04:32:39.090108 | {
"authors": [
"KindCoder-no"
],
"repo": "Intus-AS/Types-status",
"url": "https://github.com/Intus-AS/Types-status/issues/3709",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
234873011 | _bytesio._BytesIO(...).readinto(array.array("u", ...) broken
From @ironpythonbot on December 9, 2014 17:38
E:\vslrft\Merlin\Main\Languages\IronPython\Tests>26
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)]
on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _bytesio
>>> import array
>>> b = _bytesio._BytesIO(bytearray(b'ab'))
>>> a = array.array("u", u"z")
>>> b.readinto(a)
2
>>> a
array('u', u'\u6261')
>>> ^Z
E:\vslrft\Merlin\Main\Languages\IronPython\Tests>ipyd
IronPython 2.6 Beta 2 DEBUG (2.6.0.20) on .NET 2.0.50727.3053
Type "help", "copyright", "credits" or "license" for more information.
>>> import _bytesio
>>> import array
>>> b = _bytesio._BytesIO(bytearray(b'ab'))
>>> a = array.array("u", u"z")
>>> b.readinto(a)
1
>>> a
array('u', u'a')
Work Item Details
Original CodePlex Issue: Issue 24303
Status: Active
Reason Closed: Unassigned
Assigned to: Unassigned
Reported on: Aug 13, 2009 at 12:40 AM
Reported by: dfugate
Updated on: Feb 22, 2013 at 2:11 AM
Updated by: jdhardy
Test: _bytesio_test.py
Copied from original issue: IronLanguages/main#739
From @ironpythonbot on December 9, 2014 17:38
On 2009-10-22 07:08:40 UTC, dfugate commented:
Regression not re-enabled and still broken:
0.15s testing test_coverage FAIL (<type 'exceptions.AssertionError'>)
Test test_coverage failed throwing <type 'exceptions.AssertionError'> (expected ['a'], but found ['z'])
... run_test in D:\vsl\Merlin\Main\Bin\Debug\Lib\iptest\assert_util.py line 528
... test_coverage in modules_bytesio_test.py line 311
... AreEqual in D:\vsl\Merlin\Main\Bin\Debug\Lib\iptest\assert_util.py line 208
... Assert in D:\vsl\Merlin\Main\Bin\Debug\Lib\iptest\assert_util.py line 198
From @ironpythonbot on December 9, 2014 17:38
On 2009-11-10 03:35:13 UTC, sborde commented:
It's an undocumented method
This works on the latest code.
| gharchive/issue | 2017-06-09T16:07:03 | 2025-04-01T04:32:39.143485 | {
"authors": [
"slide"
],
"repo": "IronLanguages/ironpython2",
"url": "https://github.com/IronLanguages/ironpython2/issues/146",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
935377301 | Diffutil not working when change a field
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/blob/01a1d8f867b979c22379ec1e501a4438e7619335/app/src/main/java/com/bersyte/noteapp/fragments/HomeFragment.kt#L45
I tried demo update item[0], but Diffutil not working
Why are you trying to get and set list from diffutil in you click listen ?
I won't work because, he first go to another screen and than he don't get t
And set the values, from diffutil, so you need to get and set in different
places! The questions is what do you want to do? Get or set values from
diffutil ? Your current list you need to call it inside your adapter class
...
On Fri, Jul 2, 2021, 06:26 Danh @.***> wrote:
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/blob/01a1d8f867b979c22379ec1e501a4438e7619335/app/src/main/java/com/bersyte/noteapp/fragments/HomeFragment.kt#L45
[image: image]
https://user-images.githubusercontent.com/48312687/124216124-ad20a900-db1f-11eb-9ef0-14a91fedacc5.png
I tried demo update item[0], but Diffutil not working
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872685188,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZBUDZZ7M5FEV3AVXTDTVUWWTANCNFSM47V4KVQQ
.
I'm just trying to make you understand "Diffutil not working when change a field". I tested on your source code Note App.
You change in source code, maybe you change in the wrong way, because you
added things that i didn't use in the code, maybe those things are not
implemented in the right way, so
What field are you trying to change? What do you want to achieve with that
?
On Fri, Jul 2, 2021, 09:53 Danh @.***> wrote:
Why are you trying to get and set list from diffutil in you click listen ?
I won't work because, he first go to another screen and than he don't get t
And set the values, from diffutil, so you need to get and set in different
places! The questions is what do you want to do? Get or set values from
diffutil ? Your current list you need to call it inside your adapter class
...
… <#m_-5253587529778850751_>
On Fri, Jul 2, 2021, 06:26 Danh @.***> wrote:
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/blob/01a1d8f867b979c22379ec1e501a4438e7619335/app/src/main/java/com/bersyte/noteapp/fragments/HomeFragment.kt#L45
[image: image]
https://user-images.githubusercontent.com/48312687/124216124-ad20a900-db1f-11eb-9ef0-14a91fedacc5.png
I tried demo update item[0], but Diffutil not working — You are receiving
this because you are subscribed to this thread. Reply to this email
directly, view it on GitHub <#1 (comment)
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872685188>,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZBUDZZ7M5FEV3AVXTDTVUWWTANCNFSM47V4KVQQ
.
I'm just trying to make you understand "Diffutil not working when change a
field". I tested on your source code Note App.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872764519,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZEBVIQOFABUPTBNXT3TVVO7HANCNFSM47V4KVQQ
.
It's just an example: I want to change the title at position 0, without navigate UpdateNoteFragment.kt.
Okay, i got you , so please allow me some more time , i will try to
reproduce this behavior ... And i will let you know
On Fri, Jul 2, 2021, 10:14 Danh @.***> wrote:
You change in source code, maybe you change in the wrong way, because you
added things that i didn't use in the code, maybe those things are not
implemented in the right way, so What field are you trying to change? What
do you want to achieve with that ?
… <#m_-7241396014252433547_>
On Fri, Jul 2, 2021, 09:53 Danh @.> wrote: Why are you trying to get
and set list from diffutil in you click listen ? I won't work because, he
first go to another screen and than he don't get t And set the values, from
diffutil, so you need to get and set in different places! The questions is
what do you want to do? Get or set values from diffutil ? Your current list
you need to call it inside your adapter class ... …
<#m_-5253587529778850751_> On Fri, Jul 2, 2021, 06:26 Danh @.> wrote:
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/blob/01a1d8f867b979c22379ec1e501a4438e7619335/app/src/main/java/com/bersyte/noteapp/fragments/HomeFragment.kt#L45
[image: image]
https://user-images.githubusercontent.com/48312687/124216124-ad20a900-db1f-11eb-9ef0-14a91fedacc5.png
I tried demo update item[0], but Diffutil not working — You are receiving
this because you are subscribed to this thread. Reply to this email
directly, view it on GitHub <#1
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1
(comment) <#1 (comment)
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872685188>>,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZBUDZZ7M5FEV3AVXTDTVUWWTANCNFSM47V4KVQQ
. I'm just trying to make you understand "Diffutil not working when change
a field". I tested on your source code Note App. — You are receiving this
because you commented. Reply to this email directly, view it on GitHub <#1
(comment)
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872764519>,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZEBVIQOFABUPTBNXT3TVVO7HANCNFSM47V4KVQQ
.
It's just an example: I want to change the title at position 0, without
navigate UpdateNoteFragment.kt.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1#issuecomment-872776132,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQJDWZAUGIU5KBESL7DJ62LTVVROLANCNFSM47V4KVQQ
.
| gharchive/issue | 2021-07-02T03:24:34 | 2025-04-01T04:32:39.179044 | {
"authors": [
"IsaiasCuvula",
"danhtran12797"
],
"repo": "IsaiasCuvula/mvvm_note_app_kotlin_android_studio",
"url": "https://github.com/IsaiasCuvula/mvvm_note_app_kotlin_android_studio/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
219273880 | Use library from crayfish-commons
GitHub Issue
Issue:
https://github.com/Islandora-CLAW/CLAW/issues/577
Related to:
https://github.com/Islandora-CLAW/Crayfish-Commons/pull/2
Depends on:
https://github.com/Islandora-CLAW/Crayfish/pull/18
#18 needs to land before this, since this pull contains the commits from it.
What does this Pull Request do?
Modifies Hypercube to use the class added to Crayfish-Commons here:
https://github.com/Islandora-CLAW/Crayfish-Commons/pull/2
How should this be tested?
Get a JWT. I added a dsm here and then performed a write operation on a FedoraResource to print it to the page.
Use an HTTP client like Postman, making sure to add your token as an Authorization header for both of the following requests.
Add a tiff to Fedora as a NonRdfSource at http://localhost:8080/fcrepo/rest/some/crazy/path
Run the builtin webserver with php -S localhost:8088 -t /path/to/Hypercube/src
Request the OCR for the tiff sending a GET request to Hypercube at http://localhost:8088/some/crazy/path
Interested parties
@Islandora-CLAW/committers @dannylamb
Nice, this looks good too. We could just let this subsume #18 and call it a day with both of them.
We're still gonna be blocked by Islandora-CLAW/CLAW#585. As soon as I'm done setting up my next slew of PRs for the d8 configuration to do images in islandora and islandora_image, I can take a stab at it.
Just rebased this on top of #18, should be good to go now.
| gharchive/pull-request | 2017-04-04T14:41:24 | 2025-04-01T04:32:39.191516 | {
"authors": [
"dannylamb",
"jonathangreen"
],
"repo": "Islandora-CLAW/Crayfish",
"url": "https://github.com/Islandora-CLAW/Crayfish/pull/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
290964552 | Get grok to build on CentOS
Fix for issue #1
Added yum tasks for dependencies.
Added cmake command tailored for CentOS.
👍🏻 seems like it does the trick
| gharchive/pull-request | 2018-01-23T19:30:43 | 2025-04-01T04:32:39.200550 | {
"authors": [
"jonathangreen",
"seth-shaw-unlv"
],
"repo": "Islandora-Devops/ansible-role-grok",
"url": "https://github.com/Islandora-Devops/ansible-role-grok/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1830276240 | "token stream design" = "commitment stream"
"token stream design" = "commitment stream"
creator has to generate and maintain commitment streams, i.e. a 3 month stream
re-trade has no stream, but the buyer can transparently see the remaining time on the commitment stream. royalty is added to the commitment stream
maybe 1.6
Remove for now as we don't plan to do streams
| gharchive/issue | 2023-08-01T00:52:12 | 2025-04-01T04:32:39.240133 | {
"authors": [
"newbreedofgeek"
],
"repo": "Itheum/architecture-diagrams",
"url": "https://github.com/Itheum/architecture-diagrams/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2338362164 | Fixed bug #615 - Resolved the overflow of text in the cards of FAQ section and added fifth star in the third card
Fixed :
Resolved text overflow issue in FAQ section cards.
Adjusted star rating in the third card.
Thank you @HritikaPh
| gharchive/pull-request | 2024-06-06T14:08:44 | 2025-04-01T04:32:39.243255 | {
"authors": [
"HritikaPh",
"Its-Aman-Yadav"
],
"repo": "Its-Aman-Yadav/Community-Site",
"url": "https://github.com/Its-Aman-Yadav/Community-Site/pull/617",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1488959950 | Fix secondary structure
Using this Secondary Structure dataset for my Master’s thesis we detected that the test subset has more than 364 sequences (due to an error in the used source files), i.e., more than the original new_pisces/NEW364 from ProtTrans paper. It is important to correct this since we are interested in a direct comparison with published results. This pull request aims to solve this problem using the correct subset.
Side note: The difference in the evaluation of models between the bad version and the good one is small, so I expect minimal impact with this change.
@sacdallago Please, merge this PR.
| gharchive/pull-request | 2022-12-10T21:02:07 | 2025-04-01T04:32:39.292400 | {
"authors": [
"joaquimgomez"
],
"repo": "J-SNACKKB/FLIP",
"url": "https://github.com/J-SNACKKB/FLIP/pull/21",
"license": "AFL-3.0",
"license_type": "permissive",
"license_source": "github-api"
} |
896911288 | The request is throttled
Hi,
thank you for your work. But in an environment with 2000 users I get an error like this:
Message: This request is throttled. Please try again after the value specified in the Retry-After header. CorrelationId: XXX
InnerError:
RequestId: YYY
DateTimeStamp: Thu, 6 May 2021 08:00:21 GMT
HttpStatusCode: 429
HttpStatusDescription: Completed
Error occurred while executing GetAuditSignInLogs
Code: UnknownError
and it happens over and over again. How can I handle it, please?
@DewREW1989 is that 2k of guests or users? It's a little surprising, as we have run this on some large orgs without issues.
Could you try creating a dedicated service account and running it again. If you are using the same account for a number of calls it also could cause an issue.
I noticed here that over 2000 requests per second are blocked.
https://docs.microsoft.com/en-us/graph/throttling
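For anyone hitting the same 429s, the general pattern is to wait as long as the Retry-After header asks before retrying. Below is an illustrative JavaScript sketch, not the script's actual (PowerShell) code; `sendRequest` and `sleep` are hypothetical stand-ins for the real Graph call and a real delay.

```javascript
// Hypothetical sketch: retry a throttled call, waiting as long as the
// Retry-After header asks. `sendRequest` stands in for the real Graph call
// and `sleep` for an actual delay; neither comes from Remove-StaleGuests.
function callWithBackoff(sendRequest, sleep, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    var res = sendRequest();
    if (res.status !== 429) return res;                 // not throttled
    // The service tells us how long to wait before retrying; fall back to
    // exponential backoff when the header is missing.
    var waitSeconds = Number(res.headers['retry-after'] || Math.pow(2, attempt));
    sleep(waitSeconds);
  }
  throw new Error('still throttled after ' + maxAttempts + ' attempts');
}
```

With a sequence of responses 429, 429, 200 this returns the 200 after sleeping twice; a real implementation would also cap the total wait time.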
Lack of update
| gharchive/issue | 2021-05-20T14:32:22 | 2025-04-01T04:32:39.310271 | {
"authors": [
"DewREW1989",
"JBines"
],
"repo": "JBines/Remove-StaleGuests",
"url": "https://github.com/JBines/Remove-StaleGuests/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2000033845 | py-scipy does not build on Mac using apple-clang@15.0.0
Describe the bug
When building py-scipy, the build fails with this message:
meson.build:1:0: ERROR: Unable to detect linker for compiler `/Users/steveherbener/spack-stack/spack/lib/spack/env/clang/clang -Wl,--version`
stdout:
stderr: ld: unknown options: --version
clang: error: linker command failed with exit code 1 (use -v to see invocation)
The issue appears to be in the meson package and it looks like this issue was fixed, and will be available in the upcoming meson 1.3.0 release: https://github.com/mesonbuild/meson/issues/12419. The fix is currently available in the release candidate: 1.3.0rc3 if we want to test now. (We are currently using meson version 1.1.0).
To Reproduce
Steps to reproduce the behavior:
Update command lines tools to latest version which installs apple-clang%15.0.0.
Build the unified-env
Expected behavior
Build succeeds
System:
mac0S 13.6 (Ventura)
Additional context
Add any other context about the problem here.
When I tried to build the entire spack-stack, using the prior fixes for boost and ecflow, I got the following messages at the end of the build.
==> Error: Installation request failed. Refer to reported errors for failing package(s).
==> Removing failure mark on py-scipy-1.9.3-zdschxduujbomzbeiwpgepy7hgschki3
==> Removing failure mark on py-cartopy-0.21.1-4n2t3bvyr6spqwnsasablb24t3qymwbp
==> Removing failure mark on gmao-swell-env-1.0.0-d4fruiuualecesvmsor5edpqjofr6ryx
==> Removing failure mark on ewok-env-1.0.0-ivjqzxy4sjzmgdec5nikkhosjtyvmlme
==> Removing failure mark on jedi-tools-env-1.0.0-vs2kuhnphzr264lbfu6u5gvzgvo6iawd
==> Removing failure mark on jedi-base-env-1.0.0-j5l4b3esr4oip7uokdckpncwvaedbu37
==> Removing failure mark on jedi-fv3-env-1.0.0-chhxd3f7sqvkxxypc3v2ce6lkjz6w5x6
==> Removing failure mark on jedi-ufs-env-1.0.0-4e2q4aeww6ljnljcnc5yet7g2p33ha5f
==> Removing failure mark on jedi-um-env-1.0.0-l2l4f5g2r4sidktiw3r3zthf6r62wwft
==> Removing failure mark on jedi-neptune-env-1.0.0-ejeldscgs3gjhofj2lz3vj3if3va3at3
==> Removing failure mark on jedi-mpas-env-1.0.0-uec34zh7ufospflou7lvi4txqjh6oe6f
==> Removing failure mark on soca-env-1.0.0-66vuqpxwi6xtsfnfe327ozyassydzkz7
I'm guessing that means that there are two packages left to debug: py-scipy and py-cartopy.
I tried building py-cartopy by itself (spack install -v py-cartopy) and it suffered the exact same error from meson. It appears that updating meson may enable the whole spack-stack build to complete successfully.
I tried building py-cartopy by itself (spack install -v py-cartopy) and it suffered the exact same error from meson. It appears that updating meson may enable the whole spack-stack build to complete successfully.
Scratch this part in my prior comment. I misread the log and it turns out that py-scipy is a dependency for py-cartopy (makes sense) and the build failed when trying to build py-scipy. So, we don't know about py-cartopy yet.
We will want to wait until we have an official release of meson available. Until then there will simply not be support for clang@15 ...
I tested a build using my Mac with apple-clang@15.0.0 using the Nov 23 spack merge PR (https://github.com/JCSDA/spack/pull/371) after it was merged, and the corresponding spack-stack updates (https://github.com/JCSDA/spack-stack/pull/884). The spack-stack build (unified-env) successfully completed, and I was able to successfully build jedi-bundle from scratch.
I see 37 ctest failures which appear to be similar to when building with apple-clang@14.0.3 on the Mac:
98% tests passed, 37 tests failed out of 2416
Label Time Summary:
CRTM_Tests = 169.56 sec*proc (160 tests)
GEOS = 6.98 sec*proc (3 tests)
HofX = 3.72 sec*proc (10 tests)
QC = 21.83 sec*proc (16 tests)
UV = 3.97 sec*proc (2 tests)
actions = 8.11 sec*proc (4 tests)
aircraft = 2.42 sec*proc (3 tests)
compo = 0.77 sec*proc (2 tests)
coupling = 13.75 sec*proc (9 tests)
crtm = 56.98 sec*proc (56 tests)
crtm_tests = 169.56 sec*proc (160 tests)
download_data = 59.26 sec*proc (1 test)
errors = 2.47 sec*proc (8 tests)
executable = 107.43 sec*proc (236 tests)
femps = 5.60 sec*proc (1 test)
filters = 155.51 sec*proc (179 tests)
fortran = 0.95 sec*proc (3 tests)
fov = 0.47 sec*proc (2 tests)
fv3-jedi = 229.90 sec*proc (125 tests)
fv3jedi = 231.41 sec*proc (126 tests)
gnssro = 0.93 sec*proc (1 test)
gsw = 1.32 sec*proc (6 tests)
instrument = 26.32 sec*proc (28 tests)
ioda = 68.26 sec*proc (315 tests)
iodaconv = 138.33 sec*proc (330 tests)
iodaconv_validate = 29.47 sec*proc (176 tests)
metoffice = 0.51 sec*proc (2 tests)
mpasjedi = 161.55 sec*proc (51 tests)
mpi = 1023.18 sec*proc (976 tests)
obsfunctions = 41.39 sec*proc (80 tests)
oops = 96.49 sec*proc (295 tests)
openmp = 204.38 sec*proc (270 tests)
operators = 84.17 sec*proc (146 tests)
ozone = 1.09 sec*proc (2 tests)
pibal = 0.97 sec*proc (1 test)
predictors = 8.80 sec*proc (20 tests)
profile = 28.89 sec*proc (41 tests)
radarVAD = 1.28 sec*proc (2 tests)
rass = 1.21 sec*proc (2 tests)
saber = 176.13 sec*proc (219 tests)
satwinds = 2.26 sec*proc (3 tests)
scatwinds = 2.21 sec*proc (3 tests)
script = 1381.38 sec*proc (2070 tests)
sfcLand = 4.39 sec*proc (3 tests)
sfcMarine = 4.38 sec*proc (3 tests)
soca = 88.72 sec*proc (75 tests)
sonde = 4.40 sec*proc (3 tests)
ufo = 355.47 sec*proc (485 tests)
ufo_data = 52.57 sec*proc (315 tests)
ufo_data_validate = 52.57 sec*proc (315 tests)
unit_tests = 65.87 sec*proc (90 tests)
utils = 0.52 sec*proc (2 tests)
vader = 6.79 sec*proc (33 tests)
variablenamemap = 0.27 sec*proc (1 test)
variabletransforms = 27.72 sec*proc (27 tests)
Total Test time (real) = 1569.37 sec
The following tests FAILED:
339 - saber_test_randomization_bump_nicas_UL2_1-1 (Failed)
359 - saber_test_error_covariance_training_bump_hdiag-nicas_2_1-1 (Failed)
362 - saber_test_error_covariance_training_bump_hdiag-nicas_5_1-1 (Failed)
407 - saber_test_dirac_fastlam_1_1-1 (Failed)
409 - saber_test_dirac_fastlam_3_1-1 (Failed)
447 - saber_test_randomization_bump_nicas_UL2_2-1 (Failed)
467 - saber_test_error_covariance_training_bump_hdiag-nicas_2_2-1 (Failed)
470 - saber_test_error_covariance_training_bump_hdiag-nicas_5_2-1 (Failed)
518 - saber_test_dirac_fastlam_1_2-1 (Failed)
1390 - ufo_test_tier1_instrument_sonde_geos_qc (Failed)
1393 - ufo_test_tier1_instrument_sfcLand_geos_qc (Failed)
1396 - ufo_test_tier1_instrument_sfcMarine_geos_qc (Failed)
1464 - ufo_test_tier1_test_ufo_qc_average_obs_to_mod_levels (Failed)
1480 - ufo_test_tier1_test_ufo_qc_variableassignment (Failed)
1500 - ufo_test_tier1_test_ufo_mhs_qc_filters_geos (Failed)
1542 - ufo_test_tier1_test_ufo_function_metoffice_rh_corr (Failed)
1809 - ufo_test_tier1_test_ufo_variabletransforms_rhumidity_part2 (Failed)
1825 - iodaconv_compo_coding_norms (Failed)
1826 - iodaconv_gsi_ncdiag_coding_norms (Failed)
1829 - iodaconv_land_coding_norms (Failed)
1830 - iodaconv_lib-python_coding_norms (Failed)
1831 - iodaconv_marine_coding_norms (Failed)
1838 - iodaconv_gnssro_coding_norms (Failed)
2182 - fv3jedi_test_tier1_increment_geos (Failed)
2183 - fv3jedi_test_tier1_errorcovariance (Failed)
2230 - fv3jedi_test_tier1_errorcovariance_bump (Failed)
2293 - test_soca_errorcovariance (Failed)
2328 - test_soca_sqrtvertloc (Failed)
2336 - test_soca_dirac_diffusion (Failed)
2356 - test_soca_convertincrement (Failed)
2363 - test_mpasjedi_errorcovariance (Failed)
2375 - test_mpasjedi_hofx4d (Failed)
2391 - test_mpasjedi_3denvar_amsua_bc (Failed)
2393 - test_mpasjedi_3dfgat (Failed)
2399 - test_mpasjedi_4dfgat (Failed)
2415 - test_coupled_hofx3d_fv3_mom6 (Failed)
2416 - test_coupled_hofx3d_fv3_mom6_dontusemom6 (Failed)
Errors while running CTest
Great, thanks! Then we can close this issue.
| gharchive/issue | 2023-11-17T22:36:49 | 2025-04-01T04:32:39.325489 | {
"authors": [
"climbfuji",
"srherbener"
],
"repo": "JCSDA/spack-stack",
"url": "https://github.com/JCSDA/spack-stack/issues/882",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1177812135 | 降低输出维度
I trained with VeriWild; the original output dimension is 2048, which is too high for my business scenario, and the storage cost is too high. When training, do I just need to add EMBEDDING_DIM: 256 under HEADS? Are any other changes needed?
Yes, but accuracy may drop as a result.
Yes, but accuracy may drop as a result.
OK, thanks.
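For reference, the change being discussed is a config fragment along these lines (the MODEL/HEADS nesting follows fastreid's config convention; treat the exact keys as an assumption to check against your baseline yaml):

```yaml
MODEL:
  HEADS:
    EMBEDDING_DIM: 256   # extra linear layer maps the 2048-d feature down to 256
```

As noted above, the lower-dimensional embedding can cost some accuracy, so it is worth re-running the VeriWild evaluation after the change.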
| gharchive/issue | 2022-03-23T08:58:17 | 2025-04-01T04:32:39.332312 | {
"authors": [
"L1aoXingyu",
"qijiaojiao"
],
"repo": "JDAI-CV/fast-reid",
"url": "https://github.com/JDAI-CV/fast-reid/issues/644",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1353063775 | JavaScript Linter Github Action
As part of the story https://github.com/JDGiardino/BGG-Companion/issues/43
https://eslint.org/docs/latest/user-guide/getting-started#installation-and-usage
Officially have the JS linter running as a GH action and applied all the changes. The only thing left with the PR is determining if eslintrc.json is complete with all desired lint rules. Want to make sure ESLint catches everything it should.
I believe the "eslint:recommended" set catches the bulk of things, but I'm not sure if there are other styling rules that should be set.
Additionally I wonder if along with ESLint if Prettier should be configured to run to then also catch formatting errors.
| gharchive/pull-request | 2022-08-27T14:44:39 | 2025-04-01T04:32:39.334633 | {
"authors": [
"JDGiardino"
],
"repo": "JDGiardino/BGG-Companion",
"url": "https://github.com/JDGiardino/BGG-Companion/pull/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
237382339 | Instructions for Ionic 2/3 typescript app
I'm a newbie to TypeScript and am trying to add this to my Ionic 2 app. Can someone give me step-by-step instructions on how to add this? I tried:
npm install -S spotify-web-api-js
then browserify spotify-web-api.js -o spotifybundle.js (on the file in the node_modules folder)
then add spotifybundle.js to my index.html
I also tried doing a typings install with all the *.d.ts files provided and still can't get it working (I get "SpotifyWebApi is not defined" when I try to make the call var spotify = new SpotifyWebApi() ).
thanks in advance.
P.S. I am using this instead of the angular-spotify version bc it seems more frequently updated.
@shawns582 Have you checked that you are making the request to the bundle file from the browser correctly and that you are importing the SpotifyWebApi variable? The library exports it but you need to import it from your code, or make it available in the global scope for you to access it.
Yo @shawns582... I know it's been like, 8 months.. but did you ever resolve your issue?
Closing the issue for now. Feel free to reopen it if needed.
| gharchive/issue | 2017-06-21T00:31:34 | 2025-04-01T04:32:39.453856 | {
"authors": [
"JMPerez",
"adizam",
"shawns582"
],
"repo": "JMPerez/spotify-web-api-js",
"url": "https://github.com/JMPerez/spotify-web-api-js/issues/64",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2243092206 | [server banner] DST appears to still be in effect
At the time of this writing the banner shows 19:24 when it should be showing 18:24.
If it helps with writing automation: DST begins at 02:00 on the first Sunday of October, and finishes at 03:00 on the first Sunday of April the following year.
Will add a feature to automatically detect Daylight savings time in Australia and adjust the time difference accordingly
You can get a time string for the Adelaide time zone with the built-in Date object.
const d = new Date();
var timeString = d.toLocaleTimeString("en-AU", {hour: "2-digit", minute: "2-digit", timeZone: "Australia/Adelaide"});
// 12 hour time in Adelaide (e.g. "04:05 am", "04:05 pm", "12:30 am")
var timeString24 = d.toLocaleTimeString("en-AU", {hour: "2-digit", minute: "2-digit", timeZone: "Australia/Adelaide", hourCycle: "h23"});
// 24 hour time in Adelaide (e.g. "04:05", "16:05", "00:30")
Perhaps not ideal if we want to keep everything in actual time objects for something like bringing back the "goobin' time" text, but parsing the resulting string wouldn't be difficult. Otherwise we would probably want to add a time library like day.js, because it's a bit tricky to work with time zones, especially time zones with DST, using just Dates (as this issue proves).
You forgot to remove all references to the calcTime function, causing an exception; fixing now
just have it delete / with --no-preserve-root and you're good
(i kid)
| gharchive/issue | 2024-04-15T08:55:42 | 2025-04-01T04:32:39.466978 | {
"authors": [
"BobVonBob",
"BrendanTCC",
"JMTNTBANG"
],
"repo": "JMTNTBANG/Bitey-Frank",
"url": "https://github.com/JMTNTBANG/Bitey-Frank/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2568929757 | 🛑 Medicare Service is down
In 1e60ef1, Medicare Service (https://medicareservice.net/) was down:
HTTP code: 500
Response time: 785 ms
Resolved: Medicare Service is back up in a7229b0 after 22 minutes.
| gharchive/issue | 2024-10-06T21:59:23 | 2025-04-01T04:32:39.470687 | {
"authors": [
"iamthenewking"
],
"repo": "JNA-Dealer-Program/stats",
"url": "https://github.com/JNA-Dealer-Program/stats/issues/670",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2202281996 | Create URL parameter link from product page and scanned product.
URL param. should contain product ID or other unique identifier.
Parameter should be used to retrieve data of selected product to the product page
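A minimal sketch of what this could look like in the browser is below. The parameter name `id` and the page name are assumptions for illustration, not taken from the project.

```javascript
// Hypothetical sketch: build a product-page link that carries the scanned
// product's identifier, then read it back on the product page.
function buildProductUrl(page, productId) {
  var params = new URLSearchParams();
  params.set('id', productId);           // 'id' is an assumed parameter name
  return page + '?' + params.toString();
}

function readProductId(search) {
  // On the product page, `search` would be window.location.search
  return new URLSearchParams(search).get('id');
}
```

For example, buildProductUrl('product.html', '7039010019828') yields 'product.html?id=7039010019828', and readProductId of that query string gives back the identifier that drives the lookup filling the product page.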
Solved as per commit
| gharchive/issue | 2024-03-22T11:26:56 | 2025-04-01T04:32:39.471826 | {
"authors": [
"AkselOldeide"
],
"repo": "JNettli/010-BarcodeAllergenScanner",
"url": "https://github.com/JNettli/010-BarcodeAllergenScanner/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1180648208 | 🛑 Nextcloud is down
In a64e37d, Nextcloud ($SERVER_BASE/nextcloud/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nextcloud is back up in d565f61.
| gharchive/issue | 2022-03-25T11:02:50 | 2025-04-01T04:32:39.477493 | {
"authors": [
"JSAnyone"
],
"repo": "JSAnyone/upptime",
"url": "https://github.com/JSAnyone/upptime/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
886808415 | Add support for boolean 'XOR' operator
The boolean operator XOR (exclusive or) is not part of the SQL standard, but many databases have support for them (ex. mySQL).
However, with the current parser these expressions cannot be parsed as the keyword and the expression for it is not part of the grammar.
This pull request adds support for the XOR expression, which is treated as a Binary Conditional Operator.
The expression behaves the same as the usual AND or OR expressions, but its precedence is the lowest: AND > OR > XOR.
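To make the precedence concrete, here is a hedged illustration in plain JavaScript, using `!==` on booleans as XOR. This is not JSqlParser's API, just a model of the grouping: with AND > OR > XOR, `a AND b OR c XOR d` parses as `((a AND b) OR c) XOR d`, not as `(a AND b) OR (c XOR d)`.

```javascript
// Boolean model of the grouping, with `!==` standing in for XOR.
function xorBindsLoosest(a, b, c, d) {
  return ((a && b) || c) !== d;      // AND > OR > XOR: XOR applied last
}
function xorBindsTightest(a, b, c, d) {
  return (a && b) || (c !== d);      // the grouping this PR does NOT use
}
// For a=true, b=true, c=false, d=true the two groupings disagree:
// loosest:  (true OR false) XOR true -> false
// tightest:  true OR (false XOR true) -> true
```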
Are you sure you implemented the right precedence?
The precedence of OR over XOR is implemented the same way that AND is "stronger" than OR.
Many internal dummy tests showed the precedence is right and expressions are parsed as intended; however, I can add more test cases that show parsing works as expected with OR, XOR and AND in the same condition.
I've added new tests to showcase the precedence and associativity.
@wumpz Could you review it again?
| gharchive/pull-request | 2021-05-11T10:29:59 | 2025-04-01T04:32:39.481750 | {
"authors": [
"arh-eu",
"wumpz"
],
"repo": "JSQLParser/JSqlParser",
"url": "https://github.com/JSQLParser/JSqlParser/pull/1193",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249617988 | Subscription to second channel overrides the first subscriber
This error happens when you send a headers object to the subscribe method like this:
var headers = {'Authorization': `Bearer 123`};
var subscription1 = client.subscribe(destination1, callback1, headers);
var subscription2 = client.subscribe(destination2, callback2, headers);
If you do the second subscription, you lose the reference to the first one (the callback for destination1 is never called). This issue comes from the fact that you break the atomicity rule:
https://github.com/JSteunou/webstomp-client/blob/master/src/client.js#L269
You should not manipulate objects that come from outside.
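A minimal sketch of the caller-side workaround (the helper name is an assumption, not part of webstomp-client): give each `subscribe()` call its own shallow copy of the shared headers object, so the client's in-place mutation of the headers cannot leak between subscriptions.

```javascript
// Hypothetical helper, not part of the library: isolates each subscription
// from the client's internal mutation of the headers object.
function subscribeWithOwnHeaders(client, destination, callback, headers) {
  // Object.assign({}, headers) creates a shallow copy; the caller's shared
  // `headers` object is left untouched.
  return client.subscribe(destination, callback, Object.assign({}, headers));
}
```

With this wrapper, both subscriptions in the report above keep their own callbacks even when the same headers object is reused.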
Interesting, I suppose calling subscribe with a clone of headers fixes the issue temporarily, does it?
Yes it does. But I would suggest doing the clone internally within the subscribe method, as the method's user might not be aware of this. It is also hard to debug this inconvenience.
@kuceram is that ok for you?
Hi, sorry for the late response. I will check it out as soon as possible and let you know. :-)
Works fine, now...
| gharchive/issue | 2017-08-11T12:15:17 | 2025-04-01T04:32:39.484880 | {
"authors": [
"JSteunou",
"kuceram"
],
"repo": "JSteunou/webstomp-client",
"url": "https://github.com/JSteunou/webstomp-client/issues/43",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1977867695 | 🛑 Glitch is down
In 13a0b97, Glitch ($GLITCH) was down:
HTTP code: 503
Response time: 361 ms
Resolved: Glitch is back up in 98d1788 after 26 minutes.
| gharchive/issue | 2023-11-05T16:45:04 | 2025-04-01T04:32:39.488373 | {
"authors": [
"JYFUX"
],
"repo": "JYFUX/upptime",
"url": "https://github.com/JYFUX/upptime/issues/2415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2021681427 | 🛑 Glitch is down
In 4d5fb19, Glitch ($GLITCH) was down:
HTTP code: 503
Response time: 256 ms
Resolved: Glitch is back up in ebd63cb after 1 hour, 48 minutes.
| gharchive/issue | 2023-12-01T22:40:03 | 2025-04-01T04:32:39.490578 | {
"authors": [
"JYFUX"
],
"repo": "JYFUX/upptime",
"url": "https://github.com/JYFUX/upptime/issues/2937",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
508885342 | error: package ConnectionStateHolder does not exist
Hi. I'm trying to build the client with Android Studio. When I try to generate a new APK, the terminal shows these errors, among other mistakes which I was able to resolve:
error: package ConnectionStateHolder does not exist
error: cannot find symbol variable ConnectionStateHolder
It looks like a package or part of the code is missing, or that class is no longer used. I tried to search for this class in Android's API but it doesn't appear. I'm using API level 29.
Has anyone had the same problem, or does anyone know how to fix it?
Thanks.
Did you pull the experimental branch?
Yes, the experimental
Ok, I resolved it. Android Studio automatically switched to the master branch instead of experimental. I downloaded the zip file and it built on the first try without problems.
Thanks for helping and great work
| gharchive/issue | 2019-10-18T06:40:59 | 2025-04-01T04:32:39.528277 | {
"authors": [
"JackD83",
"franMadu"
],
"repo": "JackD83/ALVR",
"url": "https://github.com/JackD83/ALVR/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2647578252 | Question: Why 16384?
What made you choose 16384 for the max length? Is it because it's roughly the length of a max short? Why not pick a bigger or smaller number? Just asking out of curiosity.
Not for any consequential reason; it used to be configurable, but I didn't want a config library for a <4KB mod so I removed the library and picked a nice looking number. It's large enough to do the job of being able to look back through messages but not enough to cause memory issues on a very spammy server. Also yeah, it's a power of 2: 2^14
| gharchive/issue | 2024-11-10T19:46:33 | 2025-04-01T04:32:39.534437 | {
"authors": [
"JackFred2",
"xEricL"
],
"repo": "JackFred2/MoreChatHistory",
"url": "https://github.com/JackFred2/MoreChatHistory/issues/14",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2295865871 | Update Card+ImageUris.swift
closes #21
Fixed artCrop and borderCrop not decoding properly by removing custom CodingKeys. The JSON decoder used in the networking client already converts from snake_case when decoding the API response; these coding keys were inadvertently causing a coding-key mismatch.
JSON decoder as set up in NetworkService.swift, lines 75 & 76:
let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase
A quick test with the local changes (removing the custom coding keys) has solved the issue in my test app.
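For context, Swift's `.convertFromSnakeCase` strategy rewrites each JSON key before matching it against a property name, roughly like this JavaScript sketch (an approximation for illustration, not Swift's exact algorithm):

```javascript
// Approximate the key rewrite performed by JSONDecoder's
// .convertFromSnakeCase: lowercase letters/digits after an underscore
// are uppercased and the underscore is dropped.
function snakeToCamel(key) {
  return key.replace(/_([a-z0-9])/g, (_, ch) => ch.toUpperCase());
}
```

So `art_crop` already arrives at the type as `artCrop`, which is presumably why custom CodingKeys mapping back to the original snake_case names stopped matching once this strategy was in play.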
Thanks for the contribution!
My pleasure! Thank you for all your hard work on this project!
| gharchive/pull-request | 2024-05-14T16:07:02 | 2025-04-01T04:32:39.588492 | {
"authors": [
"Bonney",
"JacobHearst"
],
"repo": "JacobHearst/ScryfallKit",
"url": "https://github.com/JacobHearst/ScryfallKit/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
255796769 | Technical interview
This issue is...
[ ] Edit typos or links
[ ] Inaccurate information
[ ] New Resources
[ ] Suggestions
[ ] Questions
Description
(say something...)
Technical interview
May I ask what the intent of this issue is?
Ah.. I just thought it was a good article and wanted to participate. It seems I made a mistake since I'm still new to GitHub. Sorry. Should I cancel this issue?
Ah.. I just thought it was a good article and wanted to participate. It seems I made a mistake since I'm still new to GitHub. Sorry. Should I cancel this issue?
Ah, it's fine :) You can just close it~
| gharchive/issue | 2017-09-07T02:33:42 | 2025-04-01T04:32:39.620118 | {
"authors": [
"JaeYeopHan",
"stonpol"
],
"repo": "JaeYeopHan/Interview_Question_for_Beginner",
"url": "https://github.com/JaeYeopHan/Interview_Question_for_Beginner/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
324243048 | Include compiled file with npm package
Ok, lots of stuff here. Decided to switch to react-scripts for the demo app, cleaned up a lot of outdated dependencies, and now include a bundled .js file with the npm package, which should solve a lot of outstanding issues.
It would be good to get a sanity check on this before merging. Any thoughts? I'm going to go ahead and publish v1.0.8 on npm so we can see if it resolves issues like #21.
I tried using version 1.0.8, but it seems to be completely broken.
Even with the simplest possible usage
import React, { PureComponent } from "react";
import PropTypes from "prop-types";
import ImageUploader from 'react-images-upload';

export default class ImageFileWidget extends PureComponent {
  constructor(props) {
    super(props);
  }

  onChange = fileDataURLs => {
  };

  render() {
    return (
      <div>
        <ImageUploader
          withIcon
          onChange={this.onChange}/>
      </div>
    );
  }
}
I get this error when the component is loaded
Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: object. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
Ok, I actually tested v1.0.95 in a create-react-app project and it works fine now. This should resolve #85.
I'll give the new version a try on Monday
@JakeHartnell the packing problems appear to be fixed in v1.0.95
| gharchive/pull-request | 2018-05-18T01:49:17 | 2025-04-01T04:32:39.626728 | {
"authors": [
"JakeHartnell",
"donalmurtagh"
],
"repo": "JakeHartnell/react-images-upload",
"url": "https://github.com/JakeHartnell/react-images-upload/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
490316984 | Every entry of R2 class has ID 0x0 with Android Plugin 3.6.0-alpha10 and Butterknife library plugin
Using
classpath 'com.android.tools.build:gradle:3.6.0-alpha10'
classpath 'com.jakewharton:butterknife-gradle-plugin:10.1.0'
and
apply plugin: 'com.android.library'
apply plugin: 'com.jakewharton.butterknife'
Here is an extract of the R2 class:
public final class R2 {
  public static final class anim {
    @AnimRes
    public static final int abc_fade_in = 0x0;
    @AnimRes
    public static final int abc_fade_out = 0x0;
    @AnimRes
    public static final int abc_grow_fade_in_from_bottom = 0x0;
    @AnimRes
    public static final int abc_popup_enter = 0x0;
    @AnimRes
    public static final int abc_popup_exit = 0x0;
    @AnimRes
    public static final int abc_shrink_fade_out_from_bottom = 0x0;
    @AnimRes
    public static final int abc_slide_in_bottom = 0x0;
    @AnimRes
    public static final int abc_slide_in_top = 0x0;
    @AnimRes
    public static final int abc_slide_out_bottom = 0x0;
    @AnimRes
    public static final int abc_slide_out_top = 0x0;
    @AnimRes
    public static final int abc_tooltip_enter = 0x0;
    @AnimRes
    public static final int abc_tooltip_exit = 0x0;
    @AnimRes
    public static final int btn_checkbox_to_checked_box_inner_merged_animation = 0x0;
    @AnimRes
    public static final int btn_checkbox_to_checked_box_outer_merged_animation = 0x0;
    @AnimRes
    public static final int btn_checkbox_to_checked_icon_null_animation = 0x0;
Reproducible sample : https://github.com/NitroG42/Epoxy6Canary10Issue
If needed I can make an issue on Android Bug Tracker, I'm just not sure if it's a bug or a new behavior that requires some dev in the butterknife plugin.
This is working as intended on the AGP side - does this break anything, Jake?
With 3.6 now in RC a fix for this is pretty important.
@athornz does this break anything? Does butterknife require the IDs to be unique? It is the expected behaviour in AGP.
@imorlowska yes it's breaking any usages of the butterknife plugin since ids generated in the R2 class are all 0x0.
You can temporarily go back to the previous behaviour by using android.useCompileClasspathLibraryRClasses=false (this will however get rid of the speed improvements that came with the compile classpath library r classes).
ah, I didn't know about that one!
Would still be great to see a butterknife fix - is it still possible for butterknife to generate the R2 class as before?
We'll see what we can do. :)
You can temporarily go back to the previous behaviour by using android.useCompileClasspathLibraryRClasses=false (this will however get rid of the speed improvements that came with the compile classpath library r classes).
@imorlowska It helps~ I didn't know this option, either. Thank you~
You can temporarily go back to the previous behaviour by using android.useCompileClasspathLibraryRClasses=false (this will however get rid of the speed improvements that came with the compile classpath library r classes).
How to set android.useCompileClasspathLibraryRClasses=false ?
You can temporarily go back to the previous behaviour by using android.useCompileClasspathLibraryRClasses=false (this will however get rid of the speed improvements that came with the compile classpath library r classes).
ButterKnife 10.2.1 needs AndroidX support, but my project cannot be updated to AndroidX right now, to avoid possible conflicts. So I tried to
set android.useCompileClasspathLibraryRClasses=false in gradle-wrapper.properties, but the problem still exists.
Which configuration file should android.useCompileClasspathLibraryRClasses=false be put in?
@imorlowska
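For anyone hitting the same question about where the flag goes: Gradle project properties like this belong in the project-level `gradle.properties` file, not in `gradle/wrapper/gradle-wrapper.properties` (which only configures the wrapper distribution). A minimal sketch:

```properties
# gradle.properties at the project root (next to settings.gradle),
# NOT gradle/wrapper/gradle-wrapper.properties
android.useCompileClasspathLibraryRClasses=false
```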
| gharchive/issue | 2019-09-06T13:05:24 | 2025-04-01T04:32:39.633950 | {
"authors": [
"NitroG42",
"athornz",
"bingsenxie",
"imorlowska",
"wangpengwen"
],
"repo": "JakeWharton/butterknife",
"url": "https://github.com/JakeWharton/butterknife/issues/1549",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |