| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
1359307148
|
feat: Support @attrs.frozen and @attrs.mutable
The attrs package has two additional decorators that are aliases for @attrs.define which should be included:
attrs.mutable() is an alias for attrs.define()
attrs.frozen() is an alias for define(frozen=True)
See https://www.attrs.org/en/stable/names.html#tl-dr
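For reference, a minimal sketch of the two aliases (assuming a recent attrs release):
import attrs

@attrs.mutable  # alias for @attrs.define
class Point:
    x: int
    y: int

@attrs.frozen   # alias for @attrs.define(frozen=True); instances are immutable
class Origin:
    x: int = 0
    y: int = 0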
Lovely, thanks @ngnpope :clap:
|
gharchive/pull-request
| 2022-09-01T19:27:02
|
2025-04-01T04:35:54.218145
|
{
"authors": [
"ngnpope",
"sondrelg"
],
"repo": "snok/flake8-type-checking",
"url": "https://github.com/snok/flake8-type-checking/pull/133",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
735013685
|
xlrd.xldate.XLDateAmbiguous err
xls2xlsx version: 0.1.3
Python version: 3.7.9
Operating System: Win 10 Pro, 1909
Description
xlrd.xldate.XLDateAmbiguous raised while converting to xlsx
I tried using Save As in Excel 2010 to output another xls file, but the same error still occurs.
What I Did
>>> from xls2xlsx import XLS2XLSX
>>> os.listdir()
['TSLINES HKG.xls']
>>> x2x = XLS2XLSX(os.listdir()[0])
>>> wb = x2x.to_xlsx()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\#####\Anaconda3\envs\ocean2\lib\site-packages\xls2xlsx\xls2xlsx.py", line 214, in to_xlsx
value = self.xls_date_to_xlsx(value)
File "C:\Users\#####\Anaconda3\envs\ocean2\lib\site-packages\xls2xlsx\xls2xlsx.py", line 71, in xls_date_to_xlsx
date_tuple = xlrd.xldate_as_tuple(value, self.date_mode)
File "C:\Users\#####\Anaconda3\envs\ocean2\lib\site-packages\xlrd\xldate.py", line 116, in xldate_as_tuple
raise XLDateAmbiguous(xldate)
xlrd.xldate.XLDateAmbiguous: 7.0
TSLINES HKG.zip
Excel has a bug it inherited from Lotus 1-2-3 where it thinks there is a Feb 29, 1900. This is called the Excel Leap Year bug. The xls reader I use (xlrd) raises this XLDateAmbiguous error when attempting to read dates between Jan 1, 1900 and Mar 1, 1900. I'll look to fix this eventually, as I plan to make a clone of xlrd (xlrd is a dead project right now).
On second thought, I should be able to come up with a quick patch for you for this tomorrow, as I have this issue solved in my new ssf (SpreadSheet Formatter) program. Root cause is this code in xlrd:
if xldays < 61 and datemode == 0:
    raise XLDateAmbiguous(xldate)
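For illustration, a caller-side guard could avoid the crash (a hypothetical sketch, not the actual xls2xlsx patch):
import xlrd

def safe_xldate_as_tuple(value, datemode):
    try:
        return xlrd.xldate.xldate_as_tuple(value, datemode)
    except xlrd.xldate.XLDateAmbiguous:
        # datemode-0 serials below 61 fall in the ambiguous
        # Jan 1 - Mar 1, 1900 window; return None so the caller can
        # keep the raw number instead of a date.
        return None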
That's great, thank you so much
As a workaround, you can clear out the very old dates in the hidden cells V46:V53.
That could be one solution, but I am not sure whether any formulas are based on the V46:V53 values. I have to look into it later.
As these Excel files are generated by the shipping company, I am not sure whether there are any formulas behind them.
Thanks
Just clear out the Formats for them then - the value "7.0" is formatted as a date, which causes the issue. I should have a fix for you later this evening, though.
Fixed in v0.1.5
|
gharchive/issue
| 2020-11-03T04:47:51
|
2025-04-01T04:35:54.223642
|
{
"authors": [
"PPTA22",
"snoopyjc"
],
"repo": "snoopyjc/xls2xlsx",
"url": "https://github.com/snoopyjc/xls2xlsx/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
283499746
|
Provide an API method to list all clusters that I have access to across all my spaces
import pprint
pp = pprint.PrettyPrinter(indent=4)

for org in cf.orgs_and_spaces():
    for space in org['spaces']:
        print('ORG {} | SPACE {}'.format(org['name'], space['name']))
        print()
        clusters = iae.clusters(space_guid=space['guid'])
        if len(clusters) > 0:
            for cluster in clusters:
                pp.pprint(cluster)
                print()
Done: https://github.com/snowch/ibm-analytics-engine-python/commit/4c9818d2fbe4aa8070eae5e632dca8ec58be1a22#diff-3bd56d45a6a2ef4c79eb3a5f8b471816R211
|
gharchive/issue
| 2017-12-20T09:31:14
|
2025-04-01T04:35:54.225183
|
{
"authors": [
"snowch"
],
"repo": "snowch/ibm-analytics-engine-python",
"url": "https://github.com/snowch/ibm-analytics-engine-python/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
150038711
|
Scala Core: placeholder for support for schema migrations
So instead of users creating 1-0-0, 1-0-1, 1-0-2, they create 1-0-0 and then just deltas, something like:
{ "operation": "ADD", "name": "userId", "type": "string" }
Kind of thing. We should try and keep this as close to the JSON Schema and Avro standards as possible. Potential scope for contributing these to the parent standards?
I explored this field, and it seems the only relevant standard is JSON Patch.
It has been an RFC draft for quite a long time, but it relates to instances and is thus very permissive. We need a standard that doesn't allow us to create invalid JSON Schemas.
We can add requirements here and start to sketch a JSON Schema somewhere. @alexanderdean what operations do you think we need? ADD, DROP, ALTER? What about RENAME?
{ "operation": "ADD", "name": "$.userId", "type": "string" }. I think its wise to use JSONPath, not a standard either, but has a broadest adoption among similar tools.
Hey @chuwy - agreed, let's start speccing this out here. I'd start by looking at the various ALTER options in SQL and seeing what we want to keep/tweak/add...
I also think it is a good idea to use an inner object with a subschema instead of adding schema properties as first-level properties. So instead of:
{
  "operation": "ADD",
  "name": "userId",
  "type": "string",
  "maxLength": 32
}
it would be
{
  "operation": "ADD",
  "name": "userId",
  "schema": {
    "type": "string",
    "maxLength": 32
  }
}
So we're reusing entities: the schema property is a valid subschema, and the whole schema migration is easier to describe with a schema.
We need a better name for schema, though; properties isn't a good idea either.
A migration could be a JSON array (so ordered) of the above statements?
A couple of process points:
We should obviously define JSON Schemas for the whole migration format
We should test the migration format with all existing schemas in Iglu Central that have undergone migration
We cannot have idempotent ALTERs. They must contain only new properties. Otherwise it will be impossible to determine the correct smallest possible SchemaVer bump.
We cannot have idempotent ALTERs. They must contain only new properties.
What does this mean precisely @chuwy ?
I previously had two possible ways to define migrations. The first one was "idempotent". For example, consider the following migration:
{
  "operation": "ALTER",
  "name": "$.userId",
  "schema": {
    "type": "string",
    "default": "none"
  }
}
It just sets type to string and default to none. It doesn't care if, for example, default already was none or type was already string. But I just realized that, looking only at this migration, we cannot determine whether it is an ADDITION (set default) or a REVISION (set type). On the other hand, if we restrict ALTERs to contain only the fields that changed, it should look like:
{
  "operation": "ALTER",
  "name": "$.userId",
  "schema": {
    "default": "none"
  }
}
and we can easily determine that this is an ADDITION which just sets default.
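To make the reasoning concrete, here is a toy Python sketch (not Iglu code) of how restricting ALTERs to only the changed fields makes the SchemaVer bump mechanically derivable; the split of keys into non-breaking and breaking ones is an illustrative assumption:
# Schema keys whose change merely relaxes/annotates -> ADDITION
ADDITION_KEYS = {'default', 'description'}

def classify_alter(changed_fields):
    """Return the smallest SchemaVer bump implied by an ALTER that
    carries only the fields that actually changed."""
    if set(changed_fields) <= ADDITION_KEYS:
        return 'ADDITION'
    return 'REVISION'

print(classify_alter({'default'}))          # ADDITION: only sets a default
print(classify_alter({'type', 'default'}))  # REVISION: changes the type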
That makes total sense @chuwy - I like the way this is evolving!
Just FYI: it seems the authors of JSON Schema have started to think about Draft v5 and are considering support for JSON Pointer. For example, they have proposed json-pointer as a string format (which is fantastic!) and also propose JSON Pointer for various internal things.
Taking the above into account, I think we can consider it as an alternative to JSONPath as well.
JSON Pointer is an alternative to the JSONPath standard (basically, JSONPath isn't a standard in any sense). It is an IETF draft and, compared to JSONPath, looks simpler, but it is also less expressive, and I couldn't find any Scala implementations.
Great spot!
Migrated to https://github.com/snowplow-incubator/iglu-scala-core/issues/27
|
gharchive/issue
| 2016-04-21T10:59:54
|
2025-04-01T04:35:54.275179
|
{
"authors": [
"aldemirenes",
"alexanderdean",
"chuwy"
],
"repo": "snowplow/iglu",
"url": "https://github.com/snowplow/iglu/issues/149",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
330869828
|
Integrate darksky as a weather provider
https://darksky.net/dev
Interesting. I swear by Dark Sky on my phone.
Good idea. We can actually come up with a decent list of weather providers, as scala-weather was never meant to be OWM-specific.
@asoltysik this is more of an exploratory ticket to check if there are better weather providers than the one we're currently using
I've checked out the Dark Sky docs; here is a short summary of my observations (the endpoint pattern is sketched below):
there is just one, less confusing, API for forecasts and history, instead of separate ones like in OWM
you can only get data from coordinates; there is no built-in geocoding
there is just one endpoint - when no time is provided it serves the current weather and a week's forecast; when a time is provided it serves either history or the forecast for that date
the pricing model is very simple, and AFAIK it's quite popular in weather apps
AccuWeather is also really popular, but I couldn't find an API for historical data.
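To illustrate the single-endpoint pattern described above, a sketch assuming Dark Sky's documented URL scheme (the API key and coordinates are placeholders):
import requests

KEY = "YOUR_API_KEY"
LAT, LON = 51.5074, -0.1278  # coordinates only; no built-in geocoding

# No time component: current conditions plus a week of forecast
now = requests.get(f"https://api.darksky.net/forecast/{KEY}/{LAT},{LON}").json()

# With a UNIX timestamp: history or forecast for that date
ts = 1528538476
then = requests.get(f"https://api.darksky.net/forecast/{KEY}/{LAT},{LON},{ts}").json()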
cool, changed the title :+1:
You want to see this on 0.4?
|
gharchive/issue
| 2018-06-09T09:21:16
|
2025-04-01T04:35:54.278802
|
{
"authors": [
"BenFradet",
"alexanderdean",
"asoltysik",
"chuwy"
],
"repo": "snowplow/scala-weather",
"url": "https://github.com/snowplow/scala-weather/issues/42",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
59841481
|
Scala Stream Collector: remove automatic creation of stream
This is a cutesy feature, but it adds unnecessary complexity to the SSC, and it's also dangerous:
Developer updates the SSC configuration file and accidentally adds a typo into the stream name
Leaves the shard count at 1
Re-deploys and goes home for the weekend
SSC creates new stream with typo name, chronically underprovisioned at 1 shard
Rest of pipeline sits idle looking for data in the correctly-named stream
Also note that there is a bug in the stream creation code, which means that the shard count is not used:
https://github.com/snowplow/snowplow/blob/master/2-collectors/scala-stream-collector/src/main/scala/com.snowplowanalytics.snowplow.collectors.scalastream/sinks/KinesisSink.scala#L104-L131
This code is copy-pasted into all the Kinesis apps!
This is another good reason why our Kinesis apps shouldn't be in the business of creating streams...
|
gharchive/issue
| 2015-03-04T18:42:18
|
2025-04-01T04:35:54.281760
|
{
"authors": [
"alexanderdean"
],
"repo": "snowplow/snowplow",
"url": "https://github.com/snowplow/snowplow/issues/1464",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
283606059
|
Create an example that starts from numpy arrays of Q, E, and S
The current example script https://github.com/sns-chops/multiphonon/blob/5ee3909b8bd2926c52942b580639c669903519f4/examples/getdos-Al.py starts from a histogram data file. We need another basic example that starts from Q, E, and S numpy arrays, creates a histogram out of them, and then processes it to a phonon DOS.
See also #84
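A rough sketch of what such an example might look like; the histogram constructor and the sqe2dos entry point below are assumptions for illustration, not verified against the package:
import numpy as np
import histogram as H  # assumed: the mcvine histogram package

# Toy S(Q, E) on a regular grid
Q = np.linspace(0.0, 10.0, 100)    # 1/angstrom
E = np.linspace(-50.0, 50.0, 200)  # meV
S = np.random.rand(Q.size, E.size)

# Build an I(Q,E) histogram from the numpy arrays (assumed constructor)
iqe = H.histogram(
    'IQE',
    (H.axis('Q', centers=Q, unit='1./angstrom'),
     H.axis('E', centers=E, unit='meV')),
    data=S)

# Process it to a phonon DOS (assumed signature; M=27 for aluminum)
from multiphonon import sqe2dos
for dos in sqe2dos.sqe2dos(iqe, T=300., Ecutoff=50.,
                           elastic_E_cutoff=(-10., 7.), M=27.):
    pass  # the last iterate is the final DOS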
|
gharchive/issue
| 2017-12-20T15:45:24
|
2025-04-01T04:35:54.283106
|
{
"authors": [
"yxqd"
],
"repo": "sns-chops/multiphonon",
"url": "https://github.com/sns-chops/multiphonon/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
167216858
|
One of the images of Past Performers section is falling out of the array when viewed on a small screen
I'm unable to reproduce the same issue on localhost. Anyone, any clue as to why? I'm using Python's SimpleHTTPServer.
You don't have the latest version @vishal
-facepalm-
I thought I fetched but oh well.
Cheers,
V.
|
gharchive/issue
| 2016-07-24T04:42:52
|
2025-04-01T04:35:54.288779
|
{
"authors": [
"FlameFractal",
"anuragsai97",
"rhnvrm"
],
"repo": "snu-breeze/breeze-landing",
"url": "https://github.com/snu-breeze/breeze-landing/issues/35",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2204226742
|
🛑 Archives is down
In e5a0df7, Archives (https://archives.sochara.org) was down:
HTTP code: 502
Response time: 571 ms
Resolved: Archives is back up in 545ec95.
|
gharchive/issue
| 2024-03-24T07:38:46
|
2025-04-01T04:35:54.332677
|
{
"authors": [
"geekodour"
],
"repo": "sochara-org/status",
"url": "https://github.com/sochara-org/status/issues/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2392768296
|
More date format regex corrections
Previously, a two-digit month of "00" would have been admitted as valid, and legitimate year values, e.g. "1899", would have been disallowed.
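A hypothetical sketch of the kind of correction described (an illustrative pattern, not the actual soda-core regex):
import re

# Months 01-12 only; any four-digit year, so "1899" stays legitimate.
DATE_RE = re.compile(
    r'^(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])$')

assert DATE_RE.match('1899-01-31')       # legitimate old year accepted
assert not DATE_RE.match('2024-00-15')   # two-digit month "00" rejected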
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2024-07-05T15:01:23
|
2025-04-01T04:35:54.374055
|
{
"authors": [
"CLAassistant",
"pholser"
],
"repo": "sodadata/soda-core",
"url": "https://github.com/sodadata/soda-core/pull/2128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
642704378
|
Check the update_at value before using
What this PR does / why we need it:
Check the update_at value before using it, because when the device has not been updated yet, the value of update_at will be None, which cannot be used to calculate the interval.
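A minimal sketch of the described guard (the helper name and device shape are hypothetical, not the actual SIM code):
from datetime import datetime, timezone

def seconds_since_update(device):
    update_at = device.get('update_at')
    if update_at is None:
        # The device has never been updated; no interval can be computed.
        return None
    return (datetime.now(timezone.utc) - update_at).total_seconds()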
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #225
Special notes for your reviewer:
Release note:
Codecov Report
Merging #226 into master will decrease coverage by 0.39%.
The diff coverage is 0.00%.
@@            Coverage Diff             @@
##           master     #226      +/-   ##
==========================================
- Coverage   70.82%   70.43%    -0.40%
==========================================
  Files          65       65
  Lines        4316     4258       -58
  Branches      484      484
==========================================
- Hits         3057     2999       -58
+ Misses       1117     1116        -1
- Partials      142      143        +1
| Impacted Files | Coverage Δ |
|---|---|
| dolphin/api/v1/storages.py | 71.42% <0.00%> (-1.23%) ↓ |
| dolphin/drivers/driver.py | 65.00% <0.00%> (-9.08%) ↓ |
| dolphin/wsgi/common.py | 64.28% <0.00%> (-3.46%) ↓ |
| dolphin/api/validation/validators.py | 44.35% <0.00%> (-1.74%) ↓ |
| dolphin/drivers/fake_storage/__init__.py | 88.76% <0.00%> (-1.56%) ↓ |
| dolphin/cryptor.py | 90.90% <0.00%> (-1.40%) ↓ |
| dolphin/api/v1/alert.py | 78.12% <0.00%> (-0.67%) ↓ |
| dolphin/api/common/__init__.py | 59.70% <0.00%> (-0.60%) ↓ |
| dolphin/context.py | 73.46% <0.00%> (-0.54%) ↓ |
... and 11 more
|
gharchive/pull-request
| 2020-06-22T02:36:28
|
2025-04-01T04:35:54.385885
|
{
"authors": [
"ThisIsClark",
"codecov-commenter"
],
"repo": "sodafoundation/SIM",
"url": "https://github.com/sodafoundation/SIM/pull/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1697944877
|
new
Issue/Feature Description:
Why this issue needs to be fixed / why the feature is needed (give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)
done
|
gharchive/issue
| 2023-05-05T17:20:13
|
2025-04-01T04:35:54.387764
|
{
"authors": [
"Arish2019"
],
"repo": "sodafoundation/contrib-lab01",
"url": "https://github.com/sodafoundation/contrib-lab01/issues/505",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
644992450
|
Enhancement to Multi-Cloud Block Support for AWS Create/List/Get/Update/Delete
What type of PR is this?
/kind new feature
What this PR does / why we need it:
Enhancement to the Volume API support for the AWS cloud backend. This PR is an enhancement to #1033. The Volume Service is enhanced for persisting data resources in the DB. This PR also contains the Driver Framework, including the Driver Factory implementation for the AWS Block Driver adapter.
Design Spec: https://github.com/sodafoundation/design-specs/blob/master/specs/multicloud/Block_Storage_Service.md
Which issue(s) this PR fixes:
Fixes #962 #963 #977 #978 #979 #1038 #1039 #1040 #1041 #1042 #1043
Test Report Added?:
/kind TESTED
Test Report:
The following testing is done for the Volume API:
[x] Register Backend
[x] List Backend
[x] Create Volume for AWS Backend
[x] List all Volume
[x] List Volume for AWS Backend
[x] Get Volume
[x] Update Volume for AWS Backend
[x] Volume FileShare
Register a Backend
POST http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/backends
Request
{
"Name": "aws-backend-block",
"Type": "aws-block",
"Region": "ap-south-1",
"Access": "myAccessKey",
"Security": "mySecretKey"
}
Response:
{
"id": "5f3cf5592d2e8f0001c11f1f",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"name": "aws-backend-block",
"type": "aws-block",
"region": "ap-south-1",
"access": "myAccessKey",
"security": "mySecretKey"
}
Create a general purpose Volume
POST http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Request:
{
"name": "himanshu",
"description": "AWS Volume",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"availabilityZone": "ap-south-1a",
"size": 1,
"type": "gp2",
"tags": [
{
"key": "Name",
"value": "himanshu"
}
]
}
Response:
{
"id": "5f3cf65a5dc8d300018d8db3",
"createdAt": "2020-08-19T15:22:26",
"updatedAt": "2020-08-19T15:22:26",
"name": "himanshu",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 1,
"type": "gp2",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"iops": 100,
"tags": [
{
"key": "Name",
"value": "himanshu"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:52:26Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-03deedaee2de0060f"
}
}
}
}
}
Create a provisioned IOPS Volume with Encryption
POST: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Request:
{
"name": "ashit",
"description": "AWS Volume",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"availabilityZone": "ap-south-1a",
"size": 4,
"type": "io1",
"iops": 100,
"encrypted": true,
"encryptionSettings": {
"KmsKeyId": "<KmsKeyId>"
},
"tags": [
{
"key": "Name",
"value": "ashit"
}
]
}
Response:
{
"id": "5f3cf6c05dc8d300018d8db4",
"createdAt": "2020-08-19T15:24:08",
"updatedAt": "2020-08-19T15:24:08",
"name": "ashit",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 4,
"type": "io1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"iops": 100,
"tags": [
{
"key": "Name",
"value": "ashit"
}
],
"encrypted": true,
"encryptionSettings": {
"KmsKeyId": "<KmsKeyId>"
},
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:54:08Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-06a7802d1713a0504"
}
}
}
}
}
Create a Cold HDD Volume
POST: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Request:
{
"name": "anvithks",
"description": "AWS Volume",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"availabilityZone": "ap-south-1a",
"size": 500,
"type": "sc1",
"tags": [
{
"key": "Name",
"value": "anvithks"
}
]
}
Response:
{
"id": "5f3cf7585dc8d300018d8db5",
"createdAt": "2020-08-19T15:26:40",
"updatedAt": "2020-08-19T15:26:40",
"name": "anvithks",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 500,
"type": "sc1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"tags": [
{
"key": "Name",
"value": "anvithks"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:56:40Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-0322888eed5227f94"
}
}
}
}
}
Create a Throughput Optimized HDD Volume
POST: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Request:
{
"name": "sanil",
"description": "AWS Volume",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"availabilityZone": "ap-south-1a",
"size": 500,
"type": "st1",
"tags": [
{
"key": "Name",
"value": "sanil"
}
]
}
Response:
{
"id": "5f3cf79f5dc8d300018d8db6",
"createdAt": "2020-08-19T15:27:51",
"updatedAt": "2020-08-19T15:27:51",
"name": "sanil",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 500,
"type": "st1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"tags": [
{
"key": "Name",
"value": "sanil"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:57:51Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-069aa8d55f90efeee"
}
}
}
}
}
Create a Magnetic(Standard) Volume
POST: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Request:
{
"name": "fannie",
"description": "AWS Volume",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"availabilityZone": "ap-south-1a",
"size": 1,
"type": "standard",
"tags": [
{
"key": "Name",
"value": "fannie"
}
]
}
Response:
{
"id": "5f3cf7d75dc8d300018d8db7",
"createdAt": "2020-08-19T15:28:47",
"updatedAt": "2020-08-19T15:28:47",
"name": "fannie",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 1,
"type": "standard",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"tags": [
{
"key": "Name",
"value": "fannie"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:58:46Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-0a4a33b0187aac057"
}
}
}
}
}
List Volumes
GET: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes
Response:
{
"volumes": [
{
"id": "5f3cf65a5dc8d300018d8db3",
"createdAt": "2020-08-19T15:22:26",
"updatedAt": "2020-08-19T15:22:32",
"name": "himanshu",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 1,
"type": "gp2",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "available",
"iops": 100,
"tags": [
{
"key": "Name",
"value": "himanshu"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:52:26.284Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-03deedaee2de0060f"
}
}
}
}
},
{
"id": "5f3cf6c05dc8d300018d8db4",
"createdAt": "2020-08-19T15:24:08",
"updatedAt": "2020-08-19T15:24:14",
"name": "ashit",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 4,
"type": "io1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "available",
"iops": 100,
"tags": [
{
"key": "Name",
"value": "ashit"
}
],
"encrypted": true,
"encryptionSettings": {
"KmsKeyId": "arn:aws:kms:ap-south-1:586825186478:key/35890dfd-efa4-48fa-9900-443f31a85835"
},
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:54:08.349Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-06a7802d1713a0504"
}
}
}
}
},
{
"id": "5f3cf7585dc8d300018d8db5",
"createdAt": "2020-08-19T15:26:40",
"updatedAt": "2020-08-19T15:26:46",
"name": "anvithks",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 500,
"type": "sc1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "creating",
"tags": [
{
"key": "Name",
"value": "anvithks"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:56:40.293Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-0322888eed5227f94"
}
}
}
}
},
{
"id": "5f3cf79f5dc8d300018d8db6",
"createdAt": "2020-08-19T15:27:51",
"updatedAt": "2020-08-19T15:27:57",
"name": "sanil",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 500,
"type": "st1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "available",
"tags": [
{
"key": "Name",
"value": "sanil"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:57:51.508Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-069aa8d55f90efeee"
}
}
}
}
},
{
"id": "5f3cf7d75dc8d300018d8db7",
"createdAt": "2020-08-19T15:28:47",
"updatedAt": "2020-08-19T15:28:53",
"name": "fannie",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 1,
"type": "standard",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "available",
"tags": [
{
"key": "Name",
"value": "fannie"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:58:46.998Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-0a4a33b0187aac057"
}
}
}
}
}
],
"next": 5
}
Get Volume
GET: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes/5f3cf65a5dc8d300018d8db3
Response:
{
"id": "5f3cf65a5dc8d300018d8db3",
"createdAt": "2020-08-19T15:22:26",
"updatedAt": "2020-08-19T15:22:32",
"name": "himanshu",
"description": "AWS Volume",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 1,
"type": "gp2",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "available",
"iops": 100,
"tags": [
{
"key": "Name",
"value": "himanshu"
}
],
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:52:26.284Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-03deedaee2de0060f"
}
}
}
}
}
PUT Volume
PUT: http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes/5f3cf65a5dc8d300018d8db3
Request:
{
"description": "AWS Volume Updated",
"size": 4,
"type": "io1",
"iops": 200,
"tags": [
{
"key": "Name",
"value": "himanshuSODA"
},
{
"key": "Department",
"value": "Development"
}
]
}
Response:
{
"id": "5f3cf65a5dc8d300018d8db3",
"createdAt": "2020-08-19T15:22:26",
"updatedAt": "2020-08-19T15:35:02",
"name": "himanshu",
"description": "AWS Volume Updated",
"tenantId": "94b280022d0c4401bcf3b0ea85870519",
"userId": "558057c4256545bd8a307c37464003c9",
"backendId": "5f3cf5592d2e8f0001c11f1f",
"backend": "aws-backend-block",
"size": 4,
"type": "io1",
"region": "ap-south-1",
"availabilityZone": "ap-south-1a",
"status": "updating",
"iops": 200,
"metadata": {
"fields": {
"CreationTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T09:52:26.284Z"
}
},
"Progress": {
"Kind": {
"NumberValue": 0
}
},
"StartTimeAtBackend": {
"Kind": {
"StringValue": "2020-08-19T10:05:02Z"
}
},
"VolumeId": {
"Kind": {
"StringValue": "vol-03deedaee2de0060f"
}
}
}
}
}
DELETE Volume
DELETE http://192.168.20.162:8089/v1/94b280022d0c4401bcf3b0ea85870519/volumes/5f3cf65a5dc8d300018d8db3
Response:
{}
Special notes for your reviewer:
This PR is for AWS Block Driver.
@wisererik Done
One generic comment: logs should have the first letter in lower case.
I have followed the existing multi-cloud project services' logging scheme/format. If we need to update it, we need to define generalized logging format rules and refactor the code.
As a generic rule, and per Go style suggestions, it is better to log error messages starting with lowercase.
LGTM
Please update the PR abstract and provide details.
@wisererik himanshu has replied to your comments. Please check and close.
Please see the PR description
@wisererik @sfzeng please check this PR
|
gharchive/pull-request
| 2020-06-24T21:32:32
|
2025-04-01T04:35:54.421566
|
{
"authors": [
"himanshuvar",
"kumarashit",
"rajat-soda",
"skdwriting"
],
"repo": "sodafoundation/multi-cloud",
"url": "https://github.com/sodafoundation/multi-cloud/pull/1034",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
720749005
|
Installer stopped when adding "egVIM"
I just reinstalled the latest Vim from scratch and noticed that if I add "egVIM" from the list of things to install, I get a couple of errors:
COPYFILES: No prompt specified at line 129.
COPYFILES: No help text specified at line 129.
I think I fixed this but I have no way of verifying that it works with the OS4 installer. Does the latest build work for you?
Yes it's fixed, just installed and now all goes well
You can close the ticket
Nice!
|
gharchive/issue
| 2020-10-13T19:52:17
|
2025-04-01T04:35:54.424769
|
{
"authors": [
"samo79",
"sodero"
],
"repo": "sodero/MUI-Vim",
"url": "https://github.com/sodero/MUI-Vim/issues/11",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
}
|
388572103
|
create db (core dumped)
Expected Behavior
trying to make a database from a fasta file (protein).
head dpann_cpr.faa
>OHA11763.1_MHQV01000001.1
MTVAIALLTCLILSGCGNIERNIAGLTGFSRMCIDGVSYLQFTSGVTVEYTREGKIKTCG
>OHA11759.1_MHQV01000001.1
MNDAFFDSELEGIAPRPIGRCYTSIEEAAADYADEVATFGLYGSHDDSNRVIEESHRDLSWEGRLGRR
>OHA11760.1_MHQV01000001.1
MRQRQGTSRVMLLVLGIVVAICSTTQPTVASDQDTPKLGDRARFPQPVMVAVANFPPEEIRTSLRTFRTGDRCKIDAGYE
VEAYALDGNRVLVYLDYRTPTDGVSCPRGTVFWLREDVFAAMKAVHQCGTNYTAEELAALLKSAGLKFE
>OHA11761.1_MHQV01000001.1
MELILEKLFESQAKVRILRLFLRNSTTNFTLEDVLRGTGLKRASALKEIAKLIKLRFLKSKNTDLVVSRVSGSGKTKKLR
MRSVRIRIYTTDPTFEFFRELRDLILRQVPESRHRIIQKLRKIGKVKLAVVTGAFINNEDARVDLLVVGEHVSRRKLESL
Current Behavior
core dump.
Steps to Reproduce (for bugs)
each time I launch the command
MMseqs Output (for bugs)
Please make sure to also post the complete output of MMseqs. You can use gist.github.com for large output.
mmseqs createdb dpann_cpr.faa dpann_cpr_mmseq_db/
Program call:
createdb dpann_cpr.faa dpann_cpr_mmseq_db/
MMseqs Version: a951e4dede7e9b52e514119d083ff4ca80ad1565
Max. sequence length 65535
Split Seq. by len true
Database type 0
Do not shuffle input database true
Offset of numeric ids 0
Compressed 0
Verbosity 3
................................................................................................... 1 Mio. sequences processed
................................................................................................... 2 Mio. sequences processed
.........................................................Time for merging files: 0h 0m 0s 772ms
Segmentation fault (core dumped)
Context
Providing context helps us come up with a solution and improve our documentation for the future.
(Strangely, it stopped working when I updated mmseqs2 to the newest version: I rm -rf'd the old directory and git cloned the new one.)
I updated my version because I had an "alignment died" error when doing an all-against-all search.
Your Environment
ubuntu 16.04
Include as many relevant details about the environment you experienced the bug in.
Git commit used (The string after "MMseqs Version:" when you execute MMseqs without any parameters):
MMseqs2 Version: a951e4dede7e9b52e514119d083ff4ca80ad1565
self compiled (default) (make, make install)
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10)
Please try commit e2d04a3.
Large scale changes are happening this week, with the introduction of compressed databases and HEAD is currently not very stable.
debug back trace:
gdb -r --args mmseqs createdb dpann_cpr.faa dpann_cpr_mmseq_db/
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mmseqs...expanding to full symbols...done.
(gdb) r
Starting program: /home/disque2To/home/romain/logiciel/MMseqs2/build/bin/mmseqs createdb dpann_cpr.faa dpann_cpr_mmseq_db/
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program call:
createdb dpann_cpr.faa dpann_cpr_mmseq_db/
MMseqs Version: a951e4dede7e9b52e514119d083ff4ca80ad1565
Max. sequence length 65535
Split Seq. by len true
Database type 0
Do not shuffle input database true
Offset of numeric ids 0
Compressed 0
Verbosity 3
................................................................................................... 1 Mio. sequences processed
................................................................................................... 2 Mio. sequences processed
.........................................................Time for merging files: 0h 0m 1s 184ms
Program received signal SIGSEGV, Segmentation fault.
__GI___fileno (fp=0x0) at fileno.c:35
35 fileno.c: No such file or directory.
(gdb) bt
#0 __GI___fileno (fp=0x0) at fileno.c:35
#1 0x000000000058a276 in Concat::concatFiles (files=0x1187530, n=32, outFile=0x0)
at /home/romain/logiciel/MMseqs2/src/commons/Concat.h:113
#2 0x0000000000588e40 in DBWriter::mergeResults (outFileName=0x117feb0 "dpann_cpr_mmseq_db/",
outFileNameIndex=0x11818e0 "dpann_cpr_mmseq_db/.index", dataFileNames=0x1186100, indexFileNames=0x1186320, fileCount=32,
lexicographicOrder=false) at /home/romain/logiciel/MMseqs2/src/commons/DBWriter.cpp:543
#3 0x0000000000586e74 in DBWriter::close (this=0x7fffffff57e0) at /home/romain/logiciel/MMseqs2/src/commons/DBWriter.cpp:239
#4 0x0000000000619954 in createdb(int, char const**, Command const&) ()
#5 0x0000000000555b7f in runCommand (p=..., argc=2, argv=0x7fffffffdc78) at /home/romain/logiciel/MMseqs2/src/commons/Application.cpp:62
#6 0x000000000055634a in main (argc=4, argv=0x7fffffffdc68) at /home/romain/logiciel/MMseqs2/src/commons/Application.cpp:135
I took a closer look at the crash.
Please pass a filename as the createdb output (dpann_cpr_mmseq_db, without the trailing slash), i.e. mmseqs createdb dpann_cpr.faa dpann_cpr_mmseq_db. We will try to handle the problem more gracefully in the future.
Also, please stick to the commit I mentioned (e2d04a3) for now.
It works well. Sorry for this, it was really a newbie question.
But thanks a lot ^^ and I will stick with the commit you mentioned.
|
gharchive/issue
| 2018-12-07T09:31:53
|
2025-04-01T04:35:54.441802
|
{
"authors": [
"milot-mirdita",
"rLannes"
],
"repo": "soedinglab/MMseqs2",
"url": "https://github.com/soedinglab/MMseqs2/issues/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1218110328
|
Does it support convert ROMP result to fbx format?
Does it support convert ROMP result to fbx format?
No, it doesn't.
This repository only supports the SMPL model with 24 joints and a rotation-angle representation. Other formats (22 joints, 6D rotation, 9D rotation, SMPL-X, etc.) would require an additional adapter.
|
gharchive/issue
| 2022-04-28T02:40:06
|
2025-04-01T04:35:54.464209
|
{
"authors": [
"jinfagang",
"softcat477"
],
"repo": "softcat477/SMPL-to-FBX",
"url": "https://github.com/softcat477/SMPL-to-FBX/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
30925681
|
plugins set bintrayOrganization to sbt
https://github.com/softprops/bintray-sbt/blob/99260bc318744fb13008c97bee147f08c8741e24/src/main/scala/plugin.scala#L212
Not sure why you set the org to sbt.
bintrayOrganization should now default to your bintray credentials username. This is more inline with the sbt doc suggestions - http://www.scala-sbt.org/0.13/docs/Bintray-For-Plugins.html
|
gharchive/issue
| 2014-04-05T22:31:06
|
2025-04-01T04:35:54.503517
|
{
"authors": [
"MasseGuillaume",
"softprops"
],
"repo": "softprops/bintray-sbt",
"url": "https://github.com/softprops/bintray-sbt/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
52202733
|
do a better job of documenting sync to maven central feature
including http://central.sonatype.org/pages/requirements.html#sufficient-metadata for sonatype newbies
Is it possible to read "sona.user" and "sona.pass" from the environment variables? We want to do the maven synchronization using Travis CI. Thank you!
@zsxwing yes it is. The plugin looks at SONA_USER and SONA_PASS env variables as well when looking for credentials for sonatype. This information is lacking in the README.
|
gharchive/issue
| 2014-12-17T05:20:15
|
2025-04-01T04:35:54.505231
|
{
"authors": [
"2m",
"softprops",
"zsxwing"
],
"repo": "softprops/bintray-sbt",
"url": "https://github.com/softprops/bintray-sbt/issues/40",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2511342024
|
Update logback-classic to 1.5.8
About this PR
📦 Updates ch.qos.logback:logback-classic from 1.3.14 to 1.5.8
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "ch.qos.logback", artifactId = "logback-classic" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "ch.qos.logback", artifactId = "logback-classic" }
}]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Superseded by #1057.
|
gharchive/pull-request
| 2024-09-07T00:26:10
|
2025-04-01T04:35:54.534079
|
{
"authors": [
"softwaremill-ci"
],
"repo": "softwaremill/elasticmq",
"url": "https://github.com/softwaremill/elasticmq/pull/1044",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
910451905
|
testing sttp with zio and schedule doesn't work as expected
Hi there @adamw, I am trying to test a simple piece of code using sttp and zio with repeat, and I get this: In Suite "SttpRequestWithRepeatPolicySpec$", test "Test 1" has taken more than 1 m to execute. If this is not expected, consider using TestAspect.timeout to timeout runaway tests for faster diagnostics.
you can find the sample minified here
https://github.com/mvillafuertem/sttp-zio-test
Without sttp, I have checked that zio repeat works well:
import zio._
import zio.console._
import zio.duration._
import zio.test._
import zio.test.Assertion._
import zio.test.environment._

object ExampleSpec extends DefaultRunnableSpec {
  val schedule = (Schedule.spaced(2.second) >>> Schedule.recurWhile[Long](_ < 5)) *>
    Schedule.collectAll[Int].tapInput[Console, Int](response => putStrLn(response.toString).exitCode)

  def spec =
    testM("test") {
      for {
        ref    <- Ref.make(0)
        fiber  <- ref.getAndUpdate(_ + 1).repeat(schedule).fork
        _      <- TestClock.adjust(20.seconds)
        values <- fiber.join
      } yield assert(values)(equalTo(Chunk.fromIterable(0 to 5)))
    }
}
Thanks
I've debugged this a little and to be honest I don't really know what is going on. It seems that there's something in the effect of sending an http request that prevents the scheduler from repeating the loop.
When running the slightly modified code, I can see that the effect runs once, completes successfully, but is never run again. Here's my code:
import io.circe.generic.extras.Configuration
import io.circe.generic.extras.auto._
import sttp.client3._
import sttp.client3.asynchttpclient.zio._
import sttp.client3.circe._
import zio.console.{Console, putStrLn}
import zio.duration._
import zio.test.Assertion.equalTo
import zio.test._
import zio.test.environment.{TestClock, TestEnvironment}
import zio.{Canceler, Chunk, ExitCode, RIO, Schedule, Task, UIO, URIO, ZIO}

object SttpZioTest extends DefaultRunnableSpec {
  implicit val customConfig: Configuration = Configuration.default

  case class Response(success: Boolean)
  case class Request(success: Boolean)

  //val stub = AsyncHttpClientZioBackend.stub.whenAnyRequest.thenRespond(Right(Response(true)))

  private val requestGET = basicRequest
    .get(uri"https://ene80m1n53nb.x.pipedream.net/")
    .response(asJson[Response])

  def send(backend: SttpBackend[Task, Any]): ZIO[Console, Throwable, Response] =
    (for {
      _ <- putStrLn("START")
      // r <- ZIO(Right(Response(true)))
      r <- backend.send(requestGET).map(_.body).absolve
      _ <- putStrLn("STOP")
    } yield r) //

  override def spec: Spec[TestEnvironment, TestFailure[Throwable], TestSuccess] =
    suite(getClass.getSimpleName)(
      testM(s"repeat policy")(
        assertM(
          for {
            backend <- AsyncHttpClientZioBackend()
            fiber <- send(backend)
              .repeat(
                (Schedule.spaced(2.second) >>> Schedule.recurWhile[Long](_ < 5)) *>
                  Schedule.collectAll[Response].tapInput[Console, Response](response => putStrLn(response.toString).exitCode)
              )
              .catchAll(_ => RIO.effect(Chunk(Response(false))))
              .fork
            _ <- TestClock.adjust(20.seconds)
            _ <- putStrLn("Adjusted")
            actual <- fiber.join
            _ <- putStrLn("Joined")
          } yield actual
        )(equalTo(Chunk.fill(5)(Response(true))))
      )
    )
}
this outputs (without AHC debug logs):
Adjusted
START
STOP
Response(true)
and there's never a second iteration. Things work well if I use another effect (here: ZIO(Right(Response(true))) - I also tried with async ones) or a stub backend.
Another data point, is that when I run this as a stand-alone application, things work properly:
object TestApp extends zio.App {
  implicit val customConfig: Configuration = Configuration.default

  case class Response(success: Boolean)
  case class Request(success: Boolean)

  private val requestGET = basicRequest
    .get(uri"https://ene80m1n53nb.x.pipedream.net/")
    .response(asJson[Response])

  def doSend(backend: SttpBackend[Task, Any]): ZIO[Console, Throwable, Response] =
    (for {
      _ <- putStrLn("START")
      r <- backend.send(requestGET).map(_.body).absolve
      _ <- putStrLn("STOP")
    } yield r) //

  override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] = {
    (for {
      backend <- AsyncHttpClientZioBackend()
      fiber <- doSend(backend)
        .repeat(
          (Schedule.spaced(2.second) >>> Schedule.recurWhile[Long](_ < 5)) *>
            Schedule.collectAll[Response].tapInput[Console, Response](response => putStrLn(response.toString).exitCode)
        )
        .catchAll(_ => RIO.effect(Chunk(Response(false))))
        .fork
      _ <- putStrLn("Adjusted")
      actual <- fiber.join
      _ <- putStrLn("Joined")
    } yield actual).exitCode
  }
}
So I'm guessing the combination of TestClock + something that's special about the send(...) effect causes ZIO to stop looping according to the schedule.
But that's as far as I've got, no further ideas on how to debug this. Maybe it would make sense to raise an issue in ZIO, if they had any ideas on how to approach this?
Ah! I think I found how to reproduce without sttp. I'll report in ZIO shortly
@mvillafuertem Done, I think we'll have to see how the ZIO issue evolves :)
@adamw Thanks a lot!
Hi @adamw, any updates?
@mvillafuertem I don't think we can do anything other than get the issue fixed in ZIO? I mean, it's not something we can fix in sttp, right?
Closing as this needs to be fixed in ZIO, not an sttp problem
|
gharchive/issue
| 2021-06-03T12:38:27
|
2025-04-01T04:35:54.541177
|
{
"authors": [
"adamw",
"mvillafuertem"
],
"repo": "softwaremill/sttp",
"url": "https://github.com/softwaremill/sttp/issues/1003",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
89894211
|
Untranslated Strings
Some strings still are untranslated, some examples are:
'Please verify your email first. Check the email and follow the link!'
'Successful Registration! Please check your email and follow the instructions.'
Could you explain why you are closing this?
I have no idea what you are talking about, you did not even specify a file.
The reason I didn't specify a file is that it's a problem with all of them.
These messages are not being translated into any language.
Steps to reproduce: register an account and you will see 'Successful Registration! Please check your email and follow the instructions.' (you probably need to have email verification turned on). Then try logging in without verifying the account to see 'Please verify your email first. Check the email and follow the link!'.
I see. I guess these strings were introduced after the project was started in February 2014. Sorry for closing this too quickly.
Allow me to add those to this list of untranslated Strings. I can't seem to find them in any file. I can only confirm they are not being translated at least for Portuguese and Spanish.
I could solve two of them with the following mapping. Phone is not shown in the picture, but it appears as a placeholder when we add the "tel" input type.
However I can't seem to get the "Minimum required length" to work. I tried with and without ":".
T9n.map('pt', {
  Phone: 'Telefone',
  'Invalid email': 'Email inválido',
  'Minimum required length': 'Tamanho mínimo'
});
+1 same here.. please reopen issue
I tried to remap text fields to my language translation file:
AccountsTemplates.configure
  texts:
    button:
      resendVerificationEmail: "resendVerificationEmail"
    errors:
      captchaVerification: "error.captchaVerification"
      validationErrors: "validationErrors"
      verifyEmailFirst: "info.verificationEmailSent"
    info:
      pwdSet: "info.pwdSet"
      signUpVerifyEmail: 'info.signUpVerifyEmail'
      verificationEmailSent: 'info.verificationEmailSent'
    title:
      resendVerificationEmail: "resendVerificationEmailTitle"
      verifyEmail: "verifyEmail"
    maxAllowedLength: "maxAllowedLength"
    minRequiredLength: "minRequiredLength"
    requiredField: "requiredField"
    resendVerificationEmailLink_link: "resendVerificationEmailLink"
    resendVerificationEmailLink_pre: "resendVerificationEmailLinkPre"
I was successful with most of the mapped strings, but not with some. Here are the strings that were impossible to translate:
maxAllowedLength: "maxAllowedLength"
minRequiredLength: "minRequiredLength"
requiredField: "requiredField"
plus "Invalid Email" on the registration form's email field.
Any idea how to fix this? Maybe this issue is closer to the accounts-core package?
Thanks
@MartinBucko: I see no reason why a text cannot be translated, but in your examples the key matches the value, so you will never see any effect. If anybody finds an untranslated string, please add it to the section of the corresponding package, at least in the English translation, which is the blueprint for everything else.
What about the message that appears on first login through the email verification link?
How can I translate it?
+1 I'm also puzzled, because some of the texts appear in English in a German form. For instance, if I do not enter something into a required field, I'm getting a message like this: "Benutzername: Required Field". Any idea how I could work around this?
@derwaldgeist: I don't see a translation that includes the string "Required Field". Where does it come from?
I don't know where it comes from, but it appears on the signup page of the useraccounts package if you don't enter any values in the fields. Maybe it's baked into Meteor's standard accounts packages? useraccounts only refers to t9n regarding translations. I definitely did not add this text myself.
Is your code online? I can't help you without understanding your problem. If you do a full-text search with e.g. grep you should see where this comes from. I'm pretty sure it does not come from t9n; if it's missing, feel free to add it.
Here's the file it comes from:
https://github.com/meteor-useraccounts/core/blob/master/lib/core.js
Cool. Do you want to add a translation?
Yes, no problem. Shall I create a PR for that? Not sure if I will make it right, but I will try my best :-)
I don't understand - requiredField translates fine with T9n!!
var pt = {
  "Required Field": "Campo Obrigatório"
};
T9n.map('pt', pt);
T9n.setLanguage('pt');
Strange. This did not work for me.
When the useraccounts form is open in your browser, please try writing this in the console:
var pt = {
  "Required Field": "Campo Obrigatório"
};
T9n.map('pt', pt);
T9n.setLanguage('pt');
Yes, it works with that key. Thanks.
OK, so all you need is to run it on the client in the right place:
create the map in a file in the lib folder, and change the language like here:
Meteor.startup(function () {
  if (Meteor.isClient) {
    T9n.setLanguage('pt');
  }
});
Good luck!
@derwaldgeist, @gVolop: If you find untranslated strings in standard libraries you can simply send a pull request to add them. This way it works for everybody out of the box; exactly this is what this issue is about. Thanks @gVolop for the explanation.
Yes, I sent a request, but how can I know when it's handled? Will this request be updated?
@gVolop: Can you point me to the PR? I don't see it.
https://github.com/softwarerero/meteor-accounts-t9n/issues/105#issuecomment-166879140
Can you explain this? I do not understand your question.
Is T9n the translation package for useraccounts?
It is missing the text that appears in the dialog that opens after first login through the email verification link (the dialog image above).
@gVolop: The idea is to provide translations for common meteor packages like useraccount out of the box. But no one person can provide those translations for all 30+ translations. So please, if you find a missing key provide it and do not just report it, this cannot work. I, for example, do not speak Portuguese.
You're right, it's impossible for one person to control all of this.
But I mean the option to map these keys, like the 'Required Field' mapping, not embedding a translation for each language.
And I really can't find the above situation in the useraccounts texts; maybe the dialog comes from another package? Maybe accounts-password? I will try to check it.
Sorry, made some erroneous PR. Last one should be good. My apologies.
I have updated the Italian file with the suggested changes. Now I have one question that maybe requires a more general discussion:
Can you give me a hint for the "Maximum allowed length" and "Minimum required length" localization?
Adding a translation for the "maxAllowedLength" and "minRequiredLength" variables:
maxAllowedLengt: "Lunghezza massima consentita"
minRequiredLength: "Lunghezza minima consentita"
doesn't work.
Adding a localization for the strings:
"Maximum allowed length": "Lunghezza massima consentita"
"Minimum required length": "Lunghezza minima consentita"
doesn't work either.
Adding a translation for the exact string rendered in HTML
"Maximum allowed length: 6": "Lunghezza massima consentita: 6"
"Minimum required length: 6": "Lunghezza minima richiesta: 6"
works, but it's fixed, so it's not good and only a temporary solution.
From what I've seen, those variables are used by the "Field.prototype.validate" function inside meteor-useraccounts/core/lib/field.js, at line 253 and at line 261, in conjunction with the "minLength" and "maxLength" variables.
I've tried to localize the resulting error string this way:
"Maximum allowed length: @{maxLength}": "Lunghezza massima consentita: @{maxLength}"
"Minimum required length: @{minLength}": "Lunghezza minima richiesta: @{minLength}"
without any success.
Does anyone have a hint on how to localize this kind of concatenated string?
Thanks in advance.
The easiest thing would be to have a translation like AccountsTemplates.texts.minRequiredLength = "Maximum allowed length: @{minLength}". Then you could call T9n.get('AccountsTemplates.texts.minRequiredLength', true, {minLength: 6}). Maybe @splendido can tell if that makes sense and where AccountsTemplates is defined so you can provide a PR.
@softwarerero I saw you merged my PR. Will you update atmospherejs.com with this new version?
v1.3.5 is out
Hi everyone,
I use release 1.3.11, but the key "Required Field" is missing for the Khmer and Chinese languages.
How can I contribute? Or could someone do something?
Sincerely
Hi, if you know how to do a pull request on GitHub just go ahead. If not you can just add the translations to the files and attach those files to this or a new ticket. Then I will be able to integrate it.
Hi softwarerero,
Thanks for your response.
Here are the two keys to add:
Khmer file: "Required field": "វាលដែលត្រូវការ"
Chinese file: "Required field": "必填项目"
Thank you in advance for this correction.
Anyway, I have another bug when I wanted to upgrade your plugin release:
While selecting package versions:
error: No version of softwarerero:accounts-t9n satisfies all constraints: @1.3.11, @=2.0.0-beta.2
Constraints on package "softwarerero:accounts-t9n":
softwarerero:accounts-t9n@1.3.11 <- top level
softwarerero:accounts-t9n@=2.0.0-beta.2 <- top level
softwarerero:accounts-t9n@1.3.3 <- useraccounts:core 1.14.2 <- useraccounts:flow-routing 1.12.0
Do you have a solution?
Thanks and best regards!
I need to look into this. Do you remember what you did exactly to upgrade?
Hi @tuxyvarman, I just released 2.0.2 which includes your translations for "Required field". About your bug I still do not see how this can happen.
Hi @softwarerero,
I just cannot upgrade your plugin, because I use the "useraccounts" plugin and the dependencies are not satisfied.
I still don't think this has any relation with "untranslated strings". I guess the line https://github.com/meteor-useraccounts/core/blob/master/package.js#L33 needs to be updated, you can do it in a fork.
Closing as this is basically unfixable. There will always be new language strings and untranslated languages.
|
gharchive/issue
| 2015-06-21T10:38:13
|
2025-04-01T04:35:54.569947
|
{
"authors": [
"MartinBucko",
"TAFKAR",
"derwaldgeist",
"gVolop",
"juliomac",
"llvasconcellos",
"softwarerero",
"tdbs",
"tuxyvarman"
],
"repo": "softwarerero/meteor-accounts-t9n",
"url": "https://github.com/softwarerero/meteor-accounts-t9n/issues/105",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2418320304
|
UI Dashboard - Technical documentation update
Please revise the chapter. Divide it into Functionality (a description of what is delivered in the first prototype) and Future work (everything else). The Technology chapters should also be revised, keeping only what we are using in the first prototype.
This chapter can maybe link to the functionality of other technical components, such as data download & export. Should we rename the component to Metadata Authoring? Manual update of metadata can even be skipped for the first iteration.
Text in this chapter will be split between other components - Metadata Catalogue, Metadata Authoring,...
|
gharchive/issue
| 2024-07-19T08:08:40
|
2025-04-01T04:35:54.573642
|
{
"authors": [
"DajanaSnopkova"
],
"repo": "soilwise-he/SoilWise-documentation",
"url": "https://github.com/soilwise-he/SoilWise-documentation/issues/44",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2549871745
|
harvest from prepsoil
Prepsoil is an HE-funded project creating a knowledge hub with data, knowledge, and living labs in preparation for the Soil Deal for Europe.
Knowledge hub
Materials are split into various languages
Endpoint knowledge-hub
Available fields:
title
land-type
country
language
media-format
category
location
soil-qualities-properties
link {name, link}
mission objective
source
sustainable practice
content type
Content types:
Best practices and tools 65
Scientific 62
Education & Training material 51
Other 40
Policy documents 27
Interview 11
Conference deliverables 9
Livinglabs
Endpoint for living labs is https://prepsoil.eu/api/lllh (a fetch sketch follows the field list below)
Available fields:
title,
description,
country,
category,
location,
link,
source,
experiment-site
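A minimal Node sketch of harvesting that endpoint (assumes Node 18+ for the global fetch, and that the endpoint returns a JSON array of records carrying the fields listed above — the exact response shape is an assumption):
const ENDPOINT = 'https://prepsoil.eu/api/lllh';

async function harvestLivingLabs() {
  const response = await fetch(ENDPOINT);
  if (!response.ok) {
    throw new Error(`prepsoil API returned HTTP ${response.status}`);
  }
  const records = await response.json();
  // Keep only the subset of fields we want to harvest.
  return records.map((record) => ({
    title: record.title,
    country: record.country,
    category: record.category,
    link: record.link
  }));
}

harvestLivingLabs()
  .then((labs) => console.log(`harvested ${labs.length} living labs`))
  .catch(console.error);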
knowledge harvest implemented in https://github.com/soilwise-he/harvesters/commit/af282488a6a12243ccab10ae43807cdaecf6db0b
|
gharchive/issue
| 2024-09-26T08:21:40
|
2025-04-01T04:35:54.580493
|
{
"authors": [
"pvgenuchten"
],
"repo": "soilwise-he/harvesters",
"url": "https://github.com/soilwise-he/harvesters/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1142980032
|
Add coingeckoId for SHARDS token
Please see https://github.com/solana-labs/token-list/pull/19179
merged
|
gharchive/issue
| 2022-02-18T12:35:34
|
2025-04-01T04:35:54.607669
|
{
"authors": [
"SolChicks",
"keone"
],
"repo": "solana-labs/token-list",
"url": "https://github.com/solana-labs/token-list/issues/19180",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1058451473
|
add new token THECA
Please note: This repository is being rebuilt to accept the new volume of token additions and modifications. PR merges will be delayed.
I agree to not ping anybody on Discord/Twitter/email about this pull request. Instead I will inquire by posting a new comment in the pull request if needed.
PRs are reviewed in bulk and can take up to two weeks to be merged.
This repository is managed using an auto merge action. Please ensure your PR has no deleted lines, and it will be merged.
Please provide the following information for your token.
Please include change to the src/tokens/solana.tokenlist.json file in the PR.
DON'T modify any other token on the list.
At minimum each entry should have
Token Address:
Token Name:
Token Symbol:
Logo: (logo should be uploaded under assets/mainnet//*.<png/svg>)
Link to the official homepage of token:
Coingecko ID if available (https://www.coingecko.com/api/documentations/v3#/coins/get_coins__id_):
Auto merge requirements
Your pull request will be automatically merged if the following conditions are met:
Your pull request only adds new tokens to the list. Any modification to existing
tokens will require manual review to prevent unwanted modifications.
Your pull request does not touch unrelated code. In particular, reformatting changes to unrelated
code will cause the auto merge to reject your PR.
Any asset files added correspond to the token address you are adding. Asset files
must be PNG, JPG or SVG files.
Your change is valid JSON and conforms to the schema. If your change failed validation,
read the error message carefully and update your PR accordingly.
No other tokens shares the same name, symbol or address.
For example, this change would be rejected due to unrelated changes:
The bot runs every 60 minutes and bulk-merges all open pull requests to prevent conflicts.
This means that you need to wait up to 60 minutes for your pull request to be merged or reprocessed.
corrected website URL and Twitter
|
gharchive/pull-request
| 2021-11-19T11:58:34
|
2025-04-01T04:35:54.615126
|
{
"authors": [
"Theca-labs"
],
"repo": "solana-labs/token-list",
"url": "https://github.com/solana-labs/token-list/pull/4761",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
625682881
|
How can I create / get the DemoActivationTool.LicenseSign.pfx file?
How can I create / get the DemoActivationTool.LicenseSign.pfx file?
Here are the steps to create the KeyPair and exported public key file:
https://www.codeproject.com/Articles/996001/A-Ready-To-Use-Software-Licensing-Solution-in-Csha#premain925383
|
gharchive/issue
| 2020-05-27T13:08:53
|
2025-04-01T04:35:54.616592
|
{
"authors": [
"AmerQwaider",
"luifertorres"
],
"repo": "soldierq/QLicense",
"url": "https://github.com/soldierq/QLicense/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
141745547
|
parsing RDF+XML
I am trying to read an RDF/XML file and parse it into JSON, a JavaScript object, or any other format. I searched a lot of Node libraries but couldn't find any good example of how to do so. Here is what I have tried.
var fs = require('fs'),
$rdf = require('rdflib');
//var parser = new xml2js.Parser();
fs.readFile(__dirname + '/1.xml', function(err, data) {
// Fetch data via a regular AJAX call, load from a file, or pass in a literal
// string. In this example, it was loaded from 'https://fred.me/profile'
var store = $rdf.graph() // Init a new empty graph
var contentType = 'application/rdf+xml'
var baseUrl = ""
var parsedGraph = $rdf.parse(data, store, baseUrl, contentType);
$rdf.parse(data,function(triples){
for (var i in triples){
console.log(triples[i]);
}
})
});
here is the 1.xml file
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:oneM2M="http://www.onem2m.org/ontology/Base_Ontology#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:cfso="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
<oneM2M:Thing rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF">
<cfso:hasSpecies>
<cfso:Species rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Type1"/>
</cfso:hasSpecies>
<cfso:hasSPName>
<cfso:SPName rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Maxfor"/>
</cfso:hasSPName>
<cfso:hasLocation>
<cfso:Location rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Buyeo"/>
</cfso:hasLocation>
<cfso:hasCropName>
<cfso:CropName rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#CharryTomato">
<cfso:hasSpecies rdf:resource="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Type1"/>
</cfso:CropName>
</cfso:hasCropName>
<cfso:hasControlMode>
<cfso:ControlMode rdf:about="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#A"/>
</cfso:hasControlMode>
</oneM2M:Thing>
</rdf:RDF>
I don't understand what the problem is. Did it fail to parse? Was there an error message?
You should try to catch the error if possible (in a try/catch block):
try {
$rdf.parse(data, store, uri, contentType)
console.log(store.statements) // shows the parsed statements
} catch (err) {
console.log(err)
}
$rdf.parse() parses data from the first function parameter and stores it in the store object you pass as the second parameter. If you want to take a look at the data, you can use the function described here: https://github.com/solid/solid-tutorial-rdflib.js#using-data-in-the-store.
Also, you should really pass the URI to $rdf.parse(). For example, you can use source URI https://fred.me/profile.
Can you explain more about the function of the source URI? What is its purpose?
I'm working on a very similar problem to the one above. The error that I keep getting is:
Error: Error trying to parse <http://www.exampleuri.com> as application/rdf+xml: TypeError: kb.sym is not a function
The purpose of the source URL is, firstly, that many (and often all) of the URLs in the RDF serialization are relative, and so need the base URI to be resolved into absolute ones.
In "kb.sym is not a function" it sounds you are passing something other that a graph store (aka knowledge base) as the store parameter? just guessing. Make one with kb = $rdf.graph()
I think the line in the tutorial, var store = $rdf.graph, is missing parens. It should be:
var store = $rdf.graph()
@dmitrizagidulin you are right! I fixed the example.
@timbl Thank you - that's helpful to know. In my particular case, all of the base URLs are namespaced at the top of the file, so all of them can be resolved absolutely within the file, as I'm understanding it. Is there any other need to have that as an input? Adding parens to $rdf.graph seems to have fixed the initial problem I was having, thanks @dmitrizagidulin!
@deiu Previously it was showing this error:
D:\RDF parsing\node_modules\rdflib\dist\rdflib-node.js:10789
throw new Error('Error trying to parse <' + base + '> as ' +
^
Error: Error trying to parse <undefined> as undefined:
Error: Don't know how to parse undefined yet:
Error: Don't know how to parse undefined yet
at Object.parse (D:\RDF parsing\node_modules\rdflib\dist\rdflib-node.js:1076
8:13)
at D:\RDF parsing\parse.js:18:10
at fs.js:266:14
at Object.oncomplete (fs.js:107:15)
at executeErrorCallback (D:\RDF parsing\node_modules\rdflib\dist\rdflib-node
.js:10789:15)
at Object.parse (D:\RDF parsing\node_modules\rdflib\dist\rdflib-node.js:1077
1:5)
at D:\RDF parsing\parse.js:18:10
at fs.js:266:14
at Object.oncomplete (fs.js:107:15)
Now I have added the baseUrl like you said and changed the source as well, like below:
var fs = require('fs'),
$rdf = require('rdflib');
//var parser = new xml2js.Parser();
fs.readFile(__dirname + '/1rdf.xml', function(err, data) {
// Fetch data via a regular AJAX call, load from a file, or pass in a literal
// string. In this example, it was loaded from 'https://fred.me/profile'
var store = $rdf.graph() // Init a new empty graph
var contentType = 'application/rdf+xml'
var baseUrl = "http://www.IoF.com/ontology"
try {
//var parsedGraph = $rdf.parse(data, store, baseUrl, contentType);
//console.log(store.statements);
} catch (err) {
console.log(err)
}
/*$rdf.parse(data,function(triples){
for (var i in triples){
console.log(triples[i]);
}
})*/
});
and the 1rdf.xml file:
<?xml version="1.0"?>
<!DOCTYPE rdf:RDF [
<!ENTITY owl "http://www.w3.org/2002/07/owl#" >
<!ENTITY xsd "http://www.w3.org/2001/XMLSchema#" >
<!ENTITY rdfs "http://www.w3.org/2000/01/rdf-schema#" >
<!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#" >
<!ENTITY oneM2M "http://www.onem2m.org/ontology/Base_Ontology#" >
<!ENTITY cfso "http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#" >
]>
<rdf:RDF xmlns="http://www.IoF.com/ontology#"
xml:base="http://www.IoF.com/ontology"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cfso="http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#"
xmlns:oneM2M="http://www.onem2m.org/ontology/Base_Ontology#">
<owl:Ontology rdf:about="http://www.IoF.com/ontology"/>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Annotation properties
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#hasControlMode -->
<owl:AnnotationProperty rdf:about="&cfso;hasControlMode"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#hasCropName -->
<owl:AnnotationProperty rdf:about="&cfso;hasCropName"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#hasLocation -->
<owl:AnnotationProperty rdf:about="&cfso;hasLocation"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#hasSPName -->
<owl:AnnotationProperty rdf:about="&cfso;hasSPName"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#hasSpecies -->
<owl:AnnotationProperty rdf:about="&cfso;hasSpecies"/>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Classes
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#ControlMode -->
<owl:Class rdf:about="&cfso;ControlMode"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#CropName -->
<owl:Class rdf:about="&cfso;CropName"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Location -->
<owl:Class rdf:about="&cfso;Location"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#SPName -->
<owl:Class rdf:about="&cfso;SPName"/>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Species -->
<owl:Class rdf:about="&cfso;Species"/>
<!-- http://www.onem2m.org/ontology/Base_Ontology#Thing -->
<owl:Class rdf:about="&oneM2M;Thing"/>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Individuals
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#A -->
<owl:NamedIndividual rdf:about="&cfso;A">
<rdf:type rdf:resource="&cfso;ControlMode"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Buyeo -->
<owl:NamedIndividual rdf:about="&cfso;Buyeo">
<rdf:type rdf:resource="&cfso;Location"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#CharryTomato -->
<owl:NamedIndividual rdf:about="&cfso;CharryTomato">
<rdf:type rdf:resource="&cfso;CropName"/>
<cfso:hasSpecies rdf:resource="&cfso;Type1"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF -->
<owl:NamedIndividual rdf:about="&cfso;KETICF">
<rdf:type rdf:resource="&oneM2M;Thing"/>
<cfso:hasControlMode rdf:resource="&cfso;A"/>
<cfso:hasLocation rdf:resource="&cfso;Buyeo"/>
<cfso:hasCropName rdf:resource="&cfso;CharryTomato"/>
<cfso:hasSPName rdf:resource="&cfso;Maxfor"/>
<cfso:hasSpecies rdf:resource="&cfso;Type1"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Maxfor -->
<owl:NamedIndividual rdf:about="&cfso;Maxfor">
<rdf:type rdf:resource="&cfso;SPName"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Type1 -->
<owl:NamedIndividual rdf:about="&cfso;Type1">
<rdf:type rdf:resource="&cfso;Species"/>
</owl:NamedIndividual>
</rdf:RDF>
<!-- Generated by the OWL API (version 3.4.2) http://owlapi.sourceforge.net -->
Now the output goes into an infinite loop. I had to press Ctrl+C. The output is something like this:
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#
KETICF -->
<owl:NamedIndividual rdf:about="&cfso;KETICF">
<rdf:type rdf:resource="&oneM2M;Thing"/>
<cfso:hasControlMode rdf:resource="&cfso;A"/>
<cfso:hasLocation rdf:resource="&cfso;Buyeo"/>
<cfso:hasCropName rdf:resource="&cfso;CharryTomato"/>
<cfso:hasSPName rdf:resource="&cfso;Maxfor"/>
<cfso:hasSpecies rdf:resource="&cfso;Type1"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#
Maxfor -->
<owl:NamedIndividual rdf:about="&cfso;Maxfor">
<rdf:type rdf:resource="&cfso;SPName"/>
</owl:NamedIndividual>
<!-- http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#
Type1 -->
<owl:NamedIndividual rdf:about="&cfso;Type1">
<rdf:type rdf:resource="&cfso;Species"/>
</owl:NamedIndividual>
</rdf:RDF>
<!-- Generated by the OWL API (version 3.4.2) http://owlapi.sourceforge.net -->
has no method 'indexOf'
@#[line:0,col:undefined]
It sounds like an error coming from xmldom. Maybe this helps: https://github.com/jindw/xmldom/issues/153
@deiu Thank you. That solved the problem: fs.readFile returns a Buffer instead of a string. Here is the working source:
var fs = require('fs'),
    $rdf = require('rdflib');

var rdfData = fs.readFileSync(__dirname + '/1.xml').toString(); // toString() converts the Buffer to a string
var store = $rdf.graph();
var contentType = 'application/rdf+xml';
var baseUrl = "http://IoFTriples.com";

try {
    $rdf.parse(rdfData, store, baseUrl, contentType);
    var stms = store.each(undefined, undefined, undefined);
    for (var i = 0; i < stms.length; i++) {
        var stm = stms[i];
        console.log(stm); // log each node found in the store
    }
} catch (err) {
    console.log(err);
}
output:
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Type1' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Maxfor' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#Buyeo' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#CharryTomato' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#CharryTomato' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#KETICF' }
NamedNode {
uri: 'http://203.254.173.81:8080/ontologies/ConnectedFarmServiceOntology.owl#A' }
|
gharchive/issue
| 2016-03-18T00:58:45
|
2025-04-01T04:35:54.666687
|
{
"authors": [
"adynata",
"biborno",
"deiu",
"dmitrizagidulin",
"timbl"
],
"repo": "solid/solid-tutorial-rdflib.js",
"url": "https://github.com/solid/solid-tutorial-rdflib.js/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
528216126
|
enable test involvement by clients
Currently testing of ACL (and indeed LDP) is limited to a test suite that can only test some very limited aspects of a server. It won't be able to test most aspects since most resources on the Solid Web will be access controlled.
What is needed is a way to allow clients to report potential spec incompatibilities to servers, or client authors, or indeed to users so that the cause of problems can be correctly attributed to the rightful party.
Imagine for example that the rel="acl" Web ACLs resource is readable. This would allow clients to decide whether to bother authenticating to a resource, which credential they should use, or if they even could get access by becoming a member of a group (see linked issue).
But it would also allow a client that had followed the rules and supplied an ID they believe complies with the ACL to point out that they nevertheless could not access it. This may reveal a flaw on the server, in the accessibility of data needed for authentication (perhaps a WebID Profile is not online), in the access control reasoning of the App, in tokens sent by the OAuth provider, etc. Without this, people using apps that do not work could end up completely in the dark and, not knowing whom to blame, end up blaming Solid itself. With it, one would be in a much better position to improve the App ecosystem.
Such a feature would allow testing to grow as deployment grew, essentially making every resource open to testing by millions of agents using Solid every day.
It very much depends on what you mean by "Solid Web". If you mean Solid when deployed in the wild, then yes, little of that will be available. That would be an extension to the present framework.
However, a server implementation can be nearly fully tested by adding a root ACL and authorizing a "fake IDP". The only thing that really can't be tested is verifying that the server returns a 500 when the root ACL is not present.
So, we can subject willing server implementors to the test suite by asking them to set up an instance and populate it with some initial data (notably the root ACL, but see #40). If they are not willing, then we will not be able to run the test suite, but I hope and think that the value of having a test report turn green is sufficient to make implementors submit to that.
Yes, the idea is to allow Solid to be tested in the wild as deployed, continuously. So that would complement your current work. But this idea could guide the current work too, as it would of course be really useful if the reporting framework and all could be the same.
|
gharchive/issue
| 2019-11-25T17:00:26
|
2025-04-01T04:35:54.685202
|
{
"authors": [
"bblfish",
"kjetilk"
],
"repo": "solid/test-suite",
"url": "https://github.com/solid/test-suite/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1438639370
|
solid-start build output missing routes when run in monorepo structure
I've been playing with (and absolutely loving) solid/solid start for the last week. One of the oddities that I've come across is that building the project in a monorepo structure seems to throw off the build outputs and cause the resulting build to be missing all of the routes.
In my example below, I've taken the solid-hackernews app and built it in a non-monorepo structure (left) and in a simple pnpm workspace (right). In the latter, the index.html and any other relevant "page" files are missing from the build output. This example is using the vercel adapter, but I also get the same behavior using the default node adapter.
In my testing, I'm using pnpm, which, to my understanding, shouldn't be affected by the monorepo support information in the readme.
A repo showing this behavior can be found here:
monorepo
not monorepo
Reproduction steps:
Clone repo
pnpm i
pnpm --filter hackernews build
OR pnpm build for non-monorepo
compare the contents of the built dist/public directory
In setting up for SolidStarts next Beta Phase built on Nitro and Vinxi we are closing all PRs/Issues that will not be merged due to the system changing. If you feel your issue was closed in mistake. Feel free to re-open it after updating/testing against 0.4.x release. Thank you for your patience.
See https://github.com/solidjs/solid-start/pull/1139 for more details.
|
gharchive/issue
| 2022-11-07T16:28:24
|
2025-04-01T04:35:54.696670
|
{
"authors": [
"dallastjames",
"ryansolid"
],
"repo": "solidjs/solid-start",
"url": "https://github.com/solidjs/solid-start/issues/403",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1438978722
|
build problem in docker
A lot of time passes but nothing happens
This looks related to ssr: false. Using the root to render the index.html seems to have issues on certain platforms.
What version of Node are you on?
Faced the same issue with ssr: false on Node v16.1.0
Node v16.19.0 works normally and v18.11.0 works fine as well.
Is there something that can be done to enable some debugging logs?
I think we need v16.8 or higher. I can look into adding more logging.
I'm going to close this one and consider the logging question part of #383.
|
gharchive/issue
| 2022-11-07T21:02:35
|
2025-04-01T04:35:54.699306
|
{
"authors": [
"baravak",
"mantysalo",
"ryansolid"
],
"repo": "solidjs/solid-start",
"url": "https://github.com/solidjs/solid-start/issues/406",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599656777
|
fix: remove entry-server handler copy in adapters
Summary
Since dynamic imports were enabled again in the vite build (6469a0fe5b5f5abcc121305a5c471047226721b8), in certain cases the handler.js copy results in duplicate code. For example, in the final build, dynamically imported modules might import their dependencies from the original entry-server.js instead of the handler.js copy; this would result in the following chain:
handler.js copied from entry-server.js (therefore has code of dep1, dep2, dep3)
handler.js dynamically imports SomeComponent.js
SomeComponent.js doesn't know about handler.js, imports dep2 from entry-server.js
entry-server.js has code for dep1, dep2, dep3
The final build includes dep1, dep2 and dep3 from handler.js and from entry-server.js
So by removing the handler.js copy and directly importing entry-server.js we can avoid the duplicate dependency code.
Alternative solution
Undo 6469a0fe5b5f5abcc121305a5c471047226721b8
Screenshots
A possible result of this bug: Bildschirmfoto vom 2023-02-25 10-39-19
How it looks after the fix: Bildschirmfoto vom 2023-02-25 11-10-41
dist code before fix:
step 1, vite: Bildschirmfoto vom 2023-02-25 10-42-50
step 2, rollup: Bildschirmfoto vom 2023-02-25 10-43-42
Testing
Reproduction project: solid-start-duplicates.zip
Unzip it and open the folder
pnpm install
pnpm build
pnpm start
So far I only tested this fix with the start-node adapter! I don't have the setup to test the other adapters and would need your help on those ones <3
History
Why does the handler.js copy exist in the first place? I don't know why the handler.js copy currently exists, but it looks like the copy had a use case in the past, when it copied a different source file depending on preferStreaming https://github.com/solidjs/solid-start/blob/5ce1b813f19b3bbc779addeb507218f13ab5cc35/packages/start-node/index.js#L42
Background info
I already informed about this in Discord: https://discord.com/channels/722131463138705510/910635844119982080/1078086226252402759, copy:
To give you a summary on the bug: the dynamic import change (https://github.com/solidjs/solid-start/commit/6469a0fe5b5f5abcc121305a5c471047226721b8) results in duplication of dependency code in pnpm build, e.g. MetaContext is declared twice in dist and breaks server renders in prod.
But this is actually not the fault of the dynamic import change. It's a bit more complicated:
Dynamically imported code which also has to import dependencies, imports those from "entry-server.js"
start-node copies entry-server.js into a new file handler.js
start-node creates a file server.js which imports handler.js
dynamically imported code still imports its dependencies from entry-server.js
rollup tries its best to bundle handler.js, stumbles upon the dynamic assets which import stuff from entry-server.js and thus it creates duplicate dependency code 💣
I don't know why the entry-server.js to handler.js copy even exists, but afaik it ultimately breaks dynamic imports 😅.
Workarounds:
Downgrade to solid-start 0.2.20
Patch https://github.com/solidjs/solid-start/blob/90cb8f721af0e99ca2a6aab4eb931cfbd7ea426f/packages/start-node/entry.js#L6 so that it just uses entry-server.js instead of handler.js (sketched below)
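A sketch of that second workaround, assuming entry.js simply re-exports the handler (the real file contents may differ):
// packages/start-node/entry.js (hypothetical patched version)
// Import the server entry directly instead of the handler.js copy, so the
// entry and the dynamic chunks resolve shared dependencies from one module.
import handler from "./entry-server.js"; // was: "./handler.js"

export default handler;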
Thank you. Very much appreciated.
@ryansolid Ty for the review / merge 🙏🏼!
|
gharchive/pull-request
| 2023-02-25T10:46:56
|
2025-04-01T04:35:54.711101
|
{
"authors": [
"katywings",
"ryansolid"
],
"repo": "solidjs/solid-start",
"url": "https://github.com/solidjs/solid-start/pull/770",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1282164826
|
made home page functional
Made the home page functional. Check discord for further details
Hey Zaid,
Thanks for the help. Really loved the change!
With Warm Regards,
Solomon Shalom Lijo
|
gharchive/pull-request
| 2022-06-23T10:14:17
|
2025-04-01T04:35:54.733696
|
{
"authors": [
"solomonshalom",
"zaidajani"
],
"repo": "solomonshalom/Echo-Of-21k",
"url": "https://github.com/solomonshalom/Echo-Of-21k/pull/3",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
372944231
|
Stacked Borrows NG
This matches https://github.com/rust-lang/rust/pull/55270.
Also change test suite to run compile-fail tests with and without optimizations, to get more coverage and make sure optimizations don't make things fail earlier.
This one should be green now :)
Next try...
|
gharchive/pull-request
| 2018-10-23T11:12:51
|
2025-04-01T04:35:54.735259
|
{
"authors": [
"RalfJung"
],
"repo": "solson/miri",
"url": "https://github.com/solson/miri/pull/492",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2612496807
|
[BUG] - Image missing for x64 architectures
Describe the bug
@some-natalie sorry to bother, but I've just tried to pull the image with
docker pull --platform linux/amd64 ghcr.io/some-natalie/kubernoodles/wolfi:91a3cd9
91a3cd9: Pulling from some-natalie/kubernoodles/wolfi
no matching manifest for linux/amd64 in the manifest list entries
If I repeat the same with arm64, I get a successful result
docker pull --platform linux/arm64 ghcr.io/some-natalie/kubernoodles/wolfi:91a3cd9
91a3cd9: Pulling from some-natalie/kubernoodles/wolfi
1f68f0930dd4: Pulling fs layer
0f796832197d: Download complete
45db2ab91392: Downloading [> ] 5.367MB/303.8MB
4f4fb700ef54: Waiting
To Reproduce
I've just tried to pull the image with
docker pull --platform linux/amd64 ghcr.io/some-natalie/kubernoodles/wolfi:91a3cd9
91a3cd9: Pulling from some-natalie/kubernoodles/wolfi
no matching manifest for linux/amd64 in the manifest list entries
If I repeat the same with arm64, I get a successful result
docker pull --platform linux/arm64 ghcr.io/some-natalie/kubernoodles/wolfi:91a3cd9
91a3cd9: Pulling from some-natalie/kubernoodles/wolfi
1f68f0930dd4: Pulling fs layer
0f796832197d: Download complete
45db2ab91392: Downloading [> ] 5.367MB/303.8MB
4f4fb700ef54: Waiting
Use latest for now. A new commit will get built this weekend and should fix that for short SHA tags moving forward.
|
gharchive/issue
| 2024-10-24T20:26:04
|
2025-04-01T04:35:54.746918
|
{
"authors": [
"irizzant",
"some-natalie"
],
"repo": "some-natalie/kubernoodles",
"url": "https://github.com/some-natalie/kubernoodles/issues/274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
247941601
|
Initial files structure
Initial file structure.
@austinkelleher I don't know why, but I can't reply to your comments :s
It probably would not be a bad idea to allow https cert options to be passed as CLI args. If they're provided, use https; otherwise use http
I like the idea.
I'm actually leaning towards this being a bad idea. If the port is in use, I feel that the user should know as it's probably not intended. What do you think?
I copied that from a previous project where we needed to launch several servers at the same time (different types of tests and different browsers at the same time). Maybe in this project it doesn't make sense (at least for now). I'm going to remove it!
@austinkelleher done!
@molant Done!
Anyway, @molant @austinkelleher, remember this PR is just for the file structure; I'm working on the job manager in another branch and some things are changing. What I mean is that right now, the code is not important, just the file structure.
@sarvaje Should this be closed in favor of #2?
@sarvaje Should this be closed in favor of #2?
That is an option; the other option is to merge this one if it is OK hehehehe
|
gharchive/pull-request
| 2017-08-04T08:46:53
|
2025-04-01T04:35:54.752200
|
{
"authors": [
"alrra",
"sarvaje"
],
"repo": "sonarwhal/sonar-service",
"url": "https://github.com/sonarwhal/sonar-service/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
339998991
|
Standardize error massages
Per #1133, I'm listing all the error messages we report so we can normalize them if needed.
amp-validator
Outputs the errors by the validator
apple-touch-icon
current
proposed
'apple-touch-icon' should have non-empty 'href' attribute
'${appleTouchIconHref}' file request failed
'${appleTouchIconHref}' could not be fetched (status code: ${response.statusCode})
'${appleTouchIconHref}' is not a valid PNG
'${appleTouchIconHref}' is not a PNG
'${appleTouchIconHref}' is not 180x180px
No 'apple-touch-icon' was specified
'rel' attribute value should be 'apple-touch-icon'
'sizes' attribute is not needed
'apple-touch-icon' should be specified in the '<head>'
A 'apple-touch-icon' was already specified
axe
current
proposed
Error executing script: "${e.message}". Please try with another connector
babel-config
It broadcasts the errors by the parser
content-type
current
proposed
'content-type' header was not specified
'content-type' header should have the value '${userDefinedMediaType}'
'content-type' header value is invalid (${e.message})
'content-type' header should have media type '${mediaType}' (not '${originalMediaType}')
'content-type' header should have 'charset=${charset}'${originalCharset ? ` (not '${originalCharset}')` : ''}
'content-type' header should not have 'charset=${originalCharset}'
disown-opener
current
proposed
'${cutString(await element.outerHTML(), 100)}' is missing 'rel' ${requiredValues.length === 1 ? 'value' : 'values'} '${requiredValues.join('\', \'')}'
highest-available-document-mode
current
proposed
'x-ua-compatible' header was not specified
'x-ua-compatible' header is not needed
'x-ua-compatible' header value should be 'ie=edge'
Meta tag is not needed
Meta tag usage is discouraged, use equivalent HTTP header
No 'x-ua-compatible' meta tag was specified
The value of 'content' should be 'ie=edge'
Meta tag needs to be included before all other tags except for the '<title>' and the other '<meta>' tags
Meta tag should not be specified in the '<body>'
A 'x-ua-compatible' meta tag was already specified
html-checker
current
proposed
Couldn't get results from HTML checker for ${resource}. Error: ${error}
Outputs the errors from the scanner
http-cache
current
proposed
No "cache-control" header or empty value found. It should have a value
The ${invalidDirectives.size === 1 ? 'directive' : 'directives'} ${Array.from(invalidDirectives.keys()).join(', ')} ${invalidDirectives.size === 1 ? 'is' : 'are'} invalid
The following ${invalidValues.size === 1 ? 'directive has' : 'directives have'} an invalid value:\n${directivesToString(invalidValues)}
The directive "${nonRecommendedDirective}" is not recommended
The following Cache-Control header is using a wrong combination of directives:\n${header}
The target should not be cached, or have a small "max-age" value (${maxAgeTarget}):\n${header}
Static resources should have a long cache value (${maxAgeResource}) and use the immutable directive:\n${header}
No configured patterns for cache busting match ${resource}. See docs to add a custom one.
http-compression
current
proposed
Should be served with the 'Vary' header containing 'Accept-Encoding' value.
Should not be served compressed with ${encoding} as the compressed size is ${sizeDifference > 0 ? 'bigger than' : 'the same size as'} the uncompressed one.
Could not be fetched when requested compressed with Brotli
Should${notRequired ? ' not' : ''} be served compressed${encoding ? ` with ${encoding}` : ''}${notRequired ? '' : ` when ${['Zopfli', 'gzip'].includes(encoding) ? 'gzip' : encoding} compression is requested`}${suffix ? `${!suffix.startsWith(',') ? ' ' : ''}${suffix}` : ''}.
Should${notRequired ? ' not' : ''} be served with the 'content-encoding${encoding ? `: ${encoding}` : ''}' header${suffix ? ` ${suffix}` : ''}.
Could not be fetched when requested compressed with Brotli
Could not be fetched when requested compressed with gzip
Could not be fetched
Disallowed compression method: '${encoding}'.
Could not be fetched when requested uncompressed
Should not be served with the 'content-encoding' header.
https-only
current
proposed
The site should be HTTPS
Shouldn't be redirected from HTTP
Should be served over HTTPS
image-optimization-cloudinary
current
proposed
No valid configuration for Cloudinary found. Hint couldn't run.
File ${cutString(file.originalUrl)} could be around ${sizeDiff.toFixed(2)}kB (${percentageDiff}%) smaller.
The total size savings optimizing the images in ${data.resource} could be around ${totalSavings.toFixed(0)}kB.
manifest-app-name
current
proposed
Should contain the '${propertyName}' property
Should have non-empty '${propertyName}' property value
Should have the '${propertyName}' property value under ${shortNameLengthLimit} characters
manifest-exists
current
proposed
Web app manifest not specified
A web app manifest file was already specified
Should have non-empty 'href'
manifest-file-extension
current
proposed
The file extension should be '${standardManifestFileExtension}'${fileExtension ? ` (not '${fileExtension}')` : ''}
manifest-is-valid
current
proposed
'${property}' property value ('${colorValue}') is invalid
'${property}' property value ('${colorValue}') is not supported everywhere
'lang' property value ('${manifest.lang}') is not a valid language tag
Should contain valid JSON
meta-charset-utf-8
current
proposed
No charset meta tag was specified
Use shorter '<meta charset="utf-8">'
The value of 'charset' is not 'utf-8'
Charset meta tag should be the first thing in '<head>'
Meta tag should not be specified in the '<body>'
A charset meta tag was already specified
meta-theme-color
current
proposed
No 'theme-color' meta tag was specified
'content' attribute value ('${contentValue}') is invalid
'content' attribute value ('${contentValue}') is not supported everywhere
'name' attribute needs to be 'theme-color' (not '${nameAttributeValue}')
A 'theme-color' meta tag was already specified
Should not be specified in the '<body>'
meta-viewport
current
proposed
Meta tag should have non-empty 'content' attribute
Meta tag has unknown property: '${key}'
Meta tag has invalid value '${content.invalidValues[key]}' for property '${key}'
Meta tag has disallowed property: '${key}'
Meta tag should have 'width=device-width'
Meta tag should have 'initial-scale=1'
No viewport meta tag was specified
Meta tag should not be specified in the '<body>'
A viewport meta tag was already specified
minified-js
current
proposed
JavaScript content could be minified
no-bom
current
proposed
Error fetching the content
Text based resources shouldn't start with the BOM character to force UTF-8 encoding
no-broken-links
current
proposed
Broken link found (domain not found)
Broken link found (${brokenStatusCodes[statusIndex]} response)
no-disallowed-headers
current
proposed
'Server' header value contains more than the server name
'${headers.join('\', \'')}' ${numberOfHeaders === 1 ? 'header is' : 'headers are'} disallowed
no-friendly-error-pages
current
proposed
Response with status code ${key} had less than ${threshold} bytes
no-html-only-headers
current
proposed
'${headers.join('\', \'')}' ${numberOfHeaders === 1 ? 'header is' : 'headers are'} not needed
no-http-redirects
current
proposed
${response.hops.length} ${response.hops.length === 1 ? 'redirect' : 'redirects'} detected for ${cutString(request.url)} (max is ${maxHops}).
no-p3p
current
proposed
P3P is deprecated and should not be used
no-protocol-relative-urls
current
proposed
Protocol relative URL found: ${url}
no-vulnerable-javascript-libraries
current
proposed
${library.name}@${library.version} has ${vulnerabilities.length} known ${vulnerabilities.length === 1 ? 'vulnerability' : 'vulnerabilities'} (${detail}). See ${link} for more information.
Error executing script: "${e.message}". Please try with another connector
performance-budget
current
proposed
To load all the resources on a ${config.id} network, it will take about ${loadTime.toFixed(1)}s in optimal conditions. That's ${(loadTime - config.load).toFixed(1)}s more than the ${config.load}s target.
sri
current
proposed
Cross-origin scripts need a "crossorigin" attribute to be eligible for integrity validation
Attribute "crossorigin" doesn't have a valid value, should "anonymous" or "use-credentials": crossorigin="${crossorigin}"
Resource ${resource} requested without the "integrity" attribute
The format of the "integrity" attribute should be "sha(256
384
The hash algorithm "${algorithms[highestAlgorithmPriority]}" doesn't meet the baseline "${this.baseline}"
The resource is not delivered via a secure context
The hash in the "integrity" attribute doesn't match the received payload. Expected: ${integrities.join(', ')} Actual: ${hashes.join(', ')}
ssllabs
current
proposed
${resource} doesn't support HTTPS.
${serverName}'s grade ${grade} doesn't meet the minimum ${minimumGrade} required.
Couldn't get results from SSL Labs for ${resource}.
Didn't get any result for ${resource}. There might be something wrong with SSL Labs servers.
strict-transport-security
current
proposed
Error with getting preload status for ${resource}.
Error with getting preload status for ${resource}. There might be something wrong with the verification endpoint.
Error with getting preload eligibility for ${resource}.
'strict-transport-security' header shouldn't be specified in pages served over HTTP.
'strict-transport-security' header was not specified
'strict-transport-security' header requires 'max-age' directive
'strict-transport-security' header 'max-age' value should be more than ${minMaxAgeValue}
stylesheet-limit
current
proposed
Maximum of ${maxImports} nested imports reached (${results.imports})
Maximum of ${maxRules} CSS rules reached (${results.rules})
Maximum of ${maxSheets} stylesheets reached (${results.sheets})
typescript-config
current
proposed
Couldn't find package "tslib".
Based on your browser configuration your "compilerOptions.target" should be "${maxESVersion}". Current one is "${target}"
validate-set-cookie-header
current
proposed
'${headerName}' header to set '${setCookie.name}' has trailing ';'
'${headerName}' header contains unknown attribute '${directiveKey}'.
'${headerName}' header contains more than one ${directiveKey}.
webpack-config
current
proposed
webpack configuration file not found in your project.
webpack is not installed in your project.
The parser webpack-config should be activated
The parser typescript-config should be activated
TypeScript compilerOptions.module option should be esnext
The parser babel-config should be activated
Babel presets modules option should be false
`${config.devtool.toString()}` not recommended for production
x-content-type-options
current
proposed
'x-content-type-options' header is not specified
'x-content-type-options' header value (${headerValue}) is invalid
'x-content-type-options' header is not needed
And these are the error messages for JSON validations (schema-validator.ts):
type
current
proposed
additional property
${property ? `'${property}' ` : ''}${property ? error.message : `${error.message[0].toLocaleUpperCase()}${error.message.substr(1)}`}. Additional property found '${additionalProperty}'.
enum
'${property}' ${error.message} '${allowedValues.join(', ')}'. Value found '${error.data}'
pattern
'${property}' ${error.message.replace(/"/g, '\'')}. Value found '${error.data}'
type
'${property}' ${error.message.replace(/"/g, '\'')}.
Some proposals from what I'm seeing above:
[ ] All messages should consistently end (or not) with a period. Right now it's mixed
[ ] We are surrounding attribute names and properties with ' and ". I think we should normalize to "
[ ] In some places we abbreviate, in others we don't (e.g.: "was not", "shouldn't")
[ ] Maybe update the no-broken-links messages? Something like:
Domain not found
Received a ${brokenStatusCodes[statusIndex]} response
[ ] strict-transport-security: I think it should be Error getting preload instead of Error with getting preload
Some errors use you and others don't (typescript-config, webpack-config)
@sonarwhal/core any other suggestions?
@alrra haven't you finished this?
|
gharchive/issue
| 2018-07-10T20:41:19
|
2025-04-01T04:35:54.827384
|
{
"authors": [
"molant"
],
"repo": "sonarwhal/sonarwhal",
"url": "https://github.com/sonarwhal/sonarwhal/issues/1170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1565088639
|
Add a Deployment Pipeline
This PR adds a workflow that will build the application and deploy it to an Azure storage blob in a storage account specified by the MILFORD_MENU_STORAGE_ACCOUNT_NAME environment variable. A MILFORD_MENU_SERVICE_PRINCIPLE secret is required to allow authentication with Azure. This approach is based on Azure Storage documentation.
The workflow will only run on completion of the Node.js CI workflow when it runs on the main branch and the deploy job will only run if the CI workflow completed successfully.
The Production environment is used to obtain environment variables/secrets.
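For reference, a Node sketch of the equivalent blob upload (the actual workflow follows the Azure CLI approach from the linked docs; the "$web" container and the SDK calls here are illustrative assumptions):
const { BlobServiceClient } = require("@azure/storage-blob");
const { DefaultAzureCredential } = require("@azure/identity");

async function deploy() {
  const account = process.env.MILFORD_MENU_STORAGE_ACCOUNT_NAME;
  const service = new BlobServiceClient(
    `https://${account}.blob.core.windows.net`,
    new DefaultAzureCredential()
  );
  // "$web" is the container Azure serves static websites from.
  const container = service.getContainerClient("$web");
  await container
    .getBlockBlobClient("index.html")
    .uploadFile("dist/index.html");
}

deploy().catch((err) => { console.error(err); process.exit(1); });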
Set up deployment pipeline
|
gharchive/pull-request
| 2023-01-31T22:25:07
|
2025-04-01T04:35:54.917097
|
{
"authors": [
"sonikblue"
],
"repo": "sonikblue/milford-menu",
"url": "https://github.com/sonikblue/milford-menu/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1040401649
|
Add files via upload
Alpha-beta pruning
Move this into the cpp folder
Follow me on github
|
gharchive/pull-request
| 2021-10-31T07:52:13
|
2025-04-01T04:35:54.935637
|
{
"authors": [
"Deepakgarg2309",
"sonumahajan"
],
"repo": "sonumahajan/All_Program_helper",
"url": "https://github.com/sonumahajan/All_Program_helper/pull/533",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1548605521
|
[NVC-NET]Inference in CPU environment
In the paper you calculate the speed of NVC-NET on CPU; could you please share the code to run inference in a CPU environment?
inference.py has a context argument, so I think python inference.py --context cpu ... will run it in a CPU environment.
But please let me know if the points you wish to discuss are different.
|
gharchive/issue
| 2023-01-19T06:49:55
|
2025-04-01T04:35:54.940064
|
{
"authors": [
"TomonobuTsujikawa",
"guoyingying432"
],
"repo": "sony/ai-research-code",
"url": "https://github.com/sony/ai-research-code/issues/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2646968993
|
Updates & refactoring
Fixed widget being rendered upside down (from the bottom left corner, instead of the top left like you'd expect).
Rewrote ClothConfig & ModMenu integrations.
Removed redundant config serialization.
Added "Alpha" option so the widget background can be configured to match other PvP client widgets.
Hi! I appreciate the contribution but Jump Reset Indicator is not open to refactors.
The X&Y sliders on the original mod are broken, and this pull request fixed them.
|
gharchive/pull-request
| 2024-11-10T08:01:35
|
2025-04-01T04:35:54.966520
|
{
"authors": [
"5ai17",
"7orivorian",
"sootysplash"
],
"repo": "sootysplash/jump-reset",
"url": "https://github.com/sootysplash/jump-reset/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1481823315
|
Suggestion: being able to close or minimize the chart container
Hi,
Would it be possible to have the chart container minimized or closed (see below):
So that only the swap container appears (as with the previous version of Polkaswap)
@Asamartino Hello! It's already here. Please go to Swap settings in the right corner of the Swap card. We'll add this button directly to the Swap card in the future.
Perfect, thank you.
@Asamartino hello!
We've added this button directly to Swap.
It'll be in the next release, so I'm closing this issue.
Yes, I noticed it today :)
It is very user-friendly, thank you.
|
gharchive/issue
| 2022-12-07T12:37:34
|
2025-04-01T04:35:54.978153
|
{
"authors": [
"Asamartino",
"stefashkaa"
],
"repo": "sora-xor/polkaswap-exchange-web",
"url": "https://github.com/sora-xor/polkaswap-exchange-web/issues/877",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
768219890
|
Add option to turn on recursive type checking
Problem
Our application is heavily dependent on runtime Sorbet type checking. We have a complex domain with lots of logic, and the runtime type checker has saved us numerous times.
Earlier this year, #3293 was merged. This PR changed the behavior of the type checker to avoid recursive checking. Since we find this behavior so useful, we decided to pin our version to 0.5.5360 to keep it.
This worked fine until the release of Big Sur. You guys are (understandably) not releasing a newly compiled version of 0.5.5360, so any computer running Big Sur can no longer successfully run bundle install.
Proposed solution
Looking through the codebase, it looks like the ability to recursively check types is still present in Sorbet, and is fully tested. As far as I can tell, this behavior doesn't seem to be exposed.
In #3293, @aisamanra said this:
A standing question is whether we want to enable this by default for sigs using e.g. a construct like .checked(:deep). Also, should the deep parameter be called something like recursive?
We would love to be able to flip a flag to re-enable the recursive type checking.
I'd like to again stress how incredibly useful we find this feature. We don't have any issues with performance, and it's saved our bacon multiple times.
We'd be happy to contribute a pull request if you guys would be open to it. What do you think?
Thanks for the suggestion. I think a lot would depend on the implementation. I'm fine with the idea if you want to send something for consideration.
Thanks for the quick response!
What interface would you guys like to see? Would you want it to be an extension of the sig like @aisamanra suggested? Maybe something like this?
sig { params(bananas: T::Enumerable[Banana]).returns(T::Array[PeeledBananas]).recursive }
Or this?
sig { params(bananas: T::Enumerable[Banana]).returns(T::Array[PeeledBananas]).deep }
Or would you like to see this more as a global option?
T.enable_recursive_validations
As I said, we're happy to do the work and implement it to your guys' specifications. However, is there any way to get a pre-signoff on the idea before we start? We're working towards Q4 deadlines at our company right now, and I want to make sure we're using our time efficiently. 🙂
Thanks!
If you're OK with an API to turn it on everywhere, that would be preferable. It's much easier to add something to T::Configuration to apply project-wide settings than it is to bikeshed new sig syntax, which requires changes in the static checker as well, and involves more of a long-term / immutable commitment.
Sounds good! I'll cut a PR in the new year. Thanks!
Hey @jez, I was hoping to get a gut check on my read of the codebase so far.
It seems like the guts of sorbet-runtime are in the lib/types/types directory. Each type implements (or inherits) two methods: valid? and recursively_valid?. It looks like recursively_valid? is being called by error_message_for_obj_recursive and valid? is being called by error_message_for_obj.
Both of these methods are called in T::Private::Casts in the cast and cast_recursive methods. It looks like these methods' jobs are to run the validations, and raise a TypeError if there are any validation errors.
At that point, I'm losing track of how these methods are incorporated into the runtime. It seems like cast_recursive isn't being used anywhere substantial, which makes sense. I can see several references to T.cast, but nothing is sticking out to me as a good place to introduce a switch to call cast_recursive, and it seems like I'd need to do so in several places. Would it make sense to collapse those two methods and make the switch inside? Or would you like to see the decision to use recursion made outside of cast and cast_recursive?
Any hints would be appreciated. 🙂
Thanks!
@LandonSchropp Just checking are you still planning on implementing this? I agree that it's a very useful feature and I would much rather have it on project-wide and disable it only in performance critical parts.
Sorry, I completely forgot about this. I've since moved on from the app I was working on that used this, so I'm probably not going to implement this option unless it becomes relevant to my work again.
|
gharchive/issue
| 2020-12-15T21:30:27
|
2025-04-01T04:35:54.989265
|
{
"authors": [
"LandonSchropp",
"jez",
"ric2b"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/issues/3800",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
938056636
|
Use cast_tree_nonnull in treemap.h.
Use cast_tree_nonnull in treemap.h when we assume the cast will succeed.
Motivation
cast_tree_nonnull avoids an extra isa_type check in production builds, and should be slightly faster.
Test plan
Covered by existing tests.
We have a policy of testing changes to Sorbet against Stripe's codebase before
merging them. I've kicked off a test run for the current PR. When the build
finishes, I'll share with you whether or how it failed. Thanks!
Stripe employees can see the build results here:
→ https://go/builds/bui_Jnt4t2CGY8mvyD
→ https://go/builds/bui_Jnt4n8hMHE03uV
|
gharchive/pull-request
| 2021-07-06T16:12:59
|
2025-04-01T04:35:54.992734
|
{
"authors": [
"jvilk-stripe"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/pull/4330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206236597
|
Recover from positional arg between keyword args
Motivation
Inspired by #5640, but not quite the fix for it yet.
Test plan
See included automated tests.
Nice, did this result in a completely empty tree before?
No, just an empty sig method call:
→ View on sorbet.run
# typed: false
sig do
params(
build_group: String,
type
# ^^^^ error: positional arg "type" after keyword arg
# ^^^^ error: Malformed type declaration. Unknown type syntax. Expected a ClassName or T.<func>
# ^^^^ error: Unknown argument name `type`
filter: T.nilable(String),
)
.returns(T::Array[String])
end
def call(build_group, filter:)
end
s(:def, :call,
s(:args,
s(:arg, :build_group),
s(:kwarg, :filter)), nil)
editor.rb:5: positional arg "type" after keyword arg https://srb.help/2001
5 | type
^^^^
editor.rb:12: unexpected token "end" https://srb.help/2001
12 |end
^^^
Errors: 2
|
gharchive/pull-request
| 2022-04-16T23:24:39
|
2025-04-01T04:35:54.996042
|
{
"authors": [
"jez"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/pull/5648",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
425736
|
Cufon slows IE down
Hi Simo
First, let me thank you for that great plugin. Kudos to you.
In my own-built component/image slider (jQuery, AJAX, Cufon called manually), Cufon slows down page loading; Safari and FF are OK. I've tried to speed Cufon up with Cufon.now() and Cufon.refresh(), but it seems that this changes nothing:
Slider = new Slider();
Slider.buildSlider();
$('#slframe_body').html(Slider.getHtml());
addSliderBehavior();
Cufon.replace($('div.sliderItem'), { fontFamily: "HelveticaNeueLT Com 35 Th" });
Cufon.now();
In the function 'Slider.getHtml()' (which is called before Cufon) I am trying to preload the images with (shortened)
img = new Image();
img.src = conf.imageBaseUrl+imageThumbFileName;
and it seems that this preloading is stopped / doesn't work in IE until Cufon has finished its work.
Am I missing something? By the way, I am a new in the JavaScript/JQuery/Ajax-Stuff.
Thanks a lot and have a great day ;)
It's solved, somehow, I think ;-)
(doing a bit of cleaning up of my open 'things' on GitHub)
|
gharchive/issue
| 2010-11-18T07:33:27
|
2025-04-01T04:35:54.998738
|
{
"authors": [
"peterpeter"
],
"repo": "sorccu/cufon",
"url": "https://github.com/sorccu/cufon/issues/157",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
155142102
|
Rewrite riemann.nql's proc delta to reduce from 1008 states to 924
NB I've back-stopped my algebra by writing a Java program to verify that (a port of) the new delta gives the same outputs for delta(1) to delta(40) as (a port of) the old one.
Thank you! I haven't looked closely at the algebra in riemann.lac beyond what was needed to make it compatible with nql, so I'll take your word for it.
|
gharchive/pull-request
| 2016-05-16T23:05:56
|
2025-04-01T04:35:55.000121
|
{
"authors": [
"pjt33",
"sorear"
],
"repo": "sorear/metamath-turing-machines",
"url": "https://github.com/sorear/metamath-turing-machines/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
3032590
|
using models.ImageField as base class for ImageField for easier validation
ImageField inherits everything from FileField.
There is already validation and I don't want a PIL dependency.
|
gharchive/issue
| 2012-01-31T08:31:12
|
2025-04-01T04:35:55.004544
|
{
"authors": [
"marcin-koziol",
"sorl"
],
"repo": "sorl/sorl-thumbnail",
"url": "https://github.com/sorl/sorl-thumbnail/issues/85",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
590216591
|
Add support for listing subdirectories of monorepos in open source projects, and add reach4help map
Some relevant projects are sub-projects of larger projects, and are accessible via sub-directories of monorepos, rather than the root of a particular repository.
In these instances, we want to be able to link directly to those subdirectories, and (optionally) provide an override for the description, rather than taking the info from the monorepo.
One such relevant project is the mutual-aid map for reach4help, which is accessible at https://map.reach4help.org/; its code lives at https://github.com/reach4help/reach4help/tree/master/map and is part of the larger reach4help project.
PR opened here: https://github.com/soroushchehresa/awesome-coronavirus/pull/157
(had to add support for monorepos)
@s0 First of all thanks for the contribution.
I think it's a very special case, and supporting subdirectories isn't a helpful feature for others.
So I close this PR and you can add the main directory to this list in another PR.
Okay, i'll open another PR
|
gharchive/pull-request
| 2020-03-30T11:32:05
|
2025-04-01T04:35:55.007706
|
{
"authors": [
"s0",
"soroushchehresa"
],
"repo": "soroushchehresa/awesome-coronavirus",
"url": "https://github.com/soroushchehresa/awesome-coronavirus/pull/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
348871038
|
Flag the repo as unmaintained?
It seems like nothing is changing anymore. I've got an issue about data loss (https://github.com/soundcloud/lhm/issues/126) that has been open for almost 3 years and PRs (https://github.com/soundcloud/lhm/pull/127 and https://github.com/soundcloud/lhm/pull/134) to fix this, but nothing has received attention. This repo should maybe be flagged as unmaintained in its Readme?
@grobie Have you talked to anyone at Shopify? They seem to have a fairly active fork over at https://github.com/shopify/lhm
Would be nice if the README pointed to the shopify fork....!
Please see https://github.com/soundcloud/lhm/issues/153.
|
gharchive/issue
| 2018-08-08T20:01:15
|
2025-04-01T04:35:55.071716
|
{
"authors": [
"epugh",
"gdubicki",
"jrgifford",
"tdeo"
],
"repo": "soundcloud/lhm",
"url": "https://github.com/soundcloud/lhm/issues/153",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
672728985
|
Disable Stepper tab for lazy Source variants
Disable Stepper tab for lazy Source variants
Summary: Fixes #1429.
Changelog
Display the Stepper tab only for default Source 1 and 2
Stepper is thus disabled for all non-default variants of Source 1 and 2, and all variants (including default) of Source 3 or 4
Verification instructions
Proceed to the Playground in the development environment, and verify the Stepper tab is hidden with any lazy Source variant
Screenshots
Side-content tabs displayed for Source 2 Lazy
Last updated 4 Aug 2020, 7:45 PM
Pull Request Test Coverage Report for Build 7063
1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 34.165%
Totals
Change from base Build 7053:
0.0%
Covered Lines:
2347
Relevant Lines:
6387
💛 - Coveralls
|
gharchive/pull-request
| 2020-08-04T11:45:26
|
2025-04-01T04:35:55.079634
|
{
"authors": [
"Aulud",
"coveralls"
],
"repo": "source-academy/cadet-frontend",
"url": "https://github.com/source-academy/cadet-frontend/pull/1431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1269921899
|
Add v0.27 Changelog
Add a changelog file with an entry for v0.27
Approved by Nathan in Slack
|
gharchive/pull-request
| 2022-06-13T20:32:29
|
2025-04-01T04:35:55.089166
|
{
"authors": [
"werkshy"
],
"repo": "source-health/source-node",
"url": "https://github.com/source-health/source-node/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2715658602
|
[Snyk] Upgrade @loopback/repository from 7.0.2 to 7.0.7
Snyk has created this PR to upgrade @loopback/repository from 7.0.2 to 7.0.7.
:information_source: Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 5 versions ahead of your current version.
The recommended version was released 2 months ago.
Release notes
Package name: @loopback/repository
7.0.7 - 2024-10-15
7.0.6 - 2024-09-12
7.0.5 - 2024-08-14
7.0.4 - 2024-07-09
7.0.3 - 2024-06-11
7.0.2 - 2024-05-17
from @loopback/repository GitHub release notes
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
This PR was automatically created by Snyk using the credentials of a real user.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open upgrade PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust upgrade PR settings
🔕 Ignore this dependency or unsubscribe from future upgrade PRs
Done in https://github.com/sourcefuse/loopback4-kafka-client/pull/75
|
gharchive/pull-request
| 2024-12-03T17:50:42
|
2025-04-01T04:35:55.103312
|
{
"authors": [
"ashishkaushik",
"yeshamavani"
],
"repo": "sourcefuse/loopback4-kafka-client",
"url": "https://github.com/sourcefuse/loopback4-kafka-client/pull/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2733052713
|
[Bug] Branches with long name cannot be displayed (or scrolled)
The branch name cannot be read correctly or scrolled to read the full name, neither in the history nor on the commit detail.
Done. You can download the latest CI build from Github Action.
|
gharchive/issue
| 2024-12-11T13:57:21
|
2025-04-01T04:35:55.116037
|
{
"authors": [
"albertodlc",
"love-linger"
],
"repo": "sourcegit-scm/sourcegit",
"url": "https://github.com/sourcegit-scm/sourcegit/issues/807",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2230555541
|
Update README.md
buildt rebranded as cosine
Sounds good thanks!
|
gharchive/pull-request
| 2024-04-08T08:15:12
|
2025-04-01T04:35:55.117046
|
{
"authors": [
"chayim",
"jdorfman"
],
"repo": "sourcegraph/awesome-code-ai",
"url": "https://github.com/sourcegraph/awesome-code-ai/pull/28",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2123203221
|
wip
Context
Closes https://github.com/sourcegraph/cody/issues/3403
Test plan
@valerybugakov I'll do a cleanup and ship it, yep. I was starting this before but then we got the legal pushback.
@philipp-spiess yes we'll be able to filter down to a subset of users (e.g. just users who have seen an autocomplete suggestion) in Looker. insertedCharacters will include both Cody + user written code, right?
@akalia25 this event is probably a good candidate to exclude from the Amplitude pipeline, I can't imagine us doing this kind of analysis there.
|
gharchive/pull-request
| 2024-02-07T14:47:48
|
2025-04-01T04:35:55.119153
|
{
"authors": [
"kelsey-brown",
"philipp-spiess"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/pull/3070",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2384987532
|
WIP: add stylesheet for jetbrains
Replace https://github.com/sourcegraph/cody/pull/4709
TBC
Still WIP - mapping the variables of JetBrains to VS code ones
Test plan
CODY_DIR=/Users/dpc/projects/cody ./gradlew :runIDE -PforceAgentBuild=true
Open Cody Experimental Chat UX sidebar, right click, Open DevTools
Check that a ton of JetBrains theme variables appear.
Check that the values parse OK
Switch theme in client and expect the webview updated accordingly
High contrast theme:
Darcula theme:
Light theme:
What's the plan here, because you've dropped https://github.com/sourcegraph/cody/commit/95ceb538007527cd26c8d9055b4c56db48a41432 which you need to set the data-ide property, etc.
I tried updating the agent bindings, but I'm getting:
+ /Users/bwork/dev/cody (122ms)
done /Users/bwork/dev/cody/index.scip
+ pnpm dlx ts-node agent/src/cli/scip-codegen/command.ts --output agent/bindings/kotlin/lib/src/main/kotlin/com/sourcegraph/cody/protocol_generated
.../Library/pnpm/store/v3/tmp/dlx-37738 | +20 ++
.../Library/pnpm/store/v3/tmp/dlx-37738 | Progress: resolved 20, reused 20, downloaded 0, added 20, done
/Users/bwork/dev/cody/agent/src/cli/scip-codegen/SymbolTable.ts:48
throw new Error(
^
Error: no symbol: {
"symbol": "scip-typescript npm cody-ai 1.24.0 src/chat/`protocol.ts`/indexedSignature2:",
"debuggingInfo": []
}
at SymbolTable.info (/Users/bwork/dev/cody/agent/src/cli/scip-codegen/SymbolTable.ts:48:19)
at /Users/bwork/dev/cody/agent/src/cli/scip-codegen/JvmCodegen.ts:317:39
at CodePrinter.block (/Users/bwork/dev/cody/vscode/src/completions/context/retrievers/tsc/CodePrinter.ts:45:9)
at JvmCodegen.writeDataClass (/Users/bwork/dev/cody/agent/src/cli/scip-codegen/JvmCodegen.ts:300:11)
at JvmCodegen.writeType (/Users/bwork/dev/cody/agent/src/cli/scip-codegen/JvmCodegen.ts:484:22)
at JvmCodegen.run (/Users/bwork/dev/cody/agent/src/cli/scip-codegen/JvmCodegen.ts:60:22)
at async Command.<anonymous> (/Users/bwork/dev/cody/agent/src/cli/scip-codegen/command.ts:80:9)
at async Command.parseAsync (/Users/bwork/dev/cody/node_modules/.pnpm/commander@11.1.0/node_modules/commander/lib/command.js:936:5)
ERROR Command failed with exit code 1: ts-node agent/src/cli/scip-codegen/command.ts --output agent/bindings/kotlin/lib/src/main/kotlin/com/sourcegraph/cody/protocol_generated
pnpm: Command failed with exit code 1: ts-node agent/src/cli/scip-codegen/command.ts --output agent/bindings/kotlin/lib/src/main/kotlin/com/sourcegraph/cody/protocol_generated
at makeError (/Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:24796:17)
at handlePromise (/Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:25367:33)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.handler [as dlx] (/Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:209900:7)
at async /Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:219307:21
at async main (/Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:219274:34)
at async runPnpm (/Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:219529:5)
at async /Users/bwork/.asdf/installs/pnpm/8.6.7/dist/pnpm.cjs:219521:7
ELIFECYCLE Command failed with exit code 1.
Thanks for working on this!
Bit of commit message polish would be great: instead of WIP we could say something about incremental improvements coming.
Do you want to land this to keep the diff small and do anything needed for font sizes, sizes, etc. in a follow up?
Thanks for working on this!
Bit of commit message polish would be great: instead of WIP we could say something about incremental improvements coming.
Do you want to land this to keep the diff small and do anything needed for font sizes, sizes, etc. in a follow up?
Yea let's do this! Let me update the PR with your feedback and rebase it before I merge, thanks for reviewing!
|
gharchive/pull-request
| 2024-07-02T01:02:01
|
2025-04-01T04:35:55.126062
|
{
"authors": [
"abeatrix",
"dominiccooney"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/pull/4744",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
267039665
|
There may be a memory leak?
@KISSMonX Could you update your go langserver? This is likely a duplicate of the now fixed issue https://github.com/sourcegraph/go-langserver/issues/178
same issue
This extension looks unusable; here is my config: -usebinarypkgcache=false -trace. After a while, memory usage increases like crazy, and then it gets killed by the system.
Is there any way for a user to debug this extension? Maybe I can provide more logs with debug info.
Please try without setting usebinarypkgcache=false; that will greatly increase memory usage.
If you're still having a problem, please see the README for how to capture a heap profile.
Ohh, but without this option the hover info is missing for some packages.
It looks as if memory increases like crazy while I'm typing (sending contentChanges events).
We don't support using that option in a desktop environment. It is used in our server environment where the working copy is immutable and type information is shared between different working copies.
If you do not have memory issues without the flag, are you happy to close this and file an issue for the missing hover info?
Sure, thank you! However, this issue wasn't opened by me 🤪
|
gharchive/issue
| 2017-10-20T01:49:11
|
2025-04-01T04:35:55.134085
|
{
"authors": [
"KISSMonX",
"keegancsmith",
"leaxoy",
"slimsag"
],
"repo": "sourcegraph/go-langserver",
"url": "https://github.com/sourcegraph/go-langserver/issues/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2144467829
|
bug: java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.ResponseErrorException: accessing Sourcegraph Grap (...)
Plugin version: 5.3.27
IDE version: WS-233.14475.40
Stacktrace:
java.util.concurrent.ExecutionException: org.eclipse.lsp4j.jsonrpc.ResponseErrorException: accessing Sourcegraph GraphQL API: FetchError: request to https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription failed, reason: unable to get local issuer certificate (https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription)
Error: accessing Sourcegraph GraphQL API: FetchError: request to https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription failed, reason: unable to get local issuer certificate (https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription)
at /snapshot/dist/agent.js
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async /snapshot/dist/agent.js
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at com.sourcegraph.cody.initialization.EndOfTrialNotificationScheduler.lambda$1$lambda$0(EndOfTrialNotificationScheduler.kt:44)
at com.sourcegraph.cody.agent.CodyAgentService$Companion.withAgentRestartIfNeeded$lambda$1(CodyAgentService.kt:134)
at com.intellij.openapi.application.impl.ApplicationImpl$2.run(ApplicationImpl.java:249)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:702)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1$1.run(Executors.java:699)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
at java.base/java.util.concurrent.Executors$PrivilegedThreadFactory$1.run(Executors.java:699)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.eclipse.lsp4j.jsonrpc.ResponseErrorException: accessing Sourcegraph GraphQL API: FetchError: request to https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription failed, reason: unable to get local issuer certificate (https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription)
Error: accessing Sourcegraph GraphQL API: FetchError: request to https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription failed, reason: unable to get local issuer certificate (https://sourcegraph.com/.api/graphql?CurrentUserCodySubscription)
at /snapshot/dist/agent.js
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async /snapshot/dist/agent.js
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleResponse(RemoteEndpoint.java:209)
at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:193)
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:194)
at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:94)
at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:113)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
... 1 more
Thank you for reporting that issue! I'm closing it in favour of https://github.com/sourcegraph/jetbrains/issues/915 where we will track issues with certificates during GraphQL query.
|
gharchive/issue
| 2024-02-20T14:09:18
|
2025-04-01T04:35:55.136987
|
{
"authors": [
"nick1osiunin",
"pkukielka"
],
"repo": "sourcegraph/jetbrains",
"url": "https://github.com/sourcegraph/jetbrains/issues/752",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1066359559
|
Series dropdown menu
What should this PR do?
Resolves DEVED-265 by updating our collection list to be a dropdown menu with toggle.
Why are we making this change?
We want to improve navigation within the site and on all pages.
What are the acceptance criteria?
Series list should display in a full-width column above tutorials and ToC.
Series list should include series name and the title of the first tutorial in the series in the header.
Series header should include an icon toggle that, when clicked, reveals a modal with the other tutorial titles in the series.
Toggle should open and close the modal.
The current tutorial in the series should be highlighted.
Mobile styles should look normal.
How should this PR be tested?
Pull request process
Reviewers:
Test functionality using the criteria above.
Offer tips for efficiency, feedback on best practices, and possible alternative approaches and things that may not have been considered.
For shorter, "quick" PRs, use your best judgement on #2.
Use a collaborative approach and provide resources and/or context where appropriate.
Provide screenshots/grabs where appropriate to show findings during review.
Reviewees:
Prefer incremental and appropriately-scoped changes.
Leave a comment on things you want explicit feedback on.
Respond clearly to comments and questions.
Some questions I have for you @ltagliaferri:
How do you feel about the toggle icon?
^^ border on the header?
^^ border on the dropdown modal? Should that be lighter?
Feel good about the toggle icon, I think the border on the dropdown modal can be lighter.
Sweet — thanks!
|
gharchive/pull-request
| 2021-11-29T18:26:26
|
2025-04-01T04:35:55.144134
|
{
"authors": [
"katjuell",
"ltagliaferri"
],
"repo": "sourcegraph/learn",
"url": "https://github.com/sourcegraph/learn/pull/382",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1652076200
|
Alternative to Sourcerer
Is there any alternative to Sourcerer?
I found out this website https://profile.codersrank.io/
|
gharchive/issue
| 2023-04-03T13:28:52
|
2025-04-01T04:35:55.201394
|
{
"authors": [
"OtavioCapila"
],
"repo": "sourcerer-io/sourcerer-app",
"url": "https://github.com/sourcerer-io/sourcerer-app/issues/644",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2538846284
|
support texture binding when bindless is not available
Summary by CodeRabbit
New Features
Introduced command-line argument handling for enhanced application configurability.
Added support for controlling bindless texture functionality.
Improved texture management with dynamic shader assignment based on texture state.
Bug Fixes
Enhanced error handling for texture initialization and rendering processes to prevent null reference issues.
Documentation
Updated comments and debug prints to clarify new command-line options and texture management logic.
This is a benchmark review for experiment mermaid_diagrams.
Run ID: mermaid_diagrams/benchmark_2024-09-20T10-34-10_v1-22-0-133-gbcf338da5-dirty.
This pull request was cloned from https://github.com/btipling/foundations/pull/33. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)
Experiment configuration
review_config:
# User configuration for the review
# - benchmark - use the user config from the benchmark reviews
# - <value> - use the value directly
user_review_config:
enable_ai_review: true
enable_rule_comments: false
enable_complexity_comments: false
enable_security_comments: false
enable_tests_comments: false
enable_comment_suggestions: false
enable_functionality_review: false
enable_pull_request_summary: false
enable_review_guide: true
enable_approvals: false
ai_review_config:
# The model responses to use for the experiment
# - benchmark - use the model responses from the benchmark reviews
# - llm - call the language model to generate responses
model_responses:
comments_model: benchmark
comment_area_model: benchmark
comment_validation_model: benchmark
comment_suggestion_model: benchmark
complexity_model: benchmark
functionality_model: benchmark
security_model: benchmark
tests_model: benchmark
pull_request_summary_model: benchmark
review_guide_model: llm
overall_comments_model: benchmark
# The pull request dataset to run the experiment on
pull_request_dataset:
- https://github.com/sourcery-ai/core/pull/4607
- https://github.com/sourcery-ai/core/pull/4631
- https://github.com/sourcery-ai/core/pull/4647
# CodeRabbit examples:
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/alexsnow348/insightface/pull/46
- https://github.com/alikuxac/utilities/pull/10
- https://github.com/AlphaDev87/timba-api/pull/49
- https://github.com/AngeloTadeucci/Maple2/pull/239
- https://github.com/AngeloTadeucci/Maple2.File/pull/36
- https://github.com/AngeloTadeucci/Maple2/pull/233
# Examples where CodeRabbit does not generate diagrams
- https://github.com/baptisteArno/typebot.io/pull/1778
- https://github.com/btipling/foundations/pull/33
- https://github.com/btipling/foundations/pull/31
- https://github.com/chintu-777/jaeger/pull/1
- https://github.com/coji/remix-docs-ja/pull/55
- https://github.com/DaveMBush/SmartNgRX/pull/622
- https://github.com/DaveMBush/SmartNgRX/pull/481
- https://github.com/dkittle/party-connections/pull/6
- https://github.com/Drajad-Kusuma-Adi/onstudy-backend/pull/6
- https://github.com/imaami/libcanth/pull/2
# Questions to ask to label the review comments
review_comment_labels: []
# - label: correct
# question: Is this comment correct?
# Benchmark reviews generated by running
# python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews:
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4607
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/338
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4631
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/339
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4647
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/340
- dataset_pull_request: https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
review_pull_request: https://github.com/sourcery-ai-experiments/-Orange-OpenSource-oorobot/pull/2
- dataset_pull_request: https://github.com/2lambda123/galaxyproject-galaxy/pull/12
review_pull_request: https://github.com/sourcery-ai-experiments/galaxyproject-galaxy/pull/1
- dataset_pull_request: https://github.com/adityask98/Hotaru/pull/10
review_pull_request: https://github.com/sourcery-ai-experiments/Hotaru/pull/2
- dataset_pull_request: https://github.com/agdas/vscode/pull/2
review_pull_request: https://github.com/sourcery-ai-experiments/vscode/pull/3
- dataset_pull_request: https://github.com/agluszak/hirschgarten/pull/2
review_pull_request: https://github.com/sourcery-ai-experiments/hirschgarten/pull/2
- dataset_pull_request: https://github.com/alikuxac/utilities/pull/10
review_pull_request: https://github.com/sourcery-ai-experiments/utilities/pull/2
- dataset_pull_request: https://github.com/AlphaDev87/timba-api/pull/49
review_pull_request: https://github.com/sourcery-ai-experiments/timba-api/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/239
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/3
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2.File/pull/36
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2.File/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/233
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/4
|
gharchive/pull-request
| 2024-09-20T13:34:31
|
2025-04-01T04:35:55.206879
|
{
"authors": [
"ruancomelli"
],
"repo": "sourcery-ai-experiments/foundations",
"url": "https://github.com/sourcery-ai-experiments/foundations/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2537328211
|
Code rabbit try again
Summary by CodeRabbit
New Features
Enhanced functionality for specifying build requirements in Benchmark and UnitTests classes.
Simplified connection process in the BspConnection interface.
Bug Fixes
Improved handling of token lexing and validation in the lexer tests.
Documentation
Added support for new file types and syntax highlighting for Bazelrc files.
Tests
Introduced comprehensive unit tests for ProjectSyncTask and Java module transformations.
Chores
Updated dependencies for OpenTelemetry libraries to improve performance and stability.
This is an experiment review for experiment mermaid_diagrams
Run ID: mermaid_diagrams/run_2024-09-19T17-40-14_v1-22-0-131-gcca0a8c56-dirty
The benchmark review for this pull request can be found at https://github.com/sourcery-ai-experiments/hirschgarten/pull/2.
This pull request was cloned from https://github.com/agluszak/hirschgarten/pull/2. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)
Experiment configuration
review_config:
# User configuration for the review
# - benchmark - use the user config from the benchmark reviews
# - <value> - use the value directly
user_review_config:
enable_ai_review: true
enable_rule_comments: false
enable_complexity_comments: false
enable_security_comments: false
enable_tests_comments: false
enable_comment_suggestions: false
enable_functionality_review: false
enable_pull_request_summary: false
enable_review_guide: true
enable_approvals: false
ai_review_config:
# The model responses to use for the experiment
# - benchmark - use the model responses from the benchmark reviews
# - llm - call the language model to generate responses
model_responses:
comments_model: benchmark
comment_area_model: benchmark
comment_validation_model: benchmark
comment_suggestion_model: benchmark
complexity_model: benchmark
functionality_model: benchmark
security_model: benchmark
tests_model: benchmark
pull_request_summary_model: benchmark
review_guide_model: llm
overall_comments_model: benchmark
# The pull request dataset to run the experiment on
pull_request_dataset:
- https://github.com/sourcery-ai/core/pull/4607
- https://github.com/sourcery-ai/core/pull/4631
- https://github.com/sourcery-ai/core/pull/4647
# CodeRabbit examples:
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/alexsnow348/insightface/pull/46
- https://github.com/alikuxac/utilities/pull/10
- https://github.com/AlphaDev87/timba-api/pull/49
- https://github.com/AngeloTadeucci/Maple2/pull/239
- https://github.com/AngeloTadeucci/Maple2.File/pull/36
- https://github.com/AngeloTadeucci/Maple2/pull/233
# Questions to ask to label the review comments
review_comment_labels: []
# - label: correct
# question: Is this comment correct?
# Benchmark reviews generated by running
# python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews:
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4607
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/338
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4631
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/339
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4647
review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/340
- dataset_pull_request: https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
review_pull_request: https://github.com/sourcery-ai-experiments/-Orange-OpenSource-oorobot/pull/2
- dataset_pull_request: https://github.com/2lambda123/galaxyproject-galaxy/pull/12
review_pull_request: https://github.com/sourcery-ai-experiments/galaxyproject-galaxy/pull/1
- dataset_pull_request: https://github.com/adityask98/Hotaru/pull/10
review_pull_request: https://github.com/sourcery-ai-experiments/Hotaru/pull/2
- dataset_pull_request: https://github.com/agdas/vscode/pull/2
review_pull_request: https://github.com/sourcery-ai-experiments/vscode/pull/3
- dataset_pull_request: https://github.com/agluszak/hirschgarten/pull/2
review_pull_request: https://github.com/sourcery-ai-experiments/hirschgarten/pull/2
- dataset_pull_request: https://github.com/alikuxac/utilities/pull/10
review_pull_request: https://github.com/sourcery-ai-experiments/utilities/pull/2
- dataset_pull_request: https://github.com/AlphaDev87/timba-api/pull/49
review_pull_request: https://github.com/sourcery-ai-experiments/timba-api/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/239
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/3
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2.File/pull/36
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2.File/pull/2
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/233
review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/4
|
gharchive/pull-request
| 2024-09-19T20:41:36
|
2025-04-01T04:35:55.213943
|
{
"authors": [
"ruancomelli"
],
"repo": "sourcery-ai-experiments/hirschgarten",
"url": "https://github.com/sourcery-ai-experiments/hirschgarten/pull/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
247504729
|
RFE: add --url option to download a package from a specific URL
Request for Enhancement (RFE)
With the advent of the Sourcery Institute fork of GCC, it would be nice to be able to redirect the installer to grab a package from a different URL than the usual one. I'll work on adding a --url (or --URL?) option to facilitate something like the following, which would install the freshly minted release of a branch that provides experimental, partial support for Fortran 2015 teams:
./install.sh --package gcc --url https://github.com/sourceryinstitute/gcc/archive/teams.tar.gz
This will become increasingly useful if more gfortran/OpenCoarrays contributors are willing to push their compiler edits to a branch of the aforementioned fork rather than distributing patches, which can sometimes be tricky and time-consuming to apply.
And just when I think we're making progress on the issue backlog... 😄
I'm being silly, obviously. This is a good idea, and will certainly come in handy.
@zbeekman No worries. I just submitted a pull request that fixes this issue. I wanted to have a GCC build running in the background while I finish a proposal. ;)
|
gharchive/issue
| 2017-08-02T20:00:21
|
2025-04-01T04:35:55.221443
|
{
"authors": [
"rouson",
"zbeekman"
],
"repo": "sourceryinstitute/OpenCoarrays",
"url": "https://github.com/sourceryinstitute/OpenCoarrays/issues/424",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1087981993
|
ctest failure
I compiled the freshly cloned library on my laptop (i5-6300U) with GFortran 10 on Ubuntu 20.04.3 amd64 LTS using the provided instructions in the README file. GCC was installed from default Ubuntu repos using sudo apt install gfortran-10, and CMake was installed from the official Kitware PPA.
The error in question is:
Test project /home/wyp/work/reference-counter/build
Start 1: count-leaks
1/1 Test #1: count-leaks ......................***Failed Required regular expression not found. Regex=[Test passed.
] 0.01 sec
Here are the relevant version strings:
$ uname -a
Linux wyp-ThinkPad-13 5.11.0-43-generic #47~20.04.2-Ubuntu SMP Mon Dec 13 11:06:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue
Ubuntu 20.04.3 LTS \n \l
$ gfortran-10 --version
GNU Fortran (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ cmake --version
cmake version 3.22.1
CMake suite maintained and supported by Kitware (kitware.com/cmake).
The full terminal log can be accessed here
I guess commit b63c5dd and PR #6 have rendered this issue obsolete. Closing.
|
gharchive/issue
| 2021-12-23T20:42:52
|
2025-04-01T04:35:55.224401
|
{
"authors": [
"wyphan"
],
"repo": "sourceryinstitute/reference-counter",
"url": "https://github.com/sourceryinstitute/reference-counter/issues/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
47754165
|
Support arbitrary options for tunnels
Also, fix ServerSpec to work with version 2.0+
Is anything holding this up?
Hi @ssevertson
Thanks for sending in this pull request. DNSimple has adopted this cookbook from Heavy Water and I've cleaned up the master branch to get the test suites working correctly again. If you would like to get these changes into master, please rebase against master and we can look at merging this in once again.
Rebased into #48
|
gharchive/pull-request
| 2014-11-04T19:27:53
|
2025-04-01T04:35:55.229599
|
{
"authors": [
"jhmartin",
"josephholsten",
"martinisoft",
"ssevertson"
],
"repo": "sous-chefs/chef-stunnel",
"url": "https://github.com/sous-chefs/chef-stunnel/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
227256306
|
Cannot select version of HAProxy
Cookbook version
4.2.0
Chef-client version
12.13
Platform Details
Opsworks Amazon Linux 2017.03
Scenario:
I am trying to run any version of HAProxy besides 1.5.2, which is what it seems to be defaulting to.
haproxy_install 'package' do
source_version node[:haproxy][:source_version]
source_url node[:haproxy][:source_url]
source_checksum node[:haproxy][:source_checksum]
end
default[:haproxy][:source_version] = '1.7.5'
default[:haproxy][:source_url] = 'http://www.haproxy.org/download/1.7/src/haproxy-1.7.5.tar.gz'
default[:haproxy][:source_checksum] = 'b04d7db6383c662eb0a421a95af7becac6d9744a1abf0df6b0280c1e61416121'
I copied these values over from the install.rb file in the repo, but for some reason 1.5.2 keeps being installed. I also see no mention in the repo of 1.5 or 1.5.2, so I'm not sure why it's installing that version.
looks like you are missing install_type 'source' on the resource
oh, nvm, i see it
You are telling it to install package so it ignores your other options
haproxy_install 'package'
haproxy_install 'source' do
source_version node[:haproxy][:source_version]
source_url node[:haproxy][:source_url]
source_checksum node[:haproxy][:source_checksum]
end
doing this seems to install the correct version, now just working out some kinks for starting it up
Still not sure why it would install 1.5.2 when running haproxy_install 'package'
1.5.2 is probably the version that's available in the yum repos available to the instance
ah right.
this can be closed now.
Just a quick note for reference if anyone else runs into this issue.
directory '/var/lib/haproxy' do
owner 'root'
group 'root'
mode '0777'
action :create
end
had to create this folder to get stats to work properly with 1.7.5
@paulius005 glad to hear. you can probably store the stats socket somewhere else also.
|
gharchive/issue
| 2017-05-09T05:42:46
|
2025-04-01T04:35:55.235483
|
{
"authors": [
"paulius005",
"rshade",
"shortdudey123"
],
"repo": "sous-chefs/haproxy",
"url": "https://github.com/sous-chefs/haproxy/issues/220",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
967508522
|
Automated PR: Standardising Files
This PR will standardise the files we have with out agreed spec in sous-chefs/repo-management.
This repo has been identified by topic(s) of chef-cookbook
Released as: 8.1.1
|
gharchive/pull-request
| 2021-08-11T21:38:37
|
2025-04-01T04:35:55.236849
|
{
"authors": [
"kitchen-porter"
],
"repo": "sous-chefs/users",
"url": "https://github.com/sous-chefs/users/pull/463",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
975588092
|
Support some valid syntax
Support [_0-9] words for class/module/interface
Support 2 or more whitespace characters.
Support extend/prepend with module.
Before
After
Sorry, I didn't see https://github.com/soutaro/vscode-rbs-syntax/pull/9 . Some of the content is duplicated.
|
gharchive/pull-request
| 2021-08-20T12:58:59
|
2025-04-01T04:35:55.238889
|
{
"authors": [
"ksss"
],
"repo": "soutaro/vscode-rbs-syntax",
"url": "https://github.com/soutaro/vscode-rbs-syntax/pull/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
891132695
|
[Samples][Java] Fix disparities of 55.teams-link-unfurling sample
Related to microsoft/botbuilder-java1165
Proposed Changes
We compared the documentation/code migration/behavior of the 55.teams-link-unfurling sample between Java and C# and we found disparities and fixes that this PR includes.
Update README
Clean-up autogenerated files/folder related to Eclipse (e.g. target/.classpath)
Add package-info.java file
Testing
Teams Link Unfurling sample working as expected
Merged in MS
|
gharchive/pull-request
| 2021-05-13T15:19:20
|
2025-04-01T04:35:55.249497
|
{
"authors": [
"Batta32"
],
"repo": "southworks/BotBuilder-Samples",
"url": "https://github.com/southworks/BotBuilder-Samples/pull/263",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1332489450
|
Artifact Containers collide with artifacts
Description
This is especially infuriating because you kind of need them to safely transport artifacts.
Reproduction
Grab an artifact
Grab an artifact container
Try to move the artifact onto/into the artifact container to contain it.
Notice that you can't and the artifact instead just pushes around the container.
dupe of #9695
|
gharchive/issue
| 2022-08-08T22:52:10
|
2025-04-01T04:35:55.312404
|
{
"authors": [
"EmoGarbage404",
"LordEclipse"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/10447",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2473433216
|
Security headsets are contraband for lawyers.
Description
This is inconsistent because they spawn with one.
Screenshots
Already listed in #31047
|
gharchive/issue
| 2024-08-19T13:55:46
|
2025-04-01T04:35:55.314031
|
{
"authors": [
"UbaserB",
"slarticodefast"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/31205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2010607911
|
make fridges nearly immune to explosions
About the PR
make fridge receive 1% damage from nuke
Why / Balance
hide in fridge to survive nuclear bomb
I don't think this is abusable: if you bothered to bring a fridge against a china-lake and somehow managed to make use of it (considering ping and all) it should be fine; it shouldn't ever be viable against C4 (used against structures) or minibombs (used unexpectedly)
Media
[x] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Changelog
:cl:
tweak: Refrigerators are now nearly immune to explosions and even nukes.
gus fridg
|
gharchive/pull-request
| 2023-11-25T13:08:44
|
2025-04-01T04:35:55.317904
|
{
"authors": [
"BasedUser",
"Ilya246"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/21896",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2094616789
|
Lowers Syndicate Reinforcement TC Cost
About the PR
Syndicate Reinforcement Cost has been lowered to 14 TC
Nuclear Operatives' Reinforcements have been lowered to 15 TC
Why / Balance
https://discord.com/channels/310555209753690112/311537926376783886/1199068205398892616
At the moment, this is just a waste of your TC even if you band together the TC of 2 people. It just isn't worth the price.
Also, there is no guarantee the person who becomes your reinforcement is even competent, let alone robust; they come in the most obvious gear available, which isn't good, and they're not on the manifest, so they're prone to be perma-ed after the first encounter.
Not only do you get nothing worthwhile, you bank all your TC on this person being worthwhile and cannot afford anything of any value afterwards.
(For Nuclear Operatives, the cost is only slightly lowered to stop Nukies amassing a 10 man squad or smth)
Media
[X] This PR does not require an ingame showcase
Changelog
:cl: DangerRevolution
tweak: Changed Syndicate Reinforcement Cost to 14 TC
tweak: Changed Nuclear Operative Reinforcement Cost to 15 TC
I don't think nukies reinforcement needs to be touched, only the default syndie one. Nukie reinforcements are already powerful.
I agree with Kadeo
I don't think nukies reinforcement needs to be touched, only the default syndie one. Nukie reinforcements are already powerful.
I agree with Kadeo
I'll leave it for 24 more hours and just update it to whatever ppl agree on
nukies should stay 16
Is there a point to even be microadjusting TC amounts.
Is there a point to even be microadjusting TC amounts.
I can drop it more if you want
|
gharchive/pull-request
| 2024-01-22T19:35:59
|
2025-04-01T04:35:55.323446
|
{
"authors": [
"DangerRevolution",
"Dutch-VanDerLinde",
"Kadeo64",
"UbaserB",
"metalgearsloth"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/24410",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2392041525
|
Fix Antag Bans
About the PR
Fixes the currently half built Antag role banning system to actually work.
Currently working:
Nukeops(Commander and Nukeops medic)
Thief
Traitor (Including SleeperAgents mid-round)
Head Rev
Initial infected
This is a starting point for me to complete all antags and ghost roles but just this set will help admin load and issues a great deal.
Why / Balance
Several commonly occurring admin issues relating to abuse of antag roles or ignoring game flow to just chase down antag roles can be effectively deterred with antag bans.
Technical details
List AntagPrototypes in the roleban menu and allow them to be saved as rolebans. Check to remove players from the potential selection pool in AntagSelectionSystem.
Media
[X] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Breaking changes
none
Changelog
Make sure to take this Changelog template out of the comment block in order for it to show up.
:cl: Repo
ADMIN:
fix: Fixed antag bans.
fixes #14356 #24781
A traitor antag ban would prevent rolling sleeper correct?
A traitor antag ban would prevent rolling sleeper correct?
Just had a quick test, as it wasn't one of the ones I initially tested. Looks like this does cover the SleeperAgent midround rule, as it uses the same AntagSelectionSystem.
how would a rev ban work? that could lead to metagaming, but I suppose the ability to ban people from rev would be desired to have, given that some people use revs as a free agent token instead of being a team antagonist and following orders.
how would a rev ban work? that could lead to metagaming, but I suppose the ability to ban people from rev would be desired to have, given that some people use revs as a free agent token instead of being a team antagonist and following orders.
Currently brainstorming ideas with the Wizden Admin team for Zombies (not initial infected) and Revs (follower).
Currently this PR would not stop a player converting into a Rev or Zombie.
Zombies are currently pretty abuse-proof after several reworks over the last year, so I don't think it matters too much.
Revs are tricky: having them just not convert would essentially be giving the banned player a free mindshield in a sense, or if the conversion killed them it would be quite noticeable. If someone has a novel idea for how it could be handled, I can try to implement it in a subsequent PR.
huge
To blacklist normal revs here's my idea: If you are blacklisted from normal revs an icon (that only revs can see) will appear in the same location the rev icon appear, instead of being an R with a blue/red background, the icon with be a white "🛇", if you try to convert someone with a rev blacklist nothing happens and maybe some text appears similar to when you try to mindshield a headrev. Revs can treat blacklisted people similar to mindshielded people (kill them)
To blacklist normal revs here's my idea: If you are blacklisted from normal revs an icon (that only revs can see) will appear in the same location the rev icon appear, instead of being an R with a blue/red background, the icon with be a white "🛇" with a black background (or whatever colors you wish), if you try to convert someone with a rev blacklist nothing happens and maybe some text appears similar to when you try to mindshield a headrev. Revs can treat blacklisted people similar to mindshielded people (kill them)
That just feels like a crew disadvantage due to pre-round conditions. Sec just gets an extra mindshielded person, and said person has a worse time because they just get killed for things they did outside of that shift.
For the conversion antag stuff, imo if someone who is antag banned is converted then they should be force ghosted and turned into a ghost role (which they obv shouldn't be able to take themselves). Iirc, that's how it was done in ss13.
So they get bonked and insta-ghost, to the players in game it's just like SSD/leaving?
In an ideal world, admins would be able to be as granular as they want, but I think for that to be viable the system would have to be reworked to allow bans of categories. Right now the way a category ban is done is that it just applies an individual ban for each role in the category.
I wasn't too sure how this would affect downstream forks, as I'm woefully unaware of the types of antags they have. I think we can have our cake and eat it here though. What about all the granular options plus one that is 'All'/'Any' etc.? If that is picked, it overrides any granular ones and does not allow them to enter the antag pool.
This would also future-proof things: if future antags were added, the player would still have a total antag ban.
For the conversion antag stuff, imo if someone who is antag banned is converted then they should be force ghosted and turned into a ghost role (which they obv shouldn't be able to take themselves). Iirc, that's how it was done in ss13.
In SS13, cult conversions would just delete the person if they are antag banned or refuse the conversion.
Would it change the character? Cause I wouldn't want someone taking a ghost role of my character during revs. Zombies don't matter as much because there's much less RP once you are a zombie, but revs seem like they could cause problems.
Updated the role ban panel as it was impossible to use properly.
Zombies ghost on conversion now.
Keeping in draft while I finish out the rev conversion.
Would be useful to update lobby screen but not a blocker.
Temp Role ban:
Indef Role ban:
Someone was having a moan about this in a recent ban, and it was only a few extra lines to use the methods in the job tab too, so now players can see their role bans. Yay scope creep! 👎
Scope creep extension, Temp role ban.
Scope creep extension, Perm role ban.
Still have the rev conversion to do.
Got Revs done. I expanded a little on the "just delete the entity" idea that was floated, as I thought that would also be metagamed if people just saw a mob disappearing.
The player that is banned will get a pop-up that they are banned from that role and can't be a rev. Then 10-60s later, Random Death occurs.
Demo of the failed conversion, nothing noticeable externally:
https://github.com/space-wizards/space-station-14/assets/47093363/c336844a-853c-416a-a65e-76e6b7b5f747
Here is a demo of the random deaths: Burned, Shocked, Vomit/poison, Bluespace, Bleedout/slash, 3.6 roentgen/Radiation.
https://github.com/space-wizards/space-station-14/assets/47093363/1c60e23f-eaba-4ce3-9d31-42331bee4d86
I have also scope creeped here again and given the admins a free smite with it if they do not know what to kill the shitter with.
Re-Test Checklist:
[ ] Antagban - Nukeops
[ ] Antagban - Thief
[ ] Antagban - Traitor
[ ] Antagban - Zombie - Initial infected
[ ] Antagban - Zombie - Turned
[ ] Antagban - Head Rev
[ ] Antagban - Converted Rev
[ ] Antagban - All
[ ] UI - RoleBan info displayed on Lobby Job selection
[ ] UI - Antagban info displayed on Lobby Antag selection
[ ] UI - Roleban Admin menu displays.
[ ] Random Smite
@metalgearsloth Once we have finished up the reviews, can we hold off on the merge until we define some admin policies for usages and stuff?
So they get bonked and insta-ghost, to the players in game it's just like SSD/leaving?
So technically it's a different state if the mind is fully removed, it's the purple "catatonic" text.
I will not be completing this; anyone can finish it or use it if they want.
|
gharchive/pull-request
| 2024-07-05T07:44:27
|
2025-04-01T04:35:55.344863
|
{
"authors": [
"Cojoke-dot",
"JackSage",
"Killerqu00",
"PJB3005",
"StarryEyes0822",
"Titian3",
"crazybrain23",
"lzk228",
"ps3moira"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/29730",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
222312614
|
Multiple lines by typing enter
Hi,
Once again: great work, especially on the 0.9.3 pre-release and the formless feature.
I saw a few weeks ago that the multiline feature was added, but it's not obvious for our users to add new lines by typing shift + enter. I would like to be able to type just enter to add a line when it's a multiline input, and of course to keep moving to the next question by typing enter when it's a simple input.
In some cases it's important to easily invite the user to enter several lines.
What do you think about that ?
Regards
Hi @MediacraftFR,
Thanks for the comment and valid consideration. However, I would argue that the prevalent approach in chat interfaces is for shift-enter to equal a new line. Just noticed that Facebook Messenger discontinued the ability to choose and only allows shift-enter for new lines.
Hi @danielfriis ,
That's correct for Messenger, and you're right that it's the better approach for a good chat experience.
But the problem is: if a user wants to say several things, he types a first sentence and hits Enter (not shift+enter), and the bot continues to the next question without letting the user finish answering the previous question completely. I know the user can update by clicking on a previous answer, but that's not obvious to all users.
Do you see my point?
Regards,
Get your point 100%. But I will still argue that the prevalent UX is to have shift + enter do the new line. This is also the approach that Typeform takes. Also, if this wasn't the case, all inputs except multiline would submit by enter, which I believe would be counterintuitive.
However, we will definitely keep your suggestion in mind and consider adding an option to do the reverse.
Ok, I agree...
However, there is a "bug" to fix in your CF when you edit a multiline user answer.
Imagine you type a multiline input with shift+enter, like:
"qwer
ty
yolo"
when you edit this answer, you will get this in the input:
"qwertyyolo"
The line breaks aren't kept :/
Can you fix this?
Thanks
Hello conversational form team :)
Have you reproduced my issue?
Regards
@felixnielsen can you take a look?
@MediacraftFR @danielfriis I had a look at this, and the input tag does not natively support multiline; this is the core problem. See the example: the first element is a <textarea>, where you can add multiple lines, submit, and re-edit multiline text, but this is not the case with the second field, which is an <input> field.
From my side the issue seems to be that <input> fields have multiline support on the CF side. They shouldn't. Let me know your thoughts.
should prevent multiline when reference tag is of type input
|
gharchive/issue
| 2017-04-18T06:13:42
|
2025-04-01T04:35:55.352788
|
{
"authors": [
"MediacraftFR",
"danielfriis",
"felixnielsen"
],
"repo": "space10-community/conversational-form",
"url": "https://github.com/space10-community/conversational-form/issues/110",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
555214319
|
try it out button on https://spacecraft-repl.com/ does not work
Go to https://spacecraft-repl.com/
scroll down and press try it out.
shows:
The domain now looks re-registered and parked... but not by this project.
|
gharchive/issue
| 2020-01-26T11:21:13
|
2025-04-01T04:35:55.355340
|
{
"authors": [
"TomSpencerLondon",
"mountainash"
],
"repo": "spacecraft-repl/SpaceCraft",
"url": "https://github.com/spacecraft-repl/SpaceCraft/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2370586170
|
Added short quicksync description
Closes #119
Made a few edits.
that was replaced by https://github.com/spacemeshos/docs/pull/124
|
gharchive/pull-request
| 2024-06-24T15:54:04
|
2025-04-01T04:35:55.361397
|
{
"authors": [
"hasanza",
"pigmej"
],
"repo": "spacemeshos/docs",
"url": "https://github.com/spacemeshos/docs/pull/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1459564657
|
Use spacemesh ed25519 instead of stdlib
PoET is the only repository where we are using the standard library implementation for ed25519 in some places. This PR fixes this by replacing the few uses of the library with the spacemesh implementation of ed25519.
The spacemesh version is much less efficient compared to stdlib; unless poet needs ed++ it doesn't make sense to use it. I am actually not sure if poet needs an ed25519 library at all if it won't be verifying signatures.
PoET uses both versions at the moment for:
generating keys (stdlib & spacemesh)
serializing them to bytes (stdlib)
deserializing them from bytes (spacemesh)
All of which make no difference in which flavour of implementation we use. Additionally it is used to:
generate signatures for key extraction
extract keys from signatures
both of which ONLY work with spacemeshos version of ed25519. It makes no sense to use both libraries, and since we need functionality that is not present in the stdlib we should just use the spacemesh version of the library.
@fasmat what are the performance differences for the required functions between stdlib and our implementation?
Poet really needs to be as efficient as possible.
@pigmej we are not using ed25519 in poet any more. It calls a node via grpc and gets validation from there.
|
gharchive/pull-request
| 2022-11-22T10:19:07
|
2025-04-01T04:35:55.365116
|
{
"authors": [
"dshulyak",
"fasmat",
"pigmej"
],
"repo": "spacemeshos/poet",
"url": "https://github.com/spacemeshos/poet/pull/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1123444877
|
Allow plotlib to accept overriding of axes titles
Would allow for a cleaner integration of LaTeX directives.
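A sketch of what the requested override could look like (names and defaults assumed; this is not payload-designer's actual plotlib API):
import matplotlib.pyplot as plt

def plot_line(x, y, xlabel=None, ylabel=None):
    # Plot y vs x, auto-labelling axes unless overrides are given.
    # Passing e.g. xlabel=r'$\lambda$ [nm]' lets callers inject LaTeX
    # directives directly, with no post-plot step.
    fig, ax = plt.subplots()
    ax.plot(x, y)
    ax.set_xlabel(xlabel if xlabel is not None else 'x')
    ax.set_ylabel(ylabel if ylabel is not None else 'y')
    return fig, ax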
Adds an unnecessary extra step to plot command.
|
gharchive/issue
| 2022-02-03T18:57:21
|
2025-04-01T04:35:55.366003
|
{
"authors": [
"DM1122"
],
"repo": "spacesys-finch/payload-designer",
"url": "https://github.com/spacesys-finch/payload-designer/issues/59",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
391921508
|
Add paths and suffixes
This PR introduces code to fix a few small bugs in calib_prep.
Output table entries for repeats and "contained within" are now always lists, and always sorted into increasing order.
When searching for files previously run through the pipeline, non-fits files are ignored.
A calwebb_detector1 configuration file has been added to the repo.
The code now also checks that the calwebb_detector1 pipeline configuration file is present in self.output_directory. If the config file is not present in self.output_directory, then the file from the repo is copied there.
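For reference, the config-file check described above amounts to something like this sketch (the function and file names are illustrative, not necessarily calib_prep's actual code):
import os
import shutil

def ensure_pipeline_config(output_directory, repo_config='calwebb_detector1.cfg'):
    # Copy the calwebb_detector1 config file from the repo into
    # output_directory if it is not already present there.
    target = os.path.join(output_directory, os.path.basename(repo_config))
    if not os.path.isfile(target):
        shutil.copy2(repo_config, target)
    return target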
@arminrest feel free to merge this if you want, and repeat your test for inconsistent results in the final output table's repeated and contained_within columns.
|
gharchive/pull-request
| 2018-12-17T22:44:55
|
2025-04-01T04:35:55.416464
|
{
"authors": [
"bhilbert4"
],
"repo": "spacetelescope/jwst_reffiles",
"url": "https://github.com/spacetelescope/jwst_reffiles/pull/30",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1294545610
|
Update siaf-updates with PRDOPSSOC-055
This PR updates pysiaf with the new PRD release PRDOPSSOC-055. SIAF data has been added to the prd_data directory and removed from the pre_delivery_data directory.
The pre-delivery data is supposed to be deleted. I expected all of the tests here to fail. I'm surprised to see that the 3.7 tests appear to have been successful.
The failures seem to be the same OS-dependent precision difference we've seen before (NIS is checking for differences of ~10e-14 - 10e-15; our documentation says that anything less than 10e-6 is 0).
MacOS:
(pysiaf) minmei:tests dlong$ pytest -v -p no:warnings
========================================================= test session starts ==========================================================
platform darwin -- Python 3.9.12, pytest-7.1.2, pluggy-1.0.0 -- /Users/dlong/miniconda3/envs/pysiaf/bin/python
cachedir: .pytest_cache
rootdir: /Users/dlong/pysiaf, configfile: setup.cfg
collected 32 items
test_aperture.py::test_idl_to_tel PASSED [ 3%]
test_aperture.py::test_hst_fgs_idl_to_tel PASSED [ 6%]
test_aperture.py::test_jwst_aperture_transforms PASSED [ 9%]
test_aperture.py::test_jwst_aperture_vertices PASSED [ 12%]
test_aperture.py::test_raw_transformations PASSED [ 15%]
test_aperture.py::test_jwst_sky_transformations PASSED [ 18%]
test_hst.py::test_hst_aperture_init PASSED [ 21%]
test_hst.py::test_hst_siaf PASSED [ 25%]
test_hst.py::test_hst_amudotrep PASSED [ 28%]
test_iando.py::test_write_jwst_siaf_xml PASSED [ 31%]
test_iando.py::test_write_jwst_siaf_xlsx PASSED [ 34%]
test_match_v2v3.py::test_match_v2v3 PASSED [ 37%]
test_miri.py::test_against_test_data PASSED [ 40%]
test_nirspec.py::test_against_test_data PASSED [ 43%]
test_nirspec.py::test_nirspec_aperture_transforms PASSED [ 46%]
test_nirspec.py::test_nirspec_slit_transformations PASSED [ 50%]
test_nirspec.py::test_sci_det_consistency PASSED [ 53%]
test_plotting.py::test_aperture_plotting PASSED [ 56%]
test_polynomial.py::test_poly PASSED [ 59%]
test_polynomial.py::test_RotateCoeffs PASSED [ 62%]
test_polynomial.py::test_two_step PASSED [ 65%]
test_polynomial.py::test_invert PASSED [ 68%]
test_polynomial.py::test_ShiftCoeffs PASSED [ 71%]
test_projection.py::test_tangent_plane_projection_roundtrip PASSED [ 75%]
test_projection.py::test_project_to_tangent_plane PASSED [ 78%]
test_projection.py::test_deproject_from_tangent_plane PASSED [ 81%]
test_rotations.py::test_attitude PASSED [ 84%]
test_rotations.py::test_attitude_matrix PASSED [ 87%]
test_rotations.py::test_sky_to_tel PASSED [ 90%]
test_rotations.py::test_axial_rotation PASSED [ 93%]
test_rotations.py::test_unit_vector_from_cartesian PASSED [ 96%]
test_tools.py::test_jwst_fgs_to_fgs_matrix PASSED [100%]
========================================================= 32 passed in 31.88s ==========================================================
Linux:
(pysiaf) -bash-4.2$ pytest -v -p no:warnings
========================================================= test session starts ==========================================================
platform linux -- Python 3.9.13, pytest-7.1.2, pluggy-1.0.0 -- /user/dlong/linux/miniconda3/envs/pysiaf/bin/python
cachedir: .pytest_cache
rootdir: /home/dlong/pysiaf, configfile: setup.cfg
collected 32 items
test_aperture.py::test_idl_to_tel FAILED [ 3%]
test_aperture.py::test_hst_fgs_idl_to_tel PASSED [ 6%]
test_aperture.py::test_jwst_aperture_transforms PASSED [ 9%]
test_aperture.py::test_jwst_aperture_vertices PASSED [ 12%]
test_aperture.py::test_raw_transformations PASSED [ 15%]
test_aperture.py::test_jwst_sky_transformations PASSED [ 18%]
test_hst.py::test_hst_aperture_init PASSED [ 21%]
test_hst.py::test_hst_siaf PASSED [ 25%]
test_hst.py::test_hst_amudotrep PASSED [ 28%]
test_iando.py::test_write_jwst_siaf_xml PASSED [ 31%]
test_iando.py::test_write_jwst_siaf_xlsx PASSED [ 34%]
test_match_v2v3.py::test_match_v2v3 PASSED [ 37%]
test_miri.py::test_against_test_data PASSED [ 40%]
test_nirspec.py::test_against_test_data PASSED [ 43%]
test_nirspec.py::test_nirspec_aperture_transforms PASSED [ 46%]
test_nirspec.py::test_nirspec_slit_transformations PASSED [ 50%]
test_nirspec.py::test_sci_det_consistency PASSED [ 53%]
test_plotting.py::test_aperture_plotting PASSED [ 56%]
test_polynomial.py::test_poly PASSED [ 59%]
test_polynomial.py::test_RotateCoeffs PASSED [ 62%]
test_polynomial.py::test_two_step PASSED [ 65%]
test_polynomial.py::test_invert PASSED [ 68%]
test_polynomial.py::test_ShiftCoeffs PASSED [ 71%]
test_projection.py::test_tangent_plane_projection_roundtrip PASSED [ 75%]
test_projection.py::test_project_to_tangent_plane PASSED [ 78%]
test_projection.py::test_deproject_from_tangent_plane PASSED [ 81%]
test_rotations.py::test_attitude PASSED [ 84%]
test_rotations.py::test_attitude_matrix PASSED [ 87%]
test_rotations.py::test_sky_to_tel PASSED [ 90%]
test_rotations.py::test_axial_rotation PASSED [ 93%]
test_rotations.py::test_unit_vector_from_cartesian PASSED [ 96%]
test_tools.py::test_jwst_fgs_to_fgs_matrix PASSED [100%]
=============================================================== FAILURES ===============================================================
___________________________________________________________ test_idl_to_tel ____________________________________________________________
verbose = False
def test_idl_to_tel(verbose=False):
"""Test the transformations between ideal and telescope frames."""
siaf = Siaf('NIRISS')
x_idl, y_idl = get_grid_coordinates(10, (0, 0), 100)
for aper_name in siaf.apertures.keys():
aperture = siaf[aper_name]
for idl_to_tel_method in ['planar_approximation', 'spherical']:
if idl_to_tel_method == 'spherical':
input_coordinate_types = ['polar', 'cartesian']
else:
input_coordinate_types = ['tangent_plane']
for input_coordinates in input_coordinate_types:
v2, v3 = aperture.idl_to_tel(x_idl, y_idl, method=idl_to_tel_method, input_coordinates=input_coordinates, output_coordinates=input_coordinates)
x_idl_2, y_idl_2 = aperture.tel_to_idl(v2, v3, method=idl_to_tel_method, input_coordinates=input_coordinates, output_coordinates=input_coordinates)
x_diff = np.abs(x_idl - x_idl_2)
y_diff = np.abs(y_idl - y_idl_2)
if verbose:
print('{} {}: Aperture {} {} x_diff {} y_diff {}'.format(idl_to_tel_method, input_coordinates, aper_name, input_coordinates, np.max(x_diff), np.max(y_diff)))
if idl_to_tel_method == 'planar_approximation':
threshold = 7e-14
elif idl_to_tel_method == 'spherical':
if input_coordinates == 'polar':
threshold = 6e-13
elif input_coordinates == 'cartesian':
threshold = 5e-8
assert np.max(x_diff) < threshold
assert np.max(y_diff) < threshold
E assert 7.105427357601002e-14 < 7e-14
E + where 7.105427357601002e-14 = <function amax at 0x7fe54e4924c0>(array([1.42108547e-14, 4.97379915e-14, 4.26325641e-14, 7.10542736e-15,\n 2.13162821e-14, 4.97379915e-14, 3.55271368e-14, 7.10542736e-15,\n 2.84217094e-14, 4.97379915e-14, 4.26325641e-14, 4.97379915e-14,\n 2.13162821e-14, 1.42108547e-14, 4.26325641e-14, 7.10542736e-14,\n 1.42108547e-14, 1.42108547e-14, 4.26325641e-14, 4.26325641e-14,\n 5.32907052e-14, 3.55271368e-14, 3.55271368e-15, 2.48689958e-14,\n 5.68434189e-14, 2.84217094e-14, 3.55271368e-15, 2.84217094e-14,\n 5.68434189e-14, 2.48689958e-14, 4.97379915e-14, 1.77635684e-14,\n 7.10542736e-15, 3.90798505e-14, 4.61852778e-14, 1.42108547e-14,\n 1.42108547e-14, 3.90798505e-14, 4.26325641e-14, 1.42108547e-14,\n 2.84217094e-14, 8.88178420e-16, 3.10862447e-14, 5.50670620e-14,\n 2.48689958e-14, 4.44089210e-15, 3.37507799e-14, 5.06261699e-14,\n 2.22044605e-14, 7.10542736e-15, 7.10542736e-15, 2.22044605e-14,\n 5.06261699e-14, 3.37507799e-14, 4.44089210e-15, 2.48689958e-14,\n 5.50670620e-14, 3.10862447e-14, 8.88178420e-16, 2.84217094e-14,\n 0.00000000e+00, 2.84217094e-14, 5.32907052e-14, 2.84217094e-14,\n 0.00000000e+00, 3.19744231e-14, 5.32907052e-14, 2.13162821e-14,\n 3.55271368e-15, 3.55271368e-14, 1.77635684e-14, 4.97379915e-14,\n 3.55271368e-14, 3.55271368e-15, 2.13162821e-14, 6.39488462e-14,\n 3.19744231e-14, 3.55271368e-15, 2.84217094e-14, 6.03961325e-14,\n 4.26325641e-14, 4.26325641e-14, 1.42108547e-14, 1.42108547e-14,\n 7.10542736e-14, 4.26325641e-14, 1.42108547e-14, 2.13162821e-14,\n 4.97379915e-14, 4.26325641e-14, 4.97379915e-14, 2.84217094e-14,\n 7.10542736e-15, 3.55271368e-14, 4.97379915e-14, 2.13162821e-14,\n 7.10542736e-15, 4.26325641e-14, 4.97379915e-14, 1.42108547e-14]))
E + where <function amax at 0x7fe54e4924c0> = np.max
test_aperture.py:57: AssertionError
======================================================= short test summary info ========================================================
FAILED test_aperture.py::test_idl_to_tel - assert 7.105427357601002e-14 < 7e-14
=============================================== 1 failed, 31 passed in 66.67s (0:01:06) ================================================
(pysiaf) -bash-4.2$
|
gharchive/pull-request
| 2022-07-05T16:58:31
|
2025-04-01T04:35:55.458679
|
{
"authors": [
"Witchblade101"
],
"repo": "spacetelescope/pysiaf",
"url": "https://github.com/spacetelescope/pysiaf/pull/271",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1925291830
|
Updating for more readthedocs updates
This PR will include many more changes to the RtD pages.
Ah, one codestyle error snuck in, I'll just fix that here before merging.
|
gharchive/pull-request
| 2023-10-04T02:41:07
|
2025-04-01T04:35:55.466929
|
{
"authors": [
"Russell-Ryan",
"rosteen"
],
"repo": "spacetelescope/slitlessutils",
"url": "https://github.com/spacetelescope/slitlessutils/pull/49",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
555193443
|
Update mission control for eventing module
We need to update the UI to add the following features:
Describe schema for event types
Describe Security rules for event types
Describe default security rule
Done. Coming in v0.16.0.
|
gharchive/issue
| 2020-01-26T08:02:02
|
2025-04-01T04:35:55.477010
|
{
"authors": [
"Jayesh333",
"YourTechBud"
],
"repo": "spaceuptech/space-cloud",
"url": "https://github.com/spaceuptech/space-cloud/issues/669",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
560249492
|
Add a clause field to the query rule
This will let us apply advanced rules based on the result of the database query. We need to make the db result available in args.result; this will always be an array since the operation used by the query rule will be all.
Done
|
gharchive/issue
| 2020-02-05T09:39:20
|
2025-04-01T04:35:55.478074
|
{
"authors": [
"YourTechBud"
],
"repo": "spaceuptech/space-cloud",
"url": "https://github.com/spaceuptech/space-cloud/issues/696",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
137885430
|
Difference between C++ and Go output
I've been running both the C++ and this version of Murmur3 side by side and I've noticed that in cases where a null byte occurs in the middle of the input data, the C++ version will output a different hash.
Now I guess this is a bug in the C++ implementation, as Go will iterate over the correct array length, and not just stop at a null byte. But, given that the C++ version is the "official" one, should this version implement the bug as a feature?
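For illustration, a short Go snippet showing the effect of an embedded NUL byte (Go hashes the full slice, while C code that derives the length via strlen() would effectively hash only the prefix before the NUL):

package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
)

func main() {
	withNul := []byte("abc\x00def") // full slice, NUL included
	prefix := []byte("abc")         // what a strlen()-based C caller would hash

	fmt.Printf("%#x\n", murmur3.Sum32(withNul))
	fmt.Printf("%#x\n", murmur3.Sum32(prefix)) // different hash
}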
Well, I doubt there's a bug in the original implementation ;), but I could use a reproducer (with the C++ version you're using).
I would suspect API misuse, given the special significance of the null byte in C/C++.
Thanks.
|
gharchive/issue
| 2016-03-02T14:20:10
|
2025-04-01T04:35:55.543232
|
{
"authors": [
"panamafrancis",
"spaolacci"
],
"repo": "spaolacci/murmur3",
"url": "https://github.com/spaolacci/murmur3/issues/11",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1531588610
|
[bug] v1.14.0 introduces inconsistency between CRuby and JRuby
First off, thanks for working on nokogiri! The latest release looks like it has some great improvements. Unfortunately, here's a bug. :)
Since v1.14.0, the following code has different results on CRuby and JRuby (where CRuby shows the correct, and previous, behaviour):
require 'nokogiri'
data = "<div>\n<pre><code>one\ntwo\nthree\nfour\n</code></pre>\n</div>"
doc = Nokogiri::XML::DocumentFragment.parse(data)
opts = { :save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1, :indent => 0, :encoding => 'UTF-8' }
puts doc.css("div").children.to_xml(opts).inspect
Result on CRuby: "\n<pre><code>one\ntwo\nthree\nfour\n</code></pre>\n"
Result on JRuby: "\n<pre>\n <code>one\ntwo\nthree\nfour\n</code>\n</pre>\n\n"
That is: on JRuby, a newline and two spaces are mistakenly added inside the <pre> tag, despite using :indent => 0 and despite the fact that :save_with has FORMAT disabled in both cases.
Thanks for reporting, I'll take a look after the weekend.
@dometto Thanks again for reporting.
I think what's going on here is that you're toggling a bit in the SaveOptions bitmask, instead of explicitly turning it on or off, and because the underlying bits have changed, this is doing the opposite of what you want.
Let me explain what's going on, step by step ...
First, in JRuby Nokogiri v1.13.0:
RUBY_DESCRIPTION # => "jruby 9.4.0.0 (3.1.0) 2022-11-23 95c0ec159f OpenJDK 64-Bit Server VM 11.0.17+8-post-Ubuntu-1ubuntu220.04 on 11.0.17+8-post-Ubuntu-1ubuntu220.04 [x86_64-linux]"
Nokogiri::VERSION # => "1.13.10"
Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML # => 19
Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1 # => 18
But in Nokogiri v1.14.0:
RUBY_DESCRIPTION # => "jruby 9.4.0.0 (3.1.0) 2022-11-23 95c0ec159f OpenJDK 64-Bit Server VM 11.0.17+8-post-Ubuntu-1ubuntu220.04 on 11.0.17+8-post-Ubuntu-1ubuntu220.04 [x86_64-linux]"
Nokogiri::VERSION # => "1.14.0"
Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML # => 18
Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1 # => 19
The value of DEFAULT_XHTML changed from FORMAT | NO_DECLARATION | AS_XHTML to NO_DECLARATION | AS_XHTML in commit a0194a02 as an effort to clean up, standardize, and centralize how these flags are set and used.
Removing FORMAT seems to be what you were already doing with this code:
:save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1
which with Nokogiri 1.13.10 turns off the FORMAT option, but with Nokogiri v1.14.0 turns on the FORMAT option.
A better way to do this would be to avoid toggling the bit with the ^ (bitwise XOR) and instead explicitly set the bit off:
Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML & (~Nokogiri::XML::Node::SaveOptions::FORMAT)
Putting it all together, here we see this is backwards-compatible with v1.13.x:
Nokogiri::VERSION # => "1.13.10"
data = "<div>\n<pre><code>one\ntwo\nthree\nfour\n</code></pre>\n</div>"
doc = Nokogiri::XML::DocumentFragment.parse(data)
opts = { :save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1, :indent => 0, :encoding => 'UTF-8' }
# => {:save_with=>18, :indent=>0, :encoding=>"UTF-8"}
doc.css("div").children.to_xml(opts)
# => "\n" +
# "<pre><code>one\n" +
# "two\n" +
# "three\n" +
# "four\n" +
# "</code></pre>\n"
opts = { :save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML & (~Nokogiri::XML::Node::SaveOptions::FORMAT), :indent => 0, :encoding => 'UTF-8' }
# => {:save_with=>18, :indent=>0, :encoding=>"UTF-8"}
doc.css("div").children.to_xml(opts)
# => "\n" +
# "<pre><code>one\n" +
# "two\n" +
# "three\n" +
# "four\n" +
# "</code></pre>\n"
and works properly in Nokogiri v1.14.x:
Nokogiri::VERSION # => "1.14.0"
data = "<div>\n<pre><code>one\ntwo\nthree\nfour\n</code></pre>\n</div>"
doc = Nokogiri::XML::DocumentFragment.parse(data)
opts = { :save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML ^ 1, :indent => 0, :encoding => 'UTF-8' }
# => {:save_with=>19, :indent=>0, :encoding=>"UTF-8"}
doc.css("div").children.to_xml(opts)
# => "\n" +
# "<pre>\n" +
# " <code>one\n" +
# "two\n" +
# "three\n" +
# "four\n" +
# "</code>\n" +
# "</pre>\n" +
# "\n"
opts = { :save_with => Nokogiri::XML::Node::SaveOptions::DEFAULT_XHTML & (~Nokogiri::XML::Node::SaveOptions::FORMAT), :indent => 0, :encoding => 'UTF-8' }
# => {:save_with=>18, :indent=>0, :encoding=>"UTF-8"}
doc.css("div").children.to_xml(opts)
# => "\n" +
# "<pre><code>one\n" +
# "two\n" +
# "three\n" +
# "four\n" +
# "</code></pre>\n"
But really, since Nokogiri v1.14.0 is doing what you want already, you should be able to simplify to:
Nokogiri::VERSION # => "1.14.0"
opts = { :indent => 0, :encoding => 'UTF-8' }
# => {:indent=>0, :encoding=>"UTF-8"}
doc.css("div").children.to_xml(opts)
# => "\n" +
# "<pre><code>one\n" +
# "two\n" +
# "three\n" +
# "four\n" +
# "</code></pre>\n"
I hope all that makes sense, and I apologize for the confusion caused by changing the value of this options mask. The intention behind the change was to have better and more consistent defaults.
Thank you for the very clear explanation and the help, @flavorjones! Much appreciated. ❤️ And apologies for not looking closer: I didn't notice the way I was setting the property was not robust relative to changes in the values of the constants nokogiri defines.
|
gharchive/issue
| 2023-01-13T01:48:37
|
2025-04-01T04:35:55.588922
|
{
"authors": [
"dometto",
"flavorjones"
],
"repo": "sparklemotion/nokogiri",
"url": "https://github.com/sparklemotion/nokogiri/issues/2761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
273510986
|
Loading a large piece of html into the editor is very slow in safari
Hi,
in my use case, I have a simpleMDE editor where I load html content (emails that the app receives).
When this html is large, safari takes a lot of time displaying the content, whereas Chrome and Firefox are very fast.
To reproduce, just paste the HTML in the file
https://gist.github.com/jbarata/2e0a04aa3883b9e29d1f1769fcfd963e#file-largehtmlmail-txt
into one of the SimpleMDE sample editors at https://simplemde.com
You'll see that this takes around 38 seconds in Safari but just 1 second in Chrome or Firefox.
Note: using macOS High Sierra with Safari 11
Has anyone experienced this?
Thanks
This still seems to be an issue, though the owner of the repo doesn't seem to be around anymore.
|
gharchive/issue
| 2017-11-13T17:22:03
|
2025-04-01T04:35:55.593573
|
{
"authors": [
"Matthijn",
"jbarata"
],
"repo": "sparksuite/simplemde-markdown-editor",
"url": "https://github.com/sparksuite/simplemde-markdown-editor/issues/656",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
855404320
|
Proposal: Color coding of transactions in transaction view
I propose to color-code transactions depending on their three states:
Grey: unconfirmed (already implemented)
Green: received transaction
Red: sent transaction
Thanks for the suggestion and clear mockup.
I am cautious of adding too much colour, so I've started off with just red text on the negative amounts. This matches Electrum's approach, I think it might be enough:
Implemented in 31fb527. Happy to consider further though.
I have the same concerns - I'd like to keep it simple and clean, still with a fast visual "feeling" of how much is incoming and outgoing...
If you choose to color unconfirmed transactions too, consider doing it with transparency - so that no unconfirmed numbers have the exact same color as confirmed ones.
I have the same concerns - I'd like to keep it simple and clean, still with a fast visual "feeling" of how much is incoming and outgoing...
Will consider this further.
If you choose to color unconfirmed transactions too, consider doing it with transparency - so that no unconfirmed numbers have the exact same color as confirmed ones.
This is actually the case - but I agree it's more difficult to see in the red color. I've reduced the opacity from 80 to 70% on unconfirmed transactions:
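For anyone curious how such styling is typically done, a hypothetical JavaFX CSS sketch (selector names are illustrative, not Sparrow's actual stylesheet):

.transaction-amount-negative {
    -fx-text-fill: #ca1243;   /* red text on sent (negative) amounts */
}
.transaction-unconfirmed {
    -fx-opacity: 0.7;         /* unconfirmed entries drawn at 70% opacity */
}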
This is already implemented, right? :) Let's close it.
|
gharchive/issue
| 2021-04-11T20:49:19
|
2025-04-01T04:35:55.603518
|
{
"authors": [
"craigraw",
"tychsen"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/100",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1998854298
|
Error getting fee estimates when using own node
Hi there
I have a pruned (prune=150000) bitcoind running which is at tip, and Sparrow 1.8.0.
I set up the connection to the node and click test connection, which results in the following:
Could not connect:
Error getting fee estimates: {1=ErrorMessage{code=-32603, message=Internal error, data=null}, 2=ErrorMessage{code=-32603, message=Internal error, data=null}, 50=ErrorMessage{code=-32603, message=Internal error, data=null}, 3=ErrorMessage{code=-32603, message=Internal error, data=null}, 4=ErrorMessage{code=-32603, message=Internal error, data=null}, 5=ErrorMessage{code=-32603, message=Internal error, data=null}, 25=ErrorMessage{code=-32603, message=Internal error, data=null}, 10=ErrorMessage{code=-32603, message=Internal error, data=null}}
I see no errors in the bitcoind logs and can get info back from other rpc calls like getblockchaininfo, which don't indicate anything is wrong with the node.
Are there particular bitcoind settings which need to be enable to work with Sparrow?
It's clear that Bitcoin Core is running into problems estimating fee rate - probably you would get errors if you tried
bitcoin-cli estimatesmartfee 1
bitcoin-cli estimatesmartfee 10
bitcoin-cli estimatesmartfee 50
Unfortunately Core doesn't log anything if it has errors on the RPC interface.
I've seen a few of these problems though, so I have made a change where Sparrow will fallback to a default fee rate if this error is encountered. 74c33702.
|
gharchive/issue
| 2023-11-17T11:09:50
|
2025-04-01T04:35:55.606292
|
{
"authors": [
"biatwc",
"craigraw"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/1165",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2763827889
|
Import wallet using bitcoin core output descriptor fails to enable "Apply" button
On Sparrow version 2.0.0, if you "Import Wallet..." then select "Import file" to import using the Bitcoin Core "output descriptor" provider type, everything looks good, but the new wallet just sits there forever with everything grayed out except for Settings. I had thought it was initializing or something, but this lasted for hours with no update, so I realized something else must be wrong. It turns out the "Apply" button was also grayed out, so the imported wallet was never actually saved and never triggered initialization to begin with. I solved the problem by deleting a value from the derivation path (like the ' character) and then typing it back in again to "dirty up" the form, which activated the "Apply" button and allowed me to save. From then on the wallet synced from the blockchain and restored successfully.
Please provide an example output descriptor file to reproduce this problem. I suspect it was due to derivation path validation. You can also turn off derivation path validation in the Preferences, and try to reproduce it with the same file.
|
gharchive/issue
| 2024-12-31T00:25:17
|
2025-04-01T04:35:55.609052
|
{
"authors": [
"craigraw",
"mrjbj"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/1575",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
170861792
|
Possibility to make text information in another color
Is anyone interested in being able to make the text information surrounding the symbol another color than the default icon frame color? If so, let me know - I have some ideas, but I don't want to implement options that won't be used.
This can now be done using the infoColor property.
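A minimal usage sketch (the SIDC and text-field values are illustrative):

import ms from "milsymbol";

const symbol = new ms.Symbol("SFG-UCI----D", {
  size: 35,
  uniqueDesignation: "1-1", // text information rendered around the symbol
  infoColor: "blue",        // colors that text independently of the frame color
});

document.body.innerHTML = symbol.asSVG();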
|
gharchive/issue
| 2016-08-12T12:36:33
|
2025-04-01T04:35:55.610652
|
{
"authors": [
"spatialillusions"
],
"repo": "spatialillusions/milsymbol",
"url": "https://github.com/spatialillusions/milsymbol/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
470117637
|
Allow to log system activity without a causer or a null causer
It's not clear from the documentation if this can be done or not.
Sometimes there can be activity triggered on events without a causer. Examples are when an action is taken by an unregistered user, a SYSTEM Cron, or a Bot.
Example log items can be:
'Project 123 was archived by the SYSTEM due to non-activity'
'GithubBot locked this thread on 23/05/2019'
'Anonymous user viewed the phone number'
etc
I think the workaround would be to create users with the names 'SYSTEM', 'Bot' etc, and hardcode that userID. Which can cause problems in production and testing environments.
Is there a better way to log such activities?
Hey,
the causer columns are already nullable; if this line returns null https://github.com/spatie/laravel-activitylog/blob/d5fd05eb9edcb2416ada4f1e026d47ee9f8706a3/src/ActivityLogger.php#L196 you won't have a causer.
If you want to log a system activity while a user is logged in, this could be done by creating the Activity model directly, or by overriding the logger service and changing some methods.
as explained by @Gummibeer.
If you want to customize the "causedBy" logic then you will need to do the following steps:
1. Create a new ActivityLogger class which will extend "Spatie\Activitylog\ActivityLogger" and add the following code in its constructor:
use Illuminate\Auth\AuthManager;
use Illuminate\Contracts\Config\Repository;
use Spatie\Activitylog\ActivityLogStatus;
use Spatie\Activitylog\ActivityLogger as SpatieActivityLogger;
class ActivityLogger extends SpatieActivityLogger
{
    public function __construct(AuthManager $auth, Repository $config, ActivityLogStatus $logStatus)
    {
        parent::__construct($auth, $config, $logStatus);

        // Your logic if the causer is null
        if (empty($this->causedBy)) {
            // do something when causer is null.
        }
    }
}
2. Create a new ActivitylogServiceProvider which will extend "Spatie\Activitylog\ActivitylogServiceProvider" and register it in "config/app.php".
Add the following code in the register method of the newly created service provider:
use App\ActivityLogger;
use \Spatie\Activitylog\ActivitylogServiceProvider as SpatieActivitylogServiceProvider;
class ActivitylogServiceProvider extends SpatieActivitylogServiceProvider
{
    public function register()
    {
        $this->app->alias(ActivityLogger::class, \Spatie\Activitylog\ActivityLogger::class);
    }
}
This is tested, so you should get no issues.
@junaid-A-khan instead of the __construct() I would adjust the getActivity() method https://github.com/spatie/laravel-activitylog/blob/d5fd05eb9edcb2416ada4f1e026d47ee9f8706a3/src/ActivityLogger.php#L189-L200
This does set the init causer - so if you remove this line, or adjust the condition for when the causer is set, you have a null causer. The constructor doesn't have to be executed for every activity log because it's a bound service which you could cache in a variable.
@Gummibeer yes, the parent construct call can be removed and I'm sorry, my bad, as I tested it on an old Laravel 5.7 which has v2.8.
I am going to update previous answer.
Thanks
Because at the moment it's impossible to set the causer to null - by default it's loaded from the current auth. Is an anonymous() method wanted which forces the activity to have no causer?
activity()
->anonymous()
->log('my log message');
Doing this for a whole runtime could belong to:
#503
#521
#560
Perhaps we can use:
activity()
->causedByAnonymous()
->log('my log message');
or
activity()
->causedBy('SYSTEM')
->log('my log message');
or
activity()
->causedBy(ActivityCauserInterface)
->log('my log message');
I had a look today at a possible solution introducing an AnonymousCauser model which should function as a "null object". Therefore, I would propose the following PR: #603 , though I have some doubts about whether this approach is suitable.
https://github.com/spatie/laravel-activitylog/releases/tag/3.9.0
|
gharchive/issue
| 2019-07-19T03:17:31
|
2025-04-01T04:35:55.623995
|
{
"authors": [
"EMediaAndroid",
"Gummibeer",
"Jhnbrn90",
"junaid-A-khan",
"shanecp"
],
"repo": "spatie/laravel-activitylog",
"url": "https://github.com/spatie/laravel-activitylog/issues/567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1017272713
|
Snapshot location
Hi, it would be nice if the snapshot location were inside the test directory, just like the base snapshot.
By default, snapshots are stored in a snapshots directory relative to the test class.
thank you
Already answered by @freekmurze in the private repo at https://testing-laravel.com/.
|
gharchive/issue
| 2021-10-06T02:28:04
|
2025-04-01T04:35:55.646078
|
{
"authors": [
"lloricode"
],
"repo": "spatie/pest-plugin-snapshots",
"url": "https://github.com/spatie/pest-plugin-snapshots/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1582083019
|
ShipIt MacOS M1 taking 100% CPU
This isn't going to be terribly useful because I don't have more details to provide you. But I just noticed on my M1 that my battery was draining like crazy. Opened up the Activity Monitor and saw that a process named ShipIt was taking 100% CPU.
Possibly linked: https://github.com/vector-im/element-desktop/issues/647
I inspected the process further and saw many references to Spatie/Ray.
I then forced-quit the process and Ray opened up immediately after. I had just installed a Ray update.
So I'm just reporting this so you can keep an eye out, maybe you have more insight as to what is going on. It may have been related to the update process, unsure.
Same issue on M1 Max
I have the same issue on my Mac - in this case an M2 - and it has happened to me several times. I'm noticing that my Mac is getting quite hot and draining the battery more than usual, and when I check the processes I see that Ray is pinning the CPU at 100%.
Also often experience very high Ray CPU usage on my M2 even when nothing is logging.
same here.
This issue has been open for quite some time, and we are happy to let you know that we are working on version 3.0. In this version, we have revisited Ray and are refactoring the application without removing any mandatory features. Among other things, we are focusing on the performance issues Ray seems to cause on various systems. We hope to release this version to the public soon and resolve the performance issues across different systems! Thank you for your patience regarding this issue.
Mac Mini M1, running Sonoma 14.6.1
Ray 2.8.1
Every day I come to the machine and find that ShipIt's CPU usage fills the Activity Monitor. Stopping Ray brings everything back to normal.
I'm facing the same issue. I put my M2 Pro Mac mini to sleep and turn the displays off. The next day I accidentally touched it and it was hot - normally I feel cold metal. The Ray process consumes a significant amount of CPU power for many hours.
Ray 2.8.1
macOS Sequoia 15.1 (24B83)
|
gharchive/issue
| 2023-02-13T10:32:41
|
2025-04-01T04:35:55.652673
|
{
"authors": [
"grafmouse",
"keithbrink",
"linushstge",
"milandicic",
"rishabkapadia",
"sebastienhenau",
"snapey",
"vesper8"
],
"repo": "spatie/ray",
"url": "https://github.com/spatie/ray/issues/777",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1605855484
|
Don't return 400 when read-only property is provided
Fixes #942
No longer return 400 if a read-only property is provided, as discussed in the issue. We still raise an error if write-only properties are included in the response and response validation is enabled.
I also changed how read-/write-only works in combination with required (see the schema sketch after this list). Previously, required would not be overwritten by read-/write-only. Now we just follow the spec to the letter:
required and read-only: must be included but must be ignored by the server
required and write-only: impossible to achieve, but I also don't see how this combination could make sense
read-only: may be included but must be ignored by server
write-only: must not be included by server
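For reference, a minimal OpenAPI schema sketch (illustrative only, not taken from the PR) showing those combinations:

components:
  schemas:
    Pet:
      type: object
      required:
        - id          # required + readOnly: must be included, but ignored by the server
        - name
      properties:
        id:
          type: integer
          readOnly: true    # may be provided on write; no longer rejected with a 400
        name:
          type: string
        secret:
          type: string
          writeOnly: true   # must never be included in a response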
Pull Request Test Coverage Report for Build 4308996494
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.02%) to 92.347%
Totals (change from base Build 4270694070): +0.02%
Covered Lines: 3258
Relevant Lines: 3528
💛 - Coveralls
|
gharchive/pull-request
| 2023-03-01T23:35:26
|
2025-04-01T04:35:55.674460
|
{
"authors": [
"RobbeSneyders",
"coveralls"
],
"repo": "spec-first/connexion",
"url": "https://github.com/spec-first/connexion/pull/1655",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924469745
|
Dfn panel changes for linking syntax
This PR adds a "Possible linking syntaxe(es)" section to dfn panels.
The copy icon is perhaps overkill.
The cursor for the dfn is changed to be a 'help' cursor, since the dfn is not a link to anything but the dfn panel.
There are also a couple css changes regarding dfn panels in general, to maybe fix some whitespace issues, depending on what was intended.
Avoid wrapping after section numbers.
No ellipsis needed most of the time.
Multiple reference numbers no longer wrap (e.g. (2) (3) (4))
Here is what it looks like now.
This PR fixes #1319
|
gharchive/pull-request
| 2023-10-03T16:07:16
|
2025-04-01T04:35:55.677106
|
{
"authors": [
"dlaliberte"
],
"repo": "speced/bikeshed",
"url": "https://github.com/speced/bikeshed/pull/2685",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1027342379
|
🌙 Dark mode
low prio obv but i'm sad speckle guide doesn't have it 😿
Interested - can you please assign me?
Hi @jsdbroughton
I am able to start the server for speckle-docs, but the task is very difficult for me (Vue) :( So I'm un-assigning myself.
gharchive/issue
| 2021-10-15T11:16:48
|
2025-04-01T04:35:55.680416
|
{
"authors": [
"izzylys",
"nooras"
],
"repo": "specklesystems/speckle-docs",
"url": "https://github.com/specklesystems/speckle-docs/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1334424819
|
Grasshopper Connector - Handling TextToNative() conversion of Speckle.Objects.Other.Text
Prerequisites
[x] I read the contribution guidelines
[x] I checked the documentation and found no answer.
[x] I checked existing issues and found no similar issue.
[x] I checked the community forum for related discussions and found no answer.
[x] I'm reporting the issue to the correct repository (see also speckle-server, speckle-sharp, specklepy, speckle-docs, and others)
What package are you referring to?
ConnectorGrasshopper
Describe the bug
Speckle.Objects.Other.Text is not converted when deconstructing a Speckle Block in Grasshopper.
To Reproduce
Send a Rhino block from Rhino to Speckle
Receive block from Speckle in Grasshopper via connector
Deconstruct Speckle Object
Retrieve Blocks in Stream
Deconstruct blocks
Access Geometry property
Text objects remain unconverted
Expected behavior
Convert the text objects to TextTags in Grasshopper, or just as plain text strings.
Screenshots
System Info
If applicable, please fill in the below details - they help a lot!
Desktop (please complete the following information):
OS: Windows 10
Failure Logs
Additional context
Proposed Solution (if any)
Add preprocessor directive and dedicated conversion method on converter:
https://github.com/specklesystems/speckle-sharp/blob/b49fe366d75bfac5c704cabc8294c04295d2d6c0/Objects/Converters/ConverterRhinoGh/ConverterRhinoGhShared/ConverterRhinoGh.cs#L665
Optional: Affected Projects
Hey @d3ssy!
We don't convert text entities because Grasshopper does not have a direct translation of what Text actually does, which would lead to quite a lossy conversion.
What would your expected output of this conversion be in GH native types? Just the text?
Since our Text class also holds positioning information, we decided not to provide a direct conversion, and instead allow users to expand the text object and pick the information they need.
This may not be "ideal", and If you've got suggestions on how this should work we're happy to hear them! 😄
Could you move text (and others) to an additional output Other when the input to the DSO gh component is a block? All outputs in Other would not be converted.
Hmm, that would be tricky, as it would be effectively changing the structure of the object being expanded, and this could conflict with an actual property called Other that a user may have created.
If you want to filter out non-converted objects I'd say just plug the output into a Geometry node and then filter by null? Or maybe I'm missing something in your use-case :)
Closing this as we're not going to handle Text objects differently in Grasshopper for now.
If there is a need to be able to distinguish between what is a Speckle object and what is not, we could instead make the parameter GH_SpeckleBase public, so you could plug in a random list of objects, and anything that is not a Base object (i.e. anything that was not converted in your case) would be null.
If you feel like this would work for you, feel free to open a separate issue for this. Or reopen this one if you still think this is a priority for you :)
@d3ssy made a separate issue for that ☝🏼, should be quite a quick fix so it may go out on our next release 2.10.
Thanks for the feedback!
|
gharchive/issue
| 2022-08-10T10:38:53
|
2025-04-01T04:35:55.693365
|
{
"authors": [
"AlanRynne",
"d3ssy"
],
"repo": "specklesystems/speckle-sharp",
"url": "https://github.com/specklesystems/speckle-sharp/issues/1495",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
586483237
|
$ optimization
Following strategy of fast-on-load, and O(1) of getElementBy*:
global all-mutations tracked by default
for each mutated element we have to find corresponding matched rules (via traits)
2.0. we can reduce removed elements → O(1) by fast-on-load method: we assign unique rules classes to elements and for each removed node check self classList.contains and getElementsByClassName for all nested children.
2.1. for added nodes - should intersect with the set of rules.
2.2. for attrib nodes
Bruteforce is O((nodes + children) * rules) = O(n * m).
Animevent/transition-based magic is unknown + FOUC, although elegantly solves 2.1 and 2.2.
Mb that's not a big deal, considering animation is just another behavioral type of aspect... Also that encourages developer to think about style of blank content first. Trouble comes mutating the content - there's definite FOUC.
Hybrid custom-elements / getElementBy* / animevent technique? + for directly accessed selectors it's possible to do sync init via getElementBy*. Even for combined selectors like getElementById().getElementsByClassName() (have to figure out O(1) combinations). But for attribute selectors - only animevent, since these aspects are considered 2nd priority, same as animations, and run after the main init. That makes user-sense that [hidden] or alike selectors must not serve as style-setters, otherwise that will cause FOUC.
Selector-set with traits comparison.
For all rules we create trait indexes (id, class, name, attribute) - many of them exist natively via getElements*. For all added/attr-changed nodes (O(n)) we figure out indexable traits (id, name, class, attribute) - (O(c)). For each element we find matched traits and for each trait we test against rules, so instead of testing all rules we test only the possible ones, that is O(nodes * c) = O(n). Internal nodes may also have matches; selector-set does querySelectorAll for all registered rules and for each element matches and tests against the newly added nodes = O(m * c), so that is O(n) + O(m). (A sketch of this trait indexing follows below.)
Technique likely should be mixed indeed, but the core is:
unique classes for matched rules to easier remove
fast-on-load way to match simple class/id/tagName/name selectors O(nodes * c = rules)
combined selectors with simple tokens - simple + filter-match = O(nodes * c * 2 = rules)
purely attributive/non-simple selectors are done as animevent (or worst case querySelector?)
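A minimal JS sketch of the trait-index idea above (simplified; not spect's actual implementation - names are illustrative):

const byTag = new Map(), byClass = new Map(), byId = new Map();

const add = (map, key, rule) =>
  (map.get(key) || map.set(key, []).get(key)).push(rule);

function indexRule(rule) {
  // Index by the cheapest trait of the rightmost compound selector.
  const last = rule.selector.split(/[\s>+~]+/).pop();
  let m;
  if ((m = last.match(/#([\w-]+)/))) add(byId, m[1], rule);
  else if ((m = last.match(/\.([\w-]+)/))) add(byClass, m[1], rule);
  else add(byTag, last.toLowerCase(), rule);
}

function matchedRules(el) {
  // O(traits of el) candidate lookup instead of testing every registered rule.
  const candidates = [
    ...(byTag.get(el.tagName.toLowerCase()) || []),
    ...(el.id ? byId.get(el.id) || [] : []),
    ...[...el.classList].flatMap(c => byClass.get(c) || []),
  ];
  return candidates.filter(r => el.matches(r.selector));
}

// indexRule({ selector: '#app .item', handler: initItem });
// a MutationObserver then calls matchedRules(node) for each added element.

This keeps per-mutation work proportional to the element's own traits rather than to the full rule set.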
Done in stage3 #192
|
gharchive/issue
| 2020-03-23T20:01:26
|
2025-04-01T04:35:55.700158
|
{
"authors": [
"dy"
],
"repo": "spectjs/spect",
"url": "https://github.com/spectjs/spect/issues/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1263432816
|
InputNormalization is not cuda compatible?
The code below ends up in speechbrain.processing.features.py:InputNormalization:
from speechbrain.pretrained import SpeakerRecognition
import librosa
import torch
path = "source1hat.wav"
encoder = SpeakerRecognition.from_hparams(source="spkrec-ecapa-voxceleb")
device="cuda:0"
encoder.to(device)
signal, fs = librosa.load(path, sr=16000)
inp = torch.tensor(signal).to(device).unsqueeze(0)
emb = encoder.encode_batch(inp, normalize=True)
and get the following error
*** RuntimeError: Expected all tensors to be on same device, but found at least two devices, cuda:0 and cpu!
Then I found that InputNormalization contains the following, which is not CUDA compatible:
self.glob_mean = torch.tensor([0])
self.glob_std = torch.tensor([0])
Hi @BuxianChen
it might be because of librosa to load audios.
I created a pretrained model from encoder = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
then signal = encoder.load_audio("tests/samples/ASR/spk1_snt1.wav")
returns the signal as a tensor that is on the same device as the model.
Does this help with this part of your error message?
but found at least two devices, cuda:0 and cpu
Maybe the snippet wasn't clean enough, but the key point here is encoder.encode_batch(inp, normalize=True) - setting normalize to True causes the problem. Cleaner code follows, which gets the same error:
but found at least two devices, cuda:0 and cpu
from speechbrain.pretrained import SpeakerRecognition
path = "source1hat.wav"
encoder = SpeakerRecognition.from_hparams(source="spkrec-ecapa-voxceleb")
device="cuda:0"
encoder.to(device)
inp = encoder.load_audio(path)
emb = encoder.encode_batch(inp, normalize=True)
Hello @BuxianChen,
You should have passed the device as an argument to SpeakerRecognition.from_hparams(source="spkrec-ecapa-voxceleb"), like this:
device="cuda:0"
encoder = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", run_opts={"device": device})
Now everything works as expected!
The full working example below:
from speechbrain.pretrained import SpeakerRecognition
path = "source1hat.wav"
device="cuda:0"
encoder = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", run_opts={"device": device})
inp = encoder.load_audio(path)
emb = encoder.encode_batch(inp, normalize=True)
Best.
|
gharchive/issue
| 2022-06-07T14:52:37
|
2025-04-01T04:35:55.714354
|
{
"authors": [
"Adel-Moumen",
"BuxianChen",
"anautsch"
],
"repo": "speechbrain/speechbrain",
"url": "https://github.com/speechbrain/speechbrain/issues/1434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|