Dataset columns:
- added: string (date), 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- created: timestamp[us] (date), 2001-10-09 16:19:16 to 2025-01-01 03:51:31
- id: string (length 4 to 10)
- metadata: dict
- source: string (2 classes)
- text: string (length 0 to 1.61M)
2025-04-01T06:38:20.551982
2022-07-23T18:02:38
1315714292
{ "authors": [ "dnillovna", "skalkin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5141", "repo": "datagrok-ai/public", "url": "https://github.com/datagrok-ai/public/issues/831" }
gharchive/issue
HitTriage Workflow:
- [ ] Ingestion
- [ ] Enrichment
- [ ] Filtering
- [ ] Submission

This issue has been mirrored in Jira: https://reddata.atlassian.net/browse/GROK-14812
This issue has been mirrored in Jira: https://reddata.atlassian.net/browse/GROK-16071
2025-04-01T06:38:20.561746
2021-07-07T06:59:28
938556502
{ "authors": [ "SwadX", "anshbansal", "hsheth2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5142", "repo": "datahub-project/datahub", "url": "https://github.com/datahub-project/datahub/issues/2839" }
gharchive/issue
The sqlalchemy recipe for DB2 source is not respecting the schema/table allow/deny pattern

Describe the bug
When using a sqlalchemy-based recipe to fetch metadata from a Db2 source, the schema/table pattern information provided is not being used. Metadata for all tables is being fetched.

To Reproduce

```yaml
source:
  type: sqlalchemy
  config:
    connect_uri: "db2+ibm_db://:<my-password@host:port/"
    platform: "DB2-ZOS"
    table_pattern:
      allow:
        - "schema".<table_name>"
sink:
  type: "console"
```

Additional context
I am using the python:3.8.10 docker image with the ibm_db and ibm_db_sa python libraries installed.

The allow/deny patterns use regexes - could that possibly be the issue? It's hard to diagnose given that the recipe above has been anonymized, but logs when run with `datahub --debug ingest ...` would be helpful.

Hey @SwadX - did you figure this one out?

Closing due to inactivity. Please open a new issue if the issue persists with the latest releases.
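For context on the regex remark above, here is a minimal Python sketch of how allow/deny table patterns are typically applied (the helper and pattern names are assumptions for illustration, not DataHub's actual implementation). A literal pattern containing quotes and angle brackets, like the anonymized one above, would never match a plain SCHEMA.TABLE name:

```python
import re

def is_allowed(name: str, allow: list[str], deny: list[str]) -> bool:
    """Return True if `name` matches any allow regex and no deny regex."""
    if any(re.match(p, name) for p in deny):
        return False
    return any(re.match(p, name) for p in allow)

# Quotes and angle brackets in a pattern are matched literally by the
# regex engine, so a plain regex over "SCHEMA.TABLE" is what works:
print(is_allowed("MYSCHEMA.ORDERS", allow=[r"MYSCHEMA\..*"], deny=[]))  # True
print(is_allowed("OTHER.ORDERS", allow=[r"MYSCHEMA\..*"], deny=[]))     # False
```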
2025-04-01T06:38:20.564239
2022-08-26T08:39:23
1351947629
{ "authors": [ "hugwi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5143", "repo": "datahub-project/datahub", "url": "https://github.com/datahub-project/datahub/issues/5735" }
gharchive/issue
Change documentation for curl command that creates a user group

Describe the bug
When using the curl command to create a user group taken from the datahub documentation, it fails as raised in #5161. The issue was identified (corpUser -> corpuser) but the documentation was never updated. The correct command should be:

```
curl 'http://localhost:8080/entities?action=ingest' -X POST --data '{
  "entity": {
    "value": {
      "com.linkedin.metadata.snapshot.CorpGroupSnapshot": {
        "urn": "urn:li:corpGroup:dev",
        "aspects": [
          {
            "com.linkedin.identity.CorpGroupInfo": {
              <EMAIL_ADDRESS>
              "admins": ["urn:li:corpuser:jdoe"],
              "members": ["urn:li:corpuser:datahub", "urn:li:corpuser:jdoe"],
              "groups": []
            }
          }
        ]
      }
    }
  }
}'
```

I can probably change this myself, however adding it here in case I'd forget.
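The same ingest call can also be issued programmatically; a minimal Python sketch against the endpoint from the issue (payload abbreviated, email field omitted):

```python
import requests

payload = {
    "entity": {
        "value": {
            "com.linkedin.metadata.snapshot.CorpGroupSnapshot": {
                "urn": "urn:li:corpGroup:dev",
                "aspects": [
                    {
                        "com.linkedin.identity.CorpGroupInfo": {
                            # Note the lowercase "corpuser" in the URNs,
                            # which was the original bug.
                            "admins": ["urn:li:corpuser:jdoe"],
                            "members": ["urn:li:corpuser:datahub",
                                        "urn:li:corpuser:jdoe"],
                            "groups": [],
                        }
                    }
                ],
            }
        }
    }
}

resp = requests.post("http://localhost:8080/entities?action=ingest", json=payload)
resp.raise_for_status()
```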
2025-04-01T06:38:20.569169
2024-12-02T10:29:48
2711387391
{ "authors": [ "acrylJonny", "hsheth2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5144", "repo": "datahub-project/datahub", "url": "https://github.com/datahub-project/datahub/pull/11997" }
gharchive/pull-request
feat(ingestion/sql-common): add column level lineage for external tables

Checklist
- [ ] The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
- [ ] Links to related issues (if applicable)
- [ ] Tests for the changes have been added/updated (if applicable)
- [ ] Docs related to the changes have been added/updated (if applicable). If a new feature has been added, a Usage Guide has been added for the same.
- [ ] For any breaking change/potential downtime/deprecation/big changes, an entry has been made in Updating DataHub

The SqlParsingAggregator has an add_known_lineage_mapping that generates CLL based on the schema of the downstream. Ideally we'd centralize on using that as the external lineage mechanism.

Long term, I want to move sql_common.py to use the SqlParsingAggregator instead of the older SqlParsingBuilder. Internal ticket tracking that: https://linear.app/acryl-data/issue/ING-779/refactor-move-sql-common-to-use-sqlparsingaggregator

In the short term, I'm ok with having this CLL generation logic, although all the complexity of the simplify_field_path logic worries me a bit on this PR.

Now that https://github.com/datahub-project/datahub/pull/12220 has been merged, we can make this implementation a bit cleaner.
2025-04-01T06:38:20.572460
2023-02-09T11:44:59
1577739293
{ "authors": [ "jjoyce0510", "looppi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5145", "repo": "datahub-project/datahub", "url": "https://github.com/datahub-project/datahub/pull/7293" }
gharchive/pull-request
feat: add chart entities to similar browsepath as dashboards

If the ingested workspaces have multiple reports in them, the result is usually a ton of ingested chart entities. Although the common use case might not be to find a report or a report's datasource by finding the chart first, I think it makes sense to extend to charts the same browsepath behavior that the current implementation has for dashboards.

Checklist
- [ ] The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
- [ ] Links to related issues (if applicable)
- [ ] Tests for the changes have been added/updated (if applicable)
- [ ] Docs related to the changes have been added/updated (if applicable). If a new feature has been added, a Usage Guide has been added for the same.
- [ ] For any breaking change/potential downtime/deprecation/big changes, an entry has been made in Updating DataHub

Thanks for the PR @looppi! We are reviewing on our side. On the surface, looking good.
2025-04-01T06:38:20.574728
2023-01-19T22:56:50
1550069676
{ "authors": [ "CBroz1", "kabilar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5146", "repo": "datajoint/element-array-ephys", "url": "https://github.com/datajoint/element-array-ephys/pull/125" }
gharchive/pull-request
Adjusting pyopenephys requirement for pypi publication

In this PR, I update the pyopenephys requirement to reflect our merged PR and pypi publication.

Thanks @CBroz1! Now that requirements.txt is fixed, please make a release of version 0.2.3 so that we can get an updated version of element-array-ephys published to PyPI.
2025-04-01T06:38:20.635891
2021-10-19T13:23:26
1030305270
{ "authors": [ "arthur-albuquerque", "datalorax" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5147", "repo": "datalorax/equatiomatic", "url": "https://github.com/datalorax/equatiomatic/issues/204" }
gharchive/issue
"Error: {equatiomatic} only supports models where each random effect has a corresponding fixed effect." Hi, I am having an issue trying to use equatiomatic with lme4, please see below: d = dplyr::tribble( ~study, ~treat, ~n, ~event, ~control, 1, 0, 377, 113, 1, 1, 1, 377, 128, 0, 2, 0, 40, 4, 1, 2, 1, 41, 6, 0, 3, 0, 100, 20, 1, 3, 1, 101, 22, 0, 4, 0, 1010, 201, 1, 4, 1, 1001, 241, 0 ) m1 = lme4::glmer( cbind(event, n - event) ~ 1 + factor(treat) + (control + treat - 1|study), data=d, family=binomial(link="logit")) summary(m1) equatiomatic::extract_eq(m1) I get this error message: Error: {equatiomatic} only supports models where each random effect has a corresponding fixed effect. You specified the following variables as randomly varying without including the corresponding fixed effect: control, treat Would it be possible to add support for this type of model? Thanks! Hi! I see. The model was extracted from this article, section: "Model 6: the “Van Houwelingen bivariate” model". Honestly, I am an inexperienced medical student, so I am not sure I can be of any help, but the article above describes the model. Thanks, I'll take a look and see if I can figure it out.
2025-04-01T06:38:20.682962
2017-03-24T08:56:15
216705598
{ "authors": [ "andrewstevenson", "iyuq", "stheppi" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5148", "repo": "datamountaineer/stream-reactor", "url": "https://github.com/datamountaineer/stream-reactor/issues/150" }
gharchive/issue
influxdb duplicate field stored

I write a kcql like `insert into record select * from record WITHTAG (ptype, pid)`. When I run this and look at the data in influxdb, there exist two fields ptype and ptype_1, and pid and pid_1; ptype and pid are tags, ptype_1 and pid_1 are not. Why should the duplicates ptype_1 and pid_1 be stored? Is there any way to avoid this?

We don't duplicate the field names. We will investigate the issue.

Thank you. I use the stream-reactor-0.2.4-3.1.1.tar.gz package, with the following config in the confluent connect config:

```
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```

kafka-version: kafka_2.11-<IP_ADDRESS>
jre: 1.8.0
influxdb: 1.2.0

@iyuq I can confirm we don't actually duplicate the item. I have been trying to get to the influxdb code where they would do this but I can't find it. But let's go over what you want to do, because I have a pretty good idea what's happening. You have a row with columns ptype, pid and then you want to add the same names as tags. Think of a database query joining two tables and returning the same field, table1.A and table2.A. If you run that in an rdbms you will always see the second one returned as A_1. So here they would most likely do the same thing. So how about you check `show tag values ...` to only look at the tags. To be honest, I don't see the value of adding tags which are a copy of the fields already: why would you duplicate data, plus in this case it is not a tag. If you want to have those fields as tags you do: `SELECT * FROM record IGNORE ptype, pid WITHTAG(ptype, pid)`. Hope this helps.

I have tried this, but got the following error instead:

```
[2017-03-27 13:10:56,146] INFO InfluxSinkConfig values:
    connect.influx.connection.database = mydb
    connect.influx.connection.password = [hidden]
    connect.influx.connection.url = http://localhost:8086
    connect.influx.connection.user = root
    connect.influx.consistency.level = ALL
    connect.influx.error.policy = THROW
    connect.influx.max.retires = 20
    connect.influx.retention.policy = autogen
    connect.influx.retry.interval = 60000
    connect.influx.sink.kcql = INSERT INTO record SELECT * FROM record IGNORE ptype, pid WITHTAG (ptype, pid) WITHTIMESTAMP time
 (com.datamountaineer.streamreactor.connect.influx.config.InfluxSinkConfig:180)
[2017-03-27 13:10:56,146] INFO Sink task WorkerSinkTask{id=influx-record-sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:222)
[2017-03-27 13:10:56,251] INFO Discovered coordinator localhost:9092 (id:<PHONE_NUMBER> rack: null) for group connect-influx-record-sink. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2017-03-27 13:10:56,251] INFO Revoking previously assigned partitions [] for group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:333)
[2017-03-27 13:10:56,252] INFO (Re-)joining group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2017-03-27 13:10:56,254] INFO Successfully joined group connect-influx-record-sink with generation 363 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2017-03-27 13:10:56,255] INFO Setting newly assigned partitions [record-0] for group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:225)
[2017-03-27 13:10:56,259] ERROR Task influx-record-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:404)
java.lang.IllegalArgumentException: ptype can't be found on the values list:ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device
    at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1$$anonfun$apply$4.apply(TagsExtractor.scala:52)
    at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1$$anonfun$apply$4.apply(TagsExtractor.scala:52)
    at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
    at scala.collection.AbstractMap.getOrElse(Map.scala:59)
    at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1.apply(TagsExtractor.scala:52)
    at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1.apply(TagsExtractor.scala:49)
    at scala.collection.immutable.Stream.foldLeft(Stream.scala:610)
    at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$.fromMap(TagsExtractor.scala:49)
    at com.datamountaineer.streamreactor.connect.influx.writers.InfluxBatchPointsBuilderFn$$anonfun$6.apply(InfluxBatchPointsBuilderFn.scala:126)
    at com.datamountaineer.streamreactor.connect.influx.writers.InfluxBatchPointsBuilderFn$$anonfun$6.apply(InfluxBatchPointsBuilderFn.scala:117)
    at scala.Option.map(Option.scala:146)
```

@iyuq ptype is not a field in the message schema: `ptype can't be found on the values list:ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device`

@andrewstevenson @stheppi I am not using the schema message, I am using the json message without schema. I'm sure it's a bug now. I have the following test using the influx-java-client:

```java
Point point1 = Point.measurement("cpu")
    .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
    .tag("atag", "a")
    .addField("atag", "a")
    .addField("idle", 90L)
    .addField("user", 9L)
    .addField("system", 1L)
    .build();
```

and I got an atag and an atag_1 in the cpu measurement, so you guys must be adding the tag field both via .tag and .addField.

We are doing as you instructed via KCQL, nothing more, nothing less. First: "SELECT *" => this picks up all the fields and will result in an addField. Then you say "withtag(ptype)" => this translates to addTag. So the code does what it was instructed to in the KCQL. Therefore it is not a bug. What I suggested with ignoring fields should be the way to go in your case, but it seems we have to relax the validation rules (which I thought we did already). If you know all your fields you can configure KCQL like: `Insert into record select field1, field2, ... from record withtag(ptype, pid)`

But the code will throw an error when the tag fields are not in the select field list; that's the problem, as I said before. So when I use `select * from record withtag(ptype, pid)`, I get two duplicate fields; when I use `select * from record IGNORE ptype, pid withtag(ptype, pid)`, just the same as `Insert into record select field1, field2, ... from record withtag(ptype, pid)`, I get the error `java.lang.IllegalArgumentException: ptype can't be found on the values list:ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device`. In either case I can't get what I actually wanted. So if this is a bug, the field check that throws the error must be the bug. @stheppi

@iyuq Are you sure that you have ptype consistently in your json for every message?

@iyuq We'll certainly relax the checking of ignored columns being present in the message; it shouldn't be an error but a warning.

@andrewstevenson yeah, I'm pretty sure about that, for when I add ptype to the select fields, the error disappears. I will have someone who knows scala have a look at the code. Thank you all for your help.

@andrewstevenson you were right all along. The stack and error clearly show the ptype is not present in the json payload!! I am updating the code to avoid the error and just add a big warning.

@iyuq: you would need to take the latest and build it yourself before we release the next version. The code has been changed to avoid throwing exceptions if the tag is not present (like in your case).

@stheppi @andrewstevenson thank you!

@stheppi I built the new version and found out it is not what I need either. The influx tag is like an indexed field: the new version does not throw an error, but it also cannot insert the tag into influxdb. I get the following warnings:

```
[2017-03-28 17:26:17,236] WARN Tag can't be set because field:ptype can't be found or is null on the incoming value. topic=record;partition=0;offset=38 (com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$:79)
[2017-03-28 17:26:17,237] WARN Tag can't be set because field:pid can't be found or is null on the incoming value. topic=record;partition=0;offset=38 (com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$:79)
```

I think the correct way is to extract the fields in the select list joined with the tag fields from the message, then add the select fields excluding the tag fields to the builder as fields, and add the tag fields to the builder as tags.

I am not following what you said at all. Let me explain kcql because I think it adds value. `WITHTAG field1` => means the code looks at the payload for field1 and adds it to the influxdb point as a tag. From your message, field1 doesn't exist in the kafka message value. Are those two fields, ptype and pid, part of the kafka message key? If so, we have no support for such extraction at the moment.

@stheppi ptype and pid are just the same as the other fields; they are field keys of the kafka message json value, not the kafka message key.

Well, I can tell you we pick them up if they are present in the json message. It looks like they are not. Look at the 38th message on your topic and you will see they are not there.

Sorry to tell you that I ran into another problem: WITHTIMESTAMP doesn't take effect and the system time is used instead.

It is the order in KCQL: `SELECT * FROM $topic IGNORE ptype, pid WITHTIMESTAMP time WITHTAG (ptype, pid)`

@stheppi Thank you!
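The resolution above boils down to a simple split: selected names become influx fields, WITHTAG names become tags, and IGNORE removes names from the field set, with missing tag keys downgraded to a warning. A self-contained Python sketch of that semantics (the names are illustrative; this is not stream-reactor code):

```python
def split_point(record: dict, ignore: set[str], with_tag: set[str]):
    """Mimic KCQL `SELECT * FROM t IGNORE ... WITHTAG (...)` semantics."""
    # Tags come from the payload; a missing tag key is skipped with a warning.
    tags = {}
    for name in sorted(with_tag):
        if record.get(name) is None:
            print(f"WARN tag {name!r} can't be found or is null on the incoming value")
        else:
            tags[name] = str(record[name])
    # Fields are everything selected, minus the ignored names, so the same
    # key never lands as both a tag and a field (no ptype_1/pid_1 duplicates).
    fields = {k: v for k, v in record.items() if k not in ignore}
    return tags, fields

record = {"ptype": "view", "pid": 42, "idle": 90, "user": 9}
tags, fields = split_point(record, ignore={"ptype", "pid"}, with_tag={"ptype", "pid"})
print(tags)    # {'pid': '42', 'ptype': 'view'}
print(fields)  # {'idle': 90, 'user': 9}
```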
2025-04-01T06:38:20.689207
2022-10-19T15:54:37
1415228039
{ "authors": [ "BillBuilt", "akirill0v", "gregwebs", "ruslan-kurbanov-jr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5149", "repo": "datanymizer/datanymizer", "url": "https://github.com/datanymizer/datanymizer/issues/189" }
gharchive/issue
Problem with copying generated columns

Anonymizer throws an error when dealing with a table with generated columns:

```
Error: db error: ERROR: column "tsv" is a generated column
DETAIL: Generated columns cannot be used in COPY.
```

pg_dump on its own works fine. postgresql and pg_dump version is 12.12; pg_datanymizer version is 0.6.0.

@ruslan-kurbanov-jr Thanks, it's an interesting problem... We'll research it as soon as possible. The main approach we use is to replace the COPY stage in the dump... We'll see how pg_dumper handles it when working with similar columns.

I just ran into this problem as well. Is there a solution? pg_dump (PostgreSQL) 16.0, pg_datanymizer version 0.6.0.

A workaround approach for this could be to create a view of the table and dump that instead (using table filters). Do a find and replace on the dump file before importing to rename the view to the table name.

It seems like PG is supposed to work if the dump value for the generated column is default: https://stackoverflow.com/questions/64600614/restoring-pg-database-from-dump-fails-due-to-generated-columns

Tried adding the --inserts pg_dump argument but it still used COPY. I was hoping that since the error specifically mentioned COPY, using --inserts would allow it to work.

Thanks a lot! Is a new release going to be published?
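A rough sketch of the find-and-replace step from the view workaround above, assuming a view named users_anon was dumped in place of a table named users (both names are hypothetical):

```python
# Post-process a SQL dump: rename the dumped view back to the original
# table name before importing.
import re
from pathlib import Path

dump = Path("dump.sql").read_text()
# \b keeps us from touching identifiers that merely contain the view name.
patched = re.sub(r"\busers_anon\b", "users", dump)
Path("dump.patched.sql").write_text(patched)
```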
2025-04-01T06:38:20.697924
2019-08-07T12:51:08
477909763
{ "authors": [ "lazycoder9", "roman-dubrovsky" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5150", "repo": "datarockets/datarockets-style", "url": "https://github.com/datarockets/datarockets-style/pull/86" }
gharchive/pull-request
[Fix #85] use leading underscores in cached instance variable name

Before you submit a pull request, please make sure you have followed:

- [x] read and know items from the Contributing Guide
- [x] add a description of the problem you're trying to solve (short summary from related issue)
- [x] verified that cops are ordered by alphabet
- [x] add a note to the style guide docs (if needed)
- [x] add a note to the changelog file
- [x] the commit message contains the number of the related issue (if present) and the word Fix if this PR closes the related issue
- [x] squash all commits before submitting to review

@lazycoder9 please rebase your changes
2025-04-01T06:38:20.716126
2021-07-13T01:18:47
942588966
{ "authors": [ "Tomcli", "YiannisGkoufas" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5151", "repo": "datashim-io/datashim", "url": "https://github.com/datashim-io/datashim/pull/112" }
gharchive/pull-request
S3 bucket creation command is missing for archive handler

In the newer versions of the aws command and the S3 server, an S3 bucket needs to be created with a separate S3 API call, since the aws s3 cp command no longer creates a new bucket. Below are the quick fixes to make it work. Ideally we should make these commands configurable via a configmap, so when there are new changes to the S3 API we can update the configmap rather than building a new image every time.

@YiannisGkoufas can you review this? Thanks.

thanks @Tomcli looks good!
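The fix described, create the bucket explicitly and then copy, can be sketched with boto3 (the bucket and file names are placeholders; the actual handler shells out to the aws CLI):

```python
import boto3

s3 = boto3.client("s3")

# `aws s3 cp` no longer creates the destination bucket implicitly,
# so create it explicitly first, then upload the archive.
bucket = "my-archive-bucket"
s3.create_bucket(Bucket=bucket)
s3.upload_file("archive.tar.gz", bucket, "archive.tar.gz")
```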
2025-04-01T06:38:20.717696
2023-12-02T01:32:22
2021794529
{ "authors": [ "erichare", "hemidactylus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5152", "repo": "datastax/astrapy", "url": "https://github.com/datastax/astrapy/pull/137" }
gharchive/pull-request
fix OPS_API_RESPONSE union type to include lists

I inadvertently gave a very silly definition of OPS_API_RESPONSE, which did not include List[Any] as one of the types (the DevOps API does, as a matter of fact, return top-level lists for some calls, such as get_databases). Now I fixed it, and to satisfy the type checker the whole of the ops methods are, correctly, moved to the OPS_API_RESPONSE return type.

LGTM! :)
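For reference, a union of the shape described would look like this (the exact alias definition in astrapy may differ):

```python
from typing import Any, Dict, List, Union

# The DevOps API can return a top-level JSON object or a top-level list
# (e.g. get_databases), so both shapes belong in the union.
OPS_API_RESPONSE = Union[Dict[str, Any], List[Any]]
```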
2025-04-01T06:38:20.721160
2021-08-11T17:25:37
967122918
{ "authors": [ "HadesArchitect", "atifahsan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5153", "repo": "datastaxdevs/workshop-intro-to-cassandra", "url": "https://github.com/datastaxdevs/workshop-intro-to-cassandra/issues/282" }
gharchive/issue
[HW] Muhammad Atif Ahsan

Name: Muhammad Atif Ahsan
Email: <EMAIL_ADDRESS>
Linkedin Profile: https://www.linkedin.com/in/atifahsan/

Attach the homework screenshots below for both step II and step III:

Hey Muhammad Atif Ahsan, great job! Congrats! Here is your badge! https://api.badgr.io/public/assertions/SlhV6khfTzKnRot2Oexd8Q
2025-04-01T06:38:20.724785
2017-07-02T00:20:38
239980202
{ "authors": [ "dmpetrov", "efiop" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5154", "repo": "dataversioncontrol/dvc", "url": "https://github.com/dataversioncontrol/dvc/issues/103" }
gharchive/issue
push/pull cache for all branches

dvc sync [file] uploads/downloads a data snapshot for only the current branch. dvc sync -all [file] should upload/download all versions of the file(s).

I am not sure it is worth implementing, as there are usually more branches that don't even belong to you (i.e. various branches from origin, upstream, etc.), which makes pushing/pulling all branches counter-productive; in those rare cases when you indeed need to sync a few branches, you can easily do so by checking out those branches with git yourself. Actually, the same logic applies to dvc metrics, but there the cost of it was pretty low, so it was an easy choice to just implement it. Let's discuss this one later.

Moving to 0.9.8 for consideration.
2025-04-01T06:38:20.726656
2019-03-29T05:23:48
426822225
{ "authors": [ "curran" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5155", "repo": "datavis-tech/codemirror-ot", "url": "https://github.com/datavis-tech/codemirror-ot/issues/28" }
gharchive/issue
Throttle/group Ops

Currently each keystroke results in a fresh op. It would be more efficient if ops were batched by some time interval, so that quickly typing multiple characters in a single word results in a single op. The time interval in ms could be passed into the constructor, making this behavior opt-in.

Here's a working implementation of this feature: https://github.com/datavis-tech/codemirror-6-experiments/blob/master/packages/experiments/src/client/codeMirrorShareDBBinding.js#L31

Closing as ShareDB batches like this internally.
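A generic sketch of the proposed interval batching, written in Python for brevity (the real binding is JavaScript, and the class and parameter names here are made up for illustration):

```python
import threading, time

class OpBatcher:
    """Collect ops and flush them as one batch every `interval_ms`."""
    def __init__(self, flush, interval_ms=300):
        self.flush = flush                  # called with the list of pending ops
        self.interval = interval_ms / 1000.0
        self.pending = []
        self.lock = threading.Lock()
        self.timer = None

    def push(self, op):
        with self.lock:
            self.pending.append(op)
            if self.timer is None:          # first op in this window starts the clock
                self.timer = threading.Timer(self.interval, self._fire)
                self.timer.start()

    def _fire(self):
        with self.lock:
            ops, self.pending, self.timer = self.pending, [], None
        self.flush(ops)

batcher = OpBatcher(flush=lambda ops: print("flush", ops), interval_ms=200)
for ch in "hello":
    batcher.push({"insert": ch})
time.sleep(0.5)  # "hello" arrives as a single flushed batch, not five ops
```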
2025-04-01T06:38:20.727501
2017-08-02T16:41:57
247452410
{ "authors": [ "alexsb", "mstreit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5156", "repo": "datavisyn/datavisyn.github.io", "url": "https://github.com/datavisyn/datavisyn.github.io/issues/13" }
gharchive/issue
Replace screenshots The screenshots aren't too great. They should look good. I've uploaded new ones to the GDrive folder 'Platform/Ordino Screenshots'
2025-04-01T06:38:20.732205
2018-07-26T13:44:35
344850506
{ "authors": [ "containscafeine", "kflynn" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5157", "repo": "datawire/ambassador", "url": "https://github.com/datawire/ambassador/issues/663" }
gharchive/issue
Automate upgrading envoy There is some build+test+push awesomeness at https://github.com/datawire/envoy/tree/datawire/extauth-build-automation/DATAWIRE that would be awesome to have automated. Let's do it :tm: Closing since I think @LukeShu managed to do this...
2025-04-01T06:38:20.735712
2019-08-06T19:12:34
477548492
{ "authors": [ "ankurpshah", "kflynn" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5158", "repo": "datawire/ambassador", "url": "https://github.com/datawire/ambassador/pull/1740" }
gharchive/pull-request
Adding max_request_headers_kb configuration support.

Description
Exposing the max_request_headers_kb envoy parameter through ambassador.

Related Issues
None

Testing
max_request_headers_kb configured in the Ambassador global configuration, e.g.:

```yaml
---
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  service_port: 4567
  max_request_headers_kb: 90
```

Todos
- [X] Tests
- [X] Documentation

Hey, thanks for this! A couple of questions:

First, this looks like it'll enforce a 60KB limit on headers globally for all Ambassador installations, which is a behavioral change that's probably not desirable. If the user doesn't specify a size, I think we shouldn't apply any limit.

Second, this is definitely one that needs a test -- maybe set the limit to 1KB, then send a request through with longer headers than that. Should the request fail? or do the headers get truncated? or... what?

Thanks again!
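The test the reviewer sketches, setting the limit to 1KB and sending longer headers, could look roughly like this (the URL and header name are placeholders, and the expected failure mode is exactly the open question above):

```python
import requests

# Assumes an Ambassador instance configured with max_request_headers_kb: 1
# reachable at this placeholder URL.
huge = {"X-Test-Padding": "x" * 4096}  # ~4KB header, well over a 1KB limit
resp = requests.get("http://ambassador.example.test/", headers=huge)
# The open question from the review: does the request fail outright
# (an error status), or do the headers get silently truncated?
print(resp.status_code)
```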
2025-04-01T06:38:20.753005
2024-12-26T20:54:37
2760137472
{ "authors": [ "demenech", "rufuspollock", "vit-zikmund" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5159", "repo": "datopian/giftless", "url": "https://github.com/datopian/giftless/pull/181" }
gharchive/pull-request
Dockerhub deprecation warning

Related to: https://github.com/datopian/giftless/issues/119

Changes
Added a "docker-prerun.sh" script to serve as the entry point of the Dockerfile. Right now it only echoes a deprecation warning in case the image is being pulled from Dockerhub, but it can be extended in the future for different deprecation warnings.

If the image is built with --build-arg IS_DOCKERHUB=true, the following warning can be seen upon running the image:

I have built the docker image at the point of this PR and pushed it to Dockerhub; please try it out if you can: https://hub.docker.com/repository/docker/datopian/giftless/tags/0.6.2/sha256-a7f53727881796de19c169899e7f9cb4d9e701803958855f52f8150c4d10f9b5

Future
If this PR is approved, I'll also push the same image with the "latest" tag and tweak the descriptions at the Dockerhub repo to flag that it's deprecated.

At first, I didn't want to have an "IS_DOCKERHUB" env var. I wanted something like "PRERUN_ARGS" so that the prerun script could be extended more easily, but I had some difficulties with that; perhaps something to revisit later on.

@athornton @vit-zikmund how does this look?

The warning seems legit, but the handling of tini is not right. My OCD also tells me not to introduce any runtime ENV var unless it's being used by the main code. Also my "overengineering gate" (which is a thing I'm starting to embrace fairly recently) tells me not to introduce new generic features unless it's apparent the extensibility is worth the loss of code simplicity. Working on a followup commit that would adhere to what I described ;)

Here's the update, please @demenech have a look. I took the liberty to use the same branch; we can revert/force push that commit if you don't like my solution. In retrospect, this move was rather presumptuous and I don't want to shadow anyone. Sorry. Next time, I'll rather use my own branch.

FYI the Dockerfile as a whole is pretty suboptimal for build re-runs and it's unnecessarily bloated, containing all the project files, where only a fraction is actually needed. I'd try at some improvements, but not before this PR is done.
2025-04-01T06:38:20.761449
2023-11-13T05:37:22
1989926194
{ "authors": [ "mohamedsalem401", "olayway", "rufuspollock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5160", "repo": "datopian/markdowndb", "url": "https://github.com/datopian/markdowndb/pull/48" }
gharchive/pull-request
Refactor code and simplify file processing

Summary of Changes
This pull request introduces significant improvements and refactoring to enhance the MarkdownDB functionality. The key modifications include:

- Class Breakdown: The MarkdownDB class has been refactored into two distinct classes, separating concerns for indexing and querying.
- Conversion to TypeScript Objects and SQL: The Markdown file processing has been optimized by converting MD files into TypeScript objects before transforming them into SQL data.
- Function Refactoring: A few smaller functions have undergone refactoring.

Also, not sure if parse.ts and markdownToObject need to be separate. They seem to have overlapping responsibilities. I'd merge these two. @mohamedsalem401 What do you think?

I'm not sure about it. The reason why I think they should be independent is that parseFile.ts parses the links and tags from a string source, irrespective of whether it's in a local file or not, while markdownToObject.ts is responsible for loading files from the local file system.

Yes, I agree, let's leave them separate.

@mohamedsalem401 Where are the tests? 😄

I believe this pull request (PR) includes numerous changes at the moment. Therefore, I plan to open a new PR specifically for the latest changes, including tests.

I think it would be better if we add tests to the same PR... Also, I wouldn't merge it into main. We don't want to publish a new version of the package until the whole refactoring is ready. (Note we have an auto-publish workflow in place.) Let's create another branch, e.g. v2, off of main and reopen this PR against that branch.

OK, I will add them in this pull request and will switch to the new branch v2.

This was for #47 and we did this (for now) in a simpler way where we don't refactor existing code - see resolution details in #47. We reused some of this and will probably reuse more in future.
2025-04-01T06:38:20.775906
2015-02-23T20:05:47
58640289
{ "authors": [ "davdar", "dvanhorn" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5161", "repo": "davdar/maam", "url": "https://github.com/davdar/maam/issues/11" }
gharchive/issue
Cite and discuss Hardekopf

His paper introduces a time-stamp feature to AAM for which properties can be proven and re-used. Thus it's relevant to discuss as an instance of re-usable metatheory for static analysis.

This still hasn't happened and needs to.

Done.
2025-04-01T06:38:20.792669
2019-03-31T10:35:43
427382220
{ "authors": [ "colombod", "daveaglick" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5162", "repo": "daveaglick/Buildalyzer", "url": "https://github.com/daveaglick/Buildalyzer/pull/106" }
gharchive/pull-request
change msbuild discovery path logic Change to logic for msbuild.exe discovery Thanks for pulling this out separate from the WIP solution loading - makes it easier to merge while figuring out the solution part.
2025-04-01T06:38:20.796017
2023-03-16T13:30:44
1627481627
{ "authors": [ "evguu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5163", "repo": "daveschumaker/artbot-for-stable-diffusion", "url": "https://github.com/daveschumaker/artbot-for-stable-diffusion/pull/62" }
gharchive/pull-request
Refactor advanced options page

This should help with #60. Good luck reviewing this :D

Changes:
- Facefixer strength slider now only shows when facefixer is active
- Made the giant advanced options file 500 lines less giant :smirk:
2025-04-01T06:38:20.801200
2019-02-25T11:05:04
414050192
{ "authors": [ "davestephens", "tcharewicz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5164", "repo": "davestephens/ansible-nas", "url": "https://github.com/davestephens/ansible-nas/pull/56" }
gharchive/pull-request
Deluge

Add deluge application to ansible-nas repository.

Thank you for submitting this PR, getting Deluge merged will be awesome. This has, however, reminded me why I didn't do it myself in the first place :smile:

A few things that will need fixing:

- You've supplied a default username/password, but not mentioned in the docs what it is or how to change it (I think mentioning the default for the container, and how to change it, is fine)
- The docs need to mention that having Transmission and Deluge pointing at a watch directory will cause problems
- The paths in the supplied config don't look right (i.e. /root/Downloads)
- The added variables need adding to tests/test.yml
- You've specified a local network range that's different to the one specified in the example config and transmission. It's probably worth pulling this (and the transmission one) out to a central variable in the config shared by both containers.

Also I'm not clear on: why are config files supplied? Is there something you're changing that's required that isn't possible to do via environment variables? (If so, is there a better image floating around that could be configured with environment variables?)

Please tell me why you didn't do this. Deluge after installation is not configured properly; it is missing configuration for the watch folder and where to download files, which is why I added configuration files to the repository. But I can delete these files and add to the deluge docs that after the first login the user needs to configure the watch and downloads directories manually from the webUI.

Have a read of this: https://github.com/davestephens/ansible-nas/blob/master/docs/contributing.md - it's generally considered good practise to read contribution guidelines before contributing to a project on GitHub. If you don't and still submit a PR, you should expect some sort of feedback along the lines of what the guidelines say, which is what I did.

I'm not at a computer right now but I want to test what you say about directories properly; if possible I don't want to supply config files. Reason being, if you run the playbook, change the config, then run the playbook again, you're going to break people's config for them, which is not great.
2025-04-01T06:38:20.806380
2022-03-02T17:11:05
1157470053
{ "authors": [ "alexito4" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5165", "repo": "daveverwer/iOSDevDirectory", "url": "https://github.com/daveverwer/iOSDevDirectory/pull/610" }
gharchive/pull-request
Added twitch.tv/alejandromp4 I saw there is now a section for twitch 🎉 CI issue seems to be on ruby setup not on validate.
2025-04-01T06:38:20.818912
2021-12-16T04:34:07
1081761631
{ "authors": [ "imnnos" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5166", "repo": "david-fisher/320-F21-Track-1", "url": "https://github.com/david-fisher/320-F21-Track-1/pull/114" }
gharchive/pull-request
Add pretty-print functionality to the rule classes At request of @vjawahar , let us know if this is good Will do this better and reopen a PR in a sec
2025-04-01T06:38:20.830813
2019-08-16T02:51:11
481418642
{ "authors": [ "davidalekna", "williamli" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5167", "repo": "davidalekna/react-organizer", "url": "https://github.com/davidalekna/react-organizer/issues/1" }
gharchive/issue
can events be added to the calendar?

I see that there are events: boolean in a few views, but how do I pass the actual events into the component?

Hi @williamli, events should be controlled from outside of the component, so you need to hold the state outside and just pass those in as an array of events into the Organizer component. Here is an example repo: https://github.com/davidalekna/organizer-examples/blob/master/src/index.js - let me know if you have more questions 👍🏻

I was looking for the definition of the event object. I found it in https://github.com/davidalekna/organizer-examples/blob/master/src/helpers/index.js - thanks.

No worries mate. This package is being moved to a monorepo and is in the process of being rewritten in typescript: https://github.com/davidalekna/react-components/tree/master/packages/alekna-organizer
2025-04-01T06:38:20.834367
2020-03-22T11:18:41
585695589
{ "authors": [ "Den4ik", "davidalger" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5168", "repo": "davidalger/warden", "url": "https://github.com/davidalger/warden/issues/122" }
gharchive/issue
Introduce request: Use one subdomain for multiple projects

Idea: Currently I have to generate certificates for each project and use something like app.project.test as the domain. It would be good to use domains like project.magento2.test, project.magento1.test, project.laravel.test. In this case I need to generate one wildcard certificate for all projects, and the URLs look cleaner.

@Den4ik I'm not sure this is somewhere I'd like to go with this project. Using domains in this way would have confusing semantics, and would result in conflicts for the domains generated for auxiliary services for things like Mailhog and RabbitMQ, which both run on a per-project basis at mailhog.project.test and rabbitmq.project.test currently. The use of a root CA to sign SSL certificates was done because, by design, each project should have a separate domain name.

If you're merely concerned about the manual step during setup for other devs working on the project, perhaps adopting an init script similar to this one would be a good idea (the repo this is in mirrors how I and my colleagues at Mediotype set up each Magento project to get started): https://github.com/davidalger/warden-env-magento2/blob/develop/tools/init.sh

Appreciate the suggestion, but I'm going to go ahead and close this one out.
2025-04-01T06:38:20.841122
2015-06-26T21:14:39
91364229
{ "authors": [ "davidedc", "rumblesan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5169", "repo": "davidedc/livecodelab", "url": "https://github.com/davidedc/livecodelab/issues/262" }
gharchive/issue
Syntax for inlining of function calls

Currently we support inlining in two forms:

```
rotate 1, 2 fill red box
rotate 1, 2 >> fill red >> box
```

Could we standardise on the latter of these? It still gives us a useful inlining shortcut, but it's (imho) a bit easier to see what's an argument and what's a function. Also it's a damn sight easier to parse, and would mean that I'll be able to extend it to work with arbitrary functions more easily.

That separation is not a difficult part to do - I do that separation part in 134 badly-written lines in two functions called in sequence (findQualifiers and fleshOutQualifiers) here: https://github.com/davidedc/livecodelab/blob/master/coffee/languages/livelangv1/code-preprocessor.coffee#L1080-L1214 - before those two functions the input is

```
rotate 1, 2 fill red box
```

and after those two functions it becomes

```
rotate 1, 2, -> fill red, -> box;;
```

(I then need more transformations, but that's where all the "chevrons" positioning is done). And I made no attempt to be short about it and I'm sure that there is redundant code in there; it's probably just 1 screen of clean code rather than my 3... and I have no clever tokenisation in place, which I think you have, so really the matching/transformation in your situation would be shorter and cleaner... It's just unnecessary symbols, and it's so much easier not to use the chevrons; in fact I never used them, so no, I don't think it's good to mandate them.

heh, fair enough if you want to keep it working without the >> operator. Was asking in case I could make my life easier :) I'm realising that I'm going to have to have a more extensive rewriter/preprocessor anyway. I'm adding in support for using closures as arbitrary expressions, so we can more easily pass them into functions without having to assign them to variables beforehand. Turns out that because we don't have a prefix for closures, it's really difficult to parse. Ah well :p
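The findQualifiers/fleshOutQualifiers rewrite described above can be illustrated with a toy Python rewriter over a hardcoded command list (a drastic simplification of the real CoffeeScript preprocessor):

```python
COMMANDS = {"rotate", "fill", "box"}  # toy subset; the real list is larger

def inline_qualifiers(line: str) -> str:
    """Rewrite 'rotate 1, 2 fill red box' as 'rotate 1, 2, -> fill red, -> box;;'."""
    parts, current = [], []
    for word in line.split():
        # A known command starts a new qualifier segment.
        if word in COMMANDS and current:
            parts.append(" ".join(current))
            current = [word]
        else:
            current.append(word)
    parts.append(" ".join(current))
    # Each trailing segment becomes a chained anonymous function.
    return ", -> ".join(parts) + ";;"

print(inline_qualifiers("rotate 1, 2 fill red box"))
# rotate 1, 2, -> fill red, -> box;;
```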
2025-04-01T06:38:20.882212
2018-09-25T20:56:08
363762753
{ "authors": [ "davidjtferguson" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5170", "repo": "davidjtferguson/silly-sam", "url": "https://github.com/davidjtferguson/silly-sam/issues/19" }
gharchive/issue
improve physics drawing I've been drawing each physics object manually but just saw this tutorial for doing it all together https://love2d.org/wiki/Tutorial:PhysicsDrawing Should go through and try and apply. Need to look into if I can expand it to change colour for each shape or texture each object. Would be really nice to centralize object drawing if poss. I'm not going to do this.
2025-04-01T06:38:20.884859
2017-10-06T15:12:04
263479751
{ "authors": [ "davidkpiano", "mewben", "stevenmason" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5171", "repo": "davidkpiano/react-redux-form", "url": "https://github.com/davidkpiano/react-redux-form/issues/964" }
gharchive/issue
Pulling react-redux-forms from npm doesn't have 1.14.2 built code After updating to 1.14.2 and using isValid an undefined error is thrown. It appears the changes are in the /src folder but not in /lib. I built it manually and it works fine. @davidkpiano would you be able to push the changes to npm please? Same here... exported isValid is found in the /src but not in the /lib 1.14.4 was just published! Thanks @davidkpiano !
2025-04-01T06:38:20.892080
2024-11-12T21:23:36
2653356410
{ "authors": [ "davidlj95" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5172", "repo": "davidlj95/ngx", "url": "https://github.com/davidlj95/ngx/pull/1034" }
gharchive/pull-request
chore: remove unneeded entries in CI's Makefile

Issue or need
Some entries in CI/CD's .ci/Makefile are just redirecting to run scripts. The purpose of that file is to add commands that are only different when running in CI/CD.

Proposed changes
Remove CI's Makefile targets that just run regular run scripts.

Quick reminders
- 🤝 I will follow Code of Conduct
- ✅ No existing pull request already does almost same changes
- 👁️ Contributing docs are something I've taken a look at
- 📝 Commit messages convention has been followed
- 💬 TSDoc comments have been added or updated indicating API visibility if API surface has changed.
- 🧪 Tests have been added if needed. For instance, if adding new features or fixing a bug. Or removed if removing features.
- ⚙️ API Report has been updated if API surface is altered.

#1034 👈 main

This stack of pull requests is managed by Graphite. Learn more about stacking.
2025-04-01T06:38:20.897511
2020-02-11T19:31:04
563426958
{ "authors": [ "davidmc24", "dcabasson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5173", "repo": "davidmc24/gradle-avro-plugin", "url": "https://github.com/davidmc24/gradle-avro-plugin/pull/102" }
gharchive/pull-request
Implement task configuration avoidance for avro plugin

This PR supports https://github.com/davidmc24/gradle-avro-plugin/issues/97

@davidmc24 - not quite ready yet. This needs a bit of work to make sure the tests are passing. Some tests will need to be adjusted, but I think there is an issue with the code. I will look into it as soon as I find a bit of time to do so.

Sounds good. Let me know when it's ready.

Went with a different implementation for #97. Thanks for the contribution. It was a helpful example of some of the techniques needed.
2025-04-01T06:38:20.910947
2019-07-20T20:30:30
470711845
{ "authors": [ "Alex-Witkowski", "farr64" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5174", "repo": "davidortinau/Xappy", "url": "https://github.com/davidortinau/Xappy/pull/41" }
gharchive/pull-request
Remove BorderlessEntryRenderer that was left after cleanup. Fixed iOS build.

With the cleanup a few days ago, BorderlessEntry was removed but the renderer was still in the iOS project and broke the build.

iOS builds now, but Xappy stops immediately after displaying its blue launch screen. I have dealt with the usual suspects ("xapping" the .vs, bin and obj directories) without success. I haven't changed anything and this is the cloned repository directly from GitHub to my Mac. I have attached a file with the 16 warnings reported during the build. The head and tail of the installation, in case this helps:

```
/Library/Frameworks/Xamarin.iOS.framework/Versions/Current/bin/mlaunch -sdkroot "/Applications/Xcode.app/Contents/Developer" --installdev "~/Documents/Xamarin_solutions/Xappy/Xappy/Xappy.iOS/bin/iPhone/Debug/device-builds/iphone8.1-12.3.1/Xappy.iOS.app" --device ios "--devname=FAR iP6s" --install-progress
Installing application bundle 'com.companyname.Xappy' on 'FAR iP6s'
Installing application bundle 'com.companyname.Xappy' on 'FAR iP6s'
TransferringPackage - PercentComplete: 10%
CopyingFile - Path: ~/Documents/Xamarin_solutions/Xappy/Xappy/Xappy.iOS/bin/iPhone/Debug/device-builds/iphone8.1-12.3.1/Xappy.iOS.app/META-INF/
CopyingFile - PercentComplete: 10%
. . .
CreatingStagingDirectory - PercentComplete: 5%
ExtractingPackage - PercentComplete: 15%
InspectingPackage - PercentComplete: 20%
TakingInstallLock - PercentComplete: 20%
PreflightingApplication - PercentComplete: 30%
InstallingEmbeddedProfile - PercentComplete: 30%
VerifyingApplication - PercentComplete: 40%
CreatingContainer - PercentComplete: 50%
InstallingApplication - PercentComplete: 60%
PostflightingApplication - PercentComplete: 70%
SandboxingApplication - PercentComplete: 80%
GeneratingApplicationMap - PercentComplete: 90%
Application bundle 'com.companyname.Xappy' installed on 'FAR iP6s'
Upload succeeded.
```

190721_Xappy_warnings.txt

Thanks. More FYI: I attempted to include all nightly builds even remotely associated with this issue. Please see the attached report with specific details: "Could not add packages." Are there any specific packages to include and/or to exclude? All I'm trying to do is to get some consistent state that will build and run without gobbling up all of my scarce time, with the objective of using some of the working code as an example of what Xamarin can do. Thanks for doing all the hard work on the fundamentals.

190721_Xappy_could_not_add_packages.txt

Hi @farr64, for me the iOS build and run works with the configuration Debug | iPhoneSimulator > iPhone XR iOS 12.2. Not tried with a real device or other configurations. Maybe that helps. Best, Alex

P.S. If you still have errors, it may help if you attach the Application Output besides the Tool Output (build).

Hi Alex, it most certainly helped: Xappy works wonderfully on the simulator. Thanks for the great tip ;-) Now, if we can only get this baby out of the simulator and into the real world, that would be a giant step for Xumanity. Almost there, an important step at a time. I appreciate all of your work. I know how challenging each step is.

Well . . . I let the simulator run for a few minutes and then the simulator decided to eject Xappy (or perhaps vice versa). I enclose the crash log. I didn't bother to send it to Apple, so I just copied and pasted and saved it for you. Thanks.

190721_Xappy_Simulator_crash.txt

So the native crash log doesn't help. I have fixed one more crash that comes when navigating to About, but the PullRequest is currently missing. Will see if I can do it today evening (CEST). It looks like the iOS version needs some general love ;)
2025-04-01T06:38:20.954672
2024-03-03T14:06:34
2165376793
{ "authors": [ "JeromeGsq", "Mayorc1978", "Neoplayer", "PilarHidalgo", "Sarinoty", "bgeneto", "camsique", "clintonruairi", "davila7", "djacquensf9", "gudzenkov", "luqmanyusof", "matiaszanolli", "pokhreldipesh", "red-daut" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5175", "repo": "davila7/code-gpt-docs", "url": "https://github.com/davila7/code-gpt-docs/issues/237" }
gharchive/issue
VSCode plugin login failure CodeGPT plugin login does redirect tot http://localhost:54112/auth/callback?code=XXX Once logged in with Google, page shows Connection Success and redirect back to VSCode app. Launched external handler for 'vscode://'. However, nothing really happens, plugin still shows as logged out. The only log entry is this - [15:15:51] Registering CodeGPT Copilot provider Re-installing the plugin did not help. Removing plugin files from ~/.vscode/extensions/danielsanmedium.dscodegpt-3.2.1/ with clean install did not help. VSCode upgrade 1.86.2 to 1.87.0 did not help. Another confusing thing is version - plugin version is 3.2.1, Chat shows 3.1.1, welcome banner shows 2.0. VSCode Version: 1.87.0 Commit: 019f4d1419fbc8219a181fab7892ebccf7ee29a2 Date: 2024-02-27T23:42:51.279Z Electron: 27.3.2 ElectronBuildId: 26836302 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: <IP_ADDRESS>-electron.0 OS: Darwin arm64 23.3.0 +1 Having the exact same issue. @gudzenkov @red-daut Hi guys!, try to update Node.js: 18.17.1 to Node.js 20. That is Node.js bundled with VSCode 1.87.0 My local node is already 21.6.2 facing the same issue. not redirect to VSC after web login Version: 1.87.0 (user setup) Commit: 019f4d1419fbc8219a181fab7892ebccf7ee29a2 Date: 2024-02-27T23:41:44.469Z Electron: 27.3.2 ElectronBuildId: 26836302 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: <IP_ADDRESS>-electron.0 OS: Windows_NT x64 10.0.22621 local node version 20.11 Having the same issue. Mostly the same happening here. Ubuntu 23.10, VS Code Insiders (latest as of this post). I get the "Connection Success" message" but when I click on "Open in VSCode Insider", Chrome silently fails to reach my running VSCode instance. Some issue here. Can't login. Is there any alternative form of login to codeGPT vscode extension? Same issue too. I click on "Open in VSCode" after web login but nothing happens. I tried with with Firefox and Chrome, same issue. I'm on Linux Mint. Same issue here, vscode on macos. CodeGPT: v3.2.4 VSCode: 1.87.2 I cannot login, stuck here: If I click signin it opens the browser at this local page: http://localhost:54112/login But get this: VSCODE: Versione: 1.87.2 (user setup) Commit: 863d2581ecda6849923a2118d93a088b0745d9d6 Data: 2024-03-08T15:20:17.278Z Electron: 27.3.2 ElectronBuildId: 26836302 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: <IP_ADDRESS>-electron.0 Sistema operativo: Windows_NT x64 10.0.19045 same problem here. Is this going to be fixed? if not I'll just use cursor We have not been able to replicate the error, so far every time we log in, the account remains connected in the extension. The only error that currently exists is that depending on the browser the buttons to return to VSCode do not work, this will be resolved in the next version. You could follow this tutorial to confirm that the entire Login process is correctly working on your accounts. https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h If after following the tutorial the Connected icon still does not appear in CodeGPT, could you send me an email to<EMAIL_ADDRESS>with a video or images of the complete flow you are doing so we can fix the problem. Thank you very much for your help reporting the error, we will try to solve it as soon as possible Emailed you with the video! @bgeneto @matiaszanolli @clintonruairi Hi guys! I just test it on Ubuntu 22.04 and works! 
Just follow the tutorial; it is the same for Ubuntu: https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h Can you guys provide a way to copy the token and paste it into the extension via cmd? I'm stuck, can't log in to Cursor either. No need for the token, just set the connection in the menu; please follow the tutorial https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h I followed the tutorial the first time. It didn't work. After you posted this message, I made sure to follow it again, very slowly, to ensure I was not doing anything incorrectly. Same result. When prompted to sign in by the extension, it is still redirected to a dead page on localhost. The CodeGPT plugin still gets stuck on an infinite loading screen. When manually clicking sign in from the extension bar, it still redirects to a dead page on localhost. This problem has been echoed by dozens of your users. If this extension worked I would happily pay the annual subscription fee. If your 'solution' is that we should all watch your YouTube video again, and that it works for you, I guess we should all just use Cursor instead. https://cursor.sh/ Screencast from 2024-04-17 22-21-23.webm @clintonruairi Now that we can see the video, possibly the problem is that port 54112 or 54113 is being used by another service. Could you check if, by turning off the services that run through those ports, you can now get the extension running? Nope, no process running on those ports. Ran the following commands: sudo lsof -i :54112 sudo lsof -i :54113 sudo fuser 54112/tcp sudo fuser 54113/tcp sudo fuser -k 54113/tcp sudo fuser -k 54112/tcp First to see if there were any processes running on those ports - returned nothing. Then killing any processes on those ports, just to be sure. Then installed CodeGPT, opened the sidebar - still stuck on infinite loading. Clicked the sign-in prompt in the bottom right of VS Code, was redirected to a 'Not found' link on localhost. Screenshot: Other things I have tried so far (all unsuccessful), each after uninstalling CodeGPT, closing VS Code, and then reinstalling CodeGPT: disabled firewall disabled VPN cleared browser cache for CodeGPT changed browser to Firefox disabled adblocker made sure VS Code up to date made sure CodeGPT up to date tried on multiple other wifi networks tried on 2 other machines, Windows 11 and macOS Sonoma 14.4.1 restarted/reinstalled VS Code Behaviour is the same across all of the above configs. Leads me to believe this is almost certainly a problem with CodeGPT itself, or conflicting behaviour with other extensions. My installed extensions: code --list-extensions cweijan.vscode-office dbaeumer.vscode-eslint ecmel.vscode-html-css esbenp.prettier-vscode file-icons.file-icons github.github-vscode-theme grapecity.gc-excelviewer infeng.vscode-react-typescript magicstack.magicpython mkxml.vscode-filesize monokai.theme-monokai-pro-vscode ms-azuretools.vscode-docker ms-python.debugpy ms-python.python ms-python.vscode-pylance ms-vscode-remote.remote-containers ms-vscode.remote-repositories ms-vscode.vscode-typescript-next pmneo.tsimporter prisma.prisma rust-lang.rust rust-lang.rust-analyzer tomoki1207.pdf xabikos.javascriptsnippets yoavbls.pretty-ts-errors Got any suggestions? @davila7 Thank you for the information @clintonruairi We are evaluating with the team what might be happening... we will keep you updated. @clintonruairi Now that we can see the video, possibly the problem is that port 54112 or 54113 is being used by another service.
Could you check if, by turning off the services that run through those ports, you can now get the extension running? Used this command to check whether those ports were occupied by a service: Get-NetTCPConnection | where {$_.LocalPort -eq 54112 -or $_.LocalPort -eq 54113} It returned nothing when VSCode wasn't active. It returned values, and the PID belonged to a VSCode process; suspecting it was CodeGPT, I uninstalled the extension, and now it doesn't detect it. Which means the problem is not dependent on some other process, because as soon as I reinstalled, those ports were detected as used... by CodeGPT. But it still cannot connect. @bgeneto @matiaszanolli @clintonruairi Hi guys! I just tested it on Ubuntu 22.04 and it works! Just follow the tutorial; it is the same for Ubuntu: https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h Hi, for info, that is not the right way to reproduce this issue. You need to do a remote connection, for example through an SSH connection to a GNU/Linux instance. Thanks @bgeneto @matiaszanolli @clintonruairi Sorry guys, you are right, I haven't tried it on a remote server yet, but I made a procedure for WSL here: https://medium.com/p/881b91ba193e I'm getting "unable to connect to the extension services"; however, I have the latest Node and VSCode versions and port 54112 is unused.
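For reference while triaging reports like this, here is a minimal Python sketch (not part of CodeGPT; the port numbers are simply the ones mentioned in this thread) that checks whether the login-callback ports are free by attempting to bind them:

```python
import socket

PORTS = (54112, 54113)  # callback ports mentioned in this thread

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))  # binding succeeds only if the port is free
            return True
        except OSError:
            return False

for port in PORTS:
    print(f"port {port}: {'free' if port_is_free(port) else 'in use'}")
```

If a port reports "in use" while VSCode is closed, some other service is holding it, which would explain the dead callback page.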
2025-04-01T06:38:20.961201
2019-03-28T10:57:01
426427539
{ "authors": [ "RafayAK", "davisking", "nmaynes" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5176", "repo": "davisking/dlib", "url": "https://github.com/davisking/dlib/issues/1706" }
gharchive/issue
Support for reading images in correct orientation using EXIF data. Current Behavior I'm working on image processing with some images I collected myself. Dlib's dlib.load_rgb_image('image_path') method swaps the rows and columns on some images while OpenCV's cv2.imread('image_path') method does not. Check out the results below img = dlib.load_rgb_image("myimg.jpg") print(img.shape) -------------------- OUTPUT: (1944, 2592, 3) (the resultant image is rotated 90 degrees clockwise) while OpenCV's method returns the correct shape: img = cv2.imread("myimg.jpg") print(img.shape) -------------------- OUTPUT: (2592, 1944, 3) dlib.load_rgb_image() does not take into account the EXIF orientation metadata, so some images are read incorrectly. I don't want to go in and rotate some of these offending images myself manually because I'm creating an app. Is there a way in Dlib to read images using orientation information? Note: I asked this question on Stack Overflow; one of the comments told me to create an issue here Version: 19.17.0 Where did you get dlib: pip Platform: Windows 10 - 64bit Compiler: python 3.6 Added platform info Yeah, it doesn't do anything with EXIF data. It would be cool if the loader used it. Someone should submit a pull request that adds that feature :) I'll see what I can do. Would this require changes somewhere towards the top of image_loader.h? That would be sensible.
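Until dlib's loader honors EXIF, a common workaround is to apply the orientation transform with Pillow before handing the pixels to dlib. A minimal sketch (this uses Pillow's ImageOps.exif_transpose, not any dlib API):

```python
import numpy as np
from PIL import Image, ImageOps

def load_rgb_exif_aware(path: str) -> np.ndarray:
    """Load an image as an RGB numpy array, honoring the EXIF Orientation tag."""
    img = Image.open(path)
    img = ImageOps.exif_transpose(img)  # rotate/flip according to EXIF metadata
    return np.asarray(img.convert("RGB"))

img = load_rgb_exif_aware("myimg.jpg")
print(img.shape)  # (rows, cols, 3), matching the on-screen orientation
```

The resulting array can be passed to dlib detectors the same way as the output of dlib.load_rgb_image.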
2025-04-01T06:38:20.963678
2018-04-11T13:56:21
313335711
{ "authors": [ "jaroslawk" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5177", "repo": "davisking/dlib", "url": "https://github.com/davisking/dlib/pull/1253" }
gharchive/pull-request
[fix] this change fixes compilation problems under mac I have noticed the following problem with compilation on Mac: dlib/tools/python/src/other.cpp:56:1: error: reference to 'list' is ambiguous list _max_cost_assignment ( ^ /Library/Developer/CommandLineTools/usr/include/c++/v1/list:805:28: note: candidate found by name lookup is 'std::__1::list' class _LIBCPP_TEMPLATE_VIS list ^ /usr/local/include/boost/python/list.hpp:57:7: note: candidate found by name lookup is 'boost::python::list' class list : public detail::list_base ^ /Users/jaroslaw/code/dlib/tools/python/src/other.cpp:72:11: error: reference to 'list' is ambiguous const list& assignment ^ /Library/Developer/CommandLineTools/usr/include/c++/v1/list:805:28: note: candidate found by name lookup is 'std::__1::list' class _LIBCPP_TEMPLATE_VIS list ^ /usr/local/include/boost/python/list.hpp:57:7: note: candidate found by name lookup is 'boost::python::list' class list : public detail::list_base ^ 2 errors generated. make[2]: *** [CMakeFiles/dlib_.dir/src/other.cpp.o] Error 1 error: cmake build failed! this change fixes it. PLEASE DOUBLE CHECK ME - I have not read this code or code in c++ recently... :) I see the fix in place already :D
2025-04-01T06:38:20.967381
2024-01-10T17:03:07
2074797644
{ "authors": [ "davisriedel", "seldstein" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5178", "repo": "davisriedel/obsidian-typewriter-mode", "url": "https://github.com/davisriedel/obsidian-typewriter-mode/issues/44" }
gharchive/issue
Cannot access command palette from fullscreen Hi Davis, I can't access the command palette from fullscreen mode. Let me know if there's other info you need that would help solve this. Thanks! Also: I was trying to access the palette because I wanted to split the screen to see two notes at once. I left fullscreen mode, split the screen, and reactivated fullscreen. One of the notes closed. So it seems that you can't split the screen in fullscreen mode. Let me know if you'd like me to open a separate issue thread for this. Thanks again! Hey, thanks for opening the issue. To achieve Fullscreen Mode I am just calling requestFullscreen on the Editor element. Since Obsidian places the command palette on a completely separate div that is not a child of the editor element, there is no way to make it appear in fullscreen mode as it is implemented at the moment. This also explains why you can only fullscreen one editor, as only one element can be fullscreen. To solve these issues we'd have to fullscreen the complete Obsidian window and do some CSS tricks to make the editor appear fullscreen. I tried some things, but somehow position: fixed on the editor is not working. I suppose Obsidian is using one of these properties https://stackoverflow.com/a/52937920 in one of the parent nodes. But I could not identify it quickly. So fixing this requires some more thought and experimentation. I have thus postponed it to a later release. Gotcha. Thanks for the explanation. For now, I'm using fullscreen with the Minimal theme and Hider plugin. That's what I was doing before, and it works fine. Hey 👋 in the meantime I have completely rewritten the fullscreen mode, which fixes this issue as well. If any problems remain, feel free to open an issue.
2025-04-01T06:38:21.014147
2018-05-07T10:11:36
320749044
{ "authors": [ "RPaetau", "dazinator" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5180", "repo": "dazinator/Dazinator.AspNet.Extensions.FileProviders", "url": "https://github.com/dazinator/Dazinator.AspNet.Extensions.FileProviders/issues/21" }
gharchive/issue
Fileprovider.Watch("") Does not work as expected Hello again, ive been using this in memory file magic some more, and ive found that the root directory (in this case i mean the root directory of any given provider) does not function properly, i cannot watch for changes in this root directory forexample, i get a null argument exception, which i dont get with the physical file provider forexample.. Its quite unsettling for me that my tests have completely different behaviors in this regard than my actual physical file provider. [Fact] public async Task GetFilesToBackupQuery_NewApproach_FilewatchIsApplied() { // Arrange var rootLocation = "/RAP"; var directory = new InMemoryDirectory(rootLocation); var fp = new InMemoryFileProvider(directory); var watcher = fp.Watch(""); // This line throws, but i expect it to watch rootLocation var changeMe = false; watcher.RegisterChangeCallback((state) => { changeMe = true; }, new object()); // Act directory.AddFile(rootLocation, new StringFileInfo("Test", "Test.txt")); await Task.Delay(2000); // Assert changeMe.Should().Be(true); } Interesting. Does Watch("/") work? It doesent throw if i use it with "/", however this is different from how the physical file provider works (: It also dident register any change when i added that file a couple lines later (: What do you think? Sounds like its a bug if its not behaving consistently with how Watch works with other providers like the PhysicalFileProvider. I'd welcome a PR or added test coverage in this area. Otherwise it may be a while until I can look at it, but I'll add it to my list! You can see there are various tests for watch here: https://github.com/dazinator/Dazinator.AspNet.Extensions.FileProviders/blob/master/src/Dazinator.AspNet.Extensions.FileProviders.Tests/InMemoryFileProviderTests.cs#L215 These were produced at the time based on the physical file watcher tests. However I can't see one that tests watching for a new file - only tests are for watching changes to an existing file. It would be interesting to see how PhysicalFileWatcher handles that. Also I am pretty sure watching("") used to be correct, are you using asp.net core 1.X or 2.X ? Perhaps things have changed with physicalfileprovider in 2.X compared to the 1.X version? My manager has agreed to me spending some of my work time doing a PR, so ill see what i can figure out (: Im using 2.X of asp.net core (: After digging around in your code a bit, it seems to me that this code was never designed to support watching directories, only files, and the tests seem to back this theory since there arent any that watches a directory (: So i think its gonna take a bit of a redesign.. This whole filters concept seems to work really nice for checking if a file has changed, but i cant really make it work with directories, since its not what its designed for sadly.. Im totally down for helping, but i think you might need to make some design considerations (: From what ive learned so far it seems that we can check if a path has a file extension (.txt ect) and if it does not then we can add a "/" wildcard to make the globbing thing work, but then we get another design issue later when it then tries to notify the watcher cause it cannot find a watcher with the path "/" cause the watcher would be watching "". Im gonna keep going for a while here, i just wanted to put down these thoughts while they were still fresh! I am up for changes in design that facilitate the end goal of mirroring physicalfileprovider behaviours in terms of watching. 
It would be good initially just to get a few failing tests up that we want to pass, and that PhysicalFileProvider passes with - if you get a chance to add a few such tests we can then discuss any design changes in terms of making those tests pass. Closing here, thanks for this, it was my first contribution to an OS framework, and it was a fun and educational experience (: @dazinator Just one last comment, any plans on making a new release, or should I just use the unstable version for a while? (: If you can use the latest unstable NuGet package for now that would be great. I will issue a new release but I want to ensure there has been some time for this change to sink in - if you hit any further issues please let me know.
2025-04-01T06:38:21.082306
2021-01-18T16:48:12
788415109
{ "authors": [ "alumni", "uslss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5181", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/11036" }
gharchive/issue
Incorrect format of generated binary literals System information: Operating system (distribution) and version: any DBeaver version: 7.2.3 and earlier Java version: any Additional extensions: none Connection specification: Database name and version: HANA Cloud (4.0), also earlier Driver name: com.sap.db.jdbc.Driver, 2.7.9 and earlier Do you use tunnels or proxies (SSH, SOCKS, etc)? no Describe the problem you're observing: In short: Binary literals are generated as 0x<value> instead of X'<value>', where <value> is the value in hexadecimal format. Long: Generated binary literals use the non-standard format: 0xAABB. However, the SQL standard specifies that X'AABB' should be used instead. Some of the DBs (like MariaDB) still support the non-standard format probably for compatibility reasons, but others (like HANA) don't (HANA actually treats 0xAABB as an hexadecimal integer value). Snippet from the ANSI SQL '92 Standard: <hex string literal> ::= X <quote> [ <hexit>... ] <quote> [ { <separator>... <quote> [ <hexit>... ] <quote> }... ] HANA 2.0.5 docs: https://help.sap.com/viewer/4fe29514fd584807ac9f2a04f6754767/2.0.05/en-US/20a1569875191014b507cf392724b7eb.html Steps to reproduce, if exist: Create a table with a binary primary key. Go to table data and try to delete a row. Save and refresh the data. The row is still there. Alternatively, go to table data and right click a row, then generate delete query. Run the generated query to get the following error: SQL Error [266] [07006]: SAP DBTech JDBC: [266] (at 43): inconsistent datatype: INT type is not comparable with VARBINARY type.: line 2 col 12 verified
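For illustration, the standard literal is straightforward to produce from raw bytes; a minimal Python sketch (illustrative only, not DBeaver's actual generator code) contrasting the two forms:

```python
def to_sql_binary_literal(value: bytes) -> str:
    """Render bytes as an ANSI SQL hex string literal, e.g. X'AABB'."""
    return "X'{}'".format(value.hex().upper())

payload = b"\xaa\xbb"
print(to_sql_binary_literal(payload))   # X'AABB'  -- standard form, accepted by HANA
print("0x" + payload.hex().upper())     # 0xAABB   -- non-standard form described above
```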
2025-04-01T06:38:21.091825
2021-06-04T11:58:12
911427566
{ "authors": [ "HeikkiVesanto", "uslss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5182", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/12735" }
gharchive/issue
Function header and trailer missing for PostgreSQL I suspect this is similar to https://github.com/dbeaver/dbeaver/issues/3892 but it impacts PostgreSQL versions well above 8.4. System information: Windows Tested on: DBeaver 7.3.0 DBeaver 21.1.0 Connection specification: Tested on: PostgreSQL 9.5 and 10.10: PostgreSQL 9.5.23 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit PostgreSQL 10.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit Driver name PostgreSQL JDBC Driver Do you use tunnels or proxies (SSH, SOCKS, etc)? Happens both on and off SSH. Describe the problem you're observing: Sometimes when loading a function in PostgreSQL you do not get the header. You end up with: Whereas it should look like: Also missing from the end is: $function$; Steps to reproduce, if exist: Open up a PostgreSQL connection. Open the functions list in a schema with functions (more than 1 would show the results best). Open up (view function) one of the functions in the schema. This should load as expected, close the function. Leave DBeaver running for at least an hour with no interaction. Open up one of the functions that was not opened in step 3. This will not load the header. Opening up the same function that was opened in step 3 still loads correctly. Workaround: Restart DBeaver, or just disconnect the connection and reconnect (Invalidate/Reconnect does not seem to be enough). Include any warning/errors/backtraces from the logs Logs seem to show: 2021-06-04 12:50:21.729 - Error reading procedure body org.postgresql.util.PSQLException: This connection has been closed. at org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:767) at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:1659) at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:373) at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.prepareStatement(JDBCConnectionImpl.java:244) at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.queryString(JDBCUtils.java:624) at org.jkiss.dbeaver.ext.postgresql.model.PostgreProcedure.getObjectDefinitionText(PostgreProcedure.java:400) at org.jkiss.dbeaver.ui.editors.sql.SQLSourceViewer.getSourceText(SQLSourceViewer.java:85) at org.jkiss.dbeaver.ui.editors.sql.SQLEditorNested$ObjectDocumentProvider$1.lambda$0(SQLEditorNested.java:271) at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:169) at org.jkiss.dbeaver.ui.editors.sql.SQLEditorNested$ObjectDocumentProvider$1.run(SQLEditorNested.java:269) at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63) Thanks for the report. Closed as a duplicate of #12649.
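As a side note, while DBeaver misbehaves you can fetch the complete definition (header and $function$ trailer included) straight from the server with pg_get_functiondef. A minimal sketch using psycopg2; the DSN, schema, and function name below are placeholders:

```python
import psycopg2

# Placeholder DSN -- adjust for your server
conn = psycopg2.connect("dbname=mydb user=me host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT pg_get_functiondef(p.oid) "
        "FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace "
        "WHERE n.nspname = %s AND p.proname = %s",
        ("public", "my_function"),  # placeholder schema and function name
    )
    for (definition,) in cur.fetchall():
        print(definition)  # full CREATE OR REPLACE ... $function$ body $function$
conn.close()
```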
2025-04-01T06:38:21.108681
2021-07-26T18:58:06
953193262
{ "authors": [ "LonwoLonwo", "Matvey16", "bdietz400", "xantari" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5183", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/13322" }
gharchive/issue
IBMi DB2 / AS400 - Auto Generated / Auto Increment not correctly showing System information: Operating system (distribution) and version: Windows 10 / 64bit DBeaver version: 21.0.1 and 21.0.2. Also verified issue in 6.0.0 Additional extensions: None Connection specification: Database name and version: com.ibm.as400.access.AS400JDBCDriver, jt400-10.5.jar (10.5) Driver name: JT400 / com.ibm.as400.access.AS400JDBCDriver Do you use tunnels or proxies (SSH, SOCKS, etc)? No Describe the problem you're observing: When viewing the column properties on the table, it is not properly identifying identity columns on the table and showing us that they auto-increment. Example: You can see the fields are not marked as auto-incrementing, even though it is an identity column. If I use the native ACS viewer from IBM, I see this: Steps to reproduce, if exist: Double-click a table with an identity column in DBeaver to bring up that table's properties window. Include any warning/errors/backtraces from the logs No warnings. Hello @xantari Please add table DDL info for tests. I can't reproduce it for now for DB2 LUW. This is for DB2 for IBMi, which is a different DB2 dialect than LUW and uses a different JDBC driver than the LUW version. Here is the DDL: CREATE TABLE ARRTFLIB.CEACTIVITYAPIUSERS FOR SYSTEM NAME CEACTAPIU ( IDSPROVIDERID FOR COLUMN IDSPROID INTEGER GENERATED ALWAYS AS IDENTITY ( START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE NO ORDER NO CACHE ) , CLIENTID VARCHAR(200) CCSID 37 NOT NULL , ALLSPONSORACCESS FOR COLUMN ALLSPAXS SMALLINT NOT NULL DEFAULT 0 , CONSTRAINT ARRTFLIB.Q_ARRTFLIB_CEACTAPIU_IDSPROID_00001 PRIMARY KEY( IDSPROVIDERID ) ) RCDFMT CEACTAPIU ; LABEL ON TABLE ARRTFLIB.CEACTIVITYAPIUSERS IS '-CEACTIVITYAPIUSERS' ; LABEL ON COLUMN ARRTFLIB.CEACTIVITYAPIUSERS ( IDSPROVIDERID TEXT IS 'Identity Server PoviderID' , CLIENTID TEXT IS 'Client ID' , ALLSPONSORACCESS TEXT IS 'All sponsor access flag' ) ; Ok, thanks for the bug report. For AS400 DBeaver shows only what the driver gives. So maybe this is a driver issue. We don't have a DB2 AS400 test environment for testing, unfortunately. Therefore we can't fix it now. Maybe someday. @xantari This driver can be very sensitive to its properties, and changing their default values can fix some issues for some people but break something for others. So my advice to you for now is to try carefully tweaking them (or first looking in the documentation for a needed property) to test if it can be changed @Matvey16 Thanks, I did some experimentation and found a result that does show the auto-increment properties properly. But then it messes up the column retrieval information. Here is what I had: metadata source: 0 translate binary: true date format: iso time format: iso extended metadata: false With the above settings you get the column names in addition to the column text underneath the column as follows when you do a select * from table: To fix the identity column information issue I then changed it to the following: metadata source: 1 translate binary: true date format: iso time format: iso extended metadata: true So lines #1 and #5 above were changed. You cannot leave metadata source: 0 and just set "extended metadata" to true, as that still doesn't allow you to view the identity column information, even though the documentation says it should.
The problem now, though, is that with #1 and #5 above changed I now get this when viewing that same table as above: As you can see in the above image, though I've fixed the issue with the identity column property info now being displayed in DBeaver, I have completely lost the column names on the result sets. So, a bit more experimentation. Used the following driver properties: metadata source: 1 translate binary: true date format: iso time format: iso extended metadata: true When I have this unmarked: I get this: When marking it: I get the column names back: Now, I wondered how to get the column headers back. And I noticed this property in DBeaver: It was already marked, and should show the column description/labels in the header, right? Kinda wondering if this is now a DBeaver bug. Closing this in favor of #13335 since the original issue this report is about is fixed, but a separate issue has now cropped up. Ok, thanks for the bug report. For AS400 DBeaver shows only what the driver gives. So maybe this is a driver issue. We don't have a DB2 AS400 test environment for testing, unfortunately. Therefore we can't fix it now. Maybe someday. You may be able to get a free IBM i account at https://pub400.com/.
2025-04-01T06:38:21.116784
2022-04-01T21:49:59
1190301921
{ "authors": [ "Chealer", "ShadelessFox" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5184", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/16049" }
gharchive/issue
No shortcut to open a table from the References panel Is your feature request related to a problem? Please describe. When following references between multiple tables, the References panel comes in useful to go from table A to table B. However, if table C has a foreign key referring to table B, it is then inconvenient to get to C. I see no way to do that with DBeaver 22.0.1. Describe the solution you'd like A simple way would be to allow opening the References panel in its own full tab, so that we can then use References again. Describe alternatives you've considered Perhaps a more elaborate solution like Toad's Master-Detail Browser would be even better. Hello @Chealer, Sorry for the late response. I'm unsure if I understand you correctly. Can you please describe (or show using a video) a use case for your feature request? Nothing to be sorry about @ShadelessFox A use case would involve 2 entities indirectly linked with foreign keys, through a third entity. For example, an organization can have members, and a member can have skills. If a table for skills has a foreign key to a table for members, and the members table has a foreign key to an ORGANIZATION table, it would be great to be able to quickly go from SKILL to ORGANIZATION through MEMBER, by selecting SKILL's foreign key, then MEMBER's foreign key. So, basically, you want this combo menu to show references from the table that is currently shown in that panel? I'm not sure what data that panel should display in such a case. Should it result in a query that looks something like this? SELECT * FROM skills WHERE member_id IN (SELECT member_id FROM members WHERE organization_id = <selected row>); Thanks. That is not really what I wanted @ShadelessFox, but please excuse me and disregard my previous comment. I haven't used DBeaver in a while and was confused. I retested, and here is the actual use case which is problematic: For example, an organization can have members, and a member can have skills. If a table for skills has a foreign key to a table for members, and the members table has a foreign key to an ORGANIZATION table, it would be great to be able to quickly go from ORGANIZATION to SKILL through MEMBER, by selecting MEMBER's reverse foreign key from ORGANIZATION, then SKILL's reverse foreign key. This would allow one to quickly find skills present in an organization. I don't know the best way to do this, but one way would be to add a button in the References panel allowing one to make it a full-fledged tab (rather than a sub-tab). There could be some "Separate into new tab" button. To achieve the above, I would go to ORGANIZATION, select a row, open the References panel, select a member in the References panel, use the new "Separate into new tab" button, and from there open the References panel again. By the way, while Toad's Master Detail can serve as inspiration, its design is not that intuitive, so I would not advise replicating it exactly. The provided solution is actually pretty useful, so we can stick with it. I was unable to take a look at Toad's Master Detail. Can you please, if possible, provide footage that shows its functionality?
2025-04-01T06:38:21.118890
2022-05-20T13:39:45
1243165763
{ "authors": [ "itphonim", "uslss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5185", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/16555" }
gharchive/issue
Is there a way not to have to reinstall plugins after every update? Hi, I'm using DBeaver in the zip version (Windows); every time I migrate to a newer version I have to redownload and reinstall the plugins. Fortunately I'm using only one. Having this message for instance with the Office addon: Thanks for the suggestion. Closed as a duplicate of #5317.
2025-04-01T06:38:21.124340
2018-07-06T13:43:15
338944714
{ "authors": [ "nkiseev", "serge-rider" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5186", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/3763" }
gharchive/issue
I can't modify a cell in the resultset. Earlier this function worked. It is a very necessary function and it does not work. Can't reproduce. Please provide more details. What is your database? How do you edit cells (in the table editor or in custom query results)? Are you sure that exactly the same results were editable in earlier versions? Sorry. I rechecked and made sure that I can edit cells in the result of a query for MySQL and PostgreSQL. I can't edit cells for Amazon Redshift after updating the driver for this DB. It is a driver issue (I think) or an interaction between DBeaver and the Amazon Redshift driver. I tried editing a cell in the custom query results. This was fixed in 5.1.5.
2025-04-01T06:38:21.126168
2018-08-01T12:07:14
346571521
{ "authors": [ "khushalc", "serge-rider" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5187", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/3900" }
gharchive/issue
SQL Editor Issue Whenever I press Enter multiple times and then backspace, scroll bars appear on the editor screen; it's irritating after a few occurrences. Please check the workaround in #3916.
2025-04-01T06:38:21.132182
2016-06-19T15:17:18
161077493
{ "authors": [ "SirBenJammin", "andreescastano", "dburtonDRW", "earsonlau", "eng543", "lalato", "serge-rider", "shungabubus", "xenago", "yonisade" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5188", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/546" }
gharchive/issue
Additional Sound Notifications Hey Serge, I see that in 3.6.7 you added sound notification support "beep after query finish". Is it possible we can expand on this capability? For example, rather than beeping after each query, could we have a notification sound after all queries within one SQL editor have run? Also, could there be a different notification sound based on the outcome, per query/per SQL script? See example notification sounds below: Success: http://www.soundsnap.com/node/92951 Failed: http://www.soundsnap.com/error_tone_chime_blastwavefx_16379 This would really help me and hopefully other people when running many queries that you want to monitor but don't always have to be at your desk for. Thanks, Ben Agreed, that would be a good feature. Hello All, +1 vote for this... Hello All, I also +1 vote for this. This small feature can contribute so much to efficient time utilization while waiting for long-running SQL. +1 for this! +1 I would also suggest the ability to trigger a macOS notification on query completion too. (Should that be a separate issue?) I sometimes need to kick off a long-running query and then I go do work in some other app while I'm waiting. For various reasons, sound notifications aren't always viable. +1 I would also suggest the ability to trigger a macOS notification on query completion too. (Should that be a separate issue?) I sometimes need to kick off a long-running query and then I go do work in some other app while I'm waiting. For various reasons, sound notifications aren't always viable. I don't even know how to set the notification :( What's the latest on this feature? Is it still planned? +1 don't forget us! Any updates on this? Has it been abandoned?
2025-04-01T06:38:21.138657
2020-08-17T17:26:06
680401425
{ "authors": [ "kseniiaguzeeva", "pdanie", "serge-rider", "uslss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5189", "repo": "dbeaver/dbeaver", "url": "https://github.com/dbeaver/dbeaver/issues/9567" }
gharchive/issue
DBeaver: When editing an INTEGER value in a grid, the cell is not opened if you start typing a minus or plus sign DBeaver - Version 7.1.4 CE DBeaver driver: MS SQL Server / Microsoft Driver (the new Microsoft Driver) Operating System: Windows 7 / Windows 8.1 / Windows 10 Database Server: Microsoft SQL Express 2014, 2016, 2017 When editing an INTEGER value in a grid, the cell is not opened if you start typing a minus "-" or a plus "+"; this works if I enter a number "0".."9" instead. The grid ought to set the INTEGER cell in edit mode not only when I press "0".."9" on the keyboard; pressing "+" or "-" ought to open the cell in edit mode too. Create a test table and fill it with test data: CREATE TABLE TestInteger (Id INTEGER, Value INTEGER); GO INSERT INTO TestInteger VALUES (1, 10), (2, 20); View the table data TestInteger in a grid. Click on the 1st row Value column (with the value of 10). Press number "5" on the keyboard. The cell is changed to edit mode. This is OK. Click on the 2nd row Value column (with the value of 20). Press the minus sign (the "-" character) or the plus sign (the "+" character) on the keyboard. Nothing happens. This is an ERROR. My guess is that the "-" and the "+" character should be added to the characters that change the grid cell into edit mode. Thanks for the bug report. Fixed. "+" still doesn't change the int cell to edit mode. Keypad button handling was added. Verified.
2025-04-01T06:38:21.145291
2021-03-04T18:23:45
822385727
{ "authors": [ "jlmaurer", "leiyangleon" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5190", "repo": "dbekaert/RAiDER", "url": "https://github.com/dbekaert/RAiDER/issues/273" }
gharchive/issue
[BUG] helper command fails due to losreader problem Describe the bug When running the 2nd raiderDelay helper command, it fails due to array dimension error in the losreader.py file. To Reproduce Steps to reproduce the behavior: Command used raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model GMAO --zref 15000 --heightlvs 0 100 200 -v Error Output Weather model GMAO is available from 2014-02-20 00:00:00-Present WARNING: Rounded given hour from 23 to 0 Traceback (most recent call last): File "/opt/anaconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in <module> __import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py') File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 665, in run_script self.require(requires)[0].run_script(script_name, ns) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1463, in run_script exec(code, namespace, namespace) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in <module> parseCMD() File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 196, in parseCMD _tropo_delay(new_args) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 208, in _tropo_delay (_, _) = tropo_delay(args_copy) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/delay.py", line 153, in tropo_delay los = getLookVectors(los, lats, lons, hgts, zref) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/losreader.py", line 338, in getLookVectors look_vecs = _getZenithLookVecs(lat, lon, hgt, zref=zref) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/losreader.py", line 320, in _getZenithLookVecs zenLookVecs = (np.array((e, n, u)).T * (zref - heights)[..., np.newaxis]) ValueError: operands could not be broadcast together with shapes (2,3) (3,1) @leiyangleon yes I saw this bug last night when I was testing as well. I think I have the fix in place, will push asap. 
@jlmaurer the command still fails but with a new error now: Traceback (most recent call last): File "/opt/anaconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in <module> __import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py') File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 665, in run_script self.require(requires)[0].run_script(script_name, ns) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1463, in run_script exec(code, namespace, namespace) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in <module> parseCMD() File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 154, in parseCMD args = checkArgs(args, p) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/checkArgs.py", line 42, in checkArgs lat, lon, llproj, bounds, flag, pnts_file = readLL(args.query_area) File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/llreader.py", line 33, in readLL fname = ' '.join(*args) TypeError: sequence item 0: expected str instance, float found @leiyangleon This is actually not the same bug, but I can't reproduce the error either way because I'm getting a KeyError in the _load_model_level function in gmao.py. Can you tell me if there is a quick fix for this? (RAiDER) jlmd9g@rt01jlmd9g tmp1 % raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model GMAO --zref 15000 --heightlvs 0 100 200 -v Weather model GMAO is available from 2014-02-20 00:00:00-Present WARNING: Rounded given hour from 23 to 0 ERROR: Unable to save weathermodel to file Traceback (most recent call last): File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 143, in _fetch writeWeatherVars2NETCDF4(self, lats, lons, h, q, p, t, outName=out) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/utilFcns.py", line 730, in writeWeatherVars2NETCDF4 nc_outfile = write2NETCDF4core(nc_outfile, dimension_dict, dataset_dict, tran, mapping_name='WGS84') File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/utilFcns.py", line 801, in write2NETCDF4core dataset_dict[data]['dataset'][np.isnan(dataset_dict[data]['dataset'])] = FillValue TypeError: only integer scalar arrays can be converted to a scalar index Traceback (most recent call last): File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in <module> __import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py') File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/pkg_resources/__init__.py", line 650, in run_script self.require(requires)[0].run_script(script_name, ns) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1446, in run_script exec(code, namespace, namespace) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in <module> parseCMD() File
"/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 196, in parseCMD _tropo_delay(new_args) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 208, in _tropo_delay (_, _) = tropo_delay(args_copy) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/delay.py", line 113, in tropo_delay weather_model_file = prepareWeatherModel( File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/processWM.py", line 91, in prepareWeatherModel f = weather_model.load( File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/weatherModel.py", line 201, in load self.load_weather(*args, **kwargs) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 156, in load_weather self._load_model_level(f) File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 168, in _load_model_level h = np.array(f.variables['H'][:]) KeyError: 'H' @jlmaurer not sure what the cause is as I have never seen that error before. Would suggest to try other models. I suspect this is not related to GMAO only as I haven't touched the GMAO code for months...
2025-04-01T06:38:21.170281
2023-05-17T05:51:36
1713170223
{ "authors": [ "abd-gang", "sdimantsd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5191", "repo": "dbolya/yolact", "url": "https://github.com/dbolya/yolact/issues/817" }
gharchive/issue
Training yolact on resnet18 and got some error Hi, I was training the yolact model on a resnet18 backbone and it was going all good, but then suddenly I got an error and the training got aborted: [ 2] 32930 || B: 3.996 | C: 4.955 | M: 4.408 | S: 1.036 | T: 14.395 || ETA: 1 day, 4:21:46 || timer: 0.127 [ 2] 32940 || B: 4.020 | C: 5.072 | M: 4.458 | S: 1.104 | T: 14.655 || ETA: 1 day, 4:23:08 || timer: 0.132 [ 2] 32950 || B: 4.063 | C: 5.246 | M: 4.552 | S: 1.199 | T: 15.060 || ETA: 1 day, 4:23:34 || timer: 0.151 [ 2] 32960 || B: 4.057 | C: 5.435 | M: 4.608 | S: 1.271 | T: 15.371 || ETA: 1 day, 4:23:31 || timer: 0.158 [ 2] 32970 || B: 4.071 | C: 5.574 | M: 4.675 | S: 1.306 | T: 15.626 || ETA: 1 day, 4:24:23 || timer: 0.166 [ 2] 32980 || B: 4.033 | C: 5.746 | M: 4.758 | S: 1.381 | T: 15.918 || ETA: 1 day, 4:26:38 || timer: 0.140 [ 2] 32990 || B: 4.031 | C: 5.817 | M: 4.741 | S: 1.411 | T: 15.999 || ETA: 1 day, 4:25:21 || timer: 0.139 [ 2] 33000 || B: 4.055 | C: 5.763 | M: 4.799 | S: 1.412 | T: 16.028 || ETA: 1 day, 4:25:51 || timer: 0.128 Traceback (most recent call last): File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed obj = _ForkingPickler.dumps(obj) File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage shared_cache[cache_key] = StorageWeakRef(storage) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in __setitem__ self.free_dead_references() File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references if storage_ref.expired(): File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired return torch.Storage._expired(self.cdata) # type: ignore[attr-defined] File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs) AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage' Traceback (most recent call last): File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed obj = _ForkingPickler.dumps(obj) File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage shared_cache[cache_key] = StorageWeakRef(storage) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in __setitem__ self.free_dead_references() File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references if storage_ref.expired(): File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired return torch.Storage._expired(self.cdata) # type: ignore[attr-defined] File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs) AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage' [ 2] 33010 || B: 4.178 | C: 5.768 | M: 4.934 | S: 1.417 | T: 16.296 || ETA: 1 day, 4:49:09 || timer: 0.126 Traceback (most recent call last): File
"/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed obj = _ForkingPickler.dumps(obj) File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage shared_cache[cache_key] = StorageWeakRef(storage) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem self.free_dead_references() File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references if storage_ref.expired(): File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired return torch.Storage._expired(self.cdata) # type: ignore[attr-defined] File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired return eval(cls.module)._UntypedStorage._expired(*args, **kwargs) AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage' Traceback (most recent call last): File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed obj = _ForkingPickler.dumps(obj) File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage shared_cache[cache_key] = StorageWeakRef(storage) File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem self.free_dead_references() File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references if storage_ref.expired(): File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired return torch.Storage._expired(self.cdata) # type: ignore[attr-defined] File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired return eval(cls.module)._UntypedStorage._expired(*args, **kwargs) AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage' ` Anyone has any idea why I got this after around 1-2 hours of training. Thanks This repo did not updated 3 years. It's better for you to train with yolov8-seg I am able to make it run but getting very slow training speed Solution - Don't use torch1.12. Either upgrade or degrade the version of torch with cuda.
2025-04-01T06:38:21.171761
2020-09-29T17:32:51
711315936
{ "authors": [ "dbones" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5192", "repo": "dbones-labs/auditable", "url": "https://github.com/dbones-labs/auditable/issues/20" }
gharchive/issue
OAuth/IClaimsPrincipal collector Grab the current IClaimsPrincipal for an ASP.NET application and extract the name and id, using the approved XML namespaces. https://github.com/dbones-labs/auditable/commit/d88e68c6e2a46e7919ef6cc36458f1c36f5f743e
2025-04-01T06:38:21.175408
2017-03-21T09:46:15
215679656
{ "authors": [ "jtrain" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5193", "repo": "dbrgn/drf-dynamic-fields", "url": "https://github.com/dbrgn/drf-dynamic-fields/issues/13" }
gharchive/issue
Add support for concrete nested serializers This is a proposal to add support for editing the fields of nested serializers not on a per-request basis, but at instantiation time. Here is an example. There is a serializer for users called UserSerializer; it has rather a lot of fields class UserSerializer(...): class Meta: fields = ('id', 'url', 'name', 'email', 'accounts', 'friends', 'most_recent_activity') The UserSerializer is nested inside another serializer class MessageSerializer(...): from_users = UserSerializer(many=True, read_only=True) class Meta: fields = ('id', 'url', 'from_users', 'to_users', 'account', 'created_at', 'modified_at') But we only want a few bits of info for each user in the MessageSerializer, just name, email, id and url. Just enough context to help the front end render without relying on a user lookup. This proposal is made to solve that situation: class MessageSerializer(...): from_users = UserSerializer(many=True, read_only=True, fields=('id', 'url', 'name', 'email')) I have already coded something like this up, and I can see how there is some overlap with this project. Enough to justify putting it in, I think. Unfortunately it doesn't directly support the purpose of this project, which is dynamic per-request fields. This is more like dynamic fields at runtime. I'm not interested in this feature anymore. Nested ModelSerializers pay a significant penalty when instantiated. This was my main use-case. Instead I create a normal Serializer for my nested serializers. The performance benefits are huge. In case anyone was wondering, it is the get_fields function in ModelSerializer that is particularly taxing.
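For reference, the usual pattern behind such a fields kwarg is close to the DynamicFieldsModelSerializer example in the DRF documentation; a minimal sketch of the proposal (the User and Message models are assumed, not defined here):

```python
from rest_framework import serializers

class DynamicFieldsMixin:
    """Accept a `fields` kwarg at instantiation and drop all other fields."""

    def __init__(self, *args, **kwargs):
        fields = kwargs.pop("fields", None)
        super().__init__(*args, **kwargs)
        if fields is not None:
            for name in set(self.fields) - set(fields):
                self.fields.pop(name)

class UserSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
    class Meta:
        model = User  # assumed model, not defined here
        fields = ("id", "url", "name", "email", "accounts", "friends")

class MessageSerializer(serializers.ModelSerializer):
    # With many=True, DRF forwards the extra kwarg to each child serializer,
    # where the mixin pops it before building the field set.
    from_users = UserSerializer(many=True, read_only=True,
                                fields=("id", "url", "name", "email"))

    class Meta:
        model = Message  # assumed model, not defined here
        fields = ("id", "url", "from_users", "account", "created_at")
```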
2025-04-01T06:38:21.185248
2024-02-22T23:57:54
2150141605
{ "authors": [ "QMalcolm", "codecov-commenter" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5194", "repo": "dbt-labs/dbt-common", "url": "https://github.com/dbt-labs/dbt-common/pull/85" }
gharchive/pull-request
Upgrade Jinja2 dependency version specification to address CVE-2024-22195 resolves CVE-2024-22195 Description CVE-2024-22195 identified an issue in Jinja2 versions <= 3.1.2. As such we've gone and changed our dependency requirement specification to be 3.1.3 or greater (but less than 4). Note: Previously we were using the ~= version specifier. However, due to some issues with ~= we've moved to using >= in combination with <. This gives us the same range that ~= gave us, but avoids a pip resolution issue when multiple packages in an environment use ~= for the same dependency. Checklist [x] I have read the contributing guide and understand what's expected of me [x] I have signed the CLA [x] I have run this code in development and it appears to resolve the stated issue [x] I have opened an issue to add/update docs, or docs changes are not required/relevant for this PR [x] I have run changie new to create a changelog entry Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 54.04%. Comparing base (c61d318) to head (7b3f164). Additional details and impacted files @@ Coverage Diff @@ ## main #85 +/- ## ======================================= Coverage 54.04% 54.04% ======================================= Files 49 49 Lines 2866 2866 ======================================= Hits 1549 1549 Misses 1317 1317 Flag Coverage Δ unit 54.04% <ø> (ø) Flags with carried forward coverage won't be shown. Click here to find out more. :umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
2025-04-01T06:38:21.193362
2022-06-30T12:08:54
1290041562
{ "authors": [ "abhijithp05", "jtcohen6", "nelsoncardenas" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5195", "repo": "dbt-labs/dbt-docs", "url": "https://github.com/dbt-labs/dbt-docs/issues/283" }
gharchive/issue
[CT-794] Export Lineage Graph as SVG or PDF Describe the feature Problem: if we want to generate an image for our Lineage Graph, there is the option to export the graph as PNG, but this image has no quality options. Example: To add more flexibility, dbt can allow generating not just the PNG, but an SVG or PDF file with vectorized images. Reference: Scalable Vector Graphics Describe alternatives you've considered You can try to add PNG configurations as ppi and dimensions, but a scalable image format is a more general solution. Who will this benefit? Users who want to use Lineage Graphs in slides or documentations. Users who want to edit Lineage Graphs using graphic design programs as Illustrator. Are you interested in contributing this feature? Maybe, but I'm not so familiar with the dbt repo. @nelsoncardenas Sorry for the delay getting back to you! The logic for the graph export is neatly self-contained in just a few lines of code. We use the cytoscape library's built-in .png() function to create a PNG here: https://github.com/dbt-labs/dbt-docs/blob/85dec858c5d213699fbc2cefa388ba1e80c94889/src/app/components/graph/graph-viz.js#L210-L214 It looks like the cytoscape library has built-in support for PNG, JPG, and JSON as export options (no SVG): https://js.cytoscape.org/#core/export But it also looks like someone has developed an extension to the cytoscape library, for SVG exports: https://github.com/kinimesi/cytoscape-svg Is that something you'd be interested in experimenting with? @nelsoncardenas @jtcohen6 Is this still open? @abhijithp05 sorry, I have been busy, and I don't think I'll have any time soon to work on this problem. @nelsoncardenas I have issue while running the project related to assets/css reference. @nelsoncardenas Can you tell me where to find the export button @nelsoncardenas Created a PR for the issue. Please review. and merge
2025-04-01T06:38:21.198537
2023-03-09T13:43:08
1617288479
{ "authors": [ "followingell", "mikealfare" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5196", "repo": "dbt-labs/dbt-spark", "url": "https://github.com/dbt-labs/dbt-spark/pull/676" }
gharchive/pull-request
ADAP-56: Python 3.11 Support resolves #524 Description Add support for Python 3.11 Checklist [x] I have read the contributing guide and understand what's expected of me [x] I have signed the CLA [ ] I have run this code in development and it appears to resolve the stated issue [x] This PR includes tests, or tests are not required/relevant for this PR [ ] I have opened an issue to add/update docs, or docs changes are not required/relevant for this PR [x] I have run changie new to create a changelog entry We have a better way of doing this. We have a better way of doing this. @mikealfare Can you link to the better way if ready please? We have a better way of doing this. @mikealfare Can you link to the better way if ready please? Fair point. Here's the PR we merged: https://github.com/dbt-labs/dbt-spark/pull/818
2025-04-01T06:38:21.202213
2022-12-16T10:30:55
1499963785
{ "authors": [ "boxysean", "jasnonaz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5197", "repo": "dbt-labs/docs.getdbt.com", "url": "https://github.com/dbt-labs/docs.getdbt.com/issues/2592" }
gharchive/issue
A large enterprise moving from Core to Cloud to enable more dbt developers Contact Details @boxysean I have read the dbt Developer Blog contribution guidelines. [X] I have read the dbt Developer Blog contribution guidelines. Which of these best describes you? [ ] I am a dbt Community member or partner contributing to the Developer Blog [X] I work for dbt Labs and am creating this issue for a community or marketing approved piece. What is the topic of your post? This post is a success story targeted towards dbt Core users at large enterprises who are looking to expand their dbt usage by adopting dbt Cloud. It will include key technical challenges and the solutions used to address them, so that others can follow. Link to an initial outline. https://www.notion.so/dbtlabs/848888d520f541a78c11e9e147a31581 Hey @boxysean - this is an awesome topic for a post. We have been wanting to do a guide on moving from Core to Cloud for a while and I still think we should do that, but starting with a single example and going deep makes a ton of sense. Let's plan on you, me and @dave-connors-3 spending some time digging into this in early Jan.
2025-04-01T06:38:21.214252
2024-11-08T00:24:53
2642510359
{ "authors": [ "chrusher", "dcollinsn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5198", "repo": "dbvideostriketeam/wubloader", "url": "https://github.com/dbvideostriketeam/wubloader/pull/461" }
gharchive/pull-request
thrimshim: use send_file to serve templates from database I don't claim to understand why this works, but this makes downloading the thumbnail templates (to preview on the thumbnail management page, or to select the "hole" in the advanced crop settings in the editor) way faster. Something to do with how flask is chunking the memoryview object to serve. Applying this changed the timings to download "fiddling.png" from 818ms waiting and 6770ms receiving, to 777ms waiting and 14ms receiving. Resolves #458 It will not spot all Python bugs but running a linter such as Pyflakes before pushing is good practice. ekim has a better fix (bytes(image) apparently is enough), closing
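For illustration, a minimal Flask sketch of the difference discussed in this thread: returning the raw memoryview versus converting it to bytes and serving it through send_file. The route, helper and placeholder data are assumptions made for the example, not the actual wubloader/thrimshim code.

```python
import io

from flask import Flask, send_file

app = Flask(__name__)

def fetch_template_png(name):
    # Placeholder for the real database lookup; many DB drivers hand the
    # stored image back as a memoryview, which is simulated here.
    return memoryview(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16)  # not a real image

@app.route("/template/<name>")
def template(name):
    image = fetch_template_png(name)
    # Returning the memoryview directly makes Flask iterate it in very small
    # chunks, which is what made the downloads slow. Materialising it with
    # bytes(image) and handing it to send_file streams it in one piece.
    return send_file(io.BytesIO(bytes(image)), mimetype="image/png")
```

Either bytes(image) as the response body or the send_file wrapper avoids the per-chunk overhead described in the thread; the sketch shows the send_file variant named in the PR title.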
2025-04-01T06:38:21.230630
2016-08-23T23:44:06
172833232
{ "authors": [ "coveralls", "shashi", "tlnagy" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5199", "repo": "dcjones/Gadfly.jl", "url": "https://github.com/dcjones/Gadfly.jl/pull/883" }
gharchive/pull-request
add stronger typing to poetic plotting, fixes #882 Related to #881 and #871 Coverage increased (+0.2%) to 65.67% when pulling 47336cb3de67c6fdff901d0d08834033d0ce758b on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master. Coverage increased (+0.02%) to 65.458% when pulling 47336cb3de67c6fdff901d0d08834033d0ce758b on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master. Wow this is a pretty action at a distance kind of problem. I guess this fix is warranted though. Indeed. I would've preferred a different workaround to this (something more like #874), but that was a lucky fix. However, this is a much more robust fix for this error and it looks like the tests pass. Hopefully no one is passing anything too funky to the a and b parameters. Also, based on your suggestion in #882, I added a test for the ambiguity method error. Coverage increased (+0.02%) to 65.458% when pulling 8804dde2af4781692cacbdd0333a291ec1762e81 on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master. Coverage increased (+0.02%) to 65.458% when pulling 8804dde2af4781692cacbdd0333a291ec1762e81 on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master. Hopefully no one is passing anything too funky to the a and b parameters. julia> brightness(x::RGB) = (x.r+x.g+x.b)/3 brightness (generic function with 1 method) julia> plot([brightness], colorant"black", colorant"white") ERROR: MethodError: `isless` has no method matching isless(::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}, ::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}) Closest candidates are: isless(::DataArrays.NAtype, ::Any) isless(::Any, ::DataArrays.NAtype) in plot at /home/shashi/.julia/v0.4/Gadfly/src/poetry.jl:44 haha. yeah, if someone's using funky a and b then they can open an issue.
2025-04-01T06:38:21.242039
2019-12-10T18:22:00
535903866
{ "authors": [ "CraigJZ", "dcollie2" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5200", "repo": "dcollie2/enrollchat", "url": "https://github.com/dcollie2/enrollchat/pull/113" }
gharchive/pull-request
Try rack cas This branch removes devise and devise_cas_authenticatable in favor of the rack-cas gem. The devise_cas_authenticatable gem relies on the ruby-cas client gem, which is no longer maintained and produces deprecation warnings in Rails 6. Aside from the initial setup for rack-cas, the branch introduces 4 new methods into the Application Controller: authenticate_user! - directs visitors to either the login or the unregistered page set_current_user - updates the login stats for a user and sets a @current_user, if available get_current_user - gets the user from a CAS session current_user - helper method for use in views In order to maintain the user statistics that Devise's trackable module provided, the update_login_stats method has been introduced on the User model. This method gets called in the set_current_user method. A sessions controller was also introduced to handle closing these out before sending the user to the CAS logout path. The authenticated root we used with Devise has been replicated using a condition in StaticPagesController#home The test suite has been updated to reflect these changes and should be passing. To handle visitors that may be authenticated at the central level by CAS but are not registered in the app, an unregistered path has been introduced. The replicates devise_cas_authenticatable's unregistered method but provides a custom layout. In manual testing, a registered user should see no difference in their experience with the app. An unregistered user should now get the custom unregistered page. Definitely appreciate the additional eyes on this!! :) I think this is good to go. I manually resolved conflicts in the gem lock file.
2025-04-01T06:38:21.247416
2017-03-29T21:12:44
218010885
{ "authors": [ "ajazam", "karlkfi", "mesosphere-ci" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5201", "repo": "dcos/dcos-docs", "url": "https://github.com/dcos/dcos-docs/pull/999" }
gharchive/pull-request
Doc changes for installing DCOS v1.9.0 on Google GCE Description Urgency [ ] Blocker [ ] High [x] Medium Requirements Test all commands and procedures. Build content locally and test for formatting/links. Add redirects to dcos-website/redirect-files. Change all affected versions (e.g. 1.7, 1.8, and 1.9). See the contribution guidelines. Can one of the admins verify this patch? Please can somebody review the changes? I think the dcos_installer_filename usage is just confusing here. If you change it to dcos_generate_config.sh everywhere it'll be easier to follow and copy/paste. Alright. this is fine, but needs to be rebased. Can one of the admins verify this patch? Hello @karlkfi, I'm not sure I've followed the correct process here Sorry for the slow turnaround! I thought I had already merged this...
2025-04-01T06:38:21.253805
2016-12-09T20:43:48
194692367
{ "authors": [ "MatApple", "leemunroe" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5202", "repo": "dcos/dcos-ui", "url": "https://github.com/dcos/dcos-ui/pull/1566" }
gharchive/pull-request
Prevent stop action for scheduler tasks Scheduler tasks don't like to be stopped so we hide the stop button if one is selected. @leemunroe - should we hide this button or disable it? If we disable it, I presume we'd also want a tooltip to explain why. https://mesosphere.atlassian.net/browse/DCOS-10693 @MatApple Good point. I think disabled with a tooltip is a better UX in this case. Stopping a scheduler task is not supported. Thanks @leemunroe - will make the appropriate changes. @jfurrow - I'm having trouble getting the tooltip to work with the "Stop" button. I've tried wrapping the button with the Tooltip and also having the Tooltip as the immediate child of the button. Either way, the button doesn't like it and the Tooltip doesn't work correctly - especially with the "disabled" attribute on the button. Any suggestions? Thanks @mesosphere-ci retest this please @MatApple Functionally looks good but why does the disabled state have a grey background and hover state? It makes it stand out more 😪 We should remove the background if disabled. And no hover state needed (i.e. no underline). @leemunroe - completely agree, the button is weird. Talked to @ashenden about this. Instead of adding custom CSS, we want to handle this by updating the button styles for the disabled state in CNVS.
2025-04-01T06:38:21.256256
2017-08-02T23:00:13
247547067
{ "authors": [ "MatApple", "weblancaster" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5203", "repo": "dcos/dcos-ui", "url": "https://github.com/dcos/dcos-ui/pull/2350" }
gharchive/pull-request
fix(TasksView): disable restart from tasksViews Disable restart in TasksView when is SDK, this is part of #2343 Closes DCOS-16564 Checklist [x] Did you add a JIRA issue in a commit message or as part of the branch name? [ ] Did you add new unit tests? [ ] Did you add new integration tests? [ ] If this is a regression, did you write a test to catch this in the future? Nice catch @bstavroulakis @weblancaster 👍
2025-04-01T06:38:21.272009
2019-05-06T14:08:50
440718440
{ "authors": [ "GeorgiSTodorov", "brandonc", "mesosphere-ci", "pierrebeitz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5204", "repo": "dcos/dcos-ui", "url": "https://github.com/dcos/dcos-ui/pull/3865" }
gharchive/pull-request
DCOS-52747 Disable package installation until "manager" requirement is satisfied https://jira.mesosphere.com/browse/DCOS-52747 ⚠️ depends on https://github.com/dcos/dcos-ui/pull/3864, you might want to get that merged first. some packages (currently only kubernetes-cluster) have a manager that must be installed before they can be used. currently the API restricts the installation of kubernetes-cluster if kubernetes is not installed yet. you have to go through the whole installation-process in the UI to afterwards be notified that stuff did not work. as a remedy a check has been implemented that renders an infobox with a nice warning as well as the reviewAndRun-button disabled - so now we can't waste a lot of work in the first place when we don't know about dependencies upfront. Review The functionality has been implemented in two steps (commit). The first commit (i mean the one that starts with refactor) refactors geReviewAndRunButtons -> renderReviewAndRunButton in the PackageDetailTab as i can not deal with lets somehow... it also does some dummy-changes that cause huge whitespace-changes. the second commit should contain all the logic related to dependency-checking. it should enable to get an overview of the approach taken. Testing Try to install kubernetes-cluster on a cluster that does not have a kubernetes service running to see the warning. install kubernetes and confirm that the message disappears (it should disappear in realtime as soon as the kubernetes-task has the state "TASK_RUNNING"). To make the warning show up programatically you can make hasUnresolvedDependency return true. @TattdCodeMonkey I did notice that we're checking that the manager package exists, but not the status or it. So if its deploying you will get an error when trying to run the package with the dependency. We're currently looking for a TASK_RUNNING, which was the best i could come up with. There seems to be no accurate service status for the kubernetes package yet, so this currently is the best we can do, right? would it be ok for you, if i opened an issue that reminds us of implementing a more sophisticated check once a package with a manager implements those statuses? i'll happily implement more/different checks though if you have something in mind! Finding it disquieting that the cosmos system test failed here. Could be relevant! 👀 I'll watch it Frustrating because they pass locally @pierrebeitz Cypress can't seem to complete mesos stream proxy requests so all the universe system tests fail with this change. So in order to preserve the universe system tests, I've added a commit that modifies the PackageDetailsTab to consult the DCOSStore (marathon groups endpoint) to check for an installed kubernetes package. Unfortunately, this is a worse solution because "kubernetes" could appear to be satisfied even if it has failed to start. This can result in the failure to install kubernetes-cluster. I don't think this is a huge deal because in the end we give the user an appropriate error message after kubernetes-cluster fails to install. Another alternative is to disable universe system tests but I opted to not do this. Another thought I had was that there's not enough information to figure out if "manager dependencies" are truly satisfied. Would it require that all or any tasks of that package be running? For kubernetes, this is not an issue because there's only one task. The button looks disabled, but I can still click it. Is this intentional? @brandonc , I approve of your current solution. 
The button looks disabled, but I can still click it. Is this intentional? no, not at all! should be fixed now! Why do we have "kubernetes" in quotation marks? apparently i worked with a wrong design doc in the beginning. removed them! I ran "kubernetes", tried to install "kubernetes-cluster", but got this error. Maybe we need to show the infobox in case "kubernetes" isn't configured correctly? that seems to be an error from the server. any idea on how to find out whether kubernetes is configured correctly upfront? Are there designs for the infobox in this case? I think gray is sort of easily ignored color, I think yellow or red are better. @mperrotti talked to design. they want it to be gray. here's what i consider the design doc: https://mesosphere.invisionapp.com/share/2TRTTM5SZQX#/screens/361260526_k8cluster-Details-Page-Disabled @pierrebeitz , thank you for addressing my feedback and for the quick response. I have one concern though. If our logic for checking if the dependency is somewhat incorrect, we prevent the user from installing a certain package. I think we should just display a warning. We have a server error in case the user insists in trying to install the package. Also, the tooltip message isn't in the message catalog and cannot be translated. I have one concern though. If our logic for checking if the dependency is somewhat incorrect, we prevent the user from installing a certain package. I think we should just display a warning. We have a server error in case the user insists in trying to install the package. i really admire that idea! we need to talk to design about this. Also, the tooltip message isn't in the message catalog and cannot be translated. @GeorgiSTodorov updated the fixup-commit @GeorgiSTodorov @TattdCodeMonkey the ServiceTree().getLabels method flattens out those running labels so we should be able to detect. The only thing we can't detect is whether or not the underling task requirement is satisfied. But I think that is a limitation of the API. @pierrebeitz I think this is ready to go I'm merging this because the "installed" requirement seems like the correct one to focus on since there are many scenarios we can't detect and this is meant as a helpful shortcut for someone who definitely does not have kubernetes installed before installing kubernetes-cluster :tada: This PR is included in version 2.96.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T06:38:21.283604
2019-02-04T20:09:53
406498341
{ "authors": [ "takirala" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5205", "repo": "dcos/dcos", "url": "https://github.com/dcos/dcos/pull/4422" }
gharchive/pull-request
[1.12][DCOS-45428] Bump cosmos to latest version High-level description What features does this change enable? What bugs does this change fix? Backport of #4409 Corresponding DC/OS tickets (obligatory) These DC/OS JIRA ticket(s) must be updated (ideally closed) in the moment this PR lands: DCOS-45428 Cosmos decodes URL parameter and it breaks resource links Related tickets (optional) Other tickets related to this change: DCOS_OSS- Foo the Bar so it stops Bazzing. Checklist for all PRs [x] Added a comprehensible changelog entry to CHANGES.md or explain why this is not a user-facing change: No user visible changes. [x] Included a test which will fail if code is reverted but test is not. If there is no test please explain here: No test included in dcos integration tests. However, package installation for dcos-ui would fail if code is reverted. [x] Read the DC/OS contributing guidelines [x] Followed relevant code rules Rules for Packages and Systemd Checklist for component/package updates: If you are changing components or packages in DC/OS (e.g. you are bumping the sha or ref of anything underneath packages), then in addition to the above please also include: [x] Change log from the last version integrated (this should be a link to commits for easy verification and review): View diff [x] Test Results: link to CI job test results for component [x] Code Coverage (if available): link to code coverage report PLEASE FILL IN THE TEMPLATE ABOVE / DO NOT REMOVE ANY SECTIONS ABOVE THIS LINE Instructions and review process What is the review process and when will my changes land? All PRs require 2 approvals using GitHub's pull request reviews. Reviewers should be: Developers who understand the code being modified. Developers responsible for code that interacts with or depends on the code being modified. It is best to proactively ask for 2 reviews by @mentioning the candidate reviewers in the PR comments area. The responsibility is on the developer submitting the PR to follow-up with reviewers and make sure a PR is reviewed in a timely manner. Once a PR has 2 ship-it's, no red reviews, and all tests are green it will be included in the next train. @mesosphere-mergebot bump-ee @mesosphere-mergebot label Ready For Review
2025-04-01T06:38:21.288275
2019-04-15T13:12:00
433274398
{ "authors": [ "mesosphere-teamcity" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5206", "repo": "dcos/dcos", "url": "https://github.com/dcos/dcos/pull/5118" }
gharchive/pull-request
[master] Bump Mesos to nightly 1.8.x 85462fc High-level description This is a routine bump to the latest Mesos and mesos-modules. Related JIRA Issues Checklist for all PRs [ ] Included a test which will fail if code is reverted but test is not. If there is no test please explain here: [x] Read the DC/OS contributing guidelines [x] Followed relevant code rules Rules for Packages and Systemd Checklist for component/package updates: If you are changing components or packages in DC/OS (e.g. you are bumping the sha or ref of anything underneath packages), then in addition to the above please also include: [x] Changelog: https://github.com/apache/mesos/compare/0c503b01d3a9428ec9db35d09da5e237d737c570...85462fc183a60ae18d85729bccb1fffb59aa572c [ ] Test Results: [link to CI job test results for component] [ ] Code Coverage (if available): [link to code coverage report] @mesosphere-mergebot bump-ee
2025-04-01T06:38:21.291697
2020-07-29T15:19:09
667928335
{ "authors": [ "jkoelker" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5207", "repo": "dcos/dcos", "url": "https://github.com/dcos/dcos/pull/7490" }
gharchive/pull-request
exhibitor: bump High-level description What features does this change enable? What bugs does this change fix? Corresponding DC/OS tickets (required) D2IQ-ID JIRA title / short description. Related tickets (optional) D2IQ-ID JIRA title / short description. @mesosphere-mergebot test teamcity/dcos/build/dcos teamcity/dcos/build/tox @mesosphere-mergebot test all @mesosphere-mergebot test all @mesosphere-mergebot test all
2025-04-01T06:38:21.293536
2023-05-18T06:10:47
1715054367
{ "authors": [ "dcramer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5208", "repo": "dcramer/peated", "url": "https://github.com/dcramer/peated/issues/20" }
gharchive/issue
Improve suggested tags We want to add improved weighting that gives better suggestions. In order of bias where tags are seen: Bottle Brand/Distillery Region Country Added randomization to the list so at least selection bias goes away. Currently this is only weighted by Bottle, and likely requires tags to be materialized/indexed to do more.
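A rough sketch of the weighting idea described above, with randomization to break ties. The weight values, field names and data shapes are assumptions for illustration, not the actual Peated implementation.

```python
import random
from collections import Counter

# Assumed relative weights for where a tag has been seen before.
WEIGHTS = {"bottle": 4, "brand": 3, "region": 2, "country": 1}

def suggest_tags(tag_sightings, limit=5):
    """tag_sightings: iterable of (tag, source) pairs, e.g. ("peaty", "bottle")."""
    scores = Counter()
    for tag, source in tag_sightings:
        scores[tag] += WEIGHTS.get(source, 0)
    # A little jitter so equally scored tags are not always shown in the
    # same order (the selection-bias fix mentioned above).
    ranked = sorted(scores, key=lambda tag: scores[tag] + random.random(), reverse=True)
    return ranked[:limit]

print(suggest_tags([("peaty", "bottle"), ("vanilla", "country"), ("peaty", "region")]))
```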
2025-04-01T06:38:21.302332
2017-11-17T07:28:14
274775694
{ "authors": [ "RogueElement", "chappjc", "gozart1" ], "license": "isc", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5209", "repo": "dcrdata/dcrdata", "url": "https://github.com/dcrdata/dcrdata/pull/276" }
gharchive/pull-request
[WIP] Added decode tx page and view raw link For issues #190 and #136 . The rpc api has no "getrawblock" function so that can't be added, but the view raw link works as intended now, showing the txid and hex while the view decoded link shows the json output. The decode tx pages works as intended as well but doesn't look so good for now. Just pushed up a bit more styling for this https://github.com/dcrdata/dcrdata/tree/raw-tx @gozart1 thanks looks pretty good right now. @chappjc this is ready for merge. Might be nice to pass the rejection reason out to the UI. In the server log I see: Rejected transaction 005969df2bc92ac33b97255521762ec4d69a27d6188e958c3246f01856d78259: transaction already exists But the UI currently just shows Error: Could not send hex Try decoding it first to see if it's valid Added that We need to be cautious with accepting and processing external data. Some thoughts: broadcast needs to be rate limited (per IP and server-wide) websocket input needs to somehow stop receiving data after X bytes (otherwise an attacker can send an infinite data stream, eating bandwidth and eventually filling up the server's memory) the html form should screen the data before sending, but an attacker would just ignore the form and send directly to the websocket the decode handler needs to have a final check on the size of the data before decoding it Rate limiting may not be needed as the node won't broadcast junk or duplicates, and txns cost $$. Took a shot at abusing this. My initial attempt was passing 460,000 valid hex characters to decode transaction. It completely crippled dcrdata. Local network request, just major CPU usage, forever. I assume it's at least as costly to do this to send transaction. I started with some valid transaction, like this one: 01000000020000000000000000000000000000000000000000000000000000000000000000ffffffff00ffffffffc0c343c6fb9ed34f26dd6a2bc7e233f22f82b4da4f363a7c5d437f7a555d30c60000000001ffffffff0400000000000000000000266a2463769d4b15965779ec9e542f888848426ec49dd80c0d175f3500000000000000b8df020000000000000000000000086a06010005000000a6e567000000000000001abb76a914a92c9ac541dd5ac40c630e6659ecce25e9fdd70e88aca53471d90100000000001abb76a91454ee8f3f4ceb3dfbdf38b3f55ec34c318c55b65c88ac0000000000000000028ffa46080000000000000000ffffffff020000bd1f92d10100000082d702000600000091483045022100f317885cfed85eb7355ea25a6fee0125b5fcd81afdcad91071a8e8ecc991f3ee02201683600dedd352d88a2ae0e11a5550ddfde68f87d53f9e072963f984f3bb8b1d014751210306559376b41006b6e16341b003c8d957cd3974ec665cea4e932826d9e7f1c7c72102f536ba2b34501eb3d3522c4397272d99803223087034a04f687ac6424c5aa8b652ae Then I pasted that in the text box, holding down CTRL+V for a few seconds, then select all, and paste for another couple of seconds. 😆 We also need to limit the length of the hex logged: Received decodetx signal for hex: . But what I was seeing is the process chugging away with multiple CPU cores after printing the offending hex. It seems easy to clog up the websocket. 
2017-11-21 17:25:27.351 [ERR] SQLT: SendRawTransaction failed: -1: Rejected transaction 24a719667306c2284af9cb564be3178417bd4906da09746d3f73ed34df94b2be: transaction already exists 2017-11-21 17:25:27.351 [DBG] EXPR: Failed to encode WebSocketMessage decodedtx: write tcp <IP_ADDRESS>:7777-><IP_ADDRESS>:44582: i/o timeout 2017-11-21 17:25:27.921 [DBG] EXPR: Failed to encode WebSocketMessage 3: write tcp <IP_ADDRESS>:7777-><IP_ADDRESS>:44582: i/o timeout 2017/11/21 17:25:27 "GET http://<IP_ADDRESS>:7777/explorer/decodetx/ws HTTP/1.1" from <IP_ADDRESS>:44582 - 000 0B in 1m48.000770061s I'm just clicking both buttons like a maniac. Going in the right direction, but it still chokes. Try this: https://pastebin.com/raw/FRAfMiqb @RogueElement nice job with this. I'm just being picky on this because of the attack surface it exposes.
2025-04-01T06:38:21.360663
2020-09-28T13:17:34
710257252
{ "authors": [ "awesomebytes", "dddomodossola" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5217", "repo": "dddomodossola/remi", "url": "https://github.com/dddomodossola/remi/issues/407" }
gharchive/issue
https://remiguieditor.daviderosa.repl.co/ no module named remi Hello, just letting you know that https://remiguieditor.daviderosa.repl.co/ is not loading :) @awesomebytes thank you a lot for the info, I will now fix @awesomebytes Fixed thank you ;-) It seems that because of recent repl.it updates it is not possible to run the editor in a stable way. I removed that link from the readme. In the future I will look for a different solution.
2025-04-01T06:38:21.365608
2015-05-19T14:11:36
78104789
{ "authors": [ "Baachi", "ddeboer", "sagikazarmark" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5218", "repo": "ddeboer/data-import", "url": "https://github.com/ddeboer/data-import/issues/208" }
gharchive/issue
Item typehint is strictly array in Writers So, if the item can be an array, object or anything, then shouldn't we remove the array typehint from the Writer interface? Because currently it only accepts arrays. In other words: do we have to convert the item to an array when it arrives at the Writer? It's not really easy to remove this, because some writers accept only arrays. Some months ago I removed the array type hints from the converters/items. For example, the ExcelWriter uses phpexcel to create the spreadsheet. And I have no idea how we can extract the properties. In this case, shouldn't we make the assumption that by the end of the process (aka the workflow gets to the writer part) the item should be converted into an array? I think we should clear this up (in the documentation) so that everyone knows why this is. See this library: https://github.com/plumphp/plum/ It is inspired by data-import and it says it accepts any kind of data. I am leaving this open as a reminder for documentation. See this library: https://github.com/plumphp/plum/ Interesting, I didn’t know about plumphp. I’ll contact Florian and see whether it makes sense to combine our efforts. In this case, shouldn't we make the assumption that by the end of the process (aka the workflow gets to the writer part) the item should be converted into an array? I think we should clear this up (in the documentation) so that everyone knows why this is. Agreed. I decided to have arrays as the data element format: most readers can output it, most writers can handle it (and some, such as PHPExcel, require it).
2025-04-01T06:38:21.372279
2016-11-23T14:56:10
191293447
{ "authors": [ "OliverHopt", "borsna" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5219", "repo": "ddialliance/lion", "url": "https://github.com/ddialliance/lion/issues/61" }
gharchive/issue
Cardinalities missing in XMI export There are a few cardinalities in the content that are not included in the output as XMI. One example is the target cardinality 2..2 of the property maps in AgentSimilarityPair. Some more examples on: https://ddi-alliance.atlassian.net/browse/DMT-108 As I wrote in DMT-108, is this just a matter of adding 2..2 in the list of cardinalities? Deployed https://github.com/ddialliance/lion/commit/6b7b1c7e1ec9f5bb2124430b2064edb0d919b6a1 to production. @OliverHopt is this resolved? Solved
2025-04-01T06:38:21.379908
2016-01-06T18:16:17
125231707
{ "authors": [ "TiagoCardoso1983", "ddollar" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5220", "repo": "ddollar/foreman", "url": "https://github.com/ddollar/foreman/issues/596" }
gharchive/issue
Foreman in production: how to preload rails app? I've been toying with using a kind of rails app preloader to share memory between process types. In a rails environment, one uses rails to preload the code. In theory, a foreman engine could access the environment.rb file and use it to preload it, before forking the different web/worker/etc processes. This would be therefore not a lot different than what the unicorn and puma (in cluster mode) do. The caveat is that it seems that foreman forks shell commands (which will later load different instances of the VM). Therefore, this: class Foreman::Engine::RailsCLI < Foreman::Engine::CLI def startup super require File.expand_path('../../config/environment', __FILE__) end end is not enough. Has anyone ever played around with such a thing? This is a rails-specific solution, so I don't hope to see it in foreman (or should I? Ideally every framework could create its own foreman engine and script). I can envision that instead of Process.spawn, one would use fork. I'd prefer not to add things like this to foreman proper. If you want to combine processes together do it one level before foreman as a single process. Well, I also agree that this isn't a foreman concern. Nevertheless, I was eyeing a workflow that could be "lobbied" into frameworks like rails, i.e. a kind of foreman engine that each framework could implement and reuse. I post here my POC for Rails: # bin/foreman-rails #!/usr/bin/env ruby require 'foreman/cli' module Foreman class Engine::RailsCLI < Engine::CLI def startup super require File.expand_path('../../config/environment', __FILE__) end def register(name, command, options={}) options[:env] ||= env options[:cwd] ||= File.dirname(command.split(" ").first) process = RailsProcess.new(command, options) @names[process] = name @processes << process end end class RailsCLI < CLI no_tasks do def engine @engine ||= begin engine_class = Engine::RailsCLI engine = engine_class.new(options) engine end end end end class RailsProcess < Process def run(options={}) env = @options[:env].merge(options[:env] || {}) output = options[:output] || $stdout runner = "#{Foreman.runner}".shellescape final_command = expanded_command(env) Dir.chdir(cwd) do fork do env.each do |k, v| ENV[k] ||= v end log_args = output.is_a?(IO) ? [output] : [output, 'w'] $stdout.reopen(*log_args) $stderr.reopen(*log_args) argv = final_command.split(/\s+/).reject { |s| %w(bundle exec).include?(s) } executable = argv.shift ARGV.clear argv.each do |v| ARGV << v end load Bundler.which(executable) end end end end end Foreman::RailsCLI.start This works... mostly. Basically by loading rails environment.rb, I'm eagerloading/kickstarting the initialization process, which is interpreted as "web process is starting" by some dependencies, which take decisions based on whether the current process is the web process or not (I have problems with sidekiq because of that). This is mainly because Rails itself doesn't provide proper hooks to signalize this IMO. The memory savings however, would be huge, if such a process could be implemented. I've created this discussion, but sadly not a lot of follow up, maybe I'll reopen it as an issue.
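The memory-sharing idea in this thread (load the application once in the parent, then fork one child per Procfile entry so the loaded code is shared copy-on-write) is language-agnostic. Below is a rough Python illustration of that flow, with hypothetical names; it is not Foreman's actual engine.

```python
import os
import time

def preload_app():
    # Stand-in for requiring config/environment.rb: do the expensive imports
    # and initialisation once, in the parent process.
    print(f"[{os.getpid()}] preloading application code")

def run_entry(name):
    print(f"[{os.getpid()}] running {name}")
    time.sleep(1)

def start(procfile_entries):
    preload_app()
    children = []
    for name in procfile_entries:
        pid = os.fork()  # POSIX-only; children share the preloaded code copy-on-write
        if pid == 0:
            run_entry(name)
            os._exit(0)
        children.append(pid)
    for pid in children:
        os.waitpid(pid, 0)

start(["web", "worker"])
```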
2025-04-01T06:38:21.384114
2023-12-07T19:14:25
2031371881
{ "authors": [ "ddworken", "mustafa0x" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5221", "repo": "ddworken/hishtory", "url": "https://github.com/ddworken/hishtory/issues/142" }
gharchive/issue
set -o nounset causes unbound variable warnings With source ~/.hishtory/config.sh in .bashrc. $> set -u -bash: HISHTORY_AT_PROMPT: unbound variable -bash: HISHTORY_FIRST_PROMPT: unbound variable Reason: https://github.com/ddworken/hishtory/blob/cc123854a02374a7e4ee7fc87a974b19566fa142/client/lib/config.sh#L10 Solutions: https://stackoverflow.com/questions/7832080/test-if-a-variable-is-set-in-bash-when-using-set-o-nounset This should be fixed! If you run hishtory update you'll get the latest version with the fix. If you're still experiencing this issue (or run into anything else!) please reopen this so I can take another look. Seems to be working; thanks for the quick fix @ddworken!
2025-04-01T06:38:21.420211
2021-01-14T11:52:08
785927113
{ "authors": [ "maael", "ofhouse" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5222", "repo": "dealmore/terraform-aws-next-js", "url": "https://github.com/dealmore/terraform-aws-next-js/pull/34" }
gharchive/pull-request
feat: 🎸 Support passing through tags Adding more flexibility by adding the ability to add tags to all used resources that support tag metadata - useful for billing/project grouping etc. Released in v0.5.2. Released in v0.5.2.
2025-04-01T06:38:21.430038
2017-03-17T09:31:28
214950448
{ "authors": [ "SwedishBotMafia", "deanmalmgren", "frostchick", "mstanojevic118" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5223", "repo": "deanmalmgren/textract", "url": "https://github.com/deanmalmgren/textract/issues/135" }
gharchive/issue
Error decode() argument 1 must be string, not None when running textract.process I am having trouble with convertToText on a UTF-8 file... text = textract.process("1.pdf", method='pdfminer') Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 150, in maybeDeferred result = f(*args, **kw) File "/usr/local/lib/python2.7/dist-packages/pydispatch/robustapply.py", line 55, in robustApply return receiver(*arguments, **named) File "td-net.py", line 178, in close textData = convertToText(path,self.date) # convert pdf to text after download File "td-net.py", line 239, in convertToText text = textract.process("data/pdf/{1}/{0}.pdf".format(path,sDate), method='pdfminer') File "/usr/local/lib/python2.7/dist-packages/textract/parsers/init.py", line 58, in process return parser.process(filename, encoding, **kwargs) File "/usr/local/lib/python2.7/dist-packages/textract/parsers/utils.py", line 46, in process unicode_string = self.decode(byte_string) File "/usr/local/lib/python2.7/dist-packages/textract/parsers/utils.py", line 65, in decode return text.decode(result['encoding']) TypeError: decode() argument 1 must be string, not None That's odd. Looks like chardet could not determine an encoding for your file 1.pdf. Can you try running chardet 1.pdf to see what the output looks like? I wonder if this is related to #133 somehow... This is exactly the problem I was having. I just pinned chardet to 2.1.1 to address #107. I think this will likely address your issue as well. Try pulling from the latest master on github to see if that fixes it. I'm going to close this, but feel free to reopen if it remains a problem. Hello, I have this issue. I went back to 2.1.1 and now I got another error: ModuleNotFoundError: No module named 'universaldetector' which happens because chardet 2.1.1 is too old. What should I do?
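The failing call is text.decode(result['encoding']) after chardet returns no encoding. A defensive version of that decode step might look like the sketch below; the fallback encoding and error handling are illustrative assumptions, not textract's actual fix (which was to pin chardet to 2.1.1).

```python
import chardet

def safe_decode(byte_string, fallback="utf-8"):
    result = chardet.detect(byte_string)
    # chardet.detect can return {'encoding': None, ...} when it is unsure,
    # which is what made text.decode(result['encoding']) raise TypeError.
    encoding = result.get("encoding") or fallback
    return byte_string.decode(encoding, errors="replace")

print(safe_decode("café".encode("utf-8")))
```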
2025-04-01T06:38:21.433217
2017-09-03T22:33:39
254905209
{ "authors": [ "InnovativeInventor", "deathbybandaid" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5224", "repo": "deathbybandaid/piholeparser", "url": "https://github.com/deathbybandaid/piholeparser/issues/68" }
gharchive/issue
Cluttered README I think this project is awesome. It combines so many blacklists to create a comprehensive ad blocking list. Unfortunately, when I was reading the README, I found it to be rather confusing and hard to navigate. Maybe it could be split up into separate files with the stats placed somewhere else? The script is still in an "active development state" I have some major additions and changes on the horizon, and for now, the stats help me visualize the information at-a-glance. I like the stats being there, and that is probably going to stay. However, I'll take any suggestions regarding the rest of the main README.md
2025-04-01T06:38:21.439038
2018-11-22T18:42:04
383633503
{ "authors": [ "Hacker-spe", "TheMercyless1", "analjesus" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5225", "repo": "deathsec/instagram-py", "url": "https://github.com/deathsec/instagram-py/issues/16" }
gharchive/issue
fatal error:: configuration file not found at /root/instapy-config.json After lots of trial and error, I have finally made it to the very last step...but now I can't get past this error message: user123@computer123:/usr/share/wordlists$ sudo instagram-py -u username123 -pl rockyou.txt.gz Instagram-Py 2.0.7 , Slick Instagram brute force command line tool. Copyright (C) 2018 The Future Shell , Antony Jr. [+] Started @ 2018-11-22 07:45:28.707465 fatal error:: configuration file not found at /root/instapy-config.json python instagram-py -dc AnalJesus just move the instagram-config.json file to /root .. cp instagram-config.json /root/
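The error boils down to where the configuration file is looked up when the tool runs under sudo: the home directory resolves to /root, so a config generated in a normal user's home (or never generated at all) is not found there. Here is a small sketch of that kind of lookup logic; the paths and names are assumptions for illustration, not the actual instagram-py code.

```python
import os

CONFIG_NAME = "instapy-config.json"

def find_config():
    # Under sudo the effective home directory is typically /root, so
    # os.path.expanduser("~") no longer points at the invoking user's home.
    candidates = [
        os.path.join(os.path.expanduser("~"), CONFIG_NAME),
        os.path.join(os.getcwd(), CONFIG_NAME),
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"configuration file not found at {candidates[0]}")

try:
    print(find_config())
except FileNotFoundError as error:
    print("fatal error::", error)
```

Copying the config into /root (cp instagram-config.json /root/) or regenerating it with the -dc option, as suggested above, both put the file where the root lookup expects it.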
2025-04-01T06:38:21.460805
2020-11-06T12:55:59
737737421
{ "authors": [ "ani-sha", "jpechane" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5226", "repo": "debezium/debezium", "url": "https://github.com/debezium/debezium/pull/1935" }
gharchive/pull-request
DBZ-1720 Move CI to GH Actions @Naros need to configure a secrets for WEBHOOK_URL for Gitter. @ani-sha Hi, this looks very nice! I have just few comments for improvements make sure that the order in paths field is same for all workflows, first the changed module and then dependencies. Ideally the dependencies will be graphically separated for example via comment. IIUC the deps block should be the same for all workflows would it be possible to use matrix for jobs that executes more than once per connector? would it be possible to add one more workflow that will trigger a new docs build when documentation is changed? Great work! Thanks, @jpechane for the suggestions. In which case would we be needing to run the connectors more than once? I believe depending on the no of keys in a matrix, that many no of times a job will be executed. Surely would be adding a workflow for docs. Also had a discussion with @Naros; thought of implementing this after fixing GH actions for website. @ani-sha See for example MongoDB connector - it is keyed by version.mongo.server Maven property @ani-sha See for example MongoDB connector - it is keyed by version.mongo.server Maven property yep, I can create a matrix depending on the versions for mongodb. Anything for postgres? @ani-sha For postgres there is version.postgres.server. Unfortunately profile names are changed as well so it needs to be somehow woven together. @jpechane I tried using a matrix for mongodb locally, but it fails the build with this error. [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT: [INFO] [INFO] Debezium Checkstyle Rules .......................... FAILURE [ 2.882 s] [INFO] Debezium IDE Formatting Rules ...................... SKIPPED [INFO] Debezium Revapi Rules .............................. SKIPPED [INFO] Debezium Parent POM ................................ SKIPPED [INFO] Debezium API ....................................... SKIPPED [INFO] Debezium Core ...................................... SKIPPED [INFO] Debezium Assembly Descriptors ...................... SKIPPED [INFO] Debezium Embedded .................................. SKIPPED [INFO] Debezium Connector for MongoDB ..................... SKIPPED [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 58.448 s [INFO] Finished at: 2020-11-09T09:46:02Z [INFO] ------------------------------------------------------------------------ Error: Unknown lifecycle phase "4.2". You must specify a valid lifecycle phase or a goal in the format <plugin-prefix>:<goal> or <plugin-group-id>:<plugin-artifact-id>[:<plugin-version>]:<goal>. Available lifecycle phases are: validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy, pre-clean, clean, post-clean, pre-site, site, post-site, site-deploy. -> [Help 1] Error: Error: To see the full stack trace of the errors, re-run Maven with the -e switch. Error: Re-run Maven using the -X switch to enable full debug logging. 
Error: Error: For more information about the errors and possible solutions, please read the following articles: Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/LifecyclePhaseNotFoundException Error: Process completed with exit code 1. @ani-sha Could you please show the maven command you use? @ani-sha Could you please show the maven command you use? mvn clean install -B -pl debezium-connector-mongodb -am -Passembly -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip -Dversion.mongo.server= ${{ matrix.version-mongo-server }} -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn @ani-sha Try removing the space between = and ${ in -Dversion.mongo.server= ${{ matrix.version-mongo-server }} @ani-sha Try removing the space between = and ${ in -Dversion.mongo.server= ${{ matrix.version-mongo-server }} @jpechane Overcoming the first error throws a new error. [INFO] Tests run: 113, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ debezium-connector-mongodb --- [INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT.jar [INFO] [INFO] --- maven-source-plugin:3.1.0:test-jar-no-fork (attach-test-sources) @ debezium-connector-mongodb --- [INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-test-sources.jar [INFO] [INFO] --- maven-jar-plugin:3.0.2:test-jar (test-jar) @ debezium-connector-mongodb --- [INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-tests.jar [INFO] [INFO] --- maven-assembly-plugin:3.1.1:single (default) @ debezium-connector-mongodb --- [INFO] Building tar: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-plugin.tar.gz [INFO] Building zip: /home/runner/work/debezium/debezium/debezium-connector-mongodb/target/debezium-connector-mongodb-1.4.0-SNAPSHOT-plugin.zip [INFO] [INFO] --- docker-maven-plugin:0.31.0:build (start) @ debezium-connector-mongodb --- [INFO] [INFO] --- docker-maven-plugin:0.31.0:start (start) @ debezium-connector-mongodb --- [INFO] DOCKER> Pulling from library/mongo [INFO] DOCKER> Digest: sha256:efc408845bc917d0b7fd97a8590e9c8d3c314f58cee651bd3030c9cf2ce9032d [INFO] DOCKER> Status: Downloaded newer image for mongo:4 [INFO] DOCKER> Pulled mongo:4 in 8 seconds Error: DOCKER> Error occurred during container startup, shutting down... Error: DOCKER> I/O Error [Unable to pull 'debezium/mongo-initiator:4' : {"message":"manifest for debezium/mongo-initiator:4 not found: manifest unknown: manifest unknown"} (Not Found: 404)] [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT: [INFO] [INFO] Debezium Checkstyle Rules .......................... SUCCESS [01:31 min] [INFO] Debezium IDE Formatting Rules ...................... SUCCESS [ 0.778 s] [INFO] Debezium Revapi Rules .............................. SUCCESS [ 0.084 s] [INFO] Debezium Parent POM ................................ SUCCESS [01:10 min] [INFO] Debezium API ....................................... SUCCESS [ 31.257 s] [INFO] Debezium Core ...................................... SUCCESS [01:06 min] [INFO] Debezium Assembly Descriptors ...................... 
SUCCESS [ 0.057 s] [INFO] Debezium Embedded .................................. SUCCESS [ 30.882 s] [INFO] Debezium Connector for MongoDB ..................... FAILURE [ 37.975 s] [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 06:31 min [INFO] Finished at: 2020-11-09T10:13:45Z [INFO] ------------------------------------------------------------------------ Error: Failed to execute goal io.fabric8:docker-maven-plugin:0.31.0:start (start) on project debezium-connector-mongodb: I/O Error: Unable to pull 'debezium/mongo-initiator:4' : {"message":"manifest for debezium/mongo-initiator:4 not found: manifest unknown: manifest unknown"} (Not Found: 404) -> [Help 1] Error: Error: To see the full stack trace of the errors, re-run Maven with the -e switch. Error: Re-run Maven using the -X switch to enable full debug logging. Error: Error: For more information about the errors and possible solutions, please read the following articles: Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException Error: Error: After correcting the problems, you can resume the build with the command Error: mvn <args> -rf :debezium-connector-mongodb Error: Process completed with exit code 1. @ani-sha It seems that 4.0 was converted to 4 only when passed to the mvn command @ani-sha It seems that 4.0 was converted to 4 only when passed to the mvn command Yes. But 4.0 was provided in the matrix it took 4. @jpechane Fixed the matrix for mongodb. @ani-sha Nice! Do you think you'll manage to do it for postgres as well? It might be a bit more complicated as two things are updated @ani-sha Nice! Do you think you'll manage to do it for postgres as well? It might be a bit more complicated as two things are updated Well the first issue I am facing right now is with the dependencies in the postgres-connector. So I am not able to run or test anything locally for postgres. @ani-sha Could you please share the full error message in the log? This should work the same way as MongoDB does. @ani-sha Could you please share the full error message in the log? This should work the same way as MongoDB does. Error: Errors: Error: PostgresConnectorIT.shouldResumeStreamingFromSlotPositionForCustomSnapshot:1524->waitForSnapshotToBeCompleted:2378->AbstractConnectorTest.waitForSnapshotToBeCompleted:1035 » ConditionTimeout [INFO] Error: Tests run: 219, Failures: 0, Errors: 1, Skipped: 3 [INFO] [INFO] [INFO] --- docker-maven-plugin:0.31.0:stop (stop) @ debezium-connector-postgres --- 07:14:53.469 postgresLOG: received smart shutdown request 07:14:53.469 postgresLOG: autovacuum launcher shutting down 07:14:53.469 postgresFATAL: terminating autovacuum process due to administrator command 07:14:53.759 postgresLOG: shutting down 07:14:53.829 postgresLOG: database system is shut down [INFO] DOCKER> [debezium/postgres-server-test-database:latest]: Stop and removed container 0f9e4630a416 after 0 ms [INFO] [INFO] --- maven-source-plugin:3.1.0:jar-no-fork (attach-sources) @ debezium-connector-postgres --- [INFO] Building jar: /home/runner/work/debezium/debezium/debezium-connector-postgres/target/debezium-connector-postgres-1.4.0-SNAPSHOT-sources.jar [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:checkstyle (check-style) @ debezium-connector-postgres --- [INFO] Starting audit... Audit done. 
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M3:verify (verify) @ debezium-connector-postgres --- [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for Debezium Parent POM 1.4.0-SNAPSHOT: [INFO] [INFO] Debezium Checkstyle Rules .......................... SUCCESS [ 2.313 s] [INFO] Debezium IDE Formatting Rules ...................... SUCCESS [ 0.282 s] [INFO] Debezium Revapi Rules .............................. SUCCESS [ 0.069 s] [INFO] Debezium Parent POM ................................ SUCCESS [ 1.406 s] [INFO] Debezium API ....................................... SUCCESS [ 5.895 s] [INFO] Debezium Core ...................................... SUCCESS [01:22 min] [INFO] Debezium Assembly Descriptors ...................... SUCCESS [ 0.107 s] [INFO] Debezium Embedded .................................. SUCCESS [ 14.299 s] [INFO] Debezium Connector for PostgreSQL .................. FAILURE [12:15 min] [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 14:02 min [INFO] Finished at: 2020-11-11T07:14:55Z [INFO] ------------------------------------------------------------------------ Error: Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:3.0.0-M3:verify (verify) on project debezium-connector-postgres: There are test failures. Error: Error: Please refer to /home/runner/work/debezium/debezium/debezium-connector-postgres/target/failsafe-reports for the individual test results. Error: Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream. @ani-sha This looks like an intermittent failure issue so you can ignore it for now. But if it is reliable reproducibale on your machine then we are going to borrow it to find the root cause :-) @ani-sha This looks like an intermittent failure issue so you can ignore it for now. But if it is reliable reproducibale on your machine then we are going to borrow it to find the root cause :-) ok sure; for postrges we would be using two matrixes one being the version.postgres.server with [9.6, 10] and other being plugin matrix would contain? It seems to me that -Dversion.postgres.server=9.6-devel could be removed and matrix will be made out of different profile settings assembly assembly,wal2json assembly,postgres-10,pgoutput and the strings from above will be passed as -P{...} to the maven command @ani-sha I think there is no need to delay this later - looks good and let's give it a try! @ani-sha I think there is no need to delay this later - looks good and let's give it a try! Absolutely! 🚀
2025-04-01T06:38:21.463419
2023-08-17T12:26:41
1854901789
{ "authors": [ "ani-sha", "jpechane" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5227", "repo": "debezium/debezium", "url": "https://github.com/debezium/debezium/pull/4794" }
gharchive/pull-request
DBZ-6803 Add REPEAT function for MySQL https://issues.redhat.com/browse/DBZ-6803 Upstream - https://github.com/antlr/grammars-v4/pull/3667 @ani-sha Applied, thanks
2025-04-01T06:38:21.464914
2018-07-13T07:18:48
340908914
{ "authors": [ "gunnarmorling", "jpechane" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5228", "repo": "debezium/oracle-vagrant-box", "url": "https://github.com/debezium/oracle-vagrant-box/pull/6" }
gharchive/pull-request
DBZ-720 Updating required permissions for connector user for snapshot… …ting @jpechane Small update regarding grants for initial snapshotting. Btw. this might make us re-think whether the flashback query is the best way to do the initial snapshot. We might also get away with reading within a transaction using the right isolation level. I found the "AS OF SCN ..." approach quite attractive, though, so I went for it. We can re-evaluate later on, but I wanted to bring it to your attention. @gunnarmorling Applied, thanks!
2025-04-01T06:38:21.533738
2024-06-25T15:00:55
2372927258
{ "authors": [ "Da-Colon", "adamgall", "mudrila", "tomstuart123" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5229", "repo": "decentdao/decent-interface", "url": "https://github.com/decentdao/decent-interface/issues/2051" }
gharchive/issue
Fetch NFT prices and use them to calculate treasury total https://docs.moralis.io/web3-data-api/evm/reference/get-nft-sale-prices @tomstuart123 from a product perspective, do users want to see NFTs in their Safes count toward their total balance? Just logging here Do we still want to do this?
2025-04-01T06:38:21.551590
2023-12-11T18:12:38
2036254447
{ "authors": [ "NickKhalow", "popuz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5230", "repo": "decentraland/unity-explorer", "url": "https://github.com/decentraland/unity-explorer/pull/192" }
gharchive/pull-request
Feat/billboard Introduced Billboard feature as Explorer/Assets/DCL/Billboard Locate at Nullables enabled Covered with tests For demo created scenes Assets/DCL/Billboard/Demo/BillboardDemoTest.unity - shows multiple Assets/DCL/Billboard/Demo/BillboardPlayground.unity - makes possible to tweak options in runtime and to see a difference IDemoWorld for easy reuse Looks very cool, man 💪 I have just 3 suggestions: It would be nice to have all SDK-related components under one root folder. Maybe Explorer/Assets/DCL/Scenes/Billboard or Explorer/Assets/DCL/SDKScenes/Billboard Since you already have nice integration test environment, can you assemble then a Performance Test (for 200 and 500 entities). As we already have Unity Performance Testing imported in the project Please, rename the header of the PR
2025-04-01T06:38:21.554893
2022-02-16T20:41:58
1140563433
{ "authors": [ "BasileiosKal", "tplooker" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5231", "repo": "decentralized-identity/bbs-signature", "url": "https://github.com/decentralized-identity/bbs-signature/pull/64" }
gharchive/pull-request
Messages and generators notation update for readability First attempt to close issue #54 (see in the issue for other options). This PR proposes to pass the generators directly to functions instead of their indexes, with the goal of increasing spec readability. IMO this simplifies notation. For example, using indexes in spkGen will require notation for 5 different lists of indexes. This can get confusing. By passing the generators as a list, only requires notation for 2 lists of indexes (and one less input argument in SpkGen). Also, Passing the messages as a map between the message and the index of the generator will still require a lot of notation for indexes. Mentioning that those generators are not necessarily the L first elements from the global (or not) generators list, also preserves the flexibility required from the blind signatures. Mentioning that implementations may choose to pass the indexes of the generators instead and pointing to a reference implementation or perhaps a more detailed explanation in the Appendix IMO will be enough to address the efficiency of the applications concerns, while keeping the spec more readable. Also, changes in this PR use the terminology from PR #62 to some places, but I will update it elsewhere after that PR is merged. Discussed on WG call 21st of Feb, awaiting review from other WG members @BasileiosKal can you please update this PR to resolve the conflicts? Multiple approvals, PR open for 2 weeks and discussed on WG call, massive improvement in notation across the spec, merging
2025-04-01T06:38:21.585148
2020-06-22T02:50:22
642709116
{ "authors": [ "Finspire13", "limbo0000" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5232", "repo": "decisionforce/TPN", "url": "https://github.com/decisionforce/TPN/issues/12" }
gharchive/issue
Feature extraction
Hi, thanks for the great codebase. Could you kindly provide the code to extract features from custom videos using pre-trained models?
@Finspire13 Sorry for the late reply. You can easily modify the config files (e.g. remove the cls head) and test_video.py or test_recognizer.py to extract features.
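For readers looking for a starting point, here is a rough, generic sketch of that idea; it is not this repo's actual API. It assumes `model` is a recognizer built from a TPN config (with the cls head removed) and loaded from a pretrained checkpoint, and that it exposes an mmaction-style `extract_feat` method; all of these are assumptions.

import torch

def extract_features(model: torch.nn.Module, clip: torch.Tensor) -> torch.Tensor:
    # `clip` is a preprocessed video tensor; the exact layout (e.g. N, C, T, H, W)
    # depends on your config, so treat the shape here as a placeholder.
    model.eval()
    with torch.no_grad():
        # mmaction-style recognizers expose extract_feat(); assumed available here.
        feats = model.extract_feat(clip)
    return feats

# Hypothetical usage: feats = extract_features(model, clip)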
2025-04-01T06:38:21.600870
2021-01-04T03:24:11
777788040
{ "authors": [ "Donkey-Doug", "deckerst" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5233", "repo": "deckerst/aves", "url": "https://github.com/deckerst/aves/issues/30" }
gharchive/issue
Thumbnails not loading
I encountered three different problems:
1. All thumbnails in a folder are visible
2. No thumbnail in a folder is visible
3. In the same folder, some photos have thumbnails while others do not.
In the case of a failing thumbnail, a dark grey square is shown instead.
Are the failing thumbnails for common formats like JPG and PNG, or for something else? Do they fail for files that no longer exist but are still registered in the media store?
I tried to find a pattern in the failing thumbnails before reporting the issue, but failed to find any.
Some of the thumbnails that were not visible earlier started to show. Apparently it takes very long for some thumbnails to become visible.
Thanks for the update. Indeed some apps delete files without properly removing them from the Media Store. Ideally, these broken files should be handled more gracefully by Aves. When the app detects such a file, it could even suggest fixing the situation by removing them from the Media Store.
2025-04-01T06:38:21.634969
2022-04-13T13:58:32
1203342787
{ "authors": [ "bates64", "ethteck", "nanaian" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5234", "repo": "decompme/decomp.me", "url": "https://github.com/decompme/decomp.me/issues/439" }
gharchive/issue
Use CompilerConfig model in Scratch model
A little refactor. Make sure not to break the existing create-scratch endpoint's interface (#148 would help here!)
@ethteck did we get anywhere with this?
I've been working on it.
This isn't needed anymore since we store Presets in the backend, which are essentially CompilerConfigs.
2025-04-01T06:38:21.640596
2016-07-01T15:44:56
163422760
{ "authors": [ "jrick", "marcopeereboom" ], "license": "isc", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5235", "repo": "decred/Paymetheus", "url": "https://github.com/decred/Paymetheus/pull/57" }
gharchive/pull-request
Hook up fee estimation to transaction creation.
This change requires a newer version of the wallet's RPC API. Fixes #10. Fixes #11.
Two issues: you see Estimated Remaining Balance at 1.0 when it should be 0, and the crash, of course.
If priority checks were run at all, that means the fee was not high enough. I'll look through it for bugs.
Rebased over master and fixed a magnitude error with the default fee. Shouldn't see any more tx-rejected errors for low priority.
Coins were unconfirmed, and therefore not usable to fund a transaction. The same transaction went through after change received the required number of block confirmations. We should display the spendable balance next to the selected account, instead of the user relying on the total balance of all accounts in the corner.
So I am OK with this going in. It isn't perfect but it certainly works much better. Just fixing up an issue in the transaction-authoring code where it could create dust change outputs. Will merge after that is fixed.
2025-04-01T06:38:21.644954
2021-01-28T14:18:22
796045473
{ "authors": [ "dnldd", "jholdstock" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5236", "repo": "decred/dcrpool", "url": "https://github.com/decred/dcrpool/pull/304" }
gharchive/pull-request
multi: stratum improvements.
This extends client read/write timeouts, improves mining.subscribe response handling, and makes other stratum-related improvements.
Deployed to https://pool.jholdstock.uk/
Successfully mining blocks, but still seeing a lot of errors in the pool log:
2021-02-10 10:12:34.351 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:25:12.450 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:26:33.480 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:27.851 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:34.448 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:41.004 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:31:47.273 [ERR] POOL: submitted work from d99f78a0/cpu is not less than the network target difficulty
2021-02-10 10:32:11.731 [INF] POOL: Mined work 0000000ce28b245f67ae1103ec8619fce82b41a623d697e4e2689924557e29cb confirmed by connected block #618036
2025-04-01T06:38:21.651162
2020-03-17T16:24:31
583136740
{ "authors": [ "Gilthoniel" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5237", "repo": "dedis/fabric", "url": "https://github.com/dedis/fabric/pull/14" }
gharchive/pull-request
QSC initial implementation
This includes an initial implementation of the QSC algorithm. It does not support Byzantine behavior.
@nkcr I had to change the address iterator thing, as we need the length of the set in the Call functions. Let me know what you think about the Take function.
@nkcr Comments fixed.
2025-04-01T06:38:21.660521
2024-06-27T20:21:54
2379029645
{ "authors": [ "elijahpetty" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5238", "repo": "deephaven/deephaven-docs-community", "url": "https://github.com/deephaven/deephaven-docs-community/issues/251" }
gharchive/issue
USER GUIDE + REFERENCE: Groovy LivenessScopes
Groovy equivalent of https://deephaven.io/core/docs/conceptual/liveness-scope-concept/#how-to-create-a-liveness-scope and the associated reference docs.
The user guide is complete; the reference is not.
2025-04-01T06:38:21.670684
2017-06-14T06:35:46
235776133
{ "authors": [ "dm-jrae", "jnwei", "jramapuram" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5239", "repo": "deepmind/dnc", "url": "https://github.com/deepmind/dnc/pull/14" }
gharchive/pull-request
AbstractModule Fixes
Sonnet changes require that parameters to the constructor be named parameters. This pushes those changes in. Also added a Python .gitignore file :)
I signed it!
Your changes worked for me, thanks @jramapuram!!
Thanks!
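For context, a minimal sketch of the constructor pattern Sonnet 1.x's AbstractModule expects; the module itself is illustrative, not code from this repo.

import sonnet as snt

class Linearish(snt.AbstractModule):
    # Constructor parameters are passed and forwarded by name, which is the
    # Sonnet requirement this PR accommodates.
    def __init__(self, output_size, name="linearish"):
        super(Linearish, self).__init__(name=name)
        self._output_size = output_size

    def _build(self, inputs):
        return snt.Linear(output_size=self._output_size)(inputs)

# Hypothetical usage: module = Linearish(output_size=32, name="proj")  # named, not positional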
2025-04-01T06:38:21.673322
2019-08-23T10:13:47
484450547
{ "authors": [ "uberspot", "yoshi-1224" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5240", "repo": "deepmind/kapitan", "url": "https://github.com/deepmind/kapitan/issues/345" }
gharchive/issue
Add standalone binary to released files
After https://github.com/deepmind/kapitan/pull/323 we have a process to generate a standalone binary of kapitan, and it's built in Travis too. The produced binary lives in dist/. It would be great to also include it in the files released by Travis on each release, i.e. in .travis.yml we already do that with CHANGELOG.md:

deploy:
  - provider: releases
    api_key:
      secure: blabla=
    file: CHANGELOG.md
    prerelease: $PRERELEASE
    on:
      tags: true
      repo: deepmind/kapitan

@uberspot I may be misunderstanding, but is this issue already addressed in #349?
Yes it is. :) I just made an issue to track that work. I think the binary is now on the releases page on GitHub and every build works fine on Travis, so I'd consider this issue resolved for now. :)
2025-04-01T06:38:21.688480
2023-05-29T08:12:48
1730306758
{ "authors": [ "Kallinteris-Andreas", "yuvaltassa" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5241", "repo": "deepmind/mujoco", "url": "https://github.com/deepmind/mujoco/issues/889" }
gharchive/issue
data.body("...").xpos does not match data.qpos
Hi, I'm a maintainer of Gymnasium-Robotics and I'm trying to use MuJoCo to develop the v5 revision of the Gymnasium/MuJoCo RL environments https://github.com/Farama-Foundation/Gymnasium-Robotics/pull/104. I'm looking for some help with understanding why the following unit test fails.
data.qpos[0] == data.body("torso").xpos[0] is true, but after stepping the MuJoCo model, data.qpos[0] == data.body("torso").xpos[0] is False.
Here is a model which explains my question: the Ant.xml in the Gymnasium/MuJoCo/Ant environment https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/mujoco/assets/ant.xml
Here is a unit test illustrating my question:

def test_ant_com():
    env = gym.make('Ant-v4')  # `env` contains `data : MjData` and `model : MjModel`
    env.reset()  # randomly initializes the `data.qpos` and `data.qvel`

    x_position_before = env.unwrapped.data.qpos[0]
    x_position_before_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_before == x_position_before_com)  # This succeeds

    random_control = env.action_space.sample()
    _, _, _, _, info = env.step(random_control)  # This calls mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)

    x_position_after = env.unwrapped.data.qpos[0]
    x_position_after_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_after == x_position_after_com)  # This fails

Note: this is the case for other body.xpos & body.xquat too. Is this normal/expected?
$ pip list | grep mujoco
mujoco 2.3.3
Hi! Some clarifications are in order. A clarification for future readers: this model's DoFs start with a body-centered free joint, therefore the first 3 elements of qpos have the same semantics as the first 3 elements of xpos. They are generally not the same thing. A clarification for us: is this an old test that is newly breaking, or a new test?
Regarding the test: it should fail. If it ever passed, that is surprising. Explanation: the purpose of mj_step is to advance the state (qpos and qvel, in this case). It does this and nothing more. xpos is a derived quantity that is computed from qpos during mj_step (step 2 in the first link above), but at the end of mj_step, qpos gets updated. The only reason it passes after your Reset call is that (presumably) mj_forward or something similar was called at the end of the Reset. So your options are:
1. Compare the current qpos to the xpos measured after the previous step.
2. Call mj_kinematics (or the full mj_forward) after the step, and then compare.
Does this make sense?
This is a new test. reset() does call mj_forward. Calling mj_forward after step() does indeed resolve the issue:

def test_ant_com():
    env = gym.make('Ant-v5', frame_skip=5)  # `env` contains `data : MjData` and `model : MjModel`
    env.reset()  # randomly initializes the `data.qpos` and `data.qvel`, calls mujoco.mj_forward(env.model, env.data)

    x_position_before = env.unwrapped.data.qpos[0]
    x_position_before_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_before == x_position_before_com), "before failed"  # This succeeds

    random_control = env.action_space.sample()
    _, _, _, _, info = env.step(random_control)  # This calls mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)
    mujoco.mj_forward(env.unwrapped.model, env.unwrapped.data)  # <-- This is new

    x_position_after = env.unwrapped.data.qpos[0]
    x_position_after_com = env.unwrapped.data.body("torso").xpos[0]
    assert(x_position_after == x_position_after_com), "after failed"  # This succeeds now

Can you explain the difference between xpos and qpos? My current understanding: qpos is part of the state (https://mujoco.readthedocs.io/en/latest/computation.html#physics-state) and xpos is, from what I can tell, the kinematic approximation of the body frames' positions. Thanks!
FYI mj_kinematics is enough, but other than performance there's no harm in mj_forward. Yes, qpos is the joint configuration. xpos is the global Cartesian position of the body frames.
One last thing (and the reason I created the unit test in the first place): if you wanted to calculate the displacement of the torso body after mj_step, would you do
Option A:

# note `env` holds `data` and `model`
x_position_before = env.data.body("torso").xpos[0]
mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)  # Note: we do not call `mj_kinematics`
x_position_after = env.data.body("torso").xpos[0]
dx = x_position_after - x_position_before  # displacement

Option B:

# note `env` holds `data` and `model`
x_position_before = env.data.qpos[0]
mujoco.mj_step(env.model, env.data, nstep=env.frame_skip)
x_position_after = env.data.qpos[0]
dx = x_position_after - x_position_before  # displacement

We currently use Option A for Ant-v2, Ant-v3, Ant-v4. Could you confirm that Option B is a more accurate way of getting dx (displacement)? (I am considering updating it to use Option B in Ant-v5.) Thanks!
Both options are equally accurate, but the second one is more up to date (by 1 timestep). You might legitimately now ask "why would I want a delayed measurement if I can get one that is more up to date?" There are sometimes good reasons for this. For example, imagine that you want to compute some value that is a function of your dx and some contact force. Forces are only determined during the step and could not be computed now, since they depend on the controls. I.e. contact forces are inherently linked not to a state but to a state transition. So while it is possible to get some values (i.e. functions only of position and velocity) w.r.t. the current timestep, if you want all your measurements (including force/acc-related quantities) to be correctly "synced", you have to pay the price of a delay of 1 timestep. In your case you may not care, but in general this can be important. Hope this makes sense.
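To make the "synced measurements" point concrete, here is a minimal sketch; it is not code from the environments above, and the model path is a placeholder. The idea is that, read immediately after mj_step and before any extra forward call, xpos and the constraint forces both describe the configuration at the start of the step just taken, i.e. a consistent snapshot one timestep behind qpos.

import mujoco

# Placeholder model path; any model with a body named "torso" works the same way.
model = mujoco.MjModel.from_xml_path("ant.xml")
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)  # make xpos consistent with the initial qpos

x_prev = data.body("torso").xpos[0]
for _ in range(3):
    mujoco.mj_step(model, data)
    # Read *before* any extra mj_forward/mj_kinematics call: these two values
    # refer to the same (pre-step) configuration, so a quantity mixing dx and
    # force stays internally consistent, at the cost of a 1-step delay.
    x_now = data.body("torso").xpos[0]
    f_now = data.qfrc_constraint.copy()
    dx = x_now - x_prev  # Option A's delayed-but-synced displacement
    x_prev = x_now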
2025-04-01T06:38:21.702234
2019-05-24T02:21:05
447949533
{ "authors": [ "ethan052", "lightning20", "mschen97" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5242", "repo": "deepmind/pysc2", "url": "https://github.com/deepmind/pysc2/issues/272" }
gharchive/issue
Failed to find .build.info file at path: /home/x/StarCraftII/.build.info
I tried to install pysc2 on Ubuntu 18.04 with StarCraft II Linux 4.7.1 and the map package Ladder 2019 Season 1. When running the sample code zerg_agent.py, it gives the error below. I am a newbie; could you give any suggestions? Thanks.

$ python zerg_agent.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
I0524 10:06:19.386617<PHONE_NUMBER>19776 sc_process.py:110] Launching SC2: /home/d/StarCraftII/Versions/Base70154/SC2_x64 -listen <IP_ADDRESS> -port 24676 -dataDir /home/d/StarCraftII/ -tempDir /tmp/sc-9q1vetas/ -displayMode 0 -windowwidth 640 -windowheight 480 -windowx 50 -windowy 50
I0524 10:06:19.390377<PHONE_NUMBER>19776 remote_controller.py:163] Connecting to: ws://<IP_ADDRESS>:24676/sc2api, attempt: 0, running: True
Version: B70326 (SC2.2018Season4)
Build: Nov 27 2018 03:26:30
Command Line: '"/home/d/StarCraftII/Versions/Base70154/SC2_x64" -listen <IP_ADDRESS> -port 24676 -dataDir /home/d/StarCraftII/ -tempDir /tmp/sc-9q1vetas/ -displayMode 0 -windowwidth 640 -windowheight 480 -windowx 50 -windowy 50'
Starting up...
Startup Phase 1 complete
Fatal Error: Failed to find .build.info file at path: /home/d/StarCraftII/.build.info
Terminating...
W0524 10:06:20.393357<PHONE_NUMBER>19776 remote_controller.py:160] SC2 isn't running, so bailing early on the websocket connection.
I0524 10:06:20.393687<PHONE_NUMBER>19776 sc_process.py:201] Shutdown gracefully.
I0524 10:06:20.393838<PHONE_NUMBER>19776 sc_process.py:182] Shutdown with return code: -15
Traceback (most recent call last):
  File "zerg_agent.py", line 45, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "zerg_agent.py", line 27, in main
    visualize=True) as env:
  File "/usr/local/lib/python3.6/dist-packages/pysc2/env/sc2_env.py", line 276, in __init__
    self._launch_sp(map_inst, interfaces[0])
  File "/usr/local/lib/python3.6/dist-packages/pysc2/env/sc2_env.py", line 351, in _launch_sp
    want_rgb=interface.HasField("render"))]
  File "/usr/local/lib/python3.6/dist-packages/pysc2/run_configs/platforms.py", line 208, in start
    want_rgb=want_rgb, extra_args=extra_args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pysc2/run_configs/platforms.py", line 97, in start
    self, exec_path=exec_path, version=version, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/sc_process.py", line 116, in __init__
    self._host, self._port, self, timeout_seconds=timeout_seconds)
  File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/remote_controller.py", line 143, in __init__
    sock = self._connect(host, port, proc, timeout_seconds)
  File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/stopwatch.py", line 201, in _stopwatch
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/pysc2/lib/remote_controller.py", line 174, in _connect
    raise ConnectError("Failed to connect to the SC2 websocket. Is it up?")
pysc2.lib.remote_controller.ConnectError: Failed to connect to the SC2 websocket. Is it up?
I0524 10:06:20.450495<PHONE_NUMBER>19776 sc2_env.py:656] Environment Close

Did you download SC2 4.7.1 from https://github.com/Blizzard/s2client-proto#downloads ? .build.info is the last unzipped file; maybe the file that you got was broken.
HTTP request sent, awaiting response... 200 OK
Length:<PHONE_NUMBER> (3.1G) [application/zip]
Saving to: ‘SC<IP_ADDRESS>.zip’
Really thankful, ethan052.
The .build.info file was missing when unzipping. Thanks again; closing the issue. THANKS
2025-04-01T06:38:21.744030
2021-03-16T00:06:05
832294261
{ "authors": [ "Jeffkw213", "Timoeller" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5243", "repo": "deepset-ai/COVID-QA", "url": "https://github.com/deepset-ai/COVID-QA/pull/117" }
gharchive/pull-request
Commenting the aggregates testing
Hey @Jeffkw213, thanks for starting to contribute to this repo. But may I ask: what are you testing here? :) Please give more details on what you are trying to achieve in this PR.
Oh, I'm just learning new things about GitHub and how to contribute to public projects. And it's for school.
2025-04-01T06:38:21.765383
2014-11-04T09:40:34
47694119
{ "authors": [ "bleurose", "jalcine" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5244", "repo": "deepthawtz/faker", "url": "https://github.com/deepthawtz/faker/pull/12" }
gharchive/pull-request
Adding a seed to faker to provide for repeatable data sequences
Introduction
Faker has proved quite useful to my Python testing library, but I find that I need to be able to replicate the test sequences. Fortunately it is based on the Python random library, and all random requires to reproduce its random sequences exactly is to use the same "seed" value each time.
faker/__init__.py
I added a new method to the class, Faker.reset(self, seed), which reseeds the random number generator if the seed value is an integer (for anything else, including None, it resets the generator with a random value based on time of day). I also modified Faker.__init__(self, seed) to take an optional seed and call the .reset() method with it (or with None).
tests/test_api.py
There is a new test, test_seed(), which tests resetting the seed.
Comments
A call to random.seed() with an integer argument is sufficient to reset the random number generator. If you call the seed again with the same exact value, all subsequent random calls will be reproduced identically. EVERY single call must be exact, and in the same order and with the same arguments. Since faker calls the random number generator on most method calls, the sequence of faker calls must also be the same. Changing even one of them will create a new order. For instance:

>>> import faker  # with new changes
>>> f = faker.Faker(1234)
>>> f.name()
u'Vita Kertzmann'
>>> f.city()
u'Feiltown'
>>> f.state()
u'LA'
>>> f.seed(1234)
>>> f.name()
u'Vita Kertzmann'
>>> f.state()
u'AL'
>>> f.city()
u'New Art'

Enjoy!
Curious if this is planned to be pulled into master. I've been using this patch and it's nifty to have controllable fake data with this.
Glad to see someone is using it :-) I was surprised that it never got into master since it's pretty simple and, I think, pretty useful as well. But c'est la vie; it's here for anyone who needs it. Happy Holidaze!
2025-04-01T06:38:21.769624
2019-11-18T01:15:56
524083172
{ "authors": [ "inci90" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5245", "repo": "deezer/spleeter", "url": "https://github.com/deezer/spleeter/issues/111" }
gharchive/issue
[Bug] cannot load ffprobe on OSX
Description
Spleeter cannot load ffprobe on OSX. ffmpeg was installed using Homebrew, on Python 3.7.5. Both ffmpeg and ffprobe launch when entered in the terminal, so they're definitely linked correctly...
Step to reproduce
$ brew install ffmpeg
$ pip3 install spleeter
$ spleeter separate -i 'its_not_fair.mp3' spleeter:2stems -o splits
1. Installed using pip3
2. Run as user
3. Got WARNING:spleeter:ffprobe error (see stderr output for detail) error
Output
$ spleeter separate -i 'its_not_fair.mp3' spleeter:2stems -o splits
INFO:spleeter:Loading audio b'spleeter:2stems' from 0.0 to 600.0
INFO:spleeter:Loading audio b'its_not_fair.mp3' from 0.0 to 600.0
WARNING:spleeter:ffprobe error (see stderr output for detail)
INFO:spleeter:Audio data loaded successfully
Environment
OS: MacOS
Installation type: pip
So I'm an idiot: the problem was actually that I forgot the '-p' flag, so the training models never downloaded. spleeter separate -i its_not_fair.mp3 -p spleeter:2stems -o itsnotfair works fine.
2025-04-01T06:38:21.772267
2023-10-05T15:52:46
1928605093
{ "authors": [ "bdw617", "cmwylie19" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5246", "repo": "defenseunicorns/pepr", "url": "https://github.com/defenseunicorns/pepr/issues/299" }
gharchive/issue
Create a library of helper functions that make working with Pepr easier.
feature request
We keep re-building helper functions in each module, and I think we can generate a set of well-tested ones that are easy for people to use. Critical needs across capabilities:
- Create a secret class that hides the base64 implementation and properly manages whether you get a buffer or a string.
- Create a class to checksum a deployment/statefulset/daemonset to restart the pods per the app's configuration.
Relates to: #245 #279
Could be worth adding it to the PeprValidateRequest class as something like request.getContainers(), or at least as a helper function you can import. Thoughts?

// Returns all containers in the pod
export function containers(request: PeprValidateRequest<a.Pod>) {
  return [
    ...(request.Raw.spec?.containers || []),
    ...(request.Raw.spec?.initContainers || []),
    ...(request.Raw.spec?.ephemeralContainers || []),
  ];
}
2025-04-01T06:38:21.775694
2023-02-02T03:17:28
1567249674
{ "authors": [ "corang" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5247", "repo": "defenseunicorns/zarf", "url": "https://github.com/defenseunicorns/zarf/pull/1328" }
gharchive/pull-request
Add wait to injection method so if the cluster is slow it can catch up
Description
As the title says. TBH I'd prefer to actually parse the error or something like that, but I don't know how to find the error type of the serviceAccount error.
Related Issue
Fixes #1327
Type of change
[X] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Other (security config, docs update, etc)
Checklist before merging
[X] Test, docs, adr added or updated as needed
[X] Contributor Guide Steps followed
Actually, maybe it's better to check for serviceAccount existence after namespace creation... Ended up implementing this instead!
2025-04-01T06:38:21.817112
2020-06-09T05:23:24
635120706
{ "authors": [ "caduckett", "ctneal91", "todd-m" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5248", "repo": "defund12/defund12.org", "url": "https://github.com/defund12/defund12.org/issues/1054" }
gharchive/issue
Additional email address for OKC
<EMAIL_ADDRESS> must be added to Oklahoma City, as he directly oversees the city budget now.
Upon further review, it looks like we are already sending Craig Freeman an email through the City Manager email address <EMAIL_ADDRESS>. https://www.okc.gov/government/city-manager/about-the-city-manager @mahrer I think we can close this issue
thanks for checking @ctneal91, gonna close this
I hear what you're saying, but I've received direct correspondence from <EMAIL_ADDRESS> within the last week and I fear that the other email is an older one from a former city manager.