Dataset schema:
- id: string (length 4 to 10)
- text: string (length 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] date (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
1470600002
Add fix for queries containing compound functors. When query results contain compound functors (e.g. a(b(1), c(1))), pyswip returns them as quoted strings: a(b(1), c(1)) is returned as a('b(1)', 'c(1)'). This leads to the contained functors being treated as single atoms rather than as functors themselves, causing unexpected behaviour. An MWE (examples/compound-functor) has also been included to showcase this behaviour. The expected solution f(A):-valid(A) cannot be found since the pyswip query results contain strings, which leads to the examples failing when tested. Popper successfully finds the expected solution after introducing the (admittedly hacky) fix in this commit. Thank you @AbhijeetKrishnan !
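The behaviour above can be illustrated without Prolog at all. The sketch below is a hypothetical helper, not Popper's actual fix: it shows the kind of re-parsing the commit performs, detecting argument strings that themselves look like functors (such as 'b(1)') and rebuilding them into structured terms. The names `parse_functor` and `split_top_level` and the `(name, args)` tuple representation are illustrative assumptions.

```python
import re

# Hypothetical helper (not Popper's code): re-parse quoted functor strings
# like "b(1)" that pyswip hands back inside compound query results.
FUNCTOR_RE = re.compile(r"^\s*(\w+)\((.*)\)\s*$")

def split_top_level(s):
    """Split "b(1), c(1)" on commas that are not nested inside parentheses."""
    parts, cur, depth = [], [], 0
    for ch in s:
        if ch == ',' and depth == 0:
            parts.append(''.join(cur))
            cur = []
            continue
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        cur.append(ch)
    if cur:
        parts.append(''.join(cur))
    return parts

def parse_functor(term):
    """Recursively re-parse a quoted functor string such as "b(1)"."""
    if not isinstance(term, str):
        return term
    m = FUNCTOR_RE.match(term)
    if not m:
        return term  # plain atom or number: leave unchanged
    name, body = m.groups()
    return (name, [parse_functor(a.strip()) for a in split_top_level(body)])

print(parse_functor("a(b(1), c(1))"))  # ('a', [('b', ['1']), ('c', ['1'])])
```

With this kind of post-processing, the nested structure of a('b(1)', 'c(1)') is recovered instead of treating 'b(1)' and 'c(1)' as opaque atoms.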
gharchive/pull-request
2022-12-01T03:14:09
2025-04-01T06:44:50.441588
{ "authors": [ "AbhijeetKrishnan", "andrewcropper" ], "repo": "logic-and-learning-lab/Popper", "url": "https://github.com/logic-and-learning-lab/Popper/pull/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1083159916
Do we once again need to use --report-csv together with --report-path? In an earlier version --report-csv was not needed when using --report-path. Or am I dreaming? Just forget about that. It was just my bad. --report-csv is not needed when using --report-path.
gharchive/issue
2021-12-17T11:22:19
2025-04-01T06:44:50.457624
{ "authors": [ "tp111" ], "repo": "logpresso/CVE-2021-44228-Scanner", "url": "https://github.com/logpresso/CVE-2021-44228-Scanner/issues/116", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2504250352
Add Metadata Plugins Submit a new Plugin to Marketplace Plugin Github repo URL: https://github.com/CorrectRoadH/logseq-metadata
Github releases checklist
- [x] a legal package.json file.
- [x] a valid CI workflow build action for Github releases. (theme plugin for this).
- [x] a release which includes a release zip pkg from a successful build.
- [x] a clear README file, ideally with an image or gif showcase. (To be more friendly to users, an English version of the description is recommended.)
- [x] a license in the LICENSE file.
use case: insert book (movie) metadata: https://github.com/user-attachments/assets/45ee2eb6-d379-4367-b95b-e16ec597bce1 change the insert template https://github.com/user-attachments/assets/0e487c16-8109-46f8-9eb6-4ef86f072048
Hi @CorrectRoadH I found that the release zip package is missing from https://github.com/CorrectRoadH/logseq-metadata/releases . Maybe you need to trigger the build manually: https://github.com/CorrectRoadH/logseq-metadata/blob/master/.github/workflows/main.yml#L9
Thanks for the reminder, it has been fixed now👌
Are there any other issues blocking the merge?
gharchive/pull-request
2024-09-04T03:58:22
2025-04-01T06:44:50.474464
{ "authors": [ "CorrectRoadH", "xyhp915" ], "repo": "logseq/marketplace", "url": "https://github.com/logseq/marketplace/pull/582", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1971240553
[Bug] Error on Startup after Updating I'm getting this error when trying to start up langflow locally.

```
2023-10-31 16:18:41 - ERROR - utils utils.py:216 - Error migrating flows: (sqlite3.OperationalError) no such column: flow.user_id
[SQL: SELECT flow.data AS flow_data, flow.name AS flow_name, flow.description AS flow_description, flow.id AS flow_id, flow.user_id AS flow_user_id FROM flow WHERE flow.user_id IS NULL]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```

Deleting your cached langflow files fixed this for me.
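The error above is the classic symptom of a schema that predates a new column. The toy reproduction below is an illustration of that class of error with a made-up `flow` table, not langflow's actual schema or migration code; deleting the cached database, as suggested above, simply lets the application recreate the table with the new column.

```python
import sqlite3

# Toy reproduction: an "old" flow table without the user_id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flow (id INTEGER, name TEXT)")

try:
    conn.execute("SELECT flow.user_id FROM flow")
    err = ""
except sqlite3.OperationalError as e:
    err = str(e)
print(err)  # the same class of error as in the log above

# One way out (illustrative): migrate the schema in place.
conn.execute("ALTER TABLE flow ADD COLUMN user_id TEXT")
rows = conn.execute(
    "SELECT flow.id FROM flow WHERE flow.user_id IS NULL"
).fetchall()
print(rows)  # table is empty, but the query now succeeds
```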
gharchive/issue
2023-10-31T20:22:10
2025-04-01T06:44:50.478548
{ "authors": [ "aidanVEITA" ], "repo": "logspace-ai/langflow", "url": "https://github.com/logspace-ai/langflow/issues/1103", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
172537038
Take advantage of new LS 2.4/5.x features Thanks for contributing to Logstash! If you haven't already signed our CLA, here's a handy link: https://www.elastic.co/contributor-agreement/
- Use new `concurrency :single` for concurrency
- Use pipeline encoded codec for greater parallelism
- Require Logstash 2.4+ (Plugin API 1.60.1+)
LGTM
gharchive/pull-request
2016-08-22T19:43:24
2025-04-01T06:44:50.541499
{ "authors": [ "andrewvc", "suyograo" ], "repo": "logstash-plugins/logstash-output-stdout", "url": "https://github.com/logstash-plugins/logstash-output-stdout/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1044226314
PatternJsonProvider: provide a #tryLong and #tryDouble operation similar to #tryJson It may be handy to provide #tryLong and #tryDouble operations similar to #tryJson. These two new operations would return the input (string) value when the conversion fails instead of null.
Returning the input value (that is, a string) does not make much sense for operations whose purpose is to convert to a type other than string. Instead, we could provide a way to specify the default value to use when the conversion fails. For instance:
- explicit default value: `#asLong{...}:<default value>` / `#asJson{...}:<default value>`; `<default value>` will be used when `#asLong{}` fails to convert the input. It must be convertible into the target type (long in this case) or an error is raised at configuration time.
- use null: `#asLong{...}:?` / `#asJson{...}:?`; null is returned when the conversion fails. This is the default behaviour, similar to `#asLong{...}`.
- leave input unchanged: `#asLong{...}:-` / `#asJson{...}:-`; return the input string unchanged, the same behaviour as `#tryJson{...}`. Can be used only with operations returning a String; it will not be accepted for `#asLong{}`, for instance...
This syntax will be preferred over the try..{} form. The latter will still be supported but slowly deprecated...
@philsttr your opinion?
👍 Sounds good to me
I made a mistake when describing the behaviour of the :- modifier. It is supposed to be an alternative to the try..{} variants. It should therefore always return the input string when the conversion fails, even when it does not match the return type of the operation. This is already the case for #tryJson{}: it may return a string instead of a JSON object... However, I'm questioning the utility of this feature: the type of the field is not constant, sometimes a long (or a json object), sometimes a string. This can make consumption of the JSON output at the receiving side challenging.
I'm wondering if a better alternative would be to output the original string value in another field whose name is derived from the original. Suppose the following pattern:

```json
{ "numericValue": "#asLong(foo)" }
```

foo is not a valid Long representation and causes #asLong{} to fail at runtime. This would produce something like this at runtime:

```json
{ "numericValue": null, "numericValue_invalid": "foo" }
```

or

```json
{ "numericValue": null, "$numericValue": "foo" }
```

or

```json
{ "numericValue": null, "@errors": { "numericValue": "foo" } }
```

Fields declared in the pattern keep their desired type. Furthermore, errors are clearly identifiable and the faulty value is available for later post processing. Now that I think of it... this kind of mechanism could benefit all JsonProviders. Instead of silently ignoring errors, JsonProviders could add an entry describing their issue under a special root field (@errors in the example above).

> I'm wondering if a better alternative would be to output the original string value in another field whose name is derived from the original.

I really like this idea.

> Now that I think of it... this kind of mechanism could benefit all JsonProviders. Instead of silently ignoring errors, JsonProviders could add an entry describing their issue under a special root field (@errors in the example above).

Since logback already has a generic mechanism for reporting errors (via StatusManager), I'm hesitant to add another. If you feel strongly that having all errors reported inline would be best, I'd like to see this idea more fully spec'ed out, since the simple example for @errors above is not generic enough for all types of errors. I think a targeted solution for writing un-parsable values as a separate string field is sufficient. I like the _INVALID suffix. The $ prefix seems likely to already have meaning in some systems. Perhaps even provide the ability to customize via a regex replacement string, with \1_INVALID being the default.
This new behavior would need to be able to be disabled to provide backwards compatibility.
gharchive/issue
2021-11-03T23:49:36
2025-04-01T06:44:50.552620
{ "authors": [ "brenuart", "philsttr" ], "repo": "logstash/logstash-logback-encoder", "url": "https://github.com/logstash/logstash-logback-encoder/issues/694", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1322318657
How to set default value for url if environment variable is undefined Hi there. I have my spring app with the following configuration property in application.properties: `loki4j.url: ${LOKI4J_URL}` and in logback-spring.xml I have the following:

```xml
<http>
  <url>${loki4j.url:-localhost:3100/loki/api/v1/push}</url>
  .....
```

In spite of the information about the possibility of setting a default value via :-, when I don't explicitly set the LOKI4J_URL variable I get java.lang.IllegalStateException: java.lang.IllegalStateException: Logback configuration error detected: ERROR in ch.qos.logback.core.joran.spi.Interpreter@38:16 - RuntimeException in Action for tag [appender] java.lang.IllegalArgumentException: URI with undefined scheme Could you please tell me whether it is possible to get what I want? Thank you in advance.
Hi @setofaces, First off, you should be able to use env vars directly in your Logback config. Second, the error that you're getting, "URI with undefined scheme", is exactly what is wrong; you just need to add a scheme to your URI: `<url>${LOKI_URL:-http://localhost:3100/loki/api/v1/push}</url>` Closing this issue for now, but feel free to ask any further questions here or in the dedicated discussions section.
gharchive/issue
2022-07-29T14:25:03
2025-04-01T06:44:50.579229
{ "authors": [ "nehaev", "setofaces" ], "repo": "loki4j/loki-logback-appender", "url": "https://github.com/loki4j/loki-logback-appender/issues/146", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1169755366
Fix geometric spacing Geometric spacing is the inverse of what it should be (it should be closely spaced near beta = 1). Fixed in 5ed6aa0
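The intended behaviour described above (a geometric schedule that is closely spaced near beta = 1) can be sketched in NumPy. This is an illustrative schedule, not the repo's actual code; the function name and the `eps` parameter are assumptions. The reported bug was the inverse: points clustered near beta = 0 instead.

```python
import numpy as np

def geometric_betas(n, eps=1e-2):
    # np.geomspace(eps, 1, n) is dense near eps; subtracting from 1 and
    # reversing gives an ascending schedule in [0, 1 - eps] whose points
    # cluster near the top, i.e. near beta = 1.
    return (1.0 - np.geomspace(eps, 1.0, n))[::-1]

betas = geometric_betas(10)
gaps = np.diff(betas)
# gaps shrink toward beta = 1, so the schedule is closely spaced near 1
```

Using `np.geomspace(eps, 1.0, n)` directly would give the inverse spacing (dense near beta = 0), which matches the bug described in the issue.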
gharchive/issue
2022-03-15T14:31:55
2025-04-01T06:44:50.680274
{ "authors": [ "lollcat" ], "repo": "lollcat/fab-jax", "url": "https://github.com/lollcat/fab-jax/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1552958101
Feature Request: "disableUserEditable" but only after creation Currently a field can have the option disableUserEditable. When enabled, and a defaultValueOnCreate is provided, no setters in the action or graphql mutation will be created. A common scenario is that a field can only be set on creation, but never be changed after that. This can currently be enforced through validators or by excluding the fields in the action definition. It could be convenient to have this as a configuration on the field. Suggestion: "immutable"? That could work for either user-provided or "default on create" values. The "...UserEditable" also suggests that it relates to editing, while it also has effect on creating. Perhaps it should be renamed to something like "userProvided"?
this seems like the same as https://github.com/lolopinto/ent/issues/572, right? immutable makes sense
do we also prevent it from being settable in builders? generally, i like everything to be settable in builders so that developers still have the ability to do anything they want e.g. an internal tool changing this
Ah yeah, seems the same as #572. I think there are certain fields, such as a creatorID, which should be immutable on the lowest level of the ent framework. Any overrides should in that case happen directly in SQL. Ideally the ent framework should prevent a developer from making changes that go against the data model rules (such as changing a creator). It could be different from permission-based fields; e.g. changing an "owner" if the viewer has sufficient permissions to do so.
hmm, not sure if i like that. will have to think about that. I think that's why I ended up with userEditable vs other names because it's not editable by the end facing user but the developer can still do whatever they want. or in actions with a schema like the following:

```ts
{
  excludedFields: ['bar'],
  operation: ActionOperation.Edit,
}
```

you can still do something like:

```ts
const action = EditFooAction.create(vc, foo, input);
action.builder.updateInput({bar: 'newValue'});
await action.saveX();
```

or update in a trigger as needed
Yeah I think those are okay alternatives. I'm mainly thinking from a situation where there are many developers, and not everyone might know the data model rules / assumptions. I would like to prevent that someone, in a trigger, sets a value which someone else decided to be immutable. Although perhaps the two last alternatives are enough signal that someone is working around the main assumptions.
maybe immutable is strong enough that the builder API is overrideCreatorId(newCreator) and it doesn't show up in builder so it's clearer what's happening and someone is making a conscious decision to override.

```ts
const action = EditFooAction.create(vc, foo, input);
action.builder.updateInput({creatorId: newCreator}); // doesn't work
action.builder.overrideCreatorId(newCreator); // works
```
gharchive/issue
2023-01-23T11:48:53
2025-04-01T06:44:50.694957
{ "authors": [ "lolopinto", "rvlasveld" ], "repo": "lolopinto/ent", "url": "https://github.com/lolopinto/ent/issues/1331", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
228378253
Update for better readability When the extension is used with a shell theme that has a light background, the text becomes very hard to read, if not impossible, since the white color is hardcoded. This PR updates the styling for the items to ensure that they are readable against most background colors, while still looking good against the default shell theme. This fixes #9 It also removes some of the entry styling to help ensure that the entry box better blends in with more themes. This allows GNOME Shell to use the theme's built-in styling for Entry widgets. Thanks. Added the changes.
gharchive/pull-request
2017-05-12T18:53:48
2025-04-01T06:44:50.696652
{ "authors": [ "isantop", "lolstring" ], "repo": "lolstring/gnome-shell-extension-mycroft", "url": "https://github.com/lolstring/gnome-shell-extension-mycroft/pull/11", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
743762140
Config settings for dropping metrics with names longer than specified threshold Fix for issue: https://github.com/lomik/carbon-clickhouse/issues/60 Hi @lomik , could you merge this PR?
gharchive/pull-request
2020-11-16T11:33:11
2025-04-01T06:44:50.703127
{ "authors": [ "arrekb", "franek1709" ], "repo": "lomik/carbon-clickhouse", "url": "https://github.com/lomik/carbon-clickhouse/pull/77", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
226364565
feat: Global modal dotcom-core is now a part of Rizzo Next to render the global modal. The code lives in header_component. I think we can merge this and #648 and the same time. https://github.com/lonelyplanet/dotcom-profile/issues/61
gharchive/pull-request
2017-05-04T18:11:46
2025-04-01T06:44:50.709095
{ "authors": [ "jcreamer898", "thomasthesecond" ], "repo": "lonelyplanet/rizzo-next", "url": "https://github.com/lonelyplanet/rizzo-next/pull/649", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1507182305
build(image): bump the base image From SLE BCI 15.3 to 15.4 for security issue ref: longhorn/longhorn#5107 Good catch! We missed this change. @Mergifyio backport v1.4.x
gharchive/pull-request
2022-12-22T03:01:39
2025-04-01T06:44:50.716698
{ "authors": [ "innobead", "mantissahz" ], "repo": "longhorn/longhorn-ui", "url": "https://github.com/longhorn/longhorn-ui/pull/570", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
627605381
[BUG] Orphaned Backup Blocks during concurrent backup creation / deletion Describe the bug At the moment, when there is a backup in progress while we receive a request to delete a backup, we remove the backup, but since we cannot determine whether the in-progress backup requires any of the deleted backup blocks, we don't remove the blocks. related to longhorn/longhorn#1431 This leads to having orphaned blocks. We can fix this as part of the gc process by looking at all the used blocks vs all the present blocks, instead of comparing the used blocks against just the blocks from the deleted backup. This way we can clean up all orphaned blocks that might exist for whatever reason.
To Reproduce Steps to reproduce the behavior:
1. create vol blk and mount to a node on /mnt/blk
2. take backup(1)
3. `dd if=/dev/urandom of=/mnt/blk/data2 bs=1048576 count=512 status=progress`
4. take backup(2)
5. `dd if=/dev/urandom of=/mnt/blk/data3 bs=1048576 count=512 status=progress`
6. take backup(3)
7. immediately delete backup(1)
8. immediately delete backup(2)
9. wait for backup(3) completion
10. delete backup(3)
11. wait
12. verify blocks folder not empty
13. verify volume.cfg BlockCount != 0
Expected behavior after deleting all backups, the blocks folder should be empty
Pre-merged Checklist
- [ ] Does the PR include the explanation for the fix or the feature?
- [ ] Is the backend code merged? The PR is at
- [ ] Is the reproduce steps/test steps documented?
- [ ] If the fix introduces the code for backward compatibility Has a separate issue been filed with the label release/obsolete-compatibility? The compatibility issue is filed at
- [ ] If labeled: area/ui Has the UI issue been filed or is it ready to be merged? The UI issue/PR is at
- [ ] If labeled: require/doc Has the necessary document PR been submitted or merged? The Doc issue/PR is at
- [ ] If labeled: require/automation-e2e Has the end-to-end test plan been merged? Have QAs agreed on the automation test case? The automation skeleton PR is at The automation test case PR is at
- [ ] If labeled: require/automation-engine Has the engine integration test been merged? The engine automation PR is at
- [ ] If labeled: require/manual-test-plan Has the manual test plan been documented? The updated manual test plan is at
Manual Validation: PASSED
1. Create vol blk and mount to a node on /mnt/blk
2. Take backup(1)
3. Write data to volume using `dd if=/dev/urandom of=/mnt/blk/data2 bs=1M count=512 status=progress`
4. Take backup(2)
5. Write data to volume using `dd if=/dev/urandom of=/mnt/blk/data3 bs=1M count=512 status=progress`
6. Take backup(3)
7. While backup(3) is in progress, immediately delete backup(1) & backup(2)
8. Wait for backup(3) completion
9. Delete backup(3)
10. In backupstore, verify blocks folder doesn't exist (PASSED)
11. In backupstore, in volume.cfg file, verify BlockCount == 0 (PASSED)
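The gc strategy described above (compare every block present in the backupstore against the union of blocks referenced by the remaining backups, rather than just the deleted backup's blocks) boils down to a set difference. The sketch below uses illustrative names, not Longhorn's actual Go code:

```python
# Sketch of the orphan-block gc described above (illustrative names, not
# Longhorn's actual implementation): any block present in the backupstore
# that no remaining backup references is orphaned and safe to delete.
def find_orphaned_blocks(present_blocks, backups):
    used = set()
    for backup in backups:
        used |= set(backup["blocks"])
    return set(present_blocks) - used

backups = [{"name": "backup3", "blocks": ["b1", "b4"]}]
present = ["b1", "b2", "b3", "b4"]
print(sorted(find_orphaned_blocks(present, backups)))  # ['b2', 'b3']
```

Because the comparison is against all remaining backups, this cleans up blocks orphaned by any earlier concurrent create/delete race, not just those from the most recently deleted backup.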
gharchive/issue
2020-05-30T00:09:12
2025-04-01T06:44:50.728951
{ "authors": [ "joshimoo", "meldafrawi" ], "repo": "longhorn/longhorn", "url": "https://github.com/longhorn/longhorn/issues/1435", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1421876359
How can I create a psd with layer mask? How can I create a psd with a layer mask? I could create a psd, and I could create a layer, but I couldn't find how to create a psd with a layer mask. @loonghao
This is probably something you'll have to do with action descriptors; I don't see the ability to create a layer mask in the latest JavaScript scripting reference. I would use the ScriptListener, add a layer mask to a layer, then check the ScriptListener log to find the correct ActionDescriptor code
gharchive/issue
2022-10-25T05:23:10
2025-04-01T06:44:50.768086
{ "authors": [ "MrTeferi", "odieXin" ], "repo": "loonghao/photoshop-python-api", "url": "https://github.com/loonghao/photoshop-python-api/issues/196", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
75834526
synchronization in async-http When I use sync mode in async-http, I catch an unhandled exception at AsyncHttpRequest.java:205: NetworkOnMainThreadException. How can I fix it?
Don't use synchronous mode in the UI thread, or you will suffer such exceptions. Use the asynchronous way with a proper callback, or the synchronous way from a service / background thread / etc.
I don't understand "synchronous way from service / background thread". What I want is some steps done one by one; if I use a service or background thread, it's actually async. It's not what I want.
gharchive/issue
2015-05-13T03:54:04
2025-04-01T06:44:50.777505
{ "authors": [ "DXuYang", "smarek" ], "repo": "loopj/android-async-http", "url": "https://github.com/loopj/android-async-http/issues/858", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
219916952
JSON syntax highlighting looks different from sample version First of all, thank you for this amazing tool! Now, my JSON snippets look different from the sample version. Here's what I mean: My snippet: Sample docs one: I think what I got is less readable. I've tried to change Rouge's code in the layout file, but I was only able to change colors, not the confusing "markup" itself. I've tested it on Chrome and Safari (latest) and Ruby 2.4.1
same here
I'm getting the same error with a newly forked Slate, but not one forked a year ago
Hey peeps, using any of the themes with the latest release doesn't render JSON properly; all the attributes and values are rendered in the same colour. I have tried changing the themes with the same outcome.
Same issue here. Both attributes and their values are given the s2 class. See this (closed) issue on rouge. Presumably slate could peg its dependency on rouge<=2.0.5 if deemed important enough.
Thanks @MikeRalphson for the issue. While I much prefer the old rendering, I think it's a bit crazy to peg ourselves at an old version of Rouge just so a single language's colors are slightly nicer, even if JSON is pretty common in docs. @qB0t you could use one of the Node.js ports of Slate listed in the Wiki (such as Shins). They tend to use highlight-js, which doesn't have this limitation.
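For anyone who does want the old rendering anyway, one possible pin, sketched under the assumption that the forked docs site is built with Bundler and using the 2.0.5 version bound quoted in the thread:

```ruby
# Gemfile (sketch): pin rouge to the last version with the old JSON token
# classes, per the discussion above. Trade-off: you forgo newer rouge fixes.
gem 'rouge', '<= 2.0.5'
```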
gharchive/issue
2017-04-06T14:26:30
2025-04-01T06:44:50.782310
{ "authors": [ "Evilbits", "MikeRalphson", "Slettan", "damircuca", "danilobytes", "guidev", "lord" ], "repo": "lord/slate", "url": "https://github.com/lord/slate/issues/744", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
167185678
File streaming API Need an API for sending and receiving arbitrarily large files. For sending, we need the ability to pass arbitrarily large files as parameters to remote hosts; for receiving, we may just need a single API to retrieve files by path. This is available as of Chopsticks 0.5.
gharchive/issue
2016-07-23T14:05:57
2025-04-01T06:44:50.783762
{ "authors": [ "lordmauve" ], "repo": "lordmauve/chopsticks", "url": "https://github.com/lordmauve/chopsticks/issues/1", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1317652123
Fix responsive CSS and add animation to Hamburger menu
- [ ] Fix the CSS for the small and medium devices of the navbar
- [ ] Need to add a simple animation to the Hamburger menu
Hi, Sorry, it's my first time trying to help on this platform and I'm not sure if I uploaded things correctly. But I think I have fixed some margins and paddings. Also, I tried to add some sort of animation to the burger menu. But the way the burger menu is built, there are not many options in terms of animation. Thanks,
@makssmeinart To help me, I would love to see your work. Please create a PR so I can check and merge it. And about the burger menu, you can change anything.
gharchive/issue
2022-07-26T03:37:53
2025-04-01T06:44:50.793412
{ "authors": [ "losier", "makssmeinart" ], "repo": "losier/v4", "url": "https://github.com/losier/v4/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1323529837
🛑 1001 Festas is down In 2b96055, 1001 Festas (https://1001festas.pt) was down:
- HTTP code: 521
- Response time: 207 ms
Resolved: 1001 Festas is back up in 0d7b014.
gharchive/issue
2022-07-31T17:42:13
2025-04-01T06:44:50.813050
{ "authors": [ "nsouto" ], "repo": "lostlink/upptime", "url": "https://github.com/lostlink/upptime/issues/39", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2471642683
[FEATURE] check if bounding box is valid before continuing As of now, the bounding box isn't being "validated" before requesting the data from the api. We should make sure the bounding box makes sense before proceeding, and error out if it isn't. This could be a rather simple check. Here's an example I wrote:

```python
def validate_bounding_box(bbox):
    """
    Validates a bounding box represented as a string in the format
    "min_lng,min_lat,max_lng,max_lat".

    Parameters:
        bbox (str): The bounding box string.

    Returns:
        bool: True if the bounding box is valid, False otherwise.
    """
    try:
        # Split the input string into components
        parts = bbox.split(',')
        if len(parts) != 4:
            return False

        # Convert the components to float
        min_lng, min_lat, max_lng, max_lat = map(float, parts)

        # Validate the ranges
        if not (-180 <= min_lng <= 180 and -180 <= max_lng <= 180):
            return False
        if not (-90 <= min_lat <= 90 and -90 <= max_lat <= 90):
            return False
        if min_lng >= max_lng or min_lat >= max_lat:
            return False

        return True
    except ValueError:
        # In case of conversion error, input was not a valid float
        return False

# Example usage:
bbox = "12.34,-56.78,23.45,-45.67"
print(validate_bounding_box(bbox))  # Should print True or False based on validity
```

Thanks, I created a PR for your implementation and merged it!
gharchive/issue
2024-08-17T19:31:31
2025-04-01T06:44:50.815050
{ "authors": [ "amir16yp", "louis-e" ], "repo": "louis-e/arnis", "url": "https://github.com/louis-e/arnis/issues/44", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
505410784
In reverse geocoding search, the location field is invalid RN: 0.59.9 react-native-baidu-map: 1.0.7 Latest version, followed the setup steps exactly. getCurrentPosition() reports the error: reverse geocoding in progress, the location field is invalid. Has anyone run into this problem? Note: the geocode() method works fine.
I have the same problem on iOS. Have you solved it?
Just ran into this too, waiting for a solution. "react": "16.8.3", "react-native": "0.59.9", "react-native-baidu-map": "^1.0.7"
Running into the same problem; it also happens on a real device.
@mgfjxxiexiaolong Thanks a lot 🙏, solved it following your method.
But this way I can only get the lat/lng data, not the full address. Is it the same for you?
@paraofheaven Yes. After getting the coordinates I call reverseGeoCode to get the address.
gharchive/issue
2019-10-10T17:33:00
2025-04-01T06:44:50.826379
{ "authors": [ "mgfjxxiexiaolong", "paraofheaven", "wangqing6" ], "repo": "lovebing/react-native-baidu-map", "url": "https://github.com/lovebing/react-native-baidu-map/issues/280", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
257252633
bug: numbers like "404" are separating to "4-4" related to #24 this I believe is due to this line https://github.com/lovell/limax/blob/master/lib/limax.js#L54-L56 expected result is 404, however I get 4-4 I think I will need to revert to using speakingurl 😦
Hello, I added the following test cases that pass for me when run locally.

```js
assert.strictEqual(
  slug('404', { separateNumbers: true }),
  '404'
);
assert.strictEqual(
  slug('404', { separateNumbers: false }),
  '404'
);
assert.strictEqual(
  slug('状态404页面未找到', { separateNumbers: true }),
  'zhuang4-tai4-404-ye4-mian4-wei4-zhao3-dao4'
);
assert.strictEqual(
  slug('状态404页面未找到', { separateNumbers: false }),
  'zhuang4-tai4-404-ye4-mian4-wei4-zhao3-dao4'
);
```

Are you able to provide more information on the failing case, such as any other options used?
The test cases from the comment above are now in the unit tests. https://github.com/lovell/limax/blob/02f7748913f7883dfef67ff256eea34021de5829/test/unit.js#L66-L82
gharchive/issue
2017-09-13T04:14:27
2025-04-01T06:44:50.829254
{ "authors": [ "lovell", "niftylettuce" ], "repo": "lovell/limax", "url": "https://github.com/lovell/limax/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2321578891
WebP image size is increased when processed through sharp What are the steps to reproduce? This is the actual image, but it can be replicated using other images too: https://s-usweb.dotomi.com/images/63249/ADS_MP_60682/970x250/bg_img_CAR_Summer_OrangeJuice_20240329.webp When sending this image to sharp, the returned image has an increased size even if we use the same dimensions. For example, this image's dimensions are 1940x500; if it is just processed at the same dimensions, its size is increased by more than twice. The image size is around 130KB, but after passing it through sharp it is around 235KB. What is the expected behaviour? Sizes should be less than or equal to the original size, not larger. Please provide a minimal, standalone code sample, without other dependencies, that demonstrates this problem

```js
sharp(inputBuffer)
  .resize(1940, 500)
  .toFile('output.webp', (err, info) => { ... });
```

Please provide sample image(s) that help explain this problem https://s-usweb.dotomi.com/images/63249/ADS_MP_60682/970x250/bg_img_CAR_Summer_OrangeJuice_20240329.webp On processing with sharp https://s-usweb.dotomi.com/imageresizer/63249/?img=https%3A%2F%2Fs-usweb.dotomi.com%2Fimages%2F63249%2FADS_MP_60682%2F970x250%2Fbg_img_CAR_Summer_OrangeJuice_20240329.webp&ts1716916460293 Even if I reduce the size of the image, it is still larger than the actual image. On the service deployed we can change the height and width via query params to test that: https://s-usweb.dotomi.com/imageresizer/63249/?img=https%3A%2F%2Fs-usweb.dotomi.com%2Fimages%2F63249%2FADS_MP_60682%2F970x250%2Fbg_img_CAR_Summer_OrangeJuice_20240329.webp&w=1800&h=350 The input WebP image was created with a "quality" ("Base Q") of approximately 44.
$ webpinfo -bitstream_info bg_img_CAR_Summer_OrangeJuice_20240329.webp File: bg_img_CAR_Summer_OrangeJuice_20240329.webp RIFF HEADER: File size: 137626 Chunk VP8X at offset 12, length 18 ICCP: 1 Alpha: 0 EXIF: 1 XMP: 0 Animation: 0 Canvas size 1940 x 500 Chunk ICCP at offset 30, length 488 Chunk VP8 at offset 518, length 136914 Width: 1940 Height: 500 Alpha: 0 Animation: 0 Format: Lossy (1) Parsing lossy bitstream... Key frame: Yes Profile: 0 Display: Yes Part. 0 length: 19756 Width: 1940 X scale: 0 Height: 500 Y scale: 0 Color space: 0 Clamp type: 0 Use segment: 1 Update map: 1 Update data: 1 Absolute delta: 1 Quantizer: 44 37 31 19 Filter strength: 14 8 6 17 Prob segment: 126 53 184 Simple filter: 0 Level: 17 Sharpness: 0 Use lf delta: 0 Total partitions: 1 Base Q: 44 DQ Y1 DC: 0 DQ Y2 DC: 0 DQ Y2 AC: 0 DQ UV DC: -2 DQ UV AC: -2 Chunk EXIF at offset 137432, length 194 The default WebP output "quality" setting is 80. https://sharp.pixelplumbing.com/api-output#webp Thank you for the quick feedback. Appreciate taking time reviewing it.
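One way to keep the output from outgrowing the input is to pass a quality derived from the measured input "Base Q" to the encoder. The `pickWebpQuality` helper below is purely illustrative (not part of sharp); only the `.webp({ quality })` option in the commented usage is real sharp API:

```javascript
// Choose an output quality no higher than the input's measured Base Q,
// so re-encoding does not spend more bits than the source used.
// sharp's default WebP output quality is 80.
function pickWebpQuality(inputBaseQ, fallback = 80) {
  if (!Number.isFinite(inputBaseQ)) return fallback;
  return Math.max(1, Math.min(fallback, Math.round(inputBaseQ)));
}

console.log(pickWebpQuality(44));  // → 44 (this image's measured Base Q)
console.log(pickWebpQuality(NaN)); // → 80 (fall back to sharp's default)

// Hypothetical usage with sharp (not executed here):
//   sharp(inputBuffer)
//     .resize(1940, 500)
//     .webp({ quality: pickWebpQuality(44) })
//     .toFile('output.webp');
```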
gharchive/issue
2024-05-28T17:25:57
2025-04-01T06:44:50.835566
{ "authors": [ "lovell", "sthpratik" ], "repo": "lovell/sharp", "url": "https://github.com/lovell/sharp/issues/4117", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
161907945
Crash on running tile()

Hi, I'm getting a crash from a minimal test case:

require('sharp')('/test.jpg').tile().toFile('/test.dzi')

This is fully repeatable by putting the above line into crash.js next to this Dockerfile:

FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y build-essential automake gtk-doc-tools liborc-0.4-dev libxml2-dev libfftw3-dev \
    libmagickwand-dev gobject-introspection libgsf-1-dev git wget
RUN git clone https://github.com/jcupitt/libvips
RUN cd libvips && \
    ./bootstrap.sh && \
    ./configure --with-orc && \
    make -j4 && \
    make install && \
    ldconfig
RUN wget https://deb.nodesource.com/setup_6.x && \
    bash setup_6.x && \
    apt-get install -y nodejs
COPY . /dzi-sharp
RUN cd /dzi-sharp && \
    npm install lovell/sharp --sharp-cxx11=1
RUN wget -O /test.jpg https://processing.org/tutorials/pixels/imgs/pointillism.jpg
CMD node /dzi-sharp/crash.js

The crash gives error:

*** Error in `/usr/bin/node': free(): invalid size: 0x00007fffd4000960 ***

and has backtrace:

#0  0x00007ffff6b79418 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007ffff6b7b01a in __GI_abort () at abort.c:89
#2  0x00007ffff6bbb72a in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x7ffff6cd46b0 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/posix/libc_fatal.c:175
#3  0x00007ffff6bc3f4a in malloc_printerr (ar_ptr=<optimized out>, ptr=<optimized out>, str=0x7ffff6cd14a7 "free(): invalid size", action=3) at malloc.c:5007
#4  _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:3868
#5  0x00007ffff6bc7abc in __GI___libc_free (mem=<optimized out>) at malloc.c:2969
#6  0x00007ffff46f72e8 in __gnu_cxx::new_allocator<std::_List_node<vips::VOption::Pair*> >::deallocate (this=0x7fffd4000948, __p=<optimized out>) at /usr/include/c++/5/ext/new_allocator.h:110
#7  std::__cxx11::_List_base<vips::VOption::Pair*, std::allocator<vips::VOption::Pair*> >::_M_put_node (__p=<optimized out>, this=0x7fffd4000948) at /usr/include/c++/5/bits/stl_list.h:396
#8  std::__cxx11::_List_base<vips::VOption::Pair*, std::allocator<vips::VOption::Pair*> >::_M_clear (this=0x7fffd4000948) at /usr/include/c++/5/bits/list.tcc:79
#9  std::__cxx11::_List_base<vips::VOption::Pair*, std::allocator<vips::VOption::Pair*> >::~_List_base (this=0x7fffd4000948, __in_chrg=<optimized out>) at /usr/include/c++/5/bits/stl_list.h:446
#10 std::__cxx11::list<vips::VOption::Pair*, std::allocator<vips::VOption::Pair*> >::~list (this=0x7fffd4000948, __in_chrg=<optimized out>) at /usr/include/c++/5/bits/stl_list.h:507
#11 vips::VOption::~VOption (this=0x7fffd4000940, __in_chrg=<optimized out>) at VImage.cpp:107
#12 0x00007ffff46f7329 in vips::VOption::~VOption (this=0x7fffd4000940, __in_chrg=<optimized out>) at VImage.cpp:113
#13 0x00007ffff46f880d in vips::VImage::call_option_string (operation_name=operation_name@entry=0x7ffff42c8e39 "VipsForeignLoadJpegFile", option_string=option_string@entry=0x7fffdcf8a730 "", options=0x7fffd4000940) at VImage.cpp:503
#14 0x00007ffff470f548 in vips::VImage::new_from_file (name=<optimized out>, options=0x7fffd4000940) at VImage.cpp:536
#15 0x00007ffff4930c50 in PipelineWorker::Execute() () from /home/acca/dzi-sharp/node_modules/sharp/build/Release/sharp.node
#16 0x00000000010334c1 in ?? ()
#17 0x0000000001042638 in ?? ()
#18 0x00007ffff6f146fa in start_thread (arg=0x7fffdcf8c700) at pthread_create.c:333
#19 0x00007ffff6c4ab5d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

This is pretty similar to earlier sharp issues where the problem was the C++ ABI, but here all the parts are built with the same toolchain. Also, the same crash occurs when setting export _GLIBCXX_USE_CXX11_ABI=0 before any build steps. The vips command line tools work without problems on the same system.

Hello, it looks like you've run into https://github.com/lovell/sharp/issues/442 that will be addressed in the next release by the inclusion of https://github.com/lovell/sharp/pull/456

You can test this now via:

npm install lovell/sharp --sharp-cxx11=1

http://sharp.dimens.io/en/latest/install/#linux

Thanks for the quick reply. However, that change didn't help. I updated the Dockerfile in my earlier comment with that, but it results in the same crash nevertheless.

How odd, I'm pretty certain this is related to the gcc C++ ABI change. Have you tried installing sharp without having libvips installed, which will download and use pre-built binaries (that are compiled with _GLIBCXX_USE_CXX11_ABI=0)? You won't get *magick support but tiling should work, assuming that's your primary use case.

Ok, now I got it working at least with the old ABI and self-built vips. Thanks for your help! The only weird thing now is how the vips command line tool looped by shell is a third faster than looping the same operation within sharp. I'd expect it to go the other way around, if anything.

@vuakko Are you able to share the vips command you're running?

Simply vips dzsave -t 512 -o 1 image.jpg out_image looped for 20 ~24Mpix JPGs (different output paths of course). With 4 parallel runs this loop spends 4.5 seconds while sharp (same 4 parallel calls within node) takes 6 seconds.

Thanks for the details, I'll attempt to reproduce this difference locally then profile to discover what's going on here.

I'm unable to reproduce such a time disparity when running locally.

$ time node -e "require('sharp')('10000x10000.jpg').tile().toFile('tile_sharp.dzi', console.log);"

real  0m9.549s
user  0m25.270s
sys   0m1.444s

$ time vips dzsave --tile-size=256 --overlap=0 10000x10000.jpg tile_vips

real  0m8.762s
user  0m25.018s
sys   0m1.109s

Profiling suggests the slightly slower node timings are mostly due to the overhead of script parsing at start-up.

Ok, thanks for checking, I'll test some more later when I have time again and make a new issue if needed.
gharchive/issue
2016-06-23T12:13:56
2025-04-01T06:44:50.845700
{ "authors": [ "lovell", "vuakko" ], "repo": "lovell/sharp", "url": "https://github.com/lovell/sharp/issues/475", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1372472009
[chip dv] Do not use cfg.clk_rst_vif.wait_clks() to wait for clock cycles

We need to replace the cfg.clk_rst_if.wait_clks() invocations in our test sequences with the appropriate clk/rst monitor in chip_if, for example, cfg.chip_vif.sys_clk_rst_if.wait_clks(). The reason for this is explained in this commit: https://github.com/lowRISC/opentitan/pull/14917/commits/bb2bb8abf3cefcb5bb0b25680b6c74051809e5cb

Previously, the wait_clks on this would wait on the external clock source, which was unused in most tests, so it was incorrect all along. As of the commit above, the clock port of this interface is unconnected. The clock remains active to prevent breakages. @weicaiyang, FYI.

SGTM

this is probably something that can't be deferred to M3, right? It sounds like if we don't fix this we might see random errors

AFAICT, there have not been any breakages related to this yet. Maybe we can defer to M3 if tests are largely passing. It's just that the DV sequences are waiting on clock events using a clock that has nothing to do with the chip.

that's kind of surprising there are no failures... alright, maybe let's defer like you said and see if anything catches fire

hehe. most of the wait_clk calls just want to add some delays. How big the delay is may not really matter. For things we really care about, we should make them wait for an event.

wait but i thought the issue was that the "clk" we're sitting on may not always be there, so if we wait on that clock, things might just stall?

cfg.clk_rst_if is a TB clock, which is always on but not used by the DUT. I think Sri's point is to replace it with an actual design clock like sys_clk.

ah sounds good, i think i misunderstood it to mean the clock was not always on.
gharchive/issue
2022-09-14T07:07:27
2025-04-01T06:44:50.854212
{ "authors": [ "sriyerg", "tjaychen", "weicaiyang" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/issues/14918", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1604669264
[chip-test] chip_plic_fatal_alert

Test point name: chip_plic_fatal_alert
Host side component: SystemVerilog
OpenTitanTool infrastructure implemented: None
Contact person: No response

Checklist

Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.

[ ] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
[ ] Device-side (C) component developed
[ ] Bazel build rules developed
[ ] Host-side component developed
[ ] HJSON test plan updated with test name (so it shows up in the dashboard)
[ ] Test added to dvsim nightly regression (and passing at time of checking)

Note that this test ("Verify that the fatal alert is fired from PLIC due to bus integrity violation.") seems to overlap with chip_sw_plic_alerts ("Verify alerts from PLIC due to both, TL intg and reg WE onehot check faults."), which is tracked in https://github.com/lowRISC/opentitan/issues/16835. In particular, alerts due to reg WE onehot check faults seem to be verified already since https://github.com/lowRISC/opentitan/pull/16845. This leaves verifying TL integrity errors, which this test is about.

@cindychip: I think the chip_sw_plic_alerts testpoint could be implemented by this test plus chip_sw_all_escalation_resets. WDYT?

Thanks @andreaskurt, I agree this should be part of the escalation resets, or at least can share the same test infra.

@msfschaffner to cross-reference test cases.

@cindychip can you close this once you added the SVA?

thanks Michael, closing it.
gharchive/issue
2023-03-01T10:11:51
2025-04-01T06:44:50.862408
{ "authors": [ "andreaskurth", "cindychip", "moidx", "msfschaffner" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/issues/17413", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
583825871
Is any other method available for simulation other than Synopsys VCS and Cadence Xcelium?

I do not have the VCS and Xcelium software, but I want to check simulation. How should I do that? One doubt: does Verilator depend on VCS and Xcelium? I was also thinking of another approach, using an ILA (Integrated Logic Analyzer) for simulation. Will that work for this OpenTitan project?

No, Verilator doesn't depend on VCS or Xcelium, and can be used to record and dump waveforms. Please read https://docs.opentitan.org/doc/ug/getting_started_verilator/ for more information, and read the Verilator manual at https://www.veripool.org/projects/verilator/wiki/Manual-verilator.
gharchive/issue
2020-03-18T15:48:10
2025-04-01T06:44:50.864674
{ "authors": [ "imphil", "ladmangesh805" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/issues/1782", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
833709263
[otbn] Flag state not cleared on start

If you run a simulation with X propagation, you can see x's coming out in the trace. For example:

util/dvsim/dvsim.py hw/ip/otbn/dv/uvm/otbn_sim_cfg.hjson --fixed-seed 2191206664 -w -i otbn_single

This example happens because it does a csrrw near the start of the instruction stream which reads the FLAGS CSR. However, the flags don't actually have a definite value at this point. They get set to x just after reset (because signals like operation_i are x in otbn_alu_bignum.sv). I think that's fine, but we need to make sure that they get zeroed at the start of the operation. @GregAC

The issue was dodgy enable signals that weren't factoring in whether we had a valid instruction. We could introduce a "zero all flags at start", though we might want similar behaviour for other CSRs/WSRs or indeed the register file. #5678 will fix this bug and we can discuss whether we want a flag clear at start along with the other state clearing stuff.

Thanks for tracking this down. Re state clearing: I'm not sure. The RIG has been careful not to use WDRs/GPRs before writing to them so that the results are independent of state clearing. I think the fact that I didn't check flags have been set is a bug: I'll file a follow-up to track that.
gharchive/issue
2021-03-17T12:16:43
2025-04-01T06:44:50.867600
{ "authors": [ "GregAC", "rswarbrick" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/issues/5655", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1498283327
[lint,prim_generic] Turn off unused Verilator lint in clock buf The reason we want to keep these parameters is that when we don't use the generic primitive these parameters are used, so it is useful to keep those in sync. CW310 bitstream builds find locally for me, so I think this is an issue with CI. Marking this PR as ready for review. @andreaskurth would you mind merging this?
gharchive/pull-request
2022-12-15T11:45:22
2025-04-01T06:44:50.868942
{ "authors": [ "marnovandermaas" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/pull/16852", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1762261776
[dv, chip-test] Update VIO POK paths The VIO POK path for both VIOA and VIOB in the test needs to be updated due to changes in the AST's RTL in the closed source. CC: @sha-ron
gharchive/pull-request
2023-06-18T12:07:09
2025-04-01T06:44:50.870222
{ "authors": [ "walaahattab" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/pull/18950", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2322490780
[rom] Initialize output argument when loading boot_data Fix #22927. Initialize the boot_data struct to all zeros before calling boot_data_page_info_update which expects an in-out argument. I missed that precondition. Thanks for reviewing!
gharchive/pull-request
2024-05-29T06:37:57
2025-04-01T06:44:50.871353
{ "authors": [ "sasdf" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/pull/23354", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2343493525
[prim_fifo_sync/rtl] Fix err_o behavior

When an err_o is raised by the u_fifo_cnt (there is a ptr error inside this module), we should not raise the rvalid_o signal of the FIFO and we should not forward rdata_o = rdata_int, as this data is not valid. Instead, set rdata_o = '0 and deassert the rvalid_o signal.

Closes lowRISC/opentitan#23578

I am slightly wary of killing the valid in the presence of an error. In particular I worry this will cause some issue where a bus response vanishes, which could lock things up. I note you've just closed this so perhaps you're already doing a different fix anyway?
gharchive/pull-request
2024-06-10T10:07:06
2025-04-01T06:44:50.873026
{ "authors": [ "GregAC", "nasahlpa" ], "repo": "lowRISC/opentitan", "url": "https://github.com/lowRISC/opentitan/pull/23595", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1833453970
Fusion events Looks cool. Would this plugin be capable of handling fusion events? I have a custom-built tool in Python for detecting mitochondrial fission and fusion events and am looking for a tool for plotting mitochondrial lineages. Hi @akas0018 - yes, it should work, although the functionality is a little experimental. You can see more in the PR where we added the mergin functionality: https://github.com/lowe-lab-ucl/arboretum/pull/94
gharchive/issue
2023-08-02T15:53:29
2025-04-01T06:44:50.874423
{ "authors": [ "akas0018", "quantumjot" ], "repo": "lowe-lab-ucl/arboretum", "url": "https://github.com/lowe-lab-ucl/arboretum/issues/105", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
273602179
Add multiple devices

Hey everyone, great code - but how do I add different speakers / devices? Thanks and cheers

Did it, so if someone is interested: just ignore the required parameters via optimist and add the devices directly in the file, e.g. play_stdin.js in /examples

var host = '192.168.1.11';
var device = airtunes.add(host, {
  port: 5000,
  volume: 100
});

var host2 = '192.168.1.17';
var device2 = airtunes.add(host2, {
  port: 1024,
  volume: 100
});
gharchive/issue
2017-11-13T22:33:36
2025-04-01T06:44:50.882294
{ "authors": [ "7rncs" ], "repo": "lperrin/node_airtunes", "url": "https://github.com/lperrin/node_airtunes/issues/44", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
247371267
Align closing 'end' of anonymous functions in argument list with top-level call

Currently, we are aligning the starting -> and the end tokens of anonymous functions when passing them as arguments.

Current formatting:

test "fn in long function calls" do
  """
  Enum.find([1,2,3,4], fn num -> rem(num, 2) == 0 end)
  """ ~> """
  Enum.find [1, 2, 3, 4], fn num -> rem(num, 2) == 0 end
  """

  """
  Logger.debug fn -> "Hey this is a long log message!" end
  """ ~> """
  Logger.debug fn -> "Hey this is a long log message!" end
  """
end

Desired formatting:

The idiomatic way usually employed (example in the style guide) aligns the end token with the calling function.

Could be a bit of a tricky one. :)
2017-08-02T12:25:24
2025-04-01T06:44:50.884636
{ "authors": [ "jfornoff", "lpil" ], "repo": "lpil/exfmt", "url": "https://github.com/lpil/exfmt/issues/54", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
430542855
[In Review] Fix for domain command name exercisePart

Checklist

[x] I run yarn run webpack:build:main: the project builds without errors.
[x] I run yarn lint: the project builds without code style warnings.
[ ] I updated the documentation and models.
[x] I tested the changes and all related features on the test server https://artemistest.ase.in.tum.de.
[ ] I added (end-to-end) test cases for the new functionality.

Motivation and Context

In the last PR for integrating the markdown-editor into the programming-exercises I replaced the checkmark in the problem statement with its unicode counterpart (it wasn't stored properly in my database). However this change doesn't make sense thinking about it, we should directly exchange it with a domainCommand (even though we don't have a button for it yet).

Description

- Adjusted the problem statement template with the new domainCommand [exercisePart]
- Changed the text parser to use this domain command

One note: The problem statements of programming exercises that were created since the merge of the integration PR for the markdown-editor will not show the exercise part information, as I don't think it is feasible to do an additional check for checkmarks and replace them with the domainCommands. It should be fine if nobody has created a real problem statement yet.

Steps for Testing

1. Login to ArTEMiS
2. Create a new programming exercise
3. Use the preview of the markdown-editor for the problem-statement: It should show the correct number of exerciseParts

Instead of calling it exercisePart, I would rather call it task. Could you change it?

@krusche I also prefer task! I just pushed a commit.
gharchive/pull-request
2019-04-08T16:23:23
2025-04-01T06:44:50.894823
{ "authors": [ "krusche", "thilo-behnke" ], "repo": "ls1intum/ArTEMiS", "url": "https://github.com/ls1intum/ArTEMiS/pull/258", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
690327496
Inconsistent usage of "back" buttons

Describe the bug

The Artemis frontend has some pages which have a "back" button and some which do not have a "back" button. Also, there are different "back" buttons in use.

To Reproduce

1. Go to Course Management -> Tutor Course Dashboard (for any course). There is a white arrow at the top left which brings one back to the Course Management view.
2. Go to Course Management -> Instructor Dashboard (for any course). This page has no "back" button, even though it is quite similar to the Tutor Course Dashboard.
3. Go to Course Management -> View (for any course). There is a colored button which reads "Back" and has an arrow in it at the bottom.
4. Go into the code editor. There is no "back" button.

Expected behavior

The frontend should be comfortably navigable without having to use the browser's "back" functionality. Currently, only some pages allow this; others force the user to make use of the browser's "back" feature or click on Course Overview or Course Management, which means some unnecessary clicks in some cases.

Screenshots

- Tutor Course Dashboard: "back" button at the top left
- Instructor Dashboard: no "back" button
- Course Overview: "Back" button at the bottom
- Code Editor: no "back" button

I would argue that it's not necessary to "reimplement" the back button of the browser. Why should we do this, if the browser offers this functionality and we don't need to write even one line of code? I would rather suggest that we use breadcrumb navigation as part of the black navigation bar (at the top), e.g. by using https://getbootstrap.com/docs/4.0/components/breadcrumb/. Then we could remove all custom back buttons as they are not necessary any more!

I agree, this method of navigation gives more choice and information to the users and can be implemented consistently on all pages. It is a feature I would like to have. When proposing the use of a back button, I thought that it might make it easier for users to know how they can navigate Artemis, but breadcrumb navigation also does a better job at this. Should I adapt this issue or close it and create a new one for the breadcrumb navigation feature?

This issue is no longer relevant since breadcrumb navigation seems to be superior in every way. A corresponding feature request can be found in #2097.
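For illustration, a breadcrumb trail like the Bootstrap component linked above can be derived mechanically from the route path. This sketch is hypothetical and not Artemis code; the route segments are made-up examples:

```javascript
// Build breadcrumb entries (label + link) from a route path, so every
// page gets consistent "back" navigation without per-page buttons.
function breadcrumbs(path) {
  const parts = path.split('/').filter(Boolean);
  return parts.map((part, i) => ({
    label: part,
    link: '/' + parts.slice(0, i + 1).join('/'),
  }));
}

console.log(breadcrumbs('/course-management/42/tutor-dashboard'));
// Three entries: /course-management, /course-management/42,
// and /course-management/42/tutor-dashboard.
```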
gharchive/issue
2020-09-01T17:41:20
2025-04-01T06:44:50.903938
{ "authors": [ "danielrainer", "krusche" ], "repo": "ls1intum/Artemis", "url": "https://github.com/ls1intum/Artemis/issues/2083", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1314533530
Quiz exercises: Disable save quiz button if no changes are present

Checklist

General
[ ] This is a small issue that I tested locally and was confirmed by another developer on a test server.
[x] Language: I followed the guidelines for inclusive, diversity-sensitive, and appreciative language.
[x] I chose a title conforming to the naming conventions for pull requests.

Client
[x] I followed the coding and design guidelines and ensured that the layout is responsive.
[x] Following the theming guidelines, I specified colors only in the theming variable files and checked that the changes look consistent in both the light and the dark theme.
[x] I added multiple integration tests (Jest) related to the features (with a high test coverage), while following the test guidelines.
[x] I added authorities to all new routes and checked the course groups for displaying navigation elements (links, buttons).
[x] I documented the TypeScript code using JSDoc style.
[x] I translated all newly inserted strings into English and German.

Motivation and Context

Close #5159. Also a small cleanup of the quiz-exercise-detail-component which had some old and unused code/variables.

Description

- Fixed the issue where the 'saved entity' was not created correctly and therefore the 'Save' button would always be enabled.
- Also fixed the issue where even after undoing changes the 'Save' button would still be enabled.
- Removed the initCompleted and the 'entity' variables which aren't used anymore.

There are some cases where making a change and then undoing it will still cause the 'Save' button to be enabled. These are:
- adding/removing categories
- changing the Batch mode type
- changing the content/text of any of the questions

(This can be handled in a future PR if the effort is considered worth it.)

Steps for Testing

You can just follow the steps in #5159. Additionally, you can test the other fixes:

Prerequisites: 1 Instructor

1. Log in to Artemis
2. Navigate to Course Administration
3. Navigate to the exercises page for a course
4. Click 'Edit' for a quiz exercise (if there's no quiz then create one first)
5. Check that the 'Save' button is disabled at first
6. Check that changing some fields enables the 'Save' button
7. Check that after undoing the changes done in the previous step the 'Save' button is disabled again (there are some exceptions described above)

Review Progress

Code Review
[x] Review 1
[ ] Review 2

Manual Tests
[x] Test 1
[x] Test 2

Test Coverage

Class/File: quiz-exercise-detail-component.ts (Branch: 80%, Line: 95%)

Screenshots

tested on ts2, works well. however, when editing and removing the edit of the content of a question (eg. MC question content), the save button remains enabled

As described in the 'Description' section, there are still some cases where the 'Save' button will still be enabled after making changes and reverting them, and changing the content of a question is one of them.

Tested on TS1. It works as expected. During the creation of quizzes, the save button is unavailable if there are no changes. However, if I open the quiz to edit, by default (immediately) it shows unsaved changes. If you can fix this in this PR it would be super. I approved anyway because you solved the issue that opened this PR.

I can't reproduce this. When editing an existing quiz, the Save button starts disabled and only becomes enabled after making changes.
gharchive/pull-request
2022-07-22T07:34:35
2025-04-01T06:44:50.916405
{ "authors": [ "manuelmanso" ], "repo": "ls1intum/Artemis", "url": "https://github.com/ls1intum/Artemis/pull/5455", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1024432888
Tickets/dm 32137 Initial thoughts. I'd like some feedback before developing further @leannep I can't assign you as a reviewer on your own PR, so this is just a ping to be sure you've seen this. Not a big rush.
gharchive/pull-request
2021-10-12T21:26:11
2025-04-01T06:44:50.952762
{ "authors": [ "gpdf", "leannep" ], "repo": "lsst-dm/dmtn-202", "url": "https://github.com/lsst-dm/dmtn-202/pull/1", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1768493261
(site/ls/role/daq-mgt) updated BTS daq to R5-V6.5 Looking to update the BTS DAQ to R5-V6.5 in coordination with CCS. Ran puppet from branch and looks fine. Looks like I need a merge now, though. @jgthayer Wouldn't this put BTS ahead of TTS? Yes, excepting that BTS is not in use, and for this is leading. This is also why Tony and I can do this without bringing it up at CAP.
gharchive/pull-request
2023-06-21T21:11:26
2025-04-01T06:44:50.954711
{ "authors": [ "jgthayer", "jhoblitt" ], "repo": "lsst-it/lsst-control", "url": "https://github.com/lsst-it/lsst-control/pull/850", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1446469641
🛑 Mirror-爬 is down

In 2088bdf, Mirror-爬 (https://ncov.zhouym.tech/) was down:
HTTP code: 404
Response time: 934 ms

Resolved: Mirror-鱼露 is back up in 44b8883.
gharchive/issue
2022-11-12T14:17:41
2025-04-01T06:44:50.963529
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/10490", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1447118625
🛑 Mirror-爬 is down In ce29222, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 880 ms Resolved: Mirror-鱼露 is back up in 41cf44b.
gharchive/issue
2022-11-13T22:32:41
2025-04-01T06:44:50.966168
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/10617", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1476033033
🛑 Mirror-爬 is down In 32e60df, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 1959 ms Resolved: Mirror-鱼露 is back up in c8f281c.
gharchive/issue
2022-12-05T08:47:01
2025-04-01T06:44:50.969090
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/12652", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1481656062
🛑 Mirror-爬 is down In 0732e2d, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 946 ms Resolved: Mirror-鱼露 is back up in 878b49a.
gharchive/issue
2022-12-07T11:12:40
2025-04-01T06:44:50.971569
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/12861", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1205281593
🛑 Mirror-没有女朋友跨年的群傻逼 is down In f576601, Mirror-没有女朋友跨年的群傻逼 (https://x.ksfu.top/) was down: HTTP code: 0 Response time: 0 ms Resolved: Mirror-木生 is back up in b95b666.
gharchive/issue
2022-04-15T05:32:44
2025-04-01T06:44:50.974034
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/1753", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1326954411
🛑 Mirror-爬 is down In 111dfba, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 1400 ms Resolved: Mirror-鱼露 is back up in 01ce3ad.
gharchive/issue
2022-08-03T09:44:24
2025-04-01T06:44:50.976497
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/3626", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1336825308
🛑 Mirror-爬 is down In 71fdb15, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 1746 ms Resolved: Mirror-鱼露 is back up in c57eb1b.
gharchive/issue
2022-08-12T06:53:49
2025-04-01T06:44:50.978938
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/4391", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1345390183
🛑 Mirror-爬 is down In 64cd636, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 754 ms Resolved: Mirror-鱼露 is back up in 8ab9908.
gharchive/issue
2022-08-21T05:22:51
2025-04-01T06:44:50.981654
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/5143", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1384796043
🛑 Mirror-爬 is down In 7c075a5, Mirror-爬 (https://ncov.zhouym.tech/) was down: HTTP code: 404 Response time: 844 ms Resolved: Mirror-鱼露 is back up in abfe6a0.
gharchive/issue
2022-09-24T19:59:46
2025-04-01T06:44:50.984091
{ "authors": [ "lsy223622" ], "repo": "lsy223622/xdncov-mirror-status", "url": "https://github.com/lsy223622/xdncov-mirror-status/issues/7543", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1497789221
🛑 Školský šport is down In 5bb0e79, Školský šport (https://skolskysport.sk/) was down: HTTP code: 403 Response time: 1022 ms Resolved: Školský šport is back up in a8046b1.
gharchive/issue
2022-12-15T04:59:48
2025-04-01T06:44:51.033438
{ "authors": [ "lubosm" ], "repo": "lubosm/minedusk-uptime", "url": "https://github.com/lubosm/minedusk-uptime/issues/677", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1179913457
🛑 DGC prod is down In 50e43a2, DGC prod (https://digichanges.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: DGC prod is back up in a208341.
gharchive/issue
2022-03-24T18:50:10
2025-04-01T06:44:51.037284
{ "authors": [ "lucalop" ], "repo": "lucalop/monitoring", "url": "https://github.com/lucalop/monitoring/issues/146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1289623393
🛑 DGC dev is down In b5b2b49, DGC dev (https://develop.digichanges.com/) was down: HTTP code: 403 Response time: 263 ms Resolved: DGC dev is back up in e5a7fb7.
gharchive/issue
2022-06-30T05:54:07
2025-04-01T06:44:51.039624
{ "authors": [ "lucalop" ], "repo": "lucalop/monitoring", "url": "https://github.com/lucalop/monitoring/issues/278", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
351916250
Expose underlying data stream in h2quic

Background

When using HTTP/2, http.Request and http.ResponseWriter do not implement http.Hijacker. This makes sense in the context of TCP because the stream multiplexing happens outside of the wire protocol; the call to Hijack would return a TCP connection over which multiple request/response cycles travel, not just the one being hijacked. As such, it is often said that HTTP/2 and Hijack are fundamentally incompatible.

In the context of QUIC, this is no longer true. Each request/response cycle is mapped to its own OSI Layer-4 stream, which can be manipulated independently of others. As such, HTTP/2 over QUIC is not fundamentally incompatible with Hijack-like semantics.

Proposal

I would like to propose a mechanism for exposing the quic.Stream that underpins HTTP/2 over QUIC, similar to the Hijack semantics in net/http. The use-cases for this feature are mostly the same ones as for WebSocket. Nevertheless, there are a few important design considerations:

This is NOT a proposal to implement WebSocket. The intent is not to expose message-oriented semantics, nor any of the special features proposed by the protocol. The WebSocket spec contains too many corner-cases that map onto QUIC and HTTP/2 quite poorly. Rather, the idea is to expose the quic.Stream directly, enabling bidirectional streaming.

This is NOT a proposal to implement http.Hijacker. Implementing http.Hijacker sends a strong message that this library can be used interchangeably with standard transports. This is likely to break things that don't expect separate streams for headers and data, e.g. gorilla/websocket. I propose instead to make the data stream available to those who are intentionally looking for it, while making it clear this "thing" behaves differently from Hijack.

Instead of implementing Hijacker, I propose the following:

type DataStreamer interface {
    DataStream() quic.Stream
}

I have to admit to a small bit of pleasure that the interface is so simple.
Used in conjunction with http.ResponseWriter, application developers can both send response headers and enjoy bidirectional streaming with a familiar interface. Consider the following example, which shows a homegrown protocol for negotiating such a connection upgrade:

func handler(w http.ResponseWriter, r *http.Request) {
    if r.Header.Get("Connection") != "Upgrade" || r.Header.Get("Upgrade") != "QUIC-Stream" {
        http.Error(w, "raw QUIC stream required", http.StatusUpgradeRequired)
        return
    }

    w.WriteHeader(http.StatusSwitchingProtocols) // this writes to the header stream
    s := w.(h2quic.DataStreamer).DataStream()
    useStream(s)
}

Final note

@lucas-clemente, @marten-seemann and contributors: I wanted to open a discussion before submitting a PR. I already have this working on a private branch, which I have been happily using for several months, so I'm happy to submit this implementation for review if you're okay with this in principle.

Cheers, Louis

Hello @lthibault, that sounds like a reasonable proposal. Thanks for writing this up, I wish every feature request came with such a well thought out description.

I'm probably missing something obvious here, but I'm wondering why you're not using a POST request? That should allow you to do bidirectional streaming as well, doesn't it? Are you concerned about the H2 framing overhead?

It would be great if you could send us a PR with your implementation. I'm particularly curious how you worked around this gQUIC hack https://github.com/lucas-clemente/quic-go/blob/ad9618ef9e878bbc0f820bd133869dfbb80d0190/h2quic/server.go#L205-L209

Closing due to inactivity.

Reopening, as this would be very useful for MASQUE.

The interesting problem to solve here is how to do the same thing on the client. Maybe have the http.Request.Body also implement DataStreamer?
The interesting problem to solve here is how to do the same thing on the client. Maybe have the http.Request.Body also implement DataStreamer? Yes, please add this as well. Looking at the code it shouldn't have many changes.
gharchive/issue
2018-08-19T16:35:48
2025-04-01T06:44:51.053880
{ "authors": [ "lthibault", "marten-seemann", "superfashi" ], "repo": "lucas-clemente/quic-go", "url": "https://github.com/lucas-clemente/quic-go/issues/1492", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
990253448
List projects using quic-go Added a table with a few prominent projects which use quic-go. That was at least the criteria i used. Btw it's quite hard to find those projects as Githubs code search doesn't allow to filter by project stars
gharchive/pull-request
2021-09-07T18:47:36
2025-04-01T06:44:51.055223
{ "authors": [ "bt90" ], "repo": "lucas-clemente/quic-go", "url": "https://github.com/lucas-clemente/quic-go/pull/3266", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
234125921
order log levels ascending

Make the log level numbers more intuitive. Logging nothing is now log level 0, debug logging is level 3.

We'll need to change the wiki again ;)

This is a breaking change, so I'd like to get a bit more reasoning before we do this. I agree that probably our initial choice was weird, but is there a clear advantage of reversing the ordering at this time? Second, maybe exposing numeric log levels as part of the API is a bad idea in general? What do you think about QUIC_GO_LOG_LEVEL=DEBUG instead? Also, this should be in the changelog.

Codecov Report

Merging #647 into master will increase coverage by 0.1%. The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #647     +/-   ##
=========================================
+ Coverage   91.25%   91.35%    +0.1%
=========================================
  Files          80       80
  Lines        5555     5608      +53
=========================================
+ Hits         5069     5123      +54
+ Misses        305      302       -3
- Partials      181      183       +2

Impacted Files     Coverage  Δ
utils/log.go       100%      <100%>  (ø) :arrow_up:
h2quic/client.go   91%       <0%>    (+3.93%) :arrow_up:

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 500d988...e339226. Read the comment docs.

@lucas-clemente: Please have another look.

@lucas-clemente: I just added a Changelog entry. Please approve this PR, then I'll merge it once #648 is merged.

Thanks for adding the line, but I don't think it's particularly helpful like this (what is the change?!). What about:

Changed the log level environment variable to only accept strings ("DEBUG", "INFO", "ERROR"), see [the wiki](https://github.com/lucas-clemente/quic-go/wiki/Logging) for more details.

Otherwise LGTM.

Oh, and please update our and caddy's wiki as soon as this is merged, and close #633 :)

This broke my method to add a prefix to the log. I will write a PR to add a QUIC_GO_LOG_PREFIX ENV if you don't have any objections.

Sounds good to me.
gharchive/pull-request
2017-06-07T08:08:22
2025-04-01T06:44:51.065889
{ "authors": [ "codecov-io", "lucas-clemente", "marten-seemann", "twdkeule" ], "repo": "lucas-clemente/quic-go", "url": "https://github.com/lucas-clemente/quic-go/pull/647", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1322609693
🛑 Livieres Guggiari is down In 00dd27e, Livieres Guggiari (https://livieresg.com.py) was down: HTTP code: 0 Response time: 0 ms Resolved: Livieres Guggiari is back up in 448818a.
gharchive/issue
2022-07-29T19:32:56
2025-04-01T06:44:51.068767
{ "authors": [ "lucasboh" ], "repo": "lucasboh/upptime", "url": "https://github.com/lucasboh/upptime/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1011364802
Positional encoding frequency bands should be linearly spaced A small bug, but as alluded to in this comment by @marcdumon, it seems as though the frequency bands are indeed spaced linearly in the official JAX implementation. @djl11 thanks Daniel, i think you are correct :pray: let me know if this version looks correct to you https://github.com/lucidrains/perceiver-pytorch/releases/tag/0.7.4 Looks good!
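For reference, a minimal pure-Python sketch of the two band spacings (illustrative only; the library itself uses torch, and the `max_freq` / `num_bands` names here simply mirror the Perceiver hyperparameters):

```python
def freq_bands(max_freq, num_bands, linear=True):
    # Bands run from 1.0 up to max_freq / 2.
    lo, hi = 1.0, max_freq / 2
    if linear:
        # Linearly spaced bands, matching the official JAX implementation.
        step = (hi - lo) / (num_bands - 1)
        return [lo + i * step for i in range(num_bands)]
    # Log-spaced bands, the behaviour this issue reports as incorrect.
    ratio = (hi / lo) ** (1 / (num_bands - 1))
    return [lo * ratio ** i for i in range(num_bands)]

print(freq_bands(10, 4))                # evenly spaced from 1.0 to 5.0
print(freq_bands(10, 4, linear=False))  # same endpoints, bunched toward the low end
```

Per the thread, the 0.7.4 release linked above switches the plugin to the linear spacing.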
gharchive/issue
2021-09-29T19:27:32
2025-04-01T06:44:51.083269
{ "authors": [ "djl11", "lucidrains" ], "repo": "lucidrains/perceiver-pytorch", "url": "https://github.com/lucidrains/perceiver-pytorch/issues/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
340010035
installation issue

We keep getting this issue when installing zeppelin solr from scratch. This happens after following the steps mentioned on the readme, and then trying to create a notebook and running the "list" command:

org.apache.thrift.TApplicationException: Internal error processing createInterpreter
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_createInterpreter(RemoteInterpreterService.java:209)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.createInterpreter(RemoteInterpreterService.java:192)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:169)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

On further investigation I see this exception in the interpreter log:

Caused by: java.lang.ClassNotFoundException: com.lucidworks.zeppelin.solr.SolrInterpreter
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.createInterpreter(RemoteInterpreterServer.java:301)

Can anyone help me in understanding what I am doing wrong here? @kiranchitturi would you please advise?

I was able to get around this issue by adding com.lucidworks.zeppelin:zeppelin-solr:0.0.1-beta as an artifact while creating the interpreter. It would be good, though, to understand why it does not work as advertised on the readme.
gharchive/issue
2018-07-10T21:16:03
2025-04-01T06:44:51.098672
{ "authors": [ "aroopganguly", "kiranchitturi" ], "repo": "lucidworks/zeppelin-solr", "url": "https://github.com/lucidworks/zeppelin-solr/issues/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2201410075
🛑 Bitwarden is down In b4f5c43, Bitwarden (https://bitwarden.kerl.io) was down: HTTP code: 502 Response time: 559 ms Resolved: Bitwarden is back up in 80903f2 after 4 minutes.
gharchive/issue
2024-03-21T23:50:15
2025-04-01T06:44:51.101342
{ "authors": [ "lucienkerl" ], "repo": "lucienkerl/status", "url": "https://github.com/lucienkerl/status/issues/1650", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
341560813
[Request] Web editor command and interface minor addition Two parts here for my ideas, only tangentially related. The first: the web editor command doesn't have any way to edit everything - you can edit all users, all groups, all groups and any online users, or a specific user or group. It could be useful to be able to edit all users and groups in one session, for instance to perform a fix when some issues are in the groups and others are in the users but not all affected users are online. Alternatively, rearrange group permissions and then redefine group listings for all users. Maybe an /lp editor all command for this? The second: it would be really nice (although I imagine probably not easy) if the web editor had a way to show what the actual end permissions would be for a given user, after running the inheritance chain. Sometimes, I have to commit four, five, or even more changes from the editor to figure out how to get my users the permissions they should have (and negate the ones they shouldn't). It would just be nice if I could look at a page tab that calculates the full inheritance chain (on demand, not automatically, because I imagine this would be a possibly intensive process). These combined would create a nice easy-to-use graphical editor of all permissions on a server. It'd be extra nice if the second one could also display all detected permissions (like are shown in the popup/dropdown suggestions when typing one into the add permission box) for each user, and indicate whether it was true or false, maybe also pointing out where it was set to that (last change only is fine if tracing it all the way to the root isn't feasible, but it'd help with tracking down where a permission bug is coming from). Maybe also indicate where the "default permissions" stuff is applied when there's no override in the permissions hierarchy? 
It could be useful to be able to edit all users and groups in one session, for instance to perform a fix when some issues are in the groups and others are in the users but not all affected users are online.

That's basically what bulkedit/bulkupdate does.

The first: the web editor command doesn't have any way to edit everything - you can edit all users, all groups, all groups and any online users, or a specific user or group. It could be useful to be able to edit all users and groups in one session, for instance to perform a fix when some issues are in the groups and others are in the users but not all affected users are online. Alternatively, rearrange group permissions and then redefine group listings for all users. Maybe an /lp editor all command for this?

There are servers that use LP with 000,000's of unique players. It's not realistically feasible to load all users into the web editor at the same time. Just loading online players is a good compromise. As Andre said, if you want to edit all users/groups at once, bulkupdate should be your tool of choice.

The second: it would be really nice (although I imagine probably not easy) if the web editor had a way to show what the actual end permissions would be for a given user, after running the inheritance chain. Sometimes, I have to commit four, five, or even more changes from the editor to figure out how to get my users the permissions they should have (and negate the ones they shouldn't). It would just be nice if I could look at a page tab that calculates the full inheritance chain (on demand, not automatically, because I imagine this would be a possibly intensive process).

Perhaps the /lp tree command could help? https://github.com/lucko/LuckPerms/wiki/Command-Usage:-General#lp-tree

Running /lp tree . PrincessRTFM, for example, would output a list of all permissions, and highlight each entry depending on if you had the perm.
It's not really possible to implement all of this live on the browser, everything has to be calculated on the server, then packaged into a payload, to be sent to the web viewer and simply displayed. Bulk editing works for reasonably simple things, but once you get into needing to essentially rewrite half the permissions that half a dozen users have and move things from user to group and group to other group and the like, it gets a lot more complicated. On a small private server (mine is only for about seven people, including me) it'd be extremely useful. Still, understandable if it's not really feasible or worth the effort to implement. Possible alternative: can I open two editor sessions at once, so I have one page for all users and one for all groups? Because I saw that I can do those individually. You can separate them out, but for the reasons explained above, it is not possible to list all known users. LP is written to accommodate small servers and large servers alike.
gharchive/issue
2018-07-16T15:13:02
2025-04-01T06:44:51.109296
{ "authors": [ "Andre601", "PrincessRTFM", "lucko" ], "repo": "lucko/LuckPerms", "url": "https://github.com/lucko/LuckPerms/issues/1101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2561395146
Assertion error during edit mode on second list

Description

An error occurs when trying to perform actions in edit mode on a second list. The issue arises when the SwipeActionCell is used on two different lists, where one of the lists doesn't use a controller or index, but the other one does. When entering the edit mode on the second list (which has the controller and index), the following assertion error appears in the console:

[ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: 'package:flutter_swipe_action_cell/core/cell.dart': Failed assertion: line 306 pos 14: 'widget.controller != null && widget.index != null': is not true.
E/flutter (26883): #0      _AssertionError._doThrowNew (dart:core-patch/errors_patch.dart:50:61)
E/flutter (26883): #1      _AssertionError._throwNew (dart:core-patch/errors_patch.dart:40:5)
E/flutter (26883): #2      SwipeActionCellState._listenEvent.<anonymous closure> (package:flutter_swipe_action_cell/core/cell.dart:306:14)
E/flutter (26883): #3      _RootZone.runUnaryGuarded (dart:async/zone.dart:1594:10)
E/flutter (26883): #4      CastStreamSubscription._onData (dart:_internal/async_cast.dart:85:11)
E/flutter (26883): #5      _RootZone.runUnaryGuarded (dart:async/zone.dart:1594:10)
E/flutter (26883): #6      _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:365:11)
E/flutter (26883): #7      _BufferingStreamSubscription._add (dart:async/stream_impl.dart:297:7)
E/flutter (26883): #8      _ForwardingStreamSubscription._add (dart:async/stream_pipe.dart:123:11)
E/flutter (26883): #9      _WhereStream._handleData (dart:async/stream_pipe.dart:195:12)
E/flutter (26883): #10     _ForwardingStreamSubscription._handleData (dart:async/stream_pipe.dart:153:13)
E/flutter (26883): #11     _RootZone.runUnaryGuarded (dart:async/zone.dart:1594:10)
E/flutter (26883): #12     _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:365:11)
E/flutter (26883): #13     _DelayedData.perform (dart:async/stream_impl.dart:541:14)
E/flutter (26883): #14     _PendingEvents.handleNext (dart:async/stream_impl.dart:646:11)
E/flutter (26883): #15     _PendingEvents.schedule.<anonymous closure> (dart:async/stream_impl.dart:617:7)
E/flutter (26883): #16     _microtaskLoop (dart:async/schedule_microtask.dart:40:21)
E/flutter (26883): #17     _startMicrotaskLoop (dart:async/schedule_microtask.dart:49:5)
E/flutter (26883):

Steps to Reproduce

1. Create two separate lists using the SwipeActionCell widget. The lists can be on different screens, between which you navigate via Navigator.of(context).push.
2. For the first list, do not provide a controller or index.
3. For the second list, provide a SwipeActionController and set the index.
4. Enter the edit mode on the second list and perform controller.selectCellAt.
5. Observe the error in the console.

Code sample

import 'package:flutter/material.dart';
import 'package:flutter_swipe_action_cell/flutter_swipe_action_cell.dart';

class BugReproduce extends StatefulWidget {
  const BugReproduce({super.key});

  @override
  State<BugReproduce> createState() => _BugReproduceState();
}

class _BugReproduceState extends State<BugReproduce> {
  final SwipeActionController _swipeActionController = SwipeActionController();
  final SwipeActionController _anotherActionController = SwipeActionController();

  final List<String> words = <String>[
    'Apple',
    'Banana',
    'Cherry',
    'Date',
  ];

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              itemCount: 5,
              itemBuilder: (context, index) {
                final number = index + 1;
                return SwipeActionCell(
                  key: ObjectKey(number),
                  // Uncomment these lines to fix the issue
                  // index: index,
                  // controller: _anotherActionController,
                  trailingActions: [
                    SwipeAction(
                      onTap: (handler) async {
                        await handler(false);
                      },
                      color: Colors.grey,
                      icon: const Icon(Icons.edit, color: Colors.white),
                    )
                  ],
                  child: ListTile(
                    title: Text("Number $number"),
                  ),
                );
              },
            ),
          ),
          Expanded(
            child: ListView.builder(
              itemCount: words.length,
              itemBuilder: (context, index) {
                final word = words[index];
                return SwipeActionCell(
                  key: ObjectKey(word),
                  index: index,
                  controller: _swipeActionController,
                  child: ListTile(
                    title: Text(word),
                    onLongPress: () {
                      _swipeActionController.startEditingMode();
                      _swipeActionController.selectCellAt(indexPaths: [index]);
                    },
                  ),
                );
              },
            ),
          ),
        ],
      ),
    );
  }
}

Workaround

Uncommenting the index and controller lines in the first list resolves the issue. However, the library should handle the case when these properties are not provided to prevent this assertion error.

Expected Behavior

SwipeActionCell should not conflict with each other.

Plz use new version on pub 3.1.5
gharchive/issue
2024-10-02T11:55:39
2025-04-01T06:44:51.122151
{ "authors": [ "SoMWbRa", "luckysmg" ], "repo": "luckysmg/flutter_swipe_action_cell", "url": "https://github.com/luckysmg/flutter_swipe_action_cell/issues/71", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2282430952
Crash when InputDevice of SpeechRecognition is set to anything other than Default

The device ID is taken from the Get Available Audio Input Devices function; Is Audio Input Device ID Valid returns True, so the ID is correct.

Callstack

Unhandled exception at 0x00007FFCD2C253AC in UnrealEditor.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000BCAD62F930.

  KernelBase.dll!00007ffcd2c253ac() Unknown
  vcruntime140.dll!00007ffcbfab6ba7() Unknown
> UnrealEditor-AzSpeech.dll!__azac_handle_native_ex(unsigned __int64 hr, bool throwException) Line 709 C++
  [Inline Frame] UnrealEditor-AzSpeech.dll!__azac_rethrow(unsigned __int64) Line 724 C++
  UnrealEditor-AzSpeech.dll!Microsoft::CognitiveServices::Speech::SpeechRecognizer::FromConfig(std::shared_ptr<Microsoft::CognitiveServices::Speech::SpeechConfig> speechconfig, std::shared_ptr<Microsoft::CognitiveServices::Speech::Audio::AudioConfig> audioInput) Line 100 C++
  UnrealEditor-AzSpeech.dll!FAzSpeechRecognitionRunnableBase::InitializeAzureObject() Line 223 C++
  UnrealEditor-AzSpeech.dll!FAzSpeechRunnableBase::Run() Line 86 C++
  [Inline Frame] UnrealEditor-AzSpeech.dll!FAzSpeechRecognitionRunnableBase::Run() Line 27 C++
  UnrealEditor-AzSpeech.dll!FAzSpeechRecognitionRunnable::Run() Line 21 C++
  UnrealEditor-Core.dll!FRunnableThreadWin::Run() Line 149 C++
  UnrealEditor-Core.dll!FRunnableThreadWin::GuardedRun() Line 71 C++
  kernel32.dll!00007ffcd4ba257d() Unknown
  ntdll.dll!00007ffcd55caa48() Unknown

Both Speech to Text with Default Options and Speech to Text with Custom Options crash.
gharchive/issue
2024-05-07T06:55:50
2025-04-01T06:44:51.123998
{ "authors": [ "yeczrtu" ], "repo": "lucoiso/UEAzSpeech", "url": "https://github.com/lucoiso/UEAzSpeech/issues/258", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1454026483
Check the viability and implement Motion Matching for character's animation https://docs.unrealengine.com/5.1/en-US/unreal-engine-5.1-release-notes/#animation There is no documentation about Pose Search and even less about Motion Matching. The implementation of this feature at the current time can affect the usability of the project.
gharchive/issue
2022-11-17T21:36:26
2025-04-01T06:44:51.125339
{ "authors": [ "lucoiso" ], "repo": "lucoiso/UEProject_Elementus", "url": "https://github.com/lucoiso/UEProject_Elementus/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
223613006
Users have inconsistent SuperParents

When a user is created, it gets a superparent of 0. When a user edits himself, the edit tries to be helpful, and fills in the superparent. I can't remember if there was ever a good reason to use a zero'd superparent for users.

It's true they're unnecessary as far as the root node is concerned. It's an optimization of the optimization (i.e. by omitting them, it makes the optimization, i.e. superparent, work better).

One solution is to switch edit code over to the 'safe' edit function, which should preserve hierarchies. Then a query needs to be done to correct the superparents. Something like this:

UPDATE sh_node SET superparent=0 WHERE type ='user';

(assuming the goal is to wipe them). Currently we can always look at that to know if a user has ever been edited, but we can also look at the versioning too.

I'm still not sure why this is a thing. At some point users became their own authors, but I'm not sure the superparents got fixed.
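The proposed cleanup query can be sanity-checked against a toy schema before running it for real. A minimal sketch using Python's sqlite3 (the live site presumably runs MySQL; only the sh_node table name and the columns visible in the snippet above are assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sh_node (id INTEGER PRIMARY KEY, type TEXT, superparent INTEGER)"
)
# An edited user (superparent filled in), a fresh user, and a non-user node.
conn.executemany(
    "INSERT INTO sh_node (type, superparent) VALUES (?, ?)",
    [("user", 7), ("user", 0), ("post", 3)],
)

# The cleanup from the issue: zero out superparents for users only.
conn.execute("UPDATE sh_node SET superparent=0 WHERE type='user'")

rows = conn.execute("SELECT type, superparent FROM sh_node ORDER BY id").fetchall()
print(rows)  # [('user', 0), ('user', 0), ('post', 3)]
```

Note that only the user rows are touched; other node types keep their superparent, which is the behaviour the issue asks for.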
gharchive/issue
2017-04-23T05:03:44
2025-04-01T06:44:51.142181
{ "authors": [ "mikekasprzak", "povrazor" ], "repo": "ludumdare/ludumdare", "url": "https://github.com/ludumdare/ludumdare/issues/745", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2562942849
Installation failed for new conda env with python 3.11 on win 11

Collecting conda-env-export
  Using cached conda_env_export-0.5.0-py2.py3-none-any.whl.metadata (6.7 kB)
Collecting conda==4.3.16 (from conda-env-export)
  Using cached conda-4.3.16.tar.gz (299 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [19 lines of output]
      WARNING: Your current install method for conda only supports conda as a python library. You are not installing a conda executable command or activate/deactivate commands. If your intention is to install conda as a standalone application, currently supported install methods include the Anaconda installer and the miniconda installer. If you'd still like for setup.py to create entry points for you, use `utils/setup-testing.py`.
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\yuliu\AppData\Local\Temp\pip-install-kowiebl5\conda_7743ed4164444e73a2a02a50d8b5bfe4\setup.py", line 35, in <module>
          import conda._vendor.auxlib.packaging  # NOQA
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\yuliu\AppData\Local\Temp\pip-install-kowiebl5\conda_7743ed4164444e73a2a02a50d8b5bfe4\conda\__init__.py", line 13, in <module>
          from .common.compat import iteritems, text_type
        File "C:\Users\yuliu\AppData\Local\Temp\pip-install-kowiebl5\conda_7743ed4164444e73a2a02a50d8b5bfe4\conda\common\compat.py", line 77, in <module>
          from collections import Iterable
      ImportError: cannot import name 'Iterable' from 'collections' (D:\ProgramData\Anaconda3\envs\cee-debug1\Lib\collections\__init__.py)
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Solved in v0.6.1 by auto-installing conda on the first run.
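The root cause in the traceback is generic to Python 3.10+: the ABC aliases were removed from the top-level `collections` module, so any code doing `from collections import Iterable` fails. A minimal sketch of the version-compatible import (illustrative only — conda 4.3.16 itself predates this convention):

```python
try:
    from collections import Iterable  # worked on Python <= 3.9
except ImportError:
    # Removed in Python 3.10+; the supported location since Python 3.3:
    from collections.abc import Iterable

assert issubclass(list, Iterable)
```

Code that needs to run on old and new interpreters typically uses exactly this try/except shim.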
gharchive/issue
2024-10-03T01:41:19
2025-04-01T06:44:51.152446
{ "authors": [ "luffy-yu" ], "repo": "luffy-yu/conda_env_export", "url": "https://github.com/luffy-yu/conda_env_export/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
161539125
Config is undefined

I'm getting an error:

ExceptionsManager.js:61 Cannot read property 'API_BASE_URL' of undefined

I tried this: import Config from 'react-native-config'; and this: let Config = require('react-native-config'); Debugging, I can see that Config is undefined. I tried with and without specifying ENVFILE. Ideas?

Odd, what version are you on? Did you do the ios and/or android setup?

Hi, I'm facing the same issue, did you manage to get to the bottom of this? Thanks, Janos

Actually it is caused by not running the rnpm command so an RTFM issue - apologies :)

No problem, thanks for sharing the solution!

rnpm-install info Android module react-native-config is already linked
rnpm-install info iOS module react-native-config is already linked

And Config.KEY_NAME is undefined. Only:

{ APPLICATION_ID: 'com.ignitern', VERSION_CODE: 1, VERSION_NAME: '1.0', FLAVOR: '', DEBUG: true, BUILD_TYPE: 'debug' }

Not the keys I set in .env. Maybe it is an Ignite issue, not react-native-config, but I will be happy to accept any information.

@Arelav Did you manage to get it to work? I'm facing this issue as well. On Android everything works great, but on iOS I get 'undefined' on Config keys. Also, the package is already linked.

Hey @efratyo, I'm facing a similar issue. On Android it is working great, but on iOS the Config object is empty. Did you find any solution?

I'm having the same issue! Did you guys find any solutions?

@tarun29061990 Did you get any solution on this?
gharchive/issue
2016-06-21T21:04:40
2025-04-01T06:44:51.161504
{ "authors": [ "Arelav", "efratyo", "jarecsni", "jpmazza", "pedro", "pmemids", "scalfs", "timscott" ], "repo": "luggit/react-native-config", "url": "https://github.com/luggit/react-native-config/issues/16", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
352420330
Integration with appcenter build

Hi, is there a possibility to integrate with appcenter through setting the ENVFILE? Setting it as a global variable doesn't work. Thanks

@ivanpagac any luck with this? I'm also trying to get my app set up in AppCenter but not sure how to approach setting it up to use react-native-config.

@jordanmkoncz for the ios its quite straightforward, just create a new scheme and use it in your build. With Android I assume, as per the code, the gradle describes the order of precedence for how to pick the right config file. For me setting up the ENVFILE in appcenter build was enough as this is a sys variable and is picked first. I was using the wrong global variable .ENVFILE (with the . at the beginning), that's why it was not picking it up.

@jordanmkoncz if you are still having issues, on iOS builds make sure to follow the instructions for adding build scripts to your build. RN Config uses a pre-build script on iOS to set the correct env file. The best I've found for Android is to use the ENV App Center provides.

@jordanmkoncz for the ios its quite straightforward, just create a new scheme and use it in your build. With Android I assume, as per the code, the gradle describes the order of precedence for how to pick the right config file. For me setting up the ENVFILE in appcenter build was enough as this is a sys variable and is picked first. I was using the wrong global variable .ENVFILE (with the . at the beginning), that's why it was not picking it up.

For Android, does this mean we can't tell the Microsoft Visual Studio App Center to look at the .env files we create then? I couldn't quite grasp what you were saying when you mentioned setting up the ENVFILE in appcenter build. Like you said, with iOS we can redirect the Build configuration to look at the different schemes, so it can read the specific .env file we've tied to that scheme.
I would rather not have to manually enter the environment variables into the build configuration in App Center, but instead have it read directly from a specific .env file for each of my branches.

any updates here? +1

@ivanpagac Can you please elaborate on "setting up the ENVFILE in appcenter build"? What exactly did you do? Thank you very much.

@ivanpagac @sabun123 any updates?

@sabun123 I faced your exact issue. What I found is that the env variables set in Appcenter build config can be accessed in very specific ways as shown here. I got it to work with react-native-config by creating an appcenter-pre-build.sh script:

#!/usr/bin/env bash

# Creates an .env from existing env files for use with react-native-config
# based on branch

if [ "$APPCENTER_BRANCH" == "master" ]; then
  cp .env.prod .env
else
  cp .env.dev .env
fi

printf "\n.env created with contents:\n"
cat .env

Thus achieving what you were looking for:

I would rather not have to manually enter the environment variables into the build configuration in App Center, but instead have it read directly from a specific .env file for each of my branches.
Lemme know if it helps

@vforvasile @platonish Alternatively, instead of tying the environment config to a branch name, we can specify the same in Appcenter config like this:

And then having a pre-build script in this form:

#!/usr/bin/env bash

# Creates an .env from ENV variables for use with react-native-config

if [ "$ENVIRONMENT_VARIABLE" = "prod" ]; then
  cp .env.prod .env
else
  cp .env.dev .env
fi

printf "\n.env created with contents:\n"
cat .env

Also the following script is very useful if you want to pick out certain ENV variables (matching a pattern) from Appcenter Build config and create a .env file:

#!/usr/bin/env bash

# Creates an .env from ENV variables for use with react-native-config

ENV_WHITELIST=${ENV_WHITELIST:-"^RN"}
printf "Creating an .env file with the following whitelist:\n"
printf "%s\n\n" "$ENV_WHITELIST"
set | egrep -e "$ENV_WHITELIST" | egrep -v "^_" | egrep -v "WHITELIST" > .env
printf "\n.env created with contents:\n"
cat .env

For example this will create a .env file which has RN_LABEL=APP_NAME alone (refer image)

If this works @arjithn you're my hero!

App Center works fine for me when I mention the ENVFILE in the Environment variables
@andreialecu point should be more loud and clear: The Environment variables section does not inject variables into your app So create an .env file programmatically via appcenter-pre-build.sh If you add the AppSecret to each .env file, you can access it by BuildConfig.ANDROID_APP_CENTER_SECRET in Android and NSString *appSecret = [ReactNativeConfig envFor:@”IOS_APP_CENTER_SECRET”]; in iOS. That way you don't even need to create Config.plist or add to asset. I wrote an article about it that may help: https://medium.com/@aureosouza0801/integrate-appcenter-and-react-native-config-85d44b7a3d2
gharchive/issue
2018-08-21T07:39:30
2025-04-01T06:44:51.175353
{ "authors": [ "YahiaJabeur", "andreialecu", "arjithn", "aureosouza", "chenop", "ivanpagac", "jacobbodkin", "jordanmkoncz", "platonish", "ronilitman", "sabun123", "vforvasile", "watadarkstar" ], "repo": "luggit/react-native-config", "url": "https://github.com/luggit/react-native-config/issues/282", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
590764727
Need to supply a boolean to AndroidManifest.xml I need to supply a boolean value to a library via the tag: <meta-data android:name="com.some-sdk.SomeFlag" android:value="@string/SOME_FLAG" /> Where SOME_FLAG is true or false. However, this gets deserialized as a string and the library doesn't accept it. I tried @bool/SOME_FLAG but compilation failed. Is there a way to get my env variable in as a boolean value for this library? Go To android>app>build.gradle indefaultConfig add this line manifestPlaceholders = [flagValue:project.env.get("SOME_FLAG").toBoolean()] and in AndroidManifest read it like this <meta-data android:name="com.some-sdk.SomeFlag" android:value="${flagValue}" />
gharchive/issue
2020-03-31T03:40:31
2025-04-01T06:44:51.178408
{ "authors": [ "ayaahmed177", "kodayashi" ], "repo": "luggit/react-native-config", "url": "https://github.com/luggit/react-native-config/issues/440", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2067789448
Waybar is not able to communicate with DBus service (at startup) It reach an internal timeout when calling a DBus service and then it exits and needs manual intervention in order to continue a normal working Probably it's a problem with the xdg portals that I recently modified a weeks ago Related
gharchive/issue
2024-01-05T17:49:55
2025-04-01T06:44:51.191326
{ "authors": [ "luisnquin" ], "repo": "luisnquin/nixos-config", "url": "https://github.com/luisnquin/nixos-config/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
455522565
tools/fetch.py fails. File can not be downloaded. Hi there. When I run fetch.py It fails with the following errors: [13.06.2019` 03:43:43.186][INFO][16608][cakechat.dialog_model.inference_model.InferenceCakeChatModel][130] Looking for the previously trained model [13.06.2019 03:43:43.186][INFO][16608 [cakechat.dialog_model.inference_model.InferenceCakeChatModel][131] Model params str: {"corpus_name": "processed_dialogs", "dense_dropout_ratio": 0.2, "epochs_num": 2, "hidden_layer_dim": 768, "input_context_size": 3, "input_seq_len": 30, "is_reverse_model": true, "optimizer": {"clipvalue": 5.0, "decay": 0.0, "epsilon": 1e-07, "lr": 6.0, "rho": 0.95}, "output_seq_len": 32, "token_embedding_dim": 128, "train_batch_size": 196, "training_callbacks": {"CakeChatEvaluatorCallback": {"eval_state_per_batches": 500}}, "training_data": "train_processed_dialogs", "validation_data": "context_free_validation_set,val_processed_dialogs", "voc_size": 101, "w2v_model": "train_processed_dialogs_window10_voc50000_vec128_sgTrue"} [13.06.2019 03:43:43.260][INFO][16608][cakechat.utils.s3.bucket][19] Getting file nn_models/reverse_cakechat_v2.0_keras_tf_617dfa4a1691.tar.gz from AWS S3 and saving it as /mnt/amadeus/chatbot/results/nn_models/reverse_cakechat_v2.0_keras_tf_617dfa4a1691.tar.gz [13.06.2019 03:43:44.084][WARNING][16608][cakechat.utils.s3.resolver.S3FileResolver][45] File can not be downloaded from AWS S3 because: An error occurred (404) when calling the HeadObject operation: Not Found [13.06.2019 03:43:44.085][ERROR][16608][cakechat.dialog_model.inference_model.InferenceCakeChatModel][136] Can't find previously trained model in /mnt/amadeus/chatbot/results/nn_models/reverse_cakechat_v2.0_keras_tf_617dfa4a1691 Thanks for all your work on this project! @MikaGoetze hi! I see, that vocab_size has changed in your case. That led to a different hash of the model (617dfa4a1691) and indeed there is no such model on AWS. 
If you want to fetch the pre-trained model for your experiments, make sure that you have an appropriate tokens index file here: cakechat/data/tokens_index/t_idx_processed_dialogs.json. If you don't have the original tokens index, you can do the following: Ah okay thanks so much! I can't believe I missed that. I knew I was doing something silly. Thanks a lot!
gharchive/issue
2019-06-13T03:51:54
2025-04-01T06:44:51.194779
{ "authors": [ "MikaGoetze", "nicolas-ivanov" ], "repo": "lukalabs/cakechat", "url": "https://github.com/lukalabs/cakechat/issues/62", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2028233360
perf: prevent multiple calls of M.refresh() by autocmds

eliminates performance bottleneck in big files

prevents multiple async calls to M.refresh(), by autocmds
refactor debounce timer

Overview

While profiling, I've discovered that M.refresh() is actually called twice many times. This should be fixed now.

Demo

Demo (run in bash), 50K LOCs:

cd PATH_TO_IBL_REPO
gh pr checkout 790
nvim -c "set ft=typescript" <(curl -L https://raw.githubusercontent.com/microsoft/TypeScript/main/src/compiler/parser.ts)

Related

#776

Profiling files

For comparison:

ibl.original.profile.txt - from master branch
ibl.skip-doublerefresh-new-timer.profile.txt - current PR

There are probably cases where this can be optimized, but this PR removes important functionality. The first call to debounced_refresh should not be debounced, or there is noticeable lag when first opening a buffer, or jumping to a section that was not rendered yet. Creating and removing the timer on each refresh is going to hurt performance. There are cases that need to take priority and rerender; just not allowing this will cause bugs.

There are cases that need to take priority and rerender, just not allowing this will cause bugs.

This is still true. Just not allowing the refresh is not possible. I took another look at the code, and there is one small issue I found that leads to multiple refresh calls. But when testing it did not happen often enough to make a difference I think. #791

There are cases that need to take priority and rerender, just not allowing this will cause bugs. This is still true. Just not allowing the refresh is not possible.

This PR skips M.refresh from running again if it's already running asynchronously by some other autocmd. Lua puts multiple M.refresh calls for the same buffer on the stack, producing huge overhead otherwise. Btw, the lag is still noticeable even with the old timer approach.

This PR skips M.refresh from running again if it's already running asynchronously by some other autocmd.
Which will cause bugs. Some events need to run even if the buffer is already being refreshed, like changes to the config. Every non-important event goes through the debounce already.

Lua puts multiple M.refresh calls for the same buffer on the stack, producing huge overhead otherwise.

If this happens to you, it probably means the execution of refresh takes longer than the debounce. In that case, you need to set debounce higher. I am fairly confident that there are no duplicate triggers of refresh that don't get debounced. If there are, please show me where.

The performance gain is obvious. I haven't observed any bugs so far. I wouldn't really want M.refresh() to race, regardless of what it does inside. Best.

The performance gain is obvious

You will get the same performance gain by increasing debounce. This is working like it should. You can easily test it by adding a counter to refresh, and setting debounce to a high value, like 1 second. Refresh won't be called more than once a second.

I wouldn't really want M.refresh() to race, regardless of what it does inside.

refresh updates the buffer linearly from top to bottom. It is safe, and necessary, to run it before the previous execution is done. But only in special cases; normal moving around the buffer will not cause this. Unless refresh takes longer than what debounce is set to (in which case you need to update your config), there won't be multiple calls to refresh for the same buffer at the same time. With the exception of special events like config changes.
gharchive/pull-request
2023-12-06T10:30:36
2025-04-01T06:44:51.213139
{ "authors": [ "hinell", "lukas-reineke" ], "repo": "lukas-reineke/indent-blankline.nvim", "url": "https://github.com/lukas-reineke/indent-blankline.nvim/pull/790", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
710566099
Investigate Testing with fixed dataset Can the figma api be mocked to test the whole plugin? Worked around this issue.
gharchive/issue
2020-09-28T20:45:34
2025-04-01T06:44:51.215489
{ "authors": [ "lukasoppermann" ], "repo": "lukasoppermann/design-tokens", "url": "https://github.com/lukasoppermann/design-tokens/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1084482680
Add keep css class option

Allow keeping some css classes from the dragged element and applying them to the placeholder.

Hey @heyyo-droid thank you for the PR. I left a few comments. I am thinking if it may make more sense to have an initialize placeholder function that the user can define. This would enable this use case but allow for more flexibility.

We would add an initPlaceholder: fn option. This would get the element it is a placeholder for and a clone of the placeholder, and would return the placeholder element which in turn would be attached to the dom and moved into position. What do you think?
gharchive/pull-request
2021-12-20T08:10:28
2025-04-01T06:44:51.217160
{ "authors": [ "heyyo-droid", "lukasoppermann" ], "repo": "lukasoppermann/html5sortable", "url": "https://github.com/lukasoppermann/html5sortable/pull/854", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
454114993
Allow FormationEnergy to take a dictionary of reactants and products. Also update the documentation to show the dictionary use. Irrelevant as of #95
gharchive/issue
2019-06-10T11:01:10
2025-04-01T06:44:51.218004
{ "authors": [ "lukasturcani" ], "repo": "lukasturcani/stk", "url": "https://github.com/lukasturcani/stk/issues/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
225821056
Ability to watch for changes

Hello,

Instead of having to inject the gallery component into my parent component, and appending

this.gallery.smallImages = this.galleryImages.map(img => img.small);
this.gallery.mediumImages = this.galleryImages.map(img => img.medium);
this.gallery.bigImages = this.galleryImages.map(img => img.big);
this.gallery.descriptions = this.galleryImages.map(img => img.description);

each time I update my gallery images, it would be nice if the component had a lifecycle hook "ngOnChanges."

StackOverflow for rationale: http://stackoverflow.com/questions/43742024/dynamic-images-with-angular-and-ngx-gallery?noredirect=1#comment74528706_43742024

Hi, please check version 0.8.1

Hi, I checked version 0.8.1 and I am not sure what I am to do. I update galleryImages, but nothing happens.

I have added example "14. Change Images" on the demo page https://lukasz-galka.github.io/ngx-gallery-demo/ and as you can see everything is working fine. You can check code for the demo page here https://github.com/lukasz-galka/ngx-gallery-demo and here is the app component https://github.com/lukasz-galka/ngx-gallery-demo/blob/master/src/app/app.component.ts

Maybe try to uninstall and install the plugin again?

npm uninstall ngx-gallery
npm cache clear
npm install --save ngx-gallery

Let me know if it helps.

I tried reinstalling the plug-in, I cleared my cache, I rebuilt the project. The gallery doesn't update. I do this throughout my code:

this.galleryImages.push({ 'small': asset.url, 'medium': asset.url, 'big': asset.url, 'description': asset.shotCode.description });

and I run this:

this.gallery.smallImages = this.galleryImages.map(img => img.small);
this.gallery.mediumImages = this.galleryImages.map(img => img.medium);
this.gallery.bigImages = this.galleryImages.map(img => img.big);
this.gallery.descriptions = this.galleryImages.map(img => img.description);

immediately after, otherwise the gallery will not update.

Yes, you are right, it is not working for push/unset or replace.
It is because of angular change detection: "During change detection, when Angular checks components' input properties for change, it uses (essentially) === for dirty checking. For arrays, this means the array references (only) are dirty checked." from http://stackoverflow.com/a/34799257/5946227

I will take a look at ngDoCheck https://angular.io/docs/ts/latest/guide/lifecycle-hooks.html#!#docheck and try to implement custom logic for change detection.

As you can see my use case does require checks to see if the array has been pushed/unset, or had elements replaced. Is there a temporary workaround for this issue? Thanks

As described here http://stackoverflow.com/a/34799257/5946227 "Assign a new array to rawLapsData whenever you make any changes to the array contents. Then ngOnChanges() will be called because the array (reference) will appear as a change." You have to create a new array on every push/unset/update. For example:

This will not work:

this.galleryImages.push({ 'small': asset.url, 'medium': asset.url, 'big': asset.url, 'description': asset.shotCode.description });

but this should work:

let newImages = this.galleryImages.slice(0, this.galleryImages.length);
newImages.push({ 'small': asset.url, 'medium': asset.url, 'big': asset.url, 'description': asset.shotCode.description });
this.galleryImages = newImages;

Thank you so much for fixing this issue!
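The `===` dirty check quoted in this thread can be reproduced in a few lines. Python's `is` plays the role of reference equality here — this is only an illustration of the concept, not Angular code:

```python
def input_changed(previous, current):
    """Reference (identity) comparison, as Angular's dirty check
    effectively uses for array inputs."""
    return previous is not current


images = [{"small": "a.png"}]
snapshot = images

images.append({"small": "b.png"})              # in-place mutation
mutated_detected = input_changed(snapshot, images)   # same reference -> False

replacement = images[:]                        # fresh array, as the workaround builds
replaced_detected = input_changed(images, replacement)  # new reference -> True
```

This is exactly why the `slice`-then-reassign workaround fires ngOnChanges while `push` on the existing array does not.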
gharchive/issue
2017-05-02T21:04:38
2025-04-01T06:44:51.227216
{ "authors": [ "lukasz-galka", "moshecmc" ], "repo": "lukasz-galka/ngx-gallery", "url": "https://github.com/lukasz-galka/ngx-gallery/issues/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
438598491
move all the libs importing to the entry bin script

I found that the mode of importing lib scripts from different libexec/lib scripts causes a really bad issue - some of the libs are imported and executed many times. This can cause other potential problems and probably affects performance.

CASE

In the bin\scoop.ps1 entry bin script, it imported lib\core.ps1 and lib\bucket.ps1: https://github.com/lukesampson/scoop/blob/2db651c13a83f1c0ebb53723716efb5d96018fc0/bin/scoop.ps1#L7-L9

In the libexec\scoop-bucket.ps1 script, it imported lib\core.ps1 and lib\bucket.ps1: https://github.com/lukesampson/scoop/blob/2db651c13a83f1c0ebb53723716efb5d96018fc0/libexec/scoop-bucket.ps1#L22-L23

And in the lib\bucket.ps1 script, it also imported lib\core.ps1: https://github.com/lukesampson/scoop/blob/2db651c13a83f1c0ebb53723716efb5d96018fc0/lib/buckets.ps1#L1

So, when I run scoop bucket list, the core lib lib\core.ps1 is actually imported FOUR times, and scoop will execute all of the code of lib\core.ps1 four times. You can simply add a line of code to verify it:

diff --git a/lib/core.ps1 b/lib/core.ps1
index 148f12539..6e6c54df9 100644
--- a/lib/core.ps1
+++ b/lib/core.ps1
@@ -10,6 +10,8 @@ if((test-path $oldscoopdir) -and !$env:SCOOP) {
     $scoopdir = $oldscoopdir
 }
 
+Write-Host "Hi"
+
 $globaldir = $env:SCOOP_GLOBAL, "$env:ProgramData\scoop" | Select-Object -first 1
 
 # Note: Setting the SCOOP_CACHE environment variable to use a shared directory

Then you will see 4 lines of Hi printed before the bucket list.

$ scoop bucket list
Hi
Hi
Hi
Hi
extras
main

Holy crap! And you will see all other libexec scripts are suffering this issue.

To be honest, I had realized this problem of scoop before I started to maintain pshazz. I have released a patch for pshazz, which was also affected, by changing to import all libs from the pshazz.ps1 entry script. I didn't make patches for scoop because of the main bucket extraction event.
And now I point out the issue to see if anyone has interest to fix this.

Best solution for this: we should keep the current behaviour where each module is usable standalone. So you could easily import lib\manifest.ps1 in some external script and everything will work fine (no need to manually import all other stuff which is required for the manifest module to function (core and autoupdate in the manifest file example)). I would suggest to adopt PowerShell modules for this. (Relates #2733)

Module Demo

# main.ps1
Import-Module "$PSScriptRoot\COSI.psm1" -Force
. "$PSScriptRoot\A.ps1"
Write-Host 'Should see imported message once'

# COSI.psm1
Write-Host 'MODULE IMPORTED' -f Yellow

# A.ps1
Import-Module "$PSScriptRoot\COSI.psm1"
Write-Host 'ANO' -f Yellow

Execute main and you will see only 1 message about imports (unless you specify the Force parameter in the A file).
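The include-guard idea behind Import-Module can be stated language-neutrally. A hedged Python sketch (the names are mine, not scoop's): top-level library code runs only on the first request, the way Import-Module without -Force skips already-imported modules, whereas plain dot-sourcing re-executes every time:

```python
_loaded = set()


def source_once(name, loader):
    """Run a library's setup only on the first request; later requests
    are no-ops, mimicking module-import caching / an include guard."""
    if name in _loaded:
        return False
    _loaded.add(name)
    loader()
    return True


core_runs = 0


def load_core():
    global core_runs
    core_runs += 1


# scoop.ps1, libexec\scoop-bucket.ps1 and lib\buckets.ps1 all ask for core:
for _ in range(4):
    source_once("core", load_core)
```

With a guard like this, the four import sites in the `scoop bucket list` example would still execute core's top-level code exactly once.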
gharchive/issue
2019-04-30T04:21:56
2025-04-01T06:44:51.283644
{ "authors": [ "Ash258", "h404bi" ], "repo": "lukesampson/scoop", "url": "https://github.com/lukesampson/scoop/issues/3418", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
102234000
upgrade dart to 1.11.3

Upgrade the Dart SDK from 1.11.1 to 1.11.3. also cc'ing @guillermooo

lgtm! Also: https://gist.github.com/guillermooo/f579fdb84aab4bc488da It's a script to automate the manifest update. It needs to be fixed to pin the version to a real version number, though.

Ah cool! I just found this guy: https://github.com/lukesampson/scoop-extras/blob/master/dart-dev.json. I didn't know there was a dart-dev install available. It looks like that needs updating as well.

I'll take a look at that. :+1:
gharchive/pull-request
2015-08-20T20:48:52
2025-04-01T06:44:51.286957
{ "authors": [ "deevus", "devoncarew", "guillermooo" ], "repo": "lukesampson/scoop", "url": "https://github.com/lukesampson/scoop/pull/470", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1342483274
refactor: replace >0 by !=0

What does this PR introduce? !=0 is a cheaper operation than >0 and safe to use on uint with solidity ^0.8.0 #242

Nice thanks! Do you know the gas difference @skimaharvey ?

@CJ42 I would guess 3 gas but I will double check when I have a min. I think that when using != you are using the ISZERO opcode which only needs 1 stack input vs less/greater than which needs 2 stack inputs (the value and 0 in our cases), so I guess the difference is the cost of pushing to the stack (3 gas).

Amazing thanks for this great explanation @skimaharvey ! 👌🏻
gharchive/pull-request
2022-08-18T02:21:09
2025-04-01T06:44:51.304357
{ "authors": [ "CJ42", "skimaharvey" ], "repo": "lukso-network/lsp-smart-contracts", "url": "https://github.com/lukso-network/lsp-smart-contracts/pull/263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
349763234
Improvements and fixes

I made some changes and fixes. Take a look! :nerd_face:

Improvements include:

New SFV version compatibility fixes
Allowing RTE to scan for large-file character scripts (like Zeku and G)
Single merged .exe file for each program (no separate .dll's)
Various quality-of-life improvements (like ToolTips on RTE, and pausing for MoveTool when no arguments are passed so you can read what it says)

Note about some of the commits: There was a weird end-of-line inconsistency (CRLF) thing that caused some of the diffs to show entire files being replaced, even if it was just a couple lines. :sweat:

Thanks DoctorBearPhD! All of these changes look good to me. The code was quite messy in places, I see you even cleaned it up a bit. "* Successful test of name conversion!" means that there are no breaking changes since the last version, right? If so, that's excellent!

About the diffs, I can't actually see what you changed in the files that are "replaced". It would be better if you could fix that somehow. It's going to be hard to find out what needs to be looked at later if your changes introduce new bugs.

Does "Single merged .exe file for each program (no separate .dll's)" mean that there's no separate MoveLib.dll anymore? I don't know if others are using movelib in their projects. I think sfvsim (https://gitlab.com/loic.petit/sfvsim) used to use it, but Loïc made his own tool I think (https://gitlab.com/loic.petit/sfvfile)? Would this change make it harder for others to use movelib in their projects?

Over to something completely different: I think MoveTool needs a new maintainer. I'm not really into SF5, never was actually. Especially after those responsible (Capcom? Valve?) dropped the ball on linux support even after it was officially announced (http://www.capcom-unity.com/combofiend/blog/2015/12/17/announcing-steam-os-support-for-sfv). I don't use Windows anymore, only linux, so maintaining MoveTool would also be difficult for me.
Testing it is more or less impossible for me as SF5 doesn't run in Wine. At least that was the case last time I checked. I think I've seen your nick before. Are you by any chance working on Mysterious Mod? I think MM is great and also one of the biggest projects that use MoveTool. Handing the MoveTool project over to someone working on MM feels like a natural move to me. Loïc (@WydD) also used MoveTool when he was making some of his tools I think. He also has a very detailed wiki about how all the values work (https://gitlab.com/loic.petit/sfvsim/wikis/home). If either of you, DoctorBearPhD and Loïc, or both, want to take over for me, that would be great. In that case, please clone this repository somewhere and give me the link. I will put it at the top of the readme so that everyone who comes here knows MoveTool moved and to where. Yeah I made my own movetool for sfvsim but I'm still using a modified version of movetool for sfvdiff (using bson not json) and for exploratory digging. I think that most of the work on Movetool has been made lately by @DoctorBearPhD (I only made a small change when season 3 started). Maybe we could just use his fork and start from there ? I'm not a MM contributor but he is and I think that MM is the prime user of Movetool now. That said, @DoctorBearPhD, you know that if you need help you know where to find me :) That sounds good to me, @WydD. Would that be okay with you @DoctorBearPhD? Correct; no breaking changes since last version. I'll look up some solutions to the whole-file-replaced issue. Correct, the single .exe means that users only need the .exe files to run the programs. I imagine if someone wants MoveLib.dll, they could get it from this repo? :man_shrugging: The visual studio output creates a .dll, when it builds MoveLib.sln so it could be added with release if needed. Why yes, I am working on MM :smiley: and made a GUI tool (for Windows) for modding SFV. Still needs work, so it's not released outside of the Discord server. 
Loïc's Wiki has saved me tons of hassle and taught me a bunch about the values. :pray: Sure, I'd love to take over as maintainer of MoveTool. :+1: It's been interesting, to say the least. :bear: In the diff viewer, it looks like you can click Diff Settings > Hide whitespace changes Thanks, that's good to hear! I'll update the readme to point people who come here to your fork. A GUI would be nice for sure, I know lots of people get scared by the json. Getting the dll from this repository means people would get an outdated version after a while though. Anyone can just get the updated source and build it of course, so I guess it's fine. You're right about the Diff Settings. I didn't know about that. I wouldn't worry about the "replaced" files then. As I've said, I'm not really into SF5, but I do watch some MM-stuff once in a while. It looks great. It looks like more fun than the regular game to be honest. Keep up the great work guys!
gharchive/pull-request
2018-08-11T20:57:25
2025-04-01T06:44:51.319758
{ "authors": [ "DoctorBearPhD", "WydD", "lullius" ], "repo": "lullius/MoveTool", "url": "https://github.com/lullius/MoveTool/pull/8", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2203557242
Crashes as soon as a FLAC is uploaded

I tried uploading a FLAC file, and then it crashed immediately.

Please describe your environment in detail; it works fine in my own testing.
gharchive/issue
2024-03-23T00:19:27
2025-04-01T06:44:51.321057
{ "authors": [ "NickeL9740", "lulu-ls" ], "repo": "lulu-ls/cloud-uploader", "url": "https://github.com/lulu-ls/cloud-uploader/issues/154", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2538768944
feat: The number of forks can be seen on the Repository page card I don't have any information about your design rules or design drawings, but it would be nice if we could see the number of forks as well as the number of stars. Added forks_count [done]
gharchive/issue
2024-09-20T13:00:34
2025-04-01T06:44:51.336535
{ "authors": [ "SpadeyDev", "tw4" ], "repo": "lumi-work/git.cow", "url": "https://github.com/lumi-work/git.cow/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1779629672
missing prototypes warnings when building on windows using msys2 I've manually built luafilesystem on window using msys2, a unix like environment on windows similar to cygwin. From gcc I get the following warnings. src/lfs.c:171:5: warning: no previous prototype for 'lfs_win32_pusherror' [-Wmissing-prototypes] 171 | int lfs_win32_pusherror(lua_State * L) | src/lfs.c:184:8: warning: no previous prototype for 'windowsToUnixTime' [-Wmissing-prototypes] 184 | time_t windowsToUnixTime(FILETIME ft) | src/lfs.c:192:5: warning: no previous prototype for 'lfs_win32_lstat' [-Wmissing-prototypes] 192 | int lfs_win32_lstat(const char *path, STAT_STRUCT * buffer) | I guess all the functions mentioned in the warnings should be declared as static. Below is the proposed change. In lfs.c at line 169 #ifdef _WIN32 static int lfs_win32_pusherror(lua_State * L) { int en = GetLastError(); lua_pushnil(L); if (en == ERROR_FILE_EXISTS || en == ERROR_SHARING_VIOLATION) lua_pushstring(L, "File exists"); else lua_pushstring(L, strerror(en)); return 2; } #define TICKS_PER_SECOND 10000000 #define EPOCH_DIFFERENCE 11644473600LL static time_t windowsToUnixTime(FILETIME ft) { ULARGE_INTEGER uli; uli.LowPart = ft.dwLowDateTime; uli.HighPart = ft.dwHighDateTime; return (time_t) (uli.QuadPart / TICKS_PER_SECOND - EPOCH_DIFFERENCE); } static int lfs_win32_lstat(const char *path, STAT_STRUCT * buffer) { WIN32_FILE_ATTRIBUTE_DATA win32buffer; if (GetFileAttributesEx(path, GetFileExInfoStandard, &win32buffer)) { if (!(win32buffer.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT)) { return STAT_FUNC(path, buffer); } buffer->st_mode = _S_IFLNK; buffer->st_dev = 0; buffer->st_ino = 0; buffer->st_nlink = 0; buffer->st_uid = 0; buffer->st_gid = 0; buffer->st_rdev = 0; buffer->st_atime = windowsToUnixTime(win32buffer.ftLastAccessTime); buffer->st_mtime = windowsToUnixTime(win32buffer.ftLastWriteTime); buffer->st_ctime = windowsToUnixTime(win32buffer.ftCreationTime); buffer->st_size = 0; return 0; } else { 
return 1; } } #endif Declare all these functions as static. @hishamhm Do you prefer a pull request here as well? This bug report has been open for more than a year now. The solution is really simple: add static to these three functions, lfs_win32_pusherror, windowsToUnixTime and lfs_win32_lstat, in order to fix the missing-prototypes warnings under Windows. I think it would be good to add this to CI first, showing the failures. Here's some prior art: https://github.com/lunarmodules/luasystem/blob/master/.github/workflows/build.yml It uses both MSVC as well as the MinGW/gcc toolchains (for PUC-Rio and LuaJIT Lua, respectively). I don't really know whether MSVC shows a -Wmissing-prototypes warning or something similar as well. If it doesn't, then this issue can easily be overlooked. According to workflows/build.yml, the MinGW/gcc toolchain is only being used for the LuaJIT build. But looking at it from a logical perspective: all functions in lfs.c which aren't declared with LFS_EXPORT are declared as static, with the exception of the following three Windows-specific functions: lfs_win32_pusherror windowsToUnixTime lfs_win32_lstat Yet all three functions are fully local to lfs.c and don't have a prototype in lfs.h. So why not declare them as static, so that every function in lfs.c which isn't declared with LFS_EXPORT is declared as static? Then only the functions declared with LFS_EXPORT have a prototype in lfs.h. Alright, made a pull request here as well.
gharchive/issue
2023-06-28T19:41:30
2025-04-01T06:44:51.344914
{ "authors": [ "FractalU", "Tieske" ], "repo": "lunarmodules/luafilesystem", "url": "https://github.com/lunarmodules/luafilesystem/issues/168", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1841478288
Functions gethostbyname and gethostbyaddr are deprecated Hi luasocket developers! While trying to build this library on Open Build Service, I received a warning: binary-or-shlib-calls-gethostbyname /usr/local/lib/lua/5.1/socket/core.so binary-or-shlib-calls-gethostbyname /usr/local/lib/lua/5.1/socket/serial.so binary-or-shlib-calls-gethostbyname /usr/local/lib/lua/5.1/socket/unix.so binary-or-shlib-calls-gethostbyname /usr/local/lib/lua/5.1/ssl.so The binary calls gethostbyname(). Please port the code to use getaddrinfo(). It seems like this library is using gethostbyname and gethostbyaddr in src/usocket.h#L23, but these functions are obsolete and have been marked as deprecated in favor of getaddrinfo and getnameinfo. Note that this is explained in the man pages for gethostbyname: https://man7.org/linux/man-pages/man3/gethostbyname.3.html I couldn't find any existing issues related to this, but if that isn't the case, feel free to close this. PR contributions welcome! Several of us are around and helping to facilitate contributions, but none of us have much spare time for development work in this right now.
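As a quick illustration of the getaddrinfo() family the man page recommends (shown here through Python's standard binding rather than luasocket's C code, so this is only a sketch of the API's semantics, not the actual port), resolving an address is protocol-agnostic and, with AI_NUMERICHOST, needs no DNS traffic:

```python
import socket

# getaddrinfo() is the modern, protocol-agnostic replacement for the
# deprecated gethostbyname(); AI_NUMERICHOST restricts it to literal
# addresses, so this example runs deterministically without network access.
def first_address(host: str) -> str:
    infos = socket.getaddrinfo(
        host, None,
        family=socket.AF_UNSPEC,     # accept IPv4 and IPv6 alike
        type=socket.SOCK_STREAM,
        flags=socket.AI_NUMERICHOST,
    )
    family, _, _, _, sockaddr = infos[0]
    return sockaddr[0]               # printable address string

print(first_address("127.0.0.1"))    # -> 127.0.0.1
```

A C port of luasocket would follow the same shape: fill a `struct addrinfo` hints, call `getaddrinfo()`, walk the result list, and `freeaddrinfo()` it.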
gharchive/issue
2023-08-08T14:53:35
2025-04-01T06:44:51.348131
{ "authors": [ "alerque", "marcosbc" ], "repo": "lunarmodules/luasocket", "url": "https://github.com/lunarmodules/luasocket/issues/407", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1694004474
Bump version (major) Action to bump version (major) was triggered. Check changes and approve accordingly. A major version feels like too much. Close? @raguiar9080 @festinaj ? I'm under the impression @jstasiak opened this, hence I'm reluctant to close without feedback, but I wonder now if it was by mistake since there's been no interaction here. I had nothing to do with this PR (I think) Ok, just wanted to make sure. I'll close it 👍
gharchive/pull-request
2023-05-03T12:43:39
2025-04-01T06:44:51.350343
{ "authors": [ "jstasiak", "raguiar9080", "rbruggem" ], "repo": "lune-climate/lune-ts", "url": "https://github.com/lune-climate/lune-ts/pull/224", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
570118654
ScriptRuntimeException.ToString() should return InnerException information The Core CLR's implementation of Exception.ToString() includes details about the InnerException. It would be nice if the ScriptRuntimeException.ToString() method did so as well. The alternative is to search out that specific exception type and write code specifically to examine the InnerException property and log it. The problem is that Scriban is not a scripting engine that should give access to internal stack traces in normal mode (Scriban scripts are usually used in end-user contexts, not by "developers"). But it's possible to do what you want, though it would maybe require putting a public static bool EnableDisplayInnerException {get; set;} on ScriptRuntimeException and changing the ToString() method accordingly. PR welcome.
gharchive/issue
2020-02-24T20:14:47
2025-04-01T06:44:51.351940
{ "authors": [ "Ansssss", "xoofx" ], "repo": "lunet-io/scriban", "url": "https://github.com/lunet-io/scriban/issues/225", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
814762452
rustsbi qemu panic due to invalid instruction RustSBI panics while loading, on QEMU 5.2.50. My machine is an M1 Mac, but reportedly Intel Macs hit the same problem, and downgrading to 5.2.0 fixes it there. QEMU 5.2.0 currently cannot run on an M1 Mac, so I have no way to try that on my side. I still think it is related to QEMU 5.2.50.
➜ os git:(main) qemu-system-riscv64 \
-machine virt \
-nographic \
-bios ../bootloader/rustsbi-qemu.bin
[rustsbi] RustSBI version 0.1.1
.______ __ __ _______.___________. _______..______ __ | _ \ | | | | / | | / || _ \ | | | |_) | | | | | | (----`---| |----`| (----`| |_) || | | / | | | | \ \ | | \ \ | _ < | | | |\ \----.| `--' |.----) | | | .----) | | |_) || | | _| `._____| \______/ |_______/ |__| |_______/ |______/ |__|
[rustsbi] Platform: QEMU (Version 0.1.0)
[rustsbi] misa: RV64ACDFIMSU
[rustsbi] mideleg: 0x222
[rustsbi] medeleg: 0xb1ab
[rustsbi-dtb] Hart count: cluster0 with 1 cores
[rustsbi] Kernel entry: 0x80200000
[rustsbi-panic] hart 0 panicked at 'invalid instruction, mepc: 0000000080003eba, instruction: 0000000030200073', platform\qemu\src\main.rs:458:17
[rustsbi-panic] system shutdown scheduled due to RustSBI panic
The mret instruction is being decoded incorrectly. This problem has been reported by many people; it appeared in QEMU 5.2. One way to fix it is to downgrade QEMU below 5.1, or to use a 4.x version. Building the latest RustSBI 0.2.0-alpha.1 will not solve the problem. I don't know whether this is a QEMU bug that will be fixed later, or whether it is by design and will stay this way. If it is the latter, will RustSBI try to adapt to it? Thanks! RustSBI is an implementation of the RISC-V SBI standard; what QEMU does has nothing to do with RustSBI. RustSBI only targets the RISC-V SBI standard, not QEMU. Let's wait for QEMU to fix it, or downgrade. This software is designed against the SBI standard, so it is not convenient to change it. OK, Thanks. Close issue. The latest version of rustsbi-qemu already seems to have solved this problem: https://github.com/rustsbi/rustsbi-qemu
gharchive/issue
2021-02-23T19:24:56
2025-04-01T06:44:51.365722
{ "authors": [ "SKTT1Ryze", "imjustfly", "luojia65" ], "repo": "luojia65/rustsbi", "url": "https://github.com/luojia65/rustsbi/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
810247073
Worse performance with CRF layer At first I really want to thank you for providing a simple CRF alternative! My problem with your CRF layer is a worse performace than without it. I really don't get why. Below there is my model: `model = Sequential() model.add(Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=max_seq_length, trainable= False)) model.add(Conv1D(1000, 1, activation=LeakyReLU(alpha=0.1))) model.add(Conv1D(200, 1, activation=LeakyReLU(alpha=0.1))) model.add(Dropout(0.2)) model.add(Bidirectional(LSTM(units=100, recurrent_dropout=0.4, dropout=0.4, return_sequences=True, kernel_regularizer=l2(0.000001)))) model.add(Dropout(0.4)) model.add(TimeDistributed(Dense(512, activation=LeakyReLU(alpha=0.1)))) model.add(Dense(n_tags, activation='softmax')) opt = RMSprop(learning_rate=0.0008) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["categorical_accuracy"])` If I add your CRF layer to it, it looks like this: `input_layer = Input(shape=(max_seq_length,)) output = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=max_seq_length, trainable= False)(input_layer) sequence_mask = Lambda(lambda x: greater(x, 0))(input_layer) output = Conv1D(1000, 1,activation=LeakyReLU(alpha=0.1))(output) output = Conv1D(200, 1,activation=LeakyReLU(alpha=0.1))(output) output = Dropout(0.2)(output) output = Bidirectional(LSTM(units=100, recurrent_dropout=0.4, dropout=0.4, return_sequences=True))(output) output = Dropout(0.4)(output) output = Dense(512, activation=LeakyReLU(alpha=0.1))(output) output = Dense(n_tags, activation=None)(output) crf = CRF(n_tags) output = crf(output, mask=sequence_mask) model = Model(input_layer, output) model.compile(optimizer="rmsprop", metrics=[crf.accuracy], loss=crf.neg_log_likelihood)` Unfortunately the performance is worse. Before it was like 62% F1-Score, now it is 59%. Do you any idea why it is worse now? Did I miss anything which is has to be added for your CRF layer? 
I'm not sure what caused this worse performance. But if you can get better performance without a CRF, you just remove this CRF layer. In any case, the CRF is not necessary.
gharchive/issue
2021-02-17T14:44:27
2025-04-01T06:44:51.375856
{ "authors": [ "RH1994R", "luozhouyang" ], "repo": "luozhouyang/keras-crf", "url": "https://github.com/luozhouyang/keras-crf/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
623583579
Fix vsPagination.vue Fix initial justify alignment when the description is not provided. Are there any notes about this PR?
gharchive/pull-request
2020-05-23T05:05:50
2025-04-01T06:44:51.459902
{ "authors": [ "RamsesMartinez" ], "repo": "lusaxweb/vuesax", "url": "https://github.com/lusaxweb/vuesax/pull/834", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
385262767
[OpenVINO] Add v3 IR support This PR covers !9 from #191 In particular, IR v3 compared to IR v2 adds support for a new layer - TensorIterator, with a complex internal structure. Since the diff for openvino-parser.js is large and will bloat the repo, we should try to minimize making changes to this file. Are you sure the current version is final and will need no further edits? Otherwise, can you do some testing and add any other changes so this file doesn't need to be regenerated or updated again? Alternatively, switching to DOMParser and xmldom sooner would allow for some more flexibility. @lutzroeder I confirm, this is the latest major change for the parser before transitioning to more robust parser libraries. Please merge it if you have no other questions. @lutzroeder thank you, now switching to the .bin content processing and adding the -metadata.json support for OpenVINO
gharchive/pull-request
2018-11-28T13:15:23
2025-04-01T06:44:51.472380
{ "authors": [ "demid5111", "lutzroeder" ], "repo": "lutzroeder/netron", "url": "https://github.com/lutzroeder/netron/pull/196", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2524336874
Event sign-up list in reverse order Description A fairly small bug, but hardly the intended behavior. The newest sign-ups end up first in the list instead of last, so the numbering order makes no sense. Reproduce Sign up for an event Be number 1 on the list Screenshots No response Area Frontend (next) Thanks again 🫡 Should be fixed now
gharchive/issue
2024-09-13T09:22:04
2025-04-01T06:44:51.474880
{ "authors": [ "Borsas", "KasperiP" ], "repo": "luuppiry/luuppi-next", "url": "https://github.com/luuppiry/luuppi-next/issues/170", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1069672246
How do I enable this plugin after building it? I followed the readme; after the build I started EMQX via the console, but the plugin does not show up in the output of emqx_ctl plugins list or models list. The directory _build/emqx/lib/emqx_persistence_plugin exists. How do I start it? I have the same problem. I tried adding $PERSISTENCE directly but it did not take effect. How did you solve it?
gharchive/issue
2021-12-02T15:21:02
2025-04-01T06:44:51.480834
{ "authors": [ "dingiso", "ycgambo" ], "repo": "luxingwen/emqx-persistence-plugin", "url": "https://github.com/luxingwen/emqx-persistence-plugin/issues/10", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1379564337
fixed gen2-multiple-devices/main.py OAK cameras now connect in parallel (using threads), which leads to faster boot-up times and potentially resolves the issue with connecting OAK PoE cameras LGTM
gharchive/pull-request
2022-09-20T14:57:05
2025-04-01T06:44:51.481962
{ "authors": [ "Erol444" ], "repo": "luxonis/depthai-experiments", "url": "https://github.com/luxonis/depthai-experiments/pull/405", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1032227380
[BUG] Low Latency Encoding for Streaming Application Hi Luxonis Team, I have been working on low latency streaming with your hardware encoder. The goal is to stream an encoded bitstream over a network, which is not doable with raw images. Low latency is crucial for our application. With the raw images, I achieve a very decent latency of < 100 [ms] (same values as what is mentioned in your documentation). However, when I read the encoded bitstream and decode it on my computer, using a GStreamer pipeline, I measure a latency always greater than 300 [ms]. To do this in GStreamer, I wrote a little C++ plugin which reads the bitstream from DepthAI and sources it inside GST using "appsrc" (I can provide you with the code to build this, of course !). The overall pipeline looks like this. "appsrc name=dai_to_gst ! h264parse ! avdec_h264 ! videoconvert ! autovideosink" I tried to tune the parameters of the VideoEncoder a little (the quality, the number of B Frames, the profile, the rate control mode) and I observed something quite interesting: the latency associated with a high quality stream is the same as the one of an extremely poor quality stream. It seems to indicate that the problem somehow comes from the hardware encoder itself. Is it possible that the hardware encoder is not "low latency compatible" ? I saw that there are several kinds of hardware encoder architectures; some of them are made specifically for low latency, some others target more the lossless compression for better results. Of course, it is possible that I did something wrong with GStreamer. It seems like GStreamer takes a little more time than OpenCV to display images (I measure + 70 [ms] compared to OpenCV when streaming raw images). Still, it remains that there is a higher latency. Do you have any insight on this use case ? Would you know what is the best set of settings to have the lowest possible latency when performing hardware encoding for streaming applications ?
Thanks in advance again for your help, Best, Arthur I tried to reduce both the FPS and the resolution. I am using 1080P with 25 to 30 FPS. Sorry, forgot to mention this detail !
gharchive/issue
2021-10-21T08:49:46
2025-04-01T06:44:51.486137
{ "authors": [ "arthurBricq" ], "repo": "luxonis/depthai", "url": "https://github.com/luxonis/depthai/issues/509", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
958463250
Fix OpenPose nn This PR fixes the openpose2 neural network; it crashed on main with Traceback (most recent call last): File "./depthai_demo.py", line 264, in <module> nn_data = nn_manager.decode(in_nn) File "/Users/leeroy/depthai/depthai_helpers/managers.py", line 469, in decode return self.handler.decode(self, in_nn) File "/Users/leeroy/depthai/resources/nn/openpose2/handler.py", line 152, in decode valid_pairs, invalid_pairs = getValidPairs(outputs, w, h, detected_keypoints) File "/Users/leeroy/depthai/resources/nn/openpose2/handler.py", line 46, in getValidPairs pafA = outputs[0, mapIdx[k][0], :, :] IndexError: index 47 is out of bounds for axis 1 with size 46 The source of this issue was the [::-1] transformation of tensor dimensions. The bug that it resolved was fixed in the latest API release, and after the update the transformation started to have the opposite effect (it yielded incorrect tensor dims). Also, since we're using full FOV now for preview, I had to adjust handler.py for openpose to scale the NN results accordingly. One note here: since we're using a full FOV preview, the results are being scaled to fit the target frame. This is mostly fine except that networks like deeplab (that return a full frame overlay) tend not to scale well and can be inconsistent around the edges. The best way to visualize them is to use -s nn_input, as then the NN results will be applied without any scaling. Yes, the handling of the NN data looks wrongly-scaled @VanDavv : So I think we should do a follow-up PR to either fix these directly, or if we can't, make the preview size for such networks be the same aspect ratio as the neural network. Thoughts? We should merge this first though... as this is better than crashing. But we should then circle back to make sure the overlays are correct - doing whatever is necessary to ensure that. Sure, will work on this and circle back with PR @Luxonis-Brandon the PR with fixes is here Thanks!
gharchive/pull-request
2021-08-02T20:01:13
2025-04-01T06:44:51.490806
{ "authors": [ "Luxonis-Brandon", "VanDavv" ], "repo": "luxonis/depthai", "url": "https://github.com/luxonis/depthai/pull/435", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2056698972
distributed loss for multiple GPUs Hi Luyu, thank you for your nice work. I have a question on Distributed Contrastive Loss: https://github.com/luyug/GradCache/blob/33695437d104e50a961cd9beba18b55c85a6537a/src/grad_cache/loss.py#L30-L34 It seems both x and y are gathered to all the GPUs. Then what is the meaning of distributed data parallel? For dense passage retrieval, we usually adopt the in-batch negatives technique, where only the passages (y) are gathered, not the queries (x). Is my understanding correct? Besides, when I use multiple GPUs with Gradient Caching for training on Natural Questions, I got lower retrieval results (77.7 top-20 acc) than those in your paper and the original DPR paper (78.4 top-20 acc). Have you tried multiple GPUs in your case? Would appreciate it if you have any empirical results. The current implementation is a specific distributed compute design. In PyTorch, distributed operations are not differentiable, so it is at the moment necessary to handcraft all communication for computing the gradients of the representations. The implementation in this repo materializes the full forward pass on all devices while keeping the backward sharded. This particular implementation avoids writing an extra autograd function for communication in the backward. I see. So in https://github.com/luyug/GradCache/blob/33695437d104e50a961cd9beba18b55c85a6537a/src/grad_cache/grad_cache.py#L193-L211 you only use loss.backward() on each GPU and do not need to communicate the gradients, right? But will gathering x to all the GPUs consume more GPU memory than distributing x to separate GPUs? Is the memory difference significant? I see. So you mean when we use DDP and loss.backward(), dist.all_gather() cannot backpropagate gradients from the gathered tensors to the model parameters on other GPUs, right? Effectively. The all_gather should give you leaf tensors, which won't get grads back-propagated during the backward call. I see. Thanks for your explanations.
gharchive/issue
2023-12-26T20:52:27
2025-04-01T06:44:51.524953
{ "authors": [ "luyug", "x-zb" ], "repo": "luyug/GradCache", "url": "https://github.com/luyug/GradCache/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2416317625
Bring TwoColumnOperatorTransformer tests in line with new test set up What? There are a few required changes for this, summarised below from (https://github.com/lvgig/tubular/issues/150) Double check the "minimal attribute dict" fixture has the correct set of minimal arguments required Set up the TestInit, TestFit, TestTransform, and TestOtherBaseBehaviour classes to inherit the appropriate test classes and then copy the setup_class class method into each test class, changing transformer_name to the transformer you are testing (examples linked). Extra tests specific to this class should be added, e.g. if this transformer also needed to check that when len(mappings)>2 then mappings is an ordered dict, you would add this to TestInit. Where extra tests would apply to all child classes, they should instead be added to the parent test class. Delete any unnecessary tests from the original transformer test file (i.e. tests testing inherited behaviour that is now tested by the inherited test classes). An example would be. In general, asserting calls is a red flag, as this tests the applied process rather than the desired outcome. Debug the failing tests from the base test suite. Bring the test file in line with best practices by ensuring there are no implementation tests (in this case it looks like these will have been removed as part of the base test changes) Bonus: If you identify any gaps in the testing, please set up new issues :) Why? Revamp tubular testing to make the package more maintainable/modifiable Forgot to link PR - but this has been completed PR here: https://github.com/lvgig/tubular/pull/291
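The inheritance pattern described above can be sketched roughly like this (the class and attribute names here are illustrative stand-ins, not tubular's actual base test suite): a generic test class holds the shared checks, and each transformer's test file inherits it and pins the transformer under test via setup_class.

```python
# Sketch of shared base tests + per-transformer subclassing; names are
# hypothetical, not tubular's real API.

class GenericInitTests:
    """Checks every transformer's __init__ should pass; inherited by each
    transformer-specific TestInit class."""
    transformer_class = None  # set by the child class's setup_class

    def test_columns_stored(self):
        t = self.transformer_class(columns=["a", "b"])
        assert t.columns == ["a", "b"]


class DemoTransformer:
    """Stand-in transformer used only for this sketch."""
    def __init__(self, columns):
        if not isinstance(columns, list):
            raise TypeError("columns must be a list")
        self.columns = columns


class TestInit(GenericInitTests):
    @classmethod
    def setup_class(cls):
        # the only per-transformer wiring: which class is under test
        cls.transformer_class = DemoTransformer

    # transformer-specific extras live alongside the inherited tests
    def test_rejects_non_list(self):
        try:
            DemoTransformer(columns="a")
        except TypeError:
            pass
        else:
            raise AssertionError("expected TypeError")
```

Under pytest, the inherited test_columns_stored runs automatically for every subclass, which is what removes the per-transformer duplication the issue describes.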
gharchive/issue
2024-07-18T13:07:41
2025-04-01T06:44:51.552420
{ "authors": [ "Chip2916", "davidhopkinson26" ], "repo": "lvgig/tubular", "url": "https://github.com/lvgig/tubular/issues/274", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2110738788
fatal error: SDL2/SDL.h: No such file or directory Executing task: C:\Users\Joaquin.platformio\penv\Scripts\platformio.exe run --target execute --environment emulator_64bits Processing emulator_64bits (platform: native@^1.1.3) Verbose mode can be enabled via -v, --verbose option LDF: Library Dependency Finder -> https://bit.ly/configure-pio-ldf LDF Modes: Finder ~ chain, Compatibility ~ soft Found 2 compatible libraries Scanning dependencies... Dependency Graph |-- lvgl @ 8.2.0 |-- lv_drivers @ 8.2.0 Building in release mode Compiling .pio\build\emulator_64bits\lib58c\lv_drivers\sdl\sdl.o : fatal error: SDL2/SDL.h: No such file or directory compilation terminated. Any help would be more than welcome, as I'm a newbie with all these tools. Hi, have you installed SDL? This guide can be helpful for installing SDL on Windows: https://lazyfoo.net/tutorials/SDL/01_hello_SDL/windows/index.php C:/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/13.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lSDL2: No such file or directory It indicates that SDL2 is not installed or not seen by PlatformIO. I have it installed in 2 different places and both paths added to the PATH environment variable as indicated. Anyway, I have given up and cleaned it all. Once again, thank you for your advice :-) I'm sorry that we couldn't figure it out :slightly_frowning_face: No problem. My post was mainly to store the resolution of the issue I had somewhere. The first developer reflex is to google the error, so finding a hint towards a resolution is always a good sign :) Have a good day 👍
gharchive/issue
2024-01-31T18:42:16
2025-04-01T06:44:51.558708
{ "authors": [ "Joaquin500", "kisvegabor", "lgrosbois-AtlasLaser" ], "repo": "lvgl/lv_platformio", "url": "https://github.com/lvgl/lv_platformio/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1640445417
Why is the adapter layer merged in gpt-neox-20b_peft? In the gpt-neox-20b_peft example, step 1 is the SFT with LoRA, step 2 is the merging of the trained LoRA adapters' weights into the base model, and step 3 is loading the merged model for PPO training with LoRA. I am unclear on why the models are merged in step 2. Is it necessary to merge the adapter layer, or can we simply load both the base model and LoRA weights in step 3 (gpt-neo-20b_sentiment_peft.py) instead of loading the merged model? The code to load both models in step 3 would look like this: peft_model_id = "edbeeching/gpt-neo-125M-imdb-lora" peft_config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained( peft_config.base_model_name_or_path, ... ) model = PeftModel.from_pretrained(model, peft_model_id) I think the main reason is to save vRAM, and also a single LoRA adapter model may be enough for a single task. Hi @ohashi56225 Thanks for your question! Indeed, it is needed to first merge the LoRA layers inside the base model, as later on we train new adapters for the RLHF part. Since you cannot have multiple adapters on the same model, you need to merge the adapters that were trained first, use this merged model as a standalone transformers model, and use the latter as a base model for the new LoRA layers that you want to train. Does that make things clearer? Let me know if this is still unclear. Hi @ohashi56225 Thanks a lot for following up! If we initialize the weights of the new LoRA layers randomly, wouldn't we risk losing the ability to generate IMDB reviews learned in the SFT step, resulting in poor sentence generation at the beginning of the RLHF training loop? In fact, when we "merge" the LoRA layers directly into the base model, it means that we are transferring the knowledge that has been learned by the LoRA layers directly onto the base model.
Precisely, this also means that once you have merged the LoRA layers, you can push these new "merged" weights to the Hub (or save them locally) and directly load them and benefit from the SFT fine-tuned performance. An example can be found here: https://huggingface.co/edbeeching/gpt-neo-125M-imdb-lora-adapter-merged - as you can see, there are no adapters anymore in this repository. Wouldn't it make more sense to fine-tune the pre-trained LoRA layers directly, rather than adding a new LoRA layer for RLHF? This would also reduce the number of parameters in the model. It is indeed possible, but the model will converge quite slowly, as the model needs to: 1- adapt to the imdb domain 2- learn to generate positive content. We have empirically found that the "domain" gap is an important factor to consider, i.e. fine-tuning a model that is already prepared to generate movie reviews converges faster / more easily than a base model. Let me know if anything else is unclear! Thank you for your answer! However, I realized that my question may have been unclear, leading to an answer that didn't quite address my intended meaning. First of all, is my understanding correct that the current implementation follows the steps shown in the image below? If so, my first question asks whether the addition of new LoRA layers in step 3 is affecting the review generation ability of the merged model. Specifically, since the parameters of these LoRA layers are initialized randomly (which I believe corresponds to line 112 of gpt-neo-20b_sentiment_peft.py), they need to be trained from scratch during RL. @ohashi56225 I had exactly the same questions as you. I also think your understanding of the 3-step procedure is correct. However, I think I understand why simply fine-tuning the adapter weights without merging them works less well. It is related to the KL divergence in the PPO training.
In fact, in https://github.com/lvwerra/trl/blob/b5cce0d13e95c8e21eb9d57177930253d9092a02/examples/sentiment/scripts/gpt-neox-20b_peft/gpt-neo-20b_sentiment_peft.py#L209, the ref_model=None. In this case, and since model is a PEFT_model, the ref_model simply becomes the PEFT_model with the adapters removed https://github.com/lvwerra/trl/blob/b5cce0d13e95c8e21eb9d57177930253d9092a02/trl/trainer/ppo_trainer.py#L559. In our case, if there is no merging step, this corresponds to taking as the ref_model the gpt-neo-125m model without any adapters and thus not adapted to imdb. With the merging step, this amounts to taking as ref_model the gpt-neo-125M-imdb-lora-adapter-merged model with better quality ref_logprobs. A bug fix that should make the merging obsolete would be to explicitly choose the adapted ref_model rather than letting ref_model=None. That's my current understanding from reading the code, but experimental validation is certainly desirable. Hey @Sanster @alexrame Thanks a lot for the nice diagram btw, I made a diagram based on yours to explain the procedure. Actually, when the LoRA weights get merged, the LoRA layers 'disappear', and this is what I meant by saying you can use the merged model as a standalone model! Btw, this functionality will be supported soon in peft, check: https://github.com/huggingface/peft/pull/227 In our case, if there is no merging step, this corresponds to taking as the ref_model the gpt-neo-125m model without any adapters and thus not adapted to imdb. With the merging step, this amounts to taking as ref_model the gpt-neo-125M-imdb-lora-adapter-merged model with better quality ref_logprobs. I think this is totally valid, and this is my empirical understanding of things! @younesbelkada @alexrame I now have a complete understanding of how the LoRA layers are merged to create a standalone model, as well as the reasons behind this process. Thank you both for providing such a detailed explanation!
And I agree that it works to choose the adapted ref_model instead of leaving ref_model=None. Therefore, I think it's appropriate to close this issue. Thank you once again! Thanks for the nice explanations. So should we set ref_model = gpt-neo-125M-imdb-lora-adapter-merged instead of leaving it as None? If your base model is "gpt-neo-125M-imdb-lora-adapter-merged", then you can either set ref_model = gpt-neo-125M-imdb-lora-adapter-merged or None; they are equivalent. Yet the None option is preferred because it's more efficient (you don't need to load the same model twice). In contrast, if you have not applied the merge procedure, the ref model should be passed explicitly: ref_model = Loader.load_peft_model(ref_base_model, peft_name=ref_peft_name)
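Why "merging" makes the adapters disappear can be shown with a toy numeric example (plain arithmetic, not the peft API): the merged weight W + (alpha/r) * B @ A alone reproduces the adapted forward pass, so the adapter branch is no longer needed.

```python
# Toy illustration of LoRA merging: W_merged = W + (alpha/r) * B @ A.
# After merging, the base weight by itself gives the same output as
# base path + adapter path, which is why the adapters "disappear".

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, s):
    return [[x * s for x in row] for row in A]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (2x2)
B = [[0.5], [0.0]]             # LoRA down-projection (2x1, rank r=1)
A = [[0.0, 2.0]]               # LoRA up-projection (1x2)
alpha, r = 1.0, 1

delta = scale(matmul(B, A), alpha / r)
W_merged = add(W, delta)

x = [[3.0], [4.0]]                             # a column input
adapted = add(matmul(W, x), matmul(delta, x))  # base path + adapter path
merged = matmul(W_merged, x)                   # merged weight alone
print(adapted, merged)                         # both [[7.0], [4.0]]
```

The same identity is what peft applies per layer when merging, after which fresh, randomly initialized adapters can be attached for the RLHF stage.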
gharchive/issue
2023-03-25T09:21:45
2025-04-01T06:44:51.579010
{ "authors": [ "AttentionAllUNeed", "Sanster", "alexrame", "ohashi56225", "younesbelkada" ], "repo": "lvwerra/trl", "url": "https://github.com/lvwerra/trl/issues/250", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2135246109
"Error: EBUSY: resource busy or locked" in Powershell 7

While starting the project with npm run dev in PowerShell 7 on Windows 10, the following error is displayed and the command fails. Until the issue is solved, the following workarounds seem to work:

- Use an older version of PowerShell or git on Windows
- Open a separate PowerShell 7 window and run npm run watch:eleventy directly

I use your template (and very nice it is too) and sometimes encounter the same "EBUSY: resource busy or locked" scenario, and it is always to do with the favicon, though it is also intermittent. It happens in git bash as well as PowerShell in Windows 10, and also happens with npm run build; it usually goes away after a retry. Not sure what to add really as it's not happening currently!

I replaced this nunjucks code {%- favicons 'assets/favicon.svg', appleIconPadding=0 -%} in favicons.njk with this metadata output from _site ` ` and it fixed it for me.

Since this issue has to do with an unresolved bug in an external plugin and I can't put enough time to diagnose and fix it, starting from v2.4.0 Bliss no longer ships with automatic favicon generation. If you liked the feature, feel free to reimplement it on your own - consult change history from this PR https://github.com/lwojcik/eleventy-template-bliss/pull/99 to get it back.

Thank you

> On Sat, Mar 2, 2024 at 00:43, Łukasz Wójcik wrote:
> Since this issue has to do with an unresolved bug in an external plugin and I can't put enough time to diagnose and fix it, starting from v2.4.0 Bliss no longer ships with automatic favicon generation. [...]
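For anyone taking the same route of swapping the shortcode for static tags: a hand-written replacement copied from the generated _site output generally looks along these lines (illustrative markup and paths, not the plugin's exact output):

```html
<link rel="icon" href="/favicon.ico" sizes="any">
<link rel="icon" type="image/svg+xml" href="/assets/favicon.svg">
<link rel="apple-touch-icon" href="/apple-touch-icon.png">
```

With the tags hard-coded and the favicon files committed ahead of time, the build no longer generates them on the fly, which is presumably why the intermittent lock disappears.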
gharchive/issue
2024-02-14T21:38:51
2025-04-01T06:44:51.612857
{ "authors": [ "Bouali-Ramzi", "joel-lbth", "lwojcik" ], "repo": "lwojcik/eleventy-template-bliss", "url": "https://github.com/lwojcik/eleventy-template-bliss/issues/91", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }